Nothing changes...
Some of you may have seen earlier blogs, and even the Broadband-Testing report, on our recently acquired US client Talari Networks, whose technology basically lets you combine multiple broadband Internet connections (and operators) to give you the five-nines levels of reliability (and performance) associated with them damned expensive MPLS-based networks, for a lot less dosh.
You can actually connect up to eight different operators, though according to Talari, this was not enough for one potential customer who said "but what if all eight networks go down at the same time?" Would dread having to provide the budget for that bloke's dinner parties - "yes I know we've only got four guests, but I thought we should do 24 of each course, just in case there's a failure or two..."
Anyway - one potential issue (other than paranoia) for some was the entry cost; not crazy money but not pennies either. So, it makes sense for Talari to move "up" in the world, so that the relative entry cost is less significant and that's exactly what they've done with the launch of the high(er)-end Talari Mercury T5000 - a product designed for applications such as call centres that have the utmost requirements for reliability and performance and where that entry cost is hugely insignificant once it saves a few outages; or even just the one.
If you still haven't got wot they do, in Talari-ese it provides "end-to-end QoS across multiple, simultaneous, disparate WAN networks, combining them into a seamless constantly monitored secure virtual WAN". Or, put another way, it gives you more resilience (and typically more performance) than an MPLS-based network for a lot lower OpEx.
So where exactly does it play? The T5000 supports bandwidth aggregation up to 3.0Gbps upstream/3.0 Gbps downstream across, of course, up to eight WAN connections. It also acts as a control unit for all other Talari appliances, including the T510 for SOHO and small branch offices, and the T730, T750 and T3000 for large branch offices and corporate/main headquarters, for up to 128 branch connections.
It's pretty flexible then, and just to double-check, we're going to be let loose on the new product in the new year, so watcheth this space...
Following on from last week's OD of SDN at Netevents, we have some proper, physical (ironically) SDN presence in the launch of an SDN controller from HP.
This completes the story I covered this summer of HP's SDN solution - the Virtual Application Network - which we're still hoping to test asap. Basically the controller gives you an option of proprietary or open (OpenFlow), or both.
The controller, according to the HP blurb, moves network intelligence from the hardware to the software layer, giving businesses a centralised view of their network and a way to automate the configuration of devices in the infrastructure. In addition, APIs will be available, so that third-party developers can create enterprise applications for these networks. HP's own examples include Sentinel Security - a product for network access control and intrusion prevention - and some Virtual Cloud Networks software, which will enable cloud providers to bring to market more automated and scalable public-cloud services.
Now it's a case of seeing is believing - bring it on, HP!
And here's my tip for next buzz-phrase mania - "Data Centre In A Box"; you heard it here (if not) first...
- Big Data Trailblazers
- Cloud Trailblazers
- Emerging Markets Trailblazers
- Mobile Technology Trailblazers
- Networking Trailblazers
- Security Trailblazers
- Storage Trailblazers
- Sustainable IT Trailblazers
- Virtualization Trailblazers
One of the problems we've faced in trying to maximise throughput in the past has not been at the network - say WAN - level, but in what happens once you get that (big) data off the network and try to store it at the same speed directly onto the storage.
We saw this limitation, for example, last year, when testing with Isilon and Talon Data using traditional storage technology - the 10 Gigabit line speeds we were achieving with the Talon Data just couldn't be sustained when transferring all that data onto the storage cluster. While we believe that regular SSD (Solid State Disk) technology would have provided a slight improvement, we still wouldn't have been talking end-to-end consistent, top-level performance.
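To put some rough numbers on that bottleneck, here's the back-of-envelope arithmetic. The per-drive write rates below are ballpark assumptions for illustration only, not figures from the Isilon/Talon testing:

```python
# Rough, illustrative arithmetic only - the per-device throughput figures
# below are ballpark assumptions, not measurements from the Isilon/Talon test.

LINE_RATE_GBPS = 10                          # sustained WAN transfer rate
line_rate_MBps = LINE_RATE_GBPS * 1000 / 8   # ~1250 MB/s to be written continuously

HDD_SUSTAINED_MBps = 150                     # assumed sequential write rate per spinning disk
SSD_SUSTAINED_MBps = 400                     # assumed sustained write rate per SATA SSD

print(f"Required write rate: {line_rate_MBps:.0f} MB/s")
print(f"Spinning disks needed (striped, no overhead): {line_rate_MBps / HDD_SUSTAINED_MBps:.1f}")
print(f"SATA SSDs needed (striped, no overhead): {line_rate_MBps / SSD_SUSTAINED_MBps:.1f}")
```

In other words, even before you account for file system and RAID overheads, you need quite a stack of drives writing in parallel just to keep up with the wire.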
So it's with some interest - to say the least - that I've started working with a US start-up, Constant Velocity Technology, that reckons it has the capability to solve exactly this problem. We're currently looking to put together a test with them: http://johnpaulmatlick.wix.com/cvt-web-site-iii - and another "big data" high-speed transfer technology client of mine, Bitspeed, with a view to proving we can do 10Gbps, end-to-end, from disk to disk.
Even more interesting, this is happening in "Hollywood" in one of the big-name M&E companies there. However, if any of you reading this are server vendors, then please get in touch as we need a pair of serious servers (without storage) to assist with the project!
Life beyond networking...
In this guest blog post Computer Weekly blogger Adrian Bridgwater tries out a new 1 Gbps broadband service.
In light of the government's push to extend "superfast" broadband to every part of the UK by 2015, UK councils have reportedly been given £530m to help establish connections in more rural regions as inner city connectivity continues to progress towards the Broadband Delivery UK targets.
Interestingly, telecoms regulatory body Ofcom has defined "superfast" broadband as connection speeds of greater than 24 Mbps. But making what might be a quantum leap in this space is Hyperoptic Ltd, a new ISP with an unashamedly biased initial focus on London's "multiple-occupancy dwellings" as the target market for its 1 gigabit per second fibre-based connectivity.
Hyperoptic's premium 1 gig service is charged at £50 per month, although a more modest 100 Mbps connection is also offered at £25 per month. Lip service is also paid to a 20 Mbps contract at £12.50 per month for customers on a budget who are happy to sit just below the defined "superfast" broadband cloud base.
Hyperoptic's managing director Dana Pressman Tobak has said that there is a preconception that fibre optic is expensive and therefore cannot be made available to consumers. "At the same time, the UK is effectively lagging in our rate of fibre broadband adoption, holding us back in so many ways - from an economic and social perspective. Our pricing shows that the power of tomorrow can be delivered at a competitive and affordable rate," she said.
Cheaper than both Virgin's and BT's comparable services, Hyperoptic has an almost cottage-industry feel, with its London-based service and support crew making personal visits to properties to oversee installations.
While this may be a far cry from Indian- and South African-based call centres, the service is not without its teething troubles, and new physical cabling within residents' properties is a necessity for those who want to connect.
Upon installation users will need to decide on the location of their new router, which may be near their front door if cabling has only been extended just inside the property. This logically means that the home connection will depend on WiFi, which, at best, will offer no more than around 70 Mbps of real-world throughput over the 802.11n wireless protocol.
Sharing the juice out
It is at this point that users might consider a gigabit powerline communications option to send the broadband juice around a home (or business, for that matter) using the electrical wiring already hard wired into a home or apartment building.
Gigabit by name is not necessarily gigabit by nature in this instance, unfortunately: the "gigabit" that features in many of these products' names is derived from the 10/100/1000 Mbps Ethernet port that they have inside.
If you buy a 1 gigabit powerline adapter today you'll probably notice the number 500 used somewhere in the product name - and this is the crucial number to be aware of here, as it is a total made up of both upload and download speeds added together; i.e. around 250 Mbps in each direction is all you can realise from the total 1 gigabit you have installed at this stage via the powerline route.
Our tests showed that uplink and downlink speeds of roughly 180 Mbps were achieved in both directions using a new iMac running Apple Mac OS X Lion. Similar results were replicated on a PC running 64-bit Windows 7.
The above image shows a wireless connection test while the below image shows a hard wired connection.

So in summary
It would appear that some of Hyperoptic's technology is almost before its time, in a good way. After all, future proofing is no bad thing: house design architects looking to place new cable structures in 'new build' properties, and indeed website owners themselves, are arguably not quite ready yet for 1 gigabit broadband.
As the landscape for broadband ancillary services and high-performing, transaction-based and/or HTML5-enriched websites matures, we may witness a "coming together" of these technologies. Hyperoptic says it will focus next on other cities outside the London periphery, so the government's total programme may yet stay on track.
It's been a busy old Spring so far - I'm still trying to get my head around the recession - IT is going bonkers, spending like the world is about to end (does somebody know something we don't?), every flight I take from wherever to wherever is full and when I take a few days off on the Spanish and SoF coastlines the places are packed.
The result is a lot of tests and reports to update on, which can be found on the www.broadband-testing.co.uk website as normal, for free download. Gartner said it at the start of the year, IDC has supported the argument and I'm in the thick of it - network optimisation that is, whether LAN, WAN, Cloud or inter-planetary. As a result, we've got two new reports up on L-B/ADC solution providers, Kemp and jetNEXUS. Both are going for the "you don't need to spend stupid money to optimise app delivery" angle and both succeed; however, the focus of the tests is quite different. With Kemp we showed that you can move from IPv4 to IPv6 and not take a performance hit at all - very impressive. With jetNEXUS we showed that you can d**k around with data at L7 as much as you want and still get great throughput, manipulating data as you wish with no programming skills required whatsoever. Could put a few people out of a job... no problem, let them loose with sledgehammers to knock down my old home town of Wakefield so someone can rebuild it properly. What was it that John Betjeman said about Slough?
The same could be said of Vegas; since arriving back with what felt like pneumonia I've been in a "who's the most ill" competition with my HP mate Martin O'Brien, who contracted several unpleasant things while we were both out at Interop. Elton John had to cancel the rest of his Vegas shows because he contracted (the same?) respiratory problems. Well, if it's good enough for Elton...
One of the things to come out of Interop meetings wot I have spoken about is the proposed testing of HP's (along with F5's) Virtual Application Networking solution. What is interesting here is that the key to profiling network performance on a per-user, per-application basis is getting that profile as accurate as possible in the first place. While HP's IMC management system (inherited from the 3Com acquisition) does some app monitoring, it doesn't go "all the way". But we know men (and women) who can... If you check out the Broadband-Testing website, you'll also see a review of Centrix's WorkSpace products. With these you can take application monitoring down to the level of recording when a user logs into an app, how long they have it loaded for and even when they are actively using it or not. Now that IS the way to get accurate profiling; take note, HP. Let the spending continue...
Back from Interop and my 'beloved' Vegas from which I escaped just in time before being air-con'd to death as my ongoing cough continues to remind me. Is it possible to sue "air"?
I don't know - maybe there are people out there (mainly the people who were "out there") who enjoy the delicious contrast of walking in from 42°C temperatures into 15°C, time and again, then in reverse, and the joy of being able to hear at least three different sorts of piped music at any one time, the exhilaration for the nostrils of seven or more simultaneous smells, 24 hours a day? Must be me being picky. I like my sound in stereo at least, but all coming from the same source...
Anyway - reflections on the show itself; easy when there's less smoke and more mirrors, AKA taking away the hype. What I found was a trend - that others at the show also confirmed - towards making best-of-breed "components" again, rather than trying to create a complete gizmo. For example, we had Vineyard Networks creating a DPI engine that it then bolts on to someone's hardware, such as Netronome's dedicated packet processing architecture, which then sits - for example - on an HP or Dell blade server. I like this approach - it's what people were doing in the early '90s: pushing the boundaries, making networking more interesting - more fun even - and simply trying to do something better.
There are simply more companies doing more "stuff" at the moment. Take a recently acquired client of mine who I met out there for the first time, Talari Networks, enabling link aggregation across multiple different service providers - not your average WanOp approach. A full report on the technology has just been posted on the Broadband-Testing website: www.broadband-testing.co.uk - so please go check it out. Likewise, a report on Centrix Software's WorkSpace applications. Reading between the lines on what HP is able to do with its latest and greatest reinvention of networking - Virtual Application Networking or VAN - as we described on this blog last week, along with buddy F5 Networks, I reckon there is just one piece of the proverbial jigsaw missing, and that is something that Centrix can most definitely provide with WorkSpace. The whole of VAN is based around accurately profiling user and application behaviour, combining the two - in conjunction with available bandwidth and other resources - to create the ideal workplace on a per-user, per-application basis at all times, each and every time they log into the network, from wherever that may be.
Now this means that you want the user/application behaviour modelling to be as accurate as possible, so your starting point has to be, to use a technical term much loved by builders, "spot on". Indeed, there is no measurement in the world more accurate than "spot on". While HP's IMC is able to provide some level of user and application usage analysis, I for one know that it cannot get down to the detailed level that Centrix WorkSpace can - identifying when a user loads up an application, whether that application is "active" or not during the open session and when that application is closed down... and that's just for starters. I feel a marriage coming on...
Live from the home of tack - i.e. Vegas, the Blackpool of the desert but without the classiness...or piers - is the latest bombardment of SDN, er, ness, care of Interop 2012.
Starting with a direct follow-up to my last blog entry - HP's take on SDN, AKA VAN (OK - enough TLAs...) or Virtual Application Networks - the big question was: who was going to drive the VAN, given that HP doesn't have the whole solution to deliver it? The answer is F5 Networks. So, the idea is to deliver a completely optimised, end-to-end solution on a per-user/per-application basis by using templates to define every aspect of performance etc. Makes total sense, sounds too good to be true. So, what's the answer? Test it, of course; watch this space on that one.
Meantime, I'll be reporting in daily from the show - seeing lots of new (to me) vendors who, one way or t'other, are all ticking the SDN/Big Data/Cloud boxes.
It seems to me that we need to get back to basics with SDN so that people actually understand what it is. For example, there's a definite belief among some that it does away with hardware... Nice idea - so we have software that exists in a vacuum that somehow delivers traffic? There also seems to be confusion between different vendors' SDN solutions and OpenFlow. For those wot don't know, here's what OpenFlow is - in a classical router or switch, the fast packet forwarding (data path) and the high-level routing decisions (control path) occur on the same device.
An OpenFlow Switch separates these two functions. The data path portion still resides on the switch, while high-level routing decisions are moved to a separate controller, typically a standard server. The OpenFlow Switch and Controller communicate via the OpenFlow protocol, which defines messages, such as packet-received, send-packet-out, modify-forwarding-table, and get-stats.
The data path of an OpenFlow Switch presents a clean flow table abstraction; each flow table entry contains a set of packet fields to match, and an action (such as send-out-port, modify-field, or drop). When an OpenFlow Switch receives a packet it has never seen before, for which it has no matching flow entries, it sends this packet to the controller. The controller then makes a decision on how to handle this packet. It can drop the packet, or it can add a flow entry directing the switch on how to forward similar packets in the future.
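To make that split concrete, here's a minimal sketch of the reactive flow-table logic described above - purely illustrative Python, not the actual OpenFlow wire protocol or any particular controller's API:

```python
# Minimal sketch of the switch/controller split described above.
# This is not the OpenFlow wire protocol - just the reactive flow-table logic.

class FlowEntry:
    def __init__(self, match, action):
        self.match = match      # e.g. {"dst_ip": "10.0.0.2"}
        self.action = action    # e.g. ("send_out_port", 3) or ("drop", None)

class Switch:
    def __init__(self, controller):
        self.flow_table = []
        self.controller = controller

    def receive(self, packet):
        for entry in self.flow_table:
            if all(packet.get(k) == v for k, v in entry.match.items()):
                return entry.action                 # fast path: flow-table hit
        # Table miss: punt the packet up to the controller (packet-in).
        entry = self.controller.packet_in(packet)
        if entry:
            self.flow_table.append(entry)           # flow entry pushed back down
            return entry.action
        return ("drop", None)

class Controller:
    """Central policy: decides how a new flow should be handled."""
    def packet_in(self, packet):
        if packet.get("dst_ip", "").startswith("10."):
            return FlowEntry({"dst_ip": packet["dst_ip"]}, ("send_out_port", 1))
        return FlowEntry({"dst_ip": packet.get("dst_ip")}, ("drop", None))

switch = Switch(Controller())
print(switch.receive({"dst_ip": "10.0.0.2"}))   # first packet -> controller decides
print(switch.receive({"dst_ip": "10.0.0.2"}))   # subsequent packets -> flow table hit
```

The point of the design is exactly what the second call shows: only the first packet of a flow costs a round trip to the controller; everything after that is switched on the local table.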
In other words it provides one, open-standard methodology of optimising traffic, end-to-end, but it is not a solution in its own right, just a potential part of the action.
Whatever - the interesting theme here is that no one talks about MPLS any longer (well maybe apart from Cisco and Juniper that is) despite it still being THE methodology used to move all our data around the 'net and beyond. There are factions that stand for the WAN optimisation kills MPLS idea. And for good reason - but there's no overnight change here, given the gazillions invested in MPLS networks. It'll be interesting to see what the vendors here make of the situation, at least from a timeline perspective...
Meantime it's showtime, meaning a walk past a beach, complete with wave machine and hundreds of Americans trying to get skin cancer, in order to get to the exhibition halls - this is Vegas, after all.
Wore my journalist hat yesterday to attend an HP update event on its ESSN division (don't worry about what the initials stand for, but N is for Networking...).
While not the key focus of yesterday's blurb, the key thing for me to take from the event was the company's very recent announcement that it is going into the VAN market; no - not competing with Transits, though you could say the network is "in transit", but Virtual Application Networks - all part of the current SDN, or Software Defined Networking, movement. For many years HP (as ProCurve) and others have been trying to crack the whole "end to end" optimisation problem. I've been trying to personally crack it using any number of vendor parts since 1999....
So, VAN is the latest attempt. The aim is to use preconfigured templates to characterise the network resources required to deliver an application to users - i.e. to enable consistent, reliable and repeatable deployment of cloud applications in minutes. An end-to-end control plane virtualises the network and enables programming of the physical devices to create multi-tenant, on-demand, topology- and device-independent provisioning. The idea is to be completely open, so this isn't an HP closed-shop solution, even though HP created it.
Speaking with one of HP's customers at the event, Mark Bramwell of the Wellcome Trust, we both agreed that it sounds like the latest and greatest "smoke and mirrors", "too good to be true" solution BUT - if it works, then great - every user has optimised applications, on a per-user, per-application basis. So we both agreed - the only sensible option is for me to test it. Watch this space on that one...
Speaking further and more broadly on the subject with Lars Koelendorf, who heads up HP EMEA's mobile and wireless stuff, we agreed that the ideal way to rebuild a network is to start with IPv6; with so many addresses available, every user could have their own virtual IP address that IS their identity, so, whatever client they are using and wherever they are, all the logic sits behind their VIP(v6) address and the HP VAN man is complete. They would, of course, drive applications faster across the network than any other user type...
In conversation with Axel Pawlik, MD of RIPE NCC (which is obviously better than an unripe version).
The RIPE NCC is an independent, not-for-profit membership organisation that supports the infrastructure of the Internet in Europe, the Middle East and parts of Central Asia. The most prominent activity of the RIPE NCC is to act as a Regional Internet Registry (RIR) providing global Internet resources (IPv4, IPv6) and related services to a current membership base of around 6,800 members in over 75 countries. So these guys are involved at the heart of the IPv6 movement. Here's Axel's views on a few key areas:
What is at the heart of the IPv4/IPv6 issue?
"Although the IANA's pool of available IPv4 addresses is exhausted, the RIPE NCC can still assign IPv4 addresses to its members from its own reserves of IPv4 address space. We cannot predict how long this supply will last."
"IPv4 addresses and IPv6 addresses can't communicate directly with each other. So, before IPv6 addresses can be used to access the Internet, your organisation's networks, services and products need to be IPv6 compatible or enabled. This requires planning and investment in time, equipment and training. New hardware and software is required to make networks ready for an IPv6-based Internet."
IPv6 - What's The Deal?
"Unless businesses act now to safeguard their networks, the future expansion of the Internet could be compromised. IPv6 is the next generation of IP addressing. Designed to account for the future growth of the Internet, the pool of IPv6 addresses contains 340 trillion, trillion, trillion unique addresses. This huge number of addresses is expected to accommodate the predicted growth and innovation of the Internet and Internet-related services over the coming years."
How will my customers be affected by the deployment of IPv6 in my networks?
"End users of the Internet may not notice any difference when using the Internet with an IPv6 address or an IPv4 address. However, if you do not invest in IPv6 infrastructure now, in the future there may be parts of the Internet that your customers cannot reach with an IPv4 address if the destination is on an IPv6-only network."
What needs to be done?
- Network operators should ensure that their networks are IPv6 enabled and can be used by their customers to access other IPv6 networks.
- Software producers should ensure that their software is IPv6 compliant.
- Hardware vendors should ensure that their products are IPv6 compatible.
- Content providers should prepare networks so that they are accessible using IPv6 as well as IPv4.
It's a question we should all ask.
He points out that the pool of IPv4 addresses has long been predicted to run out, while our readiness to move over to IPv6 looks unlikely to materialise any time soon. Conventional wisdom among many analysts said that the industry wouldn't be ready for the switch until 2015. Personally, based on the indicators he sees every day, Davis thinks it could be even more distant.
But - and this is a big but (no pun intended for American readers) - the world IS running out of IPv4 addresses. This means that two of the current booms in technology he identifies, cloud computing and the "Internet of Things", might not be sustainable. You can't have an Internet of Things, Davis argues, if the 'things' in question (gadgets) can't get on the Internet. They simply won't be able to without an IP address, and all the IP addresses available under the old system are rapidly being used up.
Davis believes that, while it might all sound a bit "Mad Max", the IP crisis does bear some of the hallmarks of an apocalypse. For example, there are some alarming inequalities in the way resources are being shared out, he notes, with just 20% of the world owning the majority of IP addresses. Hardly ideal... India, for example - which, when I last looked at my globe, is quite a large country (with rapid IT deployment) - has only three Class B address ranges (i.e. around 130,000 addresses). In contrast, as Davis points out, just one US IT company, HP, can trump that with its two Class A IP address ranges (i.e. around 32,000,000 addresses). Could this lack of infrastructure restrict the growth of the BRICs (Brazil, Russia, India and China), he asks, and will the developing nations become frustrated at their lack of, well, development?
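For reference, the raw arithmetic behind those classful blocks (the allocation figures above are Davis's; the block sizes themselves are fixed):

```python
# Quick reference for classful address-space sizes - certain arithmetic,
# independent of the specific allocation figures quoted above.
one_class_b = 2 ** 16   # a Class B (/16) block: 65,536 addresses
one_class_a = 2 ** 24   # a Class A (/8) block: 16,777,216 addresses

print(f"One Class B: {one_class_b:,} addresses")
print(f"One Class A: {one_class_a:,} addresses")
```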
This brings Davis onto another aspect of the next version of IP, which he believes nobody has really given much air time to as yet. With IPv6 giving companies complete visibility over the movements and browsing habits of smartphone and laptop users, it could become a marketing manager's dream.
If only we had the same perfect information about the migration from IPv4 to IPv6... (watch this space).
"M" might stand for Murder in the London theatre world, but the ultimate "M" word in IT has to be "Migration".
Apply this word to the challenge that is moving from IPv4 to IPv6 and you can probably hear the howls of despair and mistake them for an attempted murder. There are, however, some fundamental tools/advanced features of IPv6 that are designed to ease this process. These have been adopted to a lesser or greater degree by different vendors, so it's worth noting the availability of these features when shopping around for IPv6 assistance and future proofing.
We'll start with three absolutely fundamental ways to manage your IP addresses and how these work in a migratory environment.
NAT: NAT (Network Address Translation) has become a pretty fundamental tool for alleviating the issues with limited IPv4 address space, with most companies enabling it on their network gateways and other devices. So how does this transition to IPv6? First, there is what is known as Carrier Grade NAT (AKA Large Scale NAT), whereby carriers/ISPs can allocate multiple clients to a single IPv4 address, standardising behaviour for IPv4 NAT devices and the applications running over them, and using features such as "fairness" mechanisms - per-user port quotas and the like.
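As a rough illustration of how those port quotas share out a single public address - the quota size below is an assumption for the sake of the sums, not a figure from any particular carrier deployment:

```python
# Illustrative only: how port quotas let a carrier share one public IPv4
# address among many subscribers. The quota size here is an assumption,
# not a figure from any particular CGN deployment.

TOTAL_PORTS = 65_536
RESERVED_PORTS = 1_024          # well-known ports typically kept back
PORTS_PER_SUBSCRIBER = 1_008    # assumed per-user quota ("fairness" mechanism)

usable = TOTAL_PORTS - RESERVED_PORTS
print(f"Subscribers per shared IPv4 address: {usable // PORTS_PER_SUBSCRIBER}")  # 64
```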
We also have specific transition technologies such as NAT 64. This is a mechanism to allow IPv6 hosts to communicate with IPv4 servers. The NAT64 server is the endpoint for at least one IPv4 address and an IPv6 network segment of 32-bits. The IPv6 client embeds the IPv4 address it wishes to communicate with using these bits, and sends its packets to the resulting address. The NAT64 server then creates a NAT mapping between the IPv6 and the IPv4 address, allowing them to communicate.
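The address mapping itself is simple enough to sketch - this example assumes the well-known 64:ff9b::/96 prefix, though operators can use a network-specific prefix instead:

```python
# Sketch of the address mapping described above: embedding an IPv4 address
# in the last 32 bits of a NAT64 prefix. Uses the well-known 64:ff9b::/96
# prefix as an example; a real deployment may use a network-specific prefix.
import ipaddress

def synthesize_nat64(ipv4_str, prefix="64:ff9b::"):
    prefix_int = int(ipaddress.IPv6Address(prefix))
    ipv4_int = int(ipaddress.IPv4Address(ipv4_str))
    return ipaddress.IPv6Address(prefix_int | ipv4_int)

print(synthesize_nat64("192.0.2.1"))   # 64:ff9b::c000:201
```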
DNS: As with NAT64, there is also DNS64. The IPv6 end user's DNS requests are received by the DNS64 device, which resolves the requests.
If there is an IPv6 DNS record (AAAA record), then the resolution is forwarded to the end user and they can access the resource directly.
If there is no IPv6 address but there is an IPv4 address (A record), then DNS64 converts the A record into an AAAA record using its NAT64 prefix and forwards it to the end user. The end user then accesses the NAT64 device that NATs this traffic to the IPv4 server.
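Sketched out, that DNS64 decision flow looks something like this - the record data is hypothetical, and a real DNS64 resolver queries upstream name servers rather than local tables:

```python
# Sketch of the DNS64 decision flow described above. The record data here is
# hypothetical; a real DNS64 resolver would query upstream DNS servers.
import ipaddress

NAT64_PREFIX = int(ipaddress.IPv6Address("64:ff9b::"))

AAAA_RECORDS = {"dualstack.example": "2001:db8::10"}   # hosts with native IPv6
A_RECORDS    = {"legacy.example": "198.51.100.7"}      # IPv4-only hosts

def dns64_resolve(name):
    if name in AAAA_RECORDS:
        # Native AAAA record exists: hand it straight back to the client.
        return ipaddress.IPv6Address(AAAA_RECORDS[name])
    if name in A_RECORDS:
        # IPv4-only: synthesise an AAAA record under the NAT64 prefix.
        ipv4_int = int(ipaddress.IPv4Address(A_RECORDS[name]))
        return ipaddress.IPv6Address(NAT64_PREFIX | ipv4_int)
    return None

print(dns64_resolve("dualstack.example"))   # 2001:db8::10 - returned as-is
print(dns64_resolve("legacy.example"))      # 64:ff9b::c633:6407 - synthesised
```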
Dual Stacks/DS-Lite: An obvious feature to look for is dual-stack support, where IPv4 and IPv6 can run simultaneously. In addition there is DS-Lite (Dual-Stack Lite), which enables incremental IPv6 deployment, providing a single IPv6 network that can serve both IPv4 and IPv6 clients. Basically this works by tunnelling IPv4 (from the customer's gateway) over IPv6 (the carrier's network) to a NAT device (the carrier's device allowing connection to the IPv4 Internet, which can also apply LSN/CGN). Because of IPv4 address exhaustion, DS-Lite was created to enable an ISP to omit the deployment of any IPv4 address to the customer's on-premises equipment, or CPE. Instead, only global IPv6 addresses are provided. (Regular dual stack deploys global addresses for both IPv4 and IPv6.)
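From an application's point of view, dual-stack support can be as simple as one IPv6 listening socket that also accepts IPv4 clients as IPv4-mapped addresses. A minimal sketch follows; note that OS support for the IPV6_V6ONLY toggle varies, and the carrier-side DS-Lite/CGN plumbing described above is not something an application sees at all:

```python
# An application-level view of dual stack: one IPv6 listening socket that
# also accepts IPv4 clients (as IPv4-mapped addresses) when IPV6_V6ONLY is
# disabled. Carrier-side DS-Lite/CGN is network plumbing, not shown here.
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)  # accept v4 and v6
srv.bind(("::", 8080))      # "::" = all IPv6 (and mapped IPv4) interfaces
srv.listen(5)
print("Dual-stack listener on port 8080")
# conn, addr = srv.accept()  # an IPv4 client appears as e.g. ::ffff:203.0.113.5
```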
I've recently been in conversation with a number of network product vendors - from Cisco to Infoblox - as well as users and test equipment vendors, with respect to what must be the ultimate "let's sweep it under the carpet and forget about it for a while" IT topic, and that is IPv6.
With the last of the public IPv4 address allocation now long gone and the Far East already deploying IPv6 big time, the reality is that we do all need to start thinking about moving from the "4" to the "6", albeit gradually in most cases. And with LTE around the corner in the mobile world, that being pure IP-based, how many new IP addresses will suddenly be demanded? And where are they going to get allocated from?
In the States recently, having a casual natter with Infoblox's Steve Garrison, Steve was telling me how many companies still carry out IP Address Management (IPAM) using Excel spreadsheets (got to be in the "Top 10 misuses of a spreadsheet"). So how will they cope with the complexities of deploying IPv6?
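A quick illustration of why the spreadsheet approach falls over with IPv6 - a single standard /64 subnet already holds more addresses than anyone could ever type into Excel:

```python
# Why a spreadsheet stops being an IPAM tool once IPv6 arrives: even a single
# standard /64 subnet holds more addresses than could ever be enumerated.
import ipaddress

ipv4_office = ipaddress.ip_network("192.0.2.0/24")     # documentation range
ipv6_subnet = ipaddress.ip_network("2001:db8::/64")    # one standard IPv6 subnet

print(f"IPv4 /24: {ipv4_office.num_addresses:,} addresses")   # 256
print(f"IPv6 /64: {ipv6_subnet.num_addresses:,} addresses")   # ~1.8 x 10^19
print(f"Ratio: {ipv6_subnet.num_addresses // ipv4_office.num_addresses:,}x")
```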
Another worry, from a conversation with F5 Networks and others that dabble in L4-7 data "mucking about", is the potential performance hit when moving from IPv4 to IPv6. This is something that (quelle surprise!) vendors don't openly talk about, but F5 has seen up to a 50% performance hit on some rival products (tested internally) when moving from IPv4 to IPv6, and generally reckons its own products see up to a 10% performance loss in the same circumstances. This claim was substantiated in talks with other vendors large and small, such as a newly acquired load-balancing client of ours, Kemp Technologies.
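For anyone curious, a trivial client-side comparison of the same service over IPv4 and IPv6 looks something like the sketch below - a very rough indicator only, and nothing like the internal load-balancer benchmarks the vendors are quoting. The hostname is a placeholder for any dual-stacked site:

```python
# A trivial way to compare the same web service over IPv4 and IPv6 - only a
# rough client-side indicator, nothing like a proper load-balancer benchmark.
import socket, time

HOST, PORT = "www.example.com", 80   # placeholder dual-stacked host

def fetch_time(family):
    addr = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)[0][4]
    start = time.perf_counter()
    with socket.socket(family, socket.SOCK_STREAM) as s:
        s.connect(addr)
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() +
                  b"\r\nConnection: close\r\n\r\n")
        while s.recv(4096):
            pass
    return time.perf_counter() - start

for fam, label in ((socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")):
    try:
        print(f"{label}: {fetch_time(fam) * 1000:.1f} ms")
    except OSError as err:
        print(f"{label}: not reachable ({err})")
```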
So, on the basis that someone has to do something about it, we are launching an IPv6 performance test programme, with a view to developing what is effectively an ongoing buyers' guide/approved list that companies can use to short-list their potential IPv6-related procurements.
Over the next few days we'll be looking at some of the key elements of IPv6 deployment - think in terms of something akin to the Top 10 Considerations when moving to IPv6. Because, sooner or later, we're all going to have to do it...
So, when a brand spanking new report out from Ericsson (or should I say Sony?) tells us that mobile data traffic will go berserk over the next five years, I think beyond the "well, Ericsson would say that, wouldn't they" and say it should be paid close attention to. And, besides, I am Eric's son, so we have something in common.
Headlines from the report are:
- Mobile data traffic will grow 10-fold between 2011 and 2016, mainly driven by video.
- Mobile broadband subscriptions grew by 60 percent in one year and are expected to grow from 900 million in 2011 to almost 5 billion in 2016.
- By 2016, users living on less than 1 percent of the Earth's total land area are set to generate around 60 percent of mobile traffic.
So, we're looking at a 10-fold increase in mobile data traffic. That's why I test a lot of optimisation products for a living. And all those vendors, talking in recent years of the era of unlimited bandwidth? Very funny indeed...
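As a back-of-envelope check, a 10-fold increase over five years works out at roughly 58% compound growth per year:

```python
# Back-of-envelope only: what a 10-fold increase over five years implies
# as a compound annual growth rate.
growth_factor, years = 10, 5
cagr = growth_factor ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")   # roughly 58% per year
```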
And it's not just the mobile networks that will get choked. What if just 1% of companies moved their (let's say currently "in-house") IT activities onto the public cloud - do you think the Internet would cope? Don't think I need to answer that one...