Colt's Blog

Colt’s managed broadcast network sparks great interest at IBC

By: Ken Wood - 14 Sep 2012

The International Broadcasting Convention (IBC) is always a busy affair, attended by companies from all over the world and covering every aspect of broadcasting today. With the industry in flux over increased customer demand for HD and 3D, new forms of interactivity and 24/7 content provision, there was plenty of lively conversation and debate.

The 2012 Olympics was the first Games to feature live 3D television broadcasts, but it also showed the appetite for multi-channel simultaneous live broadcasting: the BBC alone broadcast 24 dedicated Olympics channels, and that demand is set to grow. With viewers demanding ever higher levels of high-quality, live content from across the globe, broadcasters require higher bandwidth and increased flexibility from their delivery networks to meet that demand. I was delighted by the reception our new partnership announcement with Hibernia Media received. Essentially, Colt is now able to provide access to high-quality live broadcast content from the USA, Europe and Asia. Our broadcast-quality network means that high-demand live events can be allocated all the resources required to ensure a great viewer experience, whether the content is HD or 3D.

High bandwidth demand for live events was just one of the topics for debate. The second-screen, or triple-screen, experience, where viewers interact with mobile and tablet while watching TV, was also one of the major themes this year. The ‘lean-forward’ experience of interacting with the programme you’re watching is increasingly replacing the passive consumption of content. That interactivity involves more content, creating greater demand for flexible infrastructure that can scale up on demand, helping broadcasters maximise viewer ratings and increase share of wallet.

Overall IBC gave us a great opportunity to meet customers and new contacts, talk about our new services and cement existing relationships with our partners: Hibernia, Nevion and BT Wholesale. Together we can help our broadcast customers to generate significant revenues from live sports and news content while at the same time providing scalable and flexible services to help meet the technology-led evolution they are facing. We look forward to making that happen.

Emerging Markets create new opportunities at Wholesale World Congress

By: Peter Hutchings - 11 Sep 2012

Once again, Colt will be making its way to Madrid this week for the annual Wholesale World Congress (WWC). For those of us who work in Voice, this is a vital opportunity to meet new partners and clients, explore new ways of working and get to know each other better. With such an industry-wide focus, WWC brings together Tier 1 to Tier 3 carriers, mobile and wireless operators, ISPs, VoIP companies and technology partners to create an event that spans the whole of the telecoms industry.

At Colt we’re especially excited this year about the new possibilities arising from emerging markets, with a particular focus on Africa. Fixed below-the-ground infrastructure remains quite poor across many countries on the continent; instead, the technology and telecoms boom has been driven by mobile and by growing demand for internet services. We’ve seen a 100-fold increase in sub-Saharan internet bandwidth in the past three years, with potential capacity now at 25 Tb/s. A rapidly increasing population means that this demand will grow across all areas of data connectivity and telecoms services.

Such a rapidly growing market presents its own set of challenges to overcome and I know the Voice Trading team are looking forward to discussing and addressing how Colt can benefit our clients and partners in the years ahead. Wholesale World Congress plays a vital role in giving us the opportunity to have these discussions and understand how our peers within the market are responding.

If you’re attending, please come and say “Hello” and meet the rest of the team. We’re delighted to be attending WWC and can’t wait to discuss the exciting prospects that surround the international wholesale telecoms community right now.

Tier III Design Certification for Modular Data Centre brings great benefits

By: Victor Smith - 04 Sep 2012

Yesterday, Colt Data Centre Services were delighted to share the news that we’ve been awarded the Uptime Institute’s Tier III Certification of Design Documents for two of the halls in our London 3 facility. This is great news not only for us but for our customers as well. Certification is an independent assessment of a data centre’s resilience, so our customers can be assured they are getting the highest quality design available.

The data centre industry has a history of building no two data centres alike. Today we are moving towards an era where the benefits of standardisation, and of best practice in design, build and operation, are becoming more evident. When it comes to housing their critical compute, our customers are risk-averse and want a solution and a provider they can trust. Certifications such as the Tier Certification of Design Documents and the Management and Operations stamp of approval give our customers the reassurance that our design and operations have been independently assessed by one of the industry’s foremost authorities, the Uptime Institute. An independently verified certification reinforces the confidence our customers need when outsourcing their wholesale data centre and colocation services.

More importantly, though, the certification underlines the outstanding features our data centres already offer: minimised service disruption; a modular build that allows critical cooling systems and UPS to be maintained or replaced with no detrimental effect on resilience; and, of course, our formidable speed to market with such a robust solution.

When you combine these high-standard design features with the location of our London 3 centre, just 25 miles from the city centre, you have a very powerful, accessible, state-of-the-art facility. I’m very pleased we’ve been awarded this certification and look forward to improving on our already cutting-edge services and solutions.

The real meaning of outages; coming full circle on resilient cloud

By: Carl Brooks - 31 Aug 2012

Colt is always keen to support discussion on key issues affecting customers and service providers. In a recent research piece on www.451research.com, Carl Brooks from 451 Research addressed the issue of service outages and their impact on this new IT-as-a-service world. We felt it was so well argued and balanced that it needed a wider audience. Thanks to 451 Research for allowing us to repost this as a guest blog.

In the first part of this series, T1R touched on several recent outages from cloud providers, websites and even a good old-fashioned mainframe. Hot on the heels of those come yet more reported outages: salesforce.com suffered a major outage on July 10, following a less serious outage on June 28. Murphy's law strikes again, but what does it portend for cloud users?

The promise of cloud computing was that it made all the hard work invisible and delivered the useful end product – servers and storage – as though it were magic. IT operators can appreciate the toil and expense that goes into maintaining a highly reliable environment; cloud users weren't supposed to; at least, that was the premise. In return for what are essentially dedicated virtual servers, cloud providers do all of the work that IT shops consider expensive and headache-producing: replicate data, provide failover if a server or a connection goes down, maintain all the underlying equipment, facilities and relationships with transit and peering providers and so on.

In the enterprise, that kind of reliability carries a stiff price tag, and the usual answer to the question 'What's our uptime capability?' is 'How much are you paying?' Cloud computing has been seen as a panacea for that unpleasant reality. Software as a service has aided that illusion: even when salesforce.com or the like goes down, it is rarely business-critical IT functions that are affected, and the assumption was that IaaS providers would take their infrastructure that much more seriously, which they do. IaaS providers generally maintain a minimum of 99.95% uptime for all their services, considerably better than the average IT shop. Of course, that's still lower than what more traditional hosting providers can offer; one of the tradeoffs of building on cheap commodity gear and designing around resilient automation systems instead of premium gear and lots of staff.
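
It is worth pausing on what a 99.95% figure actually buys. A quick, purely illustrative calculation (the helper function here is our own, not anything from a provider's tooling) shows how much downtime such a commitment still permits:

```python
# Downtime permitted under a given uptime level; illustrative arithmetic only.

def allowed_downtime_minutes(uptime_fraction: float, period_minutes: float) -> float:
    """Minutes of downtime permitted over a period at a given uptime level."""
    return (1.0 - uptime_fraction) * period_minutes

MINUTES_PER_MONTH = 30 * 24 * 60   # 43,200 minutes in a 30-day month
MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600 minutes in a year

# 99.95% uptime still allows about 21.6 minutes of downtime a month,
# or roughly 4.4 hours over a year.
monthly = allowed_downtime_minutes(0.9995, MINUTES_PER_MONTH)
yearly = allowed_downtime_minutes(0.9995, MINUTES_PER_YEAR)
print(f"{monthly:.1f} minutes/month, {yearly:.1f} minutes/year")
```

Better than most in-house shops, as the piece notes, but a long way from zero.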

Of course, the truth is that every service provider will have outages, and the basic tradeoff of the cloud is that you give up control over your infrastructure in return for easy access, not to gain invulnerability to anything going wrong. It appears that the answer to reliability for the enterprise is the same as it's always been (buy more resources), although the nuances and the technology are changing. Cloud enables new and better ways to achieve resiliency, but not with the current state of the enterprise.

Design for failure

Advocates and early adopters of cloud technology are quick to say 'design for failure,' meaning that infrastructure and applications should be built with the expectation that any and all parts of the system can fail. That's not exactly a new design philosophy, but it's rudimentary in the enterprise compared with how cloud people mean it. Netflix, which operates entirely on Amazon Web Services (AWS) at this point, famously wrote a 'chaos monkey' into its systems – a bit of code that semi-randomly disables parts of Netflix's online infrastructure to continually improve Netflix's response to problems. The company replicates all important data and systems across multiple geographical locations by default and takes fault tolerance to an extreme.
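
The idea is simple enough to sketch in a few lines. This is a hypothetical illustration of the pattern, not Netflix's actual tool; the instance names and kill probability are invented for the example:

```python
import random

# Hypothetical sketch of a "chaos monkey": semi-randomly disable one
# running instance so failure-recovery paths are exercised continually.

def chaos_monkey(instances, kill_probability=0.1, rng=random):
    """Run one chaos pass; return (survivors, terminated_instance_or_None).

    With probability `kill_probability`, one randomly chosen instance
    is 'terminated'; otherwise the fleet is left untouched.
    """
    if instances and rng.random() < kill_probability:
        victim = rng.choice(list(instances))
        survivors = [i for i in instances if i != victim]
        return survivors, victim
    return list(instances), None
```

Run on a schedule against production, a routine like this forces teams to treat any single instance as disposable, which is exactly the design-for-failure stance the paragraph describes.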

That's a few evolutionary steps above RAID, tape and spare parts on the shelf, which is the disaster plan for most traditional IT. Only the most important applications get site replication and automatic failover, because it's expensive. Conversely, using an IaaS provider means that replication is both easy and relatively inexpensive; more importantly, it is the only way to get demonstrable improvements in reliability.

Users can't buy more or less reliable cloud servers (SLAs can be purchased, but an SLA doesn't guarantee actual uptime performance, of course).

As enterprise IT continually automates, refines and adopts the techniques that cloud providers and cloud users have pioneered, it will begin to adopt this philosophy, and that will happen in the management layer. All indications point to enterprise IT that functions more like IaaS than not, eventually. This kind of capability requires advanced management capabilities to extend across an organization, of course, but that too is on the way as enterprise IT departments face pressure to deliver the same kind of flexibility as Amazon does.

Server goes down? Instead of troubleshooting until it can be brought back up, why not bring up a duplicate, refresh it with a snapshot of the failed one, recheck any data against the master copy for consistency, and then go troubleshoot the failure? Downtime becomes minutes instead of hours, leaving more time for root-cause analysis and less time bug hunting. Some IT shops already do this today; it's common in the virtual desktop world and in fluid environments like test and development.
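
That replace-first, diagnose-later flow can be sketched as follows. Everything here, the provider object and all of its method names, is a hypothetical stand-in to show the order of operations, not any real cloud API:

```python
# Hypothetical sketch of "replace first, troubleshoot later" recovery.
# `provider` and its methods are illustrative assumptions, not a real API.

def recover(provider, failed_server, master_copy):
    """Bring up a duplicate, restore its state, verify data, then diagnose."""
    # 1. Launch a replacement rather than repairing in place.
    replacement = provider.launch_duplicate(failed_server)
    # 2. Refresh it from the most recent snapshot of the failed machine.
    replacement.restore_snapshot(provider.latest_snapshot(failed_server))
    # 3. Re-check restored data against the master copy for consistency.
    if not replacement.data_consistent_with(master_copy):
        replacement.resync_from(master_copy)
    # 4. Only now investigate the original failure, offline.
    provider.quarantine_for_analysis(failed_server)
    return replacement
```

The key point is step 4: root-cause analysis happens after service is restored, not before.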

Multi-cloud

That's the internal IT story. Security against outages is also a concern for external IT users, those consuming public cloud services and hosted cloud-style environments. This may eventually include multiple cloud providers, but again, management will be the key. The natural extension of the traditional IT disaster-recovery strategy into public cloud would be to host failover environments with different providers. To some extent this is what AWS offers with its multiple Regions and Availability Zones, and its own best practices say that users should duplicate resources and spread them across several regions. But the wary IT professional would prefer to use another vendor entirely, for extra assurance, and that seems obvious.

The problem is that IaaS environments are not homogeneous. In fact, they're usually markedly different in quirky ways, to the extent that applications are designed around the performance characteristics of particular providers. AWS is somewhat non-deterministic in routing traffic, so it's used for applications that aren't dependent on consistent I/O. Rackspace Cloud has great machine-to-machine connectivity, so it attracts applications that rely on exactly that. Microsoft Azure's Blob storage has fantastic consistency, speed and availability; some users treat it as a back end for web applications running on their own IT. What all this means is that moving an infrastructure stack between providers is not at all trivial. In many cases, it's not possible.

However, that, too, is part of the evolution underway. Tier1 Research sees hybrid IT deployments becoming the dominant model over time. In time, there will be enough providers and environments to suit all comers. AWS has already created offerings like Cluster Compute to serve HPC users, and industry- and application-specific cloud offerings are already here; Azure, for instance, is the example of a Microsoft-specific cloud.

There will come a time when external clouds will be both resources and failover pools, and enterprises will actually be able to do many of the things that cloud computing seems to promise, but it will be a variegated landscape.

T1R take

Several trends need to materialize more fully before we see multi-cloud deployments that can protect enterprises from outages 'in the cloud.' One of them is more fully abstracted applications and infrastructure, which will in turn be more portable across different providers. This means a continual re-examination of how an IT department or MIS thinks about its operations, moving more and more toward modular, replaceable parts for all of IT, not just the obvious candidates. There's no reason an application server can't be as easily and painlessly replaceable as a hard drive in a RAID array, but it means rethinking a lot of received wisdom about production environments and traditional application stacks. The next trend is the adoption of true multi-cloud, multi-resource management tools that can treat a server in the basement the same as a server at an IaaS provider. Cloud brokering technology needs to mesh with traditional IT management frameworks all the way up and down the stack. Companies like EnStratus and DynamicOps (bought by VMware for just this reason) can fill in some of those gaps, but we're a long way from mainstream adoption.
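
Treating a server in the basement the same as one at an IaaS provider amounts to putting a common interface over heterogeneous back ends. A minimal sketch of the idea, with entirely hypothetical provider classes (no real cloud SDK is used here):

```python
from abc import ABC, abstractmethod

# Minimal sketch of a multi-cloud management abstraction: one interface,
# many back ends. The provider classes and methods are hypothetical.

class ComputeProvider(ABC):
    @abstractmethod
    def provision(self, name: str) -> str:
        """Create a server and return an opaque identifier."""

class OnPremises(ComputeProvider):
    def provision(self, name: str) -> str:
        return f"basement:{name}"            # the server in the basement

class PublicIaaS(ComputeProvider):
    def __init__(self, region: str):
        self.region = region
    def provision(self, name: str) -> str:
        return f"iaas:{self.region}:{name}"  # the server at an IaaS provider

def provision_everywhere(providers, name):
    """The management layer sees one operation, whatever the back end."""
    return [p.provision(name) for p in providers]
```

Real brokering tools must of course also reconcile the quirky per-provider differences described earlier, which is precisely why the abstraction is hard.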

The good news is that there is an ongoing opportunity for service providers to pick up enterprise business as these trends move forward, but the basic economic premise of hosted infrastructure only grows stronger as enterprises make these kinds of shifts. Infrastructure providers should be looking to encourage and enable hybrid environments and rudimentary multi-cloud approaches that can bridge that gap.

Future Proofing the Financial Sector

By: Hugh Cumberland - 29 Aug 2012

The news that EU regulators plan to implement a Volcker Rule-style regulation has been met with understandable resistance. Banks across Europe are arguing that any rule that aims to curb proprietary trading, specifically those trades facilitated by customer deposits, would impact market liquidity and ultimately curtail GDP growth.

Much like the now-repealed Glass–Steagall Act of the 1930s, this regulation aims to separate high street banking businesses from their arguably riskier investment banking arms. No doubt there will be debate about the degree of ring-fencing required but, whether the lobbying paper submitted to the Liikanen Commission gains an audience within the EU or not, the regulatory landscape will continue to evolve in line with events. This means that banks across Europe will have to continue to adapt in order to comply.

This change places a serious demand on IT, not traditionally the most adaptable group within an organisation. Duplication of infrastructures and systems would require considerable budget and in-house resource at a time when the financial services industry is refocusing on the core business of banking. Clearly there would be cost benefits in maintaining shared infrastructures if ring-fencing is implemented, but any infrastructure change in response to regulation provides the opportunity for far more than compliance.

Given the opportunities offered by emerging markets and cross-asset trading, there is competitive advantage to be gained from infrastructures with the agility and flexibility to let a bank respond quickly both to regulatory changes and to emerging markets. For example, certain investment banks are already deciding to spin off proprietary trading activity into separate operations. Fast, efficient business transformation cannot be achieved with legacy systems and traditional approaches to IT procurement. With ring-fenced retail operations and the regulatory requirements of Basel III and MiFIR, banks will need agile infrastructures that not only allow them to manage change in the short term, but also provide the flexibility to respond to as-yet-unknown regulations and to future-proof services as new ways of doing business develop.

This includes services that support proprietary trading activity such as high-frequency trading (HFT), which currently accounts for an estimated 50 to 70% of global equity trading and 30% of global trading activity in foreign exchange (FX), generating greater trading efficiencies, more market liquidity, tighter spreads and better prices. As the sector adapts to change, the greatest share of future revenue opportunities in HFT and new forms of proprietary trading will be claimed by the most adaptable players, those who successfully restructure their IT operations for both speculative trading and commercial banking. For banks, this flexibility is not only about short-term compliance. It’s also a hedge against the future.
