A major telco planning to move mission-critical services to the cloud; a growing buzz around software-defined networking (SDN); business models of traditional network suppliers under threat. There was a lot to chew on at the recent Cloud-Net Summit in London organised by Layer123, and that was from attending one morning session only. Apparently, there were some pretty heated workshop discussions the day previously.
Of course, industry debate about how telcos can better manage and simplify their networks – and deliver a wide range of services in a more cost-effective and timely way – has been going on for years. But advances in cloud computing and storage capacity, plus signs that software-defined networking is ready to move beyond the university campus, are giving the discussion fresh impetus. It also focuses minds when an operator the size of Deutsche Telekom says it has radical plans to overhaul its network and head for the cloud.
In his excellent presentation at Cloud-Net, Axel Clauberg, vice president of IP architecture and design at Deutsche Telekom (DT), complained that growing network complexity – and the support of multiple protocols with expensive interfaces – was holding back the operator’s efforts to bring service innovation to market. He also noted that Germany’s cable operators, running more IP-optimised networks than DT’s, were causing competitive problems. “If we want to compete in this market and be profitable, we need to drastically simplify our IP service production process,” said Clauberg. It might be good business for suppliers if DT continued to run multiple network elements on switches and routers, he added, but it wasn’t doing the telco’s profit margins any good. The old way of doing things, insisted Clauberg, was not sustainable.
The DT plan is to radically simplify the network and create a new architecture based on native IP. Dubbed TeraStream, the new DT network will comprise only two types of routers: R1 and R2. R1 routers are customer facing, residing in the access network; R2 routers are connected to the telco’s cloud-based data centres and peering partners. The R1 and R2 routers are connected by an optical ring that forms the core of the network. The core transmits Ethernet frames carrying IPv6 traffic. “We want to get rid of all the expensive internal interfaces you would get in an MPLS network,” said Clauberg.
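As a rough illustration of the two-tier design described above – the class and field names here are invented for the sketch, not DT's own terminology – the TeraStream topology boils down to customer-facing R1 routers in the access network and data-centre/peering R2 routers, all attached to a single optical core ring, with no MPLS-style internal interfaces between tiers:

```python
# Illustrative sketch of the TeraStream two-tier topology as described in
# the article. Names and structure are assumptions for illustration only;
# DT's actual design is not public in this detail.
from dataclasses import dataclass, field


@dataclass
class Router:
    name: str
    role: str  # "R1" (customer-facing, access) or "R2" (data centre / peering)


@dataclass
class OpticalRing:
    """Core optical ring carrying Ethernet frames with IPv6 traffic."""
    members: list = field(default_factory=list)

    def attach(self, router: Router) -> None:
        self.members.append(router)

    def path(self, src: Router, dst: Router) -> list:
        # Any R1-to-R2 path traverses only the ring -- there are no
        # expensive internal interfaces between router tiers to cross.
        return [src.name, "optical-core-ring", dst.name]


# Build the simplified network: one access R1, one data-centre R2, one ring.
core = OpticalRing()
r1 = Router("access-r1", "R1")
r2 = Router("dc-r2", "R2")
core.attach(r1)
core.attach(r2)

print(core.path(r1, r2))  # ['access-r1', 'optical-core-ring', 'dc-r2']
```

The point of the sketch is only that the core becomes a single transport element: every service-specific function lives elsewhere, in the data centres.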
The task of service delivery moves away from the network to what Clauberg calls network-centric data centres. “The more special hardware and software in the network elements, the more difficult it is to manage and scale,” he said. “We want to concentrate on IP service delivery [from network-centric data centres] using COTS [commercial off-the-shelf software].”
A range of network services will be housed in DT’s data centres, including DHCP, DNS and OSS. Other services include IPTV, OTT apps, content delivery networks, IMS components, and the support of legacy protocols (MPLS and IPv4). “Users will also be able to self-provision, which is possible today with cable operators,” said Clauberg. “And if content providers want to sell directly to their customers, we will provide virtual machine storage capacity to do that.” The first customer trial of TeraStream is scheduled for this year.
Under new management?
Co-hosted by the Open Networking Foundation (ONF), the Cloud-Net Summit naturally devoted time to OpenFlow, the network management protocol that ONF promotes. Based on SDN, OpenFlow envisages a radical simplification of switches and routers to enable easier and more cost-effective service delivery.
Today, switches are complicated beasts. Dan Pitt, executive director at ONF, wants that to change. Instead of switches being self-contained systems, exchanging peer-to-peer protocols that determine such things as network topology, routing methods and security, the OpenFlow vision is to move the control plane out of the switch and into a centralised SDN controller platform. It certainly sounds compelling. Using what Pitt calls ordinary software, which can be handled by ordinary programmers with no special training, network policies (using OpenFlow) can be attached to applications. It is the control tier, not the switch, that now determines where the packets go and what policies they adhere to (such as throughput, latency, security levels, multicasting, and whether data should stay in one country or not). “Company policy can be translated into routing algorithms,” said Pitt.
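The control-plane split Pitt describes can be sketched in a few lines – a toy model of my own, not ONF code or the OpenFlow wire protocol: the switch keeps only a flow table and forwards by lookup, while the centralised controller decides policy and installs the matching rules.

```python
# Toy sketch of OpenFlow-style control/data-plane separation.
# All names are illustrative assumptions; a real deployment would use a
# controller speaking the ONF-specified OpenFlow protocol to the switches.

class Switch:
    """A 'dumb' switch: no local routing logic, just a flow table."""

    def __init__(self):
        self.flow_table = {}  # match (destination) -> action (output port)

    def install_flow(self, match, action):
        self.flow_table[match] = action

    def forward(self, packet):
        # In real OpenFlow, unmatched traffic is punted to the controller;
        # for this sketch we simply drop it.
        return self.flow_table.get(packet["dst"], "drop")


class Controller:
    """Centralised control plane: translates policy into flow entries."""

    def __init__(self, switches):
        self.switches = switches

    def apply_policy(self, dst, port):
        # e.g. a company policy pinning traffic for a destination to a
        # specific egress port, pushed to every switch at once.
        for sw in self.switches:
            sw.install_flow(dst, f"output:{port}")


sw = Switch()
ctl = Controller([sw])
ctl.apply_policy("2001:db8::1", 3)

print(sw.forward({"dst": "2001:db8::1"}))   # output:3
print(sw.forward({"dst": "2001:db8::99"}))  # drop
```

The switch never runs a routing protocol; change the policy at the controller and every flow table follows – which is exactly what makes the hardware candidate for commoditisation.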
There is still a lot of work to do, however, if the ONF vision of a super flexible network is to become reality. While DT’s Clauberg said he could see a real-time OSS role for OpenFlow in the data centre and the access network, the network core was currently out of bounds. The presence of BGP (border gateway protocol) for the public internet, he said, made the network core incompatible with OpenFlow. Clauberg said he was keeping a watchful eye on progress made by the ONF hybrid network working group, which is tackling this issue. “We are depending on its results,” he said.
There are other questions, too. As Pitt acknowledged at Cloud-Net, there is apprehension among operators about throwing out their current gear and introducing OpenFlow-compatible switches. Nor do the arguments for a centralised control plane, as envisaged by ONF, appear to have been won outright.
There is also a question mark over the availability of these stripped-down switches and how enthusiastic traditional network suppliers – such as Cisco and Juniper – would be in making them. Unless they can differentiate using OpenFlow software, it is hard to imagine they will be working strenuously to commoditise their products. That said, Cisco and Juniper clearly take OpenFlow seriously. Both are founding member companies of ONF. Wisely, however, ONF does not allow vendors to have seats on the board. Suppliers, protecting vested interests, could easily slow down standardisation work.