Put smarts at the edge to help manage the backbone.
Managing bandwidth from the headend to the set-top is an ongoing challenge because the resource is always constrained at some point by some limit – the capacity of a node, for example, or the total spectrum available, be it 750 MHz, 860 MHz or 1 GHz. In the other direction from the headend, however, toward the cloud, the challenge isn't scarcity but complacency: it's easy to get sloppy when you can simply throw more bandwidth at the problem by expanding the capacity on existing fiber, or by installing more fiber.
Going out toward a metro ring, a super headend, or a regional or national backbone, there’s typically plenty of capacity. But the question of traffic management still comes up.
“Intelligence in the network is an idea whose time has either come, or is coming soon,” said Tom Mock, Ciena’s vice president of strategic planning.
Service providers of all stripes have been spending money, a lot of money, the last two or three years on backbone and network infrastructure. The issue now is how to monetize it.
Telcos are used to rolling out five to 10 services a year, Mock observed. Some competitors try out more than 100 a year. A company offering only four or five new services loses its ability to differentiate, which cripples its ability to compete.
Figure 1: Immense backbone capacity makes super headends practical, but congestion can still occur.
That makes it critical to be prepared to provide many, many more services, and that is going to necessitate the flexibility in the network to support doing so.
“For example, a cable operator offering broadband wants to offer higher speeds, more deterministic performance, better QoS, all as a way to deliver more services. The infrastructure itself must adapt to changes in the service mix. The problem is that it’s hard to predict what’s coming,” Mock continued.
That means the infrastructure has to have the power and flexibility to define services on the fly, and the network, including both hardware and software, must be able to reconfigure itself accordingly. Otherwise, the network itself becomes a bottleneck, if not a gate.
The other term for this is service velocity, and it needs to increase. Customer experience has to improve – service providers have to add value, in the form of differentiated services. But neither is possible without the cost of the network going down, explained Ray Mota, president of Synergy Research Group.
The most effective way to drop network costs is to move toward converged networks. "It's not an issue of the applications driving the network; we're in a situation where networks drive the applications, so the game has changed somewhat," Mota said.
It is now on the way to being conventional wisdom that a key to managing the network for whatever you want to call it – flexibility, service velocity, intelligent networking – is using deep packet inspection (DPI) and policy management.
Juniper Networks, long an advocate of intelligence in the network, is moving even further in that direction by incorporating limited DPI and policy management functions directly into its routers that would work in conjunction with a CMTS.
One element of providing differentiated services is understanding what traffic is coming through. DPI is there as a tool for monitoring traffic.
Once you can identify what every subscriber is doing, you can then make sure the network is provisioning an appropriate level of service per subscriber, on an application basis.
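The idea can be illustrated with a minimal sketch. Here, port-based matching stands in for real DPI signature analysis, and the application names, subscriber tiers and rates are all hypothetical, not drawn from any vendor's actual product:

```python
# Hypothetical sketch: map DPI-classified traffic to a per-subscriber,
# per-application service level. Port matching stands in for real DPI
# signature analysis; all tiers, ports and rates are illustrative.

# "Inspection": identify the application by destination port.
APP_BY_PORT = {3074: "gaming", 1935: "video", 6881: "p2p"}

# Policy: bandwidth (kbps) per application, by subscriber tier.
POLICY = {
    "premium":  {"gaming": 2000, "video": 6000, "p2p": 1000, "default": 4000},
    "standard": {"gaming": 1000, "video": 3000, "p2p": 256,  "default": 1500},
}

def provision(subscriber_tier: str, dst_port: int) -> int:
    """Return the rate (kbps) to provision for this subscriber's flow."""
    app = APP_BY_PORT.get(dst_port, "default")
    return POLICY[subscriber_tier][app]

print(provision("premium", 3074))   # a premium subscriber's gaming flow
print(provision("standard", 6881))  # a standard subscriber's p2p flow
```

The point of the sketch is the two-step structure: classification first (the DPI role), then a policy lookup keyed on both subscriber and application (the policy management role).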
That requires policy decision functions, which "typically are in an external box; you go off to an external box and say, 'What should I do with subscriber A, subscriber B?'" said Mike Sheehan, product manager at Juniper Networks.
Why do you want to do that? It starts as a customer service issue. Say you have a subscriber who plays online games. If that subscriber doesn’t get a level of service appropriate to his or her application, there’s no way that sub can win his or her game, Sheehan said.
In a system with subscriber awareness, moving off to an external box is a bottleneck. Alleviating that bottleneck drops opex cost, Sheehan said.
Managing traffic means managing bandwidth, with application awareness serving as an avenue to bandwidth management.
If customer A is a business customer who gets guaranteed service, and customer B is a residential customer who gets best-effort service, you can identify which is which, separate them to keep one from affecting the other, and allocate service appropriately, Sheehan explained. That allows you to avoid putting another 10 GigE link in your core routers when you might not even need it, he said.
Camiant Vice President Randy Fuller backed up the notion that using DPI and policy management in the access network can have side benefits for the core. “If you can manage the traffic peaks in the access network, then you can smooth out growth in the core,” he said.
Peaks in the access network are a big wave in a little pool. Those same peaks represent a shallow wave in the core network, Fuller allowed, but it’s still a wave.
Cisco says the increasing transport of video on the network is leading to immense usage growth at the edge. Although many expected video traffic to increase, the problem is that few anticipated that such a large proportion of that traffic would have to be carried on the upstream, so many network architects built their networks with inadequate upstream capacity.
Most backbone links today run at 10 Gigabit Ethernet. Many service providers are preparing not only for the jump to 40 GigE, but for the next increment after that, 100 GigE. Cisco thinks that with traffic growing, even that will be inadequate, so it has begun commercializing systems that can be scaled to 400 GigE.
Furthermore, content is coming from so many different directions, and coming from such distances – from a super headend or some remote server farm. So Cisco’s latest routers are incorporating terabytes worth of cache, which can be subsequently forwarded through a DOCSIS channel, through a DSLAM, through a mobile network, explained Sanjeev Mervana, a Cisco senior product line manager.
Caching content at the edge, combined with the 400 GigE capacity, doesn't solve any particular bandwidth management problem so much as alleviate whatever congestion there might be between super headends and regional headends. With what today looks like capacity overkill, it may also help avoid potential bottlenecks even 10 years down the line.
The new routers include forward error correction (FEC), to contribute to video integrity. Furthermore, Mervana said, they can even perform ad insertion in cached content.
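The principle behind FEC can be shown with the simplest possible scheme: one XOR parity packet per group lets the receiver rebuild any single lost packet without retransmission. Real video FEC uses stronger codes (Reed-Solomon, for instance), so this is only a toy illustration:

```python
# Toy sketch of parity-based forward error correction: one XOR parity
# packet per group recovers any single lost packet. Real video FEC uses
# stronger codes; this only illustrates the principle.

def xor_parity(packets: list) -> bytes:
    """Compute the bytewise XOR of a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received: list, parity: bytes) -> list:
    """Rebuild the single missing packet (marked None) from the parity."""
    missing = received.index(None)
    rebuilt = bytearray(parity)
    for j, pkt in enumerate(received):
        if j != missing:
            for i, b in enumerate(pkt):
                rebuilt[i] ^= b
    out = list(received)
    out[missing] = bytes(rebuilt)
    return out

group = [b"vid0", b"vid1", b"vid2"]
parity = xor_parity(group)
restored = recover([b"vid0", None, b"vid2"], parity)
print(restored[1])  # b'vid1' -- the lost packet, rebuilt from parity
```

For one-way video distribution this matters because there is no time to ask for a retransmit: the redundancy has to travel with the stream.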
In essence, what Juniper, Cisco and others are doing is helping cable operators gradually converge their networks, to gain operational savings (opex). “Comcast is moving toward IP delivery,” Mervana noted. “They want to compete with the phone incumbents, and the only way to do that is to converge infrastructure.”
Converging networks, by the way, doesn’t only mean converging parallel data and video networks, it can also mean making sure two video networks built separately can work together.
“If you want to connect your Houston region to your Miami region, you have to get common behaviors across both,” Mock said. “That’s difficult, but fortunately Carrier Ethernet is standardized – you have common parameters.”
“Standardized” perhaps, but not fully standard. The Metro Ethernet Forum (MEF) is still working on Carrier Ethernet standards. But once those standards are finalized, “eventually everything will converge on MEF-compliant service,” Mock said.
Most major service providers have built, are building, or are planning a pair of super headends. Only one is strictly needed, but redundancy is critical, so there must be some connection between the two should one fail. The benefit of doing program ingest at just two points rather than at 200 – or even 20 – is obvious.
The super headend approach provides the option for greater control over quality at the point where video is encoded. “You just send it out to the regional headends and tell them, ‘Don’t mess it up, just pass it through,’” noted Eric Conley, CEO of Mixed Signals.
Don’t forget that even if the network is capable of supporting new services, it’s all for naught if you can’t bill for them. “There’s a certain amount of back office work that has to be done to make sure you can provide, bill for and manage new services,” Mock reminded.
An interesting question was brought up recently with an announcement by NEC. The company is doing something it probably could not do unless there was enormous capacity available in the backbone – which NEC appears prepared to help fill up. If you have that enormous capacity on the backbone, why compress video?
The company has begun sales for a device capable of long-distance transmission of uncompressed, high-definition video through up to 100 miles of optical fiber.
Because the video signals are never compressed, compression-related image deterioration is eliminated, NEC notes.
Mota and Sheehan discuss the issue at greater length in a recorded Webcast, originally conducted in November, called “Bringing Intelligence to the Services Edge”.
Another perspective on the issue can be heard in another Webcast, this one sponsored by Alcatel-Lucent, that will be held on Dec. 11. Register for “Enabling Profitable Business Services Evolution”.