The cloud is being sold as the greatest innovation since the food industry started building zip-lock technology into the packages of everything from hot dogs to shredded cheese, but there have been concerns: can cloud computing scale to cover hundreds of thousands of subscribers, and aren’t you just courting trouble by automatically building in too much delay?
It turns out the answers are yep and nope, respectively. The world is safe for the cloud, according to ActiveVideo.
ActiveVideo is among the many companies advocating for moving more processes into the cloud, and has had some success with the likes of Cablevision and most recently Ziggo in The Netherlands. As far as ActiveVideo is concerned, the question about whether cloud solutions can scale was laid to rest long ago.
But potential clients are still concerned about latency. What happens, they ask, if we move to the cloud, and customers start pushing buttons, and nothing happens for half a second, or a full second, or – heaven forfend – six?
So ActiveVideo went out and identified the sources of delay in network systems, measured the latency associated with each, and published a white paper today that reveals what they found: the latency associated with cloud solutions is very, very comfortably within the range of what is commonly acceptable. The word they repeatedly use to describe the latency associated with cloud solutions is “inconsequential.”
And if it isn’t – and it might not be – then the problem isn’t anything inherent in the cloud approach; it’s the way the network is architected. And if that’s the case, there are some relatively simple things a network operator can do to reduce overall latency to acceptable levels, explained Sachin Sathaye, ActiveVideo’s VP, strategy & product marketing, and Jeremy Edmonds, the company’s director, solutions architecture, and the author of the report.
Few standards directly address network latency, which in the context of cloud computing means the time between a viewer pressing a button on the remote and something happening on the TV screen. Generally, few people even notice a delay of 250 milliseconds or less. That’s the latency figure recommended in the Broadband Forum’s TR-126 guidelines.
As a practical matter, viewers tend to be comfortable with delays as long as 500 milliseconds – especially if the length of the delay remains consistent from one operation to the next.
ActiveVideo discovered that latency in cable systems in which the user interface is built into the set-top box (STB) typically fell in a range of 140 ms to 450 ms. Edmonds said a key factor in that range is the set-top box itself; all STBs introduce some latency, but the amount can vary dramatically from one box to the next. Systems that chain multiple technologies in series (switched digital video, EBIF, etc.) create dependencies among those technologies that can add latency, Edmonds noted. These systems can often be rearchitected to remove or alleviate those dependencies.
Other elements that add latency include the remote control (older IR-based remotes tend to be the worst), STBs, and the TV itself – some HDMI systems have filters and buffers that alone can add 50 ms to overall latency.
ActiveVideo determined its cloud solution typically represents about 127 ms of latency. Incorporate that into the latency budget of the typical network – managed or unmanaged – and the total latency of the system ends up within that 140 ms to 450 ms range. Furthermore, latency in non-cloud solutions can sometimes be inconsistent. Put the user interface in the cloud, and the latency experienced by any individual subscriber ends up at a consistent figure.
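To make the budget arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The individual component figures (remote, STB, HDMI) are illustrative assumptions for a hypothetical system, not numbers from the white paper; only the roughly 127 ms cloud contribution and the 250 ms / 500 ms thresholds come from the figures cited above.

    # Illustrative latency budget for a cloud-rendered UI.
    # Component values below are assumptions, not measurements from the report.

    TR126_TARGET_MS = 250     # delay most viewers won't even notice (TR-126)
    COMFORT_LIMIT_MS = 500    # delay viewers still tolerate if it stays consistent

    budget = {
        "remote_control": 50,        # assumed; older IR remotes tend to be the worst
        "stb_processing": 80,        # assumed; varies dramatically from box to box
        "hdmi_tv_buffering": 50,     # filters/buffers in some HDMI chains
        "cloud_ui_round_trip": 127,  # ActiveVideo's cited cloud contribution
    }

    total_ms = sum(budget.values())
    print(f"Total estimated latency: {total_ms} ms")

    if total_ms <= TR126_TARGET_MS:
        print("Within the TR-126 target; most viewers won't notice.")
    elif total_ms <= COMFORT_LIMIT_MS:
        print("Above the TR-126 target but within the 500 ms comfort range.")
    else:
        print("Exceeds 500 ms; the network architecture needs tuning.")

In this hypothetical tally the total lands around 307 ms, above the TR-126 target but inside the 500 ms comfort range – which is exactly the kind of result the white paper describes for systems that have never been tuned for latency.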
“We’ve done everything from legacy networks to bleeding-edge networks,” Edmonds said. “If we’re dealing with a system that has never been tuned for latency, we’ll often end up at the high end, or maybe even out of the range of what’s acceptable, but we’ve always been able to hit the acceptable range without any forklifts.”