Adaptive bit rate transmission only adds to the challenges.
Most service providers currently deliver MPEG transport streams to their customers, and on at least one leg of their journeys, those streams are likely to travel on a packet network. As streaming services gain popularity, that’s guaranteed to happen more often. New standards are now available that operators can use to ensure their networks properly handle the traffic.
Today, MPEG video is typically carried over a UDP/IP/Ethernet protocol stack in headend, core and hub site networks. While these network protocols and hardware were originally designed for data applications, they have met with varying levels of success in delivering video and audio streams to subscriber last-mile networks.
The delivery of data files and the delivery of video are very different. Older network components were successfully deployed in data networks because upper-layer protocol automatic repeat request (ARQ) features were able to handle bit and packet errors during data file transfers. The end user still received and opened the desired file (a text document or an MP3 file, for example), even if it took a little longer for delivery due to the need for retransmissions.
Payload types directly influence the ideal design for network equipment deployed in the network. Data networks, in which packets arrive randomly in time and traffic is delay-insensitive, are well served by economical network switches and routers with small internal queue depths. Smaller queues cause occasional packet loss from queue overflow when too many packets arrive on too many ports simultaneously. However, the upper-layer protocols simply recover from the errors, and such faults are usually transparent to users.
But when it comes to streaming a video program over the UDP/IP/Ethernet stack, packet loss is intolerable, since almost any lost video packet results in a viewable artifact.
Compressed MPEG video and audio payload streams, with their sustained, regularly periodic packet rates, frequently align in time across streams, demanding deeper queues to prevent loss. And with highly compressed media streams, large-screen HD video and multichannel audio, any loss is easily perceptible – the perfect storm.
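The effect of queue depth on periodic media streams can be sketched with a toy, time-slotted simulation. All parameters here (packet period, tick counts, stream counts) are illustrative assumptions, not measurements from any real switch:

```python
import random

def simulate_queue(num_streams, queue_depth, ticks=10_000, seed=1):
    """Toy model: each stream emits one packet every PERIOD ticks at a
    random phase; the switch forwards one packet per tick. Returns the
    number of packets dropped because the output queue was full."""
    PERIOD = 8                       # assumed packets-per-period per stream
    rng = random.Random(seed)
    phases = [rng.randrange(PERIOD) for _ in range(num_streams)]
    queue = 0
    dropped = 0
    for t in range(ticks):
        arrivals = sum(1 for p in phases if t % PERIOD == p)
        for _ in range(arrivals):
            if queue < queue_depth:
                queue += 1
            else:
                dropped += 1          # overflow: a viewable video artifact
        if queue:
            queue -= 1                # serve one packet per tick
    return queue_depth and dropped or dropped

shallow = simulate_queue(num_streams=8, queue_depth=1)
deep = simulate_queue(num_streams=8, queue_depth=64)
```

When several periodic streams happen to share a phase, their packets arrive in bursts; a deep queue absorbs the burst, while a shallow one drops packets even though the average load is identical in both runs.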
Furthermore, while attention has been given to testing network components for data transfers within various standards organizations, until recently there have been no standards aimed at testing IP devices for video delivery.
To address how to evaluate network devices for video streaming, the SCTE has recently published ANSI/SCTE 175 2011 – a Recommended Practice that defines how providers and manufacturers can test network devices to determine how well a particular device performs for streaming video and audio multimedia. By using multimedia test streams and analyzing for delivery loss and jitter, the device under test is characterized for its ability to deliver high streaming availability under actual service provider operating conditions, including in-service firmware updates, module and system resets, failover events, and system expansions and upgrades.
Providers no longer need to guess how well an offered product will perform in media streaming networks or depend on data-only-oriented performance benchmarks. The tests are simple and straightforward and can be implemented by a service provider or vendor with a minimum of effort.
The results from an extended, multi-day test run are presented as per-stream availability (consistent with ANSI/SCTE 168-6 2010), allowing the provider to assess whether the device can achieve the needed four or five 9s (99.99 percent or 99.999 percent) availability that minimizes subscriber trouble calls. ANSI/SCTE 175 2011 provides the performance tests needed to qualify today’s cutting-edge video streaming network equipment for video and audio multimedia streaming before deployment.
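The practical meaning of those availability targets is easy to compute. A back-of-the-envelope conversion from an availability percentage to allowed downtime per year:

```python
# Convert an availability target into the downtime budget it implies.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def allowed_downtime_minutes(availability_pct):
    """Minutes of outage per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.99, 99.999):
    print(f"{pct}% -> {allowed_downtime_minutes(pct):.2f} min/year")
# 99.99%  allows about 52.6 minutes of downtime per year;
# 99.999% allows about 5.3 minutes per year.
```

Four 9s leaves well under an hour per year for every firmware update, reset and failover event combined, which is why SCTE 175 exercises devices under exactly those in-service conditions.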
The standardized tests and procedures outlined in ANSI/SCTE 175 2011 are key to ensuring a target level of performance in advanced digital video networks. It has taken some years to formalize the testing process needed for network devices, and, of course, it will take even longer until devices are characterized according to the procedures.
But the march of video technologies isn’t waiting for the dust to settle. Rather, it is already moving to its next frontier: three-screen delivery utilizing Internet protocols. Explosive growth of Internet-delivered video is rapidly pushing providers to adopt technologies for streaming MPEG video through HTTP. What new demands and challenges will HTTP video streaming present, and what will this mean for network equipment performance and selection?
NEW CHALLENGES IN DELIVERING OTT VIDEO
Today, video accounts for 40 percent of consumer Internet traffic. By 2015, consumer Internet traffic will have tripled, and 60 percent of it will be video, making Internet video traffic roughly four and a half times larger than it is today. And video isn’t stopping there: Mobile video (video delivered to video-enabled mobile devices like smartphones and tablets) is experiencing explosive growth. Over the last three years, mobile video viewers have more than doubled, and mobile video traffic will be about 40 times larger in 2015 than it is today.
Streaming MPEG-compressed video with HTTP over the TCP/IP/Ethernet protocol stack – using Apple’s HLS, Microsoft’s Smooth Streaming or the emerging MPEG DASH schemes, for example – changes the demands imposed on the network. Video over HTTP uses encoders that process video streams into small files, or chunks, each typically holding two to 10 seconds of displayed video.
These chunks are then published on origin servers (typically in a content delivery network) and distributed to caching servers, from which the client application requests each chunk in succession for playback.
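As an illustration, the media playlist for a single HLS variant enumerates those chunks in order; the file names and durations below are hypothetical:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
chunk_000.ts
#EXTINF:6.0,
chunk_001.ts
#EXTINF:6.0,
chunk_002.ts
#EXT-X-ENDLIST
```

The client fetches this playlist over plain HTTP, then requests each listed chunk in turn, which is what lets ordinary web servers and CDN caches carry the stream.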
Unlike the UDP/IP protocols used by most providers today for streaming video, TCP guarantees the delivery of all packets through retransmission, eliminating the need for a lossless, low-jitter network path. Furthermore, because not all client devices have the same processing power and available network bandwidth, HTTP streaming is adaptive. If a client viewing device cannot get chunks fast enough for a high-quality, high-bit-rate stream, it dynamically and seamlessly downshifts, requesting a lower-quality, lower-bit-rate stream to display uninterrupted video.
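The downshift decision can be sketched as a simple rate-ladder selection. The bit rate ladder, the safety margin and the throughput figures below are illustrative assumptions; real players use more elaborate heuristics:

```python
def pick_variant(ladder_bps, measured_bps, safety=0.8):
    """Choose the highest-bit-rate variant that fits within a conservative
    fraction of the measured throughput; fall back to the lowest variant
    if none fits. ladder_bps is sorted ascending."""
    budget = measured_bps * safety
    candidates = [rate for rate in ladder_bps if rate <= budget]
    return candidates[-1] if candidates else ladder_bps[0]

# Hypothetical four-variant ladder: 400 kbps to 6 Mbps.
ladder = [400_000, 1_200_000, 3_000_000, 6_000_000]
pick_variant(ladder, 5_000_000)   # headroom available -> 3 Mbps variant
pick_variant(ladder, 1_000_000)   # congested -> downshift to 400 kbps
```

Note the feedback loop this creates: as soon as more bandwidth appears, the same logic upshifts every client, which is exactly the capacity-planning trap described below.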
While there are many new system concerns that can affect performance, network performance again plays a key role in viewer satisfaction. During program viewing, the client application must initiate a timely request for the next chunk before the current chunk has completed playout.
Next, the network must deliver the request, and the server must issue a timely response. The network must then deliver the chunk fast enough to the client. Finally, the client must successfully queue the chunk to permit seamless playout to the viewer. This communication between network components and client devices, coupled with the video stream adapting to bandwidth and client device limitations, can be extremely challenging to troubleshoot when something goes wrong unless it is carefully monitored and managed.
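The timing constraint running through these steps reduces to a back-of-the-envelope check: the next chunk must finish downloading before the playout buffer drains. The chunk sizes and throughput figures below are illustrative assumptions:

```python
def playout_stalls(buffer_s, chunk_bits, throughput_bps):
    """True if downloading the next chunk takes longer than the seconds
    of video currently buffered, i.e. the viewer sees a rebuffering stall."""
    download_s = chunk_bits / throughput_bps
    return download_s > buffer_s

# 6 seconds buffered; a 6-second chunk at 3 Mbps is 18 Mbit of payload.
playout_stalls(6.0, 18_000_000, 4_000_000)   # 4.5 s download: no stall
playout_stalls(6.0, 18_000_000, 2_000_000)   # 9.0 s download: stall
```

Every delay in the chain – request propagation, server response time, network transfer – eats into that buffer, which is why request/response latency matters as much as raw bandwidth in HTTP streaming.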
The “pull” nature of this process requires the client to initiate the file transfers. This puts new focus on request/response times in which the client, network and server contribute to video playout. Server and link availability, network congestion, and available bandwidth are all critical to avoid quality issues. But solving these problems is no easy task.
For example, accurately estimating bandwidth growth trends is complicated by the nature of adaptive bit rate operation: If network bandwidth is not sufficient to support the number of clients demanding an asset, clients will adapt and request a lower-bit-rate, lower-quality stream variant commensurate with available bandwidth.
Unless a network operations manager understands the dynamics of client type, variant type and bandwidth utilization – insight gained through adequate system management tools – investing in more available bandwidth can be a shot in the dark. The new investment could simply lead to continued congestion as clients automatically move to higher-bit-rate variants when new bandwidth becomes available. Bandwidth demands would effectively grow in step with available network bandwidth, leaving the service provider one step behind customer demand.
Targeted HTTP streaming-specific metrics, such as VeriStream, which characterizes per-stream flow QoS performance in terms of the dynamic demands of the client player, provide critical insight into network and system performance. Combined with synchronized player and server metrics, a comprehensive, end-to-end quality assurance solution provides the needed observability into the complex interactions in order to successfully and cost-effectively operate a provider-scale HTTP streaming system.
High-program availability performance tests, as described in SCTE 175 2011, can provide a good indication of how devices will perform using HTTP streaming traffic, while new characterization tests may be needed for the new client and server functions introduced with HTTP streaming.
High-availability network device performance is required for both UDP and HTTP network streaming, and providers today are rolling out HTTP streaming alongside UDP streaming. While client and server performance are important additional factors in achieving satisfactory HTTP streaming, UDP performance of network devices per ANSI/SCTE 175 2011 is still required for legacy systems and is a good place to start when choosing devices and configurations to support high-availability HTTP streaming.
Careful attention to QoS policy enforcement will be necessary to maintain UDP network performance amid the rapid growth of HTTP streaming. Effective metrics are crucial in giving the service provider insight into how well deployed networks and systems are working. Target performance of four to five 9s can only be achieved with accurate, reliable measurements of current and evolving availability. Only through a commitment to continuous improvement, including the measure-improve-measure cycle, can a provider get its bearings and chart the best course forward to HTTP streaming.