Since its humble beginning more than 80 years ago in Farnsworth’s lab, video has grown to be a ubiquitous part of life. Devices like HDTVs, smartphones, and tablets coupled with services like YouTube, Netflix, and Facebook have put video near the center of most people’s lives. Video has important commercial applications too, including teleconferencing, security surveillance, and machine vision.
Accumulated progress in two key areas enabled video to become so prevalent: digital compression algorithms and integrated circuits (ICs). Advances in IC processing power made practical the complex compression algorithms needed to carry video over common networks and store it on common devices. Today, digital video can be economically transmitted across networks such as cable and Ethernet, and stored in memory devices such as disk drives and flash.
Raw IC processing power comes in multiple forms, including General Purpose CPUs (GP-CPUs), Systems on a Chip (SoCs), custom ASICs, and FPGAs. Not surprisingly, all of these approaches can be used to compress and decompress video and to build video processing systems. The right choice depends on the overall system requirements.
As an engineering services provider specializing in the development of embedded products related to digital media processing, Cardinal Peak has developed a large number of video encoders for clients in diverse markets including broadcast video distribution, enterprise video, security/CCTV, and defense and law enforcement. In this article we discuss the advantages and tradeoffs associated with designs based around the four approaches mentioned above: GP-CPUs, SoCs, custom ASICs and FPGAs.
General Purpose CPU (GP-CPU) solutions
We define a GP-CPU architecture as one in which video processing and compression occur on the CPU itself. One advantage of this approach is that the software development tools for GP-CPUs are easy to use and familiar to developers. As one example, a PC with a frame capture card can capture video, compress it, and stream it onto a network in real time. Small, “low-power,” inexpensive single-board computers are widely available. Using such components, developers can design video processing systems that don’t appear to the end user to be a PC. Sometimes a video encoding/processing system can be assembled without designing any hardware at all.
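To make the capture-compress-stream example concrete, here is a minimal sketch of how such a pipeline might be driven from Python by wrapping ffmpeg. This assumes ffmpeg is installed on the system; the device path, destination address, and encoder settings are illustrative assumptions, not a definitive configuration.

```python
# Sketch of a GP-CPU capture -> encode -> stream pipeline built around ffmpeg.
# Assumes a V4L2 capture device (e.g., a frame capture card) and an RTP
# destination; both paths below are placeholders for illustration.

def build_stream_cmd(device="/dev/video0", dest="rtp://192.0.2.10:5004"):
    """Assemble an ffmpeg command that captures raw video, compresses it
    with the x264 software encoder on the CPU, and streams it over RTP."""
    return [
        "ffmpeg",
        "-f", "v4l2", "-i", device,   # capture from the video device
        "-c:v", "libx264",            # software H.264 encode on the GP-CPU
        "-preset", "ultrafast",       # trade compression efficiency for speed
        "-tune", "zerolatency",       # minimize encoder-side buffering
        "-f", "rtp", dest,            # packetize and stream to the network
    ]

if __name__ == "__main__":
    import subprocess
    cmd = build_stream_cmd()
    print(" ".join(cmd))
    # subprocess.run(cmd)  # uncomment on a machine with a capture device
```

The "ultrafast"/"zerolatency" settings reflect the tradeoff discussed below: a GP-CPU doing real-time encoding typically sacrifices compression quality to keep up with the frame rate.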
The more diverse the tasks that must be performed simultaneously with encoding, the more compelling a GP-CPU approach can be. The wide availability of third-party software to implement the other functions a video processing system may require (such as web servers, databases, etc.) makes GP-CPU systems attractive and flexible when the device must do more than just compress video. The time to market for GP-CPU solutions can also be quite fast.
So what are the disadvantages? An important one is cost: PC-based systems tend to be more expensive than custom-designed hardware solutions. Furthermore, even low-power PC solutions consume more power than the other alternatives. Finally, when real-time encoding is required, these systems tend to produce lower image quality. One reason is the limited range over which they can search for motion compensation; another is their general inability to support 1080p60 compression. Motion-compensated prediction is critical to the performance of algorithms such as H.264 and HEVC, and it is an extremely demanding computational task.
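The computational demand of motion-compensated prediction is easy to see in a sketch. The pure-Python full-search block matcher below (a simplification; real encoders use fast search strategies and SIMD or dedicated hardware) computes a sum-of-absolute-differences cost for every candidate offset, so the work per block grows with the square of the search range:

```python
# Full-search block matching: for one block of the current frame, test every
# offset within +/- search_range in the reference frame and keep the offset
# (motion vector) with the lowest sum of absolute differences (SAD).

def sad(ref, cur, rx, ry, cx, cy, size):
    """SAD between the current-frame block at (cx, cy) and the
    reference-frame candidate block at (rx, ry)."""
    total = 0
    for dy in range(size):
        for dx in range(size):
            total += abs(cur[cy + dy][cx + dx] - ref[ry + dy][rx + dx])
    return total

def full_search(ref, cur, cx, cy, size, search_range):
    """Exhaustive search: (2*search_range + 1)^2 SAD evaluations per block,
    which is why encoders constrained to a small search range lose quality."""
    best, best_mv = None, (0, 0)
    for oy in range(-search_range, search_range + 1):
        for ox in range(-search_range, search_range + 1):
            rx, ry = cx + ox, cy + oy
            if rx < 0 or ry < 0 or ry + size > len(ref) or rx + size > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            cost = sad(ref, cur, rx, ry, cx, cy, size)
            if best is None or cost < best:
                best, best_mv = cost, (ox, oy)
    return best_mv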
System on a Chip (SoC) solutions
SoC solutions allow all the components of a complete video processing system to be assembled on one chip at a compelling price. The SoC sub-systems are flexible enough to allow a variety of image processing functions to be implemented, including encoding. As one example, TI’s 8168 has multiple video input interfaces. It also contains dedicated hardware compression engines, a DSP, and a general purpose ARM processor. Its 3-D GPU allows sophisticated graphical interfaces. APIs are available to simplify operations like picture-in-picture processing, alpha blending of two images, etc. The ARM core presents a familiar Linux programming environment to developers, and drivers are available for most of the sub-systems. Overall power consumption is lower than that of a GP-CPU approach, making SoC solutions attractive for mobile applications.
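Operations like the alpha blending mentioned above run on dedicated SoC hardware through those APIs, but the underlying per-pixel math is simple. Here is a minimal sketch of the standard blend equation, out = alpha*fg + (1-alpha)*bg, over single-channel 0–255 pixel arrays (a simplification of what the hardware does per channel):

```python
def alpha_blend(fg, bg, alpha):
    """Blend a foreground image over a background image.
    fg and bg are equal-sized 2-D lists of 0-255 pixel values;
    alpha is the foreground opacity, 0.0 (transparent) to 1.0 (opaque)."""
    return [
        [round(alpha * f + (1 - alpha) * b) for f, b in zip(frow, brow)]
        for frow, brow in zip(fg, bg)
    ]
```

An SoC performs this multiply-accumulate for every pixel of every frame in fixed-function hardware, which is why it costs essentially no CPU time there, while a GP-CPU would spend cycles on it.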
So what are the disadvantages? Developers will find that the software base on which they are building has a steeper learning curve, and more customization is required than for a GP-CPU system. With SoCs, the need to design hardware is unavoidable, implying a longer time to market and higher NRE.
Custom ASIC solutions
Custom ASICs can be viewed as SoCs highly optimized for a particular video application. This approach promises the highest-quality compression at the lowest cost. Ambarella, for example, has focused on camera applications, while Magnum Semiconductor has focused on broadcast video encoding.
Custom ASICs offer special support for features that are specific to the market niche they serve; this can save developer time when putting together a complete system solution. A chip targeted at cameras may support image stabilization, for example, while one targeted at broadcast encoding may have special features to make it easy to insert closed caption information into the compressed stream.
Video quality is often high because dedicated hardware for operations like motion compensation can be included on the chip. The power consumption is usually quite attractive.
The disadvantages? From a developer’s perspective, it may be hard to get the support you need from the company selling the ASIC. Often, unless you can make a compelling case that you will buy large quantities, the supplier will not be willing to sell to you. Also, the custom nature of the hardware may require driver development if the hardware’s full functionality is not supported by the ASIC vendor’s software. This can add expense and complexity to the overall design.
FPGA solutions
FPGAs are attractive when a custom solution is needed for a lower-volume market niche. Although FPGA solutions are generally neither the cheapest nor the lowest-power approach, they allow the exact features needed to be quickly assembled into a hardware solution. One attractive feature of FPGAs is that many low-latency video encoding IP cores are available. If your system needs FPGA processing for other reasons anyway, this approach should definitely be considered.
It’s clear that the use of video in both business and consumer applications will continue to grow, so engineers will continue to be challenged with selecting the right option for each specific application from an ever-growing array of hardware and software choices.