Cloud data storage and processing is now a vital element in the operation of many businesses, not just those that provide the more obvious services of online shopping, audio/video distribution, and social media and gaming platforms. Other commercial and industrial operations, such as retail outlets and manufacturing plants, increasingly rely on data uploaded and stored remotely rather than locally. There are many reasons for this, including: consolidating data from across multiple sites to enable more effective analysis, reducing the dependence on local computing facilities and the need for trained IT staff, ensuring data security both in terms of backup and avoiding unauthorized access, and providing a readily scalable solution that can grow with the business without the otherwise inevitable step function of major capital expenditure.
In addition to businesses moving away from having their own data centers to using a cloud-based infrastructure, we are now seeing new demands for cloud services as the Internet of Things (IoT) starts to become a reality. This connection of “Things,” such as sensors and controls, to the Internet underpins the “Smart” revolution that is expected to increasingly automate everything in our lives—from the lighting, heating, and security of our homes and offices, through to our transportation networks.
It is not surprising, then, that according to industry analysts IDC, the current double-digit growth in spending on data centers and related cloud computing hardware is forecast to continue for a number of years yet.
However, this growth comes at a cost, and not solely the cost of the equipment itself: the energy consumed in powering that equipment must also be taken into account. Indeed, several sources report that, over the typical three-year life of a server, the cost of powering it, including the cooling systems needed to maintain safe operating temperatures, can exceed its purchase price. While efforts have been made to minimize cooling costs, by siting data centers in cooler climates and by establishing higher maximum equipment operating temperatures, there is no escaping the trend that will see the peak power consumption of a typical server board rise from 2 kW or 3 kW today to 5 kW or more in the near future as processor performance continues to increase.
Addressing this demand requires a power supply solution that maximizes efficiency at every stage. Traditionally this is achieved using a “distributed power architecture” that is not unlike the principle of utility electricity distribution where higher voltages at lower currents minimize the losses due to conductor resistance, which are proportional to current and distance. This architecture typically comprises a front-end AC-DC converter that distributes a 48 V DC supply to each bay in a server rack. Then, an intermediate bus converter (IBC) converts this down to 12 V for distribution to each server board. Finally, on the board, multiple point-of-load (POL) converters, positioned close to the major power-consuming components, supply the final low voltages these devices require.
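The benefit of distributing power at a higher voltage follows directly from the I²R relationship mentioned above: for a fixed power delivered over a fixed path resistance, quadrupling the voltage cuts the current by four and the conduction loss by sixteen. The sketch below illustrates this with arbitrary example figures (240 W of load over a 10 mΩ distribution path); the numbers are illustrative, not taken from any specific server design.

```python
def distribution_loss(power_w: float, bus_voltage_v: float,
                      path_resistance_ohm: float) -> float:
    """Conduction (I^2 * R) loss for delivering power_w at bus_voltage_v
    over a distribution path with the given resistance."""
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * path_resistance_ohm

# Example: 240 W delivered over a 10 mOhm path.
loss_12v = distribution_loss(240, 12, 0.01)  # 20 A of bus current
loss_48v = distribution_loss(240, 48, 0.01)  # only 5 A of bus current

print(f"12 V bus: {loss_12v:.2f} W lost, 48 V bus: {loss_48v:.2f} W lost")
```

The 48 V bus loses one-sixteenth the power of the 12 V bus for the same load, which is why the rack-level distribution runs at 48 V and the step-down to 12 V happens as close to the server boards as practical.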
The distributed power architecture has served (no pun intended) the industry well, but the introduction of digitally controllable power supplies enables some notable advances that provide the means to achieve even greater efficiencies. By taking advantage of the controllability offered by PMBus, an industry-standard protocol for communicating with digital supplies, system designers have been able to develop software algorithms that enable power systems to respond in real time to changing load conditions. For example, the technique of Dynamic Bus Voltage (DBV) optimization can adjust the intermediate bus voltage in response to high power demand by outputting a higher voltage from the IBC stage in order to reduce the output current and hence minimize distribution losses.
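A minimal sketch of the DBV idea is shown below. The VOUT_COMMAND code (0x21) and the LINEAR16 voltage encoding are defined in the PMBus specification, but the voltage levels, the load threshold, and the fixed exponent here are illustrative assumptions, and the actual SMBus write to the IBC is left as a comment since the transport driver is system-specific.

```python
VOUT_COMMAND = 0x21  # PMBus command code for setting a supply's output voltage

def select_bus_voltage(load_current_a: float,
                       low_v: float = 9.6,
                       high_v: float = 12.0,
                       threshold_a: float = 30.0) -> float:
    """Dynamic Bus Voltage policy (illustrative thresholds): raise the
    intermediate bus under heavy load to cut I^2*R distribution losses,
    and lower it under light load to improve downstream efficiency."""
    return high_v if load_current_a >= threshold_a else low_v

def to_linear16(voltage_v: float, exponent: int = -9) -> int:
    """Encode a voltage as a PMBus LINEAR16 mantissa. In practice the
    exponent is read from the device's VOUT_MODE register; -9 is a
    commonly seen value, assumed here for illustration."""
    return round(voltage_v * 2 ** -exponent)

# e.g. on a measured load of 42 A:
target = select_bus_voltage(42.0)
word = to_linear16(target)
# smbus.write_word(ibc_address, VOUT_COMMAND, word)  # system-specific driver
```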
A newer technique, Adaptive Voltage Scaling (AVS), takes advantage of the latest feature offered by high-performance microprocessors and FPGAs, which allows their supply voltage and clock frequency to adapt to processing demands and also compensate for the effects of silicon process variations and temperature. To support AVS, the PMBus specification has recently been revised to define the AVSBus, which allows POL converters to respond to AVS requests from an attached processor.
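The essence of AVS is a table of voltage/frequency operating points: the processor asks its POL converter, over AVSBus, for the lowest core voltage that still supports the clock speed it currently needs. The sketch below captures that lookup; the operating points are hypothetical, since real tables come from per-device silicon characterization.

```python
# Hypothetical voltage/frequency operating points (real values come from
# silicon characterization and vary per device and temperature).
OPERATING_POINTS = [  # (max_freq_mhz, core_voltage_v), ascending
    (600, 0.80),
    (900, 0.90),
    (1200, 1.00),
]

def avs_voltage_for(freq_mhz: float) -> float:
    """Return the lowest core voltage that supports the requested clock,
    as a processor might request from its POL converter over AVSBus."""
    for max_freq, volts in OPERATING_POINTS:
        if freq_mhz <= max_freq:
            return volts
    raise ValueError("requested frequency beyond supported range")
```

Dropping from 1200 MHz to 600 MHz in this (hypothetical) table lowers the core rail from 1.00 V to 0.80 V, and since dynamic power scales roughly with frequency times voltage squared, the saving compounds.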
These techniques and others like them mark the beginning of Software-Defined Power (SDP) architectures. Such solutions depend on the ready availability of PMBus-compatible digitally controllable IBC and POL supplies but, while these are now appearing in the market from a number of vendors, there is an issue concerning true “plug and play” compatibility between supplies that appear to offer similar specifications. This is because they can behave differently when sent the same PMBus command.
Clearly this situation is not conducive to the wider-scale adoption of SDP, and companies have therefore come together with the aim of specifying standards for the interoperability of IBC and POL supplies.
The growth in demand for cloud services is not going to go away, but neither can we ignore the need to power the cloud more efficiently. Software-Defined Power offers a solution, but to fully realize its promise and achieve the scalability data center operators need, the industry needs to adopt standards that ensure true compatibility between products from different vendors.