by Eric Persson, Infineon Technologies
To understand why gallium-nitride components shrink the size of power supplies, examine how they dissipate energy at high frequencies.
Silicon power FETs have come a long way over the past 35 years. Modern high-voltage superjunction FETs, developed over the past 15 years, even exceed what was once thought to be the “theoretical limit of performance” for silicon. As a result, power supplies have become more efficient, with many suppliers offering 96% peak efficiency today. Power density has benefited as well: Today’s high-performance server power supplies achieve power densities of 40 W/in.³ or better.
Yet the industry goal is to significantly boost this power density for the future. You might wonder how this can be accomplished.
First, it is important to look at what is limiting density right now. Many power supply functions occupy a relatively fixed volume for given power supply requirements. For example, the size of the dc bus capacitor is typically dictated by the holdup-time requirement. But the EMI (electromagnetic interference) filters, PFC (power factor correction) and dc-to-dc stages, along with their thermal management, represent more than half the power supply volume. These functions can potentially be much smaller if the operating frequency can significantly rise without the corresponding penalty of increased switching loss.
So, why not simply increase the operating frequency of existing power supply topologies to improve the density? Often, the limiting factor is the power semiconductors in both the PFC and dc-to-dc circuits. These power transistors and rectifiers operate in switching modes that have frequency-dependent switching losses. Thus, boosting the switching frequency also increases the switching loss in the power semiconductors. This is exactly the opposite of what is needed: If the power supply density is to rise, the losses in the power supply will have to drop.
Density and efficiency
Many of the common methods of improving efficiency do not necessarily improve power density. In fact, oftentimes the opposite is true. For example, lowering the operating frequency of a power supply reduces the frequency-dependent switching losses. But the lower frequency also necessitates use of bigger magnetics. Thus the tradeoff: The highest-efficiency power supply will have the lowest density for a given design approach.
But improved efficiency (reduced power loss) is necessary to improve density for two reasons. First, lower losses correspondingly reduce the size of heat sinks, fans and other thermal-management devices.
Second, for a given maximum internal temperature limit, the internal power dissipated must drop as the physical volume of the power supply shrinks. Form factors with a more favorable surface-area-to-volume ratio can help, but the overall trend is toward less ability to dissipate power as volume shrinks. So efficiency and density are linked: Smaller power supply volume requires a proportional reduction in losses.
It is possible to boost frequency without additional switching loss through use of a different control strategy. Regardless of the transistor technology, zero-voltage switching (ZVS) is one key to minimizing switching loss and enabling use of higher frequencies. The majority of power supply topologies are based on the concept of using transistors to switch a voltage source into an inductive load. The goal of ZVS is to use energy stored in the parasitic capacitance of the switching device along with inductor current to losslessly commutate the switch capacitance. This is instead of hard-switching where the transistor forces commutation and dissipates the energy stored in the device capacitance.
Hard switching is common in the PFC stage of a power supply. Consider a typical boost converter stage consisting of an inductor, diode, bus capacitor and transistor acting as a switch. Also consider the switching waveform at the moment the transistor turns on. Initially, current flowing through the inductor charges the bus capacitor, and the switch node voltage (Vsw) therefore equals the bus voltage. When the switching transistor turns on, it begins conducting current. Transistor current ramps up to the level of the inductor current (IL).
But the voltage at the switch node, Vsw, initially doesn’t drop. If the diode were perfect, with no reverse-recovery charge, Vsw would begin to fall toward zero immediately. But if the diode is a PN-junction diode (or the body diode of a synchronous rectifier), it cannot immediately stop conducting, so the current in the transistor continues to ramp up, as does the corresponding reverse current in the diode.
Current continues to ramp up until the diode recovers its ability to block voltage and stops conducting. At this point, there is a significant reverse-recovery current on top of the steady-state inductor current. The transistor conducts this total current while still supporting the full bus voltage across its drain-source terminals. The result is a high peak power dissipated in the transistor during the turn-on interval.
The power dissipation, P(t), curve is the product of device current times voltage. It peaks at the same instant as the inductor current. Finally, the current through the transistor discharges the capacitance of the switch node and drives the voltage to zero, thus dissipating the energy stored in the diode capacitance and the transistor’s own self-capacitance.
To summarize, in a hard-switched turn-on, there are three main energy loss mechanisms each cycle:
1. Commutation or crossover loss—proportional to current rise time; faster turn-on means lower loss
2. Reverse-recovery loss (does not apply for Schottky diodes)—depends mostly on the diode characteristic; a diode with a large Qrr, like the body diode of a superjunction FET, can completely dominate the turn-on loss
3. Eoss loss—the energy stored in the capacitance of the switch node (including the switch itself, the diode and parasitic capacitance in the inductor) that is dissipatively discharged each time the switch turns on.
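These three mechanisms can be tallied per cycle with the usual first-order approximations: a triangular voltage-current overlap for crossover loss, Err ≈ Qrr × Vbus for reverse recovery, plus Eoss. The sketch below uses illustrative component values, not data for any specific device.

```python
# First-order estimate of the three hard-switched turn-on loss
# mechanisms per cycle. All component values are illustrative
# assumptions, not data for a specific device.

V_BUS = 400.0     # bus voltage (V)
I_L = 10.0        # inductor current at turn-on (A)
T_CROSS = 20e-9   # current/voltage crossover time (s)
Q_RR = 0.5e-6     # diode reverse-recovery charge (C)
E_OSS = 8e-6      # energy stored in switch-node capacitance (J)
F_SW = 100e3      # switching frequency (Hz)

e_crossover = 0.5 * V_BUS * I_L * T_CROSS  # triangular V-I overlap
e_rr = Q_RR * V_BUS                        # recovery charge swept at full bus voltage
e_total = e_crossover + e_rr + E_OSS       # total energy lost per turn-on

p_loss = e_total * F_SW                    # average turn-on power loss
print(f"per-cycle: {e_total * 1e6:.0f} uJ  ->  {p_loss:.1f} W at {F_SW / 1e3:.0f} kHz")
```

Even with these modest assumed values, the reverse-recovery term dominates the total, which is why the diode characteristic matters so much at turn-on.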
As a simple example of ZVS, consider the same boost PFC circuit as in the previous example, but with a different control strategy. The previous example operated in continuous conduction mode (CCM) with the current through the inductor never falling to zero. Now suppose the current is allowed to reach zero each cycle. Of course this means the ripple current of the PFC stage has a much higher magnitude (so there is both more rms current and corresponding conduction loss). But allowing the inductor to fully discharge sets up the condition for lossless commutation of the diode—essentially free ZVS.
When the inductor current reaches zero, the equivalent circuit is an LC, but the capacitance is not the big bulk capacitor on the dc bus—it is blocked by the diode. Instead, the total capacitance is the combined output capacitance of the switch plus the parasitic capacitance of the diode and inductor. The initial condition of the circuit is that C is charged to the bus voltage. The capacitance will resonate, and its voltage will ring down to negative bus voltage. But the switch will clamp the voltage as it crosses zero volts. When the switch does turn on again, the voltage across it is already zero, thus eliminating the turn-on switching loss. This mode of operating the PFC circuit is known as critical conduction mode, or CrCM.
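Under the simplified picture above (an ideal LC resonance that would swing the node symmetrically toward negative bus voltage), the switch node crosses zero one quarter of a resonant period after the inductor current reaches zero. A minimal sketch of that timing, with hypothetical L and C values:

```python
import math

# Time for the switch node to ring down to zero in CrCM, modeled as an
# ideal LC resonance that swings symmetrically about 0 V (the simplified
# picture above). L and C_EQ are hypothetical values.
L = 120e-6       # boost inductance (H)
C_EQ = 150e-12   # switch Coss plus diode and inductor parasitics (F)

t_period = 2.0 * math.pi * math.sqrt(L * C_EQ)  # full resonant period
t_zero = t_period / 4.0  # node crosses 0 V a quarter period after I_L = 0

print(f"resonant period: {t_period * 1e9:.0f} ns, ring-down to 0 V: {t_zero * 1e9:.0f} ns")
```

Because the parasitic capacitance is small, this ring-down takes only a few hundred nanoseconds, a short window in which the controller must fire the switch to capture the free ZVS transition.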
The concept of using small amounts of energy stored in the inductor or in device capacitance is common practice for enabling ZVS in a variety of topologies and control strategies. The LLC converter is a good example of a dc-to-dc stage that uses resonance to realize ZVS in the back half of a power supply.
As previously mentioned, ZVS can work with any type of switch, but here is where the big difference between conventional silicon FETs and GaN HEMTs becomes important: If the effective capacitance of the switch is made much smaller, the time required to make the ZVS transition also drops correspondingly. Or alternatively, the time can be made the same, but the amount of stored charge needed can be reduced correspondingly.
It is desirable to operate at a higher frequency to realize higher energy density. This is where the shorter transition time becomes important. Normally in the LLC circuit, the ZVS transition only takes a small percentage of the total resonant period—for example, it may be 330 nsec, about 5% of the period, for one cycle of a typical 150-kHz operating frequency. But if the frequency rises 4x to 600 kHz, the 330 nsec (per edge) becomes 20% of the period.
The transition time (also known as dead time) is “non-productive” power transferring time—it is simply time spent waiting for the lossless ZVS transition. This means that as the dead time becomes a larger percentage of the total period, the productive portion of the resonant period is proportionally smaller, and this drives the rms current much higher due to the higher peak-to-average ratio.
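The arithmetic behind that scaling is simple. Using the 330-nsec per-edge figure from the example above, with two ZVS edges per resonant period:

```python
# Fraction of the switching period consumed by a fixed 330-ns dead time
# (the per-edge figure from the example above; two ZVS edges per period).
T_DEAD = 330e-9  # ZVS transition (dead) time per edge (s)
EDGES = 2        # two ZVS transitions per resonant period

for f_sw in (150e3, 600e3):
    frac_per_edge = T_DEAD * f_sw
    frac_total = EDGES * frac_per_edge
    print(f"{f_sw / 1e3:.0f} kHz: {frac_per_edge:.0%} per edge, "
          f"{frac_total:.0%} of the period non-productive")
```

At 150 kHz the dead time is a rounding error; at 600 kHz, with both edges counted, it consumes roughly 40% of every period, leaving far less time to transfer the same energy.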
In other words, to boost frequency significantly in a ZVS circuit, the circulating energy needed to realize ZVS must drop proportionally. Otherwise, the penalty of higher rms current on both the primary and secondary sides will kill the efficiency of the power supply, making it impossible to improve density.
But the relationship between capacitance, charge and energy in modern high-voltage (superjunction) MOSFETs is complex because the capacitance is so nonlinear. This nonlinearity makes it difficult to compare devices based on datasheet capacitance values, which can change by three orders of magnitude depending on voltage. It also creates a big difference between devices that are optimal for hard switching (low Eoss) and those best for ZVS soft switching (low Qoss).
A graph of Qoss versus Eoss clearly illustrates this difference. Consider the case of a 650-V, 70-mΩ-rated high-performance superjunction FET compared to a GaN HEMT with the same rated on-resistance. The superjunction charge Qoss rises steeply, reaching 90% of its final value within the first 20 V applied. The slope then abruptly diminishes so applying another 380 V only adds 10% more charge. This behavior arises because of how charge distributes in the columnar structures of a superjunction FET.
This behavior has an interesting effect: The energy needed to pump charge into Coss at low voltage is small because of the V² relationship in E = ½CV². Though 90% of the charge is stored in the first 20 V, a far smaller portion of the total energy is stored by that point. This characteristic nonlinearity explains why superjunction FETs can have a relatively low Eoss for a given Qoss, and it is what makes them excellent (low Eoss) for hard-switching applications compared with other silicon alternatives.
In stark contrast, the GaN HEMT is a lateral device and has a low, nearly linear capacitance versus voltage. Its graph of Qoss versus voltage has a shallow slope rising to a value an order of magnitude smaller than that of the superjunction Qoss. This even distribution of charge along the voltage axis results in an Eoss having a final value (at 400 V) almost equaling that of the superjunction device, since Eoss is the integration of charge times voltage. All in all, the HEMT is better in Eoss, but by a much smaller margin than Qoss where it is 10x improved.
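This Qoss/Eoss contrast can be made concrete with a small numerical integration, Qoss = ∫C(v)dv and Eoss = ∫C(v)·v·dv, using two toy C(v) curves: a superjunction-like profile that concentrates its capacitance below 20 V and a flat GaN-like profile. The capacitance values are illustrative assumptions, not datasheet figures.

```python
# Numerical Qoss and Eoss for two toy output-capacitance profiles:
# a superjunction-like C(v) that collapses above 20 V, and a flat
# GaN-like C(v). Both curves are illustrative, not datasheet values.

V_BUS = 400.0
N = 400_000
dv = V_BUS / N

def c_sj(v):
    # large capacitance below 20 V, small above (crude SJ model)
    return 20e-9 if v < 20.0 else 60e-12

def c_gan(v):
    # nearly constant capacitance of the lateral GaN device
    return 100e-12

results = {}
for name, c in (("superjunction", c_sj), ("GaN", c_gan)):
    q = e = 0.0
    for i in range(N):
        v = (i + 0.5) * dv  # midpoint-rule integration
        q += c(v) * dv       # Qoss = integral of C(v) dv
        e += c(v) * v * dv   # Eoss = integral of C(v) * v dv
    results[name] = (q, e)
    print(f"{name}: Qoss = {q * 1e9:.0f} nC, Eoss = {e * 1e6:.1f} uJ")
```

In this toy model the superjunction device stores roughly ten times the charge of the GaN device but only marginally more energy, mirroring the qualitative comparison described above.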
To further illustrate the effect of Qg and Qoss, consider the well-known LLC circuit. Suppose the same LLC circuit is used for comparison of both Si and GaN on the primary side running at about 325 kHz, delivering 750 W from a 385-V bus.
The waveforms for the superjunction FETs on the primary of the LLC show that once the upper gate turns off, the drain voltage takes more than 350 nsec to slew from bus voltage to zero. The superjunction’s nonlinear charge creates long, shallow tails on the voltage that mandate the long dead time. Even with a 350-nsec dead time, the voltage has not quite reached zero (so it is near-ZVS) when the lower gate turns on. Turning on slightly early, before the voltage across the switch is truly zero, may seem a small compromise, but it is not: Because of the nonlinearity, nearly half the Eoss still remains with only 20 V on the drain. In other words, this dead time is already about as short as it can be without a significant penalty in power loss (and efficiency).
With the GaN HEMT in the same circuit, under the same conditions, the gate voltage has much faster rise and fall times than in the superjunction device, because the gate driver has a much easier time driving the low-charge gate of the GaN device. Moreover, the low Qoss of the HEMT makes the drain-voltage transition nearly linear and much faster as well. This lets the dead time be 3x shorter, with no additional loss from non-ZVS operation.
Thus, for power supply applications that require higher density, GaN HEMTs provide far superior properties, enabling higher operating frequencies while simultaneously reducing overall losses. In the example and conditions described here, with a 750-W output, the overall efficiency of the LLC converter is 96.5% for superjunction and 97.8% for GaN: a 38% reduction in power loss, from 27.2 W to 16.9 W, because of the GaN HEMT.
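The quoted loss figures follow directly from the stated efficiencies and output power:

```python
# Loss comparison derived from the efficiencies quoted above.
P_OUT = 750.0  # output power (W)

def loss(eff):
    # input power minus output power at a given efficiency
    return P_OUT / eff - P_OUT

loss_sj = loss(0.965)    # superjunction LLC
loss_gan = loss(0.978)   # GaN HEMT LLC
reduction = 1.0 - loss_gan / loss_sj

print(f"superjunction: {loss_sj:.1f} W, GaN: {loss_gan:.1f} W, "
      f"loss reduction: {reduction:.0%}")
```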
Infineon Technologies AG