Battery-powered devices are ubiquitous, and the demand continues to grow. What once might have been just a simple trip to the gym, for example, with nothing more than workout clothes and athletic shoes, now commonly involves wireless earbuds, a cellphone or other music source, and a personal fitness monitor. According to a recent MarketsandMarkets study, the lithium-ion (Li-ion) battery market is expected to grow at a compound annual growth rate (CAGR) of 16.6 percent from 2016 to 2024. Research and Markets places that number at 15.5 percent from 2017 to 2025. The close agreement between these two projections suggests that the trend is both clear and well into the double-digit range.
It is because of this rapid growth in battery-powered technology that optimized battery run time has become increasingly important. For consumer products, battery run time is a matter of convenience and enjoyment, making it a key driver in the buying decision. Manufacturers clearly understand this fact. That is why television, radio, and print advertisements frequently highlight it as a differentiator for computers, tablets, smartphones, and more. Optimization is no easy task; it depends on the architectural and programming decisions the R&D engineer makes.
Applications Where Battery Run Time Matters
Before delving into the decisions engineers can make to conserve battery charge, it’s first important to understand why optimizing battery life is so critical. For Internet of Things (IoT) devices, a key reason is that battery run time can be critical to a project’s economic viability. Consider smart agriculture applications that rely on tens of thousands of sensors to monitor air and soil temperature, plant color, light levels, fertilizer presence, soil morphology, and more. The many costs associated with dead batteries (data loss, replacement batteries, labor, disposal, reputation damage, and lost opportunities) add up to huge amounts of money, even for a modest farm. Likewise, environmental applications rely on wireless sensors to monitor a variety of conditions. These sensors are often placed in remote, hard-to-access locations, so replacing a single battery can require many hours of travel and labor.
Sometimes a battery is nearly impossible to change, such as the Li-ion cell powering a wireless health monitor in a dairy cow’s stomach. These IoT devices can improve an animal’s health by providing veterinary professionals with early indications of pregnancy, injury, and illness. That information allows veterinarians to begin treating cows sooner and to use less medicine. A typical dairy cow may have a productive life of four or five years, so the monitor’s battery should last at least that long.
Wireless medical devices offer another prime example of an application where battery run time matters. In a hospital or physician’s office where charging stations and spare devices are readily available, a battery failure may be merely an inconvenience that delays a procedure or test. In an emergency response situation, such as an ambulance call, a failed medical device can be a life-safety issue. And if the battery-powered device is inside the patient, as is the case with a pacemaker or internal defibrillator, changing the battery means exposing the patient to an expensive and risky operation.
Design Decisions Drive Battery Life
When it comes to conserving battery charge, design engineers have many architectural and programming options at their disposal. Some of the options must be determined early in the product development cycle; others can be changed shortly before the final test. For this reason, it is important for design and validation engineers to think about battery run time throughout the product development cycle.
Here are four key options engineers can use to conserve battery charge:
1. Voltage and MCU
Some embedded device architectures include efficient DC-DC buck converters that let the design engineer operate the microcontroller unit (MCU) and peripherals at different voltages. These voltage choices usually represent trade-offs between device performance and charge consumption. Some MCUs also have math accelerators optimized for the arithmetic associated with cyclic redundancy checks, cryptography, and data analysis. Such high-speed peripherals let the designer put the MCU core to sleep sooner and may themselves run at lower power than the core.
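To make the idea concrete, the sketch below streams a buffer through a memory-mapped hardware CRC engine so the CPU does no per-byte arithmetic and can return to sleep sooner. The register names, addresses, and bit definitions (CRC_CTRL, CRC_DATA, CRC_RESULT) are hypothetical placeholders, not any specific vendor’s peripheral.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical memory-mapped CRC accelerator registers */
#define CRC_CTRL   (*(volatile uint32_t *)0x40001000u)
#define CRC_DATA   (*(volatile uint32_t *)0x40001004u)
#define CRC_RESULT (*(volatile uint32_t *)0x40001008u)
#define CRC_START  (1u << 0)
#define CRC_BUSY   (1u << 1)

/* Stream a buffer through the hardware engine and return the CRC. */
uint32_t crc32_hw(const uint32_t *words, size_t n)
{
    CRC_CTRL = CRC_START;          /* reset and start the engine  */
    for (size_t i = 0; i < n; i++)
        CRC_DATA = words[i];       /* engine folds in each word   */
    while (CRC_CTRL & CRC_BUSY)
        ;                          /* typically only a few cycles */
    return CRC_RESULT;
}
```

Because the engine handles the bit-level work, the CPU’s active window shrinks to a short series of register writes before it drops back into a low-power state.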
2. Memory
There are many options related to memory. The type of memory used in the design (EEPROM, SRAM, FRAM, flash, and so on) affects both charge consumption and device performance. Some MCU architectures can disable power to unused RAM segments, and some include small SRAM caches so that most program operations run from this low-power memory technology.
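As a rough illustration of the RAM-segment option, the sketch below powers down unused SRAM banks before sleep. The RAM_PWR_CTRL register and the bank layout are hypothetical; real MCUs expose this capability through vendor-specific registers or power-management APIs.

```c
#include <stdint.h>

/* Hypothetical SRAM retention register: each bit keeps one bank powered. */
#define RAM_PWR_CTRL (*(volatile uint32_t *)0x40002000u)

/* Clearing a bit removes power from that bank and loses its contents,
 * so retain only the banks that hold the stack and live data. */
static inline void retain_ram_banks(uint32_t banks_in_use)
{
    RAM_PWR_CTRL = banks_in_use;   /* e.g., 0x3 retains banks 0 and 1 */
}
```

Linker placement matters here: any state that must survive sleep has to be located in the retained banks.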
3. Clocks and timers
Design engineers can choose among various types of MCU clocks and timers. Most MCUs have a fast master clock for quick reactions and signal processing, and a slow auxiliary clock that keeps the real-time clock alive during sleep. Some also have a low-power wake-up timer that combines a small current sink, a comparator, and an RC circuit whose period depends on the resistance and capacitance. Such a timer is not very precise, but it suffices for some applications and consumes less power than a crystal oscillator.
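A minimal sketch of using such a wake-up timer appears below, assuming an Arm-style core (the wfi instruction) and hypothetical WUT_* registers.

```c
#include <stdint.h>

/* Hypothetical low-power wake-up timer registers */
#define WUT_PERIOD (*(volatile uint32_t *)0x40003000u)
#define WUT_CTRL   (*(volatile uint32_t *)0x40003004u)
#define WUT_ENABLE (1u << 0)

/* Sleep until the RC wake-up timer fires. The RC period drifts with
 * temperature and component tolerance, so treat the tick as approximate. */
static void sleep_for_ticks(uint32_t rc_ticks)
{
    WUT_PERIOD = rc_ticks;
    WUT_CTRL  |= WUT_ENABLE;
    __asm volatile ("wfi");    /* wait-for-interrupt: the core halts
                                  until the timer interrupt wakes it */
}
```

For a periodic sensor poll that tolerates a few percent of timing jitter, this kind of timer keeps the crystal oscillator off entirely between readings.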
4. Firmware programming
Many firmware decisions can affect battery life. The first firmware choice the engineer should make is to optimize the MCU clock because a slowly clocked processor consumes less current than one with a fast clock. For code segments where the processor is largely idle, a slow MCU clock conserves charge. It may also be possible to configure the device display update rate, the frequency at which the MCU receives data from sensors, and the blink rates and durations for light-emitting diodes (LEDs). Additionally, programmers can adjust the frequency at which the device transmits data and the retry strategy when the radio fails to communicate with its gateway or base station.
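The retry strategy in particular rewards careful coding. The sketch below retries a failed transmission with exponential backoff and sleeps between attempts instead of busy-waiting; radio_send() and sleep_ms() are hypothetical driver calls standing in for whatever the platform provides.

```c
#include <stdbool.h>
#include <stdint.h>

extern bool radio_send(const uint8_t *buf, uint16_t len); /* hypothetical */
extern void sleep_ms(uint32_t ms);          /* hypothetical low-power sleep */

#define MAX_RETRIES 5

/* Retry with exponential backoff so repeated failures cost the battery
 * as little charge as possible. */
bool send_with_backoff(const uint8_t *buf, uint16_t len)
{
    uint32_t wait_ms = 100;                  /* initial backoff          */
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (radio_send(buf, len))
            return true;
        sleep_ms(wait_ms);                   /* sleep, don't spin        */
        wait_ms *= 2;                        /* 100, 200, 400, ... ms    */
    }
    return false;                            /* defer to the next window */
}
```

Compared with immediate, unbounded retries, this approach caps the charge a congested or out-of-range gateway can drain.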
Gaining Insight to Drive Better Decisions
With so many decisions to make and with ever-increasing pressure to hit tight product launch schedules, how does the engineer know where to focus product development efforts to optimize charge consumption? The key lies in understanding exactly how much charge is consumed by various components, operating states, and firmware operations. This information can be obtained using event-based power analysis (EBPA) to correlate current measurements with RF and sub-circuit events. By performing EBPA, design engineers can quickly understand how much charge each feature consumes and then use this insight to make the right design decisions.
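The arithmetic behind EBPA is straightforward: charge is the integral of current over time, accumulated separately for each tagged time slice. The sketch below assumes uniformly sampled current readings and per-sample event tags; the names are illustrative, not a real analyzer’s API.

```c
#include <stddef.h>

enum { EV_MCU, EV_DUT_TX, EV_LED1, N_EVENTS };

/* Each sample contributes i * dt coulombs to the bucket its tag selects. */
void charge_per_event(const double *current_a, const int *event_tag,
                      size_t n_samples, double dt_s,
                      double q_out[N_EVENTS])
{
    for (int e = 0; e < N_EVENTS; e++)
        q_out[e] = 0.0;
    for (size_t k = 0; k < n_samples; k++)
        q_out[event_tag[k]] += current_a[k] * dt_s;
}
```

Dividing each bucket by the total charge yields the per-event percentages discussed below.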
As an example, the graph of four waveforms (Figure 1) shows the device’s activity time-sliced into segments based on the RF power (green line), the LED voltage (blue line), and the overall device current (actual measurements in yellow, smoothed measurements in brown). Along the horizontal time axis, the resulting time slices are labeled MCU, DUT Tx, and LED1.

Once the solution software has time-sliced the waveform according to the RF and sub-circuit events of interest, it can show the engineer exactly how much charge is associated with each of these events (Figure 2). In the time slices identified above, for example, one-third of the charge is consumed during the DUT Tx time slices, while more than 60 percent is consumed during the LED1 time slices. The design engineer running the software can adjust the period being analyzed to focus on just a few milliseconds of device activity or to consider a much longer stretch of operation.

It is often helpful to understand how a device spends its time and charge at or above various current levels. The ideal tool for this task is the complementary cumulative distribution function (CCDF), as shown in Figure 3. This graph shows that the device spends nearly 78 percent of its time at current levels above 7.16 mA, but it spends more than 91 percent of its charge above this level. The steep drops near the right side of the graph indicate that much of the charge is consumed at rates of approximately 12.5 and 14.5 mA. Using this insight, the design engineer can identify further options for conserving charge.
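For readers who want the mechanics, both CCDF curves can be derived from the same raw samples. The sketch below, again assuming uniformly sampled current, computes the fraction of time and the fraction of total charge spent above a given threshold; sweeping the threshold traces out the two curves.

```c
#include <stddef.h>

/* With uniform sampling, the sample interval dt cancels out of both
 * ratios, so raw current sums suffice. */
void ccdf_at_threshold(const double *current_a, size_t n_samples,
                       double threshold_a,
                       double *time_frac, double *charge_frac)
{
    size_t above = 0;
    double q_above = 0.0, q_total = 0.0;
    for (size_t k = 0; k < n_samples; k++) {
        q_total += current_a[k];
        if (current_a[k] > threshold_a) {
            above++;
            q_above += current_a[k];
        }
    }
    *time_frac   = (double)above / (double)n_samples;
    *charge_frac = q_above / q_total;
}
```

A gap between the two fractions at a given threshold, like the 78 percent versus 91 percent split noted above, flags the high-current states where optimization effort pays off most.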

The Bottom Line
The battery-powered device market is growing quickly. Demand for ever-longer run time is also on the rise, particularly for consumer, IoT, and medical applications. Making the right architectural and programming decisions is critical for optimizing run time, but doing so quickly enough to hit fast time-to-market objectives remains challenging. Using event-based power analysis to correlate current waveforms with RF and sub-circuit events is the ideal way to deal with this dilemma. It not only provides R&D engineers with fast, visual insight but also allows them to quickly iterate their designs to optimize battery life. It’s a smart choice for any engineer looking to develop a successful battery-powered device.