Displayed precision from an instrument reading or calculation can easily mislead someone into thinking that the result has more accuracy than it actually has.
A few weeks ago, I received a blast email from a research firm touting their latest market-prediction report. The pitch proclaimed, “the global electric motor market size was $96,967.9 million in 2017, and is projected to reach $136,496.1 million in 2025.” Well, I was certainly impressed: a market size stated to six significant figures for 2017 and a prediction to seven figures for 2025 is truly amazing precision; that must be some super crystal ball they have. That many significant digits makes it sound like a technical report from the metrology experts at the National Institute of Standards and Technology (NIST) rather than a market-size prediction.
Let’s be honest: most market research is inherently imprecise, as it’s a combination of insight, surveys and questionnaires, experience, guesswork, and plain old luck. If the final numbers turn out to be accurate to ±20% or ±30% in retrospect, I’d say that was pretty good for an assessment that was trying to look three to five years ahead.
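To see how little those trailing digits mean, here’s a quick sanity check (a minimal sketch in Python, using the figures quoted in the pitch; the ±20% band is an assumed uncertainty, not anything the report claimed):

```python
# Back-of-envelope check of the quoted forecast figures (numbers as
# quoted in the pitch; the +/-20% band is an assumed uncertainty).
size_2017 = 96_967.9   # $M, as quoted
size_2025 = 136_496.1  # $M, as quoted
years = 2025 - 2017

cagr = (size_2025 / size_2017) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.2%} per year")   # ~4.37%

# If the 2025 figure is really only good to +/-20%, the honest range
# swamps everything past the second significant digit.
low, high = size_2025 * 0.8, size_2025 * 1.2
print(f"2025 estimate with +/-20% band: ${low:,.0f}M to ${high:,.0f}M")
```

The implied growth rate works out to roughly 4.4% per year; once the forecast is granted a realistic uncertainty band, everything past the second significant digit is noise.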
Even when dealing with real-world measurements from sensors and their analog front ends (AFEs) rather than forecasts, achieving accuracy to 1% (one part in 100) is doable, getting to 0.1% is much harder, and getting to 0.01% or better takes serious effort. Even if you start out near-perfect, factors such as temperature change and aging degrade the results fairly quickly and dramatically.
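As a rough illustration, here’s a minimal sketch of how drift swamps a tight initial tolerance; the 25 ppm/°C tempco and the 30 °C temperature rise are assumed, illustrative values:

```python
# How temperature drift erodes initial accuracy (illustrative values).
initial_tolerance = 0.01e-2      # 0.01% initial accuracy
tempco = 25e-6                   # 25 ppm/°C temperature coefficient (assumed)
delta_t = 30                     # °C rise above calibration temperature (assumed)

drift = tempco * delta_t         # 750 ppm = 0.075%
total_error = initial_tolerance + drift
print(f"Initial: {initial_tolerance:.4%}, after drift: {total_error:.4%}")
# The drift alone (0.075%) is 7.5x the 0.01% starting tolerance.
```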
There’s a name for such misuse of math and statistics: “innumeracy.” The term was coined and explained by mathematician John Allen Paulos in his classic 1988 book “Innumeracy: Mathematical Illiteracy and Its Consequences” (Figure 1). Don’t let the book’s age fool you: what he said is still true and has probably gotten worse due to the widespread, casual use of standard applications such as Excel and of more advanced statistical-software packages.

Some quick examples of innumeracy:
- When someone thinks an increase from 20% to 40% is a gain of 20% rather than 100%, because they don’t clearly understand the difference between percentage points and percent change (see the sketch after this list).
- When someone uses more than two, or perhaps three, significant figures to characterize a semi-quantitative estimate of the market for anything five years out.
- When someone draws a conclusion from an extrapolated “trend line” supported by just a few data points clustered near its starting point.
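The first item is easy to verify with a few lines of arithmetic (a minimal sketch, using the 20%-to-40% figures from the example above):

```python
# Percentage points vs. percent change for an increase from 20% to 40%.
old, new = 0.20, 0.40
point_change = (new - old) * 100           # 20 percentage points
relative_change = (new - old) / old * 100  # 100% relative gain
print(f"{point_change:.0f} percentage points, {relative_change:.0f}% gain")
```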
The engineering dilemma is that apparent precision, as represented by additional significant figures, is relatively cheap and easy to obtain. Even a low-end hardware-store digital voltmeter (DVM) can display 2½ (199 counts) or even 3½ (1999 counts) digits (Figure 2). Before you know it, those “precise” numbers also carry an aura of accuracy, which is generally not the case. (It’s like giving management a rough “ballpark” number and soon finding it embedded in a formal presentation, with an official commitment attached.)
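Here’s a minimal sketch of the gap between display resolution and actual accuracy; the ±(0.5% of reading + 2 counts) spec is a hypothetical but typical figure for a low-end meter, not taken from any particular datasheet:

```python
# A 3-1/2 digit DVM displays up to 1999 counts, but display resolution
# isn't accuracy. The +/-(0.5% of reading + 2 counts) spec is assumed.
reading = 1.234          # displayed volts on the 2 V range
count_size = 0.001       # one count = 1 mV on this range

uncertainty = 0.005 * reading + 2 * count_size
print(f"Display: {reading:.3f} V, but true value is "
      f"{reading - uncertainty:.3f} to {reading + uncertainty:.3f} V")
# ~ +/-8 mV of uncertainty: the last displayed digit is mostly decoration.
```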

Do you need a lot of precision to do most engineering projects? Usually not (although there are many exceptions, of course). Keep in mind that old-time engineers built solid bridges, ships, aircraft, and even rockets using the slide rule, an analog calculator with two, sometimes three, significant digits of precision (Figure 3). Using this crude tool and its low precision forced designers to stop and think about whether the answer made sense, which was a good thing.
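For the curious, a slide rule’s output can be mimicked by rounding to significant figures (a minimal sketch; the helper name slide_rule_round is mine, not a standard function):

```python
from math import floor, log10

def slide_rule_round(x: float, sig: int = 3) -> float:
    """Round x to 'sig' significant figures, slide-rule style."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

# A slide rule keeps only the leading digits; tracking the exponent
# and sanity-checking the result were the engineer's job.
print(slide_rule_round(96_967.9))   # 97000.0
print(slide_rule_round(0.012345))   # 0.0123
```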

Once the urge for excess precision is under control, it’s time to look at the sources of error in the data, measurements, or analysis to see if those errors can be minimized and real accuracy increased. Among the common techniques for doing so are these:
- Use components that are inherently more accurate (such as 0.1% resistors instead of 1% units, or low-temperature-coefficient devices) and perhaps even stress or age them if appropriate.
- Use topologies that cause errors to self-cancel (such as the Wheatstone bridge or matched resistors on the same die); a sketch of this cancellation follows the list.
- Add various preventative measures such as shielding, electrical or mechanical isolation, or thermal stabilization.
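To see the self-cancellation at work, here’s a minimal sketch of a Wheatstone bridge in which all four arms drift by a common factor, as matched resistors on one die would; the resistance and drift values are illustrative:

```python
# Error self-cancellation in a Wheatstone bridge: if all four arms
# drift by the same factor, the ratiometric output is unchanged.
def bridge_out(r1, r2, r3, r4, v_ex=5.0):
    """Differential output of a Wheatstone bridge excited by v_ex."""
    return v_ex * (r3 / (r3 + r4) - r2 / (r1 + r2))

nominal = 1000.0   # ohms (illustrative)
sense = 1010.0     # one arm changed +1% by the measurand
drift = 1.005      # 0.5% common drift on all four arms (assumed)

v_ideal = bridge_out(nominal, nominal, sense, nominal)
v_drift = bridge_out(*(r * drift for r in (nominal, nominal, sense, nominal)))
print(f"ideal: {v_ideal*1000:.3f} mV, after common drift: {v_drift*1000:.3f} mV")
# Both print the same value: the common-mode drift divides out.
```

Because the output depends only on resistance ratios, common-mode drift divides out; only mismatch between the arms shows up as error.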
Of course, there are many facets of and perspectives on the precision-and-accuracy story. Just don’t let numbers, analyses, and answers with an excessive number of digits lead you to unintentionally mislead others or even yourself. Keep in mind the old saying: “It’s better to be roughly right than precisely wrong.” There’s still a lot of truth to that cliché.
Related EE World content
Wheatstone bridge, Part 1: Principles and basic applications
Wheatstone bridge, Part 2: Additional considerations
Tolerance analysis distinguishes prototypes from production units
Thermistors, thermocouples, and RTDs for thermal management
Analog computation, Part 1: What and why
Analog computation, Part 2: When and how
How to measure current and energy use accurately
Understanding the lock-in amplifier, Part 1: The sensing challenge