The mathematics behind sensor fusion centers on probabilistic modeling and statistical estimation using Bayesian inference, implemented with techniques such as particle filters, Kalman filters, and α-β-γ filters. These methods combine data from multiple sensors while accounting for noise and uncertainty, producing a more accurate overall picture of a system or environment. Neural networks (NNs) are also used for sensor fusion.
Balancing conflicting requirements for accuracy and computational efficiency can be a major challenge in sensor fusion, especially in applications like robotics and autonomous systems (Figure 1). Typical elements of a sensor fusion process, illustrated in the brief sketch after the list, include:
- State Vector: represents the parameters like position, velocity, or acceleration that will be estimated using sensor data.
- Motion Model: describes how the state vector is expected to change over time.
- Measurement Model: describes how sensor data relates to the real system state, including noise and other measurement characteristics.
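As a rough illustration of these three elements, the sketch below sets them up for a simple one-dimensional constant-velocity tracker; the state layout, time step, and noise value are assumptions chosen for the example, not part of any particular system.

```python
import numpy as np

# State vector: the quantities to be estimated from sensor data.
x = np.array([0.0, 1.0])      # [position (m), velocity (m/s)]

# Motion model: how the state is expected to change over one time step dt.
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])    # position advances by velocity*dt; velocity held constant

# Measurement model: a sensor that observes position only, with additive noise.
H = np.array([[1.0, 0.0]])    # maps the state vector to the measured quantity
measurement_noise_std = 0.5   # assumed sensor noise (standard deviation, meters)

# One prediction step and the measurement expected from the predicted state.
x_pred = F @ x
z_expected = H @ x_pred
print(x_pred, z_expected)
```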

Because sensor fusion is model-based, various forms of Bayesian inference are key to optimizing performance. Bayesian inference applies Bayes’ theorem and conditional probability to update the probability distribution of the estimated state as new sensor measurements arrive, combining prior knowledge with observed data.
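As a minimal, assumed example of that update, the snippet below applies Bayes’ theorem to a two-state belief ("near" versus "far") given the likelihood of a single range reading; the probability values are illustrative only.

```python
# Minimal discrete Bayes update: posterior is proportional to likelihood x prior.
prior = {"near": 0.5, "far": 0.5}        # belief before the measurement
likelihood = {"near": 0.8, "far": 0.2}   # P(measurement | state), assumed values

unnormalized = {s: likelihood[s] * prior[s] for s in prior}
evidence = sum(unnormalized.values())    # P(measurement)
posterior = {s: p / evidence for s, p in unnormalized.items()}
print(posterior)                         # {'near': 0.8, 'far': 0.2}
```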
How do digital filters work?
Digital filters for sensor fusion estimate the state of a system in a noisy environment by combining uncertain measurements with a model of the system. Particle filters, Kalman filters, and α-β-γ filters are all implemented as recursive algorithms: each new state estimate is computed from the previous estimate and the latest sensor data.
Different filter types trade off computational complexity against performance. Particle filters are the most complex and are suited to nonlinear systems; Kalman filters are less complex, and α-β-γ filters are the simplest.
Particle filters use Monte Carlo techniques to estimate probability distributions, combining prior knowledge with observed data to estimate the current state. They represent the distribution with a set of samples called particles and are used for systems with complex, nonlinear dynamics operating in noisy environments.
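A rough sketch of a bootstrap particle filter step is shown below; the random-walk motion model, Gaussian measurement noise, and particle count are assumptions made for brevity. Each step propagates the particles through the motion model, weights them by how well they explain the new measurement, and resamples.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.normal(0.0, 5.0, N)      # initial samples of the unknown position
weights = np.full(N, 1.0 / N)

def particle_filter_step(particles, weights, z, motion_std=0.5, meas_std=1.0):
    # 1. Predict: propagate each particle through a noisy motion model (random walk here).
    particles = particles + rng.normal(0.0, motion_std, particles.size)
    # 2. Update: weight particles by the likelihood of the measurement z.
    weights = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    weights /= weights.sum()
    # 3. Resample: draw a new particle set in proportion to the weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    particles = particles[idx]
    weights = np.full(particles.size, 1.0 / particles.size)
    return particles, weights

for z in [1.0, 1.2, 0.9, 1.1]:           # simulated position measurements
    particles, weights = particle_filter_step(particles, weights, z)
    print("estimate:", particles.mean())  # particle mean serves as the state estimate
```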
Common Kalman filters
There are a variety of Kalman filter implementations. The choice often depends on the type of sensor, the available state information, and noise considerations. Some common Kalman filter variations include:
- Linear Kalman Filter (LKF): a basic recursive Bayesian filter that estimates the state of a linear system with Gaussian noise (a minimal sketch follows this list).
- Extended Kalman Filter (EKF): an extension of the LKF for nonlinear systems. It linearizes nonlinear state transitions and measurements.
- Extended Information Filter (EIF): another variation for nonlinear systems. It’s like the EKF but operates on an information vector and information matrix (the inverse of the covariance matrix) rather than directly on the state estimate and covariance.
- Unscented Kalman Filter (UKF): uses a set of “sigma points” to capture the state uncertainty more accurately than the EKF without requiring the Jacobian calculations that the EKF’s linearization relies on.
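Below is a minimal sketch of the LKF’s recursive predict/update cycle referenced in the first item above; the constant-velocity model and the noise covariances are assumptions chosen for illustration.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # motion model (constant velocity)
H = np.array([[1.0, 0.0]])              # measurement model (position sensor)
Q = np.eye(2) * 0.01                    # assumed process noise covariance
R = np.array([[0.25]])                  # assumed measurement noise covariance

x = np.array([[0.0], [0.0]])            # initial state estimate [position, velocity]
P = np.eye(2)                           # initial estimate covariance

def kalman_step(x, P, z):
    # Predict: project the state and covariance forward through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement using the Kalman gain.
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

for z in [0.11, 0.22, 0.28, 0.42]:           # simulated position readings
    x, P = kalman_step(x, P, np.array([[z]]))
print(x.ravel())                             # estimated [position, velocity]
```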
The α-β-γ filter
An α-β-γ filter is simpler but less accurate than a Kalman filter. While Kalman filters dynamically calculate gain based on the system model and noise statistics, an α-β-γ filter uses fixed gains. The system model used by an α-β-γ filter is also simpler, requiring fewer computing resources.
The α, β, and γ gains are selected manually. They are crucial parameters that control how much weight is given to the current measurement versus the state predicted from previous measurements. An α-β-γ filter suits non-critical and resource-constrained applications (Table 1).
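A minimal sketch of an α-β-γ filter tracking position, velocity, and acceleration with fixed gains appears below; the gain values, time step, and measurements are assumptions for illustration.

```python
# Alpha-beta-gamma filter: fixed gains, no covariance bookkeeping.
alpha, beta, gamma = 0.5, 0.4, 0.1   # manually selected weights (assumed values)
dt = 0.1
pos, vel, acc = 0.0, 0.0, 0.0        # initial state estimate

def abg_step(pos, vel, acc, z):
    # Predict the next position and velocity from the current estimate.
    pos_pred = pos + vel * dt + 0.5 * acc * dt * dt
    vel_pred = vel + acc * dt
    # Residual between the new measurement and the prediction.
    r = z - pos_pred
    # Correct the prediction using the fixed gains.
    pos_new = pos_pred + alpha * r
    vel_new = vel_pred + (beta / dt) * r
    acc_new = acc + (2.0 * gamma / (dt * dt)) * r
    return pos_new, vel_new, acc_new

for z in [0.1, 0.25, 0.43, 0.66]:    # simulated position measurements
    pos, vel, acc = abg_step(pos, vel, acc, z)
print(pos, vel, acc)
```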

Do neural networks fit in?
NNs can learn complex patterns, making them useful for sensor fusion in dynamic environments. They can be used alone or with other tools like Kalman filters.
A hybrid system combining an NN and a Kalman filter can improve performance: the NN handles complex nonlinear data relationships, while the Kalman filter provides state estimates based on known system dynamics.
In sensor fusion applications, NNs can also preprocess data to reduce noise before it is fed to the Kalman filter, tune the filter’s parameters, or model the nonlinear parts of the system dynamics within the filter, improving the accuracy of state estimates.
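As a rough sketch of that preprocessing pipeline (not the method of any specific product or paper), the denoise function below is a placeholder standing in for a trained NN, and the scalar Kalman filter constants are assumed values.

```python
import numpy as np

def denoise(z_window):
    # Placeholder for a trained neural network: a simple average of recent raw
    # readings stands in for the NN's noise-reduction output.
    return float(np.mean(z_window))

# Scalar Kalman filter constants (assumed for illustration).
q, r = 0.01, 0.25        # process and measurement noise variances
x, p = 0.0, 1.0          # state estimate and its variance

raw = [0.9, 1.4, 0.7, 1.2, 1.0, 1.3]   # noisy sensor readings
window = []
for z in raw:
    window = (window + [z])[-3:]        # keep the last few raw readings
    z_clean = denoise(window)           # NN-style preprocessing step
    # Standard scalar Kalman predict/update on the cleaned measurement.
    p_pred = p + q
    k = p_pred / (p_pred + r)
    x = x + k * (z_clean - x)
    p = (1.0 - k) * p_pred
print(x)
```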
Summary
Sensor fusion balances the performance tradeoffs of various sensor technologies to arrive at a more complete picture of environmental or system conditions. Various types of digital filters using Bayesian inference are available, including particle, Kalman, and α-β-γ filters. NNs can also be used for sensor fusion, sometimes in combination with digital filters.