The Kalman filter is a general and powerful tool for combining information in the presence of uncertainty.
What is it?
You can use a Kalman filter in any place where you have uncertain information about some dynamic system, and you can make an educated guess about what the system is going to do next.
Kalman filters are ideal for systems that are continuously changing. They have the advantage of being light on memory (they don't need to keep any history other than the previous state), and they are very fast, making them well suited for real-time problems and embedded systems.
What can we do with a Kalman filter?
It could be data about the amount of fluid in a tank, the temperature of a car engine, the position of a user's finger on a touchpad, or any number of things you need to keep track of. More generally, if an algorithm is described with the word "tracking" (a tracker, and so on), it almost always processes its data with a Kalman filter.
How a Kalman filter sees your problem
The Kalman filter assumes that both variables are random and Gaussian distributed. Each variable has a mean value μ, which is the center of the random distribution (and its most likely state), and a variance σ², which is the uncertainty.
This kind of relationship is really important to keep track of, because it gives us more information: one measurement tells us something about what the others could be. That's the goal of the Kalman filter: we want to squeeze as much information from our uncertain measurements as we possibly can!
This correlation between parameters is captured by a covariance matrix: each element Σij is the degree of correlation between the i-th and j-th state variables. Note that the covariance matrix is symmetric, which means swapping i and j changes nothing.
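To make this concrete, here is a tiny sketch (the numbers are invented for illustration) of a two-variable state, say position and velocity, described by a mean vector and a covariance matrix:

```python
import numpy as np

# Hypothetical best estimate of a [position, velocity] state.
x = np.array([10.0, 4.5])   # mean: the center of the Gaussian blob

# Covariance matrix: diagonal entries are the variances of each variable,
# off-diagonal entries capture how the variables co-vary.
P = np.array([[2.0, 1.2],
              [1.2, 1.0]])

# Sigma_ij == Sigma_ji: the covariance matrix is symmetric.
assert np.allclose(P, P.T)
```

The positive off-diagonal entry encodes the "one measurement tells us about the other" idea: a higher-than-expected velocity makes a higher position more likely too.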
Describing the problem with matrices
We need some way to look at the current state (at time k-1) and predict the next state at time k. We can represent this prediction step with a prediction matrix, Fk.
The prediction matrix takes every point in our original estimate and moves it to a new predicted location, which is where the system would move if that original estimate were the right one.
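As a sketch, assuming a constant-velocity model with time step dt (my own choices, not anything the filter requires), the prediction matrix slides the whole estimate forward:

```python
import numpy as np

dt = 1.0  # time step (assumed)

# Constant-velocity model: new_position = position + velocity * dt
F = np.array([[1.0, dt],
              [0.0, 1.0]])

x = np.array([10.0, 4.5])          # previous best estimate [position, velocity]
P = np.array([[2.0, 1.2],
              [1.2, 1.0]])         # previous covariance

# Predict the next state: every point in the old estimate is moved by F.
x_pred = F @ x                     # -> [14.5, 4.5]

# The covariance is transformed the same way: P' = F P F^T
P_pred = F @ P @ F.T
```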
There might be some changes that aren’t related to the state itself — the outside world could be affecting the system.
We capture these external influences with two new terms:
- a control matrix Bk
- a control vector uk
Everything is fine if the state evolves based on its own properties. Everything is still fine if the state evolves based on external forces, so long as we know what those external forces are.
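A sketch of how a known external force enters the prediction, assuming a constant acceleration a from, say, a throttle command (names and numbers are invented):

```python
import numpy as np

dt = 1.0
a = 2.0  # known acceleration, e.g. from a throttle command (assumed)

F = np.array([[1.0, dt],
              [0.0, 1.0]])

# Control matrix B maps the control input into state space
# (standard kinematics: dx = 0.5*a*dt^2, dv = a*dt).
B = np.array([0.5 * dt**2, dt])
u = a  # the control "vector" is a single scalar acceleration here

x = np.array([10.0, 4.5])
# Prediction plus a correction for the known external influence.
x_pred = F @ x + B * u
```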
We can model the uncertainty associated with the "world" (i.e. things we aren't keeping track of) by adding some new uncertainty after every prediction step: every state in our original estimate could have moved to a range of states.
In other words, the new best estimate is a prediction made from the previous best estimate, plus a correction for known external influences.
And the new uncertainty is predicted from the old uncertainty, with some additional uncertainty from the environment.
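These two sentences can be written as one predict step. A sketch, with an assumed process-noise covariance Q standing in for the environment's uncertainty (all numbers invented):

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
u = 2.0                             # known external influence (assumed)
Q = np.array([[0.1, 0.0],
              [0.0, 0.1]])          # untracked "world" noise (assumed values)

x = np.array([10.0, 4.5])
P = np.array([[2.0, 1.2], [1.2, 1.0]])

# New best estimate = prediction from the old estimate + known influences.
x_pred = F @ x + B * u
# New uncertainty = predicted old uncertainty + environment noise.
P_pred = F @ P @ F.T + Q
```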
Refining the estimate with measurements
We might have several sensors which give us information about the state of our system.
Each sensor tells us something indirect about the state: in other words, the sensors operate on a state and produce a set of readings.
The units and scale of the reading might not be the same as the units and scale of the state we’re keeping track of. You might be able to guess where this is going: We’ll model the sensors with a matrix.
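A sketch of such a sensor matrix, assuming a hypothetical sensor that reads only position, in centimeters, while the state tracks meters:

```python
import numpy as np

# Hypothetical sensor: reads position only, in centimeters (state is in meters).
H = np.array([[100.0, 0.0]])

x_pred = np.array([15.5, 6.5])     # predicted state [position_m, velocity]
P_pred = np.array([[5.5, 2.2],
                   [2.2, 1.1]])

# Expected sensor reading and its uncertainty, both in sensor units.
z_expected = H @ x_pred            # -> [1550.0] cm
S_expected = H @ P_pred @ H.T      # -> [[55000.0]] cm^2
```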
One thing that Kalman filters are great for is dealing with sensor noise. In other words, our sensors are at least somewhat unreliable, and every state in our original estimate might result in a range of sensor readings.
From each reading we observe, we might guess that our system was in a particular state. But because there is uncertainty, some states are more likely than others to have produced the reading we saw.
We have two Gaussian blobs: One surrounding the mean of our transformed prediction, and one surrounding the actual sensor reading we got.
We must try to reconcile our guess about the readings we’d see based on the predicted state (pink) with a different guess based on our sensor readings (green) that we actually observed.
If we have two probabilities and we want to know the chance that both are true, we just multiply them together.
The mean of this distribution is the configuration for which both estimates are most likely, and is therefore the best guess of the true configuration given all the information we have.
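In matrix form, multiplying the two Gaussians works out to the standard update equations built around the Kalman gain K. A sketch with invented numbers, where R is the sensor-noise covariance:

```python
import numpy as np

H = np.array([[100.0, 0.0]])        # hypothetical sensor model (cm vs. m)
R = np.array([[400.0]])             # sensor noise covariance (assumed)
z = np.array([1620.0])              # the actual reading we got (invented)

x_pred = np.array([15.5, 6.5])
P_pred = np.array([[5.5, 2.2],
                   [2.2, 1.1]])

# Kalman gain: how much to trust the reading vs. the prediction.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)

# The new mean sits between the two blobs; the new covariance shrinks.
x_new = x_pred + K @ (z - H @ x_pred)
P_new = P_pred - K @ H @ P_pred
```

Because the sensor noise R is small relative to the prediction's uncertainty here, the new mean lands close to the measurement, and the position variance collapses sharply.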
This will allow you to model any linear system accurately. For nonlinear systems, we use the extended Kalman filter, which works by simply linearizing the predictions and measurements about their mean.
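Tying the pieces together, a minimal linear Kalman filter is just the predict and update steps in a loop. A sketch with invented model matrices and toy measurements:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict + update cycle of a linear Kalman filter (no control input)."""
    # Predict: push the estimate through the model and inflate by process noise.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = P - K @ H @ P
    return x, P

# Toy constant-velocity tracker fed noisy position readings.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])

x, P = np.array([0.0, 1.0]), np.eye(2)
for z in ([1.1], [1.9], [3.2], [4.0]):
    x, P = kalman_step(x, P, np.array(z), F, Q, H, R)
# The estimated velocity settles near 1, matching the trend in the readings.
```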