How to derive the Kalman gain



modern control

observer

Release date:2023/1/28         

 ・Original article in Japanese
[Prerequisite knowledge]
 ・Kalman filter
 ・variance, covariance
 ・least squares
 ・Bayesian estimation


An application example of the Kalman filter was explained in a previous article. This article explains how to derive the Kalman gain.


■Concept of Kalman gain derivation

The Kalman gain is chosen to minimize the error between the estimated value and the true value. Deriving the Kalman gain therefore means finding the gain that achieves the minimum mean squared error (MMSE).
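Written out in the standard Kalman filter notation (a reconstruction, since the original equation image is not reproduced here), the objective is:

```latex
K_k = \arg\min_{K_k} \, E\!\left[ \left\| x_k - \hat{x}_{k|k} \right\|^2 \right]
    = \arg\min_{K_k} \, \operatorname{tr}\!\left( P_{k|k} \right)
```

Here $x_k$ is the true state, $\hat{x}_{k|k}$ the estimate, and $P_{k|k}$ the error variance-covariance matrix discussed below.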



The quantity ① above is the variance (covariance) Pk, so the MMSE can be obtained by minimizing this variance (covariance). Since Pk is expressed as a matrix, it takes the following form.



Let us explain the meaning of multiplying by the transposed matrix here. Since the error can be represented as a vector, we actually carry out the calculation.



This result shows the variance values: the diagonal entries are the squares of each error, and the off-diagonal entries are the covariances. Such a matrix is called a variance-covariance matrix. The trace (the sum of the diagonal entries) is then the sum of squared errors, and minimizing it is equivalent to minimizing the mean squared error. This is the idea of the least squares method: the minimum of the sum of squared residuals occurs where the derivative is 0.
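The point above can be checked numerically. The following sketch uses arbitrary illustrative error values (not from the article) to show that the outer product of the error vector gives the squared errors on the diagonal and that its trace equals the sum of squared errors:

```python
import numpy as np

# Hypothetical estimation errors for a 3-dimensional state (illustrative values):
# e[i] = (true value) - (estimated value) for state component i.
e = np.array([[0.5], [-1.0], [2.0]])

# The outer product e e^T: diagonal entries are the squared errors,
# off-diagonal entries are the cross (covariance-like) terms.
E = e @ e.T

# The trace of E is the sum of squared errors.
print(np.trace(E))    # 0.25 + 1.0 + 4.0 = 5.25
print(np.sum(e**2))   # same value
```

Averaging such outer products over the error distribution is exactly what turns this sample matrix into the variance-covariance matrix Pk.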


■Derivation of Kalman Gain

As explained above, to derive the Kalman gain we first obtain the following error variance (covariance) Pk.



COV denotes covariance. The subscript k|k attached to the lower right of P and x denotes conditioning, as in conditional probability: Pk|k is the value at step k given the information up to step k. Here both indices are k, so the notation carries little extra meaning, but Pk|k-1 denotes the value of P at step k predicted from the information up to the previous step k-1.
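In standard notation, the definition of Pk|k reads as follows (a reconstruction of the usual formula, since the original equation image is not shown):

```latex
P_{k|k} = \operatorname{Cov}\!\left( x_k - \hat{x}_{k|k} \right)
        = E\!\left[ \left( x_k - \hat{x}_{k|k} \right)\left( x_k - \hat{x}_{k|k} \right)^{\mathsf{T}} \right]
```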

Substituting (2) into (7) gives,


Here, y is transformed as underlined.


Since the measurement noise w is independent of x,


Next, differentiate the trace using the following formula.

<Trace derivative formula>
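The original formula image is not reproduced here; the standard trace derivative identities typically used at this step are:

```latex
\frac{\partial}{\partial A} \operatorname{tr}\!\left( A B \right) = B^{\mathsf{T}},
\qquad
\frac{\partial}{\partial A} \operatorname{tr}\!\left( A B A^{\mathsf{T}} \right) = A \left( B + B^{\mathsf{T}} \right)
```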


From the above,


Next, transform the formula using the transposed matrix formula below.

<Transpose formula>
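The original formula image is not reproduced here; the standard transpose rules applied at this step are:

```latex
(AB)^{\mathsf{T}} = B^{\mathsf{T}} A^{\mathsf{T}},
\qquad
(A + B)^{\mathsf{T}} = A^{\mathsf{T}} + B^{\mathsf{T}},
\qquad
P^{\mathsf{T}} = P \;\;\text{(for symmetric } P\text{)}
```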



Since ② is a variance-covariance matrix, it is symmetric, so the above formula can be applied. Therefore,



This is how to derive the Kalman gain.
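As a concrete check, the resulting gain formula can be evaluated numerically. The sketch below assumes the standard form K = P H^T (H P H^T + R)^{-1}; all matrix values are illustrative placeholders, not taken from the article:

```python
import numpy as np

# Illustrative placeholder matrices (not from the article).
P_prior = np.array([[2.0, 0.0],
                    [0.0, 1.0]])    # a priori error covariance P_{k|k-1}
H = np.array([[1.0, 0.0]])          # observation matrix (measures the first state only)
R = np.array([[0.5]])               # measurement noise covariance

S = H @ P_prior @ H.T + R           # innovation covariance
K = P_prior @ H.T @ np.linalg.inv(S)

print(K)  # the gain weights the measurement by 2.0 / (2.0 + 0.5) = 0.8
```

A large prior uncertainty relative to the measurement noise pushes the gain toward 1 (trust the measurement); a small one pushes it toward 0 (trust the prediction).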

■Update variance

The next gain and variance are updated based on the current variance and Kalman gain. This follows the concept of Bayesian inference.

※ What is Bayesian Inference?
As the information that determines the estimate is updated over time, the estimate is updated accordingly, improving the accuracy of the estimation. It is an application of conditional probability.

<Flow to update variance with Bayesian estimation>
It is as follows.


① A priori estimation of variance/covariance
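The original equation image is not reproduced here; for the standard state-space model x_k = A x_{k-1} + w with process noise covariance Q = COV(w), the a priori covariance takes the familiar prediction form:

```latex
P_{k|k-1} = A \, P_{k-1|k-1} \, A^{\mathsf{T}} + Q
```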


Then,


Also, substituting (9) into (6):


② Posterior estimation of variance/covariance
Substituting (8) into (7) gives the following (details omitted).
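The whole prediction/update cycle described above can be sketched in a few lines. This assumes the standard forms P_prior = A P A^T + Q and P_post = (I - K H) P_prior; the scalar model values are illustrative placeholders, not from the article:

```python
import numpy as np

# Illustrative 1-dimensional model (placeholder values, not from the article).
A = np.array([[1.0]])    # state transition matrix
Q = np.array([[0.1]])    # process noise covariance
H = np.array([[1.0]])    # observation matrix
R = np.array([[0.5]])    # measurement noise covariance
I = np.eye(1)

P = np.array([[1.0]])    # current posterior covariance P_{k-1|k-1}

# ① A priori (predicted) covariance: P_{k|k-1} = A P A^T + Q
P_prior = A @ P @ A.T + Q

# Kalman gain computed from the predicted covariance
K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)

# ② A posteriori (updated) covariance: P_{k|k} = (I - K H) P_{k|k-1}
P_post = (I - K @ H) @ P_prior

print(K, P_post)
```

Repeating this cycle at every time step is exactly the Bayesian updating described above: each new measurement shrinks the posterior covariance, which in turn determines the next gain.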










