Recursive Least Squares (System Identification Toolkit)
The recursive least squares (RLS) algorithm and the Kalman filter algorithm modify the cost function J(k) = E[e²(k)] into the following exponentially weighted form:

J(k) = Σ_{i=1…k} λ^(k−i) e²(i)

Compare this modified cost function, which uses the previous error terms weighted by the forgetting factor λ, to the cost function J(k) = E[e²(k)], which uses only the current error information e(k). The modified cost function J(k) is more robust. The corresponding convergence rate of the RLS algorithm is faster, but the implementation is more complex than that of LMS-based algorithms.
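The difference between the two cost functions can be illustrated with a short numerical sketch. The error sequence and variable names below are purely illustrative, not part of the toolkit:

```python
import numpy as np

# Hypothetical error sequence e(1)..e(k), chosen only for illustration.
e = np.array([0.9, 0.5, 0.2, 0.1, 0.05])
lam = 0.98  # forgetting factor, typically 0.98 < lam < 1

# Instantaneous cost: uses only the current error e(k).
J_current = e[-1] ** 2

# RLS cost: exponentially weighted sum of the errors up to time k,
# J(k) = sum_{i=1..k} lam**(k - i) * e(i)**2
k = len(e)
weights = lam ** (k - np.arange(1, k + 1))
J_rls = np.sum(weights * e ** 2)
```

Because the weighted sum retains (discounted) information from all previous errors, `J_rls` is larger and smoother than the instantaneous cost, which is what makes the modified criterion more robust to a single small error sample.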
The following procedure describes how to implement the RLS algorithm.
1. Initialize the parametric vector θ̂(0) using a small positive number ε.
2. Initialize the data vector φ(0) = 0.
3. Initialize the matrix P(0), typically as a large multiple of the identity matrix.
4. For k = 1, update the data vector φ(k) based on φ(k−1) and the current input data u(k) and output data y(k).
5. Compute the predicted response by using the following equation: ŷ(k) = φᵀ(k)θ̂(k−1).
6. Compute the error e(k) by using the following equation: e(k) = y(k) − ŷ(k).
7. Update the gain vector defined by the following equation: K(k) = P(k−1)φ(k) / [λ + φᵀ(k)P(k−1)φ(k)].
The properties of a system might vary with time, so you must ensure that the algorithm tracks the variations. You can use the forgetting factor λ, which is an adjustable parameter, to track these variations. The smaller the forgetting factor λ, the less weight this algorithm gives to previous information, so small forgetting factors enable the adaptive filter to track systems that vary rapidly. The range of the forgetting factor λ is between zero and one, typically 0.98 < λ < 1.
P(k) is a square matrix, with dimension equal to the number of estimated parameters, whose initial value P(0) you defined in step 3.
8. Update the parametric vector by using the following equation: θ̂(k) = θ̂(k−1) + K(k)e(k).
9. Update the P(k) matrix by using the following equation: P(k) = (1/λ)[P(k−1) − K(k)φᵀ(k)P(k−1)].
10. Stop if the error is small enough; otherwise, set k = k + 1 and repeat steps 4–10.
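The procedure above can be sketched in Python. This is a minimal illustration, assuming an FIR model whose data vector holds the most recent inputs; the function name `rls_identify` and its parameters are hypothetical, not part of the toolkit:

```python
import numpy as np

def rls_identify(u, y, n_params, lam=0.99, delta=1e4, tol=0.0):
    """RLS sketch following steps 1-10 above (FIR model assumed).

    u, y     : input/output samples of the system to identify
    n_params : number of coefficients to estimate
    lam      : forgetting factor, typically 0.98 < lam < 1
    delta    : P(0) = delta * I, a large initial matrix (assumption)
    tol      : optional stopping threshold on the error (0 disables it)
    """
    theta = np.zeros(n_params)            # step 1: parametric vector
    phi = np.zeros(n_params)              # step 2: data vector
    P = delta * np.eye(n_params)          # step 3: P(0)
    for k in range(len(u)):
        # step 4: shift the current input into the data vector
        phi = np.concatenate(([u[k]], phi[:-1]))
        y_hat = phi @ theta               # step 5: predicted response
        e = y[k] - y_hat                  # step 6: error e(k)
        Pphi = P @ phi
        K = Pphi / (lam + phi @ Pphi)     # step 7: gain vector K(k)
        theta = theta + K * e             # step 8: parametric vector
        P = (P - np.outer(K, Pphi)) / lam # step 9: P(k) update
        if tol and abs(e) < tol:          # step 10: stop if error is small
            break
    return theta
```

As a usage sketch, feeding the routine the input and output of a known FIR system should recover its coefficients after enough samples, with smaller values of `lam` trading steady-state accuracy for faster tracking of time-varying coefficients.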