Recursive Methods for Continuous Time

Recursive algorithms can be used to estimate parameters and states in a wide variety of models. As Ljung & Söderström (1983) provocatively put it, “There is only one recursive identification method. It contains some design variables to be chosen by the user.” While this statement does not hold for every model, the same general algorithm does cover a wide range of linear regression and state-space models. Two common recursive algorithms in economics are recursive least squares (RLS) and the Kalman filter. We first examine the recursive least squares algorithm. We then examine the Kalman filter and transform that recursive algorithm into the RLS algorithm, before examining both systems in continuous time.

Discrete Methods

The RLS approach begins with the following difference equation model, \[\begin{equation}\nonumber y(t) = \theta^{\top}\varphi(t) + v(t) \end{equation}\] where we estimate the model by choosing the estimate that minimizes the errors of the model. In this setting, we select a weighted least squares criterion, \[\begin{equation}\nonumber V_{N}(\theta) = \frac{1}{N}\sum_{t=1}^{N} \alpha_{t}[y(t)-\theta^{\top}\varphi(t)]^2 \end{equation}\] and minimizing this criterion yields the following estimate of \(\theta\), \[\begin{equation}\nonumber \hat{\theta}(N) = \Bigg(\sum_{t=1}^{N}\alpha_{t}\varphi(t)\varphi^{\top}(t)\Bigg)^{-1}\Bigg(\sum_{t=1}^{N}\alpha_{t}\varphi(t)y(t)\Bigg). \end{equation}\] This estimate should look familiar, since it is closely related to the ordinary least squares estimator. Here the \(\alpha_{t}\) are weights that allow observations to be treated differently; in practice they are usually set to one. Next, we define \[\begin{equation}\nonumber \bar{R}(t) = \sum_{k=1}^{t} \alpha_{k}\varphi(k)\varphi^{\top}(k). \end{equation}\] From this definition we can write \(\bar{R}(t)\) recursively as, \[\begin{equation}\label{eq:R_rec}\tag{1} \bar{R}(t) = \bar{R}(t-1) + \alpha_{t}\varphi(t)\varphi^{\top}(t). \end{equation}\] Our equation for \(\hat{\theta}\) can also be rewritten, \[\begin{equation}\nonumber \begin{split} \hat{\theta}(t) &= \bar{R}^{-1}(t)\bigg(\sum_{k=1}^{t-1}\alpha_{k}\varphi(k)y(k)+\alpha_t\varphi(t)y(t)\bigg) \\ &= \hat{\theta}(t-1) + \bar{R}^{-1}(t)\varphi(t)\alpha_{t}[y(t)-\hat{\theta}^{\top}(t-1)\varphi(t)]. \end{split} \end{equation}\] Using \(R(t)=\frac{1}{t}\bar{R}(t)\) and \(\bar{R}(t-1)=(t-1)R(t-1)\) we can rewrite these equations once more and obtain a typical RLS system.
\[\begin{align}\nonumber \hat{\theta}(t) = \hat{\theta}(t-1) + \frac{1}{t}R^{-1}(t)\varphi(t)\alpha_{t}[y(t)-\hat{\theta}^{\top}(t-1)\varphi(t)], \\\nonumber R(t) = R(t-1) + \frac{1}{t}[\alpha_{t}\varphi(t)\varphi^{\top}(t)-R(t-1)] \end{align}\] To avoid the matrix inversion in the system above we can instead work with \(P(t) = \bar{R}^{-1}(t)\). Applying the matrix inversion lemma to \((\ref{eq:R_rec})\) gives, \[\begin{equation}\nonumber \begin{split} P(t) &= [P^{-1}(t-1)+\varphi(t)\alpha_{t}\varphi^{\top}(t)]^{-1} \\ &= P(t-1) - \dfrac{P(t-1)\varphi(t)\varphi^{\top}(t)P(t-1)}{1/\alpha_{t} + \varphi^{\top}(t)P(t-1)\varphi(t)}. \end{split} \end{equation}\] Thus our system becomes, \[\begin{align}\label{eq:ModifiedRLS}\tag{2} \hat{\theta}(t) = \hat{\theta}(t-1) + L(t)[y(t)-\hat{\theta}^{\top}(t-1)\varphi(t)], \\ \label{eq:ModifiedRLSmid}\tag{3} L(t) = \dfrac{P(t-1)\varphi(t)}{1/\alpha_{t} + \varphi^{\top}(t)P(t-1)\varphi(t)},\\\label{eq:ModifiedRLSend}\tag{4} P(t) = P(t-1) - \dfrac{P(t-1)\varphi(t)\varphi^{\top}(t)P(t-1)}{1/\alpha_{t} + \varphi^{\top}(t)P(t-1)\varphi(t)}. \end{align}\] If \(\alpha_{t}=1\) then \(L(t)=P(t)\varphi(t)\).
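As a concrete illustration, the recursion in \((2)\)-\((4)\) can be sketched in a few lines of Python. This is a minimal sketch with names of my own choosing, not a production implementation; the large initial \(P(0)\) plays the role of a diffuse prior.

```python
import numpy as np

def rls_update(theta, P, phi, y, alpha=1.0):
    """One step of eqs. (2)-(4): gain, estimate update, covariance update."""
    denom = 1.0 / alpha + phi @ P @ phi            # scalar 1/alpha_t + phi' P phi
    L = P @ phi / denom                            # gain L(t), eq. (3)
    theta = theta + L * (y - theta @ phi)          # estimate update, eq. (2)
    P = P - np.outer(P @ phi, P @ phi) / denom     # covariance update, eq. (4)
    return theta, P

# Recover theta_true from noiseless data y(t) = theta_true' phi(t).
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e6 * np.eye(2)            # large P(0): diffuse prior
for _ in range(50):
    phi = rng.standard_normal(2)
    theta, P = rls_update(theta, P, phi, theta_true @ phi)
```

With noiseless data the recursion reproduces the least squares solution after only a handful of observations, up to a negligible bias from the prior.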

Now, we will look at the Kalman filter and how it relates to RLS. Suppose we have the following state-space model, \[\begin{align}\label{eq:StateS1}\tag{5} x(t+1)&= F(t)x(t) + w(t), \\ \label{eq:StateS2}\tag{6} y(t)&= H(t)x(t) + e(t). \end{align}\] The Kalman filter is described by the following equations, \[\begin{align}\label{eq:stateupdate1}\tag{7} \hat{x}(t+1) &= F(t)\hat{x}(t) + K(t)[y(t) - H(t)\hat{x}(t)], \\\label{eq:Kalman1}\tag{8} K(t)&= \dfrac{F(t)P(t)H^{\top}(t)}{r_{2}(t)+H(t)P(t)H^{\top}(t)}, \\\label{eq:covariaceUpdate1}\tag{9} P(t+1) &= F(t)P(t)F^{\top}(t) + R_{1}(t) -F(t)P(t)H^{\top}(t)[r_{2}(t)+H(t)P(t)H^{\top}(t)]^{-1}H(t)P(t)F^{\top}(t), \end{align}\] where \(R_1(t)\) and \(r_2(t)\) are the covariance matrices of \(\{w(t)\}\) and \(\{e(t)\}\), respectively.
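The recursion in \((7)\)-\((9)\) can be sketched as follows for a scalar measurement. The scalar AR(1) example and all numerical values are my own; note that the covariance recursion does not depend on the data, so \(P\) converges to the fixed point of the Riccati map regardless of the realized noise.

```python
import numpy as np

def kalman_step(x_hat, P, y, F, H, R1, r2):
    """One recursion of eqs. (7)-(9) with a scalar measurement y."""
    denom = r2 + H @ P @ H                        # innovation variance (scalar)
    K = F @ P @ H / denom                         # Kalman gain, eq. (8)
    x_hat = F @ x_hat + K * (y - H @ x_hat)       # state update, eq. (7)
    v = F @ P @ H
    P = F @ P @ F.T + R1 - np.outer(v, v) / denom # covariance update, eq. (9)
    return x_hat, P

# Scalar AR(1) example: x(t+1) = 0.9 x(t) + w(t), y(t) = x(t) + e(t).
F, H = np.array([[0.9]]), np.array([1.0])
R1, r2 = 0.1 * np.eye(1), 1.0
rng = np.random.default_rng(1)
x, x_hat, P = np.zeros(1), np.zeros(1), np.eye(1)
for _ in range(200):
    x = F @ x + np.sqrt(0.1) * rng.standard_normal(1)
    y = H @ x + rng.standard_normal()
    x_hat, P = kalman_step(x_hat, P, y, F, H, R1, r2)

P_next = kalman_step(x_hat, P, 0.0, F, H, R1, r2)[1]  # one more Riccati step
```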

If we rewrite the state-space model as, \[\begin{align}\label{eq:Newmod1}\tag{10} \theta(t+1)&=\theta(t), \\\label{eq:Newmod2}\tag{11} y(t)&= \theta^{\top}(t)\varphi(t) + e(t) \end{align}\] and set \(R_1(t)=0\) and \(r_2(t)=1\), the Kalman filter becomes our RLS system. The Kalman filter for this model is, \[\begin{align}\label{eq:NewK}\tag{12} \hat{\theta}(t+1) &= \hat{\theta}(t) + K(t)[y(t) - \varphi^{\top}(t)\hat{\theta}(t)], \\\tag{13} K(t)&= \dfrac{P(t)\varphi(t)}{r_{2}(t)+\varphi^{\top}(t)P(t)\varphi(t)}, \\\tag{14} P(t+1) &= P(t)-P(t)\varphi(t)[r_{2}(t)+\varphi^{\top}(t)P(t)\varphi(t)]^{-1}\varphi^{\top}(t)P(t). \end{align}\] As we can see, this is equivalent to the system in \((\ref{eq:ModifiedRLS})-(\ref{eq:ModifiedRLSend})\) with \(1/\alpha_{t} = r_2(t) = 1\), \(K(t)=L(t)\), and some modified timing conventions. The Kalman filter and RLS are thus the same algorithm under these assumptions, which will be useful because the continuous-time Kalman filter has a more intuitive and better-documented derivation than continuous-time RLS.
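This equivalence is easy to check numerically. The sketch below (function names and test data are my own) runs eqs. \((2)\)-\((4)\) with \(\alpha_t = 1\) and eqs. \((12)\)-\((14)\) with \(F = I\), \(R_1 = 0\), \(r_2 = 1\) on the same data from the same initial conditions; the trajectories coincide step by step.

```python
import numpy as np

def rls_step(theta, P, phi, y):          # eqs. (2)-(4) with alpha_t = 1
    denom = 1.0 + phi @ P @ phi
    theta = theta + (P @ phi) * (y - theta @ phi) / denom
    P = P - np.outer(P @ phi, P @ phi) / denom
    return theta, P

def kf_step(theta, P, phi, y):           # eqs. (12)-(14) with F = I, R1 = 0, r2 = 1
    denom = 1.0 + phi @ P @ phi
    K = P @ phi / denom
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(P @ phi, P @ phi) / denom
    return theta, P

rng = np.random.default_rng(2)
th_a, P_a = np.zeros(3), np.eye(3)
th_b, P_b = np.zeros(3), np.eye(3)
for _ in range(30):
    phi, y = rng.standard_normal(3), rng.standard_normal()
    th_a, P_a = rls_step(th_a, P_a, phi, y)
    th_b, P_b = kf_step(th_b, P_b, phi, y)
```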

The Continuous-Time Kalman Filter

The continuous-time Kalman filter is used when measurements are continuous functions of time. In this section, I will go through the derivation of the continuous-time Kalman filter, following Lewis et al. (2007). If we modify \((\ref{eq:StateS1})-(\ref{eq:StateS2})\) to depend on a time increment \(\Delta t\), the system becomes, \[\begin{align}\nonumber x_{k+1} &= (I +F_k\Delta t)x_{k} + w_{k} \\ \nonumber y_{k} &= H_k x_k + e_k \end{align}\] where the covariance matrix of \(\{w_k\}\) is \(R_{1}(k)\Delta t\) and the covariance matrix of \(\{e_k\}\) is \(r_{2}(k)/\Delta t\). First, we examine what happens to the Kalman gain in \((\ref{eq:Kalman1})\) as \({\Delta t \rightarrow 0}\). The Kalman gain becomes, \[\begin{equation*} K(t)= \dfrac{(I +F(t)\Delta t)P(t)H^{\top}(t)}{(r_{2}(t)/\Delta t)+H(t)P(t)H^{\top}(t)}. \end{equation*}\] Dividing by \(\Delta t\) and rearranging, we get \[\begin{equation*} \frac{1}{\Delta t} K(t) = \dfrac{(I +F(t)\Delta t)P(t)H^{\top}(t)}{r_{2}(t)+H(t)P(t)H^{\top}(t)\Delta t} \end{equation*}\] and taking the limit yields, \[\begin{equation}\label{Kcontgain}\tag{15} \underset{\Delta t\rightarrow0}{\lim}\frac{1}{\Delta t} K(t) =P(t)H^{\top}(t)r^{-1}_{2}(t), \end{equation}\] which is our continuous-time Kalman gain.
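The limit in \((15)\) can be checked numerically. In the sketch below, \(F\), \(H\), \(P\), and \(r_2\) are made-up values for illustration; the scaled gain \(K(t)/\Delta t\) approaches \(P H^{\top} r_2^{-1}\) linearly as \(\Delta t\) shrinks.

```python
import numpy as np

# Hypothetical two-state example (all values made up for illustration).
P = np.array([[2.0, 0.3], [0.3, 1.0]])
F = np.array([[0.0, 1.0], [0.0, -0.5]])
H = np.array([1.0, 0.0])
r2 = 0.5
K_limit = P @ H / r2                               # P H' r2^{-1}, eq. (15)

def gain_over_dt(dt):
    """K(t)/dt for the discretized system with step dt."""
    return (np.eye(2) + F * dt) @ P @ H / (r2 / dt + H @ P @ H) / dt

errs = [np.linalg.norm(gain_over_dt(dt) - K_limit) for dt in (1e-1, 1e-2, 1e-3)]
```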

Now, we can examine \((\ref{eq:covariaceUpdate1})\). Rewriting \((\ref{eq:covariaceUpdate1})\) for our new system we get, \[\begin{equation*} \begin{split} P(t+1) &= (I +F(t)\Delta t)P(t)(I +F(t)\Delta t)^{\top} + R_{1}(t)\Delta t \\&-(I +F(t)\Delta t)P(t)H^{\top}(t)[(r_{2}(t)/\Delta t)+H(t)P(t)H^{\top}(t)]^{-1}H(t)P(t)(I +F(t)\Delta t)^{\top}. \end{split} \end{equation*}\] Expanding the products, dropping the term of order \((\Delta t)^2\), and dividing by \(\Delta t\) gives, \[\begin{equation*} \begin{split} \frac{1}{\Delta t} P(t+1) &=\frac{1}{\Delta t}P(t) + F(t)P(t)+ P(t)F^{\top}(t) + R_{1}(t) \\&-(I +F(t)\Delta t)P(t)H^{\top}(t)[r_{2}(t)+H(t)P(t)H^{\top}(t)\Delta t]^{-1}H(t)P(t)(I +F(t)\Delta t)^{\top}. \end{split} \end{equation*}\] Then, taking the limit as \({\Delta t \rightarrow 0}\), \[\begin{equation}\nonumber \underset{\Delta t\rightarrow0}{\lim} \frac{1}{\Delta t} \big( P(t+1) - P(t)\big)= \dot{P}(t) = F(t)P(t)+ P(t)F^{\top}(t) + R_{1}(t) -P(t)H^{\top}(t)[r_{2}(t)]^{-1}H(t)P(t), \end{equation}\] which is our continuous-time covariance updating equation.
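This limit can also be verified numerically: the finite difference \((P(t+1)-P(t))/\Delta t\) from the discretized update should approach the continuous Riccati right-hand side as \(\Delta t \rightarrow 0\). All matrices below are made up for illustration.

```python
import numpy as np

# Hypothetical two-state example (all values made up for illustration).
P = np.array([[1.0, 0.2], [0.2, 0.8]])
F = np.array([[0.0, 1.0], [-1.0, -0.3]])
H = np.array([1.0, 0.0])
R1, r2 = 0.1 * np.eye(2), 2.0

# Continuous-time Riccati right-hand side: FP + PF' + R1 - PH'r2^{-1}HP.
P_dot = F @ P + P @ F.T + R1 - np.outer(P @ H, P @ H) / r2

def discrete_diff(dt):
    """(P(t+1) - P(t))/dt from eq. (9) applied to the discretized system."""
    Fd = np.eye(2) + F * dt
    v = Fd @ P @ H
    P_next = Fd @ P @ Fd.T + R1 * dt - np.outer(v, v) / (r2 / dt + H @ P @ H)
    return (P_next - P) / dt

errs = [np.linalg.norm(discrete_diff(dt) - P_dot) for dt in (1e-2, 1e-3, 1e-4)]
```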

Last, we will derive the estimate updating equation. In this setting \((\ref{eq:stateupdate1})\) becomes, \[\begin{equation*} \hat{x}(t+1) = (I +F(t)\Delta t)\hat{x}(t) + K(t)[y(t) - H(t)\hat{x}(t)]. \end{equation*}\] Subtracting \(\hat{x}(t)\) from both sides and dividing by \(\Delta t\) gives, \[\begin{equation*} \frac{1}{\Delta t}(\hat{x}(t+1)-\hat{x}(t)) = F(t)\hat{x}(t) + \frac{K(t)}{\Delta t}[y(t) - H(t)\hat{x}(t)]. \end{equation*}\] Now, taking the limit as \(\Delta t \rightarrow 0\) and using equation \((\ref{Kcontgain})\), \[\begin{equation}\nonumber \dot{\hat{x}}(t) = F(t)\hat{x}(t) + P(t)H^{\top}(t)r^{-1}_{2}(t)[y(t) - H(t)\hat{x}(t)], \end{equation}\] which is the estimate updating equation for our system.

Thus our continuous-time Kalman filter for this system can be described by the following equations. The system can be rewritten as, \[\begin{align}\tag{16} \dot{x} &= Fx + w \\\tag{17} y &= Hx + v \end{align}\] and the Kalman filter is, \[\begin{align}\tag{18} \dot{P} &= FP+ PF^{\top} + R_{1} -PH^{\top}[r_{2}]^{-1}HP\\\tag{19} K &=PH^{\top}r^{-1}_{2} \\ \tag{20} \dot{\hat{x}}&= F\hat{x} + K[y - H\hat{x}]. \end{align}\] Now that we have established how to derive the continuous-time Kalman filter, we can use it to obtain a continuous-time version of RLS.
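In practice the Riccati equation \((18)\) is integrated numerically. The sketch below Euler-integrates \((18)\) for a made-up oscillator with position measurements; a fixed point of the Euler iteration is exactly a root of the right-hand side, so after a long integration \(P\) satisfies the algebraic Riccati equation and \((19)\) gives the steady-state gain.

```python
import numpy as np

# Hypothetical oscillator observed in position only (all values made up).
F = np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.array([1.0, 0.0])
R1, r2 = np.eye(2), 0.1
dt, T = 1e-3, 20.0

# Euler integration of the Riccati equation (18).
P = np.eye(2)
for _ in range(int(T / dt)):
    P = P + dt * (F @ P + P @ F.T + R1 - np.outer(P @ H, P @ H) / r2)

K = P @ H / r2                                     # steady-state gain, eq. (19)
residual = F @ P + P @ F.T + R1 - np.outer(P @ H, P @ H) / r2
```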

Continuous-Time Recursive Least Squares

We can rewrite the system in \((\ref{eq:Newmod1})-(\ref{eq:Newmod2})\) as, \[\begin{align}\tag{21} \dot{\theta}(t) &= 0 \\\tag{22} y(t) &= \theta^\top(t)\varphi(t) + e(t). \end{align}\] Now, with \(R_{1}=0\) and the variance of \(e(t)\) equal to \(1\), our RLS system will be \[\begin{align}\label{Kcont2}\tag{23} \dot{P} &= -P\varphi\varphi^{\top} P\\\tag{24} K &=P\varphi\\ \label{Kcont2end}\tag{25} \dot{\hat{\theta}}(t)&= K[y(t) - \hat{\theta}^{\top}(t)\varphi(t)]. \end{align}\]
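The system \((23)\)-\((25)\) can be simulated by Euler integration. In the sketch below the regressor path, true parameter, and step sizes are my own choices; with a persistently exciting regressor and noiseless measurements, \(\hat{\theta}(t)\) converges toward the true parameter as the information accumulates.

```python
import numpy as np

theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 10.0 * np.eye(2)     # diffuse prior P(0)
dt, T = 1e-3, 50.0

for k in range(int(T / dt)):
    t = k * dt
    phi = np.array([1.0, np.sin(t)])         # persistently exciting regressor
    y = theta_true @ phi                     # noiseless measurement
    theta = theta + dt * (P @ phi) * (y - theta @ phi)  # eq. (25), K = P phi
    P = P - dt * np.outer(P @ phi, P @ phi)             # eq. (23)
```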

We can also derive this more rigorously, starting from a discretized version of the model. The discretized version of our model with an undetermined time step \(\Delta t\) is, \[\begin{align}\nonumber \theta_{k+1} &= \theta_{k}\\\nonumber y_k &= \theta_{k}^\top\varphi_k + e_k \end{align}\] where the variance of \(\{e_k\}\) is \(1/\Delta t\) and the weights are \(\alpha_{k} = \Delta t\). First, we can examine the gain term in \((\ref{eq:ModifiedRLSmid})\). Writing \((\ref{eq:ModifiedRLSmid})\) in this setting we have, \[\begin{equation*} \begin{split} L(t) &= P(t-1)\varphi(t)[(1/\Delta t)+ \varphi^{\top}(t)P(t-1)\varphi(t)]^{-1}\\ &=P(t-1)\varphi(t)\Delta t[1+ \varphi^{\top}(t)P(t-1)\varphi(t)\Delta t]^{-1}. \end{split} \end{equation*}\] Dividing through by \(\Delta t\) and then taking the limit as \(\Delta t \rightarrow 0\) we get, \[\begin{equation}\tag{26} K = \underset{\Delta t \rightarrow 0}{\lim}\frac{1}{\Delta t} L(t) = P(t-1)\varphi(t). \end{equation}\]
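The limit in \((26)\) can be checked directly: with \(\alpha_t = \Delta t\), the scaled gain \(L(t)/\Delta t\) approaches \(P(t-1)\varphi(t)\). The matrix and regressor below are made up for illustration.

```python
import numpy as np

# Hypothetical P(t-1) and phi(t), made up for illustration.
P_prev = np.array([[2.0, 0.5], [0.5, 1.0]])
phi = np.array([1.0, -2.0])
K = P_prev @ phi                                   # limiting gain, eq. (26)

def L_over_dt(dt):
    """L(t)/dt from eq. (3) with alpha_t = dt, so 1/alpha_t = 1/dt."""
    return P_prev @ phi / (1.0 / dt + phi @ P_prev @ phi) / dt

errs = [np.linalg.norm(L_over_dt(dt) - K) for dt in (1e-2, 1e-3, 1e-4)]
```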

Next, if we look at \((\ref{eq:ModifiedRLSend})\) we can rewrite this equation as, \[\begin{equation}\nonumber \begin{split} P(t)-P(t-1) &= -P(t-1)\varphi(t)\varphi^{\top}(t)P(t-1)[(1/\Delta t) + \varphi^{\top}(t)P(t-1)\varphi(t)]^{-1}\\ &= -P(t-1)\varphi(t)\varphi^{\top}(t)P(t-1)\Delta t[1 + \varphi^{\top}(t)P(t-1)\varphi(t)\Delta t]^{-1}. \end{split} \end{equation}\] Dividing through by \(\Delta t\) and then taking the limit as \(\Delta t \rightarrow 0\) we get, \[\begin{equation}\tag{27} \dot{P}(t) = -P(t-1)\varphi(t)\varphi^{\top}(t)P(t-1) = -K\varphi^{\top}(t)P(t-1). \end{equation}\]

Last, we can derive the continuous-time estimate updating equation from \((\ref{eq:ModifiedRLS})\). Rewriting this equation and dividing through by \(\Delta t\) yields, \[\begin{equation}\nonumber \frac{1}{\Delta t}\big(\hat{\theta}(t)-\hat{\theta}(t-1)\big) = \frac{1}{\Delta t}L(t)[y(t)-\hat{\theta}^{\top}(t-1)\varphi(t)]. \end{equation}\] Taking the limit as \(\Delta t \rightarrow 0\) we get, \[\begin{equation}\tag{28} \dot{\hat{\theta}}(t) = K[y(t)-\hat{\theta}^{\top}(t)\varphi(t)]. \end{equation}\] The equations we have just derived are the same as the Kalman filter equations in \((\ref{Kcont2})-(\ref{Kcont2end})\). Again, we have shown that RLS and the Kalman filter are the same algorithm under particular assumptions.


Lewis, F., Xie, L., & Popa, D. 2007. Optimal and Robust Estimation: With an Introduction to Stochastic Control Theory. Second edn. CRC Press.

Ljung, L., & Söderström, T. 1983. Theory and Practice of Recursive Identification. MIT Press.