Iterated filtering

Iterated filtering algorithms are a tool for maximum likelihood inference on partially observed dynamical systems. Stochastic perturbations to the unknown parameters are used to explore the parameter space. Applying sequential Monte Carlo (the particle filter) to this extended model results in the selection of the parameter values that are more consistent with the data. Appropriately constructed procedures, iterating with successively diminished perturbations, converge to the maximum likelihood estimate. Iterated filtering methods have so far been used most extensively to study infectious disease transmission dynamics. Case studies include cholera, Ebola virus, influenza, malaria, HIV, pertussis, poliovirus and measles. Other areas which have been proposed to be suitable for these methods include ecological dynamics and finance.

The perturbations to the parameter space play several different roles. Firstly, they smooth out the likelihood surface, enabling the algorithm to overcome small-scale features of the likelihood during early stages of the global search. Secondly, Monte Carlo variation allows the search to escape from local maxima that would otherwise trap it. Thirdly, the iterated filtering update uses the perturbed parameter values to construct an approximation to the derivative of the log likelihood, even though this quantity is not typically available in closed form. Fourthly, the parameter perturbations help to overcome numerical difficulties that can arise during sequential Monte Carlo.

Overview
The data are a time series $$y_1,\dots,y_N$$ collected at times $$t_1 < t_2 < \dots < t_N$$. The dynamic system is modeled by a Markov process $$X(t)$$ which is generated by a function $$f(x,s,t,\theta,W)$$ in the sense that


 * $$X(t_n)=f(X(t_{n-1}),t_{n-1},t_n,\theta,W)$$

where $$\theta$$ is a vector of unknown parameters and $$W$$ is some random quantity that is drawn independently each time $$f(\cdot)$$ is evaluated. An initial condition $$X(t_0)$$ at some time $$t_0<t_1$$ is specified by an initialization function, $$X(t_0)=h(\theta)$$. A measurement density $$g(y_n|X(t_n),t_n,\theta)$$ completes the specification of a partially observed Markov process. We present a basic iterated filtering algorithm (IF1) followed by an iterated filtering algorithm implementing an iterated, perturbed Bayes map (IF2).
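The three model components above can be sketched concretely. The following is a minimal illustration, assuming a toy model (a Gaussian random walk observed with Gaussian noise); the parameter names `x0`, `sigma_proc`, and `sigma_obs` are inventions for this example, not part of the general framework.

```python
import math
import random

# Toy partially observed Markov process (illustrative assumptions):
# state X(t) is a Gaussian random walk, observed with Gaussian noise.
# theta = (x0, sigma_proc, sigma_obs) is a hypothetical parameter vector.

def h(theta):
    """Initialization function: X(t_0) = h(theta)."""
    return theta["x0"]

def f(x, s, t, theta, rng):
    """Process model advancing the state from time s to time t.
    The random quantity W is realized through draws from rng."""
    dt = t - s
    return x + rng.gauss(0.0, theta["sigma_proc"] * math.sqrt(dt))

def g(y, x, t, theta):
    """Measurement density g(y_n | X(t_n), t_n, theta)."""
    sd = theta["sigma_obs"]
    z = (y - x) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

# Simulate a short data set y_1, ..., y_5 from the model.
rng = random.Random(1)
theta_true = {"x0": 0.0, "sigma_proc": 1.0, "sigma_obs": 0.5}
times = [1.0, 2.0, 3.0, 4.0, 5.0]
x, s, data = h(theta_true), 0.0, []
for t in times:
    x = f(x, s, t, theta_true, rng)
    data.append(rng.gauss(x, theta_true["sigma_obs"]))
    s = t
```

Any model that can be simulated forward in time (`f`, `h`) and whose observations have an evaluable density (`g`) fits this template; the dynamics themselves need not have a tractable transition density.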

Procedure: Iterated filtering (IF1)

 * Input: A partially observed Markov model specified as above; Monte Carlo sample size $$J$$; number of iterations $$M$$; cooling parameters $$0<a<1$$ and $$b$$; covariance matrix $$\Phi$$; initial parameter vector $$\theta^{(1)}$$


 * for $$m=1$$ to $$M$$
 * draw $$\Theta_F(t_0,j)\sim \mathrm{Normal}(\theta^{(m)},b\,a^{m-1}\Phi)$$ for $$j=1,\dots,J$$
 * set $$X_F(t_0,j)=h\big(\Theta_F(t_0,j)\big)$$ for $$j=1,\dots,J$$
 * set $$\bar\theta(t_0)=\theta^{(m)}$$
 * for $$n=1$$ to $$N$$
 * draw $$\Theta_P(t_n,j)\sim \mathrm{Normal}(\Theta_F(t_{n-1},j), a^{m-1}\Phi)$$ for $$j=1,\dots,J$$
 * set $$X_P(t_n,j)=f(X_F(t_{n-1},j),t_{n-1},t_n,\Theta_P(t_n,j),W)$$ for $$j=1,\dots,J$$
 * set $$w(n,j) = g(y_n|X_P(t_n,j),t_n,\Theta_P(t_n,j))$$ for $$j=1,\dots,J$$
 * draw $$k_1,\dots,k_J$$ such that $$P(k_j=i)=w(n,i)\big/{\textstyle\sum}_\ell w(n,\ell)$$
 * set $$X_F(t_n,j)=X_P(t_n,k_j)$$ and $$\Theta_F(t_n,j)=\Theta_P(t_n,k_j)$$ for $$j=1,\dots,J$$
 * set $$\bar\theta_i(t_n)$$ to the sample mean of $$\{\Theta_{F,i}(t_n,j),j=1,\dots,J\}$$, where the vector $$\Theta_F$$ has components $$\{\Theta_{F,i}\}$$
 * set $$V_i(t_n)$$ to the sample variance of $$\{\Theta_{P,i}(t_n,j),j=1,\dots,J\}$$
 * set $$\theta_i^{(m+1)}= \theta_i^{(m)}+V_i(t_1)\sum_{n=1}^N V_i^{-1}(t_n)\big(\bar\theta_i(t_n)-\bar\theta_i(t_{n-1})\big)$$


 * Output: Maximum likelihood estimate $$\hat\theta=\theta^{(M+1)}$$
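The IF1 procedure above can be sketched in code. This is a minimal illustration, not a production implementation: it assumes a hypothetical toy model with a single unknown drift parameter (so $$\Phi$$ reduces to a scalar `phi`), and the default values for $$J$$, $$M$$, $$a$$, and $$b$$ are arbitrary choices for the example.

```python
import math
import random

rng = random.Random(0)

# Toy model (illustrative assumptions): X_n = X_{n-1} + theta + N(0,1),
# y_n = X_n + N(0, 0.5^2), initial state h(theta) = 0;
# the scalar drift theta is the only unknown parameter.
SIG_OBS = 0.5

def f(x, theta):
    return x + theta + rng.gauss(0.0, 1.0)

def g(y, x):
    z = (y - x) / SIG_OBS
    return math.exp(-0.5 * z * z) / (SIG_OBS * math.sqrt(2.0 * math.pi))

def if1(data, theta0, J=500, M=30, a=0.9, b=2.0, phi=0.04):
    theta = theta0
    for m in range(1, M + 1):
        tau2 = a ** (m - 1) * phi                    # cooled perturbation variance
        # draw Theta_F(t_0, j) ~ Normal(theta^(m), b * a^(m-1) * phi)
        Theta_F = [rng.gauss(theta, math.sqrt(b * tau2)) for _ in range(J)]
        X_F = [0.0] * J                              # h(theta) = 0 here
        bar_prev = theta                             # bar-theta(t_0)
        V1, incr = None, 0.0
        for y in data:
            # perturb, propagate, weight, and resample the particles
            Theta_P = [rng.gauss(th, math.sqrt(tau2)) for th in Theta_F]
            X_P = [f(x, th) for x, th in zip(X_F, Theta_P)]
            w = [g(y, x) for x in X_P]
            ks = rng.choices(range(J), weights=w, k=J)
            X_F = [X_P[k] for k in ks]
            Theta_F = [Theta_P[k] for k in ks]
            # filtered mean and prediction variance of the parameter
            bar = sum(Theta_F) / J
            mu_P = sum(Theta_P) / J
            V = sum((th - mu_P) ** 2 for th in Theta_P) / (J - 1)
            if V1 is None:
                V1 = V                               # V(t_1)
            incr += (bar - bar_prev) / V             # V^{-1}(t_n) * increment
            bar_prev = bar
        theta = theta + V1 * incr                    # IF1 parameter update
    return theta

# Simulate data with true drift 1.0, then run IF1 from a poor start.
truth, x, data = 1.0, 0.0, []
for _ in range(50):
    x = f(x, truth)
    data.append(x + rng.gauss(0.0, SIG_OBS))
theta_hat = if1(data, theta0=0.0)
```

The inner loop is an ordinary particle filter on the parameter-perturbed model; only the final update line, which weights the filtered-mean increments by the inverse prediction variances, is specific to IF1.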

Variations

 * 1) For IF1, parameters which enter the model only in the specification of the initial condition, $$X(t_0)$$, warrant some special algorithmic attention since information about them in the data may be concentrated in a small part of the time series.
 * 2) Theoretically, any distribution with the requisite mean and variance could be used in place of the normal distribution. It is standard to use the normal distribution and to reparameterise to remove constraints on the possible values of the parameters.
 * 3) Modifications to the IF1 algorithm have been proposed to give superior asymptotic performance.
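Variation 2's reparameterisation is commonly done with a log transform for positive parameters. A minimal sketch (the function names are inventions for this example):

```python
import math

# A rate parameter lambda > 0 can be perturbed on the unconstrained
# log scale, so additive normal perturbations never produce an
# invalid (negative) value on the natural scale.
def to_estimation_scale(lam):
    return math.log(lam)

def from_estimation_scale(psi):
    return math.exp(psi)

lam = 2.5
psi = to_estimation_scale(lam)
perturbed = from_estimation_scale(psi + 0.1)   # still strictly positive
```

Similarly, a logit transform is a standard choice for parameters constrained to an interval.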

Procedure: Iterated filtering (IF2)

 * Input: A partially observed Markov model specified as above; Monte Carlo sample size $$J$$; number of iterations $$M$$; cooling parameter $$0<a<1$$; covariance matrix $$\Phi$$; initial parameter vectors $$\{\Theta_j, j=1,\dots,J\}$$


 * for $$m=1$$ to $$M$$
 * draw $$\Theta_F(t_0,j) \sim \mathrm{Normal}(\Theta_j, a^{m-1}\Phi)$$ for $$j=1,\dots,J$$
 * set $$X_F(t_0,j)=h\big(\Theta_F(t_0,j)\big)$$ for $$j=1,\dots,J$$
 * for $$n=1$$ to $$N$$
 * draw $$\Theta_P(t_n,j)\sim \mathrm{Normal}(\Theta_F(t_{n-1},j), a^{m-1}\Phi)$$ for $$j=1,\dots,J$$
 * set $$X_P(t_n,j)=f(X_F(t_{n-1},j),t_{n-1},t_n,\Theta_P(t_n,j),W)$$ for $$j=1,\dots,J$$
 * set $$w(n,j) = g(y_n|X_P(t_n,j),t_n,\Theta_P(t_n,j))$$ for $$j=1,\dots,J$$
 * draw $$k_1,\dots,k_J$$ such that $$P(k_j=i)=w(n,i)\big/{\textstyle\sum}_\ell w(n,\ell)$$
 * set $$X_F(t_n,j)=X_P(t_n,k_j)$$ and $$\Theta_F(t_n,j)=\Theta_P(t_n,k_j)$$ for $$j=1,\dots,J$$
 * set $$\Theta_j=\Theta_F(t_N,j)$$ for $$j=1,\dots,J$$


 * Output: Parameter vectors approximating the maximum likelihood estimate, $$\{\Theta_j, j=1,\dots, J \}$$
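IF2 is simpler to implement than IF1, since each iteration is just a particle filter on the parameter-perturbed model, with the resulting parameter swarm carried into the next iteration. A minimal sketch follows, assuming the same hypothetical toy model used throughout this article's examples (a random walk with unknown scalar drift); all tuning values are arbitrary illustrations.

```python
import math
import random

rng = random.Random(0)

# Toy model (illustrative assumptions): X_n = X_{n-1} + theta + N(0,1),
# y_n = X_n + N(0, 0.5^2), h(theta) = 0; scalar drift theta unknown.
SIG_OBS = 0.5

def f(x, theta):
    return x + theta + rng.gauss(0.0, 1.0)

def g(y, x):
    z = (y - x) / SIG_OBS
    return math.exp(-0.5 * z * z) / (SIG_OBS * math.sqrt(2.0 * math.pi))

def if2(data, thetas, M=30, a=0.9, phi=0.04):
    """Each iteration m runs a particle filter with cooled parameter
    perturbations; the ensemble {Theta_j} is passed to iteration m+1."""
    J = len(thetas)
    for m in range(1, M + 1):
        sd = math.sqrt(a ** (m - 1) * phi)   # cooled perturbation sd
        Theta_F = [rng.gauss(th, sd) for th in thetas]
        X_F = [0.0] * J                      # h(theta) = 0 here
        for y in data:
            Theta_P = [rng.gauss(th, sd) for th in Theta_F]
            X_P = [f(x, th) for x, th in zip(X_F, Theta_P)]
            w = [g(y, x) for x in X_P]
            ks = rng.choices(range(J), weights=w, k=J)
            X_F = [X_P[k] for k in ks]
            Theta_F = [Theta_P[k] for k in ks]
        thetas = list(Theta_F)               # Theta_j <- Theta_F(t_N, j)
    return thetas

# Simulate data with true drift 1.0, then run IF2 from a dispersed swarm.
truth, x, data = 1.0, 0.0, []
for _ in range(50):
    x = f(x, truth)
    data.append(x + rng.gauss(0.0, SIG_OBS))
swarm = if2(data, [rng.uniform(-1.0, 3.0) for _ in range(500)])
center = sum(swarm) / len(swarm)
```

As the perturbations cool, the swarm concentrates; its center (or the full ensemble spread) approximates the maximum likelihood estimate, without the explicit derivative-based update that IF1 requires.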

Software
"pomp: statistical inference for partially observed Markov processes": R package.