
Iterated filtering algorithms are a tool for maximum likelihood inference on partially observed dynamical systems. Stochastic perturbations of the unknown parameters are used to explore the parameter space. Applying sequential Monte Carlo (the particle filter) to this extended model results in the selection of the parameter values most consistent with the data. Appropriately constructed procedures, iterating with successively diminished perturbations, converge to the maximum likelihood estimate. Iterated filtering methods have so far been used most extensively to study infectious disease transmission dynamics. Case studies include cholera, influenza, malaria and measles. Other areas proposed as suitable for these methods include ecological dynamics and finance.

Overview
The data are a time series $$y_1,\dots,y_N$$ collected at times $$ t_1 < t_2 < \dots < t_N$$. The dynamic system is modeled by a Markov process $$ X(t)$$ which is generated by a function $$f(x,s,t,\theta,W)$$ in the sense that

$$X(t_n)=f(X(t_{n-1}),t_{n-1},t_n,\theta,W)$$

where $$\theta$$ is a vector of unknown parameters and $$W$$ is some random quantity that is drawn independently each time $$f(\cdot)$$ is evaluated. An initial condition $$X(t_0)$$ at some time $$t_0<t_1$$, together with a measurement density $$g(y_n|X(t_n),t_n,\theta)$$, completes the specification of a partially observed Markov process. A basic iterated filtering algorithm is as follows:
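This specification can be made concrete with a small simulation. The following sketch uses a hypothetical toy model (a drift-plus-noise state with Gaussian measurements; the particular forms of $$f$$ and $$g$$ and the parameter layout are illustrative choices, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x, t_prev, t_now, theta, rng):
    """One step of the Markov process X(t); the random quantity W is the
    standard-normal draw below. Toy choice: drift theta[0], process sd theta[1]."""
    dt = t_now - t_prev
    return x + theta[0] * dt + theta[1] * np.sqrt(dt) * rng.standard_normal()

def g(y, x, t, theta):
    """Measurement density g(y_n | X(t_n), t_n, theta): Normal(x, theta[2]^2)."""
    sd = theta[2]
    return np.exp(-0.5 * ((y - x) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Simulate a data set y_1, ..., y_N at some "true" parameter values.
theta_true = np.array([0.5, 1.0, 0.7])   # drift, process sd, measurement sd
times = np.arange(0.0, 11.0)             # t_0 = 0 and N = 10 observation times
x = 0.0                                  # initial condition X(t_0)
ys = []
for n in range(1, len(times)):
    x = f(x, times[n - 1], times[n], theta_true, rng)
    ys.append(x + theta_true[2] * rng.standard_normal())
```

In a real application $$f$$ would typically be a stochastic model of the system under study (e.g., a disease transmission model) and only the data $$y_1,\dots,y_N$$ would be observed.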

Input
 * A partially observed Markov model specified as above
 * Algorithmic parameters: Monte Carlo sample size $$J$$; number of iterations $$M$$; cooling parameters $$0<a<1$$ and $$b$$; covariance matrix $$\Phi$$; initial parameter vector $$\theta^{(1)}$$

Procedure: Iterated filtering
 * for $$m=1$$ to $$M$$
   * set $$X_F(t_0,j)=X(t_0)$$ for $$j=1,\dots,J$$
   * draw $$\theta(t_0,j)\sim Normal(\theta^{(m)},b\,a^{m-1}\Phi)$$ for $$j=1,\dots,J$$
   * set $$\bar\theta(t_0)=\theta^{(m)}$$
   * for $$n=1$$ to $$N$$
     * set $$X_P(t_n,j)=f(X_F(t_{n-1},j),t_{n-1},t_n,\theta(t_{n-1},j),W)$$ for $$j=1,\dots,J$$
     * set $$w(n,j) = g(y_n|X_P(t_n,j),t_n,\theta(t_{n-1},j))$$ for $$j=1,\dots,J$$
     * draw $$k_1,\dots,k_J$$ such that $$P(k_j=i)=w(n,i)\big/{\sum}_\ell w(n,\ell)$$
     * set $$X_F(t_n,j)=X_P(t_n,k_j)$$ for $$j=1,\dots,J$$
     * draw $$\theta(t_n,j)\sim Normal(\theta(t_{n-1},k_j), a^{m-1}\Phi)$$ for $$j=1,\dots,J$$
     * set $$\bar\theta_i(t_n)$$ to the sample mean of $$\{\theta_i(t_{n-1},k_j),j=1,\dots,J\}$$, where $$\theta$$ has components $$\{\theta_i\}$$
     * set $$V_i(t_n)$$ to the sample variance of $$\{\theta_i(t_{n},k_j),j=1,\dots,J\}$$
   * set $$\theta_i^{(m+1)}= \theta_i^{(m)}+V_i(t_{1})\sum_{n=1}^N V_i^{-1}(t_{n})\big(\bar\theta_i(t_n)-\bar\theta_i(t_{n-1})\big)$$

Output
 * maximum likelihood estimate $$\hat\theta=\theta^{(M+1)}$$
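The procedure above can be sketched in Python. This is a minimal illustration, not a reference implementation: it assumes a scalar state and a diagonal covariance matrix $$\Phi$$, and the toy model `f_toy`/`g_toy` used at the end is a hypothetical example, not from the article.

```python
import numpy as np

def iterated_filtering(f, g, x0, times, ys, theta1, J, M, a, b, Phi, rng):
    """Basic iterated filtering, following the pseudocode above.

    f, g   : process step and measurement density, as specified in the article
    theta1 : initial parameter vector theta^(1)
    Phi    : covariance matrix (assumed diagonal in this sketch)
    """
    sd0 = np.sqrt(np.diag(Phi))              # perturbation standard deviations
    theta_m = np.asarray(theta1, dtype=float)
    d, N = len(theta_m), len(ys)
    for m in range(1, M + 1):
        sd_m = a ** ((m - 1) / 2.0) * sd0    # sd for covariance a^(m-1) * Phi
        XF = np.full(J, x0, dtype=float)     # filtered particles X_F(t_0, j)
        theta = theta_m + np.sqrt(b) * sd_m * rng.standard_normal((J, d))
        theta_bar = np.empty((N + 1, d))
        theta_bar[0] = theta_m               # bar-theta(t_0) = theta^(m)
        V = np.empty((N, d))
        for n in range(1, N + 1):
            XP = np.array([f(XF[j], times[n - 1], times[n], theta[j], rng)
                           for j in range(J)])   # prediction particles X_P(t_n, j)
            w = np.array([g(ys[n - 1], XP[j], times[n], theta[j])
                          for j in range(J)])    # weights w(n, j)
            k = rng.choice(J, size=J, p=w / w.sum())   # resampling indices k_j
            XF = XP[k]
            theta_bar[n] = theta[k].mean(axis=0)       # filtered parameter mean
            theta = theta[k] + sd_m * rng.standard_normal((J, d))  # perturb
            V[n - 1] = theta.var(axis=0, ddof=1)       # V_i(t_n)
        # theta^(m+1) = theta^(m) + V(t_1) * sum_n V(t_n)^{-1} (dbar-theta_n)
        theta_m = theta_m + V[0] * np.sum((theta_bar[1:] - theta_bar[:-1]) / V,
                                          axis=0)
    return theta_m

# Toy usage (hypothetical model): random-walk state with unknown drift theta[0],
# unit process and measurement noise; the data are simulated with true drift 0.4.
rng = np.random.default_rng(0)
def f_toy(x, t_prev, t_now, th, rng):
    return x + th[0] + rng.standard_normal()
def g_toy(y, x, t, th):
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi)
times = np.arange(21.0)
xs = np.cumsum(0.4 + rng.standard_normal(20))    # latent path
ys = xs + rng.standard_normal(20)                # observations
est = iterated_filtering(f_toy, g_toy, 0.0, times, ys, np.array([0.0]),
                         J=200, M=15, a=0.9, b=2.0, Phi=np.array([[0.1]]),
                         rng=rng)
```

The update for $$\theta^{(m+1)}$$ is a weighted average of how the filtered parameter mean drifts through the data; with the cooling factor $$a^{m-1}$$ shrinking the perturbations, successive iterates settle near the maximum likelihood estimate.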