Sequential Monte Carlo
A flexible sampler for nonlinear state-space models
Sequential Monte Carlo (SMC), also known as particle filtering, is a flexible, simulation-based sampling method for approximating the posterior distribution of the latent states in nonlinear state-space models.
The target distribution over the latent variables \(\mathrm{x_{1:t}=(x_1, \cdots, x_t)}\) follows
\[\begin{align} \mathrm{\gamma_t(x_{1:t}):=\frac{1}{Z_t} \widetilde \gamma_t(x_{1:t}), \ \ t\in\{1,2,\cdots, T\}}.\notag \end{align}\]

State Space Models
\[\begin{align} \mathrm{x_t|x_{t-1}}&\sim \mathrm{f(\cdot|x_{t-1})},\notag\\ \mathrm{y_t|x_t}&\sim \mathrm{g(\cdot|x_t)}.\notag \end{align}\]

The joint PDF follows
\[\begin{align} \mathrm{\widetilde \gamma_t(x_{1:t}):=p(x_{1:t}, y_{1:t})=p(x_1)g(y_1|x_1) \prod_{s=2}^{t} f(x_s|x_{s-1}) g(y_s|x_s)},\label{joint_pdf} \end{align}\]

where the final normalized density is the smoothing posterior \(\mathrm{\gamma_T(x_{1:T})=p(x_{1:T}|y_{1:T})}\) with normalizing constant \(\mathrm{Z_T=p(y_{1:T})}\).
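To make the notation concrete, here is a minimal simulation sketch assuming a linear-Gaussian instance of the model, \(\mathrm{f(x_t|x_{t-1})=\mathcal{N}(x_t; a x_{t-1}, \sigma_x^2)}\) and \(\mathrm{g(y_t|x_t)=\mathcal{N}(y_t; x_t, \sigma_y^2)}\); the function name and parameter defaults are illustrative, not taken from any reference implementation.

```python
import numpy as np

def simulate_ssm(T, a=0.9, sigma_x=1.0, sigma_y=0.5, seed=None):
    """Simulate x_t | x_{t-1} ~ N(a x_{t-1}, sigma_x^2), y_t | x_t ~ N(x_t, sigma_y^2)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(T)
    y = np.zeros(T)
    x[0] = rng.normal(0.0, sigma_x)                # x_1 ~ p(x_1)
    y[0] = rng.normal(x[0], sigma_y)               # y_1 ~ g(. | x_1)
    for t in range(1, T):
        x[t] = rng.normal(a * x[t - 1], sigma_x)   # x_t ~ f(. | x_{t-1})
        y[t] = rng.normal(x[t], sigma_y)           # y_t ~ g(. | x_t)
    return x, y

x, y = simulate_ssm(T=50, seed=0)
```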
Sequential Importance Sampling
We can consider a proposal that factorizes sequentially,
\[\begin{align} \mathrm{q_t(x_{1:t})=q_{t-1}(x_{1:t-1})\, q_t(x_t|x_{1:t-1})}.\notag \end{align}\]

The unnormalized importance weights then satisfy
\[\begin{align} \mathrm{\widetilde w_t(x_{1:t})}&=\mathrm{\frac{\widetilde \gamma_t(x_{1:t})}{q_t(x_{1:t})}=\frac{\widetilde \gamma_{t-1}(x_{1:t-1})}{q_{t-1}(x_{1:t-1})}\cdot \frac{\widetilde \gamma_t(x_{1:t})}{\widetilde \gamma_{t-1}(x_{1:t-1})\, q_t(x_t|x_{1:t-1})}},\notag \end{align}\]

so the weight at step \(\mathrm{t}\) is the previous weight times an incremental factor. Sequential importance sampling suffers from weight degeneracy: as \(\mathrm{t}\) grows, the variance of the weights increases and the normalized weights concentrate on a handful of particles, as illustrated below.
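A minimal sketch of sequential importance sampling for the linear-Gaussian model above, using the transition density \(\mathrm{f}\) as the proposal so that the incremental weight reduces to \(\mathrm{g(y_t|x_t)}\). The effective sample size printed at each step is one common way to observe the degeneracy; the function name and defaults are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def sis(y, N=1000, a=0.9, sigma_x=1.0, sigma_y=0.5, seed=None):
    """Sequential importance sampling with the transition f as proposal (no resampling)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, size=N)                   # particles at t = 1
    logw = norm.logpdf(y[0], loc=x, scale=sigma_y)         # log g(y_1 | x_1)
    for t in range(1, len(y)):
        x = rng.normal(a * x, sigma_x)                     # propagate through f(. | x_{t-1})
        logw += norm.logpdf(y[t], loc=x, scale=sigma_y)    # incremental weight g(y_t | x_t)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w**2)                           # effective sample size
        print(f"t = {t + 1}: ESS = {ess:.1f} out of {N}")  # shrinks toward 1 as t grows
    w = np.exp(logw - logw.max())
    return x, w / w.sum()
```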
Sequential Monte Carlo
SMC addresses the degeneracy by resampling: at each step we simulate from a proposal built on the weighted particle approximation \(\mathrm{\widehat \gamma_{t-1}}\) of the previous target,
\[\begin{align} \mathrm{q_t(x_{1:t})=\widehat \gamma_{t-1}(x_{1:t-1})\, q_t(x_t|x_{1:t-1})}.\notag \end{align}\]
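A sketch of the resulting sampler for the same linear-Gaussian model, again with \(\mathrm{f}\) as the proposal (a bootstrap particle filter); multinomial resampling of the previous particles plays the role of simulating from \(\mathrm{\widehat \gamma_{t-1}}\). Names and parameter defaults are illustrative.

```python
import numpy as np
from scipy.stats import norm

def bootstrap_pf(y, N=1000, a=0.9, sigma_x=1.0, sigma_y=0.5, seed=None):
    """Bootstrap particle filter: resample the previous particles (a draw from
    hat{gamma}_{t-1}), propagate through f, and reweight with g."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, size=N)                   # particles at t = 1
    logw = norm.logpdf(y[0], loc=x, scale=sigma_y)
    for t in range(1, len(y)):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)                   # multinomial resampling
        x = rng.normal(a * x[idx], sigma_x)                # propose from f(. | x_{t-1})
        logw = norm.logpdf(y[t], loc=x, scale=sigma_y)     # fresh weights g(y_t | x_t)
    w = np.exp(logw - logw.max())
    return x, w / w.sum()
```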
Learning Proposals and Twisting Targets

The (locally) optimal proposal, which minimizes the variance of the incremental weight conditional on \(\mathrm{x_{1:t-1}}\), is
\[\begin{align} \mathrm{q^{\star}_t(x_t|x_{1:t-1})=\gamma_t(x_t|x_{1:t-1})}.\notag \end{align}\]

By Eq.\eqref{joint_pdf}, the optimal proposal in Example 1.2.1 of (Naesseth et al., 2019) follows
\[\begin{align} \mathrm{q^{\star}_t(x_t|x_{1:t-1})=\frac{\widetilde \gamma_t(x_{1:t})}{\int \widetilde \gamma_t(x_{1:t})\, dx_t}=\frac{f(x_t|x_{t-1})\, g(y_t|x_t)}{p(y_t|x_{t-1})}=p(x_t|x_{t-1}, y_t)}.\notag \end{align}\]
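In the linear-Gaussian example this locally optimal proposal is Gaussian and available in closed form by conjugacy. A sketch of one propagation step, with illustrative names and defaults:

```python
import numpy as np
from scipy.stats import norm

def optimal_proposal_step(x_prev, y_t, a=0.9, sigma_x=1.0, sigma_y=0.5, seed=None):
    """One step of the locally optimal proposal p(x_t | x_{t-1}, y_t) for the
    linear-Gaussian model; the conditional over x_t is Gaussian by conjugacy."""
    rng = np.random.default_rng(seed)
    var = 1.0 / (1.0 / sigma_x**2 + 1.0 / sigma_y**2)           # conditional variance
    mean = var * (a * x_prev / sigma_x**2 + y_t / sigma_y**2)   # conditional mean
    x_t = rng.normal(mean, np.sqrt(var))                        # sample x_t ~ q*_t
    # Incremental weight p(y_t | x_{t-1}) does not depend on the sampled x_t.
    log_incr_w = norm.logpdf(y_t, loc=a * x_prev,
                             scale=np.sqrt(sigma_x**2 + sigma_y**2))
    return x_t, log_incr_w
```

Note that the incremental weight \(\mathrm{p(y_t|x_{t-1})}\) no longer depends on the sampled \(\mathrm{x_t}\), which is why this proposal minimizes the conditional variance of the weights.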
Twisted SMC methods: Adapting the Target Distribution

The goal is to simulate from the proposal that is optimal with respect to the final target:
\[\begin{align} \mathrm{q^{\star}_t(x_t|x_{1:t-1})=\gamma^{\star}_t(x_t|x_{1:t-1})=\gamma_T(x_t|x_{1:t-1})=p(x_t|x_{1:t-1}, y_{1:T})}.\notag \end{align}\]

This proposal is optimal in a global sense: it conditions on all observations \(\mathrm{y_{1:T}}\) rather than only \(\mathrm{y_{1:t}}\).
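For the state-space model, one standard way to make this concrete (following the twisted-SMC literature, e.g., Naesseth et al., 2019) is the factorization

\[\begin{align} \mathrm{p(x_t|x_{1:t-1}, y_{1:T})=p(x_t|x_{t-1}, y_{t:T}) \propto f(x_t|x_{t-1})\, g(y_t|x_t)\, \psi^{\star}_t(x_t)}, \ \ \mathrm{\psi^{\star}_t(x_t):=p(y_{t+1:T}|x_t)},\notag \end{align}\]

so twisted SMC methods replace the intractable backward function \(\mathrm{\psi^{\star}_t}\) with a learned or approximate twisting function that is folded into the target at step \(\mathrm{t}\); the locally optimal proposal above corresponds to \(\mathrm{\psi_t \equiv 1}\).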
- Naesseth, C. A., Lindsten, F., & Schön, T. B. (2019). Elements of Sequential Monte Carlo. Foundations and Trends® in Machine Learning.