Monte-Carlo algorithms for Markov chains and particle methods

B. Jourdain

15h, 5MK

In order to sample from a target law $\pi$ on a space $E$ equipped with a $\sigma$-algebra $\mathcal E$, the Metropolis-Hastings algorithm constructs a Markov chain $(X_n)_{n\geq1}$ which admits $\pi$ as a reversible measure. This algorithm is one of the most widely used probabilistic numerical methods in statistics, physics, quantum chemistry and molecular simulation, so much so that a Google search on its name returns more than 350,000 results. When the Markov chain is ergodic, the law of $X_n$ converges to $\pi$ as $n$ tends to infinity, and for any measurable $\phi : (E,\mathcal E) \to \mathbb R$ such that $\int_E |\phi(x)|\pi(dx)<\infty$, the ergodic strong law of large numbers ensures the almost sure convergence of $\displaystyle \frac1n\sum_{k=1}^n \phi(X_k)$ to $\displaystyle \int_E \phi(x) \pi(dx)$ as $n\to \infty$.
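As a concrete illustration (not part of the course material itself), a minimal random-walk Metropolis-Hastings sketch is given below; the one-dimensional standard Gaussian target, the proposal standard deviation, and the test function $\phi(x)=x^2$ are all illustrative choices:

```python
import math
import random

def rw_metropolis(log_target, x0, n_steps, step=1.0, rng=random):
    """Random-walk Metropolis-Hastings chain targeting exp(log_target).

    The Gaussian proposal is symmetric, so the acceptance probability
    reduces to min(1, pi(y)/pi(x)); the chain is reversible w.r.t. pi.
    """
    x = x0
    chain = []
    for _ in range(n_steps):
        y = x + step * rng.gauss(0.0, 1.0)   # symmetric proposal move
        # accept with probability min(1, pi(y)/pi(x)), computed in log scale
        if math.log(1.0 - rng.random()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return chain

# Illustrative target: standard Gaussian.  By the ergodic strong law of
# large numbers, the ergodic average of phi(x) = x^2 approaches E[X^2] = 1.
random.seed(0)
chain = rw_metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=200_000, step=2.4)
est = sum(x * x for x in chain) / len(chain)
```

The step size 2.4 follows the optimal-scaling heuristic of [2], under which the acceptance rate is moderate rather than close to 0 or 1.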

In the first part of the course, we will use this algorithm to illustrate the sufficient condition for ergodicity of a Markov chain with values in a general state space given in [3], and we will establish the ergodic strong law of large numbers under this condition. We will prove the associated central limit theorem under a strengthened condition which allows the Poisson equation to be solved. We will state Peskun's result [4], which explains how to minimize the asymptotic variance. We will see how to guide the choice of the variance of the proposals in the random-walk Metropolis-Hastings algorithm using the scaling results of [2]: when the target law is a product probability on a space whose dimension tends to infinity, the first component of the chain, suitably accelerated in time, converges in law to a diffusion. Finally, when convergence to equilibrium is very slow (a metastability phenomenon), we will study how to accelerate it with adaptive importance-sampling variants of the algorithm such as Wang-Landau, SHUS, metadynamics and well-tempered metadynamics.

In the second part, we will study the genetic particle algorithms of [1] which, by alternating a selection step in which the particles interact with a mutation step in which they evolve independently according to a Markov kernel, make it possible to reduce variance or to simulate rare events.
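A minimal sketch of this selection/mutation alternation might look as follows; the Gaussian Feynman-Kac potential $G(x)=e^{-x^2/2}$, the standard Gaussian initial law, and the autoregressive mutation kernel (which leaves $\mathcal N(0,1)$ invariant) are all illustrative assumptions, not the course's examples:

```python
import math
import random

def particle_normalizing_constant(n_steps, n_particles, rho=0.5, rng=random):
    """Genetic particle estimate of Z_n = E[prod_{k<n} G(X_k)] (Feynman-Kac).

    Alternates selection (multinomial resampling proportional to the
    potential G, where the particles interact) with mutation (independent
    moves by the AR(1) kernel x' = rho*x + sqrt(1-rho^2)*eps).  The product
    of the successive mean weights estimates the normalizing constant Z_n.
    """
    G = lambda x: math.exp(-0.5 * x * x)       # illustrative potential
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    z = 1.0
    s = math.sqrt(1.0 - rho * rho)
    for _ in range(n_steps):
        weights = [G(x) for x in xs]
        z *= sum(weights) / n_particles        # mean weight at this step
        # selection: resample particles proportionally to their weights
        xs = rng.choices(xs, weights=weights, k=n_particles)
        # mutation: each particle evolves independently by the Markov kernel
        xs = [rho * x + s * rng.gauss(0.0, 1.0) for x in xs]
    return z

random.seed(1)
# For this linear-Gaussian model the exact value is available from a
# variance recursion: Z_5 is approximately 0.2025.
est = particle_normalizing_constant(n_steps=5, n_particles=100_000)
```

With a plain i.i.d. Monte Carlo estimator of the same product, particles falling in low-potential regions would be wasted; the selection step concentrates the population where the potential is large, which is the variance-reduction mechanism studied in [1].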

Bibliography

  • [1] Del Moral, P.: Feynman-Kac formulae. Genealogical and interacting particle systems with applications. Probability and its Applications (New York). Springer-Verlag, 2004.
  • [2] Gelman, A., Gilks, W. R. and Roberts, G. O., Weak convergence and optimal scaling of random walk Metropolis algorithms, Ann. Appl. Probab. 7(1):110-120, 1997.
  • [3] Hairer, M. and Mattingly, J. C., Yet another look at Harris' ergodic theorem for Markov chains. Seminar on Stochastic Analysis, Random Fields and Applications VI, 109-117, Progr. Probab., 63, Birkhäuser/Springer Basel AG, Basel, 2011.
  • [4] Peskun, P. H., Optimal Monte-Carlo sampling using Markov chains, Biometrika 60(3):607-612, 1973.