Markov chain Monte Carlo

Markov chain Monte Carlo methods (MCMC methods for short) are a class of algorithms that draw random samples from probability distributions. They do so by constructing a Markov chain that has the desired distribution as its stationary distribution. The state of the chain after a large number of steps is then used as a sample from the desired distribution; the quality of the sample improves as the number of steps increases.

MCMC methods produce systems in the canonical state. A sufficient, but not necessary, condition for an MCMC method to have the canonical state as its stationary distribution is the detailed balance property.
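In symbols (using notation not present in the original), detailed balance requires that the target distribution \(\pi\) and the transition probability \(T\) satisfy, for every pair of states \(x\) and \(x'\):

```latex
\pi(x)\, T(x \to x') \;=\; \pi(x')\, T(x' \to x)
```

Summing both sides over \(x\) shows that \(\pi\) is then left invariant by the chain, which is why detailed balance is sufficient for stationarity.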

Usually it is easy to construct a Markov chain with the desired properties. The harder part is determining how many steps are needed to converge to the stationary distribution within an acceptable error, or designing the algorithm so that independent system states are generated as efficiently as possible. A good chain with a well-chosen initial distribution converges rapidly, i.e. the stationary distribution is reached quickly. In typical applications of MCMC methods, the target distribution can only be reached approximately, since some residual effect of the initial distribution always remains.

These algorithms are frequently applied to the numerical calculation of multidimensional integrals. Such integrals arise in Bayesian statistics and in computational applications in physics (for example, statistical physics or path integrals in quantum field theory) and in biology/bioinformatics (for example, protein structure prediction).

Algorithms

Examples of Markov chain Monte Carlo methods are:

  • Metropolis algorithm: This local method generates random moves that are accepted or rejected with a certain probability. It is easy to implement, but has the disadvantage of high autocorrelation between the generated states.
  • Gibbs sampling (also called the heat-bath algorithm): This local method is a special case of the Metropolis–Hastings algorithm in which new states are drawn from the local conditional probability distribution.
  • Hybrid Monte Carlo algorithm: This method combines molecular dynamics with random motion. Molecular dynamics is used to efficiently generate new, independent states, which are then accepted or rejected with a certain probability.
  • Cluster algorithms: These are special non-local methods that avoid long autocorrelation times and thus the so-called critical slowing down. They were first introduced by Swendsen and Wang for the Potts model. However, only a few systems are known for which this approach could be implemented.
  • Overrelaxation method
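As an illustration of the first method in the list, here is a minimal sketch of the Metropolis algorithm sampling a one-dimensional standard normal distribution; the step size, burn-in length, and sample count are arbitrary choices for this example, not prescribed by the method:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, burn_in=1000):
    """Metropolis algorithm: symmetric random-walk proposals,
    accepted with probability min(1, pi(x_new) / pi(x))."""
    x = x0
    samples = []
    for i in range(burn_in + n_samples):
        x_new = x + random.uniform(-step, step)   # symmetric proposal
        # Accept/reject in log space to avoid overflow
        if math.log(random.random()) < log_target(x_new) - log_target(x):
            x = x_new
        if i >= burn_in:                          # discard the burn-in phase
            samples.append(x)
    return samples

# Target: standard normal, log pi(x) = -x^2/2 (up to an additive constant)
random.seed(0)
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Successive draws are correlated (the autocorrelation mentioned above), so the effective number of independent samples is smaller than `len(draws)`; nonetheless the empirical mean and variance approach the target's values 0 and 1 as the number of steps grows.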