How
can we generate states with a specified distribution? In the previous section this
essential question was answered for the case of only one variable. If the number of
variables becomes larger, however, the approach given in the previous section becomes prohibitive.
We must devise a new approach that samples the states from the configuration space.

In the Monte Carlo method one calculates the expectation value for an observable A by
computing the average with respect to an appropriate distribution. The distribution
function is determined by the statistical mechanical ensemble. Assume that the system is
described by a Hamiltonian H(x) with x being the degrees of freedom of the system. For the
canonical ensemble with fixed temperature T, volume V and number of particles N the
expectation value for an observable $A$ is given by
\[
  \langle A \rangle \;=\; \frac{1}{Z} \int_{\Omega} A(x)\, \mathrm{e}^{-H(x)/k_{\mathrm{B}}T}\, \mathrm{d}x ,
\]
where
\[
  Z \;=\; \int_{\Omega} \mathrm{e}^{-H(x)/k_{\mathrm{B}}T}\, \mathrm{d}x
\]
is the partition function. Here $\Omega$ denotes the phase space, i.e., all configurations
that are available to the system. If the number of configurations of the system under the
given constraints is very large, the task of evaluating the above equation becomes
formidable and one has to resort to sampling.
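To get a feeling for how quickly the direct evaluation becomes formidable, the following sketch enumerates every configuration of a small toy system and evaluates the canonical average exactly. The model (a one-dimensional Ising chain with coupling J, free boundaries, and units such that $k_{\mathrm{B}}=1$) is an assumption made only for illustration, not a prescription of the text.

```python
# Exact evaluation of a canonical average by enumerating every configuration.
# Toy model (assumption for illustration): a 1D Ising chain of N spins with
# free boundaries, H(x) = -J * sum_i s_i s_{i+1}, and k_B = 1.
import itertools
import math

def energy(spins, J=1.0):
    """H(x) for the toy chain: sum over nearest-neighbour bonds."""
    return -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

def exact_average_energy(N, T, J=1.0):
    """<E> = (1/Z) * sum_x E(x) * exp(-H(x)/T), summed over all 2**N states."""
    beta = 1.0 / T
    Z = 0.0
    numerator = 0.0
    for spins in itertools.product((-1, +1), repeat=N):  # 2**N terms
        E = energy(spins, J)
        w = math.exp(-beta * E)
        Z += w
        numerator += E * w
    return numerator / Z

for N in (4, 8, 12, 16):
    print(N, 2**N, exact_average_energy(N, T=2.0))
# The number of terms grows as 2**N; already for N of the order of 40
# the full sum is out of reach, and one has to resort to sampling.
```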
Sampling here means that we want to pick up
mainly those contributions to the integral that make the largest impact. If we were to
randomly sample the available phase space, we would, for the most part, obtain states that give a
very small contribution to the expectation value. We cannot apply simple sampling to a
distribution of states that is sharply peaked.
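How badly simple sampling fails for a sharply peaked distribution can be made concrete with the same toy chain: drawing configurations uniformly at random and weighting them with the Boltzmann factor gives an estimate that is dominated by a handful of rare samples. The model and the parameters are again assumptions made only for illustration.

```python
# Simple (random) sampling of the toy chain: configurations are drawn
# uniformly at random and weighted with the Boltzmann factor exp(-H/T).
# At low temperature the weight is sharply peaked around the ordered
# states, which uniform sampling almost never produces.
import math
import random

def energy(spins, J=1.0):
    return -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))

def simple_sampling_energy(N, T, n_samples, J=1.0, seed=1):
    rng = random.Random(seed)
    beta = 1.0 / T
    numerator = 0.0
    total_weight = 0.0
    largest_weight = 0.0
    for _ in range(n_samples):
        spins = [rng.choice((-1, +1)) for _ in range(N)]
        E = energy(spins, J)
        w = math.exp(-beta * E)
        numerator += E * w
        total_weight += w
        largest_weight = max(largest_weight, w)
    return numerator / total_weight, largest_weight / total_weight

estimate, peak_fraction = simple_sampling_energy(N=20, T=0.5, n_samples=100_000)
print(estimate, peak_fraction)
# For N = 20 and T = 0.5 the exact result is about -18.3, but the estimate
# is typically far off, and a single sampled configuration carries a large
# fraction of the total weight: almost all samples contribute essentially
# nothing to the average.
```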
To sample the major contributions of the
integrand to the integral, one constructs a Markov chain of states, where each state
or configuration is generated from the previously generated configuration:
\[
  x_0 \;\to\; x_1 \;\to\; x_2 \;\to\; \cdots \;\to\; x_n ,
\]
with $x_i \in \Omega$. The state $x_{i+1}$ is derived from the state $x_i$. This is not done deterministically, as for the integration of the
equations of motion in Newtonian dynamics, but probabilistically. The state $x_{i+1}$ follows from the state $x_i$ with a certain probability: there is a transition probability
$W(x_i \to x_{i+1})$ from one state to the other. If the system is in state $x_i$, and it is so with a probability $P(x_i)$, then it changes to the state $x_{i+1}$ with the probability $W(x_i \to x_{i+1})$. This evolution seems to be quite different
from what we have learned in the preceding chapter. There, the evolution was
deterministic: given the initial conditions, the entire evolution of the states of the
system is determined forever.
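As a minimal sketch of such a probabilistic evolution, consider a chain on just three states with an explicitly given transition probability $W(i \to j)$. The matrix below is an arbitrary stochastic matrix chosen only for illustration; it shows that the trajectory is not determined by the initial condition alone.

```python
# A Markov chain on a small discrete state space: the next state x_{i+1}
# is drawn from the current state x_i according to a transition
# probability W(x_i -> x_{i+1}).  The matrix is an arbitrary stochastic
# matrix (each row sums to one), chosen only for illustration.
import random

W = [
    [0.5, 0.3, 0.2],  # W(0 -> 0), W(0 -> 1), W(0 -> 2)
    [0.2, 0.6, 0.2],  # W(1 -> 0), W(1 -> 1), W(1 -> 2)
    [0.1, 0.4, 0.5],  # W(2 -> 0), W(2 -> 1), W(2 -> 2)
]

def markov_chain(W, x0, n_steps, seed=0):
    """Generate the chain x_0 -> x_1 -> ... -> x_n."""
    rng = random.Random(seed)
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        # draw x_{i+1} with probabilities W(x_i -> .)
        x = rng.choices(range(len(W)), weights=W[x])[0]
        chain.append(x)
    return chain

print(markov_chain(W, x0=0, n_steps=20, seed=0))
print(markov_chain(W, x0=0, n_steps=20, seed=1))
# The two runs start from the same state but follow different trajectories:
# in contrast to the deterministic integration of the equations of motion,
# only the transition probabilities are fixed, not the path itself.
```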
To make the approach more transparent, we formulate the main points once again. In the simple
sampling method we generate the states directly from the distribution, provided it is simple
enough to be known a priori. Here, instead, we use a generating process. The process generates
states, one from the other, ensuring that the states eventually have the correct
distribution.
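The sketch below illustrates such a generating process for a toy three-level system whose target is the Boltzmann distribution. The energy levels, the temperature, and the acceptance rule (a Metropolis-type step) are assumptions made only for illustration; they represent one possible choice of the transition probability $W$, not the only one.

```python
# A generating process whose states eventually follow a prescribed
# distribution: the target is the Boltzmann distribution of a toy
# three-level system (energy levels, temperature and acceptance rule
# are assumptions made for illustration).
import math
import random
from collections import Counter

E = [0.0, 1.0, 2.0]                      # energy levels, k_B = 1
T = 1.0
weights = [math.exp(-e / T) for e in E]
target = [w / sum(weights) for w in weights]

def next_state(x, rng):
    """Propose one of the other states and accept it with min(1, P(y)/P(x))."""
    y = rng.choice([s for s in range(len(E)) if s != x])
    if rng.random() < min(1.0, math.exp(-(E[y] - E[x]) / T)):
        return y
    return x

rng = random.Random(42)
x = 0
visits = Counter()
n_steps = 200_000
for _ in range(n_steps):
    x = next_state(x, rng)
    visits[x] += 1

empirical = [visits[i] / n_steps for i in range(len(E))]
print("target:   ", [round(p, 3) for p in target])
print("empirical:", [round(p, 3) for p in empirical])
# The visit frequencies of the chain approach the prescribed distribution,
# although no state was ever drawn from that distribution directly.
```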