So far we have concerned ourselves with the generation of configurations using the simple sampling technique. For this we needed, at least for the examples presented so far, only uniformly distributed random numbers. This was because all configurations in those examples had the same probability, or the same weight: every configuration counted equally in the averaging process, irrespective of its nature. Let us now turn to a situation where we are given a Hamiltonian H. The Hamiltonian depends on the variables x that describe the degrees of freedom of the system. These can be the positions of particles in space, angles, spin orientations, etc. If the energy is conserved, all states x have the same a priori probability. We could thus simulate such a system by generating independent configurations with the fixed energy E. This immediately raises two questions: how simple is it to generate a configuration with a given energy, and, once we have such a configuration, would it be better to change it while leaving the energy invariant? The idea of generating one configuration from another is appealing. Let us take another look at the random walk. We may also view the random walk as a model for a polymer chain: each site of the chain corresponds to a monomer unit, or an atom, of the polymer chain, and the edge connecting two sites corresponds to a bond in the polymer chain.
We could displace a monomer unit from its original position to obtain a new configuration, or conformation, of the chain once all monomers have been given the chance for a displacement. This approach also adds a time dimension to the sampling. The polymer, or in general a configuration, evolves from an initial configuration. The time evolution of the configuration is then governed by the method used to update the configuration. However, as we will derive in the next section, this evolution is not entirely stochastic: it is governed by a master equation, which gives meaning to the notion of time in a probabilistic simulation.
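To make the update concrete, here is a minimal sketch in Python of a single-monomer displacement move for a chain on a square lattice. The representation (a list of lattice coordinates), the function names, and the rejection of moves that would stretch a bond are illustrative assumptions, not prescriptions from the text; energy considerations enter only in the next section.

import random

def bond_intact(a, b):
    """Chain constraint: bonded monomers must stay one lattice unit apart."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1

def displace_monomer(chain, i):
    """Propose moving monomer i by one lattice unit; keep the old
    conformation if a bond to a chain neighbour would be broken."""
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    trial = (chain[i][0] + dx, chain[i][1] + dy)
    if i > 0 and not bond_intact(trial, chain[i - 1]):
        return chain
    if i < len(chain) - 1 and not bond_intact(trial, chain[i + 1]):
        return chain
    new_chain = list(chain)
    new_chain[i] = trial
    return new_chain

def sweep(chain):
    """One unit of 'time': every monomer gets the chance to be displaced."""
    for i in range(len(chain)):
        chain = displace_monomer(chain, i)
    return chain

# evolve a stretched-out initial conformation of 10 monomers
conformation = [(i, 0) for i in range(10)]
for _ in range(100):
    conformation = sweep(conformation)

In this sketch only moves that preserve the unit bond length are accepted, so an interior monomer can effectively only flip a kink, while the chain ends can rotate freely.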
Let us come back to the probabilities of configurations. If all configurations with a given energy E have the same probability, then we can write down the sum, or the integral, over all possible states under the constraint of fixed energy,

\[
  Z_{mc}(E) = \int dx \, \delta\bigl(H(x) - E\bigr)\;.
\]

The integral extends over all possible configurations the system can attain. The space of all these states is called the configuration space W, or also the state space. The partition function contains all the information needed to describe the statistical-mechanical behaviour of the system. What we have written down is the partition function of the micro-canonical ensemble.
If the energy is not a conserved quantity but the temperature T is held fixed, the distribution of the states of the system, i.e. the probability for the occurrence of a configuration x in configuration space, is governed by the Boltzmann distribution

\[
  P(x) = \frac{1}{Z} \exp\!\left(-\frac{H(x)}{k_B T}\right),
\]

and the partition function is given by

\[
  Z = \int dx \, \exp\!\left(-\frac{H(x)}{k_B T}\right).
\]

This is the partition function of the canonical ensemble; k_B is the Boltzmann constant.
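To see how these different weights enter an average, here is a toy sketch in Python; the one-dimensional harmonic Hamiltonian, the temperature, and the integration range are assumptions chosen for illustration, not taken from the text. It estimates the canonical average of H by attaching the Boltzmann factor as a weight to uniformly (simple-) sampled configurations.

import math
import random

def energy(x):
    """Assumed toy Hamiltonian: a single harmonic degree of freedom, H(x) = x^2 / 2."""
    return 0.5 * x * x

def boltzmann_weight(x, kT):
    """Relative weight exp(-H(x) / k_B T) of configuration x (kT = k_B * T)."""
    return math.exp(-energy(x) / kT)

kT = 1.0
num = den = 0.0
for _ in range(100_000):
    x = random.uniform(-10.0, 10.0)   # simple sampling: configurations drawn uniformly
    w = boltzmann_weight(x, kT)       # ... but each one now carries a different weight
    num += w * energy(x)
    den += w

print("estimated <H> =", num / den)   # exact result for this toy model: kT / 2 = 0.5

The estimate converges to k_B T / 2 for this model, but most uniformly drawn configurations receive an exponentially small weight; this inefficiency is what makes it attractive to generate configurations with a probability proportional to their Boltzmann weight rather than uniformly.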