first term u1 is discarded in standard simulations because it accounts for external fields (e.g. walls, electric fields, gravity, magnetic fields, and centrifugal forces). The second term u2 is the most important since it represents the relevant pair potentials. When determined empirically, it actually includes three-body and many-body effects, which is why models relying on simple pair potentials reproduce liquid or glass structures reasonably well, and why it is better in this case to speak of an "effective" pair potential.
As illustrated by Eq. (7), empirical potentials offer great flexibility since specific terms may be added as needed. When electrostatic interactions are important, a Coulomb charge–charge interaction may, for instance, be included in the form:
$$u_{\mathrm{Coul}}(r_{ij}) = \frac{z_i z_j}{4\pi\varepsilon_0 r_{ij}} \qquad (8)$$
where zi and zj are the charges on atoms i and j and ε0 is the permittivity of free space. Because the Coulombic series converges very slowly, Ewald, particle-mesh, or multipole summation techniques are used in periodic systems. Likewise, more complex models implement three- and four-body terms to reproduce bond-bending and torsional forces, respectively. Alternatively, polarizable or shell models are employed to treat ions as nonrigid entities and, thus, to account for the deformation of electron clouds with suitable additional parameters.
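As an illustration of Eq. (8), the sketch below evaluates the charge–charge energy of a small periodic configuration by a direct pairwise sum under the minimum-image convention. The function name, units (charges in elementary-charge units, energies in eV), and array layout are assumptions made for this example; as noted above, a production code would replace the direct sum by Ewald or particle-mesh summation.

```python
import numpy as np

def coulomb_energy(positions, charges, box_length):
    """Direct pairwise Coulomb sum (cf. Eq. 8) under the minimum-image convention.

    positions  : (N, 3) array of Cartesian coordinates in angstroms
    charges    : (N,) array of charges in units of the elementary charge e
    box_length : edge length of the cubic periodic cell

    A plain real-space sum converges slowly; it is shown here only to make
    the pair term explicit.
    """
    ke = 14.399645  # e^2 / (4*pi*eps0) in eV*angstrom
    n = len(charges)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            rij -= box_length * np.round(rij / box_length)  # minimum image
            energy += ke * charges[i] * charges[j] / np.linalg.norm(rij)
    return energy
```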
From a practical standpoint, the parameters of equations such as (4)–(7) can be estimated in two different ways depending on the nature of the data to which they are fitted [4]. In the most rigorous approach, one relies on energy profiles determined by first-principles calculations or quantum-mechanical simulations of appropriate reference systems (cf. Chapter 2.7). Alternatively, potential-energy parameters are fitted, through MD or lattice-dynamics calculations, to selected physical properties. Structural and elastic data are generally chosen because they are most directly related to the interatomic potentials. Thermal properties may also be used, but they are sensitive to second-order effects such as anharmonicity and are therefore generally reproduced less accurately.
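For the first route, the fit itself often reduces to a least-squares problem. The following sketch assumes a Buckingham form for the short-range pair term and fits its parameters to a hypothetical first-principles energy profile; the functional form, the Si–O-like starting values, and the synthetic reference data are all illustrative assumptions, not the specific Eqs. (4)–(7) of this chapter.

```python
import numpy as np
from scipy.optimize import curve_fit

def buckingham(r, A, rho, C):
    """Buckingham pair potential A*exp(-r/rho) - C/r**6, one common choice
    for the short-range part of an effective pair potential."""
    return A * np.exp(-r / rho) - C / r**6

# Hypothetical reference data: pair energies (eV) from first-principles scans
# at a set of separations (angstroms); real fits use many more configurations.
r_ref = np.array([1.4, 1.5, 1.6, 1.7, 1.9, 2.1, 2.4, 2.8])
e_ref = buckingham(r_ref, 18000.0, 0.205, 133.0) + np.random.normal(0.0, 0.02, r_ref.size)

params, _ = curve_fit(buckingham, r_ref, e_ref, p0=(10000.0, 0.2, 100.0))
print("Fitted A (eV), rho (angstrom), C (eV*angstrom^6):", params)
```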
3 Monte‐Carlo Simulations
3.1 Principles of the Method
First described for atomistic simulations in 1953 by Metropolis et al. [7], the MC method cannot account for the time evolution of the system investigated. Instead, it calculates the physical properties of the system from repeated samplings of equilibrium states weighted by the Boltzmann energy distribution; it thus differs from a search algorithm in that the energy of the system may occasionally increase, whereas a minimization would seek a steady decrease at every step. As already stated, the method is thus inappropriate for tackling any nonequilibrium or history-dependent phenomenon. Its main advantage over MD simulations is that it shortens the calculation time needed to reach the equilibrium structure, provided that a suitable sampling method is employed.
Within the framework of the canonical ensemble, the partition function Q(N,V,T) is, for example,
$$Q(N,V,T) = \frac{1}{N!\,\Lambda^{3N}} \int \exp\!\left[-\frac{U_p(\mathbf{r}^N)}{k_B T}\right] \mathrm{d}\mathbf{r}^N \qquad (9)$$
where Λ is the thermal de Broglie wavelength, Up(r^N) the potential energy of the configuration r^N of the N particles, and kB the Boltzmann constant.
From the partition function it follows that the probability (P) of finding a configuration r^N is
$$P(\mathbf{r}^N) = \frac{\exp\!\left[-U_p(\mathbf{r}^N)/k_B T\right]}{\int \exp\!\left[-U_p(\mathbf{r}^N)/k_B T\right]\,\mathrm{d}\mathbf{r}^N} \qquad (10)$$
To generate configurations distributed according to Eq. (10), the standard Metropolis procedure in MC simulations consists of
1 setting up an initial configuration in a periodic boundary cell,
2 calculating the energy of this configuration,
3 selecting an atom at random and moving it randomly along all coordinate directions,
4 accepting the new configuration resulting from the move if it lowers the energy of the system, because the new state is then more probable than the former; otherwise, accepting the move only if its Boltzmann factor exp(−ΔUp/kBT) is higher than a real number drawn randomly between 0 and 1, and keeping the former configuration if it is not.
In other words, the MC method does not select configurations at random and weight them by their Boltzmann factor to calculate the properties of the system, as was done in earlier schemes, but instead weights evenly configurations selected with a probability proportional to exp[−Up(r^N)/kBT]. Its trick is thus to concentrate the sampling in the regions of phase space that contribute most to the partition function, as illustrated by the sketch below. Although they will not be described here, several other sampling methods are used in standard MC calculations to accelerate convergence toward the equilibrium state [3].
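A minimal sketch of a single Metropolis move, corresponding to steps 3) and 4) above, is given below; the function and parameter names are hypothetical, and the total-energy routine (evaluating the chosen empirical potential) is assumed to be supplied by the user.

```python
import numpy as np

def metropolis_step(positions, energy_fn, kT, max_disp, rng):
    """One Metropolis move: random single-atom displacement plus
    accept/reject test against the Boltzmann factor.

    positions : (N, 3) array of coordinates
    energy_fn : callable returning the potential energy U_p of a configuration
    kT        : k_B * T in the same energy units as energy_fn
    max_disp  : maximum random displacement along each coordinate
    rng       : numpy random Generator, e.g. np.random.default_rng()
    """
    old_energy = energy_fn(positions)
    trial = positions.copy()
    i = rng.integers(len(positions))                 # pick an atom at random
    trial[i] += rng.uniform(-max_disp, max_disp, 3)  # move it randomly
    new_energy = energy_fn(trial)
    delta = new_energy - old_energy
    # Downhill moves are always accepted; uphill moves are accepted only if
    # the Boltzmann factor exceeds a random number drawn in [0, 1).
    if delta <= 0.0 or np.exp(-delta / kT) > rng.random():
        return trial, new_energy
    return positions, old_energy
```

Repeated application of such moves produces configurations distributed according to Eq. (10), whose properties are then averaged with equal weights.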
3.2 Reverse Monte‐Carlo Simulations
To complement standard MC simulations, the so-called reverse Monte-Carlo (RMC) method has been developed to study disordered structures [8]. It enables three-dimensional structural models to be constructed in a manner consistent with experimental results. The data most commonly used are PDFs and their Fourier transforms obtained from diffraction experiments (Chapter 2.2). In RMC calculations the standard procedure is to
1 set up an initial configuration in a periodic boundary cell,
2 calculate the set of quantities relevant to the experimental data considered (e.g. PDF),
3 calculate the mean-square deviation χ² of the calculated from the observed results,
$$\chi^2 = \sum_i \frac{\left[A_{\mathrm{calc}}(r_i) - A_{\mathrm{obs}}(r_i)\right]^2}{\rho^2} \qquad (11)$$
where Acalc and Aobs denote the calculated and observed quantities (e.g. the PDF) at point ri, and ρ is an appropriate measure of experimental accuracy,
4 select an atom and give it a random displacement,
5 accept the move if it leads to a χ² decrease, but keep the former configuration otherwise,
6 repeat steps 2) to 5) (a single such move is sketched below).
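The sketch below illustrates one RMC move corresponding to steps 4)–6); the function names are hypothetical, and the normalized pair-distance histogram is only a crude stand-in for the properly weighted PDF or structure factor used in real RMC codes.

```python
import numpy as np

def pair_histogram(positions, box_length, bin_edges):
    """Normalized pair-distance histogram, a crude stand-in for the PDF."""
    n = len(positions)
    dists = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            rij -= box_length * np.round(rij / box_length)  # minimum image
            dists.append(np.linalg.norm(rij))
    hist, _ = np.histogram(dists, bins=bin_edges)
    return hist / hist.sum()

def rmc_step(positions, g_obs, bin_edges, box_length, rho, max_disp, rng):
    """One reverse Monte-Carlo move: accept the random displacement only if
    it lowers chi^2 (Eq. 11) between calculated and observed distributions."""
    def chi2(config):
        g_calc = pair_histogram(config, box_length, bin_edges)
        return np.sum((g_calc - g_obs) ** 2) / rho**2

    chi2_old = chi2(positions)
    trial = positions.copy()
    i = rng.integers(len(positions))
    trial[i] += rng.uniform(-max_disp, max_disp, 3)
    chi2_new = chi2(trial)
    return (trial, chi2_new) if chi2_new < chi2_old else (positions, chi2_old)
```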
To quote a single example, the structural role of "insufficient" network formers has been successfully studied by RMC in a simulation of the atomic configuration of Mg2SiO4 glass in which the measured total structure factors were well fitted [9]. The estimated Mg─O bond distance differs between the glass and the crystal (Figure 3); this difference in the position of the Mg─O peak reflects the fact that the glass network is built from corner- and edge-sharing of highly distorted Mg─O-bearing species.
Figure 3 As defined by Eq. (20), total correlation functions T(r) of Mg2SiO4 glass (a) and crystal (b) [9].
The