Provides hands-on knowledge enabling students of and researchers in chemistry, biology, and engineering to perform molecular simulations
This book introduces the fundamentals of molecular simulations for a broad, practice-oriented audience and presents a thorough overview of the underlying concepts. It covers classical mechanics for many-molecule systems as well as force-field models in classical molecular dynamics; introduces probability concepts and statistical mechanics; and analyzes numerous simulation methods, techniques, and applications.
Molecular Simulations: Fundamentals and Practice starts by covering Newton's equations, which form the basis of classical mechanics, then continues on to force-field methods for modelling potential energy surfaces. It gives an account of probability concepts before subsequently introducing readers to statistical and quantum mechanics. In addition to Monte Carlo methods, which are based on random sampling, the core of the book covers molecular dynamics simulations in detail and shows how to derive critical physical parameters. It finishes by presenting advanced techniques, and gives invaluable advice on how to set up simulations for a diverse range of applications.
-Addresses the current need of students of and researchers in chemistry, biology, and engineering to understand and perform their own molecular simulations
-Covers the nitty-gritty – from Newton's equations and classical mechanics through force-field methods, potential energy surfaces, and probability concepts to statistical and quantum mechanics
-Introduces physical, chemical, and mathematical background knowledge in direct relation with simulation practice
-Highlights deterministic approaches and random sampling (e.g. molecular dynamics versus Monte Carlo methods)
-Contains advanced techniques and practical advice for setting up different simulations to prepare readers entering this exciting field
Molecular Simulations: Fundamentals and Practice is an excellent book benefiting chemists, biologists, and engineers, as well as materials scientists and those involved in biotechnology.
Page count: 634
Year of publication: 2020
Cover
Preface
1 Introduction – Studying Systems from Two Viewpoints
2 Classical Mechanics and Numerical Methods
2.1 Mechanics – The Study of Motion
2.2 Classical Newtonian Mechanics
2.3 Analytical Solutions of Newton's Equations and Phase Space
2.4 Numerical Solution of Newton's Equations: The Euler Method
2.5 More Efficient Numerical Algorithms for Solving Newton's Equations
2.6 Examples of Using Numerical Methods for Solving Newton's Equations of Motion
2.7 Numerical Solution of the Equations of Motion for Many‐Atom Systems
2.8 The Lagrangian and Hamiltonian Formulations of Classical Mechanics
Chapter 2 Appendices
3 Intra‐ and Intermolecular Potentials in Simulations
3.1 Introduction – Electrostatic Forces Between Atoms
3.2 Quantum Mechanics and Molecular Interactions
3.3 Classical Intramolecular Potential Energy Functions from Quantum Mechanics
3.4 Intermolecular Potential Energies
3.5 Force Fields
Chapter 3 Appendices
4 The Mechanics of Molecular Dynamics
4.1 Introduction
4.2 Simulation Cell Vectors
4.3 Simulation Cell Boundary Conditions
4.4 Short‐Range Intermolecular Potentials
4.5 Long‐Range Intermolecular Potentials: Ewald Sums
4.6 Simulating Rigid Molecules
Chapter 4 Appendices
5 Probability Theory and Molecular Simulations
5.1 Introduction: Deterministic and Stochastic Processes
5.2 Single Variable Probability Distributions
5.3 Multivariable Distributions: Independent Variables and Convolution
5.4 The Maxwell–Boltzmann Velocity Distribution
5.5 Phase Space Description of an Ideal Gas
Chapter 5 Appendices
6 Statistical Mechanics in Molecular Simulations
6.1 Introduction
6.2 Discrete States in Quantum Mechanical Systems
6.3 Distributions of a System Among Discrete Energy States
6.4 Systems with Non‐interacting Molecules: The μ‐Space Approach
6.5 Interacting Systems and Ensembles: The γ‐Space Approach and the Canonical Ensemble
6.6 Other Constraints Coupling the System to the Environment
6.7 Classical Statistical Mechanics
6.8 Statistical Mechanics and Molecular Simulations
Chapter 6 Appendices
7 Thermostats and Barostats
7.1 Introduction
7.2 Constant Pressure Molecular Dynamics (the Isobaric Ensembles)
7.3 Constant Temperature Molecular Dynamics
7.4 Combined Constant Temperature–Constant Pressure Molecular Dynamics
7.5 Scope of Molecular Simulations with Thermostats and Barostats
Chapter 7 Appendices
8 Simulations of Structural and Thermodynamic Properties
8.1 Introduction
8.2 Simulations of Solids, Liquids, and Gases
8.3 The Radial Distribution Function
8.4 Simulations of Solutions
8.5 Simulations of Biological Molecules
8.6 Simulation of Surface Tension
8.7 Structural Order Parameters
8.8 Statistical Mechanics and the Radial Distribution Function
8.9 Long‐Range (Tail) Corrections to the Potential
Chapter 8 Appendices
9 Simulations of Dynamic Properties
9.1 Introduction
9.2 Molecular Motions and the Mean Square Displacement
9.3 Molecular Velocities and Time Correlation Functions
9.4 Orientation Autocorrelation Functions
9.5 Hydrogen Bonding Dynamics
9.6 Molecular Motions on Nanoparticles: The Lindemann Index
9.7 Microscopic Determination of Transport Coefficients
Chapter 9 Appendices
10 Monte Carlo Simulations
10.1 Introduction
10.2 The Canonical Monte Carlo Procedure
10.3 The Condition of Microscopic Reversibility and Importance Sampling
10.4 Monte Carlo Simulations in Other Ensembles
10.5 Gibbs Ensemble Monte Carlo Simulations
10.6 Simulations of Gas Adsorption in Porous Solids
Chapter 10 Appendices
References
Index
End User License Agreement
Chapter 2
Table 2.1 Truncation errors and properties of numerical algorithms for solving New...
Table 2.2 Analytical and numerical solutions for the displacement (m) for one ha...
Chapter 3
Table 3.1 The enumeration of the interactions between atoms i and j in an ethane ...
Table 3.2 The distance dependence of the orientation averaged electrostatic pote...
Table 3.3 Intramolecular structure, intermolecular electrostatic, and van der Wa...
Table 3.4 A partial list of atom types used in the AMBER force field.
Chapter 5
Table 5.1 The 36 possible outcomes (microstates) for the roll of 2 dice, the 10 ...
Table 5.2 The value X = E_N and the corresponding probabilities P_N(E_N) for 1–5 dic...
Chapter 6
Table 6.1 The distributions for five molecules among equally spaced levels with ...
Table 6.2 The distributions available for the macrostate of seven molecules with...
Table 6.3 The distributions available for nine molecules with a total energy of ...
Table 6.4 The temperature dependence of the partition function and probabilities...
Chapter 8
Table 8.1 The total energy per molecule, potential energy per molecule, density,...
Table 8.2 The seven space filling crystal systems and the constraints that apply...
Table 8.3 The fractional coordinates of the symmetry distinct N atom, the 24 gene...
Table 8.4 Spherical harmonics Y_3m(θ, φ) used in the local order paramete...
Table 8.A.1 The electrostatic point charges and Lennard‐Jones parameters used in...
Table 8.A.2 The potential functions and parameters used in the simulation of a N...
Chapter 1
Figure 1.1 Macroscopic and microscopic viewpoints of a gas system involve di...
Chapter 2
Figure 2.1 (a) The coordinate system for a mass moving under the influence o...
Figure 2.2 (a) The quadratic potential energy function and linear force func...
Figure 2.3 The spatial orbit of a mass moving in a −1/r² force field (1/r po...
Figure 2.4 (a) The reduced Lennard‐Jones potential, U* = U/ɛ, and the c...
Figure 2.5 (a) The time variation of the position and velocity of a mass dro...
Figure 2.6 A schematic representation of the stages of Euler's method calcul...
Figure 2.7 A schematic representing the flow of time (a–c) for the Verlet al...
Figure 2.8 A schematic representation of the leapfrog algorithm for advancin...
Figure 2.9 As the system evolves, a phase space point moves along the system...
Figure 2.10 Analytical and numerical solutions for the displacement (a) and ...
Figure 2.11 Analytical and numerical solutions for the energy of a one‐dimen...
Figure 2.A.1 The coordinate system transformation from r₁ and r₂ to R_cm and ...
Figure 2.A.2 (a) The motion of a mass subjected to a radial force F(r) point...
Chapter 3
Figure 3.1 (a) The atomic labels for the ethane molecules used in Table 3.1....
Figure 3.2 The potential energy of the propane molecule as a function of the...
Figure 3.3 The potential energy of the propane molecule as the C–C–C angle i...
Figure 3.4 The computed potential energy of the propane molecule for torsion...
Figure 3.5 Electrostatic potential map for (a) benzene at a selected surface...
Figure 3.6 (a) The point charge approximation assumes that the electrostatic...
Figure 3.7 The first four charge distributions in the multipole expansion co...
Figure 3.8 (a) The geometric variables used in determining the electrostatic...
Figure 3.9 (a) The interaction energies of two Ar (•) and Kr (▪) atoms at di...
Figure 3.10 The geometric parameters used in three‐center and four‐center wa...
Chapter 4
Figure 4.1 The simulation cell (dashed lines) described by the three cell ve...
Figure 4.2 At low temperatures where molecules do not have sufficient kineti...
Figure 4.3 The ratio of molecules in the surface to those in the bulk can be...
Figure 4.4 Periodic boundary conditions in two dimensions on the system at t...
Figure 4.5 The Lennard‐Jones intermolecular potential in terms of the reduce...
Figure 4.6 (a) In the minimum image convention, each atom i is placed in the...
Figure 4.7 (a) A schematic representation of the neighbor list shell around ...
Figure 4.8 The variation of the functions 1/r, erf(r), erfc(r) (all full cur...
Figure 4.9 (a) The original placement of two charges (black circles) along w...
Figure 4.10 The initial positions of molecules i and j at time t and their p...
Figure 4.A.1 The three‐dimensional surface for the function f(x, y) = 2 − x² ...
Figure 4.A.2 A projection of the contours of the function f(x, y) in the xy‐p...
Chapter 5
Figure 5.1 Gaussian distribution functions with three values of the α‐p...
Figure 5.2 The normalized probability distributions for outcomes of the roll...
Figure 5.3 Picture of the kinetic theory of gases for an ideal gas. Molecule...
Figure 5.4 The collision of a molecule with velocity component v_x,i with the...
Figure 5.5 The schematic representation of a collision between two molecules...
Figure 5.6 (a) The Gaussian probability distributions for velocity component...
Figure 5.7 The probability distribution for the kinetic energy of an ideal g...
Figure 5.8 (a) The kinetic energy distribution for 1–10 molecules plotted as...
Figure 5.9 A Gaussian distribution f(x) with 〈x〉 = 0 and σ = 1 (full li...
Figure 5.10 The distribution of points X and Y generated by the Box–Muller m...
Figure 5.A.1 (a) The product of two Gaussian functions f_a(x) and f_b(x) is a ...
Figure 5.A.2 The product of two Gaussian functions f_a(x) and f_b(x) (full cur...
Figure 5.A.3 The distribution function f(r) (full line) and its definite int...
Chapter 6
Figure 6.1 The seven possible distributions (microstates) of five molecules ...
Figure 6.2 A representation of N non‐interacting molecules in an isolated sy...
Figure 6.3 (a) The experimental setup of a system with constant N and V in c...
Figure 6.4 (a) A model system with nine quantum states arranged in three ene...
Figure 6.5 (a) The experimental setup of a system with constant N, in contac...
Figure 6.6 (a) The experimental setup of a system with fixed μ and V in...
Figure 6.7 (a) A schematic representation of an isolated system with fixed v...
Figure 6.A.1 The discrete states of a particle in a two‐dimensional box repr...
Chapter 7
Figure 7.1 (a) A schematic representation of the barostat coupled to the sys...
Figure 7.2 A schematic representation of volume change in a simulation cell ...
Figure 7.3 (a) In an isotropic liquid or gas phase, forces are always perpen...
Figure 7.4 (a) A schematic representation of the coupling of the time variat...
Figure 7.5 The variation of the temperature of a system of Lennard–Jones par...
Chapter 8
Figure 8.1 Snapshots of the (a) solid cubic α‐phase (20 K, L = 21.6 Å), (b) ...
Figure 8.2 Orthobaric densities for gas and liquid phase methanol simulated ...
Figure 8.3 (a) The structure of a two‐dimensional solid phase and (b) the nu...
Figure 8.4 The effect of the reference molecule on the local density in a vo...
Figure 8.5 (a) A snapshot of a simulation of liquid krypton at 80 K. (b) The...
Figure 8.6 The radial distribution functions for nitrogen in the (a) solid, ...
Figure 8.7 (A) A snapshot of the methanol–water solution. (B) The excess ent...
Figure 8.8 (a) A snapshot of a simulation of an aqueous NaCl solution at 298...
Figure 8.9 (a) The α‐helix Winter Flounder antifreeze protein (wf‐AFP)...
Figure 8.10 (a) A schematic representation of the primary sequence of the 77...
Figure 8.11 (a) A snapshot from the simulation of a water–vacuum interface a...
Figure 8.12 The difference in average values of the normal and transverse co...
Figure 8.13 (a) The initial setup of the liquid–gas system with the interfac...
Figure 8.14 (a) The probability distribution of the non‐local coherence orde...
Figure 8.15 (a) The function ρ₂(r₁, r₂) gives the probability of finding...
Figure 8.A.1 A section extracted from the PDB file of the winter flounder an...
Chapter 9
Figure 9.1 (a) The position of a reference molecule i at different time step...
Figure 9.2 If a molecule moves outside the boundaries of the simulation cell...
Figure 9.3 (a) Five sample MSD(t) plots run from different 10 ps simulations...
Figure 9.4 (a) A schematic representation of the time variation of the mean ...
Figure 9.5 (a) The mean square displacement of solid N₂ at 20 K (solid line)...
Figure 9.6 (a) The simulation of water confined between plates of graphite. ...
Figure 9.7 (a) The motion of an adsorbed molecule from a “hollow” site on th...
Figure 9.8 A schematic representation of a molecular collision (a) in a dens...
Figure 9.9 The dimensionless velocity autocorrelation function, ψ(t), f...
Figure 9.10 (a) A schematic representation of the decay of the P₁(t) orienta...
Figure 9.11 (a) The intermittent hydrogen bond time correlation function for...
Figure 9.12 (a) The change in Lindemann index with temperature for iron nano...
Figure 9.13 (a) The gradient of a function represents the change in the valu...
Figure 9.14 (a) The three components of the pressure tensor on the plane per...
Figure 9.A.1 Schematic representation of the one‐dimensional random walk mod...
Chapter 10
Figure 10.1 A caricature representation of the first three steps of a canoni...
Figure 10.2 The initial configuration of the system, {r}_old, with a total pot...
Figure 10.3 The flow of the canonical Monte Carlo procedure for importance s...
Figure 10.4 The experimental and possible molecular dynamics simulation setu...
Figure 10.5 The system with number of molecules N_old, in the initial configu...
Figure 10.6 The volume of a system with initial configuration {ρ}_old an...
Figure 10.7 The random changes introduced in the system during a Gibbs ensem...
Figure 10.8 (a) X‐ray crystal structure of the porous MOF. (b) Comparison of...
Figure 10.9 (a) The structure of the cubic unit cell (a = 16.99 Å) of the po...
Figure 10.10 (a) A set of 500 uncorrelated points distributed around the ave...
Saman Alavi
Author
Dr. Saman Alavi
Department of Chemistry and
Biomolecular Sciences
University of Ottawa
K1N 6N5 Ottawa
Canada
All books published by Wiley‐VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.
Library of Congress Card No.:
applied for
British Library Cataloguing‐in‐Publication Data
A catalogue record for this book is available from the British Library.
Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at <http://dnb.d‐nb.de>.
© 2020 Wiley‐VCH Verlag GmbH & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany
All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.
Print ISBN: 978‐3‐527‐34105‐4
ePDF ISBN: 978‐3‐527‐69953‐7
ePub ISBN: 978‐3‐527‐69946‐9
oBook ISBN: 978‐3‐527‐69945‐2
Cover Design Grafik‐Design Schulz
To Dorothy, with hopes of many future adventures.
Some 30 years ago, experimentalist colleagues may have viewed the results of molecular simulations with a tinge of skepticism, although they would have agreed that the technique held promise. At the time, simulations used simplified models, which were important from a theoretical point of view and for developing methodology, but were somewhat removed from the complexities of real systems. Since then, with advances in methodology, increasingly accurate force fields for describing interactions, and ever-increasing computing power, molecular simulations have become an integral part of many experimental chemical, biological, and engineering research projects. Much as NMR spectroscopy and crystallographic characterization are used in experimental studies, molecular simulations are used to characterize and understand aspects of experimental systems that are not accessible to other techniques.
There are many excellent texts and monographs on molecular simulation methods, some of which are listed in the references. These texts are often written for physicists, physical chemists, or engineers specializing in computational methods. Fairly advanced knowledge of mechanics, statistical mechanics, and mathematical physics is often assumed in these texts, although most provide short overviews of these topics.
My recent teaching experience in molecular simulation methods showed that there are a significant number of students of chemistry, biology, and engineering who are interested in learning and using molecular simulation methods, but who are not familiar with some of the background material required to fully appreciate the underpinnings of the methods. This book aims to provide an introduction to molecular simulation methods, starting from a background accessible to most chemists and engineers. This background includes knowledge of calculus, basic physical chemistry, including the laws of thermodynamics, and elementary mechanics, in particular Newton's laws of motion, along with some basic knowledge of quantum mechanics.
The approach taken in this book is that, instead of confining background material on mechanics, statistical mechanics, and probability theory to separate chapters, background topics and the related molecular simulation methods are integrated within the text. This illustrates the utility of the background material quickly and provides immediate motivation for the reader to master it. Furthermore, much of the background material is necessarily abstract, and providing concrete details of its use helps improve learning. Integrating fundamental and practical aspects should prevent interested readers from losing momentum by having to work through the more abstract background material at the beginning. The balance of material in each chapter is such that principles and the practical aspects of correctly performing simulations are presented as closely together as possible.
Material on probability theory, advanced mechanics, and statistical mechanics is introduced at a level that should be accessible to the target readers. There is an effort to make arguments behind mathematical equations and techniques as intuitive as possible. The notation used has been chosen to be as clear as possible by showing functional dependences of quantities, more often than perhaps usual. This comes at the expense of elegance and brevity, and at places, the notation is admittedly clunky. However, for a learner it can be useful to be reminded of the functional dependence of the quantities that are being manipulated.
I would like to thank my graduate advisors professors Robert F. Snider and G. Abbas Parsafar for my training in theoretical chemistry and for guiding me in the ups and downs of theoretical research. I also thank professors John A. R. Coope and Bijan Najafi with whom I had the pleasure of learning statistical mechanics and thermodynamics. Professor Coope had a large influence on my approach to statistical mechanics and this is reflected in many places in this book. Professor Najafi taught a thought‐provoking Advanced Thermodynamics course and I later had the pleasure to become his long‐time collaborator.
My thanks to Professors Tamar Seideman (Northwestern University), Donald L. Thompson (University of Missouri‐Columbia), Tom K. Woo (University of Ottawa), and Dennis D. Klug (National Research Council of Canada) with whom I worked as a research associate. They each introduced me to different research areas, styles of work, and personal philosophies for approaching science and life. I had the pleasure of teaching a Molecular Simulation and Statistical Mechanics course with Prof. Woo at the University of Ottawa for a number of years. Tom's feedback and his teaching of sections on Monte Carlo simulations and force fields influenced the approach given here. I would like to thank my long‐time collaborators John A. Ripmeester (National Research Council of Canada) and Ryo Ohmura (Keio University) who have constantly provided motivation to study new systems and helped me better appreciate molecular details embedded in the results of experimental techniques.
I thank former students (in many cases, present colleagues) Mehrdad Bamdad, Mohammad H. Kowsari, Hossein Mohammadimanesh, Robin Susilo, Peter Dornan, Andrew Sirjoosingh, Peter Boyd, S. Alireza Bagherzadeh, Hamid Mosaddeghi, Afsaneh Maleki, Hana Dureckova, and Parisa Naeiji with whom I worked on various simulation projects. I would also like to thank former colleagues at the National Research Council and the students of Prof. Ohmura's group at Keio University for detailed discussions on various projects and how the molecular simulations could be used to interpret their results. Former students in my Molecular Simulation class deserve thanks for suffering through various iterations and refinements of the arguments presented here.
I would like to thank the staff at Wiley: Jolke Perelaer, who helped with getting the book project off the ground; Pinky Sathishkumar, the project editor; and Sujisha Kunchi Parambathu, the production editor of this book. They caught many typographical errors and helped improve the readability of the text. Any errors that remain are, of course, my own doing, and I would appreciate readers bringing them to my attention.
Special thanks are due to my wife Dorothy who provided support during the lengthy process of completing this book. Her contribution is on par with that of a coauthor! I would also like to thank members of my family, my mother, siblings, and others for their patience with seeing me disappear for lengths of time as I worked on this book.
January 2020
Saman Alavi
Ottawa
When analyzing physical, chemical, and biological systems, macroscopic and microscopic viewpoints give two seemingly different descriptions. For example, as shown in Figure 1.1, the state of a gas inside a pressurized capsule is described from a macroscopic viewpoint by a limited number of variables, including pressure, P, volume, V, and temperature, T. Depending on the nature of the gas and the macroscopic state, these variables are related by an equation of state such as the ideal gas law, PV = nRT. This description of the gas includes the mechanical variables pressure and volume, and a nonmechanical variable, temperature. In the macroscopic view, the behavior of the gas is governed by the laws of thermodynamics and no reference is made to the molecular structure of the gas. Indeed, thermodynamics was developed in the mid‐nineteenth century before atomic theory of matter was widely accepted by physicists. The thermodynamic description is used for macroscopic samples with micrometer or larger length scales over long times.
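As a minimal numerical illustration of the macroscopic description (a sketch, not taken from the text; the constant and the state values are the usual textbook ones), the ideal gas law PV = nRT can be evaluated directly:

```python
R = 8.314  # molar gas constant, J/(mol K)

def ideal_gas_pressure(n_mol, volume_m3, temperature_K):
    """Pressure (Pa) of an ideal gas from its macroscopic state variables."""
    return n_mol * R * temperature_K / volume_m3

# 1 mol occupying 22.4 L at 273.15 K is close to 1 atm (101325 Pa).
p = ideal_gas_pressure(1.0, 22.4e-3, 273.15)
```

Note that only the three macroscopic variables and the amount of gas appear; nothing about the 10²³ molecules behind them is needed.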
Figure 1.1 Macroscopic and microscopic viewpoints of a gas system involve different variables, length, and time scales.
From a microscopic (atomic) viewpoint, a gas is a collection of a large number (on the order of 10²³) of molecules, moving randomly at high speeds, each with a specified position, velocity, and acceleration at any given time. The microscopic description of the gas uses only mechanical variables that obey the laws of classical mechanics. The details of atomic/molecular structures and interactions, along with the application of Newton's equations of motion, determine how the positions and velocities of the molecules of the gas change with time. The laws of conservation of energy, linear momentum, and angular momentum constrain the mechanical variables throughout the process. Knowledge of the mechanical variables at any time allows the calculation of these variables at all times in the future and past (neglecting considerations of classical nonlinear systems and quantum mechanics), and classical mechanics is therefore deterministic with regard to mechanical variables. The classical mechanical microscopic description does not include macroscopic variables such as temperature and entropy, which are used to describe macroscopic systems. The microscopic description is used to describe phenomena on length scales of the order of nanometers and time scales of the order of nanoseconds.
How are these dual descriptions of physical systems reconciled, and why is there such a discrepancy in the length and time scales between these two viewpoints? How do nonmechanical variables get introduced into analysis of system properties in the macroscopic viewpoint, if these macroscopic variables do not appear in the underlying microscopic description of the system, which is supposedly more fundamental? The answers to these questions form the context of molecular simulation.
In his book "What is Life?", Erwin Schrödinger asks the question every student has wondered about when first introduced to atoms: Why are atoms so small? [269] Our daily experience captures length scales as small as millimeters, while atoms and molecules, with dimensions in the nanometer range, are smaller by factors of 10⁻⁷–10⁻⁸ than any phenomena we experience directly. Even the smallest bacteria have dimensions in the micrometer range, which makes them larger by a factor of 10⁴ than atoms and molecules. Why are there such discrepancies in length and time scales between atoms and the macroscopic phenomenon of life?
Schrödinger argues that since atoms are fundamental building blocks of matter, this is not the correct question to ask. The question should be reframed as “Why are we, as living organisms, so much larger than atoms?” or “Why are there so many atoms and molecules in cells and more complex organisms?” Stated differently, the question can be “Why is Avogadro's number so large?” The answers to these questions determine how new system properties emerge as we transition from the microscopic mechanical descriptions of systems to the macroscopic thermodynamic description of large systems.
The connection between microscopic and macroscopic descriptions is made by invoking probability theory arguments in statistical mechanics. Relatively simple microscopic systems such as ideal gases are amenable to analytical statistical mechanical analysis, and explicit formulas relating microscopic mechanical properties of the gas molecules to macroscopic thermodynamic variables can be derived. For more complex microscopic systems, molecular simulations (using numerical computations) within the framework of molecular dynamics or Monte Carlo simulations are performed and statistical mechanical relations relate the averages of molecular properties to macroscopic observables of these systems.
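A sketch of the kind of statistical mechanical relation referred to above: for a monatomic system, the kinetic temperature follows from the average kinetic energy via ⟨KE⟩ = (3/2) N k_B T. The atomic mass used here is illustrative (roughly that of an argon atom), not a value from the text:

```python
k_B = 1.380649e-23  # Boltzmann constant, J/K
mass = 6.63e-26     # kg; roughly the mass of one argon atom (illustrative)

def temperature_from_velocities(velocities):
    """Kinetic temperature (K) of N identical atoms from their velocity
    vectors (m/s), using <KE> = (3/2) N k_B T."""
    ke = sum(0.5 * mass * (vx * vx + vy * vy + vz * vz)
             for vx, vy, vz in velocities)
    n_atoms = len(velocities)
    return 2.0 * ke / (3.0 * n_atoms * k_B)
```

In a simulation, averaging this quantity over many configurations (an ensemble average) gives the thermodynamic temperature.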
This book gives an introduction to the microscopic molecular dynamics and Monte Carlo simulation methods for calculating the macroscopic properties of systems. Even in cases where the goal is a purely microscopic mechanical study of the system, there are usually macroscopic constraints imposed on the system by the environment. For example, the conditions of constant physiological temperature and ambient pressure impose constraints on molecular simulations when studying the interaction of a drug candidate with an enzyme binding site in aqueous solution. These constraints impose nonmechanical conditions on the microscopic description of the system that must be applied correctly when simulating molecular behavior.
Chapter 2 gives a brief overview of classical mechanics used to describe the motion of atoms and molecules in microscopic systems. We start from simple physical systems for which analytical solutions of the classical Newtonian equations are available and move to complex multiatom systems for which numerical methods of solution (namely, finite difference methods) are needed. The concept of phase space trajectory, which describes the dynamics of these systems, is introduced.
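The finite difference idea mentioned above can be sketched for the simplest system treated in Chapter 2: a harmonic oscillator integrated with Euler's method and compared against the analytical solution. Units, spring constant, and step size here are arbitrary choices for illustration:

```python
import math

# Harmonic oscillator: F = -k x, so a = -k x / m (Newton's second law).
k, m = 1.0, 1.0
omega = math.sqrt(k / m)
dt, steps = 1e-4, 10000      # small time step keeps the Euler error modest

x, v = 1.0, 0.0              # initial displacement and velocity
for _ in range(steps):
    a = -k * x / m           # acceleration at the current position
    x, v = x + v * dt, v + a * dt   # forward Euler update of x and v

t = steps * dt               # total simulated time = 1.0
x_exact = math.cos(omega * t)  # analytical solution x(t) = x0 cos(omega t)
```

The numerical and analytical displacements agree closely here, but forward Euler slowly gains energy; this is why the more stable Verlet-type algorithms of Chapter 2 are preferred in practice.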
Solving Newton's laws of motion for a molecular system requires knowledge of the forces acting between atoms. In Chapter 3, the quantum mechanical basis for determining the interatomic forces within and between molecules and their classical approximations are described. A description of classical force fields used in molecular simulations of chemical and biological systems follows.
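As a small, self-contained example of one force-field ingredient (with illustrative reduced parameters, not values from the book), the Lennard-Jones pair potential that recurs throughout the text can be written as:

```python
def lennard_jones(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).

    eps sets the well depth and sigma the separation where U = 0;
    both default to 1 (reduced units) for illustration."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)
```

The potential crosses zero at r = sigma and has its minimum of depth eps at r = 2^(1/6) sigma, the two features usually fitted to experiment or quantum calculations.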
Having introduced numerical methods to solve the classical equations of motion and the microscopic forces acting between atoms, the next step is the introduction of specialized techniques needed to make molecular simulations feasible. These techniques, which include the use of periodic boundary conditions, potential cutoffs for short range forces, and Ewald summation methods for long range electrostatic forces, are discussed in Chapter 4.
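Two of the techniques named above, periodic boundary conditions and the related minimum image convention, can be sketched for a cubic cell (the cell length and coordinates are illustrative, and only one dimension is shown):

```python
L = 10.0  # cubic simulation cell edge length, arbitrary units

def wrap(x):
    """Map a coordinate back into the primary cell [0, L)."""
    return x - L * (x // L)

def minimum_image(dx):
    """Shortest periodic image of a coordinate difference dx."""
    return dx - L * round(dx / L)

# Atoms at 0.5 and 9.5 are only 1.0 apart through the periodic boundary,
# not 9.0 apart as the raw coordinates suggest.
d = minimum_image(9.5 - 0.5)
```

In three dimensions the same operations are applied to each Cartesian component before the interatomic distance is computed.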
In Chapters 5 and 6, we introduce concepts from probability theory that describe how to predict and analyze the behavior of complex systems for which complete microscopic information is unavailable or impractical to track. Concepts of probability theory as applied to mechanical systems form the framework for statistical mechanics. The relations of probability theory and statistical mechanics must be considered to correctly run a molecular simulation and to ensure that the molecular‐level system is treated in a manner consistent with the macroscopic conditions imposed on it. Molecular simulation results can then be subjected to further statistical mechanical analysis or be used to gain direct microscopic insight into phenomena of interest. In Chapter 5, the principles of probability theory are applied to non‐interacting systems, while in Chapter 6, the concept of the ensemble of systems is introduced, which allows probabilistic analysis of systems that include intermolecular interactions. The classical expressions for the probability distributions of the different ensembles are the constraints that molecular simulations must satisfy.
Chapters 7 and 10 cover specialized molecular simulation techniques for imposing specific values of macroscopic thermodynamic variables in a simulated system. In Chapter 7, methods for correctly imposing constant pressure (Andersen barostat) and constant temperature (Nosé–Hoover thermostat) on systems of molecules in molecular simulations are described. In Chapter 10, the grand canonical Monte Carlo simulation method for imposing the condition of fixed chemical potential and temperature is described.
Chapters 8 and 9 treat the extraction and analysis of structural/thermodynamic properties and dynamic properties, respectively, using molecular dynamics simulations. Selected examples from a large body of simulation work are outlined.
Throughout the book, we will emphasize an appreciation of time, length, and energy scales of molecular processes including molecular translations, vibrational processes, and bulk fluid motions.
Many excellent books, articles, and websites on mechanics, probability theory, statistical mechanics, and molecular simulation methods are available and have been cited in the references. These have undoubtedly influenced the presentation of the material here and explicit citations are given in different sections as appropriate. A large body of work on molecular dynamics and Monte Carlo methods is available and only a small sample of topics could be covered here. Important and groundbreaking work by many experts has not been discussed, and this is a reflection of the limited scope of this book rather than the importance of the work. Contributions of researchers from the past and present are gratefully acknowledged, although they are not mentioned individually here.
A further point is that many important advanced modern topics are not covered in this book as they are beyond the scope of this introductory discourse. For example, free energy methods, biased Monte Carlo sampling, and methods of high‐performance computing used in molecular simulations are not discussed. It is hoped that the introductory material in this book provides a launching pad for the study of these advanced topics. For more advanced users, it is hoped that this book can provide a useful overview and some intuitive understanding of methods that go into molecular simulations.
Humans long ago observed the motions of earthbound and celestial objects and intuitively discovered that these motions ("mechanics") follow certain predictable patterns. Without this realization, premodern architects, astronomers, navigators, and others could not have achieved many of their accomplishments. Indeed, animals must observe and intuitively understand the operation of the laws of motion. Without this understanding, a hawk would not know how steeply and fast to dive to have a chance at catching a rabbit, and a gibbon would not know how fast and at what angle to leap to reach the next branch of a tree high above the forest floor.
A great discovery of modern science is that mathematical laws governing mechanics quantitatively determine how the positions and velocities of objects change with time and how they are affected by forces. The great insight of Sir Isaac Newton in discovering the laws of mechanics was that the same mathematical principles that apply to the motion of objects on earth, which move within distance scales of 1–100 m and time scales of seconds to hours, also apply to the motion of celestial objects such as the moon, earth, and sun, which move on distance scales of 10⁸ to 10¹¹ m and time scales in the range of hours to years. Limiting ourselves to motions encountered on earth and objects within the solar system, the applicability of Newton's laws of mechanics thus spans a 10¹¹‐fold range of distances and a 10⁹‐fold range of times.
Over time, scientists became familiar with the structure of matter and discovered that the atomic and molecular building blocks of materials have sizes in the range of 10⁻⁹ to 10⁻⁷ m and that the motions of these molecules occur on time scales much shorter than seconds. The question naturally arose whether the same mechanical laws that govern human‐scale motions also govern the motion of molecules in solids, liquids, and gases on these much smaller length scales. That indeed (with caveats) the laws of classical mechanics apply to motion on atomic and molecular scales is the working assumption in developing methods for classical molecular simulations.
In classical molecular simulations the laws of mechanics are applied to predict the motions and energies of molecules under different external thermodynamic conditions. In molecular systems, the positions and velocities of atoms and the nature and magnitude of forces acting on atoms depend on the chemical structure, temperature, and pressure of the simulated system. The mechanical approach can be used to study diverse phenomena, such as a solvated protein interacting with a drug substrate, a DNA molecule in a saline solution, an organic material adsorbing on the surface of a solid, or a solid undergoing a melting transition.
The mechanical laws governing the positions, velocities, and forces between molecules at different times are expressed as differential equations. The particular form of the differential equations and the meaning of the mechanical variables themselves depend on whether classical or quantum mechanics is used to describe the system. Most of our focus is on the classical mechanical description, but parallel quantum mechanical descriptions of the motions of molecules can be formulated and will occasionally be discussed.
We begin this chapter by reviewing analytical solutions of some simple systems using classical Newtonian mechanics in Sections 2.2 and 2.3. These systems serve to introduce some of the concepts and notations used later in the chapter and throughout this book. While the systems described are macroscopic, they serve as models for describing atomic and molecular motions in later chapters. An introduction to numerical computation techniques, namely the finite difference (Euler) method and the more sophisticated Verlet and leapfrog methods for solving Newton's equations of motion, follows in Sections 2.4 and 2.5. These methods form the core of any molecular dynamics simulation, and all further developments are constructed on the foundation of these numerical methods. The numerical solution of the harmonic oscillator is discussed in detail in Section 2.6. The generalization of the mechanical ideas to many‐atom systems is briefly discussed in Section 2.7. Finally, in anticipation of their use in developing molecular dynamics simulation methods, the Lagrangian and Hamiltonian formulations of mechanics are introduced in Section 2.8. These formulations are alternatives to Newton's laws of motion and are much better suited for linking the mechanical motions of molecules in the system to the external environment in a way that satisfies the laws of thermodynamics and statistical mechanics.
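Although the Verlet scheme is developed only in later sections, its core update rule is simple enough to preview here. The following is an illustrative sketch of our own (not code from this book), written in Python, that applies velocity Verlet to a one‐dimensional harmonic oscillator and compares the numerical trajectory with the analytic solution; the function name and step sizes are arbitrary choices.

```python
import math

def verlet(x0, v0, omega, dt, nsteps):
    """Velocity-Verlet integration of d2x/dt2 = -omega**2 * x."""
    x, v = x0, v0
    a = -omega**2 * x                       # acceleration at the start
    traj = [(0.0, x)]
    for i in range(1, nsteps + 1):
        x = x + v * dt + 0.5 * a * dt**2    # position update
        a_new = -omega**2 * x               # acceleration at the new position
        v = v + 0.5 * (a + a_new) * dt      # velocity update
        a = a_new
        traj.append((i * dt, x))
    return traj

# Compare with the analytic solution x(t) = (v0/omega) * sin(omega * t)
omega, dt = 5.0, 0.001
t_end, x_end = verlet(0.0, 1.0, omega, dt, 1000)[-1]
x_exact = (1.0 / omega) * math.sin(omega * t_end)
print(abs(x_end - x_exact))   # discretization error, tiny for this step size
```

The discretization error shrinks roughly quadratically with the time step, which is why Verlet-class integrators are the workhorses of molecular dynamics.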
The three laws that govern the motion of macroscopic objects moving at speeds low compared to the speed of light were first stated together by Isaac Newton [230]. These laws are as follows: (i) Any object moves in a straight line with constant speed (i.e. with constant velocity) unless acted on by a force. (ii) The acceleration (change of velocity with time) of an object is proportional to the force acting on it, and the proportionality constant is the mass of the object. This law is summarized in the vector formula F = ma. If more than one force acts on the object, the vector sum of the forces determines the acceleration. (iii) For each force on an object, the object exerts a force of equal magnitude pointing in the opposite direction [105,131,332]. Newton's laws do not specify whether and how the forces depend on the position of the object or on time; this is the subject of additional empirical observation and analysis. In practice, the force laws for any specific type of interaction (gravity, a mass connected to a spring, electromagnetic interactions, etc.) are devised so that the laws of motion are satisfied.
In systems with many interacting molecules, Newton's three laws of motion give a set of equations that describe the time dependence of position, ri(t), velocity, vi(t) (or momentum pi(t) = mivi(t)), and force Fi(t) (or equivalently, the acceleration ai(t)) for all atoms i. Other mechanical quantities for each atom and molecule, such as energy and angular momentum, can be calculated from these fundamental mechanical variables at any time as needed.
In most mechanical systems, the force on an object varies with its position and proximity to other objects. In these cases, velocities and forces vary dynamically, and the simple algebraic formula F = ma does not suffice to determine the motion of the constituent particles in the system over all times. Newton invented the calculus of infinitesimals to predict motions in cases of position‐dependent forces, but as we will see, he was also the first to suggest what amounts to a numerical algebraic method to deal with this problem of position‐dependent forces.
In modern notation, Newton's second law of motion is written as a set of differential equations, second order in time, the solutions of which give the time variation of Cartesian coordinates. For the xi, yi, and zi components of the position vector ri of atom i in an N‐atom system, Newton's second law is written as

mi d²xi/dt² = Fi,x({r}) = −∂Ui({r})/∂xi,  mi d²yi/dt² = Fi,y({r}) = −∂Ui({r})/∂yi,  mi d²zi/dt² = Fi,z({r}) = −∂Ui({r})/∂zi  (2.1)
Fi({r}) is the force vector on atom i, which can depend on the set {r} of positions of all other atoms in the system. The positions and velocities of different atoms are coupled through the forces acting between them.
In Eq. (2.1) forces are written in terms of partial derivatives of the scalar potential energy Ui({r}) of atom i with respect to its three position components. This is convenient since in many cases, the mathematical form of the potential energy function is more readily determined than the force.
For a system of N atoms, Newton's equations of motion give a set of 3N coupled second‐order differential equations. These equations can be solved by analytical methods (very rarely) or by numerical methods (most of the time), the latter being the main focus of molecular dynamics methodology. Solutions of these coupled equations give the time dependence of the set of coordinates {r(t)} and velocities {v(t)} (or momenta {p(t)}) for all atoms in the system. To determine a unique solution for the positions and velocities, 6N initial conditions for the coordinates and velocities of all atoms at time t = 0 are required. The positions of the atoms at different times, {r(t)}, constitute the spatial trajectory (orbit) of the system. The sets of {r(t)} and {p(t)} at different times constitute the phase space trajectory of the system.
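The bookkeeping implied above, positions, velocities, masses, and forces for all N atoms advanced together in time, can be sketched in a few lines. The Python fragment below is our own illustration (a single explicit Euler step, with a hypothetical two‐atom spring force standing in for a real force field), not an algorithm taken from this book; more accurate integrators are discussed in the sections on molecular dynamics.

```python
def euler_step(r, v, m, force, dt):
    """One explicit-Euler step for N atoms.
    r, v: lists of N [x, y, z] vectors; m: list of N masses;
    force(r): returns the list of N force vectors F_i({r})."""
    F = force(r)
    N = len(r)
    r_new = [[r[i][k] + v[i][k] * dt for k in range(3)] for i in range(N)]
    v_new = [[v[i][k] + F[i][k] / m[i] * dt for k in range(3)] for i in range(N)]
    return r_new, v_new

# Hypothetical force law: two atoms joined by a Hooke's-law spring along x
def spring_force(r, k=1.0, r0=1.0):
    f = k * (r[1][0] - r[0][0] - r0)      # restoring force on the separation
    return [[f, 0.0, 0.0], [-f, 0.0, 0.0]]

r = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]]    # 3N initial coordinates (t = 0)
v = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]    # 3N initial velocities (t = 0)
r, v = euler_step(r, v, [1.0, 1.0], spring_force, 0.01)
```

Note that the 6N numbers in `r` and `v` at t = 0 are exactly the initial conditions the text requires for a unique trajectory.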
For a limited number of low‐dimensional systems where the coupled equations of motion are separable, Newton's equations can be solved analytically to give a closed‐form solution of the spatial and phase space trajectory. In Section 2.3, solutions to Newton's equations of motion for some simple mechanical systems are reviewed and the concept of phase space is introduced. The phase space trajectory of a system is important in describing the mechanics of many‐atom systems and plays a central role in statistical mechanics and its application to molecular dynamics simulation methodology.
A mechanical system studied by Newton (and Galileo Galilei among others before him) was the motion of an object near the Earth's surface, where there is a constant gravitational acceleration of magnitude g = 9.8 m s⁻² pointing toward the center of the Earth (see Figure 2.1a). Newton's equation of motion for a mass thrown perpendicularly upward (in the positive y‐direction) in the Earth's gravitational field is

m d²y/dt² = −mg  (2.2)
Figure 2.1 (a) The coordinate system for a mass moving under the influence of constant gravitational acceleration. (b) The time dependence of the position and momentum for a particle of mass 1 kg starting at y(0) = 0, thrown upward with an initial speed of v(0) = 10 m s⁻¹ (full lines) and 5 m s⁻¹ (dashed line). (c) Two y–py phase space trajectories for the motions in part (b). All points in the y–py phase plane are covered by trajectories that are determined by the initial conditions of the motion. Two "states" corresponding to volume elements dy dpy in the phase space are shown in (c).
Starting at an initial position y(0) and initial velocity vy(0) at time t = 0, integrating this equation once with respect to time and using the initial conditions gives the time variation of the velocity of the particle:

vy(t) = vy(0) − gt  (2.3)
Integrating Eq. (2.3) with respect to time gives the time variation of the position:

y(t) = y(0) + vy(0)t − ½gt²  (2.4)
The spatial trajectories for a mass at two sets of initial conditions y(0) and vy(0) and the time dependence of the momentum are shown in Figure 2.1b, and the corresponding phase space trajectories are shown in Figure 2.1c.
The potential energy of a mass in the Earth's gravitational field at any time is

U(y) = mgy  (2.5)
The gravitational potential energy near the surface of the Earth increases linearly with position above a reference point, usually taken to be the surface of the Earth. The total mechanical energy, which is the sum of the kinetic and potential energies of the mass at any time t of its motion, is

E = ½mvy(t)² + mgy(t)  (2.6)
Substituting the velocity and position from Eqs. (2.3) and (2.4), respectively, into Eq. (2.6) shows that for a particular trajectory, the total energy is constant at all times and depends on the initial conditions through the values of the parameters y(0) and vy(0):

E = ½mvy(0)² + mgy(0)  (2.7)
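The constancy of the energy along a trajectory is easy to check numerically. The short Python sketch below (our own illustration, using the 1 kg mass and 10 m s⁻¹ initial speed of Figure 2.1b) evaluates the analytic solutions of Eqs. (2.3) and (2.4) and confirms that the total energy of Eq. (2.6) does not change in time.

```python
g = 9.8   # m s^-2, magnitude of the gravitational acceleration

def projectile(y0, vy0, t):
    """Analytic solution: vy(t) = vy(0) - g t (Eq. (2.3)) and
    y(t) = y(0) + vy(0) t - (1/2) g t**2 (Eq. (2.4))."""
    return y0 + vy0 * t - 0.5 * g * t**2, vy0 - g * t

def energy(m, y, vy):
    """Total mechanical energy E = (1/2) m vy**2 + m g y (Eq. (2.6))."""
    return 0.5 * m * vy**2 + m * g * y

m, y0, vy0 = 1.0, 0.0, 10.0            # parameters of Figure 2.1b
E0 = energy(m, y0, vy0)                # fixed by the initial conditions
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    y, vy = projectile(y0, vy0, t)
    assert abs(energy(m, y, vy) - E0) < 1e-9   # E is constant in time
```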
The phase space trajectory or streamline of the projectile is determined by explicitly eliminating time from Eqs. (2.3) and (2.4):

py²/2m + mgy = py(0)²/2m + mgy(0) = E  (2.8)
For the motion of a mass in a constant gravitational field, the coordinate y and the “conjugate” momentum py form the phase space of the mechanical system. Trajectories for two sets of initial conditions, or more fundamentally for two specific energy values, are shown in Figure 2.1c. Time does not enter the phase space description but implicitly determines the direction of motion along the trajectory.
The concept of phase space is used extensively when discussing statistical mechanics. For one‐dimensional motion of a single mass in the y‐direction, phase space is two dimensional and consists of coordinate and its conjugate momentum, {y, py}. For three‐dimensional motion of a single mass, the phase space is six dimensional and consists of coordinate–momentum pairs, {x, y, z, px, py, pz}.
The “state” in phase space is determined by the volume element about each phase space point. For example, in the one‐dimensional motion mentioned above, a state is a volume element dy dpy about the point {y, py} shown in Figure 2.1c. States on the same phase space trajectory all have the same energy. All points in phase space correspond to states that belong to a unique trajectory. As we shall see, the smallest phase space volume element (state) is determined by the Heisenberg uncertainty principle, dx·dpx = h/4π.
One of the most important mechanical systems in physics and chemistry is the harmonic oscillator, which describes the motion of a mass m connected to a spring governed by Hooke's law (1660, after the English scientist Robert Hooke), F = −k(x − x0). In the harmonic oscillator, the force is linearly proportional to the displacement ξ = x − x0 of the mass from a relaxed position x0 and points in the direction opposite the displacement, toward the relaxed position x0. The force constant of the spring, k, determines the "stiffness," i.e. how much force must be exerted to extend or compress the spring by unit length. Note that a harmonic spring behaves symmetrically with respect to extension and compression. The potential energy of the spring is a quadratic function of the displacement, U = ½k(x − x0)². These relations are shown in Figure 2.2a.
Figure 2.2 (a) The quadratic potential energy function and linear force function for a one‐dimensional harmonic oscillator with an angular frequency ω = 5 s⁻¹. In the rest state, the mass is at ξ = x − x0 = 0. (b) The time dependence of the displacement ξ and momentum for a particle of mass 1 kg with ξ(0) = 0 and pξ(0) = 1.0 kg m s⁻¹ (full line) and 0.5 kg m s⁻¹ (dashed line). (c) The ξ–pξ ellipses characterizing the phase space trajectory of the harmonic oscillator. The major and minor axes of the ellipse depend on the initial conditions of the spring. All points in the ξ–pξ phase space are covered by trajectories. For a specific spring and mass, the initial conditions determine which elliptical trajectory passes through a given point in phase space.
The one‐dimensional single‐mass harmonic oscillator can also represent the relative motion of two masses connected by a spring with a force constant k; see Appendix 2.A.1.
For the one‐dimensional harmonic oscillator, Newton's second law is written as

m d²x/dt² = −k(x − x0)  (2.9)
This equation is simplified by using the displacement, ξ, as the variable and defining the angular frequency ω = √(k/m) to give

d²ξ/dt² = −ω²ξ  (2.10)
Equation (2.10) is a homogeneous second‐order differential equation with constant coefficients [52]. The general solution of Eq. (2.10) gives the time dependence of the displacement ξ(t) as a sum of complex exponential functions, or equivalently as a sum of sine and cosine functions:

ξ(t) = c1 e^(iωt) + c2 e^(−iωt) = C1 cos(ωt) + C2 sin(ωt) = A sin(ωt + ϕ)  (2.11)
In the final form, the parameters A and ϕ represent the amplitude and phase of the motion, respectively. These solutions can be verified by substituting Eq. (2.11) into Eq. (2.10). The pairs of constants (c1, c2), (C1, C2), or (A, ϕ) characterize the specific trajectory of the mass. The sinusoidal motion in the last expression in Eq. (2.11) gives the harmonic oscillator its name. The frequency and period of the harmonic oscillator are ν = ω/2π and τ = 1/ν, respectively. The time dependence of the velocity of the mass is calculated from the time derivative of Eq. (2.11):

vξ(t) = Aω cos(ωt + ϕ)  (2.12)
The constants (A, ϕ) are determined by two initial conditions, namely, the values of the initial displacement ξ(0) and velocity vξ(0) at t = 0.
As an example of a specific trajectory, consider a mass m = 1 kg connected to a harmonic spring that gives it an angular frequency of ω = 5 s⁻¹. If initially the mass is at x(0) = x0 (i.e. ξ(0) = 0) and has an initial velocity v(0) = 1.0 m s⁻¹, the specific solutions of the harmonic oscillator, Eqs. (2.11) and (2.12), are ξ(t) = 0.2 sin(ωt) and vξ(t) = 1.0 cos(ωt), respectively, shown in Figure 2.2b. A second trajectory with the initial conditions ξ(0) = 0 and vξ(0) = 0.5 m s⁻¹ is also shown in this figure.
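These specific solutions are easy to verify numerically. The Python sketch below (our own illustration) evaluates ξ(t) = 0.2 sin(ωt) and vξ(t) = 1.0 cos(ωt) for ω = 5 s⁻¹ and m = 1 kg, and checks that the total energy is constant and that each phase‐space point lies on the ellipse (ξ/A)² + (pξ/(mωA))² = 1 described in the text.

```python
import math

omega = 5.0    # s^-1, angular frequency
m = 1.0        # kg
A = 0.2        # m, amplitude A = v(0)/omega for xi(0) = 0, v(0) = 1.0 m/s

def xi(t):
    return A * math.sin(omega * t)              # xi(t) = 0.2 sin(5 t)

def v_xi(t):
    return A * omega * math.cos(omega * t)      # v(t) = 1.0 cos(5 t)

# Total energy E = (1/2) m v^2 + (1/2) m omega^2 xi^2 is constant in time
E = 0.5 * m * omega**2 * A**2
for t in (0.0, 0.1, 0.3, 1.0):
    e_t = 0.5 * m * v_xi(t)**2 + 0.5 * m * omega**2 * xi(t)**2
    assert abs(e_t - E) < 1e-12

# Each phase-space point lies on the ellipse (xi/A)^2 + (p/(m omega A))^2 = 1
p = m * v_xi(0.3)
assert abs((xi(0.3) / A)**2 + (p / (m * omega * A))**2 - 1.0) < 1e-12
```

Both checks are the trigonometric identity sin² + cos² = 1 in disguise, which is why they hold to machine precision.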
The total mechanical energy of the harmonic oscillator system is the sum of the kinetic and potential energies determined using Eqs. (2.11) and (2.12):

E = ½mvξ(t)² + ½kξ(t)² = ½mω²A² = ½kA²  (2.13)
The total energy is constant and depends on the initial conditions through the amplitude parameter A.
Elimination of the time variable between Eqs. (2.11) and (2.12) gives the phase space trajectory of the harmonic oscillator:

(ξ/A)² + (pξ/mωA)² = 1  (2.14)
which is an ellipse in {ξ, pξ} phase space. The trajectories for two different initial conditions corresponding to different energy values are shown in Figure 2.2c. Each state in phase space is represented by a volume element dξ dpξ around the point {ξ, pξ} and is associated with a unique trajectory. Note that the phase space trajectory of each mechanical system is a reflection of the specific nature of the forces, or more exactly, the “Hamiltonian” of the system, as shown below.
Determining the spatial and phase space trajectories of a mass subjected to a radially directed force proportional to 1/r² requires considerably greater mathematical effort. This force describes the motion of particles interacting through gravitational and electrostatic forces. For radial 1/r² forces, Newton's second law is

m d²x/dt² = −(K/r²)(x/r),  m d²y/dt² = −(K/r²)(y/r),  with r = √(x² + y²)  (2.15)
Details of the analytical solution of Newton's equations of motion for these cases are given in Appendix 2.A.2, where we prove that the motion of a mass subjected to the radial force in Eq. (2.15) remains confined to the xy‐plane [105,290]. The two equations in Eq. (2.15) cannot be solved directly in the Cartesian coordinate system; however, they can be solved after transformation to polar coordinates {r(t), θ(t)}. The spatial trajectory or orbit of motion of the mass in polar coordinates is

1/r(θ) = (mK/ℓ²)[1 + √(1 + 2Eℓ²/mK²) cos θ]  (2.16)
where ℓ is the angular momentum of the mass with respect to the origin r = 0 (see Eq. (2.A.12)) and E is the energy determined by the initial conditions of motion. See Appendix 2.A.2 for full details. For nonzero angular momenta (i.e. where the mass is not moving radially toward the center of force), the orbit is elliptical if E < 0, parabolic for E = 0, and hyperbolic for E > 0. These three cases are shown in Figure 2.3. States with E < 0 orbit around the origin (one of the foci of the ellipse) and for obvious reasons are called bound states.
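A quick numerical experiment illustrates the bound‐orbit case. The Python sketch below is our own illustration (with assumed units K = m = 1, not values from this book): it integrates the attractive −K/r² force in the xy‐plane with a velocity‐Verlet step and confirms that an E < 0 initial condition stays on a bounded orbit with conserved energy.

```python
import math

K, m = 1.0, 1.0   # assumed units for the force constant and mass

def accel(r):
    """Acceleration from the attractive radial force F = -(K/r^2) r_hat."""
    d = math.hypot(r[0], r[1])
    return (-K * r[0] / (m * d**3), -K * r[1] / (m * d**3))

def step(r, v, dt):
    """One velocity-Verlet step in the xy-plane."""
    a = accel(r)
    r = (r[0] + v[0] * dt + 0.5 * a[0] * dt**2,
         r[1] + v[1] * dt + 0.5 * a[1] * dt**2)
    b = accel(r)
    v = (v[0] + 0.5 * (a[0] + b[0]) * dt, v[1] + 0.5 * (a[1] + b[1]) * dt)
    return r, v

def total_energy(r, v):
    return 0.5 * m * (v[0]**2 + v[1]**2) - K / math.hypot(r[0], r[1])

r, v = (1.0, 0.0), (0.0, 0.9)    # tangential launch below circular speed
E0 = total_energy(r, v)          # E0 < 0: a bound state
rmax = 1.0
for _ in range(20000):
    r, v = step(r, v, 1e-3)
    rmax = max(rmax, math.hypot(r[0], r[1]))
assert E0 < 0 and rmax < 2.0                 # the orbit stays bounded
assert abs(total_energy(r, v) - E0) < 1e-4   # energy is conserved
```

Raising the launch speed above the escape value √(2K/mr) would make E ≥ 0, and the same integration would show r growing without bound, matching the parabolic and hyperbolic cases.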
Figure 2.3 The spatial orbits of a mass moving in a −1/r² force field: elliptical for E < 0, parabolic for E = 0, and hyperbolic for E > 0.
