Many books explain the theory of atomistic computer simulations; this book teaches you how to run them
This introductory "how to" title enables readers to understand, plan, run, and analyze their own independent atomistic simulations, and to decide which method to use and which questions to ask in their research project. It is written in clear and precise language, focusing on a thorough understanding of the concepts behind the equations and of how these are used in the simulations. As a result, readers will learn how to design the computational model and which parameters of the simulations are essential, and will be able to assess whether the results are correct, find and correct errors, and extract the relevant information from the results. Finally, they will know which information needs to be included in their publications.
This book includes checklists for planning projects, analyzing output files, and troubleshooting, as well as pseudokeywords and case studies.
The authors provide an accompanying blog for the book, with worked examples, additional material, and references: http://www.atomisticsimulations.org/.
Page count: 668
Year of publication: 2013
Contents
Preface
References
Color Plates
Part One The World at the Atomic Scale
1 Atoms, Molecules and Crystals
1.1 Length- and Timescales
1.2 Electrons in an Atom
1.3 Local Environment of an Atom
1.4 Most Favorable Arrangement of Atoms
References
2 Bonding
2.1 Electronic Ground State
2.2 Types of Bonds
2.3 Bond Breaking and Creation
2.4 Distortion of Bonds
References
3 Chemical Reactions
3.1 Chemical Equations
3.2 Reaction Mechanisms
3.3 Energetics of Chemical Reactions
3.4 Every (Valence) Electron Counts
3.5 The Energy Zoo
References
4 What Exactly is Calculated?
4.1 What Can Be Calculated?
4.2 What Actually Happens?
4.3 Models and Simulation Cells
4.4 Energies
4.5 Terms
4.6 Liquid Iron: An Example
References
Part Two Introducing Equations to Describe the System
5 Total Energy Minimization
5.1 The Essential Nature of Minimization
5.2 Minimization Algorithms
5.3 Optimize with Success
5.4 Transition States
5.5 Pseudokeywords
References
6 Molecular Dynamics and Monte Carlo
6.1 Equations of Motion
6.2 Time and Timescales
6.3 System Preparation and Equilibration
6.4 Conserving Temperature, Pressure, Volume or Other Variables
6.5 Free Energies
6.6 Monte Carlo Approaches
6.7 Pseudokeywords for an MD Simulation
References
Part Three Describing Interactions Between Atoms
7 Calculating Energies and Forces
7.1 Forcefields
7.2 Electrostatics
7.3 Electronic and Atomic Motion
7.4 Electronic Excitations
References
8 Electronic Structure Methods
8.1 Hartree–Fock
8.2 Going Beyond Hartree–Fock
8.3 Density Functional Theory
8.4 Beyond DFT
8.5 Basis Sets
8.6 Semiempirical Methods
8.7 Comparing Methods
References
9 Density Functional Theory in Detail
9.1 Independent Electrons
9.2 Exchange-Correlation Functionals
9.3 Representing the Electrons: Basis Sets
9.4 Electron–Nuclear Interaction
9.5 Solving the Electronic Ground State
9.6 Boundary Conditions and Reciprocal Space
9.7 Difficult Problems
9.8 Pseudokeywords
References
Part Four Setting Up and Running the Calculation
10 Planning a Project
10.1 Questions to Consider
10.2 Planning Simulations
10.3 Being Realistic: Available Resources for the Project
10.4 Creating Models
10.5 Choosing a Method
10.6 Writing About the Simulation
10.7 Checklists
References
11 Coordinates and Simulation Cell
11.1 Isolated Molecules
11.2 Periodic Systems
11.3 Systems with Lower Periodicity
11.4 Quality of Crystallographic Data
11.5 Structure of Proteins
11.6 Pseudokeywords
11.7 Checklist
References
12 The Nuts and Bolts
12.1 A Single-Point Simulation
12.2 Structure Optimization
12.3 Transition State Search
12.4 Simulation Cell Optimization
12.5 Molecular Dynamics
12.6 Vibrational Analysis
12.7 The Atomistic Model
12.8 How Converged is Converged?
12.9 Checklists
References
13 Tests
13.1 What is the Correct Number?
13.2 Test Systems
13.3 Cluster Models and Isolated Systems
13.4 Simulation Cells and Supercells of Periodic Systems
13.5 Slab Models of Surfaces
13.6 Molecular Dynamics Simulations
13.7 Vibrational Analysis by Finite Differences
13.8 Electronic-Structure Simulations
13.9 Integration and FFT Grids
13.10 Checklists
References
Part Five Analyzing Results
14 Looking at Output Files
14.1 Determining What Happened
14.2 Why Did it Stop?
14.3 Do the Results Make Sense?
14.4 Is the Result Correct?
14.5 Checklist
References
15 What to do with All the Numbers
15.1 Energies
15.2 Structural Data
15.3 Normal Mode Analysis
15.4 Other Numbers
References
16 Visualization
16.1 The Importance Of Visualizing Data
16.2 Sanity Checks
16.3 Is There a Bond?
16.4 Atom Representations
16.5 Plotting Properties
16.6 Looking at Vibrations
16.7 Conveying Information
16.8 Technical Pitfalls Of Image Preparation
16.9 Ways and Means
References
17 Electronic Structure Analysis
17.1 Energy Levels and Band Structure
17.2 Wavefunctions and Atoms
17.3 Localized Functions
17.4 Density of States, Projected DOS
17.5 STM and CITS
17.6 Other Spectroscopies: Optical, X-Ray, NMR, EPR
References
18 Comparison to Experiment
18.1 Why It Is Important
18.2 What Can and Cannot Be Directly Compared
18.3 How to Determine Whether There is Agreement with Experiment
18.4 Case Studies
References
Appendix A UNIX
A.1 What’s in a Name
A.2 On the Command Line
A.3 Getting Around
A.4 Working with Data
A.5 Running Programs
A.6 Remote Work
A.7 Managing Data
A.8 Making Life Easier by Storing Preferences
A.9 Be Careful What You Wish For
Appendix B Scientific Computing
B.1 Compiling
B.2 High Performance Computing
B.3 MPI and mpirun
B.4 Job Schedulers and Batch Jobs
B.5 File Systems and File Storage
B.6 Getting Help
Index
Related Titles
Bovensiepen, U., Petek, H., Wolf, M. (eds.)
Dynamics at Solid State Surfaces and Interfaces, 2 Volume Set
2012, ISBN: 978-3-527-40938-9
Gujrati, P. D., Leonov, A. I. (eds.)
Modeling and Simulation in Polymers
2010, ISBN: 978-3-527-32415-6
Brillson, L. J.
Surfaces and Interfaces of Electronic Materials
2010, ISBN: 978-3-527-40915-0
Harrison, P.
Quantum Wells, Wires and Dots: Theoretical and Computational Physics of Semiconductor Nanostructures
2009, ISBN: 978-0-470-77097-9
van Santen, R. A., Sautet, P. (eds.)
Computational Methods in Catalysis and Materials Science: An Introduction for Scientists and Engineers
2009, ISBN: 978-3-527-32032-5
Velten, K.
Mathematical Modeling and Simulation: Introduction for Scientists and Engineers
2009, ISBN: 978-3-527-40758-3
Ross, R. B., Mohanty, S. (eds.)
Multiscale Simulation Methods for Nanomaterials
2008, ISBN: 978-0-470-19166-8
Höltje, H.-D., Sippl, W., Rognan, D., Folkers, G.
Molecular Modeling: Basic Principles and Applications
2008, ISBN: 978-3-527-31568-0
Cramer, C. J.
Essentials of Computational Chemistry: Theories and Models
2004, ISBN: 978-0-470-29806-0
The Authors
Dr. Veronika Brázdová
University College London
Dept. of Physics & Astronomy
London, United Kingdom
Dr. David R. Bowler
University College London
Dept. of Physics & Astronomy
London, United Kingdom
Cover Image
Veronika Brázdová and David R. Bowler.
All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.
Library of Congress Card No.:
applied for
British Library Cataloguing-in-Publication Data:
A catalogue record for this book is available from the British Library.
Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.
© 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany
All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers.
Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.
Print ISBN 978-3-527-41069-9
ePDF ISBN 978-3-527-67184-7
ePub ISBN 978-3-527-67183-0
mobi ISBN 978-3-527-67182-3
oBook ISBN 978-3-527-67181-6
Cover Design Grafik-Design Schulz, Fußgönheim
Typesetting le-tex publishing services GmbH, Leipzig
To Erica, Rob, Chris & Jon Bowler, and Jiří Dvořák.
Preface
We have had many occasions to explain how atomistic computer simulations work, and the strengths and limitations of different methods. This has happened during collaborations with experimentalist colleagues and while teaching students who were starting research projects under our supervision. When faced with questions, a natural first response is to look for a suitable textbook to recommend. However, we were not able to find one, and this led to our planning and writing of this book. Our intended audience, then, encompasses anyone who wants to learn how to perform atomistic computer simulations, as well as those who want to understand how they work, and where they can be relied upon, without necessarily using the techniques.
With this book, we are not aiming to provide a detailed guide to the theory underlying the many different atomistic simulation techniques available. Nor are we trying to give a recipe book of how to write or implement particular algorithms. We have written a textbook that aims to provide the reader with the knowledge required to design and execute any project involving atomistic simulations. We want this to be a practical guide which will present the best practices in simulations. To this end, we have included pseudokeywords and checklists in key chapters. The pseudokeywords listed are the absolutely minimal set of keywords which must be specified for each type of simulation: on their own, they will not guarantee that your simulation runs correctly, but they must be specified to let the simulation code know what type of simulation you intend to do. We use the term pseudokeywords rather than keywords, because the actual words will differ from simulation code to simulation code. Checklists have been successfully used in environments as diverse as operating theaters and airplanes. They are designed to prevent errors caused by memory lapses and accidental omissions. Again, they will not make a simulation magically correct, but they will prevent a lot of simple errors and wasted computer time.
We have divided this book into five parts. In the first part, we cover the basic physics and chemistry which are required to understand atomistic computer simulations, at a level that a final year undergraduate should understand, and the ideas underlying atomistic simulation techniques. We move on to describing the fundamental techniques of atomistic simulations in the second part: total energy minimization and dynamics. The third part is the most technical, and describes the theory of molecular mechanics and electronic structure techniques, at sufficient depth to allow the reader to understand how the simulations work, and what approximations are made in the different approaches. Part Four is the most practical, and addresses the problem of planning a project involving atomistic simulations, choosing an appropriate set of atomic coordinates, and the detailed specification and testing of particular simulations. The final part is concerned with analysis: how to take the numbers produced by a simulation code, and produce valuable data. Our aim in this part in particular is to encourage a close engagement with relevant experiments.
This book is not a traditional textbook, and does not feature exercises or problems. The whole book might be seen as an exercise: our desire is that the reader starts experimenting with at least one atomistic computer simulation code while reading. We will, however, be writing regular blog entries on the website for the book (www.AtomisticSimulations.org) where we will discuss both recent research papers that present interesting results, and exercises for those learning the techniques of atomistic computer simulations. We encourage all readers to check the website regularly for updates.
Any book like this has been helped by many different people, and we would like to acknowledge those who have assisted us along the way. First, we should acknowledge those experimental colleagues and students whose questions and interest gave rise to this book. Mike Gillan and the late Marshall Stoneham have both been wonderful mentors who have inspired and supported us. Eduardo Hernández, Angelos Michaelides, Matthew Bowler, James Owen, Ian Ford, Andy Gormanly, Kyle Rogers, Dorothy M. Duffy, Jamieson Christie and Antonio S. Torralba have all read some or all of the book at different stages, and provided invaluable comments. James Owen, Matthew Bowler, Andrew Morris, Dario Alfè, Joel Ireta, Ana Sofia Villa Verde, Peter Rich and Amandine Maréchal have all kindly contributed images or data for figures. Fernando Rentas and Matthew Gilbride advised on the intricacies of V-Ray and Rhinoceros 3D. Our editors, Ulrike Werner and Valerie Molière at Wiley, have been patient and supportive. Finally, VB would like to thank Joachim Sauer and M. Verónica Ganduglia-Pirovano for insisting that one should always start with a chemical equation, and DRB would like to thank Michael Bowler, whose Baryon Production Coloring Book was sadly never published, but who nevertheless believed that physics can be fun, and should always be done properly.
The figures in this book were prepared using Rhinoceros 3D, V-Ray, VMD [1], OpenDX, GIMP, and XMGrace.
London and Bordeaux, August 2012
Veronika Brázdová and David R. Bowler
1 Humphrey, W., Dalke, A., and Schulten, K., (1996) VMD – Visual Molecular Dynamics, J. Mol. Graph., 14, 33–38
Many of the macroscopic properties of matter can be predicted successfully by performing atomistic computer simulations. Indeed, the atomic theory of matter and its outworking through statistical mechanics can be seen as one of the most important scientific theories ever proposed. In this part of the book, we establish a basic foundation, considering the appropriate length and timescales, the influence of electrons and the environment on atoms, and bonding and chemical reactions. We also discuss what goes into an atomistic computer simulation, what happens during a simulation, and present an example of how atomistic simulations can be used to understand large scale problems.
The world around us is composed of atoms in continual vibrational motion. Each atom consists of a positive nucleus and negative electrons. From here on, we will use "atom" to mean both the electrons and the nucleus. Different elements and their interactions give rise to everything in the material world. Unless the energies involved are high enough to break atoms themselves apart, these interactions happen through electrons and nuclei. An investigation of material properties, chemical reactions, or diffusion on surfaces, to name but a few, must therefore involve a description of atoms and their interactions, which are mediated by electrons. A description of these interactions will contain the electrons either explicitly or implicitly.
Atoms and electrons cannot be comprehended with our normal senses, though there are many analogies which are used to help us understand them, such as spheres joined with springs or rigid bonds. We must not confuse these analogies with the real objects. In atomistic simulations, we use mathematical models to describe the behavior of atoms and how they interact. In experiments, we can only observe the response of electrons or nuclei to probes, which give characteristic signals. These signals can be calculated from atomistic simulations, and from these simulations we can draw conclusions about the material world.
In this chapter, we will look at the time- and lengthscales involved and the basic concepts forming the foundation of any atomistic simulation. We will also consider the connections between the mathematical approach and the real world.
In biochemistry, Ångströms are used for the local structure: adenylate kinase, a small monomeric enzyme isolated from E. coli, is about 45 Å × 45 Å × 43 Å [1]. More commonly used, however, for the overall size of biological macromolecules is the total atomic mass, given in atomic mass units (amu, or Dalton, Da). An adenylate kinase would weigh 20–26 kDa.
Crystals are macroscopic objects, but they are composed of small sets of atoms repeated periodically in all three dimensions. In a silicon crystal, this set of atoms fits within a cube with a 5.431 Å side. In complex crystals such as zeolites, the dimensions of the repeated unit can be on the order of a few nm.
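The repeat unit fixes macroscopic quantities directly. As a quick illustration (assuming the standard diamond structure of silicon, with eight atoms per conventional cubic cell, a crystallographic fact not stated above), the number density follows from the 5.431 Å cell:

```python
# Number density of crystalline silicon from its cubic repeat unit.
# Assumes the diamond structure: 8 atoms per conventional cell (a
# standard crystallographic fact, not given in the text above).

A_TO_CM = 1.0e-8           # 1 Angstrom in cm
a = 5.431                  # lattice parameter of silicon, in Angstrom
atoms_per_cell = 8

cell_volume_cm3 = (a * A_TO_CM) ** 3
density = atoms_per_cell / cell_volume_cm3   # atoms per cm^3

print(f"{density:.2e} atoms/cm^3")   # about 5.0e22 atoms/cm^3
```

This kind of back-of-the-envelope check is useful when validating a simulation cell against tabulated crystallographic data.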
While length is a direct variable in atomistic simulations, time may not be. This will depend on what we are trying to simulate. If we need the direct time evolution of a system, or more often simply to sample many states of a system, then time will be important and we will use a technique such as Molecular Dynamics (see Chapter 6).
In many simulations, by contrast, the details of the dynamical process are not as important as the energies and structures involved. It is often possible to save computational time and resources by calculating a series of static "snapshots" of the atomic and electronic structure rather than modeling the whole process, for example, the initial and final state of a chemical reaction and the transition state (Chapter 3). Moreover, the range of timescales relevant in one system may be too great to allow simulations of the time evolution of all events, because the sampling of time in the simulation is set by the fastest process. Atoms vibrate on a timescale of 10–100 fs (1 fs = 10⁻¹⁵ s). It takes about 100 fs to break an atomic bond [2, 3]; the larger a system is, the more involved atomic movements can take place, and the longer the relevant timescales are. For instance, pico- and nanosecond local fluctuations of adenylate kinase facilitate large-scale micro- to millisecond movements that have been linked to its enzymatic function [4]. In general, different functional motions of proteins range from femtoseconds to hours [5].
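This separation of timescales is what makes direct simulation expensive: the timestep must resolve the fastest vibration, while the event of interest may be many orders of magnitude slower. A back-of-the-envelope sketch (a 1 fs timestep is a typical choice; the target times are illustrative values taken from the discussion above):

```python
# How many MD steps are needed to reach a given physical time if the
# timestep must resolve ~10-100 fs atomic vibrations? A 1 fs timestep
# is a typical (illustrative) choice.

timestep_fs = 1.0
targets = {
    "bond breaking (~100 fs)": 100.0,
    "local fluctuation (1 ns)": 1.0e6,    # 1 ns = 10^6 fs
    "enzyme motion (1 ms)": 1.0e12,       # 1 ms = 10^12 fs
}

for label, t_fs in targets.items():
    steps = t_fs / timestep_fs
    print(f"{label}: {steps:.0e} steps")
```

A millisecond event would need on the order of 10¹² steps at this timestep, which is why static snapshots or multiscale approaches are often preferred.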
Where do atomistic simulations fit among other computational methods? The length- and timescales accessible to them will depend on the available computational resources, but also, critically, on the method chosen. In general, the more accurate the method, the more computationally demanding it is, leading to smaller system sizes and shorter timescales. In particular, methods that do not include electrons explicitly, such as forcefield methods, allow much larger system sizes and longer timescales than electronic structure methods, which do include electrons. Highly accurate quantum chemical methods, for example, include excitations and mixed spin states, but the system size is limited to a few dozen atoms. Electronic structure methods used in materials science, in particular density functional theory (DFT), can routinely handle hundreds of atoms. DFT has been applied to two million atoms [6]; however, such calculations are not yet routine and the approach requires significant modifications of the standard DFT algorithms. Classical molecular dynamics (MD) simulations, which approximate the effect of electrons, are routinely applied to systems of hundreds of thousands of atoms and are performed over picosecond and even nanosecond timescales. MD simulations on millions of atoms (see, e.g., [7]) and even on 10⁸ atoms have also been performed [8].
Using classical molecular dynamics, it is also possible to run simulations up to the microsecond range. However, there is no guarantee that even on such a timescale the simulated event will occur, as was demonstrated, for example, in [9], where, despite a heroic effort, the native state of the protein was not found. It is even more complicated to simulate events with differing characteristic timescales in the same system, because the time sampling would have to be fine enough to sample the fastest event. Multiscale methods are used to deal with these difficulties, but they are beyond the scope of this book.
Developments in atomistic computational methods and in computer power over the last few decades have increased the length- and timescales accessible to simulation. On the other side of the theory-experiment divide, developments in experimental techniques and sample preparation have allowed access to much smaller length- and timescales, as well as well-defined, small, model systems. The gap between experiment and simulation is therefore narrowing and it is often possible to investigate the same atomic system by experimental probes and by computer simulations.
The electronic structure of atoms determines their isolated properties and how they interact to form molecules and solids, as well as determining the structure of the periodic table of the elements. In this section, we will outline how electrons are arranged in an isolated atom.
The negatively charged electrons are distributed around the positively charged nucleus, also called the ionic core. Note the careful avoidance of the word “orbit” in the previous sentence. Electrons are quantum objects and we cannot follow their movement from place to place as we would with larger objects. The closest we can get to describing their movement is to calculate the probability that an electron will occupy a particular region of space a given distance from the nucleus and from other electrons. It is worth repeating: electrons do not behave as “normal” objects.
A full description of the electronic structure of an atom would involve a many-body function of the coordinates of all the electrons, the wavefunction. The wavefunction is a solution of the Schrödinger equation, the equation that describes a quantum system. However, it is impossible to solve this analytically for more than two particles, so an independent electron picture is often used. The solution is a set of discrete (i.e., quantum) states the electron can occupy in an atom. While electrons are indistinguishable particles, the quantum states they occupy in an atom do have different properties. Moreover, if two electrons are swapped in a system, the total wavefunction changes sign; we say that electrons are fermions. This antisymmetry of the wavefunction leads to the Pauli exclusion principle: two electrons cannot occupy the same state in one system. How can we describe an electronic state, and what does it have to do with bonds between atoms?
The square of the wavefunction at a point in space gives the probability density of finding the electron there. The sum over all electrons gives the total electronic charge density at that point. The electron is most likely to be found in a region of space called the atomic orbital. Again, the atomic orbital is not the electron: it is a purely mathematical construct, not a physical object, even though atomic orbitals are routinely plotted looking like solid objects (such as in Figure 1.1 below).
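For the hydrogen 1s orbital this can be made concrete. The sketch below uses the textbook form ψ₁ₛ ∝ e^(−r/a₀) (in units of the Bohr radius a₀), builds the radial probability distribution P(r) = 4πr²|ψ|², and locates its maximum, which falls exactly at r = a₀:

```python
import math

# Radial probability distribution of the hydrogen 1s orbital, in units
# of the Bohr radius a0. psi_1s(r) = exp(-r/a0) / sqrt(pi * a0^3); the
# square of the wavefunction is a probability *density*, and the radial
# distribution P(r) = 4 pi r^2 |psi|^2 tells us where the electron is
# most likely to be found as a function of distance from the nucleus.

def radial_probability(r):
    psi = math.exp(-r) / math.sqrt(math.pi)   # a0 = 1
    return 4.0 * math.pi * r**2 * psi**2

# Scan r and find the most probable radius numerically.
rs = [0.001 * i for i in range(1, 5001)]      # 0 < r <= 5 a0
r_peak = max(rs, key=radial_probability)

print(f"most probable radius: {r_peak:.3f} a0")
```

The numerical maximum reproduces the analytic result, d/dr [4r²e^(−2r)] = 0 at r = a₀: the density |ψ|² peaks at the nucleus, but the most probable *radius* is one Bohr radius out.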
Figure 1.1 Atomic orbitals. Dark and light indicate the sign of the wavefunction.
These three quantum numbers fully determine an atomic orbital, but one atomic orbital can accommodate two electrons and so we need a fourth quantum number to distinguish the electrons within the orbital. It is called the spin quantum number, ms, and is the projection of the intrinsic angular momentum of the electron (its spin) along an axis. It is usually called simply spin. The spin of an electron can only have two values, conventionally known as up and down and often denoted with up and down arrows, ↑ and ↓.
It should now be clear that not all atomic orbitals are created equal. How, then, will the electrons fill them? There are three rules that govern the electronic structure of an atom. The first is the Pauli exclusion principle. The second is the Aufbau principle (Aufbau is German for "building up"), which states that the lowest-energy orbitals are always filled first. The third is Hund's rule: if more than one empty orbital of equal energy is available (such as the three p orbitals of the same shell), each of them is first filled with one electron until they are all half-full, rather than creating a completely filled orbital and a completely empty one.
These three rules, together with the four quantum numbers, lead to the electronic structure of atoms as we know it and are reflected in the periodic table of elements. The hydrogen atom has one electron and in its lowest energy configuration (its ground state), it occupies the first orbital to be filled, the 1s orbital. The electronic configuration of the H atom is written as 1s¹, and that of He, which has two electrons, both in the 1s orbital, as 1s². Once the 1s orbital is fully filled, the 2s and then the 2p orbitals are filled in heavier atoms that have more electrons. In the periodic table, the elements are arranged in rows according to the principal quantum number of the highest filled orbital, progressing from left to right as more electrons are added to the orbitals. The first shell can hold up to two electrons in its 1s orbital, the second shell up to eight (two in 2s and two in each of the three 2p orbitals), the third up to 18 (two in 3s, six in 3p, and ten in 3d). However, because 3d orbitals are higher in energy than 4s orbitals, the 4s orbitals are filled first. Thus, potassium in its ground state does not have the electronic configuration of argon plus one 3d electron, but rather that of argon plus one 4s electron. Only when the 4s orbital is fully filled do additional electrons occupy the 3d orbitals, though there are exceptions: Cr, for instance, has the configuration 3d⁵4s¹. Elements that have a partially filled d shell, or that have a partially filled d shell when an electron is removed, are called transition metal elements.
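The filling order described above (4s before 3d, and so on) is captured by the Madelung rule: subshells fill in order of increasing n + l, with smaller n first at a tie. A sketch applying this rule follows; it reproduces potassium correctly but, like any simple rule, misses exceptions such as Cr:

```python
# Ground-state electron configuration from the Madelung (n + l) rule:
# subshells fill in order of increasing n + l; ties are broken by
# smaller n. This reproduces the 4s-before-3d ordering for potassium,
# but does NOT capture exceptions such as Cr (really 3d5 4s1).

L_LABELS = "spdf"
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

# All subshells up to n = 7, sorted by the Madelung rule.
subshells = sorted(
    ((n, l) for n in range(1, 8) for l in range(min(n, 4))),
    key=lambda nl: (nl[0] + nl[1], nl[0]),
)

def configuration(z):
    """Electron configuration of a neutral atom with z electrons."""
    parts = []
    for n, l in subshells:
        if z <= 0:
            break
        fill = min(z, CAPACITY[L_LABELS[l]])
        parts.append(f"{n}{L_LABELS[l]}{fill}")
        z -= fill
    return " ".join(parts)

print(configuration(19))  # potassium: 1s2 2s2 2p6 3s2 3p6 4s1
```

Note that the rule correctly places potassium's last electron in 4s, not 3d, exactly as argued in the text.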
The periodic table is composed of blocks in which elements are grouped depending on the shell into which the last electron has been added. Elements in the first two columns form the s-block and their highest filled or partially filled orbital is an s orbital. p-block elements are grouped in the last six columns of the table and have between one and six electrons in a p orbital. The d-block appears in the table only from row four because the first d shell is 3d and it is filled after the 4s shell. As we proceed through the table row by row, from left to right, we therefore first add electrons to an s shell and then to the previous d shell, with the exception of elements like Cr.
Elements in the last column of each l-block have the corresponding orbital fully filled: group 2 elements (elements in the second column) have a fully filled outer s shell, group 12 elements, at the end of the d-block, have a fully filled outer d shell. Group 18 elements, at the end of the p-block and of the periodic table itself, have all their orbitals fully filled; for this reason, helium is also there, even though it is an s-block element. There is more to the periodic table and its blocks than taxonomy, as we shall see in the next section.
The arrangement of atoms in a system is critical to its stability, reactivity, and all other properties and is therefore one of the main topics of interest in atomistic simulations. The overall atomistic and electronic structure depends on the local environment and we will now look at the ways to describe both.
The local environment of an atom, or the atoms that are closest to it and their positions, is a direct consequence of the type of elements involved and their electronic structure. The electronic structure of a set of atoms will differ from the electronic structure of isolated atoms as the atoms seek the most stable arrangement. As with isolated atoms above, we will work with an independent electron picture, though in reality, many-body wavefunctions are required. The electron distribution in a molecule can be described with molecular orbitals rather than with overlapping atomic orbitals. Similar to atomic orbitals, molecular orbitals are mathematical functions that express the probability that an electron will occupy a region in space. They can be approximated by a linear combination of atomic orbitals (LCAO) and used to describe different types of chemical bonding. The equivalent in crystals are bands, which are of infinite extent in a perfect crystal. Both of these functions are also used in atomistic simulations either by themselves or as starting points for more sophisticated computational methods. In this section, we will review the quantum mechanical effects underlying chemical bonding, rather than bonds themselves. Chemical bonds, which are another concept used to describe electron distribution in molecules and crystals, are discussed in detail in Chapter 2.
We will only consider molecule formation from isolated atoms here for reasons of simplicity, but the same description can be applied to complexes of any size formed from other, smaller ones. A molecule with an appreciable lifetime forms when it is more stable than the isolated atoms that form it. Not all molecular orbitals are more stable than the atomic orbitals from which they are formed. Consider, for example, the H2 molecule: the two hydrogen 1s orbitals combine to form two molecular orbitals. The lower-energy orbital has the maximum electron density between the two ionic cores and is called a bonding orbital. The molecular orbital higher in energy than the original atomic orbitals has the maximum electron density close to the ionic cores rather than between them, and is called an antibonding orbital. In the H2 molecule, the antibonding orbital will be empty when the molecule is in its ground state. If the molecule absorbs energy equivalent to the energy difference between the bonding and antibonding state, one electron becomes excited and moves from the bonding orbital to the antibonding orbital. In a general system, there must be more electrons in bonding orbitals than in antibonding orbitals for it to be stable.
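The bonding/antibonding splitting appears already in the simplest possible model: two identical 1s levels of on-site energy α coupled by a (negative) resonance integral β, a Hückel-type sketch. The numbers below are illustrative, not fitted H2 data. Diagonalizing the 2×2 Hamiltonian gives one level at α + β (bonding, lower) and one at α − β (antibonding, higher):

```python
import numpy as np

# Two-level (Huckel-type) model of H2: two identical atomic levels with
# on-site energy alpha, coupled by a resonance integral beta < 0. The
# numbers below are illustrative, not fitted H2 parameters.

alpha = -13.6   # eV, hydrogen 1s level
beta = -3.0     # eV, coupling between the two 1s orbitals (assumed)

H = np.array([[alpha, beta],
              [beta, alpha]])

bonding, antibonding = np.sort(np.linalg.eigvalsh(H))

print(f"bonding:     {bonding:.1f} eV")       # alpha + beta = -16.6 eV
print(f"antibonding: {antibonding:.1f} eV")   # alpha - beta = -10.6 eV
```

With both electrons in the bonding level, the molecule gains 2β relative to the separated atoms, which is why H2 is stable and why occupying the antibonding level (by excitation) weakens the bond.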
The relative energies and occupations of atomic and molecular orbitals are often plotted in diagrams such as the one for the H2 molecule in Figure 1.2: the singly occupied atomic orbitals of the two hydrogen atoms lie higher in energy than the bonding molecular orbital, while the antibonding orbital lies higher still.
Figure 1.2 Occupancies and relative energies of atomic orbitals in H atoms and of molecular orbitals in the H2 molecule.
Until now, we have ignored the fact that even atomic orbitals within one atom can combine, or hybridize. Similar to atomic orbitals, hybridization is a concept used to describe how electrons from isolated atoms rearrange to form bonds. It is best described by an example: consider the carbon atom in the methane molecule, CH4. Carbon has only two unpaired electrons in its 2p orbitals, yet the methane molecule has four equivalent bonds to hydrogen atoms, not just two. The reason is again stabilization. In a simple picture, one of the carbon 2s electrons is moved into a 2p orbital, leaving the carbon atom with four unpaired electrons, rather than two, each occupying an orbital that is a combination of the original 2s and 2p orbitals. These four orbitals are now energetically equivalent, have the same shape, and point towards the corners of a tetrahedron (see Figure 1.3a). In the simple picture, the energy cost of the hybridization is more than offset by the energy gain in forming four C–H bonds. This type of hybridization is called sp3 hybridization because of the number and type of orbitals involved. Similarly, sp2 orbitals are formed by one s and two p orbitals and lead to a planar arrangement of atoms, such as in gas-phase aluminum trichloride, AlCl3. In sp hybridization, one s and one p orbital combine, leading to a linear molecule such as gas-phase beryllium hydride, BeH2.
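The tetrahedral geometry of the sp3 hybrids can be checked with a small calculation: the four hybrids point along (1,1,1), (1,−1,−1), (−1,1,−1), and (−1,−1,1), and the angle between any pair is arccos(−1/3) ≈ 109.47°, the familiar H–C–H angle in methane:

```python
import math

# Angle between sp3 hybrid orbitals. The four hybrids point toward
# alternating corners of a cube, i.e. the corners of a tetrahedron.
directions = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

# Every pair of hybrids makes the same angle: arccos(-1/3).
angles = {round(angle_deg(u, v), 2)
          for i, u in enumerate(directions)
          for v in directions[i + 1:]}

print(angles)   # {109.47}
```

All six pairs give the same angle, confirming that the four hybrids are geometrically equivalent.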
Figure 1.3 Hybridization of atomic orbitals and example molecules with the corresponding hybridization: (a) sp3 orbitals; the cylinders show their relative position in a methane (CH4) molecule. (b) methane molecule and the schematic orientation of its sp3 orbitals. (c) sp2 hybridization and an AlCl3 molecule. (d) sp hybridization and a BeH2 molecule. Dark and light indicate the sign of the wave-function.
We have seen above that the spatial orientation of atomic orbitals translates to the spatial orientation of bonds and that we can expect particular bond angles when particular types of orbitals participate in a bond. Bond lengths, on the other hand, are determined by the relative strengths of attractive and repulsive forces between the atoms, classical as well as quantum mechanical (Chapter 2). The equilibrium bond length is the optimum distance between two atoms: in Figure 1.4, the bond length of the H–H bond in an H2 molecule corresponds to the minimum of energy as a function of distance between the two H atoms.
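The idea that the equilibrium bond length sits at the minimum of the energy-versus-separation curve can be sketched numerically. The Morse potential used below is a standard model form, not taken from this book, and the H2 parameters (well depth, width, equilibrium separation) are approximate literature values chosen purely for illustration:

```python
import math

# Morse potential as a simple model of the H2 energy curve in Figure 1.4.
# Parameters are approximate literature values for H2 (an assumption for
# illustration): well depth D_E in eV, width A in 1/Angstrom, equilibrium
# separation R_E in Angstrom.
D_E, A, R_E = 4.75, 1.94, 0.741

def morse(r):
    """Energy (eV) relative to two isolated H atoms at separation r (Angstrom)."""
    return D_E * (1.0 - math.exp(-A * (r - R_E))) ** 2 - D_E

# Scan the energy curve and locate its minimum: the equilibrium bond length.
separations = [0.3 + 0.001 * i for i in range(1700)]  # 0.3 ... 2.0 Angstrom
r_min = min(separations, key=morse)
print(f"equilibrium bond length ~ {r_min:.3f} Angstrom, E = {morse(r_min):.2f} eV")
```

The scan recovers the equilibrium separation that was built into the model; a real simulation code finds the same minimum by following forces rather than scanning.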
Every atomistic simulation code requires information about the position of all atoms in the system, but for analysis of atomic structure, a description in terms of interatomic distances, angles, and dihedral angles is more useful. Bonding, mentioned only briefly here, is discussed fully in Chapter 2. There are terms commonly used for particular configurations of atoms; an atom bound to n other atoms is described as n-fold coordinated. (In complex solid systems, the coordination of an atom is sometimes determined as the number of its closest neighbors, without necessarily considering bonding.) The configuration of the four-fold coordinated carbon atom in the methane molecule is referred to as tetrahedral because the four hydrogen atoms form a tetrahedron. The same tetrahedral arrangement of atoms and bonding is found in diamond and silicon. An octahedrally coordinated atom is bound to six atoms that form an octahedron around it. These and other commonly encountered configurations are shown in Figure 1.5. They may occur both in small molecules and as parts of larger systems. Atoms with higher coordination numbers tend to adopt less symmetrical arrangements of their neighbors.
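Determining coordination numbers from positions alone can be sketched by counting neighbors within a distance cutoff. The idealized methane geometry (C–H bond length 1.09 Å) and the 1.2 Å cutoff below are choices made for this illustration only:

```python
import itertools
import math

# Idealised methane geometry: C at the origin, four H atoms at the corners
# of a tetrahedron, C-H distance 1.09 Angstrom (illustrative values).
s = 1.09 / math.sqrt(3.0)
atoms = {
    "C":  (0.0, 0.0, 0.0),
    "H1": ( s,  s,  s),
    "H2": ( s, -s, -s),
    "H3": (-s,  s, -s),
    "H4": (-s, -s,  s),
}

def coordination(positions, cutoff=1.2):
    """Count, for each atom, the neighbours closer than the cutoff (Angstrom)."""
    counts = {name: 0 for name in positions}
    for (n1, p1), (n2, p2) in itertools.combinations(positions.items(), 2):
        if math.dist(p1, p2) < cutoff:
            counts[n1] += 1
            counts[n2] += 1
    return counts

print(coordination(atoms))  # carbon is four-fold coordinated, each H one-fold
```

Note that this purely geometric criterion is exactly the "closest neighbors" definition mentioned above: it says nothing about whether a bond is actually present.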
Figure 1.4 Energy of the H2 molecule as a function of H–H separation. The dashed axis is the reference energy of two H atoms.
Figure 1.5 Examples of the different local coordinations of atoms. Linear: C2H2 molecule. Tetrahedral: CH4 molecule. Trigonal planar: AlCl3 molecule. Trigonal bipyramidal: PCl5 molecule. Octahedral: Bi atom and its closest neighbors in Bi crystal.
In atomistic simulations, we need to put a number on the stability of each atomic system, and this measure of stability is called the total energy. Total energies allow us to compare different atom and electron configurations, calculate bond strengths and the energy gain or loss caused by changes in the system, study the mechanisms of chemical reactions, calculate forces on individual atoms, and follow the evolution of the system in time.
The total energy of a system comes from both kinetic and potential terms. We must consider the kinetic energy of the nuclei and the electrons. The potential energy comes from the interactions between the particles, and includes interactions between nuclei, interactions between electrons and interactions between nuclei and electrons. We can write the total energy as the sum of these terms:
\[ E_{\mathrm{tot}} = T_{\mathrm{n}} + T_{\mathrm{e}} + V_{\mathrm{nn}} + V_{\mathrm{ee}} + V_{\mathrm{ne}} \tag{1.1} \]
The zero-point energy (ZPE) of the nuclei should also be included in the total energy. This is due to the residual motion of the nuclei coming from their quantum nature. The ZPE for an atom decreases with increasing atomic mass, and is rarely important in atomistic simulations, except in systems involving very light elements.
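The mass dependence of the ZPE can be made quantitative with the harmonic-oscillator expression ZPE = ħω/2, ω = √(k/μ). The H–H force constant used below (~570 N/m) is an approximate literature value, used here only as an illustrative assumption; comparing H2 with its heavier isotopologue D2 shows the 1/√mass scaling:

```python
import math

HBAR_EVS = 6.582119569e-16   # reduced Planck constant in eV*s
AMU_KG = 1.66053906660e-27   # atomic mass unit in kg

def zpe_ev(force_constant_n_per_m, reduced_mass_amu):
    """Zero-point energy hbar*omega/2 of a harmonic oscillator, in eV."""
    omega = math.sqrt(force_constant_n_per_m / (reduced_mass_amu * AMU_KG))
    return 0.5 * HBAR_EVS * omega

# H-H force constant ~570 N/m (approximate literature value, an assumption
# for illustration). Reduced masses: H2 ~0.504 amu, D2 ~1.007 amu.
zpe_h2 = zpe_ev(570.0, 0.504)
zpe_d2 = zpe_ev(570.0, 1.007)
print(f"ZPE H2 ~ {zpe_h2:.3f} eV, D2 ~ {zpe_d2:.3f} eV")
```

The ZPE comes out at a few tenths of an eV for H2, comparable to bond-energy differences, which is why it matters for the lightest elements and rarely elsewhere.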
The terms in the total energy could, in principle, be calculated from the Schrödinger equation for the entire system of electrons and ions:
\[ \hat{H}\,\psi = E\,\psi \tag{1.2} \]
Here, E and ψ are the eigenvalue and eigenfunction of the Hamiltonian Ĥ, respectively.
We have already encountered the wavefunction of an electron, but in Eq. (1.2), the wavefunction is a function of all electrons and all nuclei and presents a many-body problem that is intractable without approximations. Approximations, often very drastic ones, to the Schrödinger equation are at the heart of all atomistic simulations, no matter whether they treat electrons explicitly or only account for them by parametrizing the interaction between the atoms.
The Hamiltonian operator Ĥ is analogous to the classical Hamiltonian: it is a sum of a kinetic energy term and a potential energy term:
\[ \hat{H} = \hat{T} + \hat{V} \tag{1.3} \]
The kinetic energy operator is a sum of kinetic energy contributions from electrons and from nuclei. The potential energy operator includes classical electrostatic interactions as well as purely quantum mechanical terms and any external fields.
In physics, the most common energy units are the electron volt (eV) and hartree (Ha), and in chemistry it is the kilojoule per mole (kJ/mol) or kilocalorie per mole (kcal/mol). One hartree is ∼ 27.211 eV and 1 eV is ∼ 96.485 kJ/mol.
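Since results from different codes and communities come in different units, conversions such as these are needed constantly. A minimal sketch, using the conversion factors quoted above (1 kcal = 4.184 kJ by definition):

```python
# Energy unit conversions used throughout atomistic simulations.
# Factors as quoted in the text: 1 Ha ~ 27.211 eV, 1 eV ~ 96.485 kJ/mol;
# 1 kcal = 4.184 kJ by definition.
HARTREE_TO_EV = 27.211
EV_TO_KJ_PER_MOL = 96.485
KJ_TO_KCAL = 1.0 / 4.184

def hartree_to_kcal_per_mol(e_ha):
    """Convert an energy in hartree to kcal/mol."""
    return e_ha * HARTREE_TO_EV * EV_TO_KJ_PER_MOL * KJ_TO_KCAL

print(f"1 Ha = {hartree_to_kcal_per_mol(1.0):.1f} kcal/mol")
print(f"1 kcal/mol = {4.184 / EV_TO_KJ_PER_MOL:.4f} eV")
```

One hartree is roughly 627.5 kcal/mol, and the common "chemical accuracy" target of 1 kcal/mol corresponds to only ~0.043 eV per system.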
For any stable system, the total energy is a negative number. The lower the number, the more stable the system. A local minimum in the energy will correspond to a stable state. A system will evolve to reach the nearest stable state by rearranging its nuclei and electrons; a large part of atomistic simulations is finding the most stable configuration of a set of atoms (Chapter 5). We shall see throughout this book that absolute total energies are much less interesting than relative total energies. We will also see that we can only compare total energies of systems with the same number and type of atoms, the same number of electrons, and the same computational setup. This can always be achieved by considering more than one system, so long as the totals are equivalent, as discussed in Chapter 3 and Section 15.1.
The total energy is defined for a fixed atomic configuration. However, we often need to deal with systems whose configuration changes in time, even if it is only a change in atomic positions due to thermal vibrations. In realistic systems, temperature, volume, pressure and entropy changes also play a role. To account for these influences, we need to use one of the thermodynamic potentials alongside the total energy to characterize the systems. We will briefly define entropy and the four thermodynamic potentials (internal energy, Gibbs free energy, Helmholtz free energy, and enthalpy) here, because the terminology is often used very loosely in the literature. These definitions will be sufficient to enable the reader to follow the chapter on chemical reactions (Chapter 3), but for an in-depth discussion, we recommend a thermodynamics textbook (see, e.g., Further Reading).
Entropy (S) is a measure of disorder in the system. The more atomic configurations a system can adopt without changing its macroscopic state, the more likely it is that it will be found in the particular macroscopic state and the greater its entropy. A set of such possible configurations of one system is called the ensemble. An ensemble is therefore a representation of a macrostate. An ensemble average of some property of the set of configurations is an average of the property for each configuration, weighted by the probability of the configuration.
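The ensemble average described above can be sketched directly: each configuration is weighted by a Boltzmann factor exp(−E/kBT). The configuration energies below are invented for illustration; this is not code from any simulation package:

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def ensemble_average(energies, temperature):
    """Boltzmann-weighted average energy of a set of configurations.

    Each configuration i has probability proportional to exp(-E_i / k_B T);
    the ensemble average weights the property (here the energy itself)
    by these probabilities.
    """
    beta = 1.0 / (K_B_EV * temperature)
    e0 = min(energies)  # shift energies for numerical stability
    weights = [math.exp(-beta * (e - e0)) for e in energies]
    z = sum(weights)    # partition-function-like normalisation
    return sum(w * e for w, e in zip(weights, energies)) / z

# Hypothetical configuration energies (eV) of one small system:
energies = [-10.0, -9.9, -9.7]
print(ensemble_average(energies, 300.0))    # dominated by the lowest state
print(ensemble_average(energies, 10000.0))  # higher states contribute more
```

At low temperature the average sits essentially at the lowest-energy configuration; as the temperature rises, the higher-energy configurations gain weight, which is the entropy at work.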
The internal energy (U) is the total energy minus the kinetic energy of bulk motion (i.e., rotations and translations of the whole system), and minus any potential energy brought about by an external field.
The only contributions to the energetics of a system that we have considered, until now, are the total energy and zero-point energy of a fixed atomic configuration. Such a system would correspond to absolute zero temperature. However, a real system exists above absolute zero and its atoms vibrate due to its finite thermal energy. A change in the macrostate of an atomic system exchanging heat (but not atoms) with its environment at constant temperature (T) is characterized by a change in its Helmholtz free energy (F or A):
\[ \Delta F = \Delta U - T\,\Delta S \tag{1.4} \]
Unlike the internal energy, the Helmholtz free energy is an ensemble average over many microstates because it includes an entropy term.
Many atomistic processes are accompanied not only by change in entropy but also by change in volume at constant pressure. The Gibbs free energy is then the appropriate thermodynamic potential. Its change is defined by
\[ \Delta G = \Delta U + p\,\Delta V - T\,\Delta S \tag{1.5} \]
where the volume V changes and the pressure p is constant. Absolute Gibbs free energy cannot be measured experimentally, but its changes can, and are tabulated for many chemical reactions. The Gibbs free energy is often called just free energy and, even more confusingly, used to be called free enthalpy. However, enthalpy (H) is a separate thermodynamic potential. It is the Gibbs free energy without the entropy term:
\[ \Delta H = \Delta U + p\,\Delta V \tag{1.6} \]
Changes in enthalpy can also be measured and are tabulated for many processes.
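The free energy and enthalpy changes defined above can be collected in a short sketch. All numerical values below are hypothetical, chosen only to show the bookkeeping (1 atm is roughly 6.2 × 10⁻⁷ eV/Å³, so the pΔV term is usually tiny at ambient pressure):

```python
def delta_f(d_u, t, d_s):
    """Helmholtz free energy change at constant T: dF = dU - T*dS."""
    return d_u - t * d_s

def delta_h(d_u, p, d_v):
    """Enthalpy change at constant p: dH = dU + p*dV."""
    return d_u + p * d_v

def delta_g(d_u, t, d_s, p, d_v):
    """Gibbs free energy change at constant T and p: dG = dU + p*dV - T*dS."""
    return d_u + p * d_v - t * d_s

# Hypothetical process (all values invented for illustration):
d_u = -1.20               # eV
t, d_s = 300.0, 2.0e-4    # K, eV/K
p, d_v = 6.24e-7, 10.0    # eV/Angstrom^3 (~1 atm), Angstrom^3
print(delta_f(d_u, t, d_s), delta_h(d_u, p, d_v), delta_g(d_u, t, d_s, p, d_v))
```

Note the internal consistency: the Gibbs free energy change equals the enthalpy change minus TΔS, and also the Helmholtz change plus pΔV.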
In many situations, for example in solids at low temperature where disorder is relatively low, comparing total energies instead of Gibbs free energies or enthalpies is a perfectly adequate approximation for describing changes in the system. However, there is no rule that says when entropy can be neglected, only a tendency for T ΔS to be smaller at low temperatures; it is therefore important to investigate the effects of entropy. Entropic contributions can be important in surprising situations, for instance, liquid iron at the Earth’s core, as described in Section 4.6.
The most stable atomic configuration corresponds to the lowest total energy a system can have. An atomistic simulation code can find a locally stable configuration, which may or may not be the lowest-energy configuration, by minimizing the total energy with respect to atomic displacements. In every minimization step, the code will calculate the total energy of the current configuration and the forces acting on the atoms. It will then move the atoms in a way that lowers the energy and reduces the forces. The force on each atom I is defined as the negative gradient of the total energy with respect to the position of the atom:
\[ \mathbf{F}_I = -\nabla_{\mathbf{R}_I} E_{\mathrm{tot}} \tag{1.7} \]
The force F is a vector and its units are energy units over distance units, such as eV/Å. Whilst cautioning firmly against thinking of atoms as material objects, we can, for the moment, liken atoms in a molecule or a crystal to balls connected by springs: if a ball is moved away from its position, the spring will exert a force to bring it back. Similarly, forces on atoms are a measure of the distance and direction in which the atoms need to move to reach the total energy minimum (Chapter 5). Mathematically, the derivatives of a function are zero at its minimum. In practice, at the total energy minimum with respect to the positions of the atoms, the forces are smaller than some threshold value, as we will see later in this book.
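The ball-and-spring picture above can be sketched numerically: for a harmonic energy (invented spring constant and rest position), the force obtained as a central finite difference of the energy matches the analytic negative gradient, and vanishes at the minimum:

```python
def energy(x, k=5.0, x0=1.0):
    """Harmonic 'ball and spring' energy (eV) for position x (Angstrom).

    k (eV/Angstrom^2) and x0 (Angstrom) are invented illustrative values."""
    return 0.5 * k * (x - x0) ** 2

def force_numeric(x, h=1e-5):
    """Force as the negative gradient of the energy, via central differences."""
    return -(energy(x + h) - energy(x - h)) / (2.0 * h)

def force_analytic(x, k=5.0, x0=1.0):
    """Exact negative gradient of the harmonic energy."""
    return -k * (x - x0)

x = 1.3
print(force_numeric(x), force_analytic(x))  # both ~ -1.5 eV/Angstrom
print(force_numeric(1.0))                   # ~ 0 at the energy minimum
```

Real codes evaluate forces analytically wherever possible, but finite differences like this are a standard sanity check on a force implementation.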
McMurry, J. (2008) Organic Chemistry, 8th edn, Brooks/Cole Publishing Company.
Clear explanation of the basics of electronic configuration of atoms and orbitals and of hybridization.
Rae, A.I.M. (2008) Quantum Mechanics, 5th edn, Taylor & Francis.
A useful introduction to quantum mechanics, covering wave equations, wave-functions and the mathematics of quantum numbers, atomic and molecular orbitals.
Bransden, B.H. and Joachain, C.J. (2000) Quantum Mechanics, 2nd edn, Prentice Hall.
A good introduction to quantum mechanics.
Ford, I. (2013) Statistical Thermodynamics: An Entropic Approach, John Wiley & Sons.
A good introduction to statistical thermodynamics.
1 Müller, C.W. and Schulz, G.E. (1992) Structure of the complex between adenylate kinase from Escherichia coli and the inhibitor Ap5A refined at 1.9 Å resolution. J. Mol. Biol., 224, 159–177.
2 Polanyi, J.C. and Zewail, A.H. (1995) Direct observation of the transition state. Acc. Chem. Res., 28, 119–132.
3 Miller, R.J.D., Ernstorfer, R., Harb, M., Gao, M., Hebeisen, C.T., Jean-Ruel, H., Lu, C., Gustavo, M., and Sciaini, G. (2010) ‘Making the molecular movie’: first frames. Acta Cryst. A, 66, 137–156.
4 Henzler-Wildman, K.A., Lei, M., Thai, V., Kerns, S.J., Karplus, M., and Kern, D. (2007) A hierarchy of timescales in protein dynamics is linked to enzyme catalysis. Nature, 450, 913–918.
5 McCammon, J.A. (2009) Computational studies of protein dynamics, in Water and Biomolecules Physical Chemistry of Life Phenomena (eds K. Kuwajima, Y. Goto, F. Hirata, M. Kataoka, and M. Terazima), Springer, Berlin.
6 Bowler, D.R. and Miyazaki, T. (2010) Calculations on millions of atoms with density functional theory: linear scaling shows its potential. J. Phys.: Condens. Matter, 22, 074207.
7 Gumbart, J., Trabuco, L.G., Schreiner, E., Villa, E., and Schulten, K. (2009) Regulation of the protein-conducting channel by a bound ribosome. Structure, 17, 1453–1464.
8 Trachenko, K., Zarkadoula, E., Todorov, I.T., Dove, M.T., Dunstan, D.J., and Nordlund, K. (2012) Modeling high-energy radiation damage in nuclear and fusion applications. Nucl. Instrum. Methods B, 277, 6–13.
9 Freddolino, P.L., Liu, F., Gruebele, M., and Schulten, K. (2008) Ten-microsecond molecular dynamics simulation of a fast-folding WW domain. Biophys. J.: Biophys. Lett., 94, L75–L77.
Atoms are quantum mechanical objects with both the nucleus and the electrons properly described by the laws of quantum mechanics. However, when considering the interactions between atoms in molecules and solids, the nucleus can, to a very good approximation, be treated classically: it serves as a source of electrostatic potential. The interaction of electrons, however, remains entirely quantum mechanical. In this chapter, we consider the behavior of electrons and how they are responsible both for the properties of atoms and for the interactions between atoms which make the world so interesting; many more details on how this behavior can be calculated are given in Chapters 7–9. The interaction between atoms is generally known as bonding.
There are many different types of bonds, and the structure of molecules and solids is linked strongly to the bonding. We have seen that some bonds are strongly directional, which are responsible for the structures of molecules such as water and methane. Other bonds are nondirectional, and the structure of materials with these bonds is determined by other factors that increase the stability of the system, for example, by maximizing the number of neighbors. The strength of bonds also varies, and goes some way towards determining the strength of materials, though other factors such as microstructure can be important.
This chapter focuses on the electrons and their role in atomic structure, and many atomistic simulations will include the electrons explicitly. However, it is perfectly possible to consider the electrons implicitly and to parameterize the interactions between atoms, replacing the bonds with numerical functions. At the simplest level, these act very much like springs. For instance, forcefields such as CHARMM and AMBER, implemented in packages such as GROMACS, are widely used within biochemistry, while empirical potentials have a long history within physics; these methods are discussed in Chapter 7. However, these methods still need data for fitting the potentials, which often comes from electronic structure calculations; understanding electronic interactions is therefore extremely important.
The starting point for all simulations must be the lowest energy or ground state: any understanding of the properties of a system must build from the most stable configuration. Any system which is not in its lowest energy state will be liable to fall into that state as it evolves. The response of a system to excitation is an important means of characterization, but we must start from a thorough understanding of the ground state.
In most circumstances, the behavior of a set of atoms, and hence of many forms of matter including molecules and solids, can be modeled by assuming that the electrons arrange themselves in their lowest energy state for a given set of atomic positions; in other words, we can place the electrons in their ground state for every set of atomic positions. This approximation is known as the Born–Oppenheimer approximation, and is described in detail in Section 7.3. Naturally, there are occasions when this is not applicable: to take a simple example, consider a flame, which is a complex system involving heat, chemical reactions, electronic excitations and deexcitations with emission of light, and many other processes. The question of excitations is touched on in Section 7.4, but is too complex for a detailed treatment in a textbook of this nature.
As we have seen, the electronic structure of atoms is an important starting point in understanding the interactions between atoms. The structure of the periodic table of the elements reflects the distribution of electrons in atoms, and this distribution of electrons determines the bonding properties of atoms. The outermost electrons, known as valence electrons, are most strongly involved in determining how atoms interact, whatever the bonding mechanism.
The electronic ground state can be calculated using techniques described in Chapter 8. The interaction between atoms will disturb the atomic electronic configurations, leading to new distributions of electrons: bonds. Chemistry textbooks define a number of different types of bonds which are useful in understanding the behavior of matter; it is important to note, however, that the real world rarely fits well within any particular picture, and bonds tend to have a mixture of characters.
Different types of bonds have different strengths, and behave differently as atoms move. We present a brief overview of the types of bonds and their characteristics here. A fuller discussion can be found in standard textbooks, and you are encouraged to read several of these, to gain a fuller picture.
A simple understanding of bonding can be gained from thinking about two hydrogen atoms. When separated by a large distance, they each have their ground state orbital (the 1s orbital, as discussed in Section 1.2 above). As the two atoms come closer, their orbitals start to overlap; we can make symmetric and antisymmetric combinations of the orbitals which are known as bonding and antibonding orbitals. A schematic illustration of this is shown in Figure 2.1. This is an example of covalent bonding.
If we bring together two atomic orbitals so that they overlap, they will combine to form one bonding orbital and one antibonding orbital. Putting two electrons into the bonding orbital gives the strongest bond, while adding more electrons starts to populate the antibonding orbital, weakening the bond. Consider the difference between bringing two hydrogen atoms together and bringing two helium atoms together: in both cases, we have two 1s orbitals overlapping, thus forming a bonding and an antibonding orbital. With hydrogen, we have two electrons, which fully populate the bonding orbital, giving a strong bond. With helium, however, there are four electrons, which will fill both the bonding orbital and the antibonding orbital, giving no bond. This is reflected in nature, where H2 molecules are common (and atomic H is short-lived and can only be made by putting energy into the system) and He is found in atomic form only. The strength of the bonding can be described by the bond order. We will cover this fully in Chapter 7, but it is worth discussing briefly now. The bond order relates to the strength of bonds between atoms, and can be defined as one-half of the difference between the number of bonding electrons and the number of antibonding electrons. In H2, the bond order is therefore one, while for He2, the bond order is zero.
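The bond-order definition just given translates directly into a one-line function. This is a minimal sketch, not code from any simulation package:

```python
def bond_order(n_bonding, n_antibonding):
    """Bond order: half the excess of bonding over antibonding electrons."""
    return 0.5 * (n_bonding - n_antibonding)

print(bond_order(2, 0))  # H2: two bonding electrons -> single bond
print(bond_order(2, 2))  # He2: bonding and antibonding filled -> no bond
```

The same function gives 2 for the double bond in O2-like counting and 3 for a triple bond such as N2, once the bonding and antibonding electrons are tallied from the molecular orbital diagram.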
The concept of molecular orbitals, introduced in Section 1.2, is helpful, even though it is not applicable to all types of bonding. The type of the molecular orbital depends on the types of the original atomic orbitals and the way they overlap. A σ molecular orbital is created by two atomic orbitals overlapping head-on. A σ orbital has an ellipsoid shape with a circular cross-section perpendicular to the line joining the atoms (see Figure 2.2). Antibonding orbitals are denoted with an asterisk added to the orbital type, for example, σ*. A σ molecular orbital can be formed from many combinations of atomic orbitals, for instance: two s orbitals; one s and one p orbital; two p orbitals; or two d orbitals. The electron distribution in hydrogen, water, and methane molecules is well-described by σ orbitals. We say that electrons in a bonding σ orbital create a σ bond.
π orbitals are a combination of two p or d orbitals, or one p and one d orbital that overlap “sideways.” While a σ bonding orbital does not have a nodal plane through the ionic cores, a π bonding orbital has one nodal plane. π bonds, then, have zero electron density in the nodal plane through the ionic cores. δ orbitals are created from two d orbitals that overlap so that there are two nodal planes going through the nuclei. One bonding orbital constitutes a single bond, and a single bond therefore involves two electrons. Two bonding orbitals, with four electrons in total, can form a double bond. Double bonds are shorter and stronger than single bonds. Three orbitals can form a triple bond, involving six electrons.
Figure 2.1 (a) The ground state orbitals of two widely separated hydrogen atoms plotted against the separation. (b) The resulting bonding orbital made from combining the two hydrogen orbitals in a symmetric way. (c) The resulting antibonding orbital made from combining the two hydrogen orbitals in an antisymmetric way.
Figure 2.2 The formation of molecular orbitals. (a) Two px orbitals combining along the x-axis to give σ bonding (B) and antibonding (AB) orbitals. (b) Two py orbitals combining along the x-axis to give π bonding and antibonding orbitals. Dark and light indicate the sign of the wavefunction.
Only the electrons that fill the orbitals of the outermost shell in a given atom participate in chemical bonding; they are called valence electrons. It has been observed that atoms that are not transition metals tend to form complexes such that the s and p valence shells are fully filled (closed). This empirical rule is called the octet rule because closed s and p shells hold eight electrons in total. The octet rule is a useful guideline in predicting the maximum number of bonds an element is likely to form as well as the type of complex it is likely to form: elements with an almost-filled p valence shell, for example, the halogens (group 17 elements), will likely accept one electron from elements with an almost-empty valence shell, such as the alkali metals (group 1 elements), which will donate their valence electron so as to be left with the lower-lying closed shells. Bear in mind, though, that however elegant the octet rule looks in its simplicity, it is only a guideline and there are many compounds that violate it, most notably among the d-block elements.
This understanding of bonding arising from molecular orbitals is most clearly seen in covalent bonding, and we start our survey of bonding with this type.
Covalent bonding involves the formation of bonds by overlapping atomic orbitals in order to permit the sharing of valence electrons between atoms. The most stable structures have atoms with a full outer shell (which, as we have seen, contains eight electrons when considering just the s and p electrons). Our example of hydrogen above is the simplest example, giving two electrons shared between the two hydrogens (hence mimicking the electronic structure of helium). Classic examples are seen in organic molecules: methane, CH4, allows the carbon atom to share its four valence electrons – the 2s and 2p electrons – with four hydrogen atoms, each of which contributes a single 1s electron. The hydrogen atoms end up with two outer electrons and the carbon atom with eight outer electrons. Similarly, the structure of diamond allows each carbon atom to share an electron with each of four neighbors, giving four stable bonds. The resulting tetrahedral crystal structure reflects the electronic ground state.
Covalent bonds tend to be highly directional: they will be oriented along directions determined by symmetry and the overlap of orbitals. This means that the strength and stability of the bonding is affected by both stretching of bonds and distortion of the bond angles, whereas other forms of bonding (discussed below) are less affected by direction and more by numbers of neighbors and other factors.
Carbon is particularly notable in its ability to share differing numbers of electrons with other atoms to form bonds of different strength. Single bonds, where each atom contributes one electron, tend to be σ bonds, while double bonds, where each atom contributes two electrons, consist of a σ and a π bond. Triple bonds consist of a σ and two π bonds, and each atom contributes three electrons. It is possible to form bonds of even higher order, for instance, quadruple and quintuple bonds formed by d-block elements, but these are rare. Double and triple bonds are also seen with oxygen and nitrogen, both in the gaseous molecules and in their bonding with carbon. It is also possible to create delocalized bonds, where adjacent double and single bonds are in resonance, giving a bond midway between single and double, and a delocalized electronic structure; the classic example is the aromatic bonds in a benzene ring, though aromatic polymers are also very common and their electronic structure is used as the basis of conduction in polymers.
Figure 2.3 illustrates the charge density in a number of simple organic molecules: ethyne (C2H2); ethene (C2H4); ethane (C2H6); and benzene (C6H6). These illustrate single bonds (for instance, between the carbon atoms in ethane, or the hydrogen and carbon atoms in all the molecules), double bonds (between the carbon atoms in ethene), triple bonds (between the carbon atoms in ethyne), and aromatic bonds (between the carbon atoms in benzene). Note how the electron density increases with increasing bond strength, with the aromatic bonds midway between single and double bonds. The symmetry of the bond is reflected in the spatial distribution of the electrons, and the increased strength of the bond also leads to the shorter bond length.
Figure 2.3 Charge density plotted in planes for simple organic molecules: (a) C2H2, (b) C2H4, (c) C2H6 and (d) C6H6. Scale is in units of electrons/Å3.
