Electrocatalysis

Description

Catalysts speed up a chemical reaction or allow for reactions to take place that would not otherwise occur. The chemical nature of a catalyst and its structure are crucial for interactions with reaction intermediates.
An electrocatalyst is used in an electrochemical reaction, for example in a fuel cell to produce electricity. In this case, reaction rates are also dependent on the electrode potential and the structure of the electrical double-layer.
This work provides a valuable overview of this rapidly developing field by focusing on the aspects that drive the research of today and tomorrow. Key topics are discussed by leading experts, making this book a must-have for scientists in the field with backgrounds in different disciplines, including chemistry, physics, biochemistry, and engineering, as well as surface and materials science. This book is volume XIV in the series "Advances in Electrochemical Science and Engineering".


Page count: 593

Publication year: 2013




Contents

Cover

Advances in Electrochemical Science and Engineering

Title Page

Copyright

In Memoriam

Preface

List of Contributors

Chapter 1: Multiscale Modeling of Electrochemical Systems

1.1 Introduction

1.2 Introduction to Multiscale Modeling

1.3 Electronic Structure Modeling

1.4 Molecular Simulations

1.5 Reaction Modeling

1.6 The Oxygen Reduction Reaction on Pt(111)

1.7 Formic Acid Oxidation on Pt(111)

1.8 Concluding Remarks

Acknowledgment

References

Chapter 2: Statistical Mechanics and Kinetic Modeling of Electrochemical Reactions on Single-Crystal Electrodes Using the Lattice-Gas Approximation

2.1 Introduction

2.2 Lattice-Gas Modeling of Electrochemical Surface Reactions

2.3 Statistical Mechanics and Approximations

2.4 Monte Carlo Simulations

2.5 Applications to Electrosorption, Electrodeposition and Electrocatalysis

2.6 Conclusions

Acknowledgments

References

Chapter 3: Single Molecular Electrochemistry within an STM

3.1 Introduction

3.2 Experimental Methods for Single Molecule Electrical Measurements in Electrochemical Environments

3.3 Electron Transfer Mechanisms

3.4 Single Molecule Electrochemical Studies with an STM

3.5 Conclusions and Outlook

Acknowledgment

References

Chapter 4: From Microbial Bioelectrocatalysis to Microbial Bioelectrochemical Systems

4.1 Prelude: From Fundamentals to Biotechnology

4.2 Microbial Bioelectrochemical Systems (BESs)

4.3 Bioelectrocatalysis: Microorganisms Catalyze Electrochemical Reactions

4.4 Characterizing Anodic Biofilms by Electrochemical and Biological Means

Acknowledgments

References

Chapter 5: Electrocapillarity of Solids and its Impact on Heterogeneous Catalysis

5.1 Introduction

5.2 Mechanics of Solid Electrodes

5.3 Electrocapillary Coupling at Equilibrium

5.4 Exploring the Dynamics

5.5 Mechanically Modulated Catalysis

5.6 Summary and Outlook

Acknowledgements

References

Chapter 6: Synthesis of Precious Metal Nanoparticles with High Surface Energy and High Electrocatalytic Activity

6.1 Introduction

6.2 Shape-Controlled Synthesis of Monometallic Nanocrystals with High Surface Energy

6.3 Shape-Controlled Synthesis of Bimetallic NCs with High Surface Energy

6.4 Concluding Remarks and Perspective

Acknowledgments

References

Chapter 7: X-Ray Studies of Strained Catalytic Dealloyed Pt Surfaces

7.1 Introduction

7.2 Dealloyed Bimetallic Surfaces

7.3 Dealloyed Strained Pt Core–Shell Model Surfaces

7.4 X-Ray Studies of Dealloyed Strained PtCu3(111) Single Crystal Surfaces

7.5 X-Ray Studies of Dealloyed Strained Pt–Cu Polycrystalline Thin Film Surfaces

7.6 X-Ray Studies of Dealloyed Strained Alloy Nanoparticles

7.7 Conclusions

Acknowledgments

References

Index

Advances in Electrochemical Science and Engineering

Advisory Board

Philippe Allongue, Ecole Polytechnique, Palaiseau, France

A. Robert Hillman, University of Leicester, Leicester, UK

Tetsuya Osaka, Waseda University, Tokyo, Japan

Laurence Peter, University of Bath, Bath, UK

Lubomyr T. Romankiw, IBM Watson Research Center, Yorktown Heights, USA

Shi-Gang Sun, Xiamen University, Xiamen, China

Esther Takeuchi, SUNY Stony Brook, Stony Brook; and Brookhaven National Laboratory, Brookhaven, USA

Mark W. Verbrugge, General Motors Research and Development, Warren, MI, USA

All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim Germany

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-33227-4

ePDF ISBN: 978-3-527-68046-7

ePub ISBN: 978-3-527-68045-0

mobi ISBN: 978-3-527-68044-3

oBook ISBN: 978-3-527-68043-6

Cover Design Schulz Grafik-Design, Fußgönheim

Typesetting Thomson Digital, Noida, India

In Memoriam

Prof. Dr. Dieter M. Kolb (1942–2011)

Dieter M. Kolb was Head of the Institute of Electrochemistry at the University of Ulm, Germany, from 1990 to 2010, and served as Co-Editor of this series since 1997. It is with deep sadness that we report that Professor Kolb passed away on October 4, 2011. He is well known for his contributions to the fundamental understanding of electrochemical phenomena. In particular, he was among the founders of electrochemical surface science, combining classical electrochemical understanding with atomic-level information obtained by modern surface science techniques such as in-situ scanning tunneling microscopy (STM). These scientific activities have stimulated not only related research in other electrochemical laboratories but also studies by non-electrochemists in the field of interfacial electrochemistry.

His scientific accomplishments include studies of the initial stages of metal deposition together with underpotential deposition of foreign metals on single-crystal substrates, nanostructuring of electrode surfaces by metal clusters generated at the tip of an STM, surface reconstruction of gold electrodes, metallization of organic layers, and electrocatalysis. By using well-defined electrode surfaces, he obtained unprecedented insights into the nature of elementary processes, especially the influence of surface structure on electrochemical reactions.

Preface

This book is devoted to the rapidly growing field of electrocatalysis and aspects of modern electrochemical surface science. Recent progress is reviewed with a particular emphasis on methodological developments that are driving electrochemical surface science research and its relation to kinetics of electrode reactions across a broad range of fundamental topics and applications. These key fundamental research developments include modern experimental methods with well-defined model electrodes, as well as achievements in theoretical electrochemistry.

Mueller, Fantauzzi, and Jacob provide a comprehensive introduction to multiscale modeling in Chapter 1, which is not only designed for theorists but also for experimentalists with limited background in modern computational chemistry. The use of density functional theory for a basic understanding of elementary steps in oxygen reduction and formic acid oxidation on Pt(111) is highlighted. Experimental results for well-defined single-crystal electrodes are compared with kinetic modeling based on a statistical-mechanical description of electrochemical surface reactions in Chapter 2 by Koper. Nichols and Higgins present in Chapter 3 new insights into charge transfer across electrical junctions by single-molecule electrochemical studies with the tip of a scanning tunneling microscope. The field of bioelectrocatalysis is addressed in Chapter 4 by Schröder and Harnisch, who describe microbial electron transfer mechanisms and the catalysis of electrochemical reactions by microorganisms. In Chapter 5, Weissmüller presents the fundamentals of the theory of electrocapillarity of solids and their application in the field of strain-dependent catalysis. In Chapter 6, Huang, Zhou, Tian, and Sun review recent progress in shape-controlled synthesis of metal nanoparticles with high-energy facets and their applications in electrocatalysis. The geometric and electronic structure of strained dealloyed bimetallic Pt catalysts as studied by X-ray measurements is shown by Strasser in Chapter 7.

Several chapters address applications for which electrocatalysts efficiently convert electrical energy into chemical bonds, or the reverse, converting chemical energy from fuels into electrical energy. Prominent among these applications is the task of catalyzing electrode reactions in fuel cells.

This book was initiated under the editorial leadership of Prof. Dr. D.M. Kolb. Upon his untimely passing, the lead editorial tasks were taken on by Guest Editor Dr. L.A. Kibler, with the assistance of Prof. R.C. Alkire.

The combined experimental and theoretical approach is of interest to chemists, physicists, biochemists, surface and materials scientists, and engineers. The opportunities for impact in this field far exceed what the current number of researchers trained in electrochemistry can accomplish. By providing up-to-date reviews with in-depth coverage of key background topics, this book is suited both to students and professionals entering the field and to experienced researchers seeking to expand their scope and mastery.

Ulm, Germany

Urbana, IL, USA

L.A. Kibler

R.C. Alkire

List of Contributors

Donato Fantauzzi

Universität Ulm

Institut für Elektrochemie

Albert-Einstein-Allee 47

89081 Ulm

Germany

Falk Harnisch

TU Braunschweig

Institute of Environmental and Sustainable Chemistry

Hagenring 30

38106 Braunschweig

Germany

and

Helmholtz Centre for Environmental

Research - UFZ

Department of Environmental

Microbiology

Permoserstraße 15

04318 Leipzig

Germany

Simon J. Higgins

University of Liverpool

Department of Chemistry

Donnan and Robert Robinson Laboratories

Crown Street

Liverpool L69 7ZD

UK

Long Huang

Xiamen University

State Key Laboratory of Physical Chemistry of Solid Surfaces

College of Chemistry and Chemical Engineering

Department of Chemistry

South Siming Road 422

Xiamen 361005

China

Timo Jacob

Universität Ulm

Institut für Elektrochemie

Albert-Einstein-Allee 47

89081 Ulm

Germany

Marc T.M. Koper

Leiden University

Leiden Institute of Chemistry

Einsteinweg 55

2333 CC Leiden

The Netherlands

and

Hokkaido University

Catalysis Research Center

Kita21, Nishi10, Kita-ku

Sapporo 001-0021

Japan

Jonathan E. Mueller

Universität Ulm

Institut für Elektrochemie

Albert-Einstein-Allee 47

89081 Ulm

Germany

Richard J. Nichols

University of Liverpool

Department of Chemistry

Donnan and Robert Robinson Laboratories

Crown Street

Liverpool L69 7ZD

UK

Uwe Schröder

TU Braunschweig

Institute of Environmental and Sustainable Chemistry

Hagenring 30

38106 Braunschweig

Germany

Peter Strasser

Technical University Berlin

Department of Chemistry

Chemical Engineering Division

Straße des 17. Juni 124

10623 Berlin

Germany

Shi-Gang Sun

Xiamen University

State Key Laboratory of Physical Chemistry of Solid Surfaces

College of Chemistry and Chemical Engineering

Department of Chemistry

South Siming Road 422

Xiamen 361005

China

Na Tian

Xiamen University

State Key Laboratory of Physical Chemistry of Solid Surfaces

College of Chemistry and Chemical Engineering

Department of Chemistry

South Siming Road 422

Xiamen 361005

China

Jörg Weissmüller

Technische Universität Hamburg-Harburg

Institut für Werkstoffphysik und Werkstofftechnologie

Eißendorfer Straße 42

21073 Hamburg

Germany

and

Helmholtz-Zentrum Geesthacht

Institut für Werkstoffforschung

Werkstoffmechanik

Max-Planck-Straße 1

21502 Geesthacht

Germany

Zhi-You Zhou

Xiamen University

State Key Laboratory of Physical Chemistry of Solid Surfaces

College of Chemistry and Chemical Engineering

Department of Chemistry

South Siming Road 422

Xiamen 361005

China

1

Multiscale Modeling of Electrochemical Systems

Jonathan E. Mueller, Donato Fantauzzi, and Timo Jacob

1.1 Introduction

As one of the classic branches of physical chemistry, electrochemistry enjoys a long history. Its relevance and vitality remain unabated as it not only finds numerous applications in traditional industries, but also provides the scientific impetus for a plethora of emerging technologies. Nevertheless, in spite of its venerability and the ubiquity of its applications, many of the fundamental processes underlying some of the most basic electrochemical phenomena are only now being brought to light.

Electrochemistry is concerned with the interconversion of electrical and chemical energy. This interconversion is facilitated by transferring an electron between two species involved in a chemical reaction, such that the chemical energy associated with the chemical reaction is converted into the electrical energy associated with transferring the electron from one species to the other. Taking advantage of the electrical energy associated with this electron transfer for experimental or technological purposes requires separating the complementary oxidation and reduction reactions of which every electron transfer is composed. Thus, an electrochemical system includes an electron conducting phase (a metal or semiconductor), an ion conducting phase (typically an electrolyte, with a selectively permeable barrier to provide the requisite chemical separation), and the interfaces between these phases at which the oxidation and reduction reactions take place.

Thus, the fundamental properties of an electrochemical system are the electric potentials across each phase and interface, the charge transport rates across the conducting phases, and the chemical concentrations and reaction rates at the oxidation and reduction interfaces. Traditional experimental techniques (e.g., cyclic voltammetry) measure one or more of these continuous observables in an effort to understand the interrelationships between purely electrochemical phenomena (e.g., electrode potential, current density). While these techniques often shed light on both fundamental (e.g., ionic charge) and statistical (e.g., diffusion rates) properties of the atoms and ions that make up an electrochemical system, they provide little insight into the detailed atomic structure of the system.

In contrast, modern surface science techniques (e.g., STM, XPS, SIMS, LEISS) typically probe the atomistic details of the interface regions, and support efforts to gain insight into the atomistic processes underlying electrochemical phenomena. Indeed, these methods have been applied to gas–solid interfaces with resounding success, elucidating the atomistic structures underlying macroscopic phenomena [1]. Unfortunately, the presence of the electrolyte at the electrode surface hampers the application of many of these surface science techniques. Because the resulting solid–electrolyte interface is an essential component of any electrochemical system, electrochemistry has not yet fully experienced the atomistic revolution enjoyed by other departments of surface science, although these techniques are increasingly making their way into electrochemistry [2].

The dramatic increases in computing power realized over the past decades coupled with improved algorithms and methodologies have enabled theorists to develop reliable, atomistic-level descriptions of surface structures and processes [3]. In particular, periodic density functional theory (DFT) now exhibits a degree of efficiency and accuracy which allows it not only to be used to explain, but also to predict experimental results, allowing theory to take a proactive, or even leading, role in surface science investigations. A prime example of this is the design of a new steam reforming catalyst based on a combination of theoretical and experimental fundamental research [4].

The application of DFT to electrochemical systems is not as straightforward as it is to the surface–vacuum interfaces of surface science. There have indeed been promising efforts in this direction [5–7], and there is a growing interest in theoretical electrochemistry [8–10]; however, proper treatments of the electrolyte and electrode potential pose novel challenges for which there are not yet universally agreed upon solutions. Nevertheless, there are already success stories, such as the theoretical prediction [11,12] and experimental confirmation [13] of the nonmonotonic dependence of the electrocatalytic activity of the hydrogen evolution reaction (HER) on the thickness of Pd overlayers on Au(111).

Common to both the experimental and theoretical approaches mentioned above is the existence of two regimes – the macroscopic and the atomistic – and the importance of relating these in order to obtain a comprehensive picture of an electrochemical system. Statistical mechanics provides the necessary framework for relating the discrete properties and atomistic structures of the atomistic regime to the continuous variables controlled or observed in the macroscopic regime. The fundamental assumption underlying this relationship is what Richard Feynman called the "atomic hypothesis", which we rephrase in terms of electrochemistry as follows: "there is nothing that electrochemical systems do that cannot be understood from the point of view that they are made up of atoms acting according to the laws of physics" [14].

Modern computational methods, based on the principles of quantum mechanics, provide a means of probing the atomistic details of electrochemical systems, as do the techniques of modern surface science. The concepts of statistical mechanics are critical for extending the results of these molecular-scale models to macro-scale descriptions of electrochemical systems. Such a procedure creates a multiscale model of an electrochemical system, built up from the atomistic details of the quantum regime to a description of the electrochemical phenomena observed in macroscopic systems.

This chapter is intended to serve as an introduction to multiscale modeling for electrochemists with minimal background in the methods of modern computational chemistry. Thus, the fundamentals of some of the most important methods are presented within the framework of multiscale modeling, which integrates diverse methods into a single multiscale model that spans a wider range of time and length scales than is otherwise possible. The physical ideas underlying the methods and the conceptual framework used to weave them together are emphasized over the specific how-to details of running simulations. Thus, Section 1.2 gives an overview of multiscale modeling and Sections 1.3–1.5 present three different levels of theory used as components in many multiscale models: electronic structure modeling methods, molecular modeling methods, and chemical reaction modeling methods. The development of appropriate models for simulating electrochemical systems at each level of theory is the key outcome of each of these sections. To illustrate the application of some of the methods of multiscale modeling to electrochemistry, two concrete examples are presented in detail. In Section 1.6 a detailed mechanistic study of the oxygen reduction reaction on Pt(111) is presented. In Section 1.7 a similar study of formic acid oxidation illustrates additional approaches and modeling techniques. In both cases the focus is on the methods and modeling techniques used rather than the particular conclusions reached in each study.

1.2 Introduction to Multiscale Modeling

Electrochemical phenomena can be viewed over a wide range of time and length scales, ranging from electronic transfer processes which take place over distances on the scale of nanometers in times of the order of femtoseconds, to large-scale industrial processes involving moles of atoms occupying spaces best measured in meters and lasting hours, days or even years. Bridging these time and length scales is one of the central tasks of modern theoretical electrochemistry. This is the case for both scientists seeking to further our fundamental understanding of electrochemistry, and engineers developing applications of electrochemical processes and systems. Thus, the former continue the hard work of uncovering the atomistic processes underlying macroscopic electrochemical processes [2], while the latter seek to bring together interconnected phenomena spanning many time and length scales to design a product with the desired functionality [15]. In both cases a multiscale framework is needed to interrelate phenomena at the relevant time and length scales.

Traditionally, computational chemistry uses a single computational tool to model a given system at a particular time and length scale. Several of the major categories that these computational tools fall into, along with the approximate time and length scales to which they can be applied, are shown in Figure 1.1. The physical laws appropriate for the system components or building blocks at each time and length scale govern the models developed at that level and determine which system properties we can obtain directly. Thus each level of theory focuses on the system under a single aspect. Multiscale modeling aims at stitching these various aspects together into a unified whole, such that macroscopic properties emerge from underlying microscopic phenomena.

Figure 1.1 Schematic representation of various categories of simulation methods used in multiscale modeling according to the time and length scales of the models to which they are most applicable.

Two strategies are available for stitching methods of differing scales together into a single, coherent multiscale model. In the first, known as concurrent coupling, the various levels of simulation are incorporated into a single multiscale model. Thus, as illustrated in Figure 1.2, a single simulation makes direct use of various levels of theory and explicitly describes phenomena taking place at a range of time and length scales. Concurrent coupling is typically realized by dividing the system into various regions, each of which is treated using a different level of theory. Defining the boundaries between these regions and then determining how the regions interact with each other is the primary challenge in concurrent coupling. The key disadvantage is that, because the time propagation of the system dynamics is limited by the process with the smallest time step, there are often only limited gains in the time scales which can be achieved using concurrent coupling schemes. Nevertheless, significant gains in the length scales treated are very realizable. A common example of concurrent coupling is QM/MM modeling, in which an electronic structure method is used to describe a small reactive portion of a system, which is otherwise described using a molecular force field. We discuss this approach in greater detail in Section 1.4.3.3.
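The QM/MM idea can be sketched in a few lines. Below is a schematic, subtractive (ONIOM-style) energy expression, one common way such embedding is realized; the two energy functions are mere placeholders standing in for a real electronic structure code and force field, and all names and numbers are illustrative rather than taken from this chapter.

```python
# Schematic sketch of concurrent coupling via a subtractive
# (ONIOM-style) energy expression. The two "methods" below are
# placeholders, not real QM or force-field codes.

def e_high(atoms):
    """Stand-in for an expensive, detailed method (e.g., DFT)."""
    return -1.0 * len(atoms)

def e_low(atoms):
    """Stand-in for a cheap, coarse-grained method (e.g., a force field)."""
    return -0.8 * len(atoms)

def coupled_energy(full_system, reactive_region):
    # E = E_low(full) + E_high(region) - E_low(region): the low-level
    # description of the embedded region is replaced by the high-level
    # one, while region-environment interactions stay at the low level.
    return (e_low(full_system)
            + e_high(reactive_region)
            - e_low(reactive_region))

full = list(range(100))      # 100 atoms in the whole model system
reactive = full[:10]         # 10 atoms treated at the higher level
print(coupled_energy(full, reactive))  # -> -82.0
```

A real concurrent scheme must also handle the boundary region (e.g., link atoms for covalent bonds cut by the region border), which this sketch deliberately omits.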

Figure 1.2 Schematic representation of (a) concurrent and (b) serial coupling approaches to integrating two diverse levels of theory into a single, multiscale modeling framework. In concurrent coupling different parts of a single model system are treated using methods describing the system at two different levels of detail. Here a small region treated with a more detailed method (A) is typically embedded within a larger region described using a more coarse-grained method (B). The shaded overlap region where the fibers from these distinct computational methods are woven together into a single, continuous model is critical to the successful implementation of a concurrent coupling approach. In serial coupling results obtained using a more fundamental method (A) are used as the basis for optimizing the parameters needed to define a coarse-grained method (B), which can then be applied to larger model systems for larger time and length scale simulations.

The second strategy, known as sequential (or serial) coupling, uses results from modeling at one level as the inputs for a model at another level. This often entails fitting the parameters that define a model at one level to results derived from another level, and is often referred to as parameter passing. Thus, as shown in Figure 1.2, one method is derived from another, such that subsequent simulations carried out using the derived method are not constrained by the time and length scale limitations of the parent model(s). The aim in this procedure is often to extend the atomistic details, and thus presumably accuracy, of smaller scale methods to systems which would normally be too large for them to treat. Of course, there is a price to be paid for these gains in computational efficiency. Larger scale methods can only be derived from smaller (typically more exact) methods by making approximations and simplifying assumptions. To verify the validity of these approximations, it is important to make a direct comparison between results from the derived method and results from its parent method(s) for cases which were not included in the derivation. It is also possible that errors are propagated from the parent method(s) in the process of derivation. Comparison with experiment, where possible, is an important means of locating such errors. A common example of sequential modeling is force field parameter development and parameter optimization using results obtained from electronic structure calculations. This strategy is described in greater detail in Section 1.4.2.
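As a toy illustration of this parameter-passing strategy (an illustrative sketch, not an example from the chapter): pair energies standing in for reference data from a parent method are generated here from a Lennard-Jones potential with known parameters, and the two parameters of a coarse-grained pair potential E(r) = A/r^12 - B/r^6 are then recovered by linear least squares.

```python
import numpy as np

# "Parent method" reference data: pair energies that stand in for
# electronic-structure results, generated here from a Lennard-Jones
# potential with known parameters eps = 0.5, sigma = 1.0.
eps_true, sigma_true = 0.5, 1.0
r = np.linspace(0.9, 2.5, 40)
E_ref = 4 * eps_true * ((sigma_true / r) ** 12 - (sigma_true / r) ** 6)

# Derived method: E(r) = A/r**12 - B/r**6 is linear in (A, B), so the
# parameters can be "passed down" by linear least squares.
X = np.column_stack([r ** -12, -(r ** -6)])
(A, B), *_ = np.linalg.lstsq(X, E_ref, rcond=None)

# Recover the physical parameters of the fitted potential, using
# A = 4*eps*sigma**12 and B = 4*eps*sigma**6.
sigma_fit = (A / B) ** (1 / 6)
eps_fit = B ** 2 / (4 * A)
print(round(float(eps_fit), 6), round(float(sigma_fit), 6))  # -> 0.5 1.0
```

In practice the reference data would come from electronic structure calculations, and the derived potential would then be validated against parent-method results for configurations not used in the fit, as described above.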

Thus, communication between various levels of theory is at the heart of multiscale modeling. In the case of concurrent coupling this involves the continuous translation and shuttling of information across the boundaries between regions modeled using different methods in order to maintain the unity of the overall model. In the case of sequential coupling, information is first transferred from a parent to a daughter method, as the latter is derived or parametrized from the former. However, subsequent validation requires the reverse flow of information, as results from the derived method are returned to the parent method for comparison.

1.3 Electronic Structure Modeling

Multiscale modeling in electrochemistry typically begins at the atomic level, where the interactions of the electrons and nuclei which make up the electrochemical system are described in the language of quantum mechanics. Indeed, modern physics claims that the quantum mechanical description of such a system, in the form of a space- and time-dependent wave function, Ψ(r, t), is exact, and contains all that there is to know about the system. However, we are unable to obtain exact wavefunctions for all but the most trivial systems. Thus, Paul Dirac once famously noted: "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation" [16].

For several decades Dirac's prescription to avoid excessive computation was necessary for making headway in the effort to understand chemistry from quantum mechanical first principles, and primarily conceptual rather than strictly numerical connections between quantum mechanics and chemical phenomena were developed. However, since the advent of modern computers and the subsequent exponential increase in their computing power, a new avenue for applications of quantum mechanics to chemistry has opened up which takes advantage of the enormous computing power available today.

1.3.1 Modern Electronic Structure Theory

Even with all of the computational resources available today, exact solutions to the Schrödinger equation are, in all but the simplest cases, unattainable. Thus, much effort has been dedicated to developing numerical methods for obtaining approximate wavefunctions that provide reliable descriptions of systems of interest. The aim of this section is to briefly present the conceptual underpinnings of modern electronic structure theory and methods and the conceptual bridges that can be used to link their results to macroscopic electrochemical systems. More detailed and complete treatments of quantum mechanics [14–18], quantum chemistry [19–21], statistical mechanics [22,23], chemical kinetics [24] and electronic structure methods [25–27] are readily available in many of the works referenced in this chapter, as well as in many more we have not had occasion to cite.

1.3.1.1 Quantum Mechanical Foundations

The central premise of quantum mechanics is that every physical system is completely described by a wavefunction, Ψ(r, t), such that all system properties can be obtained by consulting the wavefunction. [One must bear in mind that the wavefunction, Ψ, is defined over the configuration space of the system, rather than over three-dimensional space. In the following, we shall take r to denote a point (or state of the system) in this configuration space.] Because the absolute square of the wavefunction, |Ψ|², in a region of configuration space is proportional to the probability density of finding the system in the same region of space (or equivalently, in the state corresponding to that region of configuration space), the wavefunction of a system must be normalizable. Furthermore, physically meaningful wavefunctions are finite, single-valued, continuous and continuously differentiable over all space.

Operators (e.g., Â) and their corresponding eigenvalues (a_i) take the place of the dynamical variables used in classical mechanics. The eigenfunction solutions, ψ_i, of the corresponding eigenvalue equation:

(1.1)   \hat{A}\,\psi_i = a_i\,\psi_i

form a possible basis set for expressing the system states, which are represented as wavefunctions, Ψ. A single measurement on any state of the system always yields a single eigenvalue (a_i). The probability that a particular eigenstate is observed is proportional to the absolute square of its coefficient, c_i, in the linear combination of eigenfunctions making up the initial state of the system, Ψ = Σ_i c_i ψ_i. However, once a particular eigenvalue, a_i, is observed, the system remains in the corresponding eigenstate, ψ_i, until it is further perturbed.

A collection of measurements on an ensemble of systems or particles with identical initial wavefunctions, Ψ, results in an average value, that is, the expectation value of the system:

(1.2) $\langle a \rangle = \int \Psi^*\,\hat{A}\,\Psi \, d\mathbf{r}$

The dynamics of a non-relativistic, quantum mechanical system are governed by the time-dependent Schrödinger equation:

(1.3) $\hat{H}\Psi = i\hbar\,\frac{\partial \Psi}{\partial t}$

When the energy of the system (i.e., the Hamiltonian, $\hat{H}$) has no explicit time dependence, we can derive and make use of the time-independent Schrödinger equation:

(1.4) $\hat{H}\Psi = E\Psi$

The Hamiltonian operator, $\hat{H}$, which operates on the wavefunction to extract the system energy, E, contains both potential energy terms for the interactions of particles in the system with each other or any external fields, $\hat{V}$, and a kinetic energy term, $\hat{T}$, for each particle, which for a particle with mass m is written in atomic units¹ as:

(1.5) $\hat{T} = -\frac{1}{2m}\nabla^2$

For a molecule composed of N nuclei with nuclear masses and charges $M_A$ and $Z_A$, respectively, and M electrons, $\hat{H}$ can be written as:

(1.6) $\hat{H} = -\sum_{A=1}^{N}\frac{1}{2M_A}\nabla_A^2 - \sum_{i=1}^{M}\frac{1}{2}\nabla_i^2 - \sum_{A=1}^{N}\sum_{i=1}^{M}\frac{Z_A}{r_{Ai}} + \sum_{i<j}\frac{1}{r_{ij}} + \sum_{A<B}\frac{Z_A Z_B}{R_{AB}}$

The form of the potential energy terms describing the electromagnetic interactions between each pair of particles couples the motions of the particles, barring analytical solutions for all but the most trivial systems. Nevertheless, useful approximate solutions are within reach for many systems of interest, in part due to the variational principle, which states that because all solutions of the Schrödinger equation are linear combinations of the eigenfunction solutions, the ground state (i.e., lowest energy) solution forms the lower limit of the system energy. Thus, the energy of any solution we generate will be greater than or equal to the ground state energy, and we can optimize any approximation of the ground state by minimizing its energy.
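The variational principle lends itself to a quick numerical illustration. In the sketch below (a toy model, not part of the text), a Hamiltonian is represented by a small random symmetric matrix, and the Rayleigh quotient of any trial vector is seen to bound the exact ground-state eigenvalue from below by the exact value:

```python
import numpy as np

# Toy illustration of the variational principle: for a Hamiltonian
# represented as a symmetric matrix, the Rayleigh quotient
# <psi|H|psi>/<psi|psi> of an arbitrary trial vector can never fall
# below the lowest eigenvalue (the "ground-state energy").
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = (A + A.T) / 2.0                    # symmetric model "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]          # exact ground-state energy

def rayleigh(H, psi):
    return psi @ H @ psi / (psi @ psi)

trial_energies = [rayleigh(H, rng.normal(size=6)) for _ in range(1000)]
assert min(trial_energies) >= E0 - 1e-12   # variational bound holds
```

Minimizing the Rayleigh quotient over a family of trial vectors therefore drives the approximation toward the ground state, which is exactly how approximate wavefunctions are optimized in practice.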

1.3.1.2 Born–Oppenheimer Approximation

The Hamiltonian for a molecule includes kinetic and potential energy terms for both electrons and nuclei, and operates on a wavefunction describing both electrons and nuclei. However, because the kinetic energy of the nuclei is small relative to the electronic kinetic energy, it can be ignored. This allows the electronic wavefunction to be calculated based on localized nuclear coordinates, rather than a delocalized nuclear wavefunction. Thus, the following electronic Schrödinger equation can be separated out of the time-independent Schrödinger equation:

(1.7) $\hat{H}_{el}\,\psi_{el}(\mathbf{r};\mathbf{R}) = E_{el}(\mathbf{R})\,\psi_{el}(\mathbf{r};\mathbf{R})$

where $\psi_{el}$ are the electronic wavefunctions, and the notation $(\mathbf{r};\mathbf{R})$ denotes that nuclear spatial coordinates are parameters and not variables.

In this approximation, first introduced by Max Born and J. Robert Oppenheimer in 1927 [28], the heavy nuclei are thought of as fixed relative to the rapid motion of the quickly moving electrons, allowing the electrons to fully equilibrate to fixed nuclear positions. The equilibrated electronic energy as a function of the nuclear coordinates then forms a potential energy surface, with local minima (stable structures) and saddle points (transition states). Thus, obtaining the electronic structure and the corresponding system energy as a function of the nuclear positions is important in computational chemistry. The Born–Oppenheimer approximation is appropriate for the vast majority of cases; however, it breaks down for nuclear configurations where there are solutions to the electronic Schrödinger equation with similar energies.

1.3.1.3 Single-Electron Hamiltonians

While the Born–Oppenheimer approximation simplifies the Schrödinger equation by separating out the motion of the nuclei, the wavefunctions for the electrons are still coupled through their electrostatic interactions, making the resulting equations very difficult to solve. Of course there is no such difficulty for a single-electron system.

(1.8) $\left[-\frac{1}{2}\nabla^2 + \hat{V}(\mathbf{r})\right]\psi = E\psi$

This suggests that a crude, but readily soluble, approximation for a multi-electron system might be made by neglecting the electron–electron interactions altogether, so that the wavefunction for each electron is solved for independently.² In this case, the total electronic wavefunction, $\Psi$, would be written as the product of single electron wavefunctions $\psi_i$. The total wavefunction is known as the Hartree product:

(1.9) $\Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_M) = \psi_1(\mathbf{r}_1)\,\psi_2(\mathbf{r}_2)\cdots\psi_M(\mathbf{r}_M)$

A more reasonable approximation than non-interacting electrons is to decouple the motions of the electrons, so that each electron interacts with the field associated with the average charge density of the other electrons rather than directly with the wavefunctions describing the other electrons. The resulting single-electron hamiltonian is known as the Hartree hamiltonian [29–31] (here for the $i$th electron):

(1.10) $\hat{h}_i = -\frac{1}{2}\nabla_i^2 - \sum_{A}\frac{Z_A}{r_{Ai}} + \int \frac{\rho_{j\neq i}(\mathbf{r}')}{|\mathbf{r}_i - \mathbf{r}'|}\,d\mathbf{r}'$

Because the final potential energy term contains the charge density of the other electrons, the electronic wavefunctions are now dependent on each other. Thus, to evaluate the Hartree hamiltonian for one electron, the wavefunctions for all of the other electrons (or at least their net charge density distribution) must first be known. The way out of this chicken-and-egg problem is to start with a trial wavefunction. This initial guess provides the background charge density for solving for each single-electron Hartree hamiltonian individually. These new solutions then provide the initial guess for a new set of solutions. The process can be repeated iteratively until the solutions converge (i.e., the wavefunctions remain relatively unchanged over the course of a single iteration in which each wavefunction is optimized against the other current wavefunctions in turn) in what is known as a self-consistent field (SCF) method. The energy for such a system can be written as the sum of the energies of the single-electron hamiltonians; however, because each electron–electron interaction is fully accounted for twice (i.e., once in the single-electron hamiltonian of each electron involved), we must compensate for the double counting by subtracting the total electron repulsion energy from the sum of individual electron energies. Thus, the total energy of the system can be written as:
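The SCF loop just described can be sketched with a deliberately artificial mean-field model (the 2×2 matrices and the density-dependent coupling are invented for illustration; this is not a real Hartree solver):

```python
import numpy as np

# Toy SCF loop: a "Fock" matrix depends on the current density, so the
# eigenvalue problem is iterated until the density stops changing.
# All matrices and the 0.5 coupling constant are hypothetical.
h_core = np.array([[-2.0, -0.5],
                   [-0.5, -1.0]])

def fock(density):
    # mean-field term built from the current density (made-up coupling)
    return h_core + 0.5 * np.diag(np.diag(density))

density = np.zeros((2, 2))             # initial trial guess
converged = False
for cycle in range(100):
    F = fock(density)                  # build operator from current density
    _, C = np.linalg.eigh(F)           # solve the single-particle problem
    occ = C[:, :1]                     # doubly occupy the lowest orbital
    new_density = 2.0 * occ @ occ.T
    if np.max(np.abs(new_density - density)) < 1e-8:
        converged = True               # self-consistency reached
        break
    density = new_density

assert converged
```

The structure mirrors the text: guess a density, solve each single-particle problem in the field of that density, rebuild the density, and repeat until self-consistency.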

(1.11) $E = \sum_i \varepsilon_i - \frac{1}{2}\iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\,d\mathbf{r}\,d\mathbf{r}'$

where $\rho$ is the self-consistent charge density distribution corresponding to the single-electron wavefunctions, $\psi_i$.

1.3.1.4 Basis Sets

The single-electron wavefunctions ($\psi_i$) that form the Hartree product total electronic wavefunctions are not known analytically, but rather must be formed from some other basis set ($\chi_\mu$).

(1.12) $\psi_i = \sum_{\mu} c_{\mu i}\,\chi_\mu$

While, from a purely mathematical point of view, any complete, and thus infinite, basis set will do, in practice we are limited to finite basis sets. Furthermore, computational considerations favor small basis sets, whose functions, when substituted into the various Hartree–Fock (HF) equations, result in integrals that can be efficiently computed. At the same time, the resulting single-electron wavefunctions should be as accurate as possible. This is best accomplished by choosing basis functions that resemble the well-known solutions to atomic hydrogen.

1.3.1.4.1 Slater and Gaussian Type Orbitals

An obvious choice along these lines is Slater type orbitals [32] (STOs), centered on the atomic nuclei. This basis set mirrors the exact orbitals for the hydrogen atom, and naturally forms a minimum basis set, which means that each electron is described by only one basis function.

(1.13) $\chi^{STO} = N\,r^{n-1}\,e^{-\zeta r}\,Y_{lm}(\theta,\varphi)$

Unfortunately, not all of the requisite integrals can be evaluated analytically; thus Gaussian type orbitals [33,34] (GTOs) provide an alternative basis, with easy-to-evaluate integrals.

(1.14) $\chi^{GTO} = N\,x^a y^b z^c\,e^{-\alpha r^2}$

However, GTOs lack a cusp at the origin and decay too rapidly away from the origin. This second deficiency can be remedied by using a linear combination of GTOs in place of each STO, fit to reproduce the STO it replaces. Linear combinations of three GTOs have been found to optimize the trade-off between accuracy and computational expense; this is known as the STO-3G basis set. Results can be further improved by moving beyond a minimum basis set, so that each electron orbital is described by not just one basis set orbital, but two, three or more. Such decompression results in so-called double-ζ, triple-ζ, and so on basis sets. Additionally, polarization functions, consisting of basis functions with one quantum number higher angular momentum (i.e., p for s and d for p orbitals), can be included to give more flexibility to valence electrons, and diffuse functions can be included to better describe weakly bound electrons.
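As a concrete sketch of the STO-3G idea, the commonly tabulated STO-3G exponents and contraction coefficients for a 1s orbital (ζ = 1) can be compared against the Slater orbital they mimic (the comparison point r = 1 bohr is an arbitrary choice):

```python
import numpy as np

# Standard STO-3G 1s contraction (zeta = 1): three normalized Gaussians
# with fixed exponents and coefficients approximate one Slater 1s orbital.
alphas = np.array([2.227660, 0.405771, 0.109818])
coeffs = np.array([0.154329, 0.535328, 0.444635])

def sto_1s(r):
    return np.pi**-0.5 * np.exp(-r)            # normalized 1s STO

def sto3g_1s(r):
    norms = (2.0 * alphas / np.pi)**0.75       # primitive normalizations
    return float(np.sum(coeffs * norms * np.exp(-alphas * r**2)))

# The contracted Gaussian tracks the STO closely at chemically relevant r,
# although it lacks the cusp at r = 0 and decays too quickly at large r.
assert abs(sto3g_1s(1.0) - sto_1s(1.0)) < 0.01
```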

1.3.1.4.2 Plane Waves

For periodic systems, particularly those with delocalized electrons, such as metals, plane waves provide a natural choice for the basis set:

(1.15) $\chi_n(\mathbf{r}) = e^{i(\mathbf{k} + \mathbf{G}_n)\cdot\mathbf{r}}$

where the wavevector $\mathbf{k}$ is restricted to the first Brillouin zone of the unit cell, and the reciprocal lattice vectors for a cell of length a are:

(1.16) $G_n = \frac{2\pi n}{a}$

where n is any integer.

The size of the basis set required for the calculations to converge (i.e., the highest value of n) must be tested for each system.
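Such a convergence test can be illustrated with a toy one-dimensional model (the cosine potential and all parameter values are invented for illustration): the lowest eigenvalue of the plane-wave Hamiltonian matrix is monitored as the highest n in the basis is increased.

```python
import numpy as np

# Plane-wave basis convergence sketch: one particle in the periodic
# potential V(x) = V0*cos(2*pi*x/a), expanded in plane waves exp(i*G_n*x)
# with G_n = 2*pi*n/a and |n| <= n_max.  All parameters are illustrative.
a, V0 = 1.0, 2.0

def ground_energy(n_max):
    ns = np.arange(-n_max, n_max + 1)
    G = 2.0 * np.pi * ns / a
    H = np.diag(0.5 * G**2)            # kinetic energy, diagonal in G
    # the cosine couples plane waves differing by one reciprocal vector
    for i in range(len(ns) - 1):
        H[i, i + 1] = H[i + 1, i] = V0 / 2.0
    return np.linalg.eigvalsh(H)[0]

# converged with respect to basis-set size (highest n):
assert abs(ground_energy(8) - ground_energy(16)) < 1e-8
```

For this smooth potential the basis converges very quickly; for real systems with hard pseudopotentials the required cutoff is much larger, which is why the convergence test must be repeated for each system.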

1.3.1.4.3 Effective Core Potentials

A common strategy for reducing the expense of modeling heavy elements, which contain many electrons, is to replace each nucleus and its core electrons with an effective core potential (ECP). The valence electrons, which are primarily responsible for chemical interactions, are then explicitly modeled in the presence of the ECP (or pseudopotential, as it is called in the physics community). Besides reducing computational expense, ECPs also provide a means of implicitly incorporating the relativistic effects, which can be important for accurately describing the core electrons of heavy elements, into a simulation without further complicating the treatment of the valence electrons, whose relativistic contributions are unimportant. While it is possible to include all core electrons in the ECP, better results are often obtained by modeling some of the highest energy core electrons explicitly alongside the valence electrons.

1.3.1.5 Enforcing the Pauli Principle

As fermions, electrons are not allowed to share identical states; rather, they are indistinguishable particles which must occupy distinct states in such a way that the sign of their combined wavefunction is reversed when they are exchanged. The Pauli principle requires that the total electronic wavefunction is antisymmetric with respect to the interchange of any two electrons. This implies two conditions. The first is that all single-electron spatial orbitals must be orthonormal:

(1.17) $\int \psi_i^*\,\psi_j \, d\mathbf{r} = \delta_{ij}$

The second is that the overall wavefunction, $\Psi$, is antisymmetric with respect to the exchange of any two electrons $i$ and $j$:

(1.18) $\Psi(\ldots,\mathbf{r}_i,\ldots,\mathbf{r}_j,\ldots) = -\Psi(\ldots,\mathbf{r}_j,\ldots,\mathbf{r}_i,\ldots)$

These rules are typically enforced by writing the total wavefunction in the form of a Slater determinant, whose terms are products of single-electron spin orbitals, $\chi_i$, which are each composed of a spatial orbital, $\psi_i$, and a spin component, $\alpha$ or $\beta$. For an M-electron system the Slater determinant has the following form:

(1.19) $\Psi = \frac{1}{\sqrt{M!}}\begin{vmatrix} \chi_1(1) & \chi_2(1) & \cdots & \chi_M(1) \\ \chi_1(2) & \chi_2(2) & \cdots & \chi_M(2) \\ \vdots & \vdots & \ddots & \vdots \\ \chi_1(M) & \chi_2(M) & \cdots & \chi_M(M) \end{vmatrix}$

Because electrons are indistinguishable, every electron appears in every orbital. Furthermore, electrons have two spin possibilities. Thus for every spatially distinct single-electron wavefunction there is actually a pair of orbitals, one with up ($\alpha$) and one with down ($\beta$) spin. An appropriately anti-symmetrized pair of spin orbitals ($\chi_a$ and $\chi_b$) has the form:

(1.20) $\Psi(1,2) = \frac{1}{\sqrt{2}}\left[\chi_a(1)\,\chi_b(2) - \chi_b(1)\,\chi_a(2)\right]$
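The antisymmetry of a Slater determinant can be checked numerically; the one-dimensional orbitals below are hypothetical stand-ins for real spin orbitals:

```python
import numpy as np
from math import factorial, sqrt

# Hypothetical one-dimensional "orbitals" used only to illustrate the
# antisymmetry of a Slater determinant under electron exchange.
def phi1(x):
    return np.exp(-x**2)

def phi2(x):
    return x * np.exp(-x**2)

def slater(x1, x2):
    # (1/sqrt(2!)) times the determinant of the orbital matrix
    M = np.array([[phi1(x1), phi2(x1)],
                  [phi1(x2), phi2(x2)]])
    return np.linalg.det(M) / sqrt(factorial(2))

# swapping the two electrons flips the sign of the wavefunction
assert np.isclose(slater(0.3, 1.1), -slater(1.1, 0.3))
# two electrons in the same state give a vanishing wavefunction (Pauli)
assert abs(slater(0.7, 0.7)) < 1e-12
```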

1.3.1.6 Electron Correlation Methods

An important property of spin is that when the interaction of two same-spin orbitals is evaluated, an extra electron correlation, known as exchange, appears, due to the ability of same-spin electrons to avoid each other. Thus the expression for the potential energy of interaction between two same-spin electrons occupying orbitals $\psi_i$ and $\psi_j$ is:

(1.21) $V_{ij} = \iint \frac{|\psi_i(1)|^2\,|\psi_j(2)|^2}{r_{12}}\,d1\,d2 - \iint \frac{\psi_i^*(1)\,\psi_j^*(2)\,\psi_j(1)\,\psi_i(2)}{r_{12}}\,d1\,d2 = J_{ij} - K_{ij}$

While for opposite-spin electrons, it is simply:

(1.22) $V_{ij} = \iint \frac{|\psi_i(1)|^2\,|\psi_j(2)|^2}{r_{12}}\,d1\,d2 = J_{ij}$

Nevertheless, this electron exchange is the only electron correlation accounted for in the Hartree–Fock method. Accounting for dynamical correlation due to explicit electron–electron interaction and non-dynamical correlation due to wavefunction contributions from higher energy electron configurations requires more advanced methods.

The Hartree–Fock equations approximate the Schrödinger equation by ignoring all correlation, with the exception of exchange. Nevertheless, by reintroducing this ignored electron correlation, they can be used as a stepping stone for returning to the full Schrödinger equation. How the Hartree–Fock method can be corrected to recapture the electron correlation that has been lost is the subject of this section.

1.3.1.6.1 Configuration Interaction

By solving the Hartree–Fock equations we arrive at an optimal HF wavefunction, $\Psi_0$. Furthermore, we are also able to solve for excited states of the system within the Hartree–Fock approximation, $\Psi_k$. Any one of these solutions can be substituted into the exact Schrödinger equation (i.e., operated on using the exact many-electron Hamiltonian operator to yield the system energy), such that the variational principle still applies. However, further improving any one of these solutions is hampered by the coupling of the electronic motions, which is the original problem the HF equations were designed to avoid. Nevertheless, given a set of Hartree–Fock configurations, we can write the wavefunction as a linear combination of these configurations, $\Psi = \sum_k c_k \Psi_k$, and optimize this linear combination according to the variational principle. This procedure, which makes finding approximate solutions to the exact Hamiltonian tractable, is known as the configuration interaction (CI) method. The exact solution is approached in the limit of an infinite basis set (i.e., all possible Hartree–Fock configurations); however, in practice, computational considerations limit us to finite basis sets.

1.3.1.6.2 Perturbation Theory

The strategy in CI methods is to make use of solutions to the Hartree–Fock equations as the basis set for a linear combination of configurations that can be optimized for the exact, multi-electron hamiltonian within the context of the variational principle. By contrast, perturbation theory rewrites the multi-electron hamiltonian in terms of an analytically solvable component ($\hat{H}_0$) and a correction factor ($\hat{H}'$). Møller–Plesset (MP) perturbation theory takes $\hat{H}_0$ to be the sum of the Hartree–Fock one-electron operators, excluding the electron–electron repulsion terms. The perturbation term ($\hat{H}'$) is then taken to be the electron–electron repulsion terms (taking into account the double counting of these interactions in the Hartree–Fock equations). This choice is appropriate insofar as the Schrödinger equation with $\hat{H}_0$ is solvable; however, the electron–electron repulsion term is not always the relatively small perturbation that the theory assumes it to be.

The uncorrected energy from MP includes no electron–electron interactions, and the first-order correction recovers the Hartree–Fock result. Additional correction terms may offer further improvements to the Hartree–Fock energy; however, these terms become increasingly complex and computationally expensive to calculate. Furthermore, the nature of these correction factors means that there is no guarantee that the approximation at any level (beyond the first) provides either an upper or a lower bound to the real energy.

1.3.1.6.3 Coupled Cluster

Coupled cluster methods provide a way to systematically add various types of excitations to the wavefunction. Thus, the first-order terms include all single-electron excitations, the second-order terms all double excitations, and so on. Coupled cluster methodology resembles the CI approach insofar as it writes the wavefunction as a linear combination of configurations. It resembles perturbation theory in systematically ordering correction factors from least to most significant.

1.3.1.6.4 Electron Correlation in Electrochemistry

At present, methods that explicitly include electron correlation are too expensive for use on most systems of interest in electrochemistry. Nevertheless, they occasionally find application in calculating dispersion interactions or van der Waals forces, which depend heavily on electron correlation.

1.3.1.7 Density Functional Theory

An alternative approach to wavefunction methods is to work instead with the electron density. Not only is the electron density more intuitive, but it also provides computational advantages, which have enabled density functional theory (DFT) to develop into a powerful and widely used methodology.

1.3.1.7.1 The Hohenberg–Kohn Theorems

Two theorems, first proved by Hohenberg and Kohn [35,36], provide the foundation for DFT. The first of these, the Hohenberg–Kohn existence theorem, states that the non-degenerate, ground-state electron density of a system uniquely determines the external potential of the system (and thus the hamiltonian) to within a constant. This means that the system energy can be expressed as a functional of the electron density:

(1.23) $E = E[\rho(\mathbf{r})]$

The second theorem, the Hohenberg–Kohn variational theorem, justifies the application of variational methods to the density: the correct ground state electron density distribution is the density distribution corresponding to the energy minimum. Thus,

(1.24) $\frac{\delta E[\rho]}{\delta \rho} = 0$

leads to the optimal ground state electron density, $\rho_0$.

1.3.1.7.2 The Energy Functional

Unfortunately, the form of $E[\rho]$ is unknown; however, if we consider the electron density to be made up of non-interacting electrons (this is allowed, provided we really have the same density), we can write the energy as a sum of single energy operators:

(1.25) $E[\rho] = -\frac{1}{2}\sum_i \langle \psi_i | \nabla^2 | \psi_i \rangle - \sum_A \int \frac{Z_A\,\rho(\mathbf{r})}{|\mathbf{r} - \mathbf{R}_A|}\,d\mathbf{r} + \frac{1}{2}\iint \frac{\rho(\mathbf{r})\,\rho(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\,d\mathbf{r}\,d\mathbf{r}' + E_{xc}[\rho]$

The first three terms correspond to the electronic kinetic energy, the electron–nucleus attractions, and the electron–electron repulsions, respectively. The form of these terms is well known; however, the form of the final term, which includes the electron exchange and correlation energies, along with a correction to the kinetic energy term, is unknown. From here, a single-electron hamiltonian is easily extracted:

(1.26) $\hat{h}_i^{KS} = -\frac{1}{2}\nabla_i^2 - \sum_A \frac{Z_A}{|\mathbf{r}_i - \mathbf{R}_A|} + \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}_i - \mathbf{r}'|}\,d\mathbf{r}' + V_{xc}(\mathbf{r}_i)$

and

(1.27) $V_{xc} = \frac{\delta E_{xc}[\rho]}{\delta \rho}$

As in the case of the HF method, the energies obtained with this single-electron hamiltonian for each electron in the system can be summed to arrive at the total system energy. However, in contrast to HF, the result is the exact system energy, rather than an approximate energy. The difficulty is that, in contrast to HF, the exact form of $E_{xc}[\rho]$ is unknown. Thus, utilizing DFT requires approximating this term.

1.3.1.7.3 Exchange–Correlation Functionals

A vast number of approximations for $E_{xc}$ have been developed. These (and the DFT methods they are utilized within) can be conveniently classed as rungs on “Jacob's ladder” (Figure 1.3), which span the gap between Hartree results and chemical accuracy [37,38]. On the bottom rung of the ladder are methods which employ the local density approximation (LDA). Here, the value of $E_{xc}$ at each point is typically borrowed from the value of $E_{xc}$ for a uniform electron gas with the same density as the local density of the point of interest. In any case, the value of $E_{xc}$ depends only on $\rho$.

Figure 1.3 Jacob's ladder illustrating the hierarchy of approximations used to construct exchange-correlation functionals. Abbreviated categorizations of methods are shown to the left of the ladder and the new level of dependence added at each rung is shown on the right. Thus, the exchange-correlation functional at any given rung of the ladder will be a function of not only the quantity or construct directly to its right, but also of all quantities and constructs below it.

The next level up is known as the generalized gradient approximation (GGA). Here, $E_{xc}$ depends not only on $\rho$, but also on its gradient, $\nabla\rho$. Including either the Laplacian of the density ($\nabla^2\rho$), or the local electronic kinetic energies, is known as the meta-generalized gradient approximation (MGGA).

The hyper-generalized gradient approximation (hyper-GGA) uses the Kohn–Sham (KS) orbitals to calculate the exact HF exchange. The total $E_{xc}$ is then written as a linear combination of the HF exchange and the $E_{xc}$ from a LDA and/or GGA method.

Finally, at the highest level of “Jacob's ladder” sit generalized random phase methods (RPM). Not only are occupied KS orbitals included, but also virtual (i.e., unoccupied) KS orbitals.

1.3.2 Applications of Electronic Structure to Geometric Properties

The system energy is the most basic output of electronic structure calculations. Due to the Born–Oppenheimer approximation, this calculated system energy corresponds to the energy associated with a particular set of nuclear coordinates (i.e., a particular system geometry). Although the multi-dimensional configuration space containing all possible geometries is often quite large, only a few select geometries, namely local minima and the saddlepoints along the minimum energy pathways (MEP) connecting the local minima, are typically required for an accurate description of the chemical characteristics and behavior of the system. Methods for finding these local minima and saddlepoints, that is, those corresponding to the resting states (RS) and transition states (TS) of a chemical system, are indispensable for successfully applying electronic structure methods to chemical systems.

1.3.2.1 Geometry Optimization

Geometry optimization procedures begin with an initial geometry guess, typically provided by the user based on previous calculations, experimental results or chemical intuition. In addition to the energy of this initial structure, $E_0$, additional quantities may be calculated in order to aid the following optimization procedure.

The first of these is the energy gradient, $\mathbf{g} = \nabla E$, which corresponds to the net forces exerted on each atom and is expressed as a vector in configuration space. In some cases, the components of the gradient are best evaluated analytically from the function used to compute the system energy, $E(\mathbf{r})$. However, in other cases it is either more efficient or necessary to evaluate them numerically, by displacing the geometry along the various directions in configuration space and computing the energy change that results from each displacement. By definition the gradient at a RS point (or any stationary point, including a TS) is zero; thus, the gradient can also be used to help identify the end point of the calculation.

Another useful metric is the second derivative matrix, also known as the Hessian. The matrix components correspond to all possible pairs of coordinates i and j (including i = j) in configuration space, and have the value $H_{ij} = \partial^2 E / \partial r_i\,\partial r_j$. When the Hessian is diagonalized at a stationary point, the resulting eigenvalues are all positive at a RS, while at a TS all are positive except for one, which is negative. Thus the Hessian can not only be used to verify that a stationary point has been reached, but can also be used to distinguish between the various types of stationary points.
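The use of Hessian eigenvalues to classify stationary points can be sketched numerically; the two-dimensional model surface below is hypothetical:

```python
import numpy as np

# Classify stationary points of a model 2-D surface by the signs of the
# Hessian eigenvalues: all positive -> minimum (RS); exactly one
# negative -> transition state (TS).  The surface is a made-up example
# with minima at x = +/-1 and a saddle at x = 0.
def energy(r):
    x, y = r
    return (x**2 - 1.0)**2 + y**2

def hessian(f, r, h=1e-4):
    # central finite differences for all second derivatives
    r = np.asarray(r, dtype=float)
    H = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            rpp = r.copy(); rpp[i] += h; rpp[j] += h
            rpm = r.copy(); rpm[i] += h; rpm[j] -= h
            rmp = r.copy(); rmp[i] -= h; rmp[j] += h
            rmm = r.copy(); rmm[i] -= h; rmm[j] -= h
            H[i, j] = (f(rpp) - f(rpm) - f(rmp) + f(rmm)) / (4.0 * h**2)
    return H

ev_min = np.linalg.eigvalsh(hessian(energy, [1.0, 0.0]))
ev_ts = np.linalg.eigvalsh(hessian(energy, [0.0, 0.0]))
assert np.all(ev_min > 0)              # minimum: all eigenvalues positive
assert int((ev_ts < 0).sum()) == 1     # saddle: exactly one negative
```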

The most basic optimization strategy is to optimize the variables (i.e., coordinates in configuration space) one at a time. Because the variables are not independent, there is no guarantee that the local minimum will be approached with this procedure, and even should it be approached, this is likely to require many optimization cycles. Thus, for systems with more than a handful of variables (i.e., the vast majority of molecular geometries) this method is neither practical nor reliable. Fortunately, other strategies are available.

Steepest descent methods optimize the structure at each optimization step along the direction of steepest descent (i.e., the negative gradient) of the starting geometry for that step. Convergence is guaranteed using steepest descent methods; however, their efficiency is hampered by their intrinsic requirement that adjacent optimization steps be in perpendicular directions. Conjugate gradient methods attempt to remedy this and improve efficiency by forming the new search direction from a linear combination of the previous search direction and the gradient of the current geometry.
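A minimal steepest-descent sketch (fixed step size, hypothetical model surface) illustrates the basic loop: step along the negative gradient until the net force falls below a convergence threshold.

```python
import numpy as np

# Steepest descent on the hypothetical surface (x^2-1)^2 + y^2,
# which has minima at (+/-1, 0).  Fixed step size for simplicity;
# production codes use line searches or trust radii instead.
def grad(r):
    x, y = r
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

r = np.array([1.5, 0.8])               # initial geometry guess
step = 0.05
while np.linalg.norm(grad(r)) > 1e-8:  # converge on vanishing forces
    r = r - step * grad(r)

assert np.allclose(r, [1.0, 0.0], atol=1e-6)   # nearest local minimum
```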

Newton–Raphson methods take advantage of not only the gradient but also the Hessian to advance the optimization yet more quickly toward the nearest stationary point. Because this stationary point could be a minimum, maximum or saddlepoint, it is important that the initial guess is made within the desired region of configuration space.
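A Newton–Raphson sketch on the same kind of hypothetical surface shows both the use of the Hessian and the caveat above: started near the saddle, the iteration converges to the saddle, not to a minimum.

```python
import numpy as np

# Newton-Raphson steps r -> r - H^-1 g on the hypothetical surface
# (x^2-1)^2 + y^2: the iteration converges to the NEAREST stationary
# point, which here is the saddle at the origin.
def grad(r):
    x, y = r
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def hess(r):
    x, y = r
    return np.array([[12.0 * x**2 - 4.0, 0.0],
                     [0.0,               2.0]])

r = np.array([0.2, 0.3])               # initial guess near the saddle
for _ in range(50):
    r = r - np.linalg.solve(hess(r), grad(r))

assert np.allclose(r, [0.0, 0.0], atol=1e-10)  # converged to the saddle
```

This is why a reasonable initial guess in the desired region of configuration space matters: the same iteration started near x = 1 would converge to the minimum instead.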

1.3.2.2 Transition State Searches

Methods for locating saddle points can conveniently be categorized by the type of input or initial guess that they require. Local methods require nothing more than an initial geometry guess near the transition state [39]. As we have already seen Newton–Raphson methods are available for optimizing an initial input structure to the nearest stationary point (in this case a saddle point). An analysis of which internal coordinates (bond distances, angles, etc.) are significantly different in the products compared with the reactants sometimes provides insight into the region of configuration space in which the saddle point is most likely to be found, suggesting an initial guess. However, this is often not the case, making an acceptable initial guess hard to come by. This difficulty, combined with the high computation cost of calculating the Hessian at each step, makes local methods an impractical sole means for finding most saddle points. At the very least a different preliminary method is needed to arrive at a reasonable initial guess, before turning to local optimization methods. Interpolation methods are able to fill this role. Unlike local optimization methods, they require the minima on either side of the saddle point being sought as inputs [40]. Various schemes can then be used to trace out and optimize a MEP between the given minima. The crest of this MEP then corresponds to the saddle point being sought. In some cases system coordinates can be chosen such that the reaction coordinates are seen to clearly correspond to just one or two of these system coordinates. Constrained minimization, in which these reactive coordinates are stepped from their reactant to their product values, provides an intuitive means of mapping out a reaction pathway, in what is known as coordinate driving. 
Linearly interpolating the reactant and product cartesian coordinates is another interpolation strategy, with a variety of strategies for optimizing the interpolated image(s) and driving one of them to the saddle point being available [41]. Particularly popular are variations of the nudged elastic band (NEB) method (Figure 1.4), which uses a spring constant to connect and evenly distribute images over the MEP [42].
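The coordinate-driving scheme mentioned above can be sketched on a hypothetical two-dimensional surface: the reaction coordinate x is stepped from reactant to product, the remaining coordinate is relaxed at each step, and the crest of the resulting profile estimates the transition state.

```python
import numpy as np

# Coordinate driving on a made-up surface with minima near (+/-1, +/-0.3).
# x is the driven reaction coordinate; y is relaxed (here by a dense grid
# scan, standing in for a proper minimizer) at each fixed x.
def energy(x, y):
    return (x**2 - 1.0)**2 + (y - 0.3 * x)**2

xs = np.linspace(-1.0, 1.0, 41)        # drive x from reactant to product
ys = np.linspace(-1.0, 1.0, 2001)      # grid used to relax y
path_energies = []
for x in xs:
    path_energies.append(float(np.min(energy(x, ys))))

i_ts = int(np.argmax(path_energies))   # crest of the driven pathway
assert abs(xs[i_ts]) < 1e-9            # barrier crest at x = 0
assert abs(path_energies[i_ts] - 1.0) < 1e-5
```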

Figure 1.4 Illustration of how the NEB method would be applied to a two-dimensional potential energy surface to find a transition state. A typical initial guess used to initiate a NEB procedure, and the final MEP, at which the application of the NEB method arrives, are shown on the potential energy surface.

1.3.3 Corrections to Potential Energy Surfaces and Reaction Pathways

While local minima and saddle points are the critical locations on a potential energy surface (PES) for describing chemical reactions, there is a collection of system states associated with each of these stationary points on the PES. These associated states include the vibrational, rotational, translational and electronic states of the molecules within the system, as well as configurational states of the system components. Taking into consideration the energetic and entropic contributions of these states is often vital for accurately modeling chemical reactions that might occur in a system.

1.3.3.1 Energy and Entropy Corrections

Vibrational states of molecules are typically treated within a harmonic approximation. In this case, the normal modes, $\mathbf{q}_i$, and corresponding frequencies, $\omega_i$, are found by solving the eigenvalue equation given by the (mass-weighted) Hessian:

(1.28) $\mathbf{H}\,\mathbf{q}_i = \omega_i^2\,\mathbf{q}_i$

Once the normal modes are known, the allowed energies and states for each mode are determined by the quantum mechanical treatment of the individual harmonic oscillators. Thus the energy states for a vibrational mode with frequency $\omega$ are given by:

(1.29) $E_n = \left(n + \frac{1}{2}\right)\hbar\omega, \qquad n = 0, 1, 2, \ldots$

The total partition function for all vibrational states ($q_{vib}$), from which their entropic contributions at temperature T can be extracted, is given by:

(1.30) $q_{vib} = \prod_i \frac{e^{-\hbar\omega_i/2k_BT}}{1 - e^{-\hbar\omega_i/k_BT}}$
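The closed-form single-mode partition function can be verified against a direct sum over the harmonic levels; the vibrational quantum and temperature below are illustrative values in eV:

```python
import numpy as np

# Check the harmonic-oscillator partition function: the direct sum over
# levels E_n = (n + 1/2)*hbar*omega matches the geometric-series closed
# form exp(-x/2)/(1 - exp(-x)) with x = hbar*omega/(kB*T).
hbar_omega = 0.2      # vibrational quantum, eV (illustrative value)
kT = 0.025            # kB*T near room temperature, eV
x = hbar_omega / kT

n = np.arange(200)                     # enough levels for convergence
q_sum = float(np.sum(np.exp(-(n + 0.5) * x)))
q_closed = float(np.exp(-x / 2.0) / (1.0 - np.exp(-x)))
assert np.isclose(q_sum, q_closed, rtol=1e-12)
```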

In cases where gas phase molecules consisting of more than a single atom are present (i.e., where molecules are free to rotate), rotational degrees of freedom should be considered. These are typically accounted for within a rigid rotor approximation, which assumes that the internal geometry of the molecule is fixed. For a diatomic molecule the energy levels obtained from the quantum mechanical solution of a rigid rotor for two masses, $m_1$ and $m_2$, lying distances $r_1$ and $r_2$, respectively, from their combined center of mass, leading to a moment of inertia $I = m_1 r_1^2 + m_2 r_2^2$, are:

(1.31) $E_J = \frac{\hbar^2}{2I}\,J(J+1), \qquad J = 0, 1, 2, \ldots$

When we take into account the symmetry index, $\sigma$ (which is 1 for a heteronuclear and 2 for a homonuclear diatomic molecule), we can write the partition function encompassing all rotational states as:

(1.32) $q_{rot} = \frac{8\pi^2 I k_B T}{\sigma h^2}$

For more complex molecules with three moments of inertia, $I_A$, $I_B$ and $I_C$, the partition function is:

(1.33) $q_{rot} = \frac{\sqrt{\pi}}{\sigma}\left(\frac{8\pi^2 k_B T}{h^2}\right)^{3/2}\sqrt{I_A I_B I_C}$

In the case of gas phase molecules, not only rotational but also translational motion should be considered. The expected energy contribution from translational degrees of freedom for an ideal gas particle with mass m at temperature T is given by:

(1.34) $E_{trans} = \frac{3}{2}\,k_B T$

and the partition function for such a particle constrained in a volume V is:

(1.35) $q_{trans} = \left(\frac{2\pi m k_B T}{h^2}\right)^{3/2} V$

While there are cases where excited electronic states are low enough in energy to play a significant role, in the majority of cases only the ground electronic state is accessible and thus the ground state energy computed using an electronic structure method provides a good approximation for the electronic energy of the system, $E_{elec}$. The partition function for this electronic state, $q_{elec}$, is given by its degeneracy, $g_0$, that is, its spin multiplicity.

The total system energy can now be computed by summing these various energy contributions:

(1.36) $E = E_{elec} + E_{vib} + E_{rot} + E_{trans}$

Similarly the overall partition function is formed by taking the product of the various contributing partition functions:

(1.37) $Q = q_{elec}\,q_{vib}\,q_{rot}\,q_{trans}$

Armed with these energy correction terms and their corresponding partition functions, we are able to calculate a variety of useful thermodynamic quantities, such as the zero-point energy, entropy, enthalpy and Gibbs free energy.

1.3.3.2 Thermodynamic State Functions

A stationary state, and indeed any point on a PES, corresponds to a set of fixed nuclear coordinates. However, the Heisenberg uncertainty principle does not allow for fixed nuclear coordinates, because the positions and momenta of the nuclei would then be simultaneously and exactly determined. Similarly, the absence of a zero-energy state among the vibrational states means that regardless of the temperature (i.e., even as absolute zero is approached), molecules continue to vibrate, albeit only in their lowest vibrational states. Thus meaningful comparison with experiment requires correcting system energies to include these zero-point corrections. Adding these zero-point corrections to the “bottom of the well” energy, $E_{well}$, yields the zero-point-corrected energy of the system:

(1.38) $E_{ZPE} = E_{well} + \sum_i \frac{\hbar\omega_i}{2}$

This zero-point energy then corresponds to the effective energy in experiments conducted at temperatures approaching 0 K.

Making additional comparisons with macroscopic quantities such as pressure p, temperature T, and volume V requires us to use the tools of statistical mechanics to derive appropriate thermodynamic quantities from the overall partition function, Q, we found above. Thus, the temperature-dependent internal energy of a system at a particular temperature T, $U(T)$, can be calculated from its partition function, Q:

(1.39) $U(T) = k_B T^2 \left(\frac{\partial \ln Q}{\partial T}\right)_V$

An often more relevant quantity is the enthalpy, H, since it corresponds to experimentally determined heats of formation:

(1.40) $H = U + pV$

The entropy, S, of a system can also be calculated directly from the partition function:

(1.41) $S = k_B \ln Q + \frac{U(T)}{T}$

Using this entropy both the Helmholtz, A, and the Gibbs, G, free energies can also be computed:

(1.42) $A = U - TS, \qquad G = H - TS$

The Gibbs free energy is particularly useful as it corresponds to the most common experimental conditions (fixed temperature and pressure). Finally, it should be noted that these thermodynamic quantities can not only be computed for individual points on the PES; the points can then be used to construct new energy surfaces, corresponding to the thermodynamic quantity of interest.
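As a numerical sketch of these relations, the internal energy obtained by differentiating ln Q for a single harmonic mode can be checked against the known analytic mean energy of a harmonic oscillator (constants in eV; the vibrational quantum is an illustrative value):

```python
import numpy as np

# U(T) = kB*T^2 * d(ln Q)/dT, evaluated by central finite differences
# for one harmonic vibrational mode, versus the analytic result
# U = hbar*omega*(1/2 + 1/(exp(hbar*omega/(kB*T)) - 1)).
kB = 8.617333e-5      # Boltzmann constant, eV/K
hw = 0.2              # vibrational quantum hbar*omega, eV (illustrative)

def lnQ(T):
    x = hw / (kB * T)
    return -x / 2.0 - np.log(1.0 - np.exp(-x))

T, dT = 300.0, 1e-3
U_numeric = kB * T**2 * (lnQ(T + dT) - lnQ(T - dT)) / (2.0 * dT)
x = hw / (kB * T)
U_analytic = hw * (0.5 + 1.0 / (np.exp(x) - 1.0))
assert np.isclose(U_numeric, U_analytic, rtol=1e-6)
```

At 300 K this mode sits almost entirely in its ground state, so U is dominated by the zero-point term hw/2, consistent with the zero-point discussion above.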

1.3.3.3 Reaction Energies and Rates

The raw energy from a single electronic structure calculation typically has little meaning on its own for several reasons. First, the use of pseudo-potentials masks the energetic contributions of core electrons. Second, the energy to form the system from infinitely separated electrons and nuclei (or pseudo-potentials when they are used) is almost never directly relevant to experimental measurables. Finally, systematic errors associated with the use of any particular method and basis set tend to result in relatively large errors for a given individual calculation.

If, instead, relative energies are calculated, the situation changes considerably. Approximations and systematic errors often cancel out, yielding much improved accuracy. Furthermore, reaction energies and heats of formation are standard experimentally measured quantities, which are readily available for many reactions and substances.

The heat of formation, $\Delta H_f$, of a system composed of atoms indexed by $i$, with stoichiometric factors $n_i$ and standard-state enthalpies $H_i^{\circ}$ computed from electronic structure calculations as shown above, and with an electronic-structure-derived system enthalpy $H_{\mathrm{sys}}$, is:

$\Delta H_f = H_{\mathrm{sys}} - \sum_i n_i H_i^{\circ}$   (1.43)

Reaction energies, enthalpies, and free energies can also be calculated analogously. Thus, the Gibbs free energy of a reaction is given by:

$\Delta G_{\mathrm{rxn}} = \sum_{\mathrm{products}} n_j G_j - \sum_{\mathrm{reactants}} n_i G_i$   (1.44)

The Gibbs free energy (or the corresponding thermodynamic quantity for another relevant ensemble) can then be used to calculate the equilibrium constant for the reaction of interest:

$K = \exp\!\left( -\dfrac{\Delta G_{\mathrm{rxn}}}{RT} \right)$   (1.45)
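Numerically, Eqs. (1.44) and (1.45) amount to a subtraction and an exponential. In the sketch below the molar Gibbs energies are hypothetical example values (J/mol), which is why the gas constant R appears in the exponent:

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def reaction_free_energy(g_products, g_reactants):
    """Delta G_rxn = sum over products minus sum over reactants  (Eq. 1.44), J/mol."""
    return sum(g_products) - sum(g_reactants)

def equilibrium_constant(delta_g, T):
    """K = exp(-Delta G_rxn / (R T))  (Eq. 1.45)."""
    return math.exp(-delta_g / (R * T))

# Hypothetical molar Gibbs energies for a reaction A + B -> C (J/mol)
dG = reaction_free_energy([-120.0e3], [-60.0e3, -40.0e3])  # -20 kJ/mol
K = equilibrium_constant(dG, 298.15)                       # K on the order of 3e3
```

A reaction that is exergonic by 20 kJ/mol thus lies well on the product side at room temperature, illustrating the exponential sensitivity of K to the computed free energy.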

This is all the information needed to characterize a system at equilibrium; however, non-equilibrium systems, and in particular reaction rates, are often of interest in electrochemistry. The rate of an elementary reaction step can be written as the product of the concentrations of the reactants, each raised to the power of its stoichiometric factor, times a rate constant:

$r = k \prod_i c_i^{n_i}$   (1.46)

Transition state theory provides an approximate means of calculating the rate constant from the initial state (i.e., the reactants) and the transition state separating reactants and products on the PES:

$k = \dfrac{k_{\mathrm{B}} T}{h} \exp\!\left( -\dfrac{\Delta G^{\ddagger}}{RT} \right)$   (1.47)

where $\Delta G^{\ddagger}$ is the free-energy difference between the reactants and the transition state. The critical assumption here is that the system begins in the reactant state and then samples nearby states until at last it samples the transition state, at which point it proceeds directly to the product state. Thus, the exponential term can be interpreted as the Boltzmann factor for finding the system at the transition state, and the prefactor $k_{\mathrm{B}} T/h$ gives the rate at which new states are sampled.
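The TST expression of Eq. (1.47) is straightforward to evaluate. In this sketch the 75 kJ/mol barrier is an arbitrary illustrative value, and the activation free energy is treated as a molar quantity (hence R rather than k_B in the exponent):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J s
R = 8.314462618      # gas constant, J/(mol K)

def tst_rate_constant(delta_g_act, T):
    """k = (k_B T / h) * exp(-Delta G_act / (R T))  (Eq. 1.47); delta_g_act in J/mol."""
    prefactor = K_B * T / H                        # state-sampling rate, ~6e12 s^-1 at 300 K
    boltzmann = math.exp(-delta_g_act / (R * T))   # Boltzmann weight of the transition state
    return prefactor * boltzmann

k = tst_rate_constant(75.0e3, 300.0)  # 75 kJ/mol barrier at 300 K gives k of order 1 s^-1
```

Note how the prefactor alone sets an upper bound of roughly 6×10¹² s⁻¹ at room temperature; every additional ~5.7 kJ/mol of barrier then reduces the rate by about an order of magnitude.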

1.3.4 Electronic Structure Models in Electrochemistry

Traditional electrochemical experiments involve systems composed of around 10²⁶ atoms. However, with current computational technology, routine electronic structure calculations are only feasible for around 10² atoms. Thus, modeling macroscopic-scale experimental systems with nano-scale electronic structure calculations requires the following three assumptions or approximations. First, that the experimental system can be broken up into various spatially localized processes (e.g., electrochemical reactions at the interface or ion diffusion in the electrolyte). Second, that the macroscopic properties of these processes need not be derived by exhaustively summing all underlying atomistic processes, but can instead be derived from a limited number of representative exemplars of those processes. Finally, that there are models involving no more than 10² atoms which reliably represent the critical atomistic processes in the macroscopic system and can thus serve as the exemplars from which macroscopic properties are derived.

The processes most commonly modeled with electronic structure calculations are the chemical reactions at the interface between the ion conductor and the electron conductor. Typically, this is a solid–liquid interface. The number of atoms explicitly involved in an individual electrochemical reaction at such an interface is typically within the limits of what can be treated explicitly with electronic structure methods. The electrode surface, as well as the surrounding electrolyte, often plays a crucial role in surface reactions; however, both extend well beyond the nano-scale limitations of electronic structure calculations into the macroscopic scale. Thus, applying electronic structure methods to electrochemical surface reactions requires restricting the extent to which the solvent and surface are treated explicitly (i.e., modeled as individual atoms).

1.3.4.1 Modeling the Electrode Surface: Cluster versus Slab

There are two basic approaches for modeling extended electrode surfaces: the cluster approach and the slab/supercell approach (Figure 1.5) [27,43–45]. In the cluster approach, a cluster of atoms surrounding the active surface site, where the reaction is to take place, is cut out and used to model the electrode surface. In determining which particular cluster model to use, tests should be performed to verify that the chosen cluster well approximates the chemical activity of the active site for the reaction of interest. Because computational capabilities were once much more limited than they are today, clusters were a popular model choice, as a cluster can be formed from as few as a single atom. However, many of these early, small cluster models have been found to give inadequate, if not altogether false, descriptions of the surface chemistry. Indeed, it has been shown that clusters of 20 to more than 50 atoms are required to reliably model even the simplest metal surfaces [46]. It is therefore important to verify that one's results are converged with respect to cluster size before adopting a cluster model. Cluster models are particularly appropriate for studying isolated reactions, in which no neighboring adsorbates are involved. In contrast to cluster models, which limit the extent of the electrode in all directions, slab/supercell methods limit only the depth of the electrode. Infinite extent in the directions parallel to the surface is achieved by introducing periodicity. Convergence should be tested to verify that the slab is sufficiently thick to model the surface of a bulk system. Convergence with respect to the unit cell lengths parallel to the surface is likewise important to verify when low-surface-coverage situations are being modeled; otherwise, these unit cell lengths can be chosen to correspond to the surface coverage being modeled.

Figure 1.5 Illustrations of slab (a) and cluster (b) methods for modeling an electrode surface. In the slab approximation, the electrode surface is modeled using a periodically infinite slab. This infinite slab (with a periodically recurring adsorbate) is shown below, while the contents of a single simulation supercell are shown above among the outlines of neighboring supercells. In the cluster approximation a cluster is conceptually lifted out of the electrode surface, and taken to be representative of the whole on the basis of convergence tests, where various cluster sizes and geometries are tested.

1.3.4.2 Modeling the Solvent: Explicit versus Implicit