A Matter of Density - N. Sukumar - E-Book

Description

The origins and significance of electron density in the chemical, biological, and materials sciences.

Electron density is one of the fundamental concepts underlying modern chemistry and one of the key determinants of molecular structure and stability. It is also the basic variable of density functional theory, which has made possible, in recent years, the application of the mathematical theory of quantum physics to chemical and biological systems. With an equal emphasis on computational and philosophical questions, A Matter of Density: Exploring the Electron Density Concept in the Chemical, Biological, and Materials Sciences addresses the foundations, analysis, and applications of this pivotal chemical concept.

The first part of the book presents a coherent and logically connected treatment of the theoretical foundations of the electron density concept. Discussion includes the use of probabilities in statistical physics; the origins of quantum mechanics; the philosophical questions at the heart of quantum theory, such as quantum entanglement; and methods for the experimental determination of electron density distributions.

The remainder of the book deals with applications of the electron density concept in the chemical, biological, and materials sciences. Contributors offer insights on how a deep understanding of the origins of chemical reactivity can be gleaned from the concepts of density functional theory. Also discussed are the applications of electron density in molecular similarity analysis and of electron-density-derived molecular descriptors, such as electrostatic potentials and local ionization energies. This section concludes with some applications of modern density functional theory to surfaces and interfaces.
An essential reference for students as well as quantum and computational chemists, physical chemists, and physicists, this book offers an unparalleled look at the development of the concept of electron density from its inception to its role in density functional theory, which led to the 1998 Nobel Prize in Chemistry.

Page count: 555

Year of publication: 2012




Table of Contents

Title Page

Copyright

Preface

Contributors

Chapter 1: Introduction of Probability Concepts in Physics—the Path to Statistical Mechanics

Further Reading

Chapter 2: Does God Play Dice?

2.1 Quanta of Radiation

2.2 Adiabatic Invariants

2.3 Probability Laws

2.4 Matter Waves

2.5 Quantum Statistics

2.6 Matrix Mechanics and Commutation Relations

2.7 Wave Functions

2.8 The Statistics of Electrons

2.9 Does God Play Dice?

References

Chapter 3: The Electron Density

3.1 Molecular Structure

3.2 Self-Consistent Treatment of Many-Electron Systems

3.3 Density Matrices and Electron Correlation

3.4 Experimental Determination of the Electron Density

3.5 Concluding Remarks

References

Chapter 4: Atoms in Molecules

4.1 Critical Points of the Electron Density

4.2 Virial Partitioning of the Electron Density

4.3 The Bond Path and the Molecular Graph

4.4 Catastrophe Points in the Change of Molecular Structure

4.5 Topology of the Laplacian Distribution

4.6 The Fermi Hole and Electron Delocalization

4.7 Electron Localization Function

4.8 The Source Function

4.9 Stockholder Partitioning of the Electron Density

4.10 Atoms in Momentum Space

4.11 Density Matrix Partitioning

4.12 Concluding Remarks

4.13 Epilogue

References

Chapter 5: Density Functional Approach to the Electronic Structure of Matter

5.1 The Hohenberg–Kohn Theorems

5.2 The Chemical Potential

5.3 The Exchange-Correlation Hole

5.4 The Kohn–Sham Equation

5.5 A Matter of Phase

Acknowledgment

References

Chapter 6: Density-Functional Approximations for Exchange and Correlation

6.1 The Challenge of Density-Functional Theory

6.2 Exchange and Correlation Functionals

6.3 Ingredients and Techniques for Constructing Density Functional Approximations

6.4 Nonempirical Derivation and Local Density Models

6.5 Semilocal Functionals Beyond the Local Density Approximation

6.6 Constraint Satisfaction

6.7 The Comeback of Exact Exchange: Global and Local Hybrids

6.8 The Best of Both Worlds: Range-Separated Hybrids

6.9 Empirical Fits

6.10 Correlation Functionals Compatible with Exact Exchange

6.11 Current Trends and Outlook for the Future

References

Chapter 7: An Understanding of the Origin of Chemical Reactivity from a Conceptual DFT Approach

7.1 Introduction

7.2 Reactivity Descriptors

7.3 Molecular Electronic Structure Principles

7.4 Conceptual DFT as a Useful Tool Towards Analyzing Chemical Reactivity

7.5 Concluding Remarks

Acknowledgments

References

Chapter 8: Electron Density and Molecular Similarity

8.1 The Molecular Similarity Principle in Drug Design

8.2 Electron-Density-Based Atomic and Molecular Similarity Analysis

8.3 Molecular Similarity Measures from Critical Points of the Electron Density

8.4 Electron-Density-Derived Molecular Surface Descriptors

8.5 Alignment-Free Molecular Shape and Electronic Property Descriptors

8.6 Network Graphs from Molecular Similarity

References

Chapter 9: Electrostatic Potentials and Local Ionization Energies in Nanomaterial Applications

9.1 The Electronic Density

9.2 The Electrostatic Potential

9.3 The Average Local Ionization Energy

9.4 Reactivity

9.5 Summary

Acknowledgment

References

Chapter 10: Probing Electron Dynamics with the Laplacian of the Momentum Density

10.1 Introduction

10.2 Computational Methods

10.3 A Postulate and its Existing Support

10.4 Structure of Motion, Transferability, and Anisotropy

10.5 Conclusion

Acknowledgments

References

Chapter 11: Applications of Modern Density Functional Theory to Surfaces and Interfaces

11.1 Introduction

11.2 The Predictive Capability of DFT

11.3 Slab Models Used in Surface/Interface Studies

11.4 The Surface Energy and Issues with Polar Surfaces

11.5 Adsorbates on Surfaces—Energetics and the Wulff Construction

11.6 Adsorbates on Surfaces—Electronic Structure

11.7 Surface Phase Diagrams: First Principles Thermodynamics

11.8 Interface Phase Diagrams: First Principles Thermodynamics

11.9 Outlook and Concluding Thoughts

Acknowledgments

References

Index

Copyright © 2013 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

A matter of density : exploring the electron density concept in the chemical, biological, and materials sciences / edited by N. Sukumar.

pages cm

Includes index.

ISBN 978-0-470-76900-3 (hardback)

1. Electron distribution. I. Sukumar, N., editor of compilation.

QC793.5.E626M38 2013

539.7′2112—dc23

2012023635

Preface

Electron density is one of the fundamental concepts underpinning modern chemistry. Introduced through Max Born's probability interpretation of the wave function, it is an enigma that bridges the classical concepts of particles and fluids. The electronic structure of matter is intimately related to the quantum laws of composition of probabilities and the Born–Oppenheimer separation of electronic and nuclear motions in molecules. The topology of the electron density determines the details of molecular structure and stability. The electron density is a quantity that is directly accessible to experimental determination through diffraction experiments. It is the basic variable of density functional theory, which has enabled practical applications of the mathematical theory of quantum physics to chemical and biological systems in recent years. The importance of density functional theory was recognized by the award of the 1998 Nobel Prize in Chemistry to Walter Kohn and John Pople.

In the first part (Chapters 1–6) of this book, we aim to present the reader with a coherent and logically connected treatment of the theoretical foundations of the electron density concept, beginning with its statistical underpinnings: the use of probabilities in statistical physics (Chapter 1) and the origins of quantum mechanics. We delve into the philosophical questions at the heart of the quantum theory, such as quantum entanglement (Chapter 2), and also describe methods for the experimental determination of electron density distributions (Chapter 3). The conceptual and statistical framework developed in earlier chapters is then employed to treat electron exchange and correlation, the partitioning of molecules into atoms (Chapter 4), density functional theory, and the theory of the insulating state of matter (Chapter 5). The first part concludes with Chapter 6, Viktor Staroverov's in-depth treatment of density-functional approximations for exchange and correlation.

The second part (Chapters 7–11) deals with applications of the electron density concept in the chemical, biological, and materials sciences. In Chapter 7, Chakraborty, Duley, Giri, and Chattaraj describe how a deep understanding of the origins of chemical reactivity can be gleaned from the concepts of density functional theory. Applications of electron density in molecular similarity analysis and of electron-density-derived molecular descriptors form the subject matter of Chapter 8. In Chapter 9, Politzer, Bulat, Burgess, Baldwin, and Murray elaborate on two of the most important such descriptors, namely, electrostatic potentials and local ionization energies, with particular reference to nanomaterial applications. All the applications discussed thus far have dealt with electron density in position space. A complementary perspective is obtained by considering the electron density in momentum space. MacDougall and Levit illustrate this in Chapter 10 by employing the Laplacian of the electron momentum density as a probe of electron dynamics. Pilania, Zhu, and Ramprasad conclude the discussion in Chapter 11 with some applications of modern density functional theory to surfaces and interfaces. The book is addressed to senior undergraduate and graduate students in chemistry and philosophers of science, as well as to current and aspiring practitioners of computational quantum chemistry, and anyone interested in exploring the applications of the electron density concept in chemistry, biology, and materials sciences.

I would like to express my sincere thanks and appreciation to the numerous friends and colleagues who helped to make this book a reality by graciously contributing their precious time and diligent efforts in reviewing various chapters or otherwise offering their valuable suggestions, namely, Drs. Felipe Bulat and A. K. Rajagopal (Naval Research Laboratory, Washington, DC), Prof. Shridhar Gadre (University of Pune and Indian Institute of Technology, Kanpur, India), Dr. Michael Krein (Lockheed Martin Advanced Technology Laboratories, Cherry Hill, NJ and Rensselaer Polytechnic Institute, Troy, NY), Prof. Preston MacDougall (Middle Tennessee State University, Murfreesboro, TN), Prof. Cherif Matta (Mount Saint Vincent University and Dalhousie University, Halifax, Nova Scotia, Canada), Dr. Salilesh Mukhopadhyay (Feasible Solutions, NJ), Profs. Peter Politzer and Jane Murray (CleveTheoComp LLC, Cleveland, OH), Prof. Sunanda Sukumar (Albany College of Pharmacy, Albany, NY and Shiv Nadar University, Dadri, India), Prof. Ajit Thakkar (University of New Brunswick, Fredericton, Canada), and Prof. Viktor Staroverov (University of Western Ontario, Canada). I also owe a deep debt of gratitude to the institutions and individuals who hosted me at various times during the last couple of years and provided me with the facilities to complete this book, namely, Rensselaer Polytechnic Institute in Troy, NY, and my host there Prof. Curt Breneman; the Institute of Mathematical Sciences in Chennai, India, and my host there Prof. G. Baskaran; and Shiv Nadar University in Dadri, India. The patient assistance of Senior Acquisitions Editor, Anita Lekhwani, and her very capable and efficient team at John Wiley & Sons has also been invaluable in this process.

N. Sukumar

Department of Chemistry Shiv Nadar University Dadri, UP, India

Contributors

Jeffrey W. Baldwin, Acoustics Division, Naval Research Laboratory, Washington, DC
Felipe A. Bulat, Acoustics Division, Naval Research Laboratory, Washington, DC
James Burgess, Acoustics Division, Naval Research Laboratory, Washington, DC
Arindam Chakraborty, Department of Chemistry and Center for Theoretical Studies, Indian Institute of Technology, Kharagpur, India
Pratim Kumar Chattaraj, Department of Chemistry and Center for Theoretical Studies, Indian Institute of Technology, Kharagpur, India
Soma Duley, Department of Chemistry and Center for Theoretical Studies, Indian Institute of Technology, Kharagpur, India
Santanab Giri, Department of Chemistry and Center for Theoretical Studies, Indian Institute of Technology, Kharagpur, India
M. Creon Levit, NASA, Advanced Supercomputing Division, Ames Research Center, Moffett Field, CA
Preston J. MacDougall, Department of Chemistry and Center for Computational Science, Middle Tennessee State University, Murfreesboro, TN
Jane S. Murray, CleveTheoComp LLC, Cleveland, OH
G. Pilania, Department of Chemical, Materials and Biomolecular Engineering, Institute of Materials Science, University of Connecticut, Storrs, CT
Peter Politzer, CleveTheoComp LLC, Cleveland, OH
R. Ramprasad, Department of Chemical, Materials and Biomolecular Engineering, Institute of Materials Science, University of Connecticut, Storrs, CT
Viktor N. Staroverov, Department of Chemistry, The University of Western Ontario, London, Ontario, Canada
N. Sukumar, Department of Chemistry, Shiv Nadar University, India; Rensselaer Exploratory Center for Cheminformatics Research, Troy, NY
Sunanda Sukumar, Department of Chemistry, Shiv Nadar University, India
H. Zhu, Department of Chemical, Materials and Biomolecular Engineering, Institute of Materials Science, University of Connecticut, Storrs, CT

Chapter 1

Introduction of Probability Concepts in Physics—the Path to Statistical Mechanics

N. Sukumar

It was an Italian gambler who gave us the first scientific study of probability theory. But Girolamo Cardano, also known as Hieronymus Cardanus or Jerome Cardan (1501–1576), was no ordinary gambler. He was also an accomplished mathematician, a reputed physician, and author. Born in Pavia, Italy, Cardan was the illegitimate son of Fazio Cardano, a Milan lawyer and mathematician, and Chiara Micheria. In addition to his law practice, Fazio lectured on geometry at the University of Pavia and at the Piatti Foundation and was consulted by the likes of Leonardo da Vinci on matters of geometry. Fazio taught his son mathematics and Girolamo started out as his father's legal assistant, but then went on to study medicine at Pavia University, earning his doctorate in medicine in 1525. But on account of his confrontational personality, he had a difficult time finding work after completing his studies. In 1525, he applied to the College of Physicians in Milan, but was not admitted owing to his illegitimate birth. Upon his father's death, Cardan squandered his bequest and turned to gambling, using his understanding of probability to make a living off card games, dice, and chess. Cardan's book on games of chance, Liber de ludo aleae (On Casting the Die, written in the 1560s, but not published until 1663), contains the first ever exploration of the laws of probability, as well as a section on effective cheating methods! In this book, he considered the fundamental scientific principles governing the likelihood of achieving double sixes in the rolling of dice and how to divide the stakes if a game of dice is incomplete.

The fundamental assumption here is that the act of rolling (or not rolling) die A does not affect the outcome of the roll of die B. In other words, the two dice are independent of each other, and their probabilities are found to compound in a multiplicative manner. Of course, the same conclusion holds for the probability of two fives or two ones or indeed that of die A coming up a one and die B coming up a five. So we can generalize this law to read

$$P(A \cap B) = P(A)\,P(B) \tag{1.1}$$
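The multiplicative law for independent events is easy to check numerically. The sketch below (plain Python; the function name is illustrative, not from the text) compares a Monte Carlo estimate of Cardan's double-six probability with the product (1/6)(1/6) = 1/36:

```python
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def estimate_double_six(trials=200_000):
    """Monte Carlo estimate of P(die A shows 6 and die B shows 6)."""
    hits = sum(
        1 for _ in range(trials)
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6
    )
    return hits / trials

exact = (1 / 6) * (1 / 6)  # probabilities of independent events multiply
print(estimate_double_six(), exact)  # the estimate hovers near 1/36 ≈ 0.0278
```

Because the two simulated rolls are generated independently, the estimate converges on the product of the individual probabilities, exactly as Eq. 1.1 asserts.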

Eventually, Cardan developed a great reputation as a physician, successfully treating popes and archbishops, and was highly sought after by many wealthy patients. He was appointed Professor of Medicine at Pavia University, was the first to provide a clinical description of typhus fever, and was among the first to work with what we now know as imaginary numbers. Cardan's book Ars Magna (The Great Art, or The Rules of Algebra) is one of the classics of algebra. Cardan did, however, pass on his gambling addiction to his younger son, Aldo; he was also unlucky in his eldest son, Giambatista, who poisoned his wife, whom he suspected of infidelity, and was executed in 1560. Publishing the horoscope of Jesus and writing a book in praise of Nero (tormentor of Christian martyrs) earned Girolamo Cardan a conviction for heresy in 1570 and a jail term. Forced to give up his professorship, he lived the remainder of his days in Rome off a pension from the Pope.

The foundations of probability theory were thereafter further developed by Blaise Pascal (1623–1662) in correspondence with Pierre de Fermat (1601–1665). Following Cardan, they studied the dice problem and solved the problem of points, considered by Cardan and others, for a two-player game, as well as the “gambler's ruin”: the problem of finding the probability that, when two men are gambling together, one will ruin the other. Blaise Pascal was the third child and only son of Étienne Pascal, a French lawyer, judge, and amateur mathematician. Blaise's mother died when he was three years old. Étienne had unorthodox educational views and decided to homeschool his son, directing that his education should be confined at first to the study of languages and should not include any mathematics. This prohibition aroused the boy's curiosity and, at the age of 12, Blaise started to work on geometry on his own, giving up his playtime to this new study. He soon discovered for himself many properties of figures and, in particular, the proposition that the sum of the angles of a triangle is equal to two right angles. When Étienne realized his son's dedication to mathematics, he relented and gave him a copy of Euclid's Elements.

In 1639, Étienne was appointed tax collector for Upper Normandy and the family went to live in Rouen. To help his father with his work collecting taxes, Blaise invented a mechanical calculating machine, the Pascaline, which could do the work of six accountants, but the Pascaline never became a commercial success. Blaise Pascal also repeated Torricelli's experiments on atmospheric pressure (New Experiments Concerning Vacuums, October 1647), and showed that a vacuum could and did exist above the mercury in a barometer, contradicting Aristotle's and Descartes' contentions that nature abhors a vacuum. In August 1648, he observed that the pressure of the atmosphere decreases with height, confirming his theory of the cause of barometric variations by obtaining simultaneous readings at different altitudes on a nearby hill, and thereby deduced the existence of a vacuum above the atmosphere. Pascal also worked on conic sections and derived important theorems in projective geometry. These studies culminated in his Treatise on the Equilibrium of Liquids (1653) and The Generation of Conic Sections (1654, reworked 1653–1658). Following his father's death in 1651 and a road accident in 1654 from which he himself had a narrow escape, Blaise turned increasingly to religion and mysticism. Pascal's philosophical treatise Pensées contains his statistical cost-benefit argument (known as Pascal's wager) for the rationality of belief in God:

If God does not exist, one will lose nothing by believing in him, while if he does exist, one will lose everything by not believing.

In his later years, he completely renounced his interest in science and mathematics, devoting the rest of his life to God and charitable acts. Pascal died of a brain hemorrhage at the age of 39, after a malignant growth in his stomach spread to the brain.

In the following century, several physicists and mathematicians drew upon the ideas of Pascal and Fermat in advancing the science of probability and statistics. Christiaan Huygens (1629–1694), mathematician and physicist, wrote a book on probability, Van Rekeningh in Spelen van Geluck (The Value of all Chances in Games of Fortune), outlining the calculation of the expectation in a game of chance. Jakob Bernoulli (1654–1705), professor of mathematics at the University of Basel, originated the term permutation and introduced the terms a priori and a posteriori to distinguish two ways of deriving probabilities. Daniel Bernoulli (1700–1782), mathematician, physicist, and a nephew of Jakob Bernoulli, working in St. Petersburg and at the University of Basel, wrote nine papers on probability, statistics, and demography, but is best remembered for his Exposition of a New Theory on the Measurement of Risk (1738). Thomas Bayes (1702–1761), clergyman and mathematician, wrote only one paper on probability, but one of great significance: An Essay towards Solving a Problem in the Doctrine of Chances, published posthumously in 1763. Bayes' theorem is a simple mathematical formula for calculating conditional probabilities. In its simplest form, Bayes' theorem relates the conditional probability (also called the likelihood) of event A given B to its converse, the conditional probability of B given A:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} \tag{1.2}$$

where P(A) and P(B) are the prior or marginal probabilities of A (“prior” in the sense that it does not take into account any information about B) and B, respectively; P(A|B) is the conditional probability of A, given B (also called the posterior probability because it is derived from or depends on the specified value of B); and P(B|A) is the conditional probability of B given A. To derive the theorem, we note that from the product rule, we have

$$P(A \cap B) = P(A \mid B)\,P(B) = P(B \mid A)\,P(A) \tag{1.3}$$

Dividing by P(B), we obtain Bayes' theorem (Eq. 1.2), provided that P(B) is not zero.
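Bayes' theorem in this form is straightforward to compute. The sketch below (the function name and the numbers are illustrative, not taken from the text) expands the marginal P(B) by the law of total probability and then applies Eq. 1.2:

```python
def bayes_posterior(p_b_given_a, p_a, p_b_given_not_a):
    """P(A|B) = P(B|A) P(A) / P(B), with the marginal P(B) expanded by
    total probability: P(B) = P(B|A) P(A) + P(B|not-A) P(not-A)."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
    return p_b_given_a * p_a / p_b

# Illustrative numbers: event A has prior probability 0.01, and evidence B
# occurs with probability 0.99 given A but 0.05 given not-A.
posterior = bayes_posterior(0.99, 0.01, 0.05)
print(round(posterior, 4))  # 0.1667
```

The example illustrates the interplay of prior and likelihood: even strong evidence (0.99 versus 0.05) leaves the posterior far below certainty when the prior P(A) is small.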

The next actor in our story is Pierre-Simon de Laplace (1749–1827), a mathematician and a physicist, who worked on probability and calculus over a period of more than 50 years. His father, Pierre Laplace, was in the cider trade and expected his son to make a career in the church. However, at Caen University, Pierre-Simon discovered his love and talent for mathematics and, at the age of 19, went to Paris without taking his degree, but with a letter of introduction to d'Alembert from his teacher at Caen. With d'Alembert's help, Pierre-Simon was appointed professor of mathematics at École Militaire, from where he started producing a series of papers on differential equations and integral calculus, the first of which was read to the Académie des Sciences in Paris in 1770. His first paper to appear in print was on integral calculus in Nova Acta Eruditorum, Leipzig, in 1771. He also read papers on mathematical astronomy to the Académie, including the work on the inclination of planetary orbits and a study of the perturbation of planetary orbits by their moons. Within three years Pierre-Simon had read 13 papers to the Académie, and, in 1773, he was elected as an adjoint in the Académie des Sciences. His 1774 Mémoire sur la Probabilité des Causes par les Évènemens gave a Bayesian analysis of errors of measurement. Laplace made many other notable contributions, such as the central limit theorem, the probability generating function, and the characteristic function. He also applied his probability theory to compare the mortality rates at several hospitals in France.

Working with the chemist Antoine Lavoisier in 1780, Laplace embarked on a new field of study, applying quantitative methods to a comparison of living and inanimate systems. Using an ice calorimeter that they devised, Lavoisier and Laplace showed respiration to be a form of combustion. In 1784, Laplace was appointed examiner at the Royal Artillery Corps, where he examined and passed the young Napoleon Bonaparte. As a member of a committee of the Académie des Sciences to standardize weights and measures in 1790, he advocated a decimal base, which led to the creation of the metric system. He married in May 1788; he and his wife went on to have two children. While Pierre-Simon was not modest about his abilities and achievements, he was at least cautious, perhaps even politically opportunistic, but certainly a survivor. Thus, he managed to avoid the fate of his colleague Lavoisier, who was guillotined during the French Revolution in 1794. He was a founding member of the Bureau des Longitudes and went on to lead the Bureau and the Paris Observatory. In this position, Laplace published his Exposition du Système du Monde (1796) as a series of five books, the last of which propounded his nebular hypothesis for the formation of the solar system, according to which the solar system originated from the contraction and cooling of a large, oblate, rotating cloud of gas.

During Napoleon's reign, Laplace was a member, then chancellor of the Senate, receiving the Legion of Honor in 1805 and becoming Count of the Empire the following year. In Mécanique Céleste (4th edition, 1805), he propounded an approach to physics that influenced thinking for generations, wherein he “sought to establish that the phenomena of nature can be reduced in the last analysis to actions at a distance between molecule and molecule, and that the consideration of these actions must serve as the basis of the mathematical theory of these phenomena.” Laplace's Théorie Analytique des Probabilités (1812) is a classic of probability and statistics, containing Laplace's definition of probability; the Bayes rule; methods for determining probabilities of compound events; a discussion of the method of least squares; and applications of probability to mortality, life expectancy, and legal affairs. Later editions contained supplements applying probability theory to measurement errors; to the determination of the masses of Jupiter, Saturn, and Uranus; and to problems in surveying and geodesy. On restoration of the Bourbon monarchy, which he supported by casting his vote against Napoleon, Pierre-Simon became Marquis de Laplace in 1817. He died on March 5, 1827.

Another important figure in probability theory was Carl Friedrich Gauss (1777–1855). Starting elementary school at the age of seven, he amazed his teachers by summing the integers from 1 to 100 instantly (the sum equals 5050, being the sum of 50 pairs of numbers, each pair summing to 101). At the Brunswick Collegium Carolinum, Gauss independently discovered the binomial theorem, as well as the law of quadratic reciprocity and the prime number theorem. Gauss' first book, Disquisitiones Arithmeticae, published in 1801, was devoted to algebra and number theory. His second book, Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientium (1809), was a two-volume treatise on the motion of celestial bodies. Gauss also used the method of least squares approximation (later published in Theoria Combinationis Observationum Erroribus Minimis Obnoxiae, 1823, with a supplement in 1828) to successfully predict the orbit of Ceres in 1801. In 1807, he was appointed director of the Göttingen observatory. As the story goes, Gauss' assistants were unable to exactly reproduce the results of their astronomical measurements. Gauss got angry and stormed into the lab, claiming he would show them how to do the measurements properly. But Gauss was not able to repeat his measurements exactly either! On plotting a histogram of the results of a particular measurement, Gauss discovered the famous bell-shaped curve that now bears his name, the Gaussian function:

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) \tag{1.4}$$

with mean μ and standard deviation σ, together with its cumulative distribution function

$$\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right] \tag{1.5}$$

It is of a sigmoid shape and has wide applications in probability and statistics. In the field of statistics, Gauss is best known for his theory of errors, but this represents only one of Gauss' many remarkable contributions to science. He published over 70 papers between 1820 and 1830 and, in 1822, won the Copenhagen University Prize for Theoria Attractionis Corporum Sphaeroidicorum Ellipticorum Homogeneorum Methodo Nova Tractata, dealing with geodesic problems and potential theory. In Allgemeine Theorie des Erdmagnetismus (1839), Gauss showed that there can only be two poles in the globe and went on to specify a location for the magnetic South pole, establish a worldwide net of magnetic observation points, and publish a geomagnetic atlas. In electromagnetic theory, Gauss discovered the relationship between the charge density and the electric field. In the absence of time-dependent magnetic fields, Gauss's law relates the divergence of the electric field E to the charge density ρ(r):

$$\nabla \cdot \mathbf{E} = \frac{\rho(\mathbf{r})}{\varepsilon_0} \tag{1.6}$$

which now forms one of Maxwell's equations.
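As a computational aside, both the bell-shaped Gaussian and the sigmoid cumulative curve derived from it can be evaluated with the standard library alone; the function names in this sketch are illustrative:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Bell-shaped Gaussian with mean mu and standard deviation sigma."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def gaussian_cdf(x, mu=0.0, sigma=1.0):
    """Sigmoid cumulative curve, expressed through the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(round(gaussian_pdf(0.0), 4))  # 0.3989, the peak of the standard bell curve
print(round(gaussian_cdf(0.0), 4))  # 0.5, the midpoint of the sigmoid
```

Plotting gaussian_pdf over a range of x reproduces the histogram shape Gauss observed in his assistants' measurement errors, while gaussian_cdf rises monotonically from 0 to 1.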

The stage is now set for the formal entry of probability concepts into physics, and the credit for this goes to the Scottish physicist James Clerk Maxwell and the Austrian physicist Ludwig Boltzmann. James Clerk Maxwell (1831–1879) was born in Edinburgh on June 13, 1831, to John Clerk Maxwell, an advocate, and his wife Frances. Maxwell's father, a man of comfortable means, had been born John Clerk, and added the surname Maxwell to his own after he inherited a country estate in Middlebie, Kirkcudbrightshire, from the Maxwell family. The family moved when James was young to “Glenlair,” a house his parents had built on the 1500-acre Middlebie estate. Growing up in the Scottish countryside in Glenlair, James displayed an unquenchable curiosity from an early age. By the age of three, everything that moved, shone, or made a noise drew the question: “what's the go o' that?” He was fascinated by geometry at an early age, rediscovering the regular polyhedra before any formal instruction. However, his talent went largely unnoticed until he won the school's mathematical medal at the age of 13, and first prizes for English and poetry. He attended Edinburgh Academy and, at the age of 14, wrote a paper On the Description of Oval Curves, and Those Having a Plurality of Foci, describing the mechanical means of drawing mathematical curves with a piece of twine and generalizing the definition of an ellipse, which was read to the Royal Society of Edinburgh on April 6, 1846. Thereafter, in 1850, James went to Cambridge, where (according to Peter Guthrie Tait) he displayed a wealth of knowledge, but in a state of disorganization unsuited to mastering the cramming methods required to succeed in the Tripos. Nevertheless, he obtained the position of Second Wrangler, graduating with a degree in mathematics from Trinity College in 1854, and was awarded a fellowship by Trinity to continue his work.
It was during this time that he extended Michael Faraday's theories of electricity and magnetism. His paper On Faraday's Lines of Force, read to the Cambridge Philosophical Society in 1855 and 1856, reformulated the behavior of, and relations between, electric and magnetic fields, a line of work that culminated in the set of four partial differential equations now known as Maxwell's equations, published in fully developed form in his Treatise on Electricity and Magnetism (1873).

In 1856, Maxwell was appointed professor of natural philosophy at Marischal College in Aberdeen, Scotland, where he became engaged to Katherine Mary Dewar. They were married in 1859. At 25, Maxwell was a decade and a half younger than any other professor at Marischal, and lectured 15 hours a week, including a weekly pro bono lecture to the local working men's college. During this time, he worked on the perception of color and on the kinetic theory of gases. In 1860, Maxwell was appointed to the chair of natural philosophy at King's College in London. This was probably the most productive time of his career. He was awarded the Royal Society's Rumford Medal in 1860 for his work on color, and elected to the Society in 1861. Maxwell is credited with the discovery that color photographs could be formed using red, green, and blue filters. In 1861, he presented the world's first color photograph during a lecture at the Royal Institution. It was also here that he came into regular contact with Michael Faraday, some 40 years his senior, whose theories of electricity and magnetism would be refined and perfected by Maxwell. Around 1862, Maxwell calculated that the speed of propagation of an electromagnetic field is approximately the speed of light and concluded, “We can scarcely avoid the conclusion that light consists in the transverse undulations of the same medium which is the cause of electric and magnetic phenomena.” Maxwell then showed that the equations predict the existence of waves of oscillating electric and magnetic fields that travel through empty space at a speed of 310,740,000 m/s. In his 1864 paper A Dynamical Theory of the Electromagnetic Field, Maxwell wrote, “The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.”
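Maxwell's predicted wave speed can be recomputed from the electromagnetic constants; the sketch below uses modern SI values (assumptions of this example, not the measurements available to Maxwell) and recovers a value close to his 310,740,000 m/s:

```python
import math

# Modern SI values (assumed here; Maxwell used contemporary measurements):
mu0 = 4 * math.pi * 1e-7       # vacuum permeability, H/m (pre-2019 defined value)
eps0 = 8.8541878128e-12        # vacuum permittivity, F/m

# Maxwell's equations predict electromagnetic waves traveling at c = 1/sqrt(mu0*eps0)
c = 1 / math.sqrt(mu0 * eps0)
print(f"c = {c:,.0f} m/s")     # close to the modern 299,792,458 m/s
```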

In 1865, Maxwell left London and returned to his Scottish estate in Glenlair. There he continued his work on the kinetic theory of gases and, using a statistical treatment, showed in 1866 that temperature and heat involved only molecular movement. Maxwell's statistical picture explained heat transport in terms of molecules at higher temperature having a high probability of moving toward those at lower temperature. In his 1867 paper, he also derived (independently of Boltzmann) what is known today as the Maxwell–Boltzmann velocity distribution:

fv(vx, vy, vz) = (m/2πkT)^(3/2) exp[−m(vx² + vy² + vz²)/2kT]   (1.7)

where fv(vx, vy, vz) dvx dvy dvz is the probability of finding a particle with velocity in the infinitesimal element dvx dvy dvz about the velocity v = (vx, vy, vz), k is a constant now known as the Boltzmann constant (1.38062 × 10−23 J/K), and T is the temperature. This distribution is the product of three independent Gaussian distributions of the variables vx, vy, and vz, each with variance kT/m.
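Because the distribution factorizes into three independent Gaussians, each velocity component can be sampled on its own; the sketch below (an illustrative temperature and an assumed argon-like particle mass, not values from the text) checks numerically that the sample variance of one component approaches kT/m:

```python
import math
import random

k = 1.38062e-23   # Boltzmann constant, J/K (value quoted in the text)
T = 300.0         # illustrative temperature, K
m = 6.63e-26      # illustrative particle mass (roughly an argon atom), kg

# Each velocity component is Gaussian with mean 0 and variance kT/m (Eq. 1.7)
sigma = math.sqrt(k * T / m)

random.seed(42)
vx = [random.gauss(0.0, sigma) for _ in range(100_000)]

# The sample variance should approach kT/m
var = sum(v * v for v in vx) / len(vx)
print(f"kT/m            = {k * T / m:.4e}")
print(f"sample variance = {var:.4e}")
```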

Maxwell's work on thermodynamics also led him to devise the Gedankenexperiment (thought experiment) that came to be known as Maxwell's demon. In 1871, Maxwell accepted an offer from Cambridge to be the first Cavendish Professor of Physics. He designed the Cavendish Laboratory, which was formally opened on June 16, 1874. His four famous equations of electrodynamics first appeared in their modern form of partial differential equations in his 1873 textbook A Treatise on Electricity and Magnetism:

∇ · E = ρ   (1.8)

∇ · B = 0   (1.9)

∇ × E = −∂B/∂t   (1.10)

∇ × B = J + ∂E/∂t   (1.11)

where E is the electric field, B the magnetic field, and J the current density; the universal constants (the permittivity and permeability of free space) have been suppressed. Maxwell delivered his last lecture at Cambridge in May 1879 and passed away in Cambridge on November 5, 1879; he was buried near Glenlair.

The story goes that Einstein was once asked whom he would most like to meet if he could go back in time and meet any physicist of the past. Without hesitation, Einstein named Newton first, and then Boltzmann. Ludwig Eduard Boltzmann was born on February 20, 1844, in Vienna, the son of a tax official. Ludwig attended high school in Linz and subsequently studied physics at the University of Vienna, receiving his doctorate in 1866 for a thesis on the kinetic theory of gases, under the supervision of Josef Stefan. Boltzmann's greatest contribution to science is, of course, the invention of statistical mechanics, relating the behavior and motions of atoms and molecules with the mechanical and thermodynamic properties of bulk matter. We owe to the American physicist Josiah Willard Gibbs the first use of the term statistical mechanics. In his 1866 paper, entitled Über die Mechanische Bedeutung des Zweiten Hauptsatzes der Wärmetheorie, Boltzmann set out to seek a mechanical analog of the second law of thermodynamics, noting that while the first law of thermodynamics corresponded exactly with the principle of conservation of energy, no such correspondence existed for the second law. Already in this 1866 paper, Boltzmann used a ρ log ρ formula, interpreting ρ as density in phase space. To obtain a mechanical formulation of the second law, he started out by providing a mechanical interpretation of temperature by means of the concept of thermal equilibrium, showing that at equilibrium the average kinetic energy exchanged is zero.

To establish this result, Boltzmann considered a subsystem consisting of two molecules and studied their behavior assuming that they are in equilibrium with the rest of the gas. The condition of equilibrium requires that this subsystem and the rest of the molecules exchange kinetic energy and change their state in such a way that the average value of the kinetic energy exchanged in a finite time interval is zero, so that the time average of the kinetic energy is stable. However, one cannot apply the laws of elastic collision to this subsystem, as it is in equilibrium with, and exchanging energy and momentum with, the rest of the gas. To overcome this obstacle, Boltzmann proposed a remarkable argument: he argued that, at equilibrium, the evolution of the two-particle subsystem is such that, sooner or later, it would pass through two states having the same total energy and momentum. But, this is just the same outcome as if these states had resulted from an elastic collision. Herein, we can find the germ of the ergodic hypothesis. Boltzmann regarded the irregularity of the system evolution as a sort of spreading out or diffusion of the system trajectory among the possible states and thus reasoned that if such states are able to occur, they will occur. It is only the existence of such states that is of importance and no assumption was made regarding the time interval required for the system to return to a state with the same energy and momentum. In particular, Boltzmann made no assumption of periodicity for the trajectory. Only the fact of closure of the trajectory matters to the argument, not when such closure occurs.

On completing his Privatdozentur (lectureship) in 1867, Boltzmann was appointed professor of mathematical physics at the University of Graz. The next year he set out to create a general theory of the equilibrium state. Boltzmann argued on probabilistic grounds that the average energy of motion of a molecule in an ideal gas is the same in each direction (an assumption also made by Maxwell) and thus derived the Maxwell–Boltzmann velocity distribution (Eq. 1.7). Since, for an ideal gas, all energy is in the form of kinetic energy, E = ½m(vx² + vy² + vz²), the Boltzmann distribution for the fractional number of molecules Ni/N occupying a set of states i and possessing energy Ei is thus proportional to the probability density function (Eq. 1.7):

Ni/N ∝ exp(−Ei/kT)   (1.12)

Ni/N = exp(−Ei/kT) / Σj exp(−Ej/kT)   (1.13)

He applied the distribution to increasingly complex cases, treating external forces, potential energy, and motion in three dimensions. In his 1868 paper, he elaborated on his concept of diffuse motion of the trajectory among possible states, generalizing his earlier results to the whole available phase space consistent with the conservation of total energy. In 1879, Maxwell pointed out that this generalization rested on the assumption that the system, if left to itself, will sooner or later pass through every phase consistent with the conservation of energy—namely, the ergodic hypothesis. In his 1868 paper, Boltzmann also pioneered the use of combinatorial arguments, showed the invariance of the phase volume during the motion, and interpreted the phase space density as the probability attributed to a region traversed by a trajectory. Here, we see the precursor to Max Born's statistical interpretation of the quantum wave function.
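For a concrete numerical illustration of the Boltzmann distribution, the sketch below computes the fractional populations Ni/N of a hypothetical three-level system; the energy values are invented for the example:

```python
import math

k = 1.38062e-23   # Boltzmann constant, J/K
T = 300.0         # temperature, K

# Hypothetical energy levels, in joules (kT at 300 K is about 4.1e-21 J)
E = [0.0, 1.0e-21, 2.0e-21]

# Partition function and fractional populations Ni/N = exp(-Ei/kT)/Z
Z = sum(math.exp(-Ei / (k * T)) for Ei in E)
p = [math.exp(-Ei / (k * T)) / Z for Ei in E]

print(["%.3f" % pi for pi in p])   # populations decrease with increasing energy
```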

Boltzmann was also the first to recognize the importance of Maxwell's electromagnetic theory. He spent several months in Heidelberg with Robert Bunsen and Leo Königsberger in 1869 and then in Berlin with Gustav Kirchhoff and Hermann von Helmholtz in 1871, working on problems of electrodynamics. During this time, he continued developing and refining his ideas on statistical mechanics. Boltzmann's nonequilibrium theory was first presented in 1872 and used many ideas from his equilibrium theory of 1866–1871. His famous 95-page article, Weitere Studien über das Wärmegleichgewicht unter Gasmolecülen (Further Studies on the Thermal Equilibrium of Gas Molecules), published in October 1872, contains what he called his minimum theorem, now known as the H-theorem, and the first explicit probabilistic expression for the entropy of an ideal gas. Boltzmann's probability equation relates the entropy S of an ideal gas to the number of ways W (Wahrscheinlichkeit) in which the constituent atoms or molecules can be arranged, that is, the number of microstates corresponding to a given macrostate:

S = k log W   (1.14)

Here, log refers to natural logarithms. The H-theorem is an equation based on Newtonian mechanics that quantifies the heat content of an ideal gas by a numerical quantity H (short for heat). Defined in terms of the velocity distributions of the atoms and molecules of the gas, H assumes its minimum value when the velocities of the particles are distributed according to the Maxwell–Boltzmann (or Gaussian) distribution. Any gas system not at its minimal value of H will tend toward the minimum value through molecular collisions that move the system toward the Maxwell–Boltzmann distribution of velocities.
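Boltzmann's counting of microstates can be made concrete. For N particles distributed over cells with occupations n1, n2, …, the number of microstates is W = N!/(n1! n2! …), and S = k log W is largest for the most uniform occupation. A minimal sketch (the particle numbers are illustrative):

```python
import math

k = 1.38062e-23   # Boltzmann constant, J/K

def boltzmann_entropy(occupations):
    """S = k log W with W = N!/(n1! n2! ...) microstates (cf. Eq. 1.14)."""
    N = sum(occupations)
    # log W computed via log-gamma to avoid evaluating huge factorials
    log_W = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in occupations)
    return k * log_W

# Ten particles over two cells: the even split has the most microstates,
# hence the largest entropy; piling everything in one cell gives W = 1, S = 0.
S_even = boltzmann_entropy([5, 5])
S_skewed = boltzmann_entropy([9, 1])
S_onecell = boltzmann_entropy([10, 0])
print(S_even > S_skewed > S_onecell)   # True
```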

After a stint as professor of mathematics at the University of Vienna from 1873 to 1876, Boltzmann returned to Graz to take the chair of experimental physics. In 1884, Boltzmann initiated a theoretical study of radiation in a cavity (black body radiation) and used the principles of thermodynamics to derive Stefan's law:

R = σT⁴   (1.15)

where R is the energy radiated per unit surface area per unit time and σ is now known as the Stefan–Boltzmann constant.

In 1890, Boltzmann was appointed to the chair of theoretical physics at the University of Munich in Bavaria, Germany, and succeeded Stefan as professor of theoretical physics in his native Vienna after the latter's death in 1893. In 1900, at the invitation of Wilhelm Ostwald, Boltzmann moved to the University of Leipzig. Although the two were on good personal terms, Ostwald was one of Boltzmann's foremost scientific critics and the latter struggled to gain acceptance for his ideas among his peers. Ostwald argued, for instance, that the actual irreversibility of natural phenomena proved the existence of processes that cannot be described by mechanical equations. Unlike Boltzmann, most chemists at that time did not ascribe a real existence to molecules as mechanical entities; the molecular formula was treated as no more than a combinatorial formula. The Vienna Circle was strongly influenced at that time by the positivist–empiricist philosophy of the Austrian physicist and philosopher Ernst Mach (1838–1916), who occupied the chair for the philosophy of the inductive sciences at the University of Vienna. As an experimental physicist, Mach also held that scientific theories were only provisional and had no lasting place in physics. He advanced the concept that all knowledge is derived from sensation; his philosophy was thus characterized by an antimetaphysical attitude that recognized only sensations as real. According to this view, phenomena investigated by science can be understood only in terms of experiences or the “sensations” experienced in the observation of the phenomena; thus, no statement in science is admissible unless it is empirically verifiable. This led him to reject concepts such as absolute time and space as metaphysical. Mach's views thus stood in stark opposition to the atomism of Boltzmann. 
Mach's reluctance to acknowledge the reality of atoms and molecules as external, mind-independent objects was criticized by Boltzmann and later by Planck as being incompatible with physics. Mach's main contribution to physics involved his description and photographs of spark shock-waves and ballistic shock-waves. He was the first to systematically study supersonic motion and to describe how passing the sound barrier compresses the air in front of bullets and shells; the ratio of a body's speed to the speed of sound bears his name today (the Mach number). After Mach's retirement following a cardiac arrest, Boltzmann returned to his former position as professor of theoretical physics in Vienna in 1902, where he remained for the rest of his life.

On April 30, 1897, Joseph John Thomson announced the discovery of “the carriers of negative electricity”—the electron—to the Royal Institution in England. He was to be awarded the Nobel Prize in 1906 for his determination of its charge-to-mass ratio. Meanwhile, in November 1900, Max Planck came to the realization that the Wien law is not exact. In an attempt to define an entropy of radiation conforming with Stefan's empirical result (Eq. 1.15), Planck was led to postulate the quantum of action:

ε = hν   (1.16)

Further Reading

Boltzmann L. Wien Ber 1866;53:195–220.

Boltzmann L. Wien Ber 1872;66:275–370.

Campbell L, Garnett W. The Life of James Clerk Maxwell. London: Macmillan; 1882.

Cohen EGD, Thirring W. The Boltzmann Equation: Theory and Applications: Proceedings of the International Symposium ‘100 Years Boltzmann Equation’ Vienna, 4th-8th September 1972 (Few-Body Systems). New York: Springer-Verlag; 1973.

Gibbs JW. Elementary Principles in Statistical Mechanics, Developed with Especial Reference to the Rational Foundation of Thermodynamics [reprint]. New York: Dover; 1960.

Jammer M. The Conceptual Development of Quantum Mechanics. New York: McGraw Hill; 1966.

Klein MJ. The Making of a Theoretical Physicist. Biography of Paul Ehrenfest. Amsterdam: Elsevier; 1970.

Maxwell JC. Theory of Heat, 1871 [reprint]. Westport (CT): Greenwood Press; 1970.

Maxwell JC. A Treatise on Electricity and Magnetism. Oxford: Clarendon Press; 1873.

Pascal B. Pensées (Penguin Classics). Krailsheimer AJ, translator. London: Penguin Books; 1995.

Planck M. Ann Phys 1901;4:553.

Tait PG. Proceedings of the Royal Society of Edinburgh, 1879–1880. Quoted in Everitt CWF. James Clerk Maxwell: Physicist and Natural Philosopher. New York: Charles Scribner; 1975.

The MacTutor History of Mathematics archive, Index of Biographies. Available at http://www-groups.dcs.st-and.ac.uk/~history/BiogIndex.html (School of Mathematics and Statistics, University of St Andrews, Scotland). Accessed 2011.

Chapter 2

Does God Play Dice?

N. Sukumar

2.1 Quanta of Radiation

d²S/dU² = −a/[U(b + U)]   (2.1)

(where b is a constant), from which he derived his famous radiation law for the energy density of black-body radiation:

u(ν, T) = (8πhν³/c³) · 1/(e^(hν/kT) − 1)   (2.2)

This equation reduces to the Wien law at high ν and low T:

u(ν, T) ≈ (8πhν³/c³) e^(−hν/kT)   (2.3)

and to the Rayleigh–Jeans radiation law at low ν and high T:

u(ν, T) ≈ (8πν²/c³) kT   (2.4)
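The two limiting forms of the radiation law (Eqs. 2.3 and 2.4) can be checked numerically against Planck's expression (Eq. 2.2); the sketch below uses modern constant values (assumed for the example) and compares the three at 300 K:

```python
import math

h = 6.62607e-34    # Planck constant, J·s (modern value, assumed)
k = 1.38062e-23    # Boltzmann constant, J/K
c = 2.99792458e8   # speed of light, m/s

def planck(nu, T):
    """Planck's energy density of black-body radiation (Eq. 2.2)."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (k * T))

def wien(nu, T):
    """Wien limit (Eq. 2.3), valid for h*nu >> k*T."""
    return (8 * math.pi * h * nu**3 / c**3) * math.exp(-h * nu / (k * T))

def rayleigh_jeans(nu, T):
    """Rayleigh-Jeans limit (Eq. 2.4), valid for h*nu << k*T."""
    return 8 * math.pi * nu**2 * k * T / c**3

T = 300.0
nu_high = 1e15   # h*nu/kT ~ 160: deep in the Wien regime
nu_low = 1e9     # h*nu/kT ~ 1.6e-4: deep in the Rayleigh-Jeans regime

print(planck(nu_high, T) / wien(nu_high, T))            # ratio close to 1
print(planck(nu_low, T) / rayleigh_jeans(nu_low, T))    # ratio close to 1
```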

Ehrenfest realized that Planck's hypothesis challenged Boltzmann's assumption of equal a priori probabilities of volume elements in phase space. It was Albert Einstein, a technical expert at the Swiss patent office in Bern, who recognized the logical inconsistency in combining an electrodynamical description based on Maxwell's equations, which assume that energy can vary continuously, with a statistical description, where the oscillator energy is restricted to assume only discrete values that are integral multiples of hν. Equation 2.1, in fact, combines the wave and particle aspects of radiation, although this was not Planck's explicit intention. Planck's conception of energy quantization was of oscillators of frequency ν that could only absorb or emit energy in integral multiples of hν: the quantization only applied to the interaction between matter and radiation. Einstein's 1905 paper in Annalen der Physik on the photoelectric effect and the light quantum hypothesis [4], wherein Einstein proposed that radiant energy itself is quantized, launched the quantum revolution in our physical conceptions of matter and radiation, winning him the Nobel Prize in 1921.

Albert Einstein was born on March 14, 1879, in the German town of Ulm, the first child of Pauline and Hermann Einstein. Pauline was a talented pianist and Hermann was a merchant. In 1880, the family moved to Munich, where Hermann started a business with his brother Jacob. A daughter Maria (also called Maja) was born in 1881. Albert was a good student and excelled at school, but generally kept to himself and detested sports and gymnastics [5]. Both Albert and Maja learned to play the piano; Albert would play Mozart and Beethoven sonatas, accompanied by his mother, and he delighted in piano improvisations. Their uncle Jacob would pose mathematical problems, which Albert derived great satisfaction from solving. When a family friend gave the 12-year-old Albert a book on Euclidean geometry, it was like a revelation. Albert also taught himself calculus. After the family moved to Italy to start a new business, Albert moved to Zurich in 1895, and the following year he gave up his German citizenship and enrolled at the Swiss Federal Institute of Technology (Eidgenössische Technische Hochschule, ETH). At ETH, his proposal for an experiment to test the earth's movement relative to the ether was rebuffed by Professor Heinrich Weber. After graduation from ETH, Einstein failed to secure a university position and was unemployed for nearly a year before obtaining a series of temporary teaching positions. He was granted Swiss citizenship in 1901, moved to Bern the following year, and finally secured an appointment at the Swiss federal patent office. Here he found the time to work on his own on scientific problems of interest to him. Hermann Einstein died of a heart attack in Milan in 1902. Albert married Mileva Maric, a former classmate from ETH, the following year. Their son, Hans Albert, was born in 1904.

That brings us to Einstein's annus mirabilis or miracle year, 1905: the 8-month period during which he published in the Annalen der Physik four of the most important papers of his life, in addition to his PhD thesis [6] for the University of Zurich: (i) On a Heuristic Viewpoint Concerning the Production and Transformation of Light, dealing with the photoelectric effect, received in March [4]; (ii) On the Motion—Required by the Molecular Kinetic Theory of Heat—of Small Particles Suspended in a Stationary Liquid, on Brownian motion and the determination of Avogadro's number, which helped to resolve lingering doubts on the reality of molecules, received in May [7]; (iii) On the Electrodynamics of Moving Bodies, on special relativity, received in June [8]; and (iv) Does the Inertia of a Body Depend Upon Its Energy Content?, on mass–energy equivalence, received in September [9]. Einstein was then 26. The revolution in physics was under way!

At the turn of the century, the wave theory of light, based on Maxwell's equations and continuous functions in space, was firmly established. The existence of electromagnetic waves had been confirmed by Heinrich Hertz in a series of experiments beginning in 1886, but these same experiments also produced the first evidence for the photoelectric effect [10]. The photoelectric effect occurs when ultraviolet or visible light illuminates the surface of an electropositive metal subjected to a negative potential: there is then a flow of electrons from the cathode (cathode rays). This discovery spurred several others to investigate the phenomenon. It was established that radiation from an electric arc discharges the cathode, without affecting the anode [11]; that red and infrared radiation are ineffective in inducing a photoelectric current [12]; that the photoelectric current is proportional to the intensity of light absorbed [13]; that the emission occurs only at frequencies exceeding a minimum threshold value ν0 [14], with the more electropositive the metal comprising the electrode, the lower the threshold frequency ν0 [15]; and that the energy of the ejected photoelectrons is independent of the intensity of the incident light but proportional to its frequency above the threshold, that is, to ν − ν0 [14, 16]. These observations seemed incompatible with Maxwell's electromagnetic theory, but were explained by Einstein's hypothesis of discrete light corpuscles or quanta, each of energy hν and each capable of interacting with a single electron. An electron can absorb a single light quantum (photon); part of its energy is used to overcome the attraction of the electron to the metal and the rest appears as the kinetic energy of the photoelectron.
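Einstein's light-quantum hypothesis gives the maximum kinetic energy of a photoelectron as h(ν − ν0), with no emission below the threshold. A sketch with an assumed threshold frequency (the value is illustrative, not a property of any particular metal):

```python
h = 6.62607e-34    # Planck constant, J·s
e = 1.602177e-19   # elementary charge, C (used to convert joules to eV)

def max_photoelectron_energy_eV(nu, nu0):
    """Einstein's photoelectric relation: KE_max = h*(nu - nu0), clipped at 0."""
    return max(0.0, h * (nu - nu0) / e)

nu0 = 1.0e15       # assumed threshold frequency of the metal, Hz

print(max_photoelectron_energy_eV(1.5e15, nu0))   # above threshold: electrons ejected
print(max_photoelectron_energy_eV(0.5e15, nu0))   # below threshold: no emission (0.0)
```

Note that raising the light intensity at fixed ν changes only the number of ejected electrons, not their maximum energy, which is exactly the observation [14, 16] that the wave picture could not explain.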

2.2 Adiabatic Invariants

Paul Ehrenfest had long realized the importance of the variable E/ν and he sought a conceptual foundation for the quantum hypothesis in terms of generalized adiabatic invariants:

If you contract a reflecting cavity infinitely slowly, then the frequency ν and the energy E of each proper vibration increase simultaneously in such a way that E/ν remains invariant under this “adiabatic” influence. The a priori probability must always depend on only those quantities which remain invariant under adiabatic influencing, or else the quantity ln W will fail to satisfy the condition, imposed by the second law on the entropy, of remaining invariant under adiabatic changes. [19]

In December 1912, Ehrenfest found the result he was seeking: “Then my theorem reads… The average kinetic energy of our system increases in the same proportion as the frequency under an adiabatic influencing.” According to Ehrenfest's adiabatic hypothesis [20], quantum-admissible motions transform into other admissible motions under adiabatic influences. Ehrenfest thus embarked on a program for finding quantum states by quantizing the adiabatic invariants. His adiabatic principle delimited how far the formalism of classical mechanics remained applicable in the quantum theory, and also enabled the determination of the stationary states of systems that were adiabatically related to other known systems.
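Ehrenfest's invariant can be illustrated numerically: for a harmonic oscillator whose frequency ω is ramped slowly from 1 to 2 (arbitrary units, and a linear ramp assumed for the example), the ratio E/ω stays nearly constant even though the energy itself grows in proportion to the frequency. A minimal sketch:

```python
import math

def final_E_over_omega(T_total=1000.0, dt=0.01):
    """Integrate x'' = -omega(t)^2 x with a slow frequency ramp and
    return the adiabatic invariant E/omega at the end."""
    omega = lambda t: 1.0 + t / T_total   # slow linear ramp: omega goes 1 -> 2
    x, v, t = 1.0, 0.0, 0.0
    while t < T_total:
        # velocity-Verlet step for the time-dependent harmonic force
        a = -omega(t) ** 2 * x
        x += v * dt + 0.5 * a * dt * dt
        a_new = -omega(t + dt) ** 2 * x
        v += 0.5 * (a + a_new) * dt
        t += dt
    E = 0.5 * v * v + 0.5 * omega(t) ** 2 * x * x
    return E / omega(t)

J0 = 0.5                    # initial E/omega: E = 1/2 at x = 1, v = 0, omega = 1
J1 = final_E_over_omega()
print(J0, J1)               # J1 stays close to 0.5 despite the frequency change
```

The slower the ramp, the better E/ω is conserved, which is the content of Ehrenfest's theorem for this system.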