Molecular Excitation Dynamics and Relaxation

Leonas Valkunas

Description

This work brings together quantum theory and spectroscopy to convey excitation processes to advanced students and specialists wishing to conduct research and understand the entire field rather than just single aspects.

Written by experienced authors and recognized authorities in the field, this text covers numerous applications and offers examples taken from different disciplines. As a result, spectroscopists, molecular physicists, physical chemists, and biophysicists will all find this a must-have for their research. Also suitable as supplementary reading in graduate level courses.


Page count: 657

Year of publication: 2013




Contents

Preface

Part One: Dynamics and Relaxation

1 Introduction

2 Overview of Classical Physics

2.1 Classical Mechanics

2.2 Classical Electrodynamics

2.3 Radiation in Free Space

2.4 Light–Matter Interaction

3 Stochastic Dynamics

3.1 Probability and Random Processes

3.2 Markov Processes

3.3 Master Equation for Stochastic Processes

3.4 Fokker–Planck Equation and Diffusion Processes

3.5 Deterministic Processes

3.6 Diffusive Flow on a Parabolic Potential (a Harmonic Oscillator)

3.7 Partially Deterministic Process and the Monte Carlo Simulation of a Stochastic Process

3.8 Langevin Equation and Its Relation to the Fokker–Planck Equation

4 Quantum Mechanics

4.1 Quantum versus Classical

4.2 The Schrödinger Equation

4.3 Bra-ket Notation

4.4 Representations

4.5 Density Matrix

4.6 Model Systems

4.7 Perturbation Theory

4.8 Einstein Coefficients

4.9 Second Quantization

5 Quantum States of Molecules and Aggregates

5.1 Potential Energy Surfaces, Adiabatic Approximation

5.2 Interaction between Molecules

5.3 Excitonically Coupled Dimer

5.4 Frenkel Excitons of Molecular Aggregates

5.5 Wannier–Mott Excitons

5.6 Charge-Transfer Excitons

5.7 Vibronic Interaction and Exciton Self-Trapping

5.8 Trapped Excitons

6 The Concept of Decoherence

6.1 Determinism in Quantum Evolution

6.2 Entanglement

6.3 Creating Entanglement by Interaction

6.4 Decoherence

6.5 Preferred States

6.6 Decoherence in Quantum Random Walk

6.7 Quantum Mechanical Measurement

6.8 Born Rule

6.9 Everett or Relative State Interpretation of Quantum Mechanics

6.10 Consequences of Decoherence for Transfer and Relaxation Phenomena

7 Statistical Physics

7.1 Concepts of Classical Thermodynamics

7.2 Microstates, Statistics, and Entropy

7.3 Ensembles

7.4 Canonical Ensemble of Classical Harmonic Oscillators

7.5 Quantum Statistics

7.6 Canonical Ensemble of Quantum Harmonic Oscillators

7.7 Symmetry Properties of Many-Particle Wavefunctions

7.8 Dynamic Properties of an Oscillator at Equilibrium Temperature

7.9 Simulation of Stochastic Noise from a Known Correlation Function

8 An Oscillator Coupled to a Harmonic Bath

8.1 Dissipative Oscillator

8.2 Motion of the Classical Oscillator

8.3 Quantum Bath

8.4 Quantum Harmonic Oscillator and the Bath: Density Matrix Description

8.5 Diagonal Fluctuations

8.6 Fluctuations of a Displaced Oscillator

9 Projection Operator Approach to Open Quantum Systems

9.1 Liouville Formalism

9.2 Reduced Density Matrix of Open Systems

9.3 Projection (Super)operators

9.4 Nakajima–Zwanzig Identity

9.5 Convolutionless Identity

9.6 Relation between the Projector Equations in Low-Order Perturbation Theory

9.7 Projection Operator Technique with State Vectors

10 Path Integral Technique in Dissipative Dynamics

10.1 General Path Integral

10.2 Imaginary-Time Path Integrals

10.3 Real-Time Path Integrals and the Feynman–Vernon Action

10.4 Quantum Stochastic Process: The Stochastic Schrödinger Equation

10.5 Coherent-State Path Integral

10.6 Stochastic Liouville Equation

11 Perturbative Approach to Exciton Relaxation in Molecular Aggregates

11.1 Quantum Master Equation

11.2 Second-Order Quantum Master Equation

11.3 Relaxation Equations from the Projection Operator Technique

11.4 Relaxation of Excitons

11.5 Modified Redfield Theory

11.6 Förster Energy Transfer Rates

11.7 Lindblad Equation Approach to Coherent Exciton Transport

11.8 Hierarchical Equations of Motion for Excitons

11.9 Weak Interchromophore Coupling Limit

11.10 Modeling of Exciton Dynamics in an Excitonic Dimer

11.11 Coherent versus Dissipative Dynamics: Relevance for Primary Processes in Photosynthesis

Part Two: Spectroscopy

12 Introduction

13 Semiclassical Response Theory

13.1 Perturbation Expansion of Polarization: Response Functions

13.2 First Order Polarization

13.3 Nonlinear Polarization and Spectroscopic Signals

14 Microscopic Theory of Linear Absorption and Fluorescence

14.1 A Model of a Two-State System

14.2 Energy Gap Operator

14.3 Cumulant Expansion of the First Order Response

14.4 Equation of Motion for Optical Coherence

14.5 Lifetime Broadening

14.6 Inhomogeneous Broadening in Linear Response

14.7 Spontaneous Emission

14.8 Fluorescence Line-Narrowing

14.9 Fluorescence Excitation Spectrum

15 Four-Wave Mixing Spectroscopy

15.1 Nonlinear Response of Multilevel Systems

15.2 Multilevel System in Contact with the Bath

15.3 Application of the Response Functions to Simple FWM Experiments

16 Coherent Two-Dimensional Spectroscopy

16.1 Two-Dimensional Representation of the Response Functions

16.2 Molecular System with Few Excited States

16.3 Electronic Dimer

16.4 Dimer of Three-Level Chromophores – Vibrational Dimer

16.5 Interferences of the 2D Signals: General Discussion Based on an Electronic Dimer

16.6 Vibrational vs. Electronic Coherences in 2D Spectrum of Molecular Systems

17 Two Dimensional Spectroscopy Applications for Photosynthetic Excitons

17.1 Photosynthetic Molecular Aggregates

17.2 Simulations of 2D Spectroscopy of Photosynthetic Aggregates

18 Single Molecule Spectroscopy

18.1 Historical Overview

18.2 How Photosynthetic Proteins Switch

18.3 Dichotomous Exciton Model

Appendix

A.1 Elements of the Field Theory

A.2 Characteristic Function and Cumulants

A.3 Weyl Formula

A.4 Thermodynamic Potentials and the Partition Function

A.5 Fourier Transformation

A.6 Born Rule

A.7 Green’s Function of a Harmonic Oscillator

A.8 Cumulant Expansion in Quantum Mechanics

A.9 Matching the Heterodyned FWM Signal with the Pump-Probe

A.10 Response Functions of an Excitonic System with Diagonal and Off-Diagonal Fluctuations in the Secular Limit

References

Index

Related Titles

Radons, G., Rumpf, B., Schuster, H. G. (eds.)

Nonlinear Dynamics of Nanosystems

2010

ISBN: 978-3-527-40791-0

 

Siebert, F., Hildebrandt, P.

Vibrational Spectroscopy in Life Science

2008

ISBN: 978-3-527-40506-0

 

Schnabel, W.

Polymers and Light

Fundamentals and Technical Applications

2007

ISBN: 978-3-527-31866-7

 

Reich, S., Thomsen, C., Maultzsch, J.

Carbon Nanotubes

Basic Concepts and Physical Properties

2004

ISBN: 978-3-527-40386-8

 

May, V., Kühn, O.

Charge and Energy Transfer Dynamics in Molecular Systems

2011

ISBN: 978-3-527-40732-3

 

Yakushevich, L. V.

Nonlinear Physics of DNA

1998

ISBN: 978-3-527-40417-9

The Authors

Prof. Leonas Valkunas

Department of Theoretical Physics

Vilnius University

Center for Physical Sciences and Technology

Vilnius, Lithuania

[email protected]

Prof. Darius Abramavicius

Department of Theoretical Physics

Vilnius University

Vilnius, Lithuania

[email protected]

Dr. Tomáš Mančal

Faculty of Mathematics and Physics

Charles University in Prague

Prague, Czech Republic

[email protected]

Cover Picture

All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.:

applied for

British Library Cataloguing-in-Publication Data:

A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN 978-3-527-41008-8

ePDF ISBN 978-3-527-65368-3

ePub ISBN 978-3-527-65367-6

mobi ISBN 978-3-527-65366-9

oBook ISBN 978-3-527-65365-2

Composition le-tex publishing services GmbH, Leipzig

Cover Design Grafik-Design Schulz, Fußgönheim

Preface

Classical mechanics is known for its ability to describe the dynamics of macroscopic bodies. Their behavior in the course of time is usually represented by classical trajectories in real three-dimensional space or in the so-called phase space defined by characteristic coordinates and momenta, which together determine the degrees of freedom of the body under consideration. For the description of the dynamics of a microscopic system, however, quantum mechanics should be used. In this case, the system dynamics is characterized by the time evolution of a complex quantity, the wavefunction, which encodes the maximum knowledge we can obtain about the quantum system. In the quantum mechanical description, coordinates and momenta cannot be determined simultaneously; their values must satisfy the Heisenberg uncertainty principle. At the interface between the classical world in which we live and the world of microscopic systems, this type of description is inherently probabilistic. This constitutes the fundamental difference between the classical and quantum descriptions of system dynamics. In principle, however, both classical and quantum mechanics describe the reversible behavior of an isolated system in the course of time.

Irreversibility of time evolution is a property found in the dynamics of open systems. No realistic system is isolated; it is always coupled to its environment, which in most cases cannot be considered a negligible factor. The theory of open quantum systems plays a major role in determining the dynamics and relaxation of excitations induced by an external perturbation. A typical external perturbation is caused by the interaction of a system with an electromagnetic field. Under resonance conditions, when the characteristic transition frequencies of the system match the frequencies of the electromagnetic field, energy is transferred from the field to the system and the system becomes excited. The study of the response of material systems to various types of external excitation is the main objective of spectroscopy. Spectroscopy, in general, is an experimental tool for monitoring the features and properties of a system based on the measurement of its response. More sophisticated spectroscopic experiments study the response that mirrors the dynamics of the excitation and its relaxation.

Together with the widely used conventional spectroscopic approaches, two-dimensional coherent spectroscopic methods have been developed recently and applied to studies of excitation dynamics in various molecular systems, such as photosynthetic pigment–protein complexes, molecular aggregates, and polymers. Despite the complexity of the temporal evolution of the two-dimensional spectra, some of these spectra demonstrate the presence of vibrational and electronic coherence on subpicosecond and even picosecond timescales. Such observations reveal the interplay between the coherent behavior of the system, which might be considered in terms of conventional quantum mechanics, and the irreversibility of the excitation dynamics due to the interaction of the system with its environment.

From a general point of view, quantum mechanics is the basic approach for considering various phenomena in molecular systems. However, a typical description must be based on a simplified model in which specific degrees of freedom are taken into consideration explicitly, while the rest are attributed to an environment, or bath. This is the usual approach for open quantum systems. The complexity of a molecular system built from a number of interacting molecules then has to be specifically taken into account when describing the quantum behavior of the system. For this purpose the concept of excitons is usually invoked.

As can be anticipated, this area of research covers a very broad range of fields in physics and chemistry. Having this in mind, we have divided this book into two parts. Part One, being more general, describes the basic principles and theoretical approaches which are necessary to describe the excitation dynamics and relaxation in quantum systems interacting with the environment. These theoretical approaches are then used for the description of spectroscopic observables in Part Two.

Consequently, we have many different readers of this book in mind. First of all, the book addresses undergraduate and graduate students in theoretical physics and chemistry, molecular and chemical physics, quantum optics, and spectroscopy. For this purpose the basic principles of classical physics, quantum mechanics, statistical physics, and stochastic processes are presented in Part One. Special attention is paid to the interface of classical and quantum physics. This includes a discussion of decoherence and entanglement, the projection operator technique, and classical and quantum stochastic problems. These processes are especially relevant in small molecular clusters, which often serve as primary natural functioning devices. Therefore, the adiabatic description of molecules, the concept of Frenkel and Wannier–Mott excitons, charge-transfer excitons, and the problems of exciton self-trapping and trapping are also presented. This knowledge helps in understanding the other chapters of this book, especially in Part Two, which is geared more toward graduate students and professionals interested in spectroscopy. Since different approaches are widely used to describe the problem of coherence, the various methods employed for this description are also discussed. Modern approaches for observing the processes that determine excitation dynamics and relaxation in molecular systems are discussed in Part Two, which is mainly devoted to the theoretical description of spectroscopic observations. For this purpose the response function formalism is introduced. Various spectroscopic methods are discussed, and results demonstrating the possibility of distinguishing coherent effects in the excitation dynamics are also presented.

We would like to thank our colleagues and students for their contribution. First of all we mention Vytautas Butkus, who produced almost all the figures in this book and who pushed us all the time to proceed with the book. He was also involved in the theoretical analysis of the two-dimensional spectra of molecular aggregates. We are also grateful to our students, Vytautas Balevicius Jr., Jevgenij Chmeliov, Andrius Gelzinis, Jan Olsina, and others, who were involved in solving various theoretical models. We are thankful to our colleagues and collaborators, the discussions with whom were very stimulating and helped in understanding various aspects in this rapidly developing field of science. Especially we would like to express our appreciation to our colleagues Shaul Mukamel and Graham R. Fleming, who were initiators of two-dimensional coherent electronic spectroscopy and who have inspired our research. We also thank our wives and other members of our families for patience, support, and understanding while we were taking precious time during holidays and vacations to write this book.

Vilnius, Prague

November 2012

Leonas Valkunas, Darius Abramavicius, Tomáš Mančal

Part One

Dynamics and Relaxation

1

Introduction

Photoinduced dynamics of excitation in molecular systems are determined by various interactions occurring at different levels of their organization. Depending on the perturbation conditions, excitation in solids and molecular aggregates may lead to a host of processes, from coherent and incoherent energy migration to charge generation, charge transfer, crystal lattice deformation, or reorganization of the surrounding environment. The theoretical description of all these phenomena therefore requires one to treat part of the molecular system as an open system subject to external perturbation. Since perfect insulation of any system from the rest of the world is practically unattainable, the theory of open systems plays a major role in any realistic description of experiments on molecular systems.

In classical physics, the dynamics of an open system is reflected in the temporal evolution of its parameters, leading to a certain fixed point in the corresponding phase space. This fixed point corresponds to a thermodynamic equilibrium, with the unobserved degrees of freedom determining the thermodynamic bath. Many situations in molecular physics allow one to apply a classical or semiclassical description of the evolution of the perturbation-induced excitation in an open system. Often, the influence of the large number of degrees of freedom can be efficiently simulated by stochastic fluctuations of some essential parameters of the system. Such fluctuations may lead to transitions between several stable fixed points in the phase space of the system, or, in a semiclassical situation, to transitions between several states characterized by different energies.

Apart from classical fluctuations, a genuine quantum description might be required when entanglement between constituents of the system has to be considered. This is especially essential for systems with energy gaps larger than the thermal energy, which is an energy characteristic of the bath defined by macroscopic degrees of freedom. Only a full quantum description then leads to proper formation of a thermal equilibrium.

Indeed, it is impossible to switch off fluctuations completely. Even if we place a system in a complete vacuum and isolate it from all light sources, there still exist background vacuum fluctuations of the electromagnetic field. Even at zero temperature these fluctuations affect the quantum system, giving rise to spontaneous emission. All these fluctuations cause the decay of excited states and establish thermal equilibrium and stochasticity "in the long run."

The first part of this book presents a coarse-grained review of the knowledge which is needed for a description of excitation dynamics and relaxation in molecular systems. Basic topics of classical physics which are directly related to the main issue of this book are presented in Chapter 2. It is worthwhile mentioning that concepts of classical physics are also needed for better understanding of the basic behavior of quantum systems. The electromagnetic field, which is responsible for electronic excitations, can usually be well described in terms of classical electrodynamics. Thus, the main principles of this theory and the description of the field–matter interaction are also introduced in Chapter 2. The concept and main applicative features of stochastic dynamics are presented in Chapter 3. Markov processes, the Fokker–Planck equation, and diffusive processes together with some relationships between these descriptions and purely stochastic dynamics are also described in Chapter 3. The basic concepts of quantum mechanics, which is the fundamental theory of the microworld, are presented in Chapter 4. Together with its main postulates and equations, some typical model quantum systems with exact solutions are briefly discussed. The density matrix and second quantization of the vibrations and electromagnetic field are briefly introduced as well. Special attention is paid in this book to consideration of molecular aggregates. The adiabatic approximation, the exciton concept, Frenkel excitons, Wannier–Mott excitons, and charge-transfer excitons are described together with vibronic interactions, the self-trapping problem, and the exciton trapping problem in Chapter 5. Chapter 6 is devoted to a discussion of decoherence and entanglement concepts. The problem of measurements in quantum mechanics and the relative state interpretation are also discussed. The basics of statistical physics are then presented in Chapter 7. 
The relationship between the statistical approach and thermodynamics is briefly outlined, and standard statistics used for descriptions of classical and quantum behavior are presented. The harmonic oscillator model of the system–bath interaction is described in Chapter 8. In Chapter 9 we describe the projection operator technique together with the concept of the reduced density matrix and its master equations. The path integral technique is then discussed in Chapter 10 together with the stochastic Schrödinger equation approach and the so-called hierarchical equations of motion. Excitation dynamics and relaxation in some model systems are discussed in Chapter 11.

2

Overview of Classical Physics

In this chapter we will review some of the most important concepts of classical physics. Despite the eminent role played by quantum mechanics in the description of molecular systems, classical physics provides an important conceptual and methodological background to most of the theories presented in later chapters, and to quantum mechanics itself. Often classical or semiclassical approximations are indispensable to make a theoretical treatment of problems in molecular physics feasible. In the limited space of this chapter we do not intend to provide a complete review, as we assume that the reader is familiar with most of the classical concepts. The interested reader is referred to specialized textbooks in which the topics presented in this chapter are treated with full rigor (e.g., [1–4]).

2.1 Classical Mechanics

Classical mechanics, as the oldest discipline of physics, has provided the formal foundation for most of the other branches of physics. Perhaps with the exception of phenomenological thermodynamics, there is no theory with a similar general validity and success that does not owe its foundations to mechanics. Classical mechanics reached its height with its Lagrangian and Hamiltonian formulations. These subsequently played a very important role in the development of statistical and quantum mechanics.

In classical mechanics, the physical system is described by a set of idealized material points (point-sized particles) in space which interact with each other through a specific set of forces. The coordinates and velocities of all particles fully describe the state of the system. The three laws formulated by Newton fully describe the motion of this system. The first law states that a particle moves at a constant speed in a fixed direction if it is not affected by a force. The second law relates the change of the particle's motion to the external forces acting on it. The third law expresses the symmetry of the forces: particle a acts on particle b with a force of equal magnitude and opposite direction to that with which particle b acts on particle a.

The dynamics of the system of N particles is described by a set of differential equations [1, 2, 4]:

(2.1) $m_i \frac{\mathrm{d}^2 \mathbf{r}_i}{\mathrm{d}t^2} = \sum_{j \neq i} \mathbf{F}_{ij}, \qquad i = 1, \ldots, N$

Here $m_i$ is the mass of the $i$th particle and $\mathbf{F}_{ij}$ is the force created by the $j$th particle acting on the $i$th particle. The velocity of the $i$th particle is given by the time derivative of its coordinate, $\mathbf{v}_i = \dot{\mathbf{r}}_i$. For a problem formulated in three spatial dimensions, the particle momenta $\mathbf{p}_i = m_i \mathbf{v}_i$ together with the coordinates $\mathbf{r}_i$ span a $6N$-dimensional phase space.

The accessible phase space is often smaller due to specific symmetries, which result in certain conservation laws. For instance, if the points describe some finite body which is at rest, the center of mass of all points may be fixed. In that case the dimension of the phase space effectively decreases by six (three coordinates and three momenta corresponding to the center of mass are equal to zero). If, additionally, the body is rigid, we are left with a three-dimensional phase space characterizing the orientation of the body (e.g., the three Euler angles).

A single point in the phase space defines an instantaneous state of the system. The notion of the system’s state plays an important role in quantum physics; thus, it is also useful to introduce this type of description in classical physics. The motion of the system according to Newton’s laws draws a trajectory in the phase space. In the absence of external forces, the energy of the system is conserved, and the trajectory therefore corresponds to a particular energy value. Different initial conditions draw different trajectories in the phase space as shown schematically in Figure 2.1. The phase space trajectories never intersect or disappear. Later in the discussion of statistical mechanics this notion is used to describe the microcanonical ensemble of an isolated system.

Note that in Newton’s equation, (2.1), we can replace t by –t and the equation remains the same. Thus, the Newtonian dynamics is invariant to an inversion of the time axis, and the dynamics of the whole system is reversible. This means that Newton’s equation for a finite isolated system with coordinate-related pairwise forces has no preferred direction of the time axis. Because energy is conserved, the whole system does not exhibit any damping effects. The damping is often introduced phenomenologically. In order to achieve irreversible dynamics using a microscopic description, one has to introduce an infinitely large system so that the observable part is a small open subsystem of the whole. In such a subsystem the damping effects occur naturally from statistical arguments. Various treatments of open systems are described in subsequent chapters.

Figure 2.1 Motion of the system in a phase space starting with different initial conditions.
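The reversibility just described can be made concrete with a short numerical sketch (not from the book; the pairwise harmonic forces, masses, and step sizes below are hypothetical choices for illustration). A time-symmetric integrator such as velocity Verlet propagates Newton's equations (2.1) forward; reversing the velocities and propagating again retraces the trajectory back to the initial state:

```python
import numpy as np

def accelerations(r, m, k=1.0):
    # pairwise harmonic forces F_ij = -k (r_i - r_j); a hypothetical choice
    a = np.zeros_like(r)
    for i in range(len(r)):
        for j in range(len(r)):
            if i != j:
                a[i] += -k * (r[i] - r[j]) / m[i]
    return a

def propagate(r, v, m, dt, steps):
    # velocity Verlet: a time-symmetric discretization of m_i r_i'' = sum_j F_ij
    r, v = r.copy(), v.copy()
    a = accelerations(r, m)
    for _ in range(steps):
        r += v * dt + 0.5 * a * dt**2
        a_new = accelerations(r, m)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return r, v

m = np.array([1.0, 2.0])
r0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
v0 = np.array([[0.0, 0.1, 0.0], [0.0, -0.05, 0.0]])

rf, vf = propagate(r0, v0, m, dt=1e-3, steps=5000)
rb, vb = propagate(rf, -vf, m, dt=1e-3, steps=5000)  # run "backward"
print(np.allclose(rb, r0) and np.allclose(vb, -v0))  # True: the dynamics is reversible
```

Velocity Verlet is itself time-symmetric, so the reversal holds to round-off accuracy; a non-symmetric scheme such as forward Euler would not retrace the path exactly.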

2.1.1 Concepts of Theoretical Mechanics: Action, Lagrangian, and Lagrange Equations

Some problems in mechanics can be solved exactly. The feasibility of such an exact solution often depends crucially on our ability to express the problem in an appropriate coordinate system. Let us now find a more general way of expressing the mechanical equations of motion that has the same general form in an arbitrary coordinate system, and therefore allows a straightforward transformation from one coordinate system to another. This new form of the representation of Newton's equations is called the Lagrangian formulation of mechanics.

Let us start with Newton’s law, (2.1), in the following form:

(2.2) $m_i \ddot{\mathbf{r}}_i + \nabla_i V(\mathbf{r}_1, \ldots, \mathbf{r}_N) = 0$

where the forces are assumed to derive from the potential energy $V$. Multiplying by an arbitrary variation $\delta\mathbf{r}_i$ of the trajectory that vanishes at times $t_1$ and $t_2$, summing over all particles, and integrating over time yields

(2.3) $\int_{t_1}^{t_2} \mathrm{d}t \sum_i \left( \nabla_i V \cdot \delta\mathbf{r}_i + m_i \ddot{\mathbf{r}}_i \cdot \delta\mathbf{r}_i \right) = 0$

The first term on the left-hand side of (2.3) can obviously be written as a variation of an integral over the potential:

(2.4) $\int_{t_1}^{t_2} \mathrm{d}t \sum_i \nabla_i V \cdot \delta\mathbf{r}_i = \delta \int_{t_1}^{t_2} V \, \mathrm{d}t$

The second term on the left-hand side can be turned into a variation as well. We apply integration by parts and interchange the variation with the derivative to obtain

(2.5) $\int_{t_1}^{t_2} \mathrm{d}t \sum_i m_i \ddot{\mathbf{r}}_i \cdot \delta\mathbf{r}_i = -\int_{t_1}^{t_2} \mathrm{d}t \sum_i m_i \dot{\mathbf{r}}_i \cdot \delta\dot{\mathbf{r}}_i = -\delta \int_{t_1}^{t_2} \mathrm{d}t \sum_i \frac{m_i \dot{\mathbf{r}}_i^2}{2}$

(2.6) $\delta \int_{t_1}^{t_2} \mathrm{d}t \left( \sum_i \frac{m_i \dot{\mathbf{r}}_i^2}{2} - V \right) = 0$

Here we used the rules of variation of a product, and we multiplied the equation obtained by –1. Now, the first term denotes the total kinetic energy of the system. The second term is the full potential energy. Thus, the variation of the kinetic energy must be anticorrelated with the variation of the potential energy. This result is also implied by the conservation of the total energy.

We next denote the kinetic energy term by T, and introduce two new functions:

(2.7) $S = \int_{t_1}^{t_2} L(\mathbf{r}, \dot{\mathbf{r}}) \, \mathrm{d}t$

where

(2.8) $L(\mathbf{r}, \dot{\mathbf{r}}) = T - V$

Here, S denotes the action functional or simply the action. The scalar function L is the Lagrangian function, or the Lagrangian. The whole mechanics therefore reduces to the variational problem

(2.9) $\delta S = 0$

also known as the Hamilton principle. According to this principle, the trajectories ri(t), which satisfy Newton’s laws of motion, correspond to an extremum of the action functional S. In Chapter 10, we will see that the action functional plays an important role in the path integral representations of quantum mechanics.

Writing the Lagrangian in arbitrary generalized coordinates $q_i$ and evaluating the variation of the action explicitly, we have

(2.10) $\delta S = \int_{t_1}^{t_2} \mathrm{d}t \sum_i \left( \frac{\partial L}{\partial q_i}\, \delta q_i + \frac{\partial L}{\partial \dot{q}_i}\, \delta\dot{q}_i \right)$

Integration of the second term by parts, with the variations fixed at the end points, gives

(2.11) $\delta S = \int_{t_1}^{t_2} \mathrm{d}t \sum_i \left( \frac{\partial L}{\partial q_i} - \frac{\mathrm{d}}{\mathrm{d}t}\, \frac{\partial L}{\partial \dot{q}_i} \right) \delta q_i = 0$

This can only be satisfied for an arbitrary value of δqi if

(2.12) $\frac{\mathrm{d}}{\mathrm{d}t}\, \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0$

Equation (2.12) is the famous Lagrange equation of classical mechanics in a form independent of the choice of the coordinate system.
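The extremal property expressed by the Hamilton principle (2.9) can also be checked numerically. The sketch below (an illustrative example, not from the book; the oscillator parameters, time interval, and perturbation shape are hypothetical) discretizes the action of a harmonic oscillator and compares the true trajectory with paths perturbed between fixed endpoints:

```python
import numpy as np

m, k = 1.0, 1.0
omega = np.sqrt(k / m)
t = np.linspace(0.0, 1.0, 2001)   # interval shorter than half a period
dt = t[1] - t[0]

def action(x):
    # discretized S = int (m x'^2/2 - k x^2/2) dt, trapezoidal rule
    xdot = np.gradient(x, dt)
    lagr = 0.5 * m * xdot**2 - 0.5 * k * x**2
    return np.sum(0.5 * (lagr[1:] + lagr[:-1])) * dt

x_true = np.sin(omega * t)        # solves m x'' = -k x
eta = np.sin(np.pi * t / t[-1])   # perturbation vanishing at the endpoints

S0 = action(x_true)
S_up = action(x_true + 0.01 * eta)
S_dn = action(x_true - 0.01 * eta)
print(S0 < S_up and S0 < S_dn)    # the true path is an extremum of S
```

For time intervals shorter than half the oscillation period the extremum is in fact a minimum, which is what the comparison above detects; perturbing in either direction raises the action at second order in the perturbation amplitude.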

There is some flexibility in choosing a particular form of the Lagrangian. If we define a new Lagrangian L′ by adding a total time derivative of a function of coordinates,

(2.13) $L'(q, \dot{q}, t) = L(q, \dot{q}, t) + \frac{\mathrm{d}}{\mathrm{d}t} f(q, t)$

the equations of motion remain unchanged. The corresponding action integral S′ is

(2.14) $S' = S + f(q(t_2), t_2) - f(q(t_1), t_1)$

where the last two terms do not contribute to a variation with fixed points at times t1 and t2. By means of (2.13), the Lagrangian can sometimes be converted into a form more convenient for description of a particular physical situation. We will give an example of such a situation in Section 2.4.3.
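A quick symbolic check of this invariance can be sketched with SymPy (illustrative only; the harmonic Lagrangian and the choice $f(q) = q^2$ are arbitrary examples, not from the book). Both $L$ and $L' = L + \mathrm{d}f/\mathrm{d}t$ produce the same Lagrange equation (2.12):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)
m, k = sp.symbols('m k', positive=True)

L = m * x.diff(t)**2 / 2 - k * x**2 / 2   # harmonic oscillator Lagrangian
Lp = L + sp.diff(x**2, t)                 # L' = L + d f/dt, with f(q) = q^2

def lagrange_eq(lagr):
    # left-hand side of (2.12): d/dt (dL/dq') - dL/dq
    return sp.simplify(sp.diff(sp.diff(lagr, x.diff(t)), t) - sp.diff(lagr, x))

difference = sp.simplify(lagrange_eq(L) - lagrange_eq(Lp))
print(difference)  # 0: the equations of motion are unchanged
```

The total time derivative contributes the terms $2x\dot{x}$ to the Lagrangian, and its contributions to $\mathrm{d}/\mathrm{d}t(\partial L'/\partial\dot{q})$ and $\partial L'/\partial q$ cancel identically.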

2.1.2 Hamilton Equations

A more symmetric formulation of mechanics can be achieved by introducing generalized momenta $p_i$ as conjugate quantities to the coordinates $q_i$. So far the independent variables of the Lagrangian were $q_i$ and $\dot{q}_i$. Now we define the generalized momentum corresponding to the coordinate $q_i$ as

(2.15) $p_i = \frac{\partial L}{\partial \dot{q}_i}$

It can easily be shown that in Cartesian coordinates the momentum conjugate to the coordinate $\mathbf{r}_i$ is simply $\mathbf{p}_i = m_i \dot{\mathbf{r}}_i$. Let us investigate the variation of the Lagrangian:

(2.16) $\delta L = \sum_i \left( \frac{\partial L}{\partial q_i}\, \delta q_i + \frac{\partial L}{\partial \dot{q}_i}\, \delta\dot{q}_i \right)$

First, from (2.12) and (2.15) we obtain a very symmetric expression:

(2.17) $\delta L = \sum_i \left( \dot{p}_i\, \delta q_i + p_i\, \delta\dot{q}_i \right)$

which can also be written as

(2.18) $\delta L = \sum_i \dot{p}_i\, \delta q_i + \delta \sum_i p_i \dot{q}_i - \sum_i \dot{q}_i\, \delta p_i$

This in turn can be written in such a way that we have a variation of a certain function on the left-hand side and an expression with variations of pi and qi only on the right-hand side:

(2.19) $\delta \left( \sum_i p_i \dot{q}_i - L \right) = \sum_i \left( \dot{q}_i\, \delta p_i - \dot{p}_i\, \delta q_i \right)$

The expression on the left-hand side,

(2.20) $H(p, q) = \sum_i p_i \dot{q}_i - L$

is called the Hamiltonian function, or the Hamiltonian. Written as a function of the coordinates and momenta, its variation is

(2.21) $\delta H = \sum_i \left( \frac{\partial H}{\partial q_i}\, \delta q_i + \frac{\partial H}{\partial p_i}\, \delta p_i \right)$

Comparing the coefficients of variations of δqi and δpi, we get two independent equations:

(2.22) $\dot{q}_i = \frac{\partial H}{\partial p_i}$

and

(2.23) $\dot{p}_i = -\frac{\partial H}{\partial q_i}$

Equations (2.22) and (2.23) are known as the canonical or Hamilton equations of classical mechanics. The momentum $p_i$ is called the momentum canonically conjugate to the coordinate $q_i$. The Hamilton equations represent mechanics in a very compact and elegant way as a set of first-order differential equations.
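Equations (2.22) and (2.23) translate directly into a generic propagation rule: given any Hamiltonian $H(q, p)$, the phase-space velocity follows from its partial derivatives. The sketch below (illustrative only; the harmonic Hamiltonian and the point $(q, p) = (1, 0.5)$ are hypothetical choices) evaluates them by central finite differences:

```python
import numpy as np

def hamilton_rhs(H, q, p, eps=1e-6):
    # (2.22): dq/dt =  dH/dp ;  (2.23): dp/dt = -dH/dq  (central differences)
    dH_dp = (H(q, p + eps) - H(q, p - eps)) / (2 * eps)
    dH_dq = (H(q + eps, p) - H(q - eps, p)) / (2 * eps)
    return dH_dp, -dH_dq

# harmonic oscillator with m = k = 1: H = p^2/2 + q^2/2
H = lambda q, p: 0.5 * p**2 + 0.5 * q**2

qdot, pdot = hamilton_rhs(H, q=1.0, p=0.5)
print(qdot, pdot)  # close to 0.5 and -1.0, i.e. qdot = p/m, pdot = -k q
```

Because first-order differential equations need only the instantaneous state, such a right-hand side can be fed to any standard ODE integrator; no second derivatives of the trajectory appear, in contrast to the Lagrange equation (2.12).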

The Hamiltonian or Lagrangian formalism applies to systems with gradient forces, that is, forces given by derivatives of potentials. This assumption holds for gravitational, electromagnetic, and other fundamental forces. However, the frictional forces often included phenomenologically in the mechanical description of dynamic systems cannot be written as gradients of some friction potential. Thus, the Hamiltonian description cannot capture friction phenomena. A microscopic relaxation theory, together with openness of the dynamic system, is required to obtain a theory that includes relaxation phenomena.

2.1.3 Classical Harmonic Oscillator

The classical harmonic oscillator is a particle moving in the one-dimensional parabolic potential

(2.24) $V(x) = \frac{k x^2}{2}$

where the constant $k$ characterizes the stiffness of the restoring force.

In the Lagrange formulation we can define the Lagrangian as the difference of kinetic and potential energies, getting for a particle with mass m

(2.25) $L = \frac{m \dot{x}^2}{2} - \frac{k x^2}{2}$

From (2.12) it follows that $\frac{\partial L}{\partial \dot{x}} = m\dot{x}$ and $\frac{\partial L}{\partial x} = -kx$, and thus

(2.26) $m \ddot{x} + k x = 0$

which is equivalent to Newton's equation, as demonstrated in the previous sections.

Similarly, we can write the Hamiltonian

(2.27) H = p²/(2m) + kx²/2,

where the momentum p = mẋ. In this case the Hamilton equations of motion read

(2.28) ẋ = ∂H/∂p = p/m

(2.29) ṗ = −∂H/∂x = −kx

Again we get the same set of equations of motion, which means that the dynamics is equivalent whatever type of description is chosen. However, the Hamiltonian formulation makes the number of independent variables explicit. In this case we obtain two equations for the variables x and p, the coordinate and the momentum, respectively. Thus, in the context of dynamic equations, it is a two-dimensional system (two-dimensional phase space).

Combining (2.28) and (2.29), we obtain the second-order equation

(2.30) ẍ + (k/m)x = 0,

Figure 2.2 Parabolic potential of the harmonic oscillator (a), and the two-dimensional phase space of the oscillator (b). The trajectory is the ellipse or the circle.

and the solution is given by

(2.31)

which yields

(2.32)

(2.33)

We thus find that the frequency of the oscillator is determined by the stiffness k of the force and the mass m of the particle. Keeping this in mind, we can write the potential energy as

(2.34) V(x) = mω²x²/2

(2.35)

Later we will find that this form of the Hamiltonian is equivalent to the Hamiltonian of the quantum harmonic oscillator and the constant α is associated with the reduced Planck constant.

The solution of the dynamic equations can now be written as

(2.36)

(2.37)

which shows that the phase space defined by the y and z axes corresponds to the complex plane, and a point in this space traces out a circle. In the following we will often encounter classical and quantum oscillators; the latter are described in Section 4.6.1.
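As an illustration (a sketch with arbitrary parameters, not taken from the book), the Hamilton equations (2.28) and (2.29) can be integrated numerically with a symplectic Euler step; the numerical trajectory follows the analytic cosine solution and stays on the constant-energy ellipse in phase space:

```python
import math

# Semi-implicit (symplectic) Euler integration of the Hamilton
# equations x' = p/m, p' = -k*x for a harmonic oscillator.
# The values of m, k, and the initial state are illustrative.
m, k = 2.0, 8.0
omega = math.sqrt(k / m)          # oscillator frequency, omega = sqrt(k/m)
x, p = 1.0, 0.0                   # initial phase-space point
dt, steps = 1e-4, 50000

energy0 = p * p / (2 * m) + k * x * x / 2
for _ in range(steps):
    p -= k * x * dt               # p' = -k x
    x += p / m * dt               # x' = p/m

t = steps * dt
x_exact = math.cos(omega * t)     # analytic solution for x(0)=1, p(0)=0
energy = p * p / (2 * m) + k * x * x / 2

print(abs(x - x_exact))           # small discretization error
print(abs(energy - energy0))     # phase-space point stays on the ellipse
```

The symplectic step is chosen because it preserves the phase-space ellipse up to a bounded error, mirroring the geometric picture of Figure 2.2.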

2.2 Classical Electrodynamics

For our introduction to classical electrodynamics, the microscopic Maxwell–Lorentz equations provide a convenient starting point. They enable us to view matter as an ensemble of charged particles, as opposed to the continuum view of macroscopic electrodynamics. The microscopic electric and magnetic fields are denoted by E and B, respectively. Let us assume that there are particles with charges qi located at points ri in space. The density of charge and the density of current can then be defined as

(2.38) ρ(r) = Σi qi δ(r − ri), j(r) = Σi qi ṙi δ(r − ri)

The Maxwell–Lorentz equations for the fields in a vacuum read [3, 5]

(2.39) ∇·E = ρ/ε0

(2.40) ∇·B = 0

(2.41) ∇×E = −∂B/∂t

(2.42) ∇×B = μ0 j + (1/c²) ∂E/∂t

We introduced the usual constants – the vacuum permittivity ε0, the magnetic permeability μ0, and the speed of light in a vacuum c – which are all related through ε0μ0 = 1/c². Here ∇· denotes divergence, and ∇× is the curl operator, as described in Appendix A.1.

The same equations are valid for the microscopic and macroscopic cases. The difference is only in the charge and current densities, which in the macroscopic case are assumed to be continuous functions of space, while in the microscopic case the charge and current densities are given as a collection of microscopic points and their velocities.

2.2.1 Electromagnetic Potentials and the Coulomb Gauge

For the subsequent discussion, it is advantageous to introduce the vector potential A, which determines the magnetic field through the following relation:

(2.43) B = ∇×A

The magnetic field given by such an expression automatically satisfies the second Maxwell–Lorentz equation, (2.40). Since for any scalar function χ we have the identity ∇×(∇χ) = 0, the vector potential is defined only up to the so-called gauge function χ, and the transformation

(2.44) A → A + ∇χ

does not change the magnetic field.

The same identity allows us to rewrite the third Maxwell–Lorentz equation, (2.41), in a more convenient form. Applying definition (2.43) to (2.41), we obtain

(2.45) ∇×(E + ∂A/∂t) = 0,

which can be satisfied by postulating a scalar potential ϕ through

(2.46) E = −∇ϕ − ∂A/∂t

It is easy to see that if A is transformed by (2.44), the simultaneous transformation

(2.47) ϕ → ϕ − ∂χ/∂t

keeps (2.46) satisfied. The transformation composed of (2.44) and (2.47) is known as the gauge transformation, and the Maxwell–Lorentz equations are invariant with respect to this transformation. This phenomenon is denoted as gauge invariance.

The freedom in the choice of A and ϕ can be used to transform Maxwell–Lorentz equations into a form convenient for a particular physical situation. Here we will use the well-known Coulomb gauge, which is useful for separating the radiation part of the electromagnetic field from the part associated with charges. The Coulomb gauge is defined by the condition

(2.48) ∇·A = 0,

which can always be satisfied [6].
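A short argument (standard, though not spelled out here) shows why: starting from an arbitrary potential A′, choose the gauge function χ as a solution of a Poisson-type equation,

```latex
\nabla^2\chi = -\nabla\cdot\mathbf{A}' ,\qquad
\mathbf{A} = \mathbf{A}' + \nabla\chi
\;\;\Longrightarrow\;\;
\nabla\cdot\mathbf{A} = \nabla\cdot\mathbf{A}' + \nabla^2\chi = 0\, .
```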

2.2.2 Transverse and Longitudinal Fields

The Maxwell–Lorentz equations provide a complete description of the system of charges and electromagnetic fields, including their mutual interaction. In most of this book we will be interested in treating radiation as a system that interacts weakly with matter represented by charged particles. It would therefore be extremely useful to separate the electromagnetic fields into those associated with the radiation, that is, those that can exist in free space without the presence of charges and currents, and those directly associated with their sources. Such a separation can be achieved by the so-called Helmholtz theorem [6], which states that any vector field a can be decomposed into its transverse (divergence-free – denoted by ⊥) and longitudinal (rotation-free – denoted by ||) parts. That is, any vector field a can be written as

(2.49) a = a⊥ + a||

The transverse field is defined by

(2.50) ∇·a⊥ = 0,

while the longitudinal component satisfies

(2.51) ∇×a|| = 0.
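The decomposition is easy to realize numerically. The sketch below (illustrative, not from the book; it assumes NumPy and a periodic grid) projects a two-dimensional vector field in Fourier space, where the longitudinal part of a(k) is its component along k and the transverse part is the remainder; the transverse result is indeed divergence-free and the longitudinal one curl-free:

```python
import numpy as np

# Helmholtz decomposition of a 2D vector field on a periodic grid.
# The sample field is arbitrary, chosen only for illustration.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
ax = np.sin(X) * np.cos(Y) + np.cos(2 * Y)
ay = np.cos(X) * np.sin(Y) + np.sin(X)

kx = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # angular wavenumbers
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                        # avoid dividing by zero at k = 0

akx, aky = np.fft.fft2(ax), np.fft.fft2(ay)
k_dot_a = KX * akx + KY * aky
# longitudinal part: a_par(k) = k (k . a) / k^2 ; transverse part: the rest
lx, ly = KX * k_dot_a / k2, KY * k_dot_a / k2
tx, ty = akx - lx, aky - ly

div_t = np.fft.ifft2(1j * (KX * tx + KY * ty)).real    # divergence of transverse part
curl_l = np.fft.ifft2(1j * (KX * ly - KY * lx)).real   # curl (z comp.) of longitudinal part
print(np.max(np.abs(div_t)), np.max(np.abs(curl_l)))   # both vanish to rounding error
```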

The magnetic field is purely transverse due to (2.40), and thus the decomposition of electric and magnetic fields reads

(2.52) E = E⊥ + E||, B = B⊥.

The Maxwell–Lorentz equations for the transverse and longitudinal fields can then be given separately:

(2.53) ∇·E|| = ρ/ε0

(2.54) ∇×E⊥ = −∂B/∂t

(2.55) ∇×B = μ0 j⊥ + (1/c²) ∂E⊥/∂t

and

(2.56) ε0 ∂E||/∂t + j|| = 0.

The last of these equations can be converted into the well-known continuity equation by applying ∇ and using (2.53):

(2.57) ∂ρ/∂t + ∇·j = 0.

This means that the longitudinal current density is related to the change of the charge density, and the total charge is locally conserved.

From (2.39) and (2.46) we can derive the Poisson equation which relates the scalar potential and the charge density,

(2.58) ∇²ϕ = −ρ/ε0,

and so in the Coulomb gauge the scalar potential is given by the instantaneous charge distribution. Equation (2.46), which relates the scalar potential to vectors A and E, can also be decomposed into transverse and longitudinal parts, yielding

(2.59) E⊥ = −∂A/∂t

and

(2.60) E|| = −∇ϕ.

Equations (2.59) and (2.60) therefore decompose the electric field into the part generated by the charge distributions through the scalar potential and the part associated with the vector potential A.

The vector potential, and therefore also the transverse part of the electric field, can exist without charges. This part naturally represents the radiation part of the electromagnetic field, and it is necessarily related to the magnetic field. The other part is entirely due to charges: the charges create the scalar potential, which generates the longitudinal electric field. Equations (2.54) and (2.55) lead to

(2.61) (1/c²) ∂²A/∂t² − ∇²A = μ0 j⊥,

where we used the Coulomb gauge condition, (2.48), and the vector identity, (2.60). The term on the right-hand side is a natural source of the light–matter interaction.

2.3 Radiation in Free Space

In this section we will show that the relationships of electrodynamics can also be cast into the Lagrangian and Hamiltonian formalisms discussed in Section 2.1. For this purpose we have to identify proper conjugate momenta for the selected “coordinate” variables of the field. For now, we will consider the radiation in a space free of charges.

2.3.1 Lagrangian and Hamiltonian of the Free Radiation

We now consider the case where the charge density is zero and thus in the Coulomb gauge the scalar potential ϕ(r) is taken to be zero as well. All electric and magnetic fields are then necessarily given by the vector potential A. We can therefore choose A as a suitable “coordinate” for the description of the radiation. The equation of motion for the vector potential is given by

(2.62) ε0 ∂²A/∂t² − (1/μ0) ∇²A = 0,

which follows from (2.61) in the case when the current is zero. We multiplied (2.61) by 1/μ0 for later convenience.1)

Let us take the Cartesian coordinate system. Equation (2.62) can be understood as the equation of motion for the vector potential. We express the equation in components Ax, Ay, and Az and multiply the components by their variations (a dot product) to obtain

(2.63)

Here we used the Levi-Civita symbol εijk to express the cross product a × b (see Appendix A.1). In order to convert the expression on the left-hand side into a variation of a functional, we have to integrate it not only over time (as we did in the case of classical mechanics), but also over space. We will use the same trick to treat the double spatial derivative as in the case of the time derivatives – we will integrate it by parts. We also assume that the variations are zero at times t1 and t2 (the limits of the time integration) and at the limits of the spatial integration. Under the spatial integration, the first term on the left-hand side of (2.63) yields

(2.64)

(2.65)

Consequently, the Lagrangian density of the radiation field defined as

(2.66) ℒrad = (ε0/2) Ȧ² − (1/(2μ0)) (∇×A)²

leads to the correct equations of motion, which can be verified by inserting it into the Lagrange equations, (2.12) [6].

The momentum density Π conjugate to A is given as

(2.67) Π = ∂ℒrad/∂Ȧ = ε0Ȧ,

and the Hamiltonian density (given by ℋ = Π·Ȧ − ℒ) of the radiation field is

(2.68) ℋrad = Π²/(2ε0) + (1/(2μ0)) (∇×A)².

Using (2.43) and (2.60), we can recast this result in a more familiar form,

(2.69) ℋrad = (ε0/2) E⊥² + (1/(2μ0)) B²,

with transverse electric and magnetic fields.

We find that the Hamiltonian of the electromagnetic field has a quadratic form reminiscent of the harmonic oscillator described in Section 2.1.3. Note that in the theory of electromagnetic fields we need to distinguish the Lagrangian and the Hamiltonian from their densities. The latter are denoted by the calligraphic letters ℒ and ℋ, respectively. We use this distinction throughout this chapter.

2.3.2 Modes of the Electromagnetic Field

It is very useful to introduce the notion of field modes. With this concept we will be able to show that the free radiation is formally similar to an ensemble of harmonic oscillators. This idea will be very useful when we turn to field quantization.

We expand the vector potential into a discrete set of plane-wave modes in a volume V,

(2.70) A(r, t) = Σk Ak(t) exp(ik·r), with A−k = Ak* so that the field is real.

The Coulomb gauge requires that for each k

(2.71) k·Ak = 0,

and consequently Ak is a vector in a plane perpendicular to k. As k is essentially the propagation direction of the field modes, the vector potential is perpendicular to the propagation direction. Defining unit vectors e1k and e2k, which are perpendicular to each other and to k, we can write

(2.72) Ak = e1k A1k + e2k A2k

(2.73)

The exponential mode factors, (2.73), form a Kronecker delta under integration over space,

(2.74) (1/V) ∫V d³r exp(i(k − k′)·r) = δkk′,

as one can verify by direct integration. The Hamiltonian of the radiation, (2.69), then reads

(2.75)

If we define two real variables,

(2.76)

and

(2.77)

the radiation Hamiltonian takes a well-known form:

(2.78) Hrad = (1/2) Σk,λ (p²kλ + ω²k q²kλ), with ωk = ck.

Equation (2.78) represents the radiation as an ensemble of independent harmonic oscillators of unit masses. This makes quantization of the radiation rather straightforward, and enables us to apply to the radiation all sorts of results derived originally for harmonic oscillators.
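The structure of (2.78) can be illustrated with a toy calculation (the mode data are arbitrary, not from the book): each unit-mass mode rotates independently in its own phase space at its frequency ωk, and the total energy ½Σ(p² + ω²q²) is conserved.

```python
import math

# The free field as independent unit-mass oscillators,
# H = (1/2) * sum_k (p_k^2 + w_k^2 q_k^2).
# Each mode evolves analytically as a rotation in its phase space.
modes = [(1.0, 0.5, 0.2), (2.0, -0.3, 0.7), (3.5, 0.1, -0.4)]  # (w, q0, p0), illustrative

def energy(states):
    return 0.5 * sum(p * p + w * w * q * q for w, q, p in states)

def evolve(states, t):
    out = []
    for w, q, p in states:
        c, s = math.cos(w * t), math.sin(w * t)
        # q(t) = q0 cos(wt) + (p0/w) sin(wt),  p(t) = p0 cos(wt) - w q0 sin(wt)
        out.append((w, q * c + (p / w) * s, p * c - w * q * s))
    return out

e0 = energy(modes)
e1 = energy(evolve(modes, 2.7))
print(abs(e1 - e0))   # conserved up to rounding
```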

(2.79)

This result finds use, for example, in the description of spontaneous emission in Section 4.8, where the radiation forms a bath or environment for an excited emitter.

2.4 Light–Matter Interaction

In the previous section we described the radiation field free of any matter. However, the full description of the system of fields and charged particles by (2.61) contains matter properties on the right-hand side. We write the Lagrangian Lmat to describe Newton's laws for the particles and the Lagrangian Lrad to describe the free radiation. The transverse current j⊥ influences the vector potential A, and at the same time it depends on the particle velocities ṙi. It will therefore play a role in the mechanical part of the equations of motion. This opens a way to define the Hamiltonian that will describe the light–matter interaction.

2.4.1 Interaction Lagrangian and Correct Canonical Momentum

In order to find the Hamiltonian formulation corresponding to (2.61) we will be looking for a light–matter interaction Lagrangian,

(2.80) Lint = ∫ d³r ℒint,

which produces the desired right-hand-side term in the equations of motion. The free-space Lagrangian density of the radiation field does not depend explicitly on A (it only depends on Ȧ and ∇A). The term ∂ℒ/∂A in the Lagrange equation, (2.12), is therefore equal to zero, and this term can be used to obtain the right-hand side of (2.61). Defining the interaction Lagrangian density by

(2.81) ℒint = j·A

correctly leads to (2.61).

The current explicitly contains the particle velocities ṙi (see the definition given by (2.38)), and its presence in the total Lagrangian L therefore complicates the definition of the conjugate momentum:

(2.82) pi = ∂Ltot/∂ṙi.

The Lagrangian of the isolated matter leads to the purely kinetic conjugate momentum pi = miṙi, and (2.82) gives

(2.83)

To evaluate (2.83) we have to identify the transverse part of j, which gives zero under application of ∇·. Using the definition given by (2.82), we can write

(2.84)

(2.85)

Equation (2.85) can be used to identify the longitudinal and transverse parts of j. The decomposition can be written in a tensor manner using the components of the unit vector n as follows:

(2.86)

The components of the transverse part of j are thus defined as

(2.87)

where we defined the decomposition of the unity tensor,

(2.88) δij = δ⊥ij + δ||ij,

by the two tensors

(2.89) δ||ij = ni nj

and

(2.90) δ⊥ij = δij − ni nj.

Using δ⊥ and δ||, we can obtain the longitudinal and transverse parts of any given vector field a as

(2.91) a||i = Σj δ||ij aj, a⊥i = Σj δ⊥ij aj.

Now we can finally evaluate the new conjugate momentum, (2.83). For its components we obtain

(2.92) piα = mi ṙiα + qi Aα(ri).

Here the fact that A is completely transverse is taken into account. As a result of incorporation of the interaction Lagrangian, the conjugate momentum of the particles becomes dependent on the vector potential A.

As we can see, the momentum of a particle is directly affected by the vector potential. We remind the reader once again that this expression is meaningful only in the Coulomb gauge and is not applicable to a general gauge; the vector potential is fully defined only once a specific gauge is chosen.
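For a single particle this is the familiar minimal-coupling form (a standard result consistent with (2.92); the notation here is ours):

```latex
H_{\mathrm{part}} = \frac{\bigl(\mathbf{p} - q\,\mathbf{A}(\mathbf{r})\bigr)^{2}}{2m} + q\,\phi(\mathbf{r}),
\qquad
\dot{\mathbf{r}} = \frac{\partial H_{\mathrm{part}}}{\partial \mathbf{p}}
= \frac{\mathbf{p} - q\,\mathbf{A}(\mathbf{r})}{m}\, ,
```

so the kinetic velocity is obtained from the canonical momentum only after subtracting qA.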

2.4.2 Hamiltonian of the Interacting Particle–Field System

In the previous subsections we defined the canonical variables, the Hamiltonian and the Lagrangian of the material system and the radiation field. We also determined the interaction Lagrangian. This allows us now to derive the full Hamiltonian of the interacting material system plus radiation. Combining (2.20), (2.66), (2.67), and (2.81), we can write the Hamiltonian of the interacting system as

(2.93) H = Σi pi·ṙi + ∫ d³r Π·Ȧ − Ltot,

where Ltot is the total Lagrangian of the material system, the radiation and their interaction, and Π is the momentum conjugate to A. This leads to

(2.94) H = Σi (pi − qiA(ri))²/(2mi) + V(r1, … , rN) + ∫ d³r [Π²/(2ε0) + (1/(2μ0)) (∇×A)²].

Here we introduced the symbol V(r1, … , rN) instead of ϕ for the electrostatic Coulomb potential, which is now equivalent to the scalar potential of the longitudinal field. Equation (2.94) represents the total classical Hamiltonian of an interacting system of charges and fields.

For our purposes we will group the particles into molecules or supramolecules (such as clusters or aggregates of molecules), and split the potential V into intermolecular and intramolecular parts:

(2.95) V = Σn V(ξn) + Σn<m V(ξn, ξm),

where ξn denotes the particles forming the nth molecule (or supramolecule). This splitting is essential. Some interactions, for example those inside the molecules or their aggregates, will be treated explicitly (by quantum chemistry, an excitonic model, or a similar theory), and some, for example those occurring between the aggregates or the molecules, can be included in the description of the light–matter interaction.

Our aim is now to write the Hamiltonian in a form suitable for studying the interaction of molecules with light. First, we split the Hamiltonian into three terms, where the first term describes the pure material system, the second term describes the radiation field, and the third term contains the mixed terms:

(2.96) H = Hmol + Hrad + Hint.

Hamiltonian Hmol of the molecules should include only the longitudinal fields, that is,

(2.97)

where

(2.98)

The radiation Hamiltonian is given by (2.68) and thus the rest of (2.94) composes the light–matter interaction Hamiltonian:

(2.99) Hint = −Σi (qi/mi) pi·A(ri) + Σi (q²i/(2mi)) A²(ri).

Equation (2.99) is the so-called minimal coupling Hamiltonian or the pA Hamiltonian, which represents a convenient starting point for the discussion of interaction of small molecules with light.

2.4.3 Dipole Approximation

The characteristic dimensions of molecular systems are usually much smaller than the wavelength of light. The radiation field can therefore be assumed to be homogeneous over the extent of the molecule or the molecular aggregate, and the vector potential A(ri) can be replaced by its value at a chosen reference point Rξ (e.g., the center of mass or of charge) inside the molecule:

(2.100) A(ri) ≈ A(Rξ).

Now we will use the fact that the equations of motion will remain the same if we add a total time derivative of a function of coordinates and time to the Lagrangian (see Section 2.1). Let us for simplicity assume that we have just two supramolecules or aggregates denoted by ξ1 and ξ2. We will add the following term:

(2.101)

where

(2.102) P(r) = μξ1 δ(r − Rξ1) + μξ2 δ(r − Rξ2)

is the polarization of the two molecules, and μξ is the dipole moment of molecule ξ. In the dipole approximation we can also write

(2.103)

and, therefore,

(2.104)

Consequently, addition of (2.104) to the total Lagrangian replaces the term containing the product j⊥ ∙ A by a term containing P⊥ ∙ Ȧ. As a result, the conjugate momentum of the particle is again purely kinetic,

(2.105) pi = mi ṙi,

and the momentum conjugate to the vector potential reads

(2.106) Π = ε0Ȧ − P⊥.

(2.107)

(2.108)

It can be shown [6] that in the dipole approximation the intermolecular part (the first term on the right-hand side) exactly cancels the interaction between the supramolecules, provided V(ξi, ξj) includes only the dipole–dipole interaction as well. The Hamiltonian therefore contains only the noninteracting part of the potential. The second term on the right-hand side is an intramolecular contribution of the polarization, which we will disregard, assuming that it does not play an important role in the radiative processes. The last step of our analysis of the Hamiltonian is the definition of a new field D(r), the so-called displacement vector, as

(2.109) D(r) = ε0E(r) + P(r).

From (2.106) and (2.109), we can see that Π = −D⊥. The total Hamiltonian can therefore be written as

(2.110)

Hamiltonian (2.110) is a possible starting point for studies of the light–matter interaction for nanoparticles and molecular aggregates. This interaction Hamiltonian (the last term) is in the convenient form of a product of the molecular dipole moment and the transverse field. In practical calculations, we can often assume that the polarization is linearly proportional to the electric field, that is,

(2.111) P(r) = ε0 χ E(r)

and

(2.112) D(r) = ε0 εr E(r), with εr = 1 + χ,

where χ and εr are the linear susceptibility and the relative permittivity, respectively.

1) We divided (2.61) by μ0 in order to obtain the Hamiltonian corresponding to the energy density. We could derive the Lagrangian function and the Hamiltonian function without this step and multiply them by a suitable constant at the end.

3

Stochastic Dynamics

Many dynamic processes can be characterized as stochastic. This class of processes is widely used to describe the time evolution of open systems [8, 9], in which the degrees of freedom of the system under consideration constitute only a small part of the total number of degrees of freedom of the system and its environment. If the environment coordinates are not followed explicitly, we can only observe the system dynamics affected by a large number of unknown forces. These forces may drive the system in an unpredictable way, and we cannot use simple deterministic differential equations to describe the system's degrees of freedom. To describe this situation conveniently, we introduce the concept of stochasticity: the evolution is characterized by probabilities that the system is in certain states, and the transitions between these states are random.

One such stochastic process is the celebrated Brownian motion of a microscopic bead in a liquid, first observed by Robert Brown in 1827. The bead is pushed randomly by the fluctuating molecules of its surroundings. As the particle interacts with many degrees of freedom of the liquid simultaneously, the net fluctuating force becomes Gaussian due to the central limit theorem. Not only classical Brownian motion falls into this category: an arbitrary process in a system coupled to a fluctuating environment can also be described by a stochastic process of some sort. It will be demonstrated later that quantum mechanics describes an intermediate case between deterministic and probabilistic dynamics: the wavefunction obeys a deterministic equation, while the measurement process is purely probabilistic.
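A minimal numerical sketch of this picture (illustrative, not from the book) is a one-dimensional random walk: each step sums many small independent kicks, so by the central limit theorem the net displacement per step is nearly Gaussian, and the mean squared displacement grows linearly with the number of steps, as for a diffusing Brownian particle.

```python
import random, statistics

# 1D random walk as a toy model of Brownian motion.
# All parameter values are illustrative.
random.seed(1)
walkers, steps, kicks = 2000, 50, 20

def displacement(steps):
    x = 0.0
    for _ in range(steps):
        # net force over one step = sum of many small random kicks
        x += sum(random.uniform(-1, 1) for _ in range(kicks))
    return x

final = [displacement(steps) for _ in range(walkers)]
msd = statistics.fmean(d * d for d in final)
# each kick has variance 1/3, so <x^2> = steps * kicks / 3
print(msd, steps * kicks / 3)
```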

3.1 Probability and Random Processes

To introduce random processes we first have to recall some probability theory. The elementary starting point is the concept of a set of random events. We define certain indivisible elementary events, which compose the so-called probability space. The probability space can then be divided into various regions covering some groups of elementary events, which are associated with observable random events. It is thus natural to think of a random event as a set of elementary events. This concept is presented graphically in Figure 3.1. Let us now introduce some concepts of set theory which are useful in the description of random events.

Figure 3.1 Probability space of random events: black dots represent the elementary events, and events A–C cover some elementary events. All elementary events create the probability space.

Consider two sets A and B. If every element of A is also an element of B, A is a subset of B, and this relation is denoted by

(3.1) A ⊂ B.

The union of sets, denoted by

(3.2) C = A ∪ B,

creates a new set, the elements of which are given by elements of A and B. Thus, sets A and B become subsets of the resulting set C. Intersection of sets

(3.3) D = A ∩ B

creates set D, which is formed by the elements shared by A and B. For completeness we also introduce the concepts of an empty set and the full set Ω. The empty set has no elements and the full set has all elements of the probability space.

A set complement to set A contains all elements of the full space which are not present in A. We denote such a set by Ac. We next introduce the complementarity operation,

(3.4) B \ A,

which removes all elements from B which are in A, and we can write

(3.5) B \ A = B ∩ Ac.

By definition we thus have

(3.6) A ∪ Ac = Ω

and

(3.7) A ∩ Ac = ∅.

Figure 3.2 Main definitions and operations of sets: Ω denotes the set covering the full space, A is a “finite” event, and Ac is the complement set. The bottom row describes the union, intersection, and subtraction operations as described in the text.

Geometrically these operations are represented in Figure 3.2.
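The same operations can be written directly with Python's built-in set type (the integer elementary events here are purely illustrative):

```python
# Set operations (3.1)-(3.7) with Python sets.
omega = set(range(10))           # full probability space
A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

union = A | B                    # A ∪ B
intersection = A & B             # A ∩ B
difference = B - A               # B \ A, i.e. B with the elements of A removed
Ac = omega - A                   # complement of A

print(A <= omega)                # A is a subset of the full space: True
print(difference == B & Ac)      # B \ A = B ∩ Ac: True
print(A | Ac == omega, A & Ac == set())   # (3.6) and (3.7): True True
```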

A random event is now understood as a realization of the elementary events belonging to a specific set: event A happens when one of its constituent elementary events is realized. To quantify the event we introduce the probability of the event, P(A). Three axioms of Kolmogorov fully characterize the probability of events. The first axiom states that the probability is a nonnegative number. The second axiom states that the full set has probability 1:

(3.8) P(Ω) = 1.

The empty set then has

(3.9) P(∅) = 0.

The third axiom states that the probability of the union of nonintersecting sets (A ∩ B = ∅) is given by

(3.10) P(A ∪ B) = P(A) + P(B).

It follows that a union of an arbitrary two sets has probability

(3.11) P(A ∪ B) = P(A) + P(B) − P(A ∩ B).

In practice, for all other events (or sets), the probability is given as the limit of the ratio of the number of realizations of event A, which we denote by mA, to the number of trials N; thus,

(3.12) P(A) = lim N→∞ (mA/N).

This relation is known as the law of large numbers. It represents the proper recipe for experimentally determining probabilities of various events, and it underlies the idea of the Monte Carlo simulation of stochastic processes. It requires, however, a large number of trials, and some events should not be tested experimentally at all (e.g., the failure of a nuclear power plant).
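A quick Monte Carlo illustration of (3.12) (our example, not from the book): estimate the probability that two fair dice sum to 7, whose exact value is 6/36 = 1/6, by counting realizations over many trials.

```python
import random

# Monte Carlo estimate of a probability as the fraction m_A / N.
# Event A: the sum of two fair dice equals 7 (true probability 1/6).
random.seed(7)
N = 200000
m_A = sum(1 for _ in range(N)
          if random.randint(1, 6) + random.randint(1, 6) == 7)
print(m_A / N)   # approaches 1/6 as N grows
```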

Conditional probability is one of the important concepts in the discussion of physical processes. Consider an event A. It may happen that in some realizations of event A, event B happens at the same time. Such a joint event corresponds to the overlap region of sets A and B. Let event B be an additional necessary condition that we want to include when describing event A. In this case the space of possible events is limited to set B, because realization of B is required. We denote this conditional probability by P(A/B). The event that A and B happen at the same time is given by the set A ∩ B, and its probability is thus proportional to P(A ∩ B). However, since event B is a necessary condition with its own probability P(B), proper normalization requires that the conditional probability satisfies

(3.13) P(A/B) = P(A ∩ B)/P(B).

Alternatively, we can define the probability of the intersection as

(3.14) P(A ∩ B) = P(A/B)P(B).

This allows us to define independent events. If A is independent of B, the conditional probability of A given B is just the probability of A, that is,

(3.15) P(A/B) = P(A).

This also means that for independent events we have

(3.16) P(A ∩ B) = P(A)P(B).
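Independence, (3.15) and (3.16), can be checked exactly on a small finite probability space (an illustrative example of ours, using two fair dice):

```python
from fractions import Fraction
from itertools import product

# Exact check of independence on the 36-point probability space of two dice.
space = list(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(len(event), len(space))

A = {s for s in space if s[0] % 2 == 0}   # first die is even,   P(A) = 1/2
B = {s for s in space if s[1] > 4}        # second die is 5 or 6, P(B) = 1/3

print(P(A & B) == P(A) * P(B))            # (3.16) holds: True
print(P(A & B) / P(B) == P(A))            # P(A/B) = P(A), (3.15): True
```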