Nonequilibrium Statistical Physics of Small Systems (E-Book)

Description

This book offers a comprehensive picture of nonequilibrium phenomena in nanoscale systems. Written by internationally recognized experts in the field, this book strikes a balance between theory and experiment, and includes in-depth introductions to nonequilibrium fluctuation relations, nonlinear dynamics and transport, single molecule experiments, and molecular diffusion in nanopores.
The authors explore the application of these concepts to nano- and biosystems by cross-linking key methods and ideas from nonequilibrium statistical physics, thermodynamics, stochastic theory, and dynamical systems. By providing an up-to-date survey of small systems physics, the text serves as both a valuable reference for experienced researchers and as an ideal starting point for graduate-level students entering this newly emerging research field.

Page count: 810

Publication year: 2013




Contents

Cover

Reviews of Nonlinear Dynamics and Complexity

Title Page

Copyright

Preface

List of Contributors

Color Plates

Part I: Fluctuation Relations

Chapter 1: Fluctuation Relations: A Pedagogical Overview

1.1 Preliminaries

1.2 Entropy and the Second Law

1.3 Stochastic Dynamics

1.4 Entropy Generation and Stochastic Irreversibility

1.5 Entropy Production in the Overdamped Limit

1.6 Entropy, Stationarity, and Detailed Balance

1.7 A General Fluctuation Theorem

1.8 Further Results

1.9 Fluctuation Relations for Reversible Deterministic Systems

1.10 Examples of the Fluctuation Relations in Action

1.11 Final Remarks

References

Chapter 2: Fluctuation Relations and the Foundations of Statistical Thermodynamics: A Deterministic Approach and Numerical Demonstration

2.1 Introduction

2.2 The Relations

2.3 Proof of Boltzmann's Postulate of Equal A Priori Probabilities

2.4 Nonequilibrium Free Energy Relations

2.5 Simulations and Results

2.6 Results Demonstrating the Fluctuation Relations

2.7 Conclusion

References

Chapter 3: Fluctuation Relations in Small Systems: Exact Results from the Deterministic Approach

3.1 Motivation

3.2 Formal Development

3.3 Discussion

3.4 Conclusions

Acknowledgments

References

Chapter 4: Measuring Out-of-Equilibrium Fluctuations

4.1 Introduction

4.2 Work and Heat Fluctuations in the Harmonic Oscillator

4.3 Fluctuation Theorem

4.4 The Nonlinear Case: Stochastic Resonance

4.5 Random Driving

4.6 Applications of Fluctuation Theorems

4.7 Summary and Concluding Remarks

Acknowledgments

References

Chapter 5: Recent Progress in Fluctuation Theorems and Free Energy Recovery

5.1 Introduction

5.2 Free Energy Measurement Prior to Fluctuation Theorems

5.3 Single-Molecule Experiments

5.4 Fluctuation Relations

5.5 Control Parameters, Configurational Variables, and the Definition of Work

5.6 Extended Fluctuation Relations

5.7 Free Energy Recovery from Unidirectional Work Measurements

5.8 Conclusions

References

Chapter 6: Information Thermodynamics: Maxwell's Demon in Nonequilibrium Dynamics

6.1 Introduction

6.2 Szilard Engine

6.3 Information Content in Thermodynamics

6.4 Second Law of Thermodynamics with Feedback Control

6.5 Nonequilibrium Equalities with Feedback Control

6.6 Thermodynamic Energy Cost for Measurement and Information Erasure

6.7 Conclusions

Appendix 6.A: Proof of Eq. (6.56)

Acknowledgments

References

Chapter 7: Time-Reversal Symmetry Relations for Currents in Quantum and Stochastic Nonequilibrium Systems

7.1 Introduction

7.2 Functional Symmetry Relations and Response Theory

7.3 Transitory Current Fluctuation Theorem

7.4 From Transitory to the Stationary Current Fluctuation Theorem

7.5 Current Fluctuation Theorem and Response Theory

7.6 Case of Independent Particles

7.7 Time-Reversal Symmetry Relations in the Master Equation Approach

7.8 Transport in Electronic Circuits

7.9 Conclusions

Acknowledgments

References

Chapter 8: Anomalous Fluctuation Relations

8.1 Introduction

8.2 Transient Fluctuation Relations

8.3 Transient Work Fluctuation Relations for Anomalous Dynamics

8.4 Anomalous Dynamics of Biological Cell Migration

8.5 Conclusions

Acknowledgments

References

Part II: Beyond Fluctuation Relations

Chapter 9: Out-of-Equilibrium Generalized Fluctuation–Dissipation Relations

9.1 Introduction

9.2 Generalized Fluctuation–Dissipation Relations

9.3 Random Walk on a Comb Lattice

9.4 Entropy Production

9.5 Langevin Processes without Detailed Balance

9.6 Granular Intruder

9.7 Conclusions and Perspectives

Acknowledgments

References

Chapter 10: Anomalous Thermal Transport in Nanostructures

10.1 Introduction

10.2 Numerical Study on Thermal Conductivity and Heat Energy Diffusion in One-Dimensional Systems

10.3 Breakdown of Fourier's Law: Experimental Evidence

10.4 Theoretical Models

10.5 Conclusions

Acknowledgments

References

Chapter 11: Large Deviation Approach to Nonequilibrium Systems

11.1 Introduction

11.2 From Equilibrium to Nonequilibrium Systems

11.3 Elements of Large Deviation Theory

11.4 Applications to Nonequilibrium Systems

11.5 Final Remarks

Acknowledgments

References

Chapter 12: Lyapunov Modes in Extended Systems

12.1 Introduction

12.2 Numerical Algorithms and LV Correlations

12.3 Universality Classes of Hydrodynamic Lyapunov Modes

12.4 Hyperbolicity and the Significance of Lyapunov Modes

12.5 Lyapunov Spectral Gap and Branch Splitting of Lyapunov Modes in a “Diatomic” System

12.6 Comparison of Covariant and Orthogonal HLMs

12.7 Hyperbolicity and Effective Degrees of Freedom of Partial Differential Equations

12.8 Probing the Local Geometric Structure of Inertial Manifolds via a Projection Method

12.9 Summary

Acknowledgments

References

Chapter 13: Study of Single-Molecule Dynamics in Mesoporous Systems, Glasses, and Living Cells

13.1 Introduction

13.2 Investigation of the Structure of Mesoporous Silica Employing Single-Molecule Microscopy

13.3 Investigation of the Diffusion of Guest Molecules in Mesoporous Systems

13.4 A Test of the Ergodic Theorem by Employing Single-Molecule Microscopy

13.5 Single-Particle Tracking in Biological Systems

13.6 Conclusion and Outlook

Acknowledgments

References

Index

Reviews of Nonlinear Dynamics and Complexity

Schuster, H. G. (ed.)

Reviews of Nonlinear Dynamics and Complexity

Volume 1

2008

ISBN: 978-3-527-40729-3

Schuster, H. G. (ed.)

Reviews of Nonlinear Dynamics and Complexity

Volume 2

2009

ISBN: 978-3-527-40850-4

Schuster, H. G. (ed.)

Reviews of Nonlinear Dynamics and Complexity

Volume 3

2010

ISBN: 978-3-527-40945-7

Grigoriev, R. and Schuster, H. G. (eds.)

Transport and Mixing in Laminar Flows

From Microfluidics to Oceanic Currents

2011

ISBN: 978-3-527-41011-8

Lüdge, K. (ed.)

Nonlinear Laser Dynamics

From Quantum Dots to Cryptography

2011

ISBN: 978-3-527-41100-9

Klages, R., Just, W., Jarzynski, C. (eds.)

Nonequilibrium Statistical Physics of Small Systems

Fluctuation Relations and Beyond

2013

ISBN: 978-3-527-41094-1

Pesenson, M. M. (ed.)

Multiscale Analysis and Nonlinear Dynamics

From Genes to the Brain

2013

ISBN: 978-3-527-41198-6

Niebur, E., Plenz, D., Schuster, H. G. (eds.)

Criticality in Neural Systems

2014

ISBN: 978-3-527-41104-7


All books published by Wiley-VCH are carefully produced. Nevertheless, authors, editors, and publisher do not warrant the information contained in these books, including this book, to be free of errors. Readers are advised to keep in mind that statements, data, illustrations, procedural details or other items may inadvertently be inaccurate.

Library of Congress Card No.: applied for

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library.

Bibliographic information published by the Deutsche Nationalbibliothek

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

© 2013 Wiley-VCH Verlag & Co. KGaA, Boschstr. 12, 69469 Weinheim, Germany

All rights reserved (including those of translation into other languages). No part of this book may be reproduced in any form – by photoprinting, microfilm, or any other means – nor transmitted or translated into a machine language without written permission from the publishers. Registered names, trademarks, etc. used in this book, even when not specifically marked as such, are not to be considered unprotected by law.

Print ISBN: 978-3-527-41094-1

ePDF ISBN: 978-3-527-65873-2

ePub ISBN: 978-3-527-65872-5

mobi ISBN: 978-3-527-65871-8

oBook ISBN: 978-3-527-65870-1

Cover Design Spieszdesign, Neu-Ulm

Typesetting Thomson Digital, Noida, India

Preface

The term small systems denotes objects composed of a limited, small number of particles, as is typical for matter on meso- and nanoscales. The interest of the scientific community in small systems has been boosted by the recent advent of micromanipulation techniques and nanotechnologies. These provide scientific instruments capable of measuring tiny energies in physical systems under nonequilibrium conditions, that is, when these systems are exposed to external forces generated by gradients or fields. Prominent examples of small systems exhibiting nonequilibrium dynamics are biopolymers stretched by optical tweezers (as shown in the lower picture on the book cover), colloidal particles dragged through a fluid by optical traps, and single molecules diffusing through meso- and nanopores.

Understanding the statistical physics of such systems is particularly challenging, because their small size does not allow one to apply standard methods of statistical mechanics and thermodynamics, which presuppose large numbers of particles. Small systems often display an intricate interplay between microscopic nonlinear dynamical properties and macroscopic statistical behavior, leading to highly nontrivial fluctuations of physical observables (cf. the upper picture on the book cover). They can thus serve as a laboratory for understanding the emergence of complexity and irreversibility, in the sense that for a system consisting of many entities the dynamics of the whole is more than the sum of its individual parts.

Studying the behavior of small systems on different spatiotemporal scales becomes particularly interesting in view of nonequilibrium transport phenomena such as diffusion, heat conduction, and electronic transport. Understanding these phenomena in small systems requires novel theoretical concepts that blend ideas and techniques from nonequilibrium statistical physics, thermodynamics, stochastic theory, and dynamical systems theory. More recently, it has become clear that a central role in this field is played by fluctuation relations, which generalize fundamental thermodynamic relations to small systems in nonequilibrium situations.

The aim of this book is to provide an introduction for both theorists and experimentalists to small systems physics, fluctuation relations, and the associated research topics listed in the word cloud diagram shown below. The book should also be useful for graduate-level students who want to explore this new field of research. The individual chapters have been written by internationally recognized experts in small systems physics and provide in-depth introductions to their respective research directions. A multi-author reference book seemed particularly well suited given the vast literature on the different forms of fluctuation relations. While excellent reviews exist that highlight single facets of fluctuation relations, we feel that the field lacks a reference that brings together the most important contributions to this topic in a comprehensive manner. This book is an attempt to fill that gap. In a way, the book may itself act as a complex system, in the sense that, ideally, a new picture of small systems physics and fluctuation relations emerges from the synergy of the individual chapters. Along these lines, our intention was to embed research on fluctuation relations in the wider context of small systems research by pointing out cross-links to other theories and experiments. We thus hope that this book may serve as a catalyst, both fusing existing theories on fluctuation relations and opening up new directions of inquiry in the rapidly growing area of small systems research.

Accordingly, the book is organized into two parts. Part I introduces both the theoretical and experimental foundations of fluctuation relations. It starts with a threefold opening on basic theoretical ideas. The first chapter features a pedagogical introduction to fluctuation relations based on an approach that has been coined “stochastic thermodynamics.” The second chapter outlines a fully deterministic theory of fluctuation relations, working it out both analytically and numerically for a particle in an optical trap. The third chapter generalizes these deterministic ideas and establishes cross-links to the Gallavotti–Cohen fluctuation theorem, which historically was the first to be established, with mathematical rigor, for nonequilibrium steady states. After this theoretical opening, the following two chapters summarize groundbreaking experimental work on two fundamental types of fluctuation relations. Along the lines of Gallavotti and Cohen, the first type is often referred to as “fluctuation theorems,” which generalize the second law of thermodynamics to small systems (see the first formula on the book cover). Fluctuation formulas of this type are tested experimentally in systems where particles are confined by optical traps under nonequilibrium conditions. “Work relations,” on the other hand, generalize an equilibrium relation between work and free energy to nonequilibrium (see the second formula on the book cover). This result is tested in experiments where single DNA and RNA chains are unzipped by optical tweezers. The remaining three chapters of Part I elaborate on aspects of fluctuation relations that have moved into the focus of small systems research more recently. The first introduces the nonequilibrium thermodynamics of information processing using feedback control. The second reviews quantum mechanical generalizations of fluctuation relations applied to electron transport in mesoscopic circuits. The third discusses generalizations of fluctuation relations for stochastic anomalous dynamics, with cross-links to experiments on biological cell migration.

Part II goes beyond fluctuation relations by reviewing topics that, while centered around nonequilibrium fluctuations in small systems, do not focus on fluctuation relations in particular. It starts with a discussion of fluctuation–dissipation relations, which are intimately related to, but must not be confused with, fluctuation relations. A cross-link to the foregoing chapter is provided by a partial treatment of anomalous dynamics, a topic that becomes particularly important for heat conduction in nanostructures, as the subsequent chapter demonstrates from both an experimental and a theoretical point of view. Fluctuation relations bear an important relation to large deviation theory, as outlined in the next chapter, with applications to interacting particle systems. The book concludes with a summary of Lyapunov modes, which provide important information about the phase space dynamics of deterministically chaotic interacting many-particle systems, and with single-molecule spectroscopy experiments on diffusion in meso- and nanopores.

We finally remark that the various points of view expressed in the individual chapters may not always be in full agreement with each other. This became clear in lively discussions between different groups of authors while the book was in preparation. As editors, we did not aim to achieve complete consensus among all authors, as differences of opinion are typical of a very active field of research such as the one presented in this book.

We are most grateful to Heinz-Georg Schuster, the editor of the series Reviews of Nonlinear Dynamics and Complexity, in which this book is published as a Special Issue, for his invitation to edit this book, and for his help in getting the project started. We also thank Vera Palmer and Ulrike Werner from Wiley-VCH Publishers for their kind and efficient assistance in editing this book. C.J. gratefully acknowledges financial support from the National Science Foundation (USA) under grant DMR-0906601. W.J. is grateful for support from the British EPSRC under grant EP/H04812X/1. We finally thank all chapter authors for sharing their expertise in this multi-author monograph. Their strong efforts and enthusiasm for this project were indispensable to its success.

Summer 2012

London

London

College Park, MD

Rainer Klages, Wolfram Just, Christopher Jarzynski

List of Contributors

Anna Alemany

Universitat de Barcelona

Departament de Física Fonamental

Small Biosystems Lab

Avda. Diagonal 647

08028 Barcelona

Spain

and

Instituto de Salud Carlos III

CIBER-BBN de Bioingeniería

Biomateriales y Nanomedicina

C/ Sinesio Delgado 4

28029 Madrid

Spain

L. Bellon

Université de Lyon

Ecole Normale Supérieure de Lyon

Laboratoire de Physique (CNRS UMR 5672)

46 Allée d'Italie

69364 Lyon Cedex 07

France

Christoph Bräuchle

Ludwig-Maximilians-Universität München

Department Chemie

Lehrstuhl für Physikalische

Chemie I

Butenandtstr. 11

81377 Munich

Germany

Aleksei V. Chechkin

National Science Center “Kharkov Institute of Physics and Technology” (NSC KIPT)

Institute for Theoretical Physics

Akademicheskaya Street 1

Kharkov 61108

Ukraine

Sergio Ciliberto

Université de Lyon

Ecole Normale Supérieure de Lyon

Laboratoire de Physique (CNRS UMR 5672)

46 Allée d'Italie

69364 Lyon Cedex 07

France

Peter Dieterich

Technische Universität Dresden

Medizinische Fakultät “Carl Gustav Carus”

Institut für Physiologie

Fetscherstrasse 74

01307 Dresden

Germany

Denis J. Evans

Australian National University

Research School of Chemistry

Building 35 Science Rd

Canberra, ACT 0200

Australia

Ian Ford

University College London

Department of Physics and Astronomy and London Centre for Nanotechnology

Gower Street

London WC1E 6BT

UK

Pierre Gaspard

Université Libre de Bruxelles

Center for Nonlinear Phenomena and Complex Systems and Department of Physics

Campus Plaine

Code Postal 231

Boulevard du Triomphe

1050 Brussels

Belgium

J.R. Gomez-Solano

Université de Lyon

Ecole Normale Supérieure de Lyon

Laboratoire de Physique (CNRS UMR 5672)

46 Allée d'Italie

69364 Lyon Cedex 07

France

G. Gradenigo

Università degli Studi di Roma “La Sapienza”

Dipartimento di Fisica

Piazzale A. Moro 2

00185 Rome

Italy

Rosemary J. Harris

Queen Mary University of London

School of Mathematical Sciences

Mile End Road

London E1 4NS

UK

O.G. Jepps

Griffith University

School of Biomolecular and Physical Sciences

Queensland Micro- and Nanotechnology Centre

170 Kessels Road

Brisbane, Qld 4111

Australia

Rainer Klages

Queen Mary University of London

School of Mathematical Sciences

Mile End Road

London E1 4NS

UK

Baowen Li

National University of Singapore

Department of Physics and Centre for Computational Science and Engineering

Science Drive 2

Singapore 117542

Singapore

and

NUS Graduate School for Integrative Sciences and Engineering

28 Medical Drive

Singapore 117456

Singapore

Sha Liu

National University of Singapore

Department of Physics and Centre for Computational Science and Engineering

Science Drive 2

Singapore 117542

Singapore

and

NUS Graduate School for Integrative Sciences and Engineering

28 Medical Drive

Singapore 117456

Singapore

Stephan Mackowiak

Ludwig-Maximilians-Universität München

Department Chemie

Lehrstuhl für Physikalische

Chemie I

Butenandtstr. 11

81377 Munich

Germany

A. Petrosyan

Université de Lyon

Ecole Normale Supérieure de Lyon

Laboratoire de Physique (CNRS UMR 5672)

46 Allée d'Italie

69364 Lyon Cedex 07

France

A. Puglisi

Università degli Studi di Roma “La Sapienza”

Dipartimento di Fisica

Piazzale A. Moro 2

00185 Rome

Italy

Günter Radons

Chemnitz University of Technology

Institute of Mechatronics and Institute of Physics

09107 Chemnitz

Germany

James C. Reid

The University of Queensland

Australian Institute for Bioengineering and Nanotechnology

AIBN Building (75)

Corner Cooper & College Roads

Brisbane, Qld 4072

Australia

Marco Ribezzi-Crivellari

Universitat de Barcelona

Departament de Física Fonamental

Small Biosystems Lab

Barcelona

Spain

and

Instituto de Salud Carlos III

CIBER-BBN de Bioingeniería, Biomateriales y Nanomedicina

Madrid

Spain

Felix Ritort

Universitat de Barcelona

Departament de Física Fonamental

Small Biosystems Lab

Barcelona

Spain

and

Instituto de Salud Carlos III

CIBER-BBN de Bioingeniería, Biomateriales y Nanomedicina

Madrid

Spain

Lamberto Rondoni

Dipartimento di Scienze Matematiche

Politecnico di Torino

Corso Duca degli Abruzzi 24

10129 Torino

Italy

and

INFN, Sezione di Torino

Via P. Giuria 1

10125 Torino, Italy

Takahiro Sagawa

Kyoto University

The Hakubi Center for Advanced Research

iCeMS Complex 1 West Wing

Yoshida-Ushinomiya-cho, Sakyo-ku

Kyoto 606-8302

Japan

and

Kyoto University

Yukawa Institute of Theoretical Physics

Kitashirakawa Oiwake-Cho, Sakyo-ku

Kyoto 606-8502

Japan

A. Sarracino

Università degli Studi di Roma “La Sapienza”

Dipartimento di Fisica

Piazzale A. Moro 2

00185 Rome

Italy

Debra J. Searles

The University of Queensland

Australian Institute for Bioengineering and Nanotechnology

and School of Chemistry and Molecular Biosciences

AIBN Building (75)

Corner Cooper & College Roads

Brisbane, Qld 4072

Australia

Richard Spinney

University College London

Department of Physics and Astronomy and London Centre for Nanotechnology

Gower Street

London WC1E 6BT

UK

Hugo Touchette

Queen Mary University of London

School of Mathematical Sciences

Mile End Road

London E1 4NS

UK

Masahito Ueda

The University of Tokyo

Department of Physics

7-3-1 Hongo, Bunkyo-ku

Tokyo 113-0033

Japan

D. Villamaina

Università degli Studi di Roma “La Sapienza”

Dipartimento di Fisica

Piazzale A. Moro 2

00185 Rome

Italy

A. Vulpiani

Università degli Studi di Roma “La Sapienza”

Dipartimento di Fisica

Piazzale A. Moro 2

00185 Rome

Italy

Stephen R. Williams

Australian National University

Research School of Chemistry

Building 35 Science Rd

Canberra, ACT 0200

Australia

Hong-Liu Yang

Chemnitz University of Technology

Institute of Mechatronics and Institute of Physics

09107 Chemnitz

Germany

Gang Zhang

Peking University

Key Laboratory for the Physics and Chemistry of Nanodevices and Department of Electronics

Yiheyuan Road 5

Beijing 100871

China

and

National University of Singapore

Department of Physics and Centre for Computational Science and Engineering

Science Drive 2

Singapore 117542

Singapore

Part I

Fluctuation Relations

The contributions to Part I of this book are organized into three clusters of chapters, beginning with theoretical foundations. Spinney and Ford's opening chapter provides a pedagogical overview of fluctuation relations, summarizing key results in the field and emphasizing their connection with the second law of thermodynamics. These results are derived within the framework of continuous Markovian stochastic dynamics, in which the effects of thermal surroundings on the evolution of a system of interest are modeled by random noise. The next chapter, by Reid, Williams, Searles, Rondoni, and Evans, takes a complementary approach, using fully deterministic equations of motion to model the system's evolution. The authors show that a number of related results follow directly from the consideration of an appropriately defined dissipation function. These results are then illustrated using a conceptually simple and experimentally accessible system: a micron-size bead trapped with laser tweezers. Finally, Rondoni and Jepps' chapter makes a distinction between physically motivated fluctuation relations, such as those considered in the first two contributions, and the study of similar results within the framework of dynamical systems theory, where the emphasis is on generality and mathematical rigor rather than specific physical realizations. By developing a rigorous formalism for physically motivated fluctuation relations, Rondoni and Jepps explore this distinction in detail, and illuminate a number of issues such as the relationship between transient and steady-state fluctuation relations.

The second cluster of chapters within Part I considers experimental foundations. The contribution by Bellon, Gomez-Solano, Petrosyan, and Ciliberto reviews experiments in which the fluctuations of a system away from thermal equilibrium are measured and compared with theory. The experiments include a torsional pendulum, a polystyrene particle trapped optically in a double well, and a cantilever used for atomic force microscopy (AFM), and together they provide a set of experimental platforms for testing a variety of fluctuation relations. Next, Alemany, Ribezzi-Crivellari, and Ritort provide an introduction and up-to-date review of an important application of fluctuation relations, namely, the recovery of equilibrium free energy differences from out-of-equilibrium single-molecule pulling experiments. Among other issues, this contribution discusses and illustrates the importance of using the appropriate microscopic definition of work.

Further developments are discussed in the third and final set of chapters of Part I. Sagawa and Ueda review the thermodynamics of feedback control, in which an external observer uses information about the fluctuations of a small system to guide its subsequent evolution, as in Maxwell's famous thought experiment. They discuss how fluctuation relations and the second law of thermodynamics itself are modified in this setting. Gaspard then gives an overview of the time-symmetry relations that are obeyed by out-of-equilibrium systems. His contribution focuses on both quantum and stochastic dynamics, and discusses applications to electron transport in mesoscopic circuits. In the closing chapter of Part I, Klages, Chechkin, and Dieterich investigate and extend fluctuation relations in the context of anomalous dynamics, as modeled by Lévy flights, long-time correlated Gaussian stochastic processes, and time-fractional kinetics. Such anomalous dynamics arise in physically relevant situations such as cell migration.

Theoretical Foundations

1

Fluctuation Relations: A Pedagogical Overview

Richard Spinney and Ian Ford

1.1 Preliminaries

Ours is a harsh and unforgiving universe, and not just in the little matters that conspire against us. Its complicated rules of evolution seem unfairly biased against those who seek to predict the future. Of course, if the rules were simple, then there might be no universe of any complexity worth considering. Perhaps richness of behavior emerges only because each component of the universe interacts with many others and in ways that are very sensitive to details: this is the harsh and unforgiving nature. In order to predict the future, we have to take into account all the connections between the components, since they might be crucial to the evolution; furthermore, we need to know everything about the present in order to predict the future: both of these requirements are in most cases impossible. Estimates and guesses are not enough: unforgiving sensitivity to the details very soon leads to loss of predictability. We see this in the workings of a weather system. The approximations that meteorological services make in order to fill gaps in understanding, or initial data, eventually make the forecasts inaccurate.

So a description of the dynamics of a complex system is likely to be incomplete and we have to accept that predictions will be uncertain. If we are careful in the modeling of the system, the uncertainty will grow only slowly. If we are sloppy in our model building or initial data collection, it will grow quickly. We may expect the predictions of any incomplete model to tend toward a state of general ignorance, whereby we cannot be sure about anything: rain, snow, heat wave, or hurricane. We must expect there to be a spread, or fluctuations, in the outcomes of such a model.
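The rapid loss of predictability described above can be made concrete with a toy model. The following sketch is our own illustration, not an example from this chapter: it iterates the chaotic logistic map from two initial conditions differing by one part in ten billion and records how far apart the trajectories drift.

```python
# Sensitive dependence on initial conditions: two trajectories of the
# chaotic logistic map x -> r*x*(1 - x) with r = 4, started 1e-10 apart.
r = 4.0
x, y = 0.4, 0.4 + 1e-10
max_sep = 0.0
for step in range(100):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_sep = max(max_sep, abs(x - y))

# The tiny initial difference is amplified roughly exponentially, so within
# a few dozen steps the two "forecasts" bear no resemblance to each other.
print(max_sep)
```

Within 100 iterations the separation grows from 10^-10 to order one: any error in the initial data, however small, eventually dominates the prediction.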

This discussion of the growth of uncertainty in predictions has a bearing on another matter: the apparent irreversibility of all but the most simple physical processes. This refers to our inability to drive a system exactly backward by reversing the external forces that guide its evolution. Consider the mechanical work required to compress a gas by a piston in a cylinder. We might hope to see the expended energy returned when we stop pushing and allow the gas to drive the piston all the way back to the starting point: but not all will be returned. The system seems to mislay some energy to the benefit of the wider environment. This is the familiar process of friction. The one-way dissipation of energy during mechanical processing is an example of the famous second law of thermodynamics. But the process is actually rather mysterious: What about the underlying reversibility of Newton's equations of motion? Why is the leakage of energy one way?

We may suspect that a failure to engineer the exact reversal of a compression is simply a consequence of a lack of control over all components of the gas and its environment: the difficulty in setting things up properly for the return leg implies the virtual impossibility of retracing the behavior. So we might not expect to be able to retrace exactly. But why do we not sometimes see “antifriction”? A clue might be seen in the relative size and complexity of the system and its environment. The smaller system is likely to evolve in a more complicated fashion as a result of the coupling, while we may expect the larger environment to be much less affected. There is a disparity in the effect of the coupling on each participant, and it is believed that this is responsible for the apparent one-way nature of friction. It is possible to implement these ideas by modeling the behavior of a system using uncertain or stochastic dynamics. The probability of observing a reversal of the behavior on the return leg can be calculated explicitly, and it turns out that the difference between the probability of observing a particular compression and that of seeing its reverse on the return leg leads to a measure of the irreversibility of natural processes. The second law is then a rather simple consequence of the dynamics. A similar asymmetric treatment of the effect on a system of coupling to a large environment is possible using deterministic and reversible nonlinear dynamics. In both cases, Loschmidt's paradox, the apparent incompatibility of macroscopic irreversibility with the time reversal symmetry of the underlying equations of motion, is evaded, although for different reasons.
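The idea that irreversibility can be quantified by comparing the probabilities of a trajectory and its reverse can be checked exactly in a hypothetical toy model (our own sketch, not one treated in this chapter): a two-state Markov chain. Defining the total entropy production of a path as the logarithm of the ratio of forward to time-reversed path probabilities, the exact average of exp(-ΔS) over all paths equals one, from which the second-law-like statement ⟨ΔS⟩ ≥ 0 follows by Jensen's inequality.

```python
import itertools
import math

# Column-stochastic transition matrix: T[j][i] = P(next = j | current = i).
T = [[0.9, 0.3],
     [0.1, 0.7]]
p0 = [0.5, 0.5]   # initial distribution (deliberately not the stationary one)
n = 4             # trajectory length in steps

# Propagate the distribution forward to obtain the actual final distribution.
p = p0[:]
for _ in range(n):
    p = [sum(T[j][i] * p[i] for i in range(2)) for j in range(2)]
pn = p

ift = 0.0      # accumulates <exp(-dS_tot)>  (integral fluctuation theorem)
mean_ds = 0.0  # accumulates <dS_tot>        (average entropy production)
for path in itertools.product(range(2), repeat=n + 1):
    pf = p0[path[0]]                 # forward path probability
    for t in range(n):
        pf *= T[path[t + 1]][path[t]]
    pr = pn[path[n]]                 # probability of the time-reversed path
    for t in range(n, 0, -1):
        pr *= T[path[t - 1]][path[t]]
    ds = math.log(pf / pr)           # total entropy production of this path
    ift += pf * math.exp(-ds)
    mean_ds += pf * ds

print(ift, mean_ds)  # ift equals 1 up to rounding; mean_ds is positive
```

Individual paths can have negative entropy production, yet the constraint ⟨exp(-ΔS)⟩ = 1 forces the average to be nonnegative: the second law emerges as a statement about typical, not all, trajectories.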

This chapter describes the so-called fluctuation relations, or theorems [1–5], that emerge from the analysis of a physical system interacting with its environment and that provide the structure that leads to the conclusion just outlined. They can quantify unexpected outcomes in terms of the expected. They apply on microscopic as well as macroscopic scales, and indeed their consequences are most apparent when applied to small systems. They can be derived on the basis of a rather natural measure of irreversibility, just alluded to, that offers an interpretation of the second law and the associated concept of entropy production. The dynamical rules that control the universe might seem harsh and unforgiving, but they can also be charitable and from them have emerged fluctuation relations that seem to provide a better understanding of entropy, uncertainty, and the limits of predictability.
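As a concrete illustration of how fluctuation relations "quantify unexpected outcomes in terms of the expected" (again our own sketch, not an example from the text): if the work dissipated in a process is Gaussian distributed with ΔF = 0 and W measured in units of k_BT, the detailed relation p(-W)/p(W) = exp(-W) holds exactly whenever the variance equals twice the mean. The code below checks this identity at a few values of W.

```python
import math

def gauss(w, mu, var):
    """Normal probability density with mean mu and variance var."""
    return math.exp(-(w - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

mu = 2.0        # mean dissipated work, in units of k_B T
var = 2.0 * mu  # fluctuation-theorem-consistent variance

# p(-W)/p(W) = exp(-W): apparent "violations" of the second law
# (negative dissipated work) are exponentially rare, but they occur
# with a precisely prescribed probability.
for w in (0.5, 1.0, 3.0):
    ratio = gauss(-w, mu, var) / gauss(w, mu, var)
    print(w, ratio, math.exp(-w))
```

The relation pins down exactly how improbable second-law-defying fluctuations are, which is why its consequences are most visible in small systems, where W is only a few k_BT.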

This chapter is structured as follows. In order to provide a context for the fluctuation relations suitable for newcomers to the field, we begin with a brief summary of thermodynamic irreversibility and then describe how stochastic dynamics might be modeled. We use a framework based on stochastic rather than deterministic dynamics, since developing both themes here might not provide the most succinct pedagogical introduction. Nevertheless, we refer to the deterministic framework briefly later on to emphasize its equivalence. We discuss the identification of entropy production with the degree of departure from dynamical reversibility and then take a careful look at the developments that follow, which include the various fluctuation relations, and consider how the second law might not operate as we expect. We illustrate the fluctuation relations using simple analytical models as an aid to understanding. We conclude with some final remarks, but the broader implications are to be found elsewhere in this book, for which we hope this chapter will serve as a helpful background.

1.2 Entropy and the Second Law

Ignorance and uncertainty have never been an unusual state of affairs in human perception. In mechanics, Newton's laws of motion provided tools that seemed to dispel some of the haze: here were mathematical models that enabled the future to be foretold! They inspired attempts to predict future behavior in other fields, particularly in thermodynamics, the study of systems through which matter and energy can flow. The particular focus in the early days of the field was the heat engine, a device whereby fuel and the heat it can generate can be converted into mechanical work. Its operation was discovered to produce a quantity called entropy that could characterize the efficiency with which energy in the fuel could be converted into motion. Indeed, entropy seemed to be generated whenever heat or matter flowed. The second law of thermodynamics famously states that the total entropy of the evolving universe is always increasing. But this statement still attracts discussion, more than 150 years after its introduction. We do not debate the meaning of Newton's second law anymore, so why is the second law of thermodynamics so controversial?

Well, it is hard to understand how there can be a physical quantity that never decreases. Such a statement demands the breakage of the principle of time reversal symmetry, a difficulty referred to as Loschmidt's paradox. Newton's equations of motion do not specify a preferred direction in which time evolves. Time is a coordinate in a description of the universe and it is just a convention that real-world events take place while this coordinate increases. Given that we cannot actually run time backward, we can demonstrate this symmetry in the following way. A sequence of events that take place according to time reversal symmetric equations can be inverted by instantaneously reversing all the velocities of all the participating components and then proceeding forward in time once again, suitably reversing any external protocol of driving forces, if necessary. The point is that any evolution can be imagined in reverse, according to Newton. We therefore do not expect to observe any quantity ever-increasing with time. This is the essence of Loschmidt's objection to Boltzmann's [6] mechanical interpretation of the second law.

Nobody, however, has been able to initiate a heat engine such that it sucks exhaust gases back into its furnace and combines them into fuel. The denial of such a spectacle is empirical evidence for the operation of the second law, but it is also an expression of Loschmidt's paradox. Time reversal symmetry is broken by the apparent illegality of entropy-consuming processes and that seems unacceptable. Perhaps we should not blindly accept the second law in the sense that has traditionally been ascribed to it. Or perhaps there is something deeper going on. Furthermore, a law that only specifies the sign of a rate of change sounds rather incomplete.

But what has emerged in the past two decades or so is the realization that Newton's laws of motion, when supplemented by the acceptance of uncertainty in the way systems behave, brought about by roughly specified interactions with the environment, can lead quite naturally to a quantity that grows with time, that is, uncertainty itself. It is reasonable to presume that incomplete models of the evolution of a physical system will generate additional uncertainty in the reliability of the description of the system as they are evolved. If the velocities were all instantaneously reversed, in the hope that a previous sequence of events might be reversed, uncertainty would continue to grow within such a model. We shall, of course, need to quantify this vague notion of uncertainty. Newton's laws on their own are time reversal symmetric, but intuition suggests that the injection and evolution of configurational uncertainty would break the symmetry. Entropy production might therefore be equivalent to the leakage of our confidence in the predictions of an incomplete model: an interpretation that ties in with prevalent ideas of entropy as a measure of information.

Before we proceed further, we need to remind ourselves about the phenomenology of irreversible classical thermodynamic processes [7]. A system possesses energy $E$ and can receive additional incremental contributions in the form of heat $\mathrm{d}Q$ from a heat bath at temperature $T$ and work $\mathrm{d}W$ from an external mechanical device that might drag, squeeze, or stretch the system. It helps perhaps to view $\mathrm{d}Q$ and $\mathrm{d}W$ roughly as increments in kinetic and in potential energy, respectively. We write the first law of thermodynamics (energy conservation) in the form $\mathrm{d}E = \mathrm{d}Q + \mathrm{d}W$. The second law is then traditionally given as Clausius' inequality:

(1.1) $\oint \dfrac{\mathrm{d}Q}{T} \le 0$

where the integration symbol $\oint$ means that the system is taken around a cycle of heat and work transfers, starting and ending in thermal equilibrium with the same macroscopic system parameters, such as temperature and volume. The temperature $T$ of the heat bath might change with time, though by definition and in recognition of its presumed large size it always remains in thermal equilibrium, and the volume and shape imposed upon the system during the process might also be time dependent. We can also write the second law for an incremental thermodynamic process as

(1.2) $\mathrm{d}S_{\rm tot} = \mathrm{d}S + \mathrm{d}S_{\rm med} \ge 0$

where each term is an incremental entropy change, the system again starting and ending in equilibrium. The change in system entropy is denoted $\mathrm{d}S$ and the change in entropy of the heat bath, or the surrounding medium, is defined as

(1.3) $\mathrm{d}S_{\rm med} = -\dfrac{\mathrm{d}Q}{T}$

such that $\mathrm{d}S_{\rm tot} = \mathrm{d}S + \mathrm{d}S_{\rm med}$ is the total entropy change of the two combined (the “universe”). We see that Eq. (1.1) corresponds to the condition $\oint \mathrm{d}S_{\rm tot} \ge 0$, since $\oint \mathrm{d}S = 0$ around a closed cycle of the system. A more powerful reading of the second law is that

(1.4) $\Delta S_{\rm tot} = \Delta S + \Delta S_{\rm med} \ge 0$

for any incremental segment of a thermodynamic process, as long as it starts and ends in equilibrium. An equivalent expression of the law would be to combine these statements to write $T\,\mathrm{d}S \ge \mathrm{d}Q$, from which we conclude that the dissipative work (sometimes called irreversible work) in an isothermal process,

(1.5) $\Delta W_{\rm d} = \Delta W - \Delta F \ge 0$

is never negative, where $\Delta F$ is the change in Helmholtz free energy. We may also write $\Delta S = \Delta S_{\rm tot} + \Delta Q/T$ and regard $\Delta S_{\rm tot}$ as a contribution to the change in entropy of a system that is not associated with a flow of entropy from the heat bath, the $\Delta Q/T$ term. For a thermally isolated system, where $\Delta Q = 0$, we have $\Delta S = \Delta S_{\rm tot} \ge 0$ and the second law then says that the system entropy increase is due to “internal” generation; hence, $\Delta S_{\rm tot}$ is sometimes [7] denoted $\Delta S_{\rm i}$.
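As a toy numerical illustration of this bookkeeping (not taken from the text), consider a quantity of heat $Q$ leaking irreversibly from a hot bath to a cold one: each bath remains internally in equilibrium, so its entropy change is $\pm Q/T$, and the total change is positive, as Eq. (1.4) requires. The function and parameter values below are hypothetical.

```python
# Toy check of the entropy bookkeeping for an irreversible heat leak:
# heat Q flows from a hot bath at T_hot to a cold bath at T_cold.

def total_entropy_change(Q, T_hot, T_cold):
    """dS_tot for heat Q passing from the hot bath to the cold bath."""
    dS_hot = -Q / T_hot    # hot bath loses heat Q
    dS_cold = Q / T_cold   # cold bath gains the same heat
    return dS_hot + dS_cold

# 100 J passing from 400 K to 300 K:
dS = total_entropy_change(Q=100.0, T_hot=400.0, T_cold=300.0)
print(dS)  # 100/300 - 100/400 = 1/12, positive
```

The sign of the result flips if $T_{\rm hot} < T_{\rm cold}$, which is precisely the direction of spontaneous heat flow that the second law forbids.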

Boltzmann tried to explain what this ever-increasing quantity might represent at a microscopic level [6]. He considered a thermally isolated gas of particles interacting through pairwise collisions within a framework of classical mechanics. The quantity

(1.6) $H(t) = \displaystyle\int f(v,t) \ln f(v,t)\, \mathrm{d}v$

where $f(v,t)\,\mathrm{d}v$ is the population of particles with a velocity in the range $\mathrm{d}v$ about $v$, can be shown to decrease with time, or remain constant if the population is in a Maxwell–Boltzmann distribution characteristic of thermal equilibrium. Boltzmann obtained this result by assuming that the collision rate between particles at velocities $v_1$ and $v_2$ is proportional to the product of populations at those velocities, that is, $f(v_1,t)f(v_2,t)$. He proposed that $H$ was proportional to the negative of system entropy and that his so-called $H$-theorem provides a sound microscopic and mechanical justification for the second law. Unfortunately, this does not hold up. As Loschmidt pointed out, Newton's laws of motion cannot lead to a quantity that always decreases with time: a monotonic decrease in $H$ would be incompatible with the principle of time reversal symmetry that underlies the dynamics. The $H$-theorem does have a meaning, but it is statistical: the decrease in $H$ is an expected, but not guaranteed, result. Alternatively, it is a correct result for a dynamical system that does not adhere to time reversal symmetric equations of motion. The neglect of correlation between the velocities of colliding particles, both in the past and in the future, is where the model departs from Newtonian dynamics.

The same difficulty emerges in another form when, following Gibbs, it is proposed that the entropy of a system might be viewed as a property of an ensemble of many systems, each sampled from a probability density $\rho(\Gamma)$, where $\Gamma$ denotes the positions and velocities of all the particles in a system. Gibbs wrote

(1.7) $S_{\rm G} = -k_{\rm B} \displaystyle\int \rho(\Gamma) \ln \rho(\Gamma)\, \mathrm{d}\Gamma$

where $k_{\rm B}$ is Boltzmann's constant and the integration is over all phase space. The Gibbs representation of entropy is compatible with classical equilibrium thermodynamics. But the probability density $\rho$ for an isolated system should evolve in time according to Liouville's theorem, in such a way that $S_{\rm G}$ is a constant of the motion. How, then, can the entropy of an isolated system, such as the universe, increase? Either equation (1.7) is valid only for equilibrium situations, something has been left out, or too much has been assumed.

The resolution of this problem is that Gibbs' expression can represent thermodynamic entropy, but only if $\rho$ is not taken to provide an exact representation of the state of the universe or, if you wish, of an ensemble of universes. At the very least, practicality requires us to separate the universe into a system about which we may know and care a great deal and an environment with which the system interacts, which is much less precisely monitored. This indeed is one of the central principles of thermodynamics. We are obliged by this incompleteness to represent the probability of environmental details in a so-called coarse-grained fashion, which has the effect that the probability density appearing in Gibbs' representation of the system entropy evolves not according to Liouville's equations, but according to versions with additional terms that represent the effect of an uncertain environment upon an open system. This then allows $S_{\rm G}$ to change, the detailed nature of which will depend on exactly how the environmental forces are represented.

For an isolated system, however, an increase in $S_{\rm G}$ will emerge only if we are obliged to coarse-grain aspects of the system itself. This line of development could be considered rather unsatisfactory, since it makes the entropy of an isolated system grain-size dependent, and alternatives may be imagined where the entropy of an isolated system is represented by something other than $S_{\rm G}$. The reader is directed to the literature [8] for further consideration of this matter. However, in this chapter, we shall concern ourselves largely with entropy generation brought about by systems in contact with coarse-grained environments described using stochastic forces, and within such a framework Gibbs' representation of system entropy will suffice.

We shall discuss a stochastic representation of the additional terms in the system's dynamical equations in the next section, but it is important to note that a deterministic description of environmental effects is also possible, and it might perhaps be thought more natural. On the other hand, the development using stochastic environmental forces is in some ways easier to present. But it should be appreciated that some of the early work on fluctuation relations was developed using deterministic so-called thermostats [1, 9], and that this theme is represented briefly in Section 1.9, and elsewhere in this book.

1.3 Stochastic Dynamics

1.3.1 Master Equations

We pursue the assertion that sense can be made of the second law, its realm of applicability and its failings, when Newton's laws are supplemented by the explicit inclusion of a developing configurational uncertainty. The deterministic rules of evolution of a system need to be replaced by rules for the evolution of the probability that the system should take a particular configuration. We must first discuss what we mean by probability. Traditionally, it is the limiting frequency that an event might occur among a large number of trials. But there is also a view that probability represents a distillation, in numerical form, of the best judgment or belief about the state of a system: our information [10]. It is a tool for the evaluation of expectation values of system properties, representing what we expect to observe based on information about a system. Fortunately, the two interpretations lead to laws for the evolution of probability that are of similar form.

So let us derive equations that describe the evolution of probability for a simple case. Consider a random walk in one dimension, where a step of variable size is taken at regular time intervals [11–13]. We write the master equation describing such a stochastic process:

(1.8) $P_{n+1}(x) = \displaystyle\int T_n(\Delta x \,|\, x - \Delta x)\, P_n(x - \Delta x)\, \mathrm{d}\Delta x$

where $P_n(x)$ is the probability that the walker is at position $x$ at timestep $n$, and $T_n(\Delta x \,|\, x - \Delta x)$ is the transition probability for making a step of size $\Delta x$ in timestep $n$ given a starting position of $x - \Delta x$. The transition probability may be considered to represent the effect of the environment on the walker. We presume that Newtonian forces cause the move to be made, but we do not know enough about the environment to model the event any better than this. We have assumed the Markov property such that the transition probability does not depend on the previous history of the walker, but only on the position prior to making the step. It is normalized such that

(1.9) $\displaystyle\int T_n(\Delta x \,|\, x)\, \mathrm{d}\Delta x = 1$

since the total probability that any transition is made, starting from $x$, is unity. The probability that the walker is at position $x$ at timestep $n$ is a sum of probabilities of all possible previous histories that lead to this situation. In the Markov case, the master equation shows that these path probabilities are products of transition probabilities and the probability of an initial situation, a simple viewpoint that we shall exploit later.
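The repeated application of the master equation can be sketched numerically. The example below (a hypothetical walk on a ring of $L$ sites, with step probabilities chosen arbitrarily and independent of position) propagates the occupation probabilities forward and confirms that total probability is conserved:

```python
import numpy as np

L = 50                                  # number of sites on a ring
step_prob = {-1: 0.3, 0: 0.4, +1: 0.3}  # T(dx): illustrative step probabilities

P = np.zeros(L)
P[L // 2] = 1.0                         # walker starts at the middle site

for n in range(100):                    # repeated master-equation updates
    P_new = np.zeros(L)
    for dx, w in step_prob.items():
        # P_{n+1}(x) = sum_dx T(dx) P_n(x - dx); np.roll shifts P by dx
        P_new += w * np.roll(P, dx)
    P = P_new

print(P.sum())  # total probability stays 1 (up to rounding)
```

Because the step distribution is symmetric and the walker starts at the center, the resulting distribution remains symmetric about the starting site, spreading outward with each timestep.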

1.3.2 Kramers–Moyal and Fokker–Planck Equations

The Kramers–Moyal and Fokker–Planck equations describe the evolution of probability density functions, denoted $p(x,t)$, which are continuous in space (KM) and additionally in time (FP). We start with the Chapman–Kolmogorov equation, an integral form of the master equation for the evolution of a probability density function that is continuous in space:

(1.10) $p(x, t + \tau) = \displaystyle\int T(\Delta x \,|\, x - \Delta x, t; \tau)\, p(x - \Delta x, t)\, \mathrm{d}\Delta x$

We have swapped the discrete time label $n$ for a parameter $t$. The quantity $T(\Delta x \,|\, x - \Delta x, t; \tau)$ describes a jump from $x - \Delta x$ through distance $\Delta x$ in a period $\tau$ starting from time $t$. Note that $T$ now has dimensions of inverse length (it is really a Markovian transition probability density), and is normalized according to $\int T(\Delta x \,|\, x, t; \tau)\, \mathrm{d}\Delta x = 1$.

We can turn this integral equation into a differential equation by expanding the integrand in $\Delta x$ to get

(1.11) $p(x, t + \tau) = \displaystyle\int \sum_{n=0}^{\infty} \frac{(-\Delta x)^n}{n!} \frac{\partial^n}{\partial x^n} \left[ T(\Delta x \,|\, x, t; \tau)\, p(x, t) \right] \mathrm{d}\Delta x$

and define the Kramers–Moyal coefficients, proportional to moments of $T$,

(1.12) $M_n(x, t, \tau) = \dfrac{1}{n!} \displaystyle\int \Delta x^n\, T(\Delta x \,|\, x, t; \tau)\, \mathrm{d}\Delta x$

to obtain the (discrete time) Kramers–Moyal equation:

(1.13) $p(x, t + \tau) - p(x, t) = \displaystyle\sum_{n=1}^{\infty} \left( -\frac{\partial}{\partial x} \right)^n \left[ M_n(x, t, \tau)\, p(x, t) \right]$

Sometimes the Kramers–Moyal equation is defined with a time derivative of $p$ on the left-hand side instead of a difference.

Equation (1.13) is rather intractable, due to the infinite number of higher derivatives on the right-hand side. However, we might wish to confine attention to evolution in continuous time and consider only stochastic processes that are continuous in space in this limit. This excludes processes that involve discontinuous jumps: the allowed step lengths must go to zero as the timestep goes to zero. In this limit, every Kramers–Moyal coefficient vanishes except the first and second, consistent with the Pawula theorem, and the limits $D^{(1)}(x,t) = \lim_{\tau \to 0} M_1/\tau$ and $D^{(2)}(x,t) = \lim_{\tau \to 0} M_2/\tau$ remain finite. Furthermore, the difference on the left-hand side of Eq. (1.13) becomes a time derivative and we end up with the Fokker–Planck equation (FPE):

(1.14) $\dfrac{\partial p(x, t)}{\partial t} = -\dfrac{\partial}{\partial x} \left[ D^{(1)}(x, t)\, p(x, t) \right] + \dfrac{\partial^2}{\partial x^2} \left[ D^{(2)}(x, t)\, p(x, t) \right]$

We can define a probability current,

(1.15) $J(x, t) = D^{(1)}(x, t)\, p(x, t) - \dfrac{\partial}{\partial x} \left[ D^{(2)}(x, t)\, p(x, t) \right]$

and view the FPE as a continuity equation for probability density:

(1.16) $\dfrac{\partial p(x, t)}{\partial t} = -\dfrac{\partial J(x, t)}{\partial x}$

The FPE reduces to the familiar diffusion equation if we take $D^{(1)}$ and $D^{(2)}$ to be zero and a constant $D$, respectively. Note that it is probability that is diffusing, not a physical property like gas concentration. For example, consider the limit of the symmetric Markov random walk in one dimension as timestep and spatial step go to zero: the so-called Wiener process. The probability density evolves according to

(1.17) $\dfrac{\partial p(x, t)}{\partial t} = D \dfrac{\partial^2 p(x, t)}{\partial x^2}$

with an initial condition $p(x, 0) = \delta(x)$, if the walker starts at the origin. The statistical properties of the process are represented by the probability density that satisfies this equation, that is,

(1.18) $p(x, t) = \dfrac{1}{\sqrt{4 \pi D t}} \exp\left( -\dfrac{x^2}{4 D t} \right)$

representing the increase in positional uncertainty of the walker as time progresses.
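A minimal simulation can make this spreading concrete: summing independent Gaussian increments of variance $2D\,\mathrm{d}t$ reproduces the Wiener process, whose positional variance grows linearly as $2Dt$. The parameter values and sample sizes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
D, dt = 0.5, 1e-3                  # diffusion coefficient and timestep (illustrative)
n_steps, n_walkers = 1000, 20000   # total time t = 1.0

x = np.zeros(n_walkers)            # all walkers start at the origin
for _ in range(n_steps):
    # each step is Gaussian with variance 2*D*dt, the continuum limit of the walk
    x += rng.normal(0.0, np.sqrt(2 * D * dt), size=n_walkers)

t = n_steps * dt
print(x.mean(), x.var())  # mean near 0, variance near 2*D*t = 1.0
```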

1.3.3 Ornstein–Uhlenbeck Process

We now consider a very important stochastic process describing the evolution of the velocity $v$ of a particle. We shall approach this from a different viewpoint: a treatment of the dynamics where Newton's equations are supplemented by environmental forces, some of which are stochastic. It is proposed that the environment introduces a linear damping term together with random noise:

(1.19) $\dfrac{\mathrm{d}v}{\mathrm{d}t} = -\gamma v + b\, \xi(t)$

where $\gamma$ is the friction coefficient, $b$ is a constant, and $\xi(t)$ has statistical properties $\langle \xi(t) \rangle = 0$, where the angle brackets represent an expectation over the probability distribution of the noise, and $\langle \xi(t) \xi(t') \rangle = \delta(t - t')$, which states that the so-called “white” noise is sampled from a distribution with no autocorrelation in time. The singular variance of the noise might seem to present a problem, but it can be accommodated. This is the Langevin equation. We can demonstrate that it is equivalent to a description based on a Fokker–Planck equation by evaluating the KM coefficients, considering Eq. (1.12) in the form

(1.20) $M_n(v, t, \tau) = \dfrac{1}{n!} \left\langle \left( v(t + \tau) - v(t) \right)^n \right\rangle$

and in the continuum limit where $D^{(n)} = \lim_{\tau \to 0} M_n/\tau$. This requires an equivalence between the average of $(v(t+\tau) - v(t))^n$ over a transition probability density and the average over the statistics of the noise $\xi(t)$. We integrate Eq. (1.19) for small $\tau$ to get

(1.21) $v(t + \tau) - v(t) \approx -\gamma v(t)\, \tau + b \displaystyle\int_t^{t + \tau} \xi(t')\, \mathrm{d}t'$

and according to the properties of the noise, this gives $\langle v(t+\tau) - v(t) \rangle = -\gamma v \tau$ to leading order in $\tau$, such that $D^{(1)} = -\gamma v$. We also construct $\langle (v(t+\tau) - v(t))^2 \rangle$ and, using the appropriate statistical properties and the continuum limit, we get $D^{(2)} = b^2/2$ and $D^{(n)} = 0$ for $n \ge 3$. We have therefore established that the FPE equivalent to the Langevin equation (Eq. (1.19)) is

(1.22) $\dfrac{\partial p(v, t)}{\partial t} = \dfrac{\partial}{\partial v} \left[ \gamma v\, p(v, t) \right] + \dfrac{b^2}{2} \dfrac{\partial^2 p(v, t)}{\partial v^2}$

The stationary solution to this equation ought to be the Maxwell–Boltzmann velocity distribution of a particle of mass $m$ in thermal equilibrium with an environment at temperature $T$, so $b$ must be related to $\gamma$ and $T$ in the form $b^2 = 2\gamma k_{\rm B} T/m$, where $k_{\rm B}$ is Boltzmann's constant. This is a connection known as a fluctuation–dissipation relation: $b$ characterizes the fluctuations and $\gamma$ the dissipation or damping in the Langevin equation. Furthermore, it may be shown that the time-dependent solution to Eq. (1.22), with initial condition $v = v_0$ at time $t_0$, is

(1.23) $p_{\rm OU}(v, t \,|\, v_0, t_0) = \sqrt{\dfrac{\gamma}{\pi b^2 \left( 1 - e^{-2\gamma(t - t_0)} \right)}}\, \exp\left[ -\dfrac{\gamma \left( v - v_0 e^{-\gamma(t - t_0)} \right)^2}{b^2 \left( 1 - e^{-2\gamma(t - t_0)} \right)} \right]$

This is a Gaussian with time-dependent mean and variance. The notation $p_{\rm OU}(v, t \,|\, v_0, t_0)$ characterizes this as a transition probability density for the so-called Ornstein–Uhlenbeck process starting from initial value $v_0$ at initial time $t_0$, and ending at the final value $v$ at time $t$.
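The relaxation toward the Maxwell–Boltzmann distribution can be checked with a simple Euler–Maruyama integration of the Langevin equation, choosing the noise strength from the fluctuation–dissipation relation $b^2 = 2\gamma k_{\rm B}T/m$. The parameter values, timestep, and ensemble size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 2.0                            # friction coefficient (illustrative)
kBT_over_m = 1.5                       # kB*T/m in arbitrary units
b = np.sqrt(2 * gamma * kBT_over_m)    # fluctuation-dissipation relation
dt, n_steps, n_particles = 1e-3, 3000, 10000

v = np.zeros(n_particles)              # ensemble starts at rest
for _ in range(n_steps):
    # Euler-Maruyama step: dv = -gamma*v*dt + b*sqrt(dt)*N(0,1)
    v += -gamma * v * dt + b * np.sqrt(dt) * rng.normal(size=n_particles)

print(v.var())  # settles near kB*T/m = 1.5, the Maxwell-Boltzmann variance
```

The total integration time here is several multiples of the relaxation time $1/\gamma$, so the ensemble has effectively forgotten its initial condition, in line with the decay of the mean in Eq. (1.23).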

The same mathematics can be used to describe the motion of a particle in a harmonic potential $\varphi(x) = \kappa x^2/2$, in the limit where the frictional damping coefficient $\gamma$ is very large. The Langevin equations that describe the dynamics are $\mathrm{d}x/\mathrm{d}t = v$ and $\mathrm{d}v/\mathrm{d}t = -\gamma v - (\kappa/m)x + b\,\xi(t)$, which reduce in this so-called overdamped limit to

(1.24) $\dfrac{\mathrm{d}x}{\mathrm{d}t} = -\dfrac{\kappa}{m \gamma} x + \dfrac{b}{\gamma} \xi(t)$

which then has the same form as Eq. (1.19), but for position instead of velocity. The transition probability (1.23), recast in terms of $x$, can therefore be employed.

In summary, the evolution of a system interacting with a coarse-grained environment can be modeled using a stochastic treatment that includes time-dependent random external forces. However, these really represent the effect of uncertainty in the initial conditions for the system and its environment: indefiniteness in some of these initial environmental conditions might only have an impact upon the system at a later time. For example, the uncertainty in the velocity of a particle in a gas increases as particles that were initially far away, and that were poorly specified at the initial time, have the opportunity to move closer and interact. The evolution equations are not time reversal symmetric since the principle of causality is assumed: the probability of a system configuration depends upon events that precede it in time, and not on events in the future. The evolving probability density can capture the growth in configurational uncertainty with time. We can now explore how growth of uncertainty in system configuration might be related to entropy production and the irreversibility of macroscopic processes.

1.4 Entropy Generation and Stochastic Irreversibility

1.4.1 Reversibility of a Stochastic Trajectory

The usual statement of the second law in thermodynamics is that it is impossible to observe the reverse of an entropy-producing process. Let us immediately reject this version of the law and recognize that nothing is impossible. A ball might roll off a table and land at our feet. But there is never stillness at the microscopic level and, without breaking any law of mechanics, the molecular motion of the air, ground, and ball might conspire to reverse their macroscopic motion, bringing the ball back to rest on the table. This is not ridiculous: it is an inevitable consequence of the time reversal symmetry of Newton's laws. All we need for this event to occur is to create the right initial conditions. Of course, that is where the problem lies: it is virtually impossible to engineer such a situation, but virtually impossible is not absolutely impossible.

This of course highlights the point behind Loschmidt's paradox. If we were to time reverse the equations of motion of every atom that was involved in the motion of the ball at the end of such an event, we would observe the reverse behavior. Or rather more suggestively, we would observe both the forward and the reverse behavior with probability 1. This of course is such an overwhelmingly difficult task that one would never entertain the idea of its realization. Indeed, it is also not how one typically considers irreversibility in the real world, whether that be in the lab or through experience. What one may in principle be able to investigate is the explicit time reversal of just the motion of the particle(s) of interest to see whether the previous history can be reversed. Instead of reversing the motion of all the atoms of the ground, the air, and so on, we just attempt to roll the ball back toward the table at the same speed at which it landed at our feet. In this scenario, we would certainly not expect the reverse behavior. Now because the reverse motion is not inevitable, we have somehow, for the system we are considering, identified (or perhaps constructed) the concept of irreversibility albeit on a somewhat anthropic level: events do not easily run backward.

How have we evaded Loschmidt's paradox here? We failed to provide the initial conditions that would ensure reversibility: we left out the reversal of the motion of all other atoms. If they act upon the system differently under time reversal, then irreversibility is (virtually) inevitable. This is not so very profound, but what we have highlighted here is one of the principal paradigms of thermodynamics: the separation of the system of interest and its environment, or for our example the ball and the rest of the surroundings. Given that we expect such irreversible behavior when we ignore the details of the environment in this way, we can ask what representation of that environment might be most suitable when establishing a measure of the irreversibility of the process. The answer is a representation in which the environment explicitly interacts with the system in such a way that time reversal is irrelevant. While never strictly true, this can hold as a limiting case that can be represented in a model, allowing us to determine the extent to which the reversal of just the velocities of the system components can lead to a retracing of the previous sequence of events.

Stochastic dynamics can provide an example of such a model. In the appropriate limits, we may consider the collective influence of all the atoms in the environment to act on the system in the same inherently unpredictable and dissipative way regardless of whether their coordinates are time reversed or not. In the Langevin equation, this is achieved by ignoring a quite startling number of degrees of freedom associated with the environment, idealizing their behavior as noise along with a frictional force that slows the particle regardless of which way it is traveling. If we consider now the motion of our system of interest according to this Langevin scheme, neither its forward nor its reverse motion is certain, and we can attribute a probability to each path under the influence of the environmental effects. How can we measure irreversibility given these dynamics? We ask the question, what is the probability of observing some forward process compared to the probability of seeing that forward process undone? Or perhaps, to what extent has the introduction of stochastic behavior violated Loschmidt's expectation? This section is largely devoted to the formulation of such a quantity.

Intuitively, we understand that we should be comparing the probability of observing some forward and reverse behavior, but these ideas need to be made concrete. Let us proceed in a manner that allows us to make a more direct connection between irreversibility and our consideration of Loschmidt's paradox. First, let us imagine a system that evolves under some suitable stochastic dynamics. We specifically consider a realization or trajectory that runs from time $t = 0$ to $t = \tau$. Throughout this process, we imagine that any number of system parameters may be subject to change. This could be, for example under suitable Langevin dynamics, the temperature of the heat bath, or perhaps the nature of a confining potential. The changes in the parameters alter the probabilistic behavior of the system as time evolves. Following the literature, we assume that any such change in these system parameters occurs according to some protocol $\lambda(t)$ that itself is a function of time. We note that a particular realization is not guaranteed to take place, since the system is stochastic; consequently, we associate with it a probability of occurring that is entirely dependent on the exact trajectory taken, for example $x(t)$, and the protocol $\lambda(t)$.

We can readily compare probabilities associated with different paths and protocols. To quantify an irreversibility in the sense of the breaking of Loschmidt's expectation however, we must consider one specific path and protocol. Recall now our definition of the paradox. In a deterministic system, a time reversal of all the variables at the end of a process of length $\tau$ leads to the observation of the reverse behavior with probability 1 over the same period $\tau$. It is the probability of the trajectory that corresponds to this reverse behavior within a stochastic system that we must address. To do so, let us consider what we mean by time reversal. A time reversal can be thought of as the operation of the time reversal operator $\hat{T}$ on the system variables and distribution. Specifically, for position $x$, momentum $p$, and some protocol $\lambda$, we have $\hat{T}x = x$, $\hat{T}p = -p$, and $\hat{T}\lambda = \lambda$. If we were to do this after time $\tau$ for a set of Hamilton's equations of motion in which the protocol was time independent, the trajectory would be the exact time-reversed retracing of the forward trajectory. We shall call this trajectory the reversed trajectory; it is phenomenologically the “running backward” of the forward behavior. Similarly, if we were to consider a motion in a deterministic system that was subject to some protocol (controlling perhaps some external field), we would observe the reversed trajectory only if the original protocol were performed symmetrically backward. This running of the protocol backward we shall call the reversed protocol.

We now are in a position to construct a measure of irreversibility in a stochastic system. We do so by comparing the probability of observing the forward trajectory under the forward protocol with the probability of observing the reversed trajectory under the reversed protocol following a time reversal at the end of the forward process. We literally attempt to undo the forward process and measure how likely that is. Since the quantities we have just defined here are crucial to this chapter, we shall make their nature absolutely clear before we proceed. To reiterate, we wish to consider the following:

Reversed trajectory: Given a trajectory $x(t)$ that runs from time $t = 0$ to $t = \tau$, we define the reversed trajectory $\bar{x}(t)$, which runs forward in time, explicitly such that $\bar{x}(t) = \hat{T}x(\tau - t)$, where $\hat{T}$ is the time reversal operator. Examples are $\bar{x}(t) = x(\tau - t)$ for a position variable and $\bar{p}(t) = -p(\tau - t)$ for a momentum variable.

Reversed protocol: The protocol behaves in the same way as the position variable under time reversal, and so we define the reversed protocol such that $\bar{\lambda}(t) = \lambda(\tau - t)$.

Given these definitions, we can construct the path probabilities we seek to compare. For notational clarity, we label path probabilities that depend upon the forward protocol with the superscript F to denote the forward process and probabilities that depend upon the reversed protocol with the superscript R to denote the reverse process. The probability of observing a given trajectory $x(t)$, $P^{\rm F}[x(t)]$, has two components. First, the probability of the path given its starting point $x(0)$, which we shall write as $P^{\rm F}[x(t) \,|\, x(0)]$; second, the initial probability of being at the start of the path, which we write as $P_{\rm start}(x(0))$ since it concerns the distribution of variables at the start of the forward process. The probability of observing the forward path is then given as

(1.25) $P^{\rm F}[x(t)] = P^{\rm F}[x(t) \,|\, x(0)]\, P_{\rm start}(x(0))$

It is intuitive to proceed if we imagine the path probability as being approximated by a sequence of jumps that occur at distinct times. Since continuous stochastic behavior can be readily approximated by jump processes, but not the other way round, this simultaneously allows us to generalize any statements for a wider class of Markov processes. We shall assume for brevity that the jump processes occur in discrete time. By repeated application of the Markov property for such a system, we can write

(1.26) $P^{\rm F}[x(t)] = P_{\rm start}(x_0) \displaystyle\prod_{i=1}^{n} T(x_{i-1} \to x_i \,|\, \lambda(t_i))$

Here, we consider a trajectory that is approximated by the jump sequence between points $x_i$ such that there are $n$ distinct transitions that occur at discrete times $t_i$, and where $x_0 = x(0)$ and $x_n = x(\tau)$. $T(x_{i-1} \to x_i \,|\, \lambda(t_i))$ is the probability of a jump from $x_{i-1}$ to $x_i$ using the value of the protocol evaluated at time $t_i$.
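For a chain with a small number of states, this product structure can be written out directly. The two-state transition matrix below is a hypothetical example with a time-independent protocol, used only to show the bookkeeping:

```python
import numpy as np

# T[i, j] = probability of a jump from state i to state j in one timestep
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
P_start = np.array([0.5, 0.5])   # illustrative initial distribution

def path_probability(path, T, P_start):
    """Probability of a whole discrete trajectory: initial occupation
    probability times the product of one-step transition probabilities."""
    p = P_start[path[0]]
    for i, j in zip(path[:-1], path[1:]):
        p *= T[i, j]
    return p

print(path_probability([0, 0, 1, 1], T, P_start))  # 0.5 * 0.9 * 0.1 * 0.8 = 0.036
```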

Continuing with our description of irreversibility, we construct the probability of the reversed trajectory $\bar{x}(t)$ under the reversed protocol $\bar{\lambda}(t)$. Approximating $\bar{x}(t)$ as a sequence of jumps as before, we may write

(1.27) $P^{\rm R}[\bar{x}(t)] = P_{\rm end}(\bar{x}_0) \displaystyle\prod_{i=1}^{n} T(\bar{x}_{i-1} \to \bar{x}_i \,|\, \bar{\lambda}(t_i))$

There are two key concepts here. First, in accordance with our definition of irreversibility, we attempt to “undo” the motion from the end of the forward process, and so the initial distribution $P_{\rm end}$ is formed from the distribution to which $P_{\rm start}$ evolves under the forward protocol $\lambda(t)$, such that for continuous probability density distributions we have

(1.28) $P_{\rm end}(x) = \displaystyle\int P^{\rm F}(x, \tau \,|\, x_0, 0)\, P_{\rm start}(x_0)\, \mathrm{d}x_0$

so named because it is the probability distribution at the end of the forward process. For our discrete model, the equivalent is given by

(1.29) $P_{\rm end}(x_n) = \displaystyle\sum_{x_0, \ldots, x_{n-1}} P_{\rm start}(x_0) \prod_{i=1}^{n} T(x_{i-1} \to x_i \,|\, \lambda(t_i))$
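Continuing the hypothetical two-state example, the sketch below evolves the initial distribution to obtain the end-of-process distribution, then compares the probability of a forward path with that of its reversal started from that distribution. With a time-independent protocol the reversed protocol coincides with the forward one, and the log ratio of the two path probabilities is the kind of irreversibility measure the text is building toward.

```python
import numpy as np

T = np.array([[0.9, 0.1],        # T[i, j]: one-step jump probabilities
              [0.2, 0.8]])
P_start = np.array([1.0, 0.0])   # start with certainty in state 0
path = [0, 0, 1, 1]              # an example forward trajectory
n = len(path) - 1                # number of transitions

# forward path probability: product of one-step transition probabilities
pF = P_start[path[0]]
for i, j in zip(path[:-1], path[1:]):
    pF *= T[i, j]

# distribution at the end of the forward process (discrete evolution)
P_end = P_start @ np.linalg.matrix_power(T, n)

# reversed trajectory: the same states visited in the opposite order,
# weighted by P_end as its starting distribution
rev = path[::-1]
pR = P_end[rev[0]]
for i, j in zip(rev[:-1], rev[1:]):
    pR *= T[i, j]

print(pF, pR, np.log(pF / pR))  # unequal probabilities signal irreversibility
```

A nonzero log ratio indicates that undoing this particular trajectory is less likely than observing it in the first place; averaging such quantities over all trajectories is, in outline, how the fluctuation relations discussed later are assembled.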