The Thermodynamics of evolution - François Roddier - E-Book


Description

Thermodynamique de l'évolution - Un essai de thermo-bio-sociologie (The Thermodynamics of Evolution: An Essay in Thermo-Bio-Sociology) - translated into English with the help of Steve Ridgway

ABOUT THE AUTHOR

François Roddier was born in 1936. An astrophysicist, he is known to astronomers everywhere for his work on compensating the effects of atmospheric turbulence in the observation of celestial objects. After founding the astrophysics department of the University of Nice, he moved to the United States, first to the National Optical Astronomy Observatory (Tucson, Arizona) and then to the Institute for Astrophysics of the University of Hawaii, where he took part in developing the adaptive optics systems that now equip major observing facilities such as the CFHT (Canada-France-Hawaii Telescope) and the Japanese Subaru telescope, both located in Hawaii, as well as the telescopes of ESO (the European Southern Observatory) in Chile. A scientist of unfailing curiosity, he turned his attention to the thermodynamic aspects of evolution.


Page count: 375

Year of publication: 2020




ISBN : 978-2-917141-89-2

© 2016, Éditions Parole

Groupe AlterMondo 83500 La Seyne-sur-Mer

Email: [email protected]

Order tracking: [email protected]

www.editions-parole.net

All rights reserved for all countries

Title page

François Roddier

The Thermodynamics of evolution

Translated into English with the help of

Steve Ridgway

Introduction

Fifty years ago, I began my scientific career under the direction of Jacques-Émile Blamont. He had just returned from the United States, where he had contributed to the beginnings of space research. Back in France, he intended to put the country, and with it the rest of Europe, on the same track. In March, 1959, I joined him in the first European space experiment: a launch of Véronique rockets in the Sahara desert.

The fifty years that followed were marked by spectacular progress in our knowledge of the universe. Similar progress was made in practically all other fields of knowledge. Such progress is unprecedented in human history. One would think that it must have improved the fortunes of mankind, and to a degree, it has. The field of medicine, and especially surgery, has seen great advances; agricultural production has considerably improved, too. But only a fraction of humanity is really profiting from all this progress. After temporarily receding, hunger is on the rise again all over the world. Virtually non-existent at the beginning of my career, unemployment has become endemic in France. Around the world, economic crises are now pervasive, oil resources are dwindling and our planet’s protective ozone layer is in danger of destruction. And if that were not enough, the threat of global warming is looming. What have we done?

Most researchers of my generation are asking themselves this question, especially those in the “space sciences”. In 2004, Jacques Blamont, for instance, published a book called “Introduction au siècle des menaces” (“Introduction to the century of threats”)1, in which he “deconstructs, bit by bit, the infernal machine that we are now in the process of bequeathing to our children, thanks to the scientific progress we so strongly believed in…”2. In 2008, at the fiftieth anniversary of his laboratory, he confided to me that “this is going to be worse than I predicted”. The next year, together with theologian Jacques Arnould, he wrote the book “Lève-toi et marche” (“Stand up and walk”)3, in which they debated the human condition and the state of the world.

In 2005, another space scientist, geophysicist André Lebeau who occupied prominent positions at the CNES and ESA4, published “L’engrenage de la technique” (“The grip of technique”)5, in which he analyses human evolution in terms of biological evolution. His next book, “L’enfermement planétaire” (Planetary containment, 2008)6, came to troubling conclusions based on the limits of our resources.

That same year, Roger-Maurice Bonnet, my colleague, friend and fellow student under Blamont, scientific director at ESA and then at ISSI7, co-authored a book with Lodewijk Woltjer, former director of ESO8. This publication, entitled “Survivre mille siècles, le pouvons-nous?” (Can we Survive a Thousand Centuries?)9, reviews a number of possible causes for human extinction.

After spending the last sixteen years of my career in the United States, I moved back to France for retirement in January 2001. Here, I became interested in biological evolution and started asking myself the same questions. Initially, I shared my reflections on a dedicated website:

http://francois-roddier.fr/

These studies led me very quickly to the laws of thermodynamics, a subject I was familiar with from teaching it at the University of Nice. On re-establishing contact with Roger Bonnet, I learned about his forthcoming book. I mentioned to him that I might have an answer to his central question, and in response, he invited me to present my ideas in Bern, at which time he convinced me to publish them.

Writing a book on this subject is an especially arduous undertaking for various reasons, the first being the difficulty of explaining the little-taught science of thermodynamics, particularly the new field of non-equilibrium thermodynamics. For example, the notion of entropy10 is so complex that it took scientists a whole century to understand it. Even today, some people still distinguish between thermodynamic entropy and informational entropy without knowing that these are one and the same concept. The entropy of a system is a measure of our lack of knowledge about that system. This implies that entropy is as much a property of the observer as it is of the system observed. A number of scientists are still reluctant to accept this.

At the origin of the problem is the physical interpretation of probability. To some, probability is a physical quantity that is measurable through statistical procedures; this is called the “frequentist” interpretation. The work of researchers in this tradition is based on steady-state and ergodic hypotheses that are physically unverifiable. To others, probability is a “subjective” quantity that depends on our “a priori knowledge”; this is called the Bayesian interpretation. In his book “The logic of science”, the American physicist E. T. Jaynes shows how the Bayesian approach allows us to unify probability theory and statistics in one unique deductive logic that, in turn, enables us to make optimal decisions in the face of incomplete information. He calls this “the logic of science”.

The recent advances on which this book relies are founded on and implicitly follow the Bayesian approach. As our knowledge forms part of the universe we are studying, it is incomplete and always will be. As we shall see, humankind is a dissipative structure. By importing information from its environment, mankind continuously improves its knowledge base; by doing this, it diminishes its internal entropy and dissipates energy ever more efficiently.

It is clear that, while the laws of physics are understood to be generally valid, their application to domains as complex as biology or human sciences seems still far from satisfactory. The difficulty here lies in the number of variables at play, as well as in the non-linearity of phenomena. However, the second half of the twentieth century saw considerable progress in both areas. The problem posed by the number of variables was tackled through a statistical approach, forming the discipline of statistical mechanics - the continuation of what was once called thermodynamics. The problem of non-linearity has evolved thanks to numerical experimentation, forming the discipline of non-linear dynamics, or chaos theory.

Despite these improvements, difficulties persist, and the validity of certain theoretical results used in this book is actively debated. These difficulties affect the very notion of dissipative structure. By definition, such a structure is in a steady state, which seems to exclude a priori the possibility of studying its evolution. Another problem lies in the exact boundaries of these structures. Such issues are the subject of continuing discussion by specialists.

Despite these on-going difficulties, results obtained so far are immensely significant. Had I been told ten years ago that the laws of statistical mechanics could explain human behaviour, I would have smiled dubiously. Today, however, I am absolutely convinced that they do. The fundamental laws of biochemistry are those of thermodynamics, as established by Gibbs, and insofar as living beings are made up of biochemical reactions, they cannot but obey these laws.

My aim is to show that the results we have so far obtained open up great perspectives, not just in relation to biology, but also to the human sciences. The results that I describe in this essay are remarkably coherent to me, which is why I am confident of their significance. Undoubtedly, I will be criticised for overconfidence, though in this essay I will only identify the pieces of the puzzle. Essentially, I see this book as the prologue to a scientific programme for the twenty-first century, a programme that allows us to unify the sciences, from cosmology to human sciences.

Regrettably, the natural sciences are strongly partitioned at this moment in time. Very few physicists show an interest in biology, and even fewer in the human sciences. Conversely, few biologists and even fewer humanities and social science researchers have mastered the tools of physics. Each to their own discipline! In my case, I trained and worked in physics and, ten years ago, embarked on studying biology. To write a book that encompasses all disciplines, from cosmology to sociology, is not an easy undertaking, and mistakes and imprecision are unavoidable. I therefore ask my readers to be forgiving, and to correspond with me about potential issues, so that these can be addressed in a future edition.

One of the issues that I have struggled with is language, as each discipline develops its own jargon. To assist readers, definitions of the scientific and technical terms that are italicized in the main text will be found in a glossary at the end of the book. The use of everyday language turned out to be equally tricky. Richard Dawkins entitled his first book “The Selfish Gene”, as if a gene could exhibit human behaviour. Dawkins justified this by saying that it was a figure of speech. My own book goes even further: it is my goal to show that, under different guises, the same underlying processes are at work in physics, biology and sociology alike. One can follow these processes continuously from one discipline to another and can therefore describe them with the same vocabulary. Thus it may seem that I am using terminology carelessly, when the opposite is more nearly true.

Everyday language is especially suited for describing human and even animal behaviour. But can one also use it to talk about things? People say, for instance, that an individual imitates another. The same thing can be said about a monkey or a bird. But if a magnet aligns itself to its neighbour, can we say that this, too, is imitation? In this book (section 3.1), I will show how these processes are similar and deeply related.

The issue of language becomes particularly acute when we are dealing with manifestations of intention. We may kill a rabbit to eat - a cat could be said to do the same, albeit perhaps more instinctively. As we shall see, bacteria orient themselves towards their food source. Is this because they intend to feed themselves or, more simply, because their behaviour follows the law of Le Chatelier (section 9.1)? For me, the question is one of language.

Conversely, we now know that the Earth’s atmosphere maintains itself in a state of “maximum production of entropy” that correspondingly maximizes its dissipation of energy. It appears increasingly clear that these processes apply to ecosystems, too. In fact, ecosystems are observed to self-organise so as to constantly maximize their rate of energy dissipation. One is left wondering whether the same principles might also apply to human societies. Could we say, for instance, that a human society self-organises to maximize the speed with which it dissipates energy? I will not hesitate to assert this, even if the range of our choices and the goals of our actions may appear to differentiate us from nature.

Physicists are indeed accustomed to expressing the laws of physics in the form of variational principles: a mechanical system evolves according to a principle of least action; light propagates by minimising its optical path. For a physicist, “everything takes place as if” light is constantly looking for the fastest way of getting from one point to another. Thus one could come to the conclusion that the universe incessantly strives to maximize the speed with which energy dissipates. That this principle also applies to human evolution is not an obvious consequence, but it should not surprise us, even if humans experience their intentions differently.

We know that the laws of chemistry derive entirely from those of physics, even if this derivation is not always easy to trace. The same thing can be said of biochemistry. There are still a number of biologists who are reluctant to accept that the laws of biology stem entirely from those of chemistry. Even if the origin of life has not yet been resolved, it has become clear that it resulted from particular chemical reactions that scientists call autocatalytic (section 8.1). We can thus move seamlessly from chemistry to biology. Natural selection now appears as a consequence of the laws of thermodynamics (section 5.3).

The application of biology to humanity encounters even greater reluctance and resistance. In the past, premature extrapolation of biological laws to human society has led to aberrations11. The idea that our behaviour could follow “natural laws” hurts our sense of free will. To reduce humankind to the laws of physics comes close to a terribly materialist approach. It seems to obscure the spirituality of humanity that we consider essential to us. We shall see that, far from obscuring it, this approach reveals its role and significance.

In fact, the central idea of this book is that evolution has progressively shifted from genetic to cultural. Culture is defined here as the set of information that is available in the brain. Following this definition, culture is not exclusive to humanity. Three chapters are dedicated to the passage from genes to culture. The particularity of humankind is that culture has become the dominant factor in its evolution. In other words, one cannot apply the laws of biology to humans without replacing genes with culture: human evolution is essentially cultural. Thermodynamically speaking, human minds reduce their entropy (culturally self-organise) so that we (and our society) can dissipate more energy.

In the course of this book, we will see that certain phenomena, such as cyclones, memorise information about their environment. Their memory is inertial. Plants memorise information in their genes. Evolved animals also memorise information in their brains, which is why we can train them. One can say that they are capable of learning. When we speak of human beings, we describe ourselves as being conscious. As part of humankind, we share vast reservoirs of information and experience. Thus, one can speak of a collective consciousness.

What physics and biology teach us – and history confirms – is that the problems that humankind faces can be resolved by drawing on our collective consciousness. At the moment, humanity is becoming conscious of itself and is beginning to worry about its chances of long-term survival. This book is a contribution to this growing collective consciousness, which is likely to take several generations to form. In this spirit, I would like to dedicate this book to all young people. It is they who will finally and fully elucidate the laws of evolution. With this awareness, they will build a future humanity, filled with hope.

There remains a last concern: the reflections that I put forward in this book entirely confirm the fears expressed by many authors, notably those I mentioned at the beginning. More and more work is published each year on environmental issues, the end of oil or the collapse of societies. The risk, for me, in writing this book is that I come across as just another bearer of bad news and am ignored as a consequence. This is why I only briefly touch on the multiple crises that affect our world, and leave this task to more qualified authors. Instead, I will concentrate on what might happen after the current crises, because that is where hope appears. I am convinced that this hope is justified: it is confirmed by the laws of physics and by everything that we know from modern biology. Although my conclusion may seem too optimistic, I believe it is grounded in the underlying laws of evolution.

Finally, I would like to extend a message to current and future generations. History has shown us that each time a society is in crisis, it searches for the guilty and identifies its scapegoats. Primitive civilisations offered human sacrifices to their gods; the Romans tortured Christians; the Middle Ages ended in religious wars; the French monarchy decapitated their king and a number of aristocrats. More recently, Nazi Germany gassed Jews. Today, we blame immigrants or “gypsies” for crime or unemployment. This book aims to expose the real culprit: the laws of statistical mechanics, against which we are powerless. Howard Bloom12 speaks of a “Lucifer principle” without acknowledging that it is really a matter of fundamental principles of thermodynamics. Our suffering is caused by the entropy associated with our lack of knowledge about the laws of the universe. As soon as these laws are universally recognised and understood, this entropy will go away, and humanity will be able to take charge of its destiny and alleviate its misery.

1. Jacques Blamont. Introduction au siècle des menaces (Introduction to the Century of Threats). Odile Jacob (2004).

2. A quote from the book editor.

3. Jacques Arnould, Jacques Blamont. Lève-toi et marche. Propositions pour un futur de l’humanité (Stand up and Walk. Propositions for a Future of Mankind). Odile Jacob (2009).

4. CNES: Centre National d’Études Spatiales (French National Centre for Space Studies); ESA: European Space Agency.

5. André Lebeau. L’engrenage de la technique. Essai sur une menace planétaire (The grip of Technique. Essay on a World Threat). Gallimard (2005).

6. André Lebeau. L’enfermement planétaire (A Closed World). Gallimard (2008).

7. International Space Science Institute, based in Bern, Switzerland.

8. ESO: European Southern Observatory, headquartered in Garching, near Munich, Germany.

9. Roger-Maurice Bonnet, Lodewijk Woltjer. Surviving 1000 Centuries, Can we do it? Springer, Praxis, (2008).

10. The terms that are printed in italics are scientific and technical terms and are explained in the glossary at the end of this book.

11. Examples include social Darwinism, and biological justifications for racism and eugenics.

12. Howard Bloom, The Lucifer Principle. Atlantic Monthly Press (1995).

“The true physics is that which will, one day, achieve the inclusion of man in his wholeness in a coherent picture of the world.”

Pierre Teilhard de Chardin

The Phenomenon of Man

Prologue

The Concept of Evolution

The idea that the world evolves seems self-evident to us. Every day, my computer reminds me to update its software. Everyone is eager to upgrade their mobile phones in order to take advantage of the latest gadgets. We forget that only fifteen years ago, ownership of a mobile phone or home internet was a novelty.

For the last two hundred years, humanity has become used to constant scientific and technological progress. This progress happens faster and faster, with no end to this acceleration in sight. To us, it feels like the natural state of things. The majority of us believe that this trend will continue forever. Few people realize that things have not always been this way. In the following paragraphs, I will show that, indeed, they have not.

In the Middle Ages, progress was so slow that it was almost imperceptible. The idea of evolution is entirely absent from the literature of this epoch. The perception was that humanity had always been in the state in which it could then be observed, that is, in the state in which God had created it. Therefore, medieval paintings always show the holy family wearing the fashion of the time.

It seems as if everything started to change around the end of the fifteenth century, with the development of printing, pioneered by Johannes Gutenberg. At that time, people generally believed that the world could be explained through the Bible. Accordingly, it was the Bible that became the first “mass” printed book. Throughout the sixteenth century, people learned to read in order to read the Bible. Consequently, there was a great surge of literacy. By reading the Bible, people learned how to interpret and think for themselves. Michel de Montaigne notably encouraged his readers to engage in philosophical reflection. Books became more abundant.

By the seventeenth century, scholars such as René Descartes proclaimed the possibility that the world could be understood independently of religious beliefs. This moment marked the rise of rational thought that we call “Cartesian”. Blaise Pascal, by contrast, remained undecided between religion and reason, leading to his famous “wager”.

With the arrival of the eighteenth century, books had become so ubiquitous that the need arose to “compress” this information and to assemble all of human knowledge in a single book. This project resulted in the Encyclopaedia of Denis Diderot and Jean le Rond d’Alembert, and in “L’Histoire Naturelle” (Natural History) by Georges-Louis Leclerc, Comte de Buffon. Because the unification of all of this knowledge enlightened humans, this period has been called “the Enlightenment”.

From Buffon’s Natural History, readers learned that remains of seashells are sometimes found in the high mountains. Such shells normally occur in the ocean, and their discovery raised the question of whether these rock formations might once have been submerged in water. Around the same time, the Scottish naturalist James Hutton identified pieces of lava in his garden and wondered whether there had once been volcanoes in Scotland. Bit by bit, new evidence came to light, leading to the conclusion that Earth itself is subject to evolution.

Less than half a century later, Jean-Baptiste de Lamarck first studied botany, then zoology, and finally became interested in palaeontology, which reveals that living organisms that once existed on Earth are now extinct. The mounting evidence led to the hypothesis that plant and animal species evolve as well. Moreover, they evolve along a trajectory from most simple to most complex. In their “perfection”, humans appeared to be the pinnacle of evolution. A further half-century later, Charles Darwin published his book on the origin of species as a consequence of natural selection, bringing the mechanism of evolution to light.

In 1916, Albert Einstein published his equation connecting the form of space-time to the distribution of energy. To his great surprise, the Universe appeared to vary in time. As the concept of an evolving universe seemed impossible to Einstein, he added a “cosmological constant” to his equation in order to render the Universe stationary. When, in 1929, Edwin Hubble demonstrated that the universe is indeed expanding, Einstein remarked that the inclusion of his “cosmological constant” had been the greatest mistake of his life.

We have thus come to see that not only life and our planet evolve, but the universe itself does so as well. The question remains: if everything evolves, are there general laws that govern this evolution?

Part I • The laws of thermodynamics

1. Thermodynamics in the 19th century

Although it can take different forms (mechanical, electrical, chemical), a given amount of energy always stays the same. It is said to be a conservative quantity. However, it tends to dissipate, that is to transform itself into heat. Heat differs from other forms of energy by the fact that it can never be totally converted to another form of energy. A system that is isolated from the rest of the world is said to be a “closed system”. In a closed system, energy transforms itself irreversibly into heat. Differences disappear and motions cease. A closed system evolves until it reaches thermal equilibrium.

1.1. Energy

In a world where everything is changing, how do we know the laws that govern its evolution? The mathematician Emmy Noether showed that if evolution obeys fixed (unchanging) laws, then there is a measurable quantity that remains constant. Physicists call such a quantity an invariant. Discovered in the nineteenth century, this invariant has been given the name of energy. It can be thought of as the Ariadne’s thread that allows us to follow evolution.

Supplying energy to human societies is currently such a concern that this abstract notion, originating in the physical sciences, has now become a familiar one. Everyone knows that energy is needed to put a mass in motion. To a physicist, a quantity is defined if we know how to measure it. To measure an amount of energy, physicists use the notion of mechanical work. Mechanical work is produced when, for example, a weight is raised to a certain height. The energy required, referred to as mechanical work, is the product of the force applied (equal and opposite to the weight) and the distance travelled (here, a height).

Every time the same weight is raised the same height, the same amount of energy or mechanical work must be provided. There are many ways to raise a weight. One can, for example, pull on a rope, thereby providing muscular energy; this gives one way of measuring it. One can also use an electric motor. Electricity is another form of energy. It can be supplied by a battery in which energy is stored in chemical form.

Although energy can take a great variety of forms, its importance comes from the fact that it is conserved. In all cases, the amount of energy can be measured using the same unit. The international unit for energy is the joule, named after the English physicist James Prescott Joule. Energy per unit of time is a “power”. It is measured in joules per second, also called the watt, named after the Scottish engineer James Watt, whose improvements made the steam engine practical for industry (see the additional information on energy).
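The definitions of work and power given above can be checked with a few lines of arithmetic. The following Python sketch uses illustrative figures (the mass, height and lifting time are assumptions, not values from the text):

```python
g = 9.81       # gravitational acceleration, m/s^2
mass = 10.0    # kg: the weight being raised (illustrative)
height = 2.0   # m: the height it is raised through (illustrative)

# Mechanical work = force x distance; the force is equal and
# opposite to the weight, i.e. mass * g.
work = mass * g * height        # in joules
print(f"work = {work:.1f} J")

# Power is energy per unit time: joules per second, i.e. watts.
duration = 4.0                  # s: time taken to raise the weight
power = work / duration
print(f"power = {power:.2f} W")
```

Raising the same weight twice as high, or twice as fast, doubles the work or the power respectively, which is all the definition requires.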

For example, the energy that was used to raise the weight is not lost. It can be recovered by letting the weight go down. On its way down, the weight can turn a dynamo that will generate electricity and recharge the battery. Here we assume that all our manipulations are perfectly reversible. Unfortunately, they are usually imperfectly reversible, sometimes very much so. If the rope breaks, the weight falls. Energy is apparently lost.

1.2. Energy dissipation

Physicists have found that in this case, energy is not really lost. It is converted into heat. The same amount of energy always produces the same amount of heat: it raises the temperature of the same volume of water by the same amount. Unfortunately, the transformation of energy into heat is not a reversible process. You can use a kettle to heat water but, on cooling, the water will not give back the electricity that it consumed.

The problem is that, whatever its form, energy always ends up in the form of heat. For example, consider the pendulum of a grandfather clock. Remove it from its equilibrium position and then let it loose. It will oscillate for a while, but the oscillation amplitude will gradually decrease until the pendulum comes to a halt. The mechanical energy of the oscillations has been converted into heat by mechanical friction. To maintain the motion one must wind up the rope on which a weight is attached. When going down, the weight provides the energy needed to maintain the movement of the pendulum which, in turn, converts it into heat. In physics the conversion of energy into heat is called “energy dissipation”.
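The pendulum's energy bookkeeping can be sketched numerically. The short Python simulation below (all parameter values, including the friction coefficient, are illustrative assumptions) tracks the mechanical energy of a damped pendulum and the heat produced by friction, and verifies that their sum stays constant:

```python
import math

# Illustrative parameters (assumptions, not figures from the text)
m, L, g = 1.0, 1.0, 9.81   # bob mass (kg), rod length (m), gravity (m/s^2)
b = 0.1                     # friction coefficient (kg/s)
theta, omega = 0.2, 0.0     # initial angle (rad), angular velocity (rad/s)
dt = 0.001                  # time step (s)
heat = 0.0                  # heat produced so far (J)

def mech_energy(theta, omega):
    """Kinetic plus potential energy of the pendulum bob."""
    return 0.5 * m * (L * omega) ** 2 + m * g * L * (1.0 - math.cos(theta))

e0 = mech_energy(theta, omega)
for _ in range(20000):      # simulate 20 seconds
    heat += b * (L * omega) ** 2 * dt   # friction power, booked as heat
    alpha = -(g / L) * math.sin(theta) - (b / m) * omega
    omega += alpha * dt                  # semi-implicit Euler step
    theta += omega * dt

# The oscillation has decayed, but the mechanical energy lost equals
# the heat produced (up to a small numerical error).
print(mech_energy(theta, omega) + heat, e0)
```

The oscillation amplitude shrinks step by step, yet nothing disappears: what the pendulum loses, the "heat" account gains. This is the dissipation described in the text.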

It is exactly the same for us. The food we eat is our energy source. It helps maintain the beating of our heart, as the fall of the weight keeps the clock pendulum beating. It provides the mechanical work necessary for our movements. As for the clock, this energy is constantly converted into heat. It keeps our bodies at 37 °C. We must therefore constantly eat to live, just as we add gas or wood to a heating device to maintain its temperature.

Modern people not only eat. They also “consume” energy to heat their house or to travel by car. Physicists do not like the expression “consume energy” because the energy does not disappear: it is transformed into heat. Physicists prefer the expression “dissipate energy”. When a driver brakes, the energy called “kinetic”, which is associated with the motion of the car, is converted into heat inside the brakes. They get hot. They then cool through ventilation which disperses the heat into the atmosphere, hence the idea of “dissipation”. The energy which is dissipated becomes unrecoverable. One can no longer convert it into mechanical work. It is said that the transformation of kinetic energy into heat is irreversible.

1.3. The first two laws of thermodynamics

This does not mean that it is never possible to convert heat into mechanical energy. This is what a car engine does. It converts into motion the heat produced by the combustion of gasoline. The problem is that this conversion can only be partial. In the early nineteenth century, engineers wondered why. The French Nicolas Léonard Sadi Carnot was the first to come up with an answer. In doing so, he founded a new science: thermodynamics.

It is based on two main laws formerly known as “principles.” The first law is that heat is a form of energy. The second law is that you cannot convert heat into mechanical energy without a temperature difference. Indeed, if steam or hot air can push a piston, one must apply a force to bring the piston back to its initial state. For this force to be weaker, one must condense the steam or cool the air content inside the cylinder.

A steam engine produces cycles of transformations. After each cycle it returns to its initial state. During a cycle, it extracts heat from the boiler where the water is vaporized, then part of the heat is given back to the condenser where the water condenses. Only the heat difference is converted into mechanical energy. The maximum fraction of the heat that can be converted into mechanical energy is called the Carnot efficiency. This fraction is proportional to the temperature difference between the two sources (it equals that difference divided by the absolute temperature of the hot source) and can only be attained if all the operations are reversible.
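The Carnot efficiency can be written as a one-line function. In the Python sketch below the boiler and condenser temperatures are illustrative figures, chosen to resemble a steam engine boiling water at 100 °C and condensing it at 20 °C:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of the extracted heat convertible into work.

    Temperatures must be absolute (kelvin).
    """
    return (t_hot - t_cold) / t_hot

# Boiler at 100 C (373.15 K), condenser at 20 C (293.15 K):
eta = carnot_efficiency(373.15, 293.15)
print(f"Carnot efficiency: {eta:.1%}")  # roughly a fifth of the heat
```

Even this ideal, fully reversible engine must reject nearly four fifths of the extracted heat to the cold source, which is why heat is a degraded form of energy.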

This gives heat a special status. Most forms of energy (electric, chemical, etc.) are completely convertible into mechanical work. They are called free energy. Heat is not. That is why it is considered as a degraded form of energy. Dissipation of energy amounts to a loss of free energy.

The properties of the steam engine described above are general and apply to all heat engines independently of how they are built. As enunciated by Carnot, the second law says you cannot produce free energy in a sustainable way without conducting closed cycle transformations that extract heat from a heat source and transfer part of it to a cold source. The efficiency of the operation is a maximum when all the transformations are reversible. We shall see at the end of this book (section 16.3) that this law applies to mankind itself. We are in the process of realizing it to our own cost.

1.4. Entropy

A little later, building on the work of Carnot, a German physicist, Rudolf Clausius, discovered a mysterious mathematical quantity with interesting properties, which he named entropy. After a cycle of transformations, entropy returns to its initial value if all the transformations are reversible; otherwise it increases. Entropy is therefore a measure of the degradation, or dissipation, of energy. As long as the entropy is constant, there is no loss of free energy. Whenever entropy increases, there is a loss of free energy: part of the energy is no longer convertible into mechanical energy; it has been dissipated. While we commonly speak of energy dissipation, physicists speak of entropy production. The two expressions are equivalent.

If energy is the thread that allows us to follow evolution, entropy is the arrow that orients the thread and indicates the direction of time. Although energy is conserved, it degrades; entropy is a measure of this degradation. The concept of entropy is essential to a good understanding of this book. Here I will only give a general idea. Readers wishing to know more can read the additional information given at the end of this book (Chapter 17).

1.5. Open and closed systems

Clausius showed that if a part of the universe is isolated, so as to eliminate all exchanges of matter and energy with the outside, then its total entropy can only increase or remain constant. It remains constant if all the changes it undergoes are reversible; it increases as soon as an irreversible transformation occurs. Consider for example a so-called “Thermos” bottle containing hot and cold water in equal proportions. After a while, lukewarm water is obtained. The transformation is irreversible: the entropy of the mixture is greater than that of its components. Note that with hot water and cold water one can run a heat engine and obtain mechanical work. Once all the water is at the same temperature, this is no longer possible. Whenever entropy increases, the amount of mechanical work one can obtain decreases. There is a loss of free energy: energy has been dissipated.
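For readers who like numbers, the entropy produced by such a mixing can be estimated with elementary thermodynamics. The Python sketch below assumes equal masses of hot and cold water and a constant specific heat, so the final temperature is the arithmetic mean; the masses and temperatures are illustrative:

```python
import math

def mixing_entropy(m_kg, t_hot_k, t_cold_k, c=4186.0):
    """Entropy change (J/K) when equal masses m_kg of hot and cold water
    are mixed, assuming a constant specific heat c (J/kg/K)."""
    t_final = (t_hot_k + t_cold_k) / 2  # equal masses: arithmetic mean
    ds_hot = m_kg * c * math.log(t_final / t_hot_k)    # negative: hot water cools
    ds_cold = m_kg * c * math.log(t_final / t_cold_k)  # positive: cold water warms
    return ds_hot + ds_cold

# Illustrative values: half a kilogram each of water at 350 K and 290 K.
delta_s = mixing_entropy(0.5, 350.0, 290.0)
print(f"Entropy produced: {delta_s:+.1f} J/K")  # positive: the mixing is irreversible
```

The result is always positive whenever the two temperatures differ, which is exactly Clausius's statement that the entropy of an isolated system can only increase.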

To designate a part of the universe that is isolated from the rest, physicists use the term “closed system”13. Since real transformations are never completely reversible, the entropy of a closed system always increases until it reaches a maximum value, at which point it is no longer possible to obtain mechanical work. All differences fade out: there is no longer any difference in temperature, in pressure or in chemical composition. The system is then said to have reached thermodynamic equilibrium. Any closed system evolves toward thermodynamic equilibrium, though it may take a very long time to get there.

As interesting as they are, these laws can hardly explain the existence of life. Isolated in a Thermos bottle, a fly will eventually exhaust its reserves of oxygen and food and die. All motion will cease. Death is a state close to thermodynamic equilibrium. But none of this tells us why life began on Earth, let alone how it developed.

Clearly, life would not be possible if the Earth were a closed system. It is not. It continuously receives energy from the Sun in the form of radiation, for the most part visible light, at a remarkably constant level. It re-emits this energy into space in the form of infrared radiation. The atmosphere is constantly crossed by inward and outward flows of energy. Since the Earth is illuminated on one side only, mainly near the equator, its temperature is far from uniform. Temperature differences set its atmosphere in motion; they create depressions and anticyclones, which generate the weather. Earth is an open system.

13. Some authors use the word “closed” as a synonym for mechanically isolated. Here we use it for systems that are both mechanically and thermally isolated (no matter, heat or any other form of energy can pass through).

2. Thermodynamics in the 20th century

In an open system that is out of equilibrium and crossed by an energy flow, structures in motion can appear. As they adapt to their environment, they organize themselves so as to maximize the flow of energy that passes through them. This has the effect of maximizing the rate at which energy is dissipated. They are said to be dissipative structures. Hurricanes and living organisms are dissipative structures. An ensemble of interacting dissipative structures, such as the Earth’s atmosphere, an ecosystem or a human society, is also a dissipative structure.

2.1. Dissipative structures

Most open systems crossed by a permanent flow of energy contain moving structures. If the flow is interrupted, the system becomes closed: its internal entropy increases until thermodynamic equilibrium is reached, all motions stop and the structures disappear. If the flow is restored, the internal entropy of the system can decrease, spontaneous motion reappears, and mechanical energy dissipates again. In the 1960s, the physicist Ilya Prigogine proposed the term dissipative structure to describe such spontaneously appearing structures. I shall use this term in the broad sense of a structure that maintains itself thanks to a constant flow of energy14. They are like a sand castle that is permanently rebuilt with a constant supply of new sand.

Each of us can easily observe an example by placing a pan of water on the stove. A flow of energy crosses it upwards. The water heats up at the bottom of the pan, mainly near the centre. Since hot water is less dense than cold water, a stream of hot water rises near the centre of the pan. When it reaches the surface, the hot water spreads out, cools and descends, mainly along the cooler side walls of the pan. Physicists call this phenomenon convection. The water in the pan is a dissipative structure: it has spontaneously set itself in motion. At a larger scale, a cyclone sets itself in motion to dissipate the heat of the ocean into the atmosphere; it too is a dissipative structure. More generally, the Earth’s atmosphere is a dissipative structure. It “self-organizes” to transport heat from the equator to the poles.

Prigogine immediately understood that the concept of dissipative structure applies to life. If the Earth’s atmosphere self-organizes to dissipate solar energy, life would have self-organized for the same reason. One is a simple phenomenon of fluid physics; the other is a much slower phenomenon of a physico-chemical nature.

Clearly a living cell is a dissipative structure. It subsists only with a constant supply of matter and energy; this energy input is the basis of its metabolism. In general, a set of interacting dissipative structures is itself a dissipative structure. This is the case of interacting cells, such as a colony of bacteria or a multicellular organism. It is also the case of animal and plant species. Plants, animals, people and human societies are dissipative structures. They subsist by permanently renewing themselves, through a continuous supply of energy. For the first time, physicists had a concept that could be applied to inert matter as well as to living matter or human societies. Like sand castles, human societies survive only if they are constantly rebuilt, one with sand, the other with new generations of individuals.

2.2. The third law of thermodynamics

Let us come back to our pan of water. When it is set on the stove, a temperature difference appears between the bottom and the top of the pan. The second law tells us that, thanks to this temperature difference, a fraction of the water’s heat can be converted into mechanical energy. Motions do effectively appear: there is production of free energy. But as these motions grow in amplitude, they reduce the temperature difference between the top and the bottom of the pan, decreasing the efficiency of mechanical energy production. At some point, the motions cease to grow. The rate of free energy production has then reached its maximum value, and the flow of energy through the pan is also at a maximum. In other words, motions self-organize inside the pan so as to maximize the flow of energy that goes through it. The rate at which energy is dissipated, that is, the rate of entropy production, is also at a maximum.

The same phenomenon occurs in the Earth’s atmosphere. Because the Sun is lower on the horizon there, each second it brings less energy per unit area to the poles than to the equator. As a result, the temperature at the poles is, on average, some thirty degrees lower than at the equator. Thanks to this temperature difference, currents self-organize in the atmosphere to carry heat from the equator to the poles. This tends to reduce the temperature difference, and hence the efficiency with which mechanical energy is produced. At some point, the currents cease to grow: the flow of energy they carry has reached its maximum value. This is what geophysicists effectively observe. They say that the Earth’s atmosphere is in a state of maximum entropy production. The same phenomenon has been observed on Mars and on Titan.

So far, this is an experimental finding: no known law of thermodynamics implies that it is a general phenomenon. A growing number of physicists, however, believe that it is a general law: dissipative structures self-organize so as to maximize the energy flow that goes through them. They do this by maximizing their production of free energy, which in turn maximizes the rate at which energy is dissipated. It is said that dissipative structures maximize the rate of entropy production. Experts refer to this hypothetical law as the “law of maximum entropy production,” or by the acronym MEP or MaxEP.

In January 2003, a researcher of Scottish origin, Roderick Dewar, then employed by INRA15 in Bordeaux (France), proposed a derivation in terms of statistical mechanics, a branch of physics that I will discuss later. The generality of this proof is still debated. The second law of thermodynamics was likewise recognized as a general principle before being demonstrated from more fundamental ones. Here we do the same for this new law, which we will refer to as the third law of thermodynamics16. We will see that it is of considerable importance in biology, because it helps explain the process of natural selection in physical terms. We are interested here in the fact that it also applies to humans and human societies. It is a common observation that human societies keep dissipating ever more energy. The law of maximum entropy production implies that they self-organize so as to maximize their rate of energy dissipation. Of course, they do this unconsciously.

2.3. Statistical mechanics

Carnot and Clausius lived in the nineteenth century. They established the principles of thermodynamics as general principles accounting for the experimental facts, without worrying about the nature of matter. In the late nineteenth century, mainly thanks to advances in chemistry, a growing number of physicists became convinced that matter was made of atoms or, more precisely, of assemblies of atoms called molecules. The brightest of them, such as James Clerk Maxwell, the Scottish theorist of electromagnetism, strove to explain the behaviour of matter as that of a set of molecules. Assuming that molecules interact according to the known laws of classical mechanics, the pressure or temperature of a gas became for them statistical quantities, that is to say, averages over a large number of molecules. For example, Maxwell established that the temperature of a gas is a measure of the average kinetic energy of its molecules. A new branch of physics was born: statistical mechanics.
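Maxwell’s result can be put in modern numerical form: the average translational kinetic energy per molecule of an ideal gas is (3/2)kT, where k is Boltzmann’s constant. The Python sketch below uses this standard relation to estimate the root-mean-square speed of a nitrogen molecule at room temperature; the figures are illustrative and not taken from the text:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K (exact, by the 2019 SI definition)

def mean_kinetic_energy(t_kelvin):
    """Average translational kinetic energy per molecule: <E> = (3/2) k T."""
    return 1.5 * K_BOLTZMANN * t_kelvin

def rms_speed(t_kelvin, molecule_mass_kg):
    """Root-mean-square molecular speed, from (1/2) m v^2 = (3/2) k T."""
    return math.sqrt(3 * K_BOLTZMANN * t_kelvin / molecule_mass_kg)

# A nitrogen molecule (N2, mass about 4.65e-26 kg) at 300 K:
v = rms_speed(300.0, 4.65e-26)
print(f"rms speed of N2 at 300 K: {v:.0f} m/s")  # about 517 m/s
```

The air molecules around us thus move, on average, at roughly half a kilometre per second; what we perceive as temperature is a statistical average over this invisible agitation.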

The English physicist James Prescott Joule had shown that shaking water raises its temperature, and that the temperature rise is always proportional to the amount of mechanical energy supplied. He concluded that mechanical energy had been converted into heat, a particular form of energy. We have seen that this irreversible operation produces entropy. It occurred to the Austrian physicist Ludwig Boltzmann that entropy is a measure of molecular disorder. Indeed, shaking water communicates to its molecules a mechanical motion that is initially ordered. Gradually, this motion naturally tends to become disordered. Since temperature is a measure of the average kinetic energy of the molecules, the temperature rises. The mechanical energy associated with the ordered motion of the molecules has been converted into heat, a form of energy associated with their disordered motion.

The entropy of an isolated system increases because the motion of its molecules naturally tends to become disordered. This trend is irreversible. By contrast, if external energy is provided, the motion can become ordered. This is the case, for example, with a pressure difference: if an external source of energy maintains a pressure difference between two parts of a fluid, an ordered flow of molecules organizes itself, creating a current that tends to equalize the pressure. The experiment of the pan of water on the stove shows that a temperature difference can also create an ordered flow of molecules. The transition from a disordered to an ordered molecular state corresponds to a decrease of entropy.

2.4. Entropy and information

Shortly thereafter, the American physicist Willard Gibbs generalized Boltzmann’s theory to include chemical reactions, which are essential to explain life. For a long time his work remained misunderstood. The notion of order has a subjective appearance: everyone arranges his belongings as he sees fit. A desk covered with papers may seem in perfect order to its user, while looking in complete disorder to the person in charge of sweeping the floor. The notion of order is intimately linked to the notion of information. Order is a means to store and share information. In a workshop, a worker puts his tools in order not only to find them more easily himself, but also so that his co-workers can find them easily. The French word for “computer” is “ordinateur”: an apparatus that puts things in order.

It was not until the end of the Second World War that an American engineer and mathematician working in the field of telecommunications, Claude Shannon, looked at the problem of how to measure an amount of information. Trying to formalize the problem, he arrived at the mathematical formula Gibbs had given for entropy. Shannon’s expression showed that an increase of entropy