Shortlisted for the Royal Society Science Book Prize 2019

A magisterial history of calculus (and the people behind it) from one of the world's foremost mathematicians. This is the captivating story of mathematics' greatest ever idea: calculus. Without it, there would be no computers, no microwave ovens, no GPS, and no space travel. But before it gave modern man almost infinite powers, calculus was behind centuries of controversy, competition, and even death. Taking us on a thrilling journey through three millennia, Professor Steven Strogatz charts the development of this seminal achievement from the days of Archimedes to today's breakthroughs in chaos theory and artificial intelligence. Filled with idiosyncratic characters from Pythagoras to Fourier, Infinite Powers is a compelling human drama that reveals the legacy of calculus on nearly every aspect of modern civilisation, including science, politics, medicine, philosophy, and much besides.
Infinite Powers
Steven Strogatz is the Jacob Gould Schurman Professor of Applied Mathematics at Cornell University. A renowned teacher and one of the world’s most highly cited mathematicians, he has blogged about maths for the New York Times and The New Yorker. He is the author of Sync and The Joy of x. He lives in Ithaca, New York.
First published in the United States in 2019 by Houghton Mifflin Harcourt Publishing Company, 3 Park Avenue, 19th Floor, New York, New York 10016.
First published in hardback and trade paperback in Great Britain in 2019 by Atlantic Books, an imprint of Atlantic Books Ltd.
Copyright © Steven Strogatz, 2019
The moral right of Steven Strogatz to be identified as the author of this work has been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of both the copyright owner and the above publisher of this book.
The picture acknowledgements on p. 307 constitute an extension of this copyright page.
Every effort has been made to trace or contact all copyright holders.
The publishers will be pleased to make good any omissions or rectify any mistakes brought to their attention at the earliest opportunity.
10 9 8 7 6 5 4 3 2 1
A CIP catalogue record for this book is available from the British Library.
Hardback ISBN: 978 1 78649 294 4
Trade paperback ISBN: 978 1 78649 295 1
E-book ISBN: 978 1 78649 296 8
Printed in Great Britain
Atlantic Books
An imprint of Atlantic Books Ltd
Ormond House
26–27 Boswell Street
London
WC1N 3JZ
www.atlantic-books.co.uk
Introduction
1. Infinity
2. The Man Who Harnessed Infinity
3. Discovering the Laws of Motion
4. The Dawn of Differential Calculus
5. The Crossroads
6. The Vocabulary of Change
7. The Secret Fountain
8. Fictions of the Mind
9. The Logical Universe
10. Making Waves
11. The Future of Calculus
Conclusion
Acknowledgments
Illustration Credits
Notes
Bibliography
Index
Without calculus, we wouldn’t have cell phones, computers, or microwave ovens. We wouldn’t have radio. Or television. Or ultrasound for expectant mothers, or GPS for lost travelers. We wouldn’t have split the atom, unraveled the human genome, or put astronauts on the moon. We might not even have the Declaration of Independence.
It’s a curiosity of history that the world was changed forever by an arcane branch of mathematics. How could it be that a theory originally about shapes ultimately reshaped civilization?
The essence of the answer lies in a quip that the physicist Richard Feynman made to the novelist Herman Wouk when they were discussing the Manhattan Project. Wouk was doing research for a big novel he hoped to write about World War II, and he went to Caltech to interview physicists who had worked on the bomb, one of whom was Feynman. After the interview, as they were parting, Feynman asked Wouk if he knew calculus. No, Wouk admitted, he didn’t. “You had better learn it,” said Feynman. “It’s the language God talks.”
For reasons nobody understands, the universe is deeply mathematical. Maybe God made it that way. Or maybe it’s the only way a universe with us in it could be, because nonmathematical universes can’t harbor life intelligent enough to ask the question. In any case, it’s a mysterious and marvelous fact that our universe obeys laws of nature that always turn out to be expressible in the language of calculus as sentences called differential equations. Such equations describe the difference between something right now and the same thing an instant later or between something right here and the same thing infinitesimally close by. The details differ depending on what part of nature we’re talking about, but the structure of the laws is always the same. To put this awesome assertion another way, there seems to be something like a code to the universe, an operating system that animates everything from moment to moment and place to place. Calculus taps into this order and expresses it.
Isaac Newton was the first to glimpse this secret of the universe. He found that the orbits of the planets, the rhythm of the tides, and the trajectories of cannonballs could all be described, explained, and predicted by a small set of differential equations. Today we call them Newton’s laws of motion and gravity. Ever since Newton, we have found that the same pattern holds whenever we uncover a new part of the universe. From the old elements of earth, air, fire, and water to the latest in electrons, quarks, black holes, and superstrings, every inanimate thing in the universe bends to the rule of differential equations. I bet this is what Feynman meant when he said that calculus is the language God talks. If anything deserves to be called the secret of the universe, calculus is it.
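To see what such a law looks like on the page, here are Newton's second law of motion and his law of gravitation in modern notation (the symbols are my addition; Newton's own presentation was geometric):

$$ \vec{F} = m\,\frac{d^{2}\vec{x}}{dt^{2}}, \qquad \vec{F}_{\text{gravity}} = -\,\frac{GMm}{r^{2}}\,\hat{r}. $$

The second derivative of position encodes exactly the "difference between something right now and the same thing an instant later," taken twice over, which is what makes this a differential equation.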
By inadvertently discovering this strange language, first in a corner of geometry and later in the code of the universe, then by learning to speak it fluently and decipher its idioms and nuances, and finally by harnessing its forecasting powers, humans have used calculus to remake the world.
That’s the central argument of this book.
If it’s right, it means the answer to the ultimate question of life, the universe, and everything is not 42, with apologies to fans of Douglas Adams and The Hitchhiker’s Guide to the Galaxy. But Deep Thought was on the right track: the secret of the universe is indeed mathematical.
Feynman’s quip about God’s language raises many profound questions. What is calculus? How did humans figure out that God speaks it (or, if you prefer, that the universe runs on it)? What are differential equations and what have they done for the world, not just in Newton’s time but in our own? Finally, how can any of these stories and ideas be conveyed enjoyably and intelligibly to readers of goodwill like Herman Wouk, a very thoughtful, curious, knowledgeable person with little background in advanced math?
In a coda to the story of his encounter with Feynman, Wouk wrote that he didn’t get around to even trying to learn calculus for fourteen years. His big novel ballooned into two big novels — The Winds of War and War and Remembrance, each about a thousand pages. Once those were finally done, he tried to teach himself by reading books with titles like Calculus Made Easy — but no luck there. He poked around in a few textbooks, hoping, as he put it, “to come across one that might help a mathematical ignoramus like me, who had spent his college years in the humanities — i.e., literature and philosophy — in an adolescent quest for the meaning of existence, little knowing that calculus, which I had heard of as a difficult bore leading nowhere, was the language God talks.” After the textbooks proved impenetrable, he hired an Israeli math tutor, hoping to pick up a little calculus and improve his spoken Hebrew on the side, but both hopes ran aground. Finally, in desperation, he audited a high-school calculus class, but he fell too far behind and had to give up after a couple of months. The kids clapped for him on his way out. He said it was like sympathy applause for a pitiful showbiz act.
I’ve written Infinite Powers in an attempt to make the greatest ideas and stories of calculus accessible to everyone. It shouldn’t be necessary to endure what Herman Wouk did to learn about this landmark in human history. Calculus is one of humankind’s most inspiring collective achievements. It isn’t necessary to learn how to do calculus to appreciate it, just as it isn’t necessary to learn how to prepare fine cuisine to enjoy eating it. I’m going to try to explain everything we’ll need with the help of pictures, metaphors, and anecdotes. I’ll also walk us through some of the finest equations and proofs ever created, because how could we visit a gallery without seeing its masterpieces? As for Herman Wouk, he is 103 years old as of this writing. I don’t know if he’s learned calculus yet, but if not, this one’s for you, Mr. Wouk.
As should be obvious by now, I’ll be giving an applied mathematician’s take on the story and significance of calculus. A historian of mathematics would tell it differently. So would a pure mathematician. What fascinates me as an applied mathematician is the push and pull between the real world around us and the ideal world in our heads. Phenomena out there guide the mathematical questions we ask; conversely, the math we imagine sometimes foreshadows what actually happens out there in reality. When it does, the effect is uncanny.
To be an applied mathematician is to be outward-looking and intellectually promiscuous. To those in my field, math is not a pristine, hermetically sealed world of theorems and proofs echoing back on themselves. We embrace all kinds of subjects: philosophy, politics, science, history, medicine, all of it. That’s the story I want to tell — the world according to calculus.
This is a much broader view of calculus than usual. It encompasses the many cousins and spinoffs of calculus, both within mathematics and in the adjacent disciplines. Since this big-tent view is unconventional, I want to make sure it doesn’t cause any confusion. For example, when I said earlier that without calculus we wouldn’t have computers and cell phones and so on, I certainly didn’t mean to suggest that calculus produced all these wonders by itself. Far from it. Science and technology were essential partners — and arguably the stars of the show. My point is merely that calculus has also played a crucial role, albeit often a supporting one, in giving us the world we know today.
Take the story of wireless communication. It began with the discovery of the laws of electricity and magnetism by scientists like Michael Faraday and André-Marie Ampère. Without their observations and tinkering, the crucial facts about magnets, electrical currents, and their invisible force fields would have remained unknown, and the possibility of wireless communication would never have been realized. So, obviously, experimental physics was indispensable here.
But so was calculus. In the 1860s, a Scottish mathematical physicist named James Clerk Maxwell recast the experimental laws of electricity and magnetism into a symbolic form that could be fed into the maw of calculus. After some churning, the maw disgorged an equation that didn’t make sense. Apparently something was missing in the physics. Maxwell suspected that Ampère’s law was the culprit. He tried patching it up by including a new term in his equation — a hypothetical current that would resolve the contradiction — and then let calculus churn again. This time it spat out a sensible result, a simple, elegant wave equation much like the equation that describes the spread of ripples on a pond. Except Maxwell’s result was predicting a new kind of wave, with electric and magnetic fields dancing together in a pas de deux. A changing electric field would generate a changing magnetic field, which in turn would regenerate the electric field, and so on, each field bootstrapping the other forward, propagating together as a wave of traveling energy. And when Maxwell calculated the speed of this wave, he found — in what must have been one of the greatest Aha! moments in history — that it moved at the speed of light. So he used calculus not only to predict the existence of electromagnetic waves but also to solve an age-old mystery: What was the nature of light? Light, he realized, was an electromagnetic wave.
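In modern vector notation (a summary of mine; Maxwell wrote his equations quite differently), the result for the electric field in empty space is

$$ \frac{\partial^{2}\vec{E}}{\partial t^{2}} = \frac{1}{\mu_{0}\varepsilon_{0}}\,\nabla^{2}\vec{E}, \qquad c = \frac{1}{\sqrt{\mu_{0}\varepsilon_{0}}} \approx 3.0 \times 10^{8} \text{ m/s}, $$

where μ₀ and ε₀ are constants measured in tabletop experiments on magnets and charges. The wave speed c falls out of those two numbers, and it matches the measured speed of light.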
Maxwell’s prediction of electromagnetic waves prompted an experiment by Heinrich Hertz in 1887 that proved their existence. A decade later, Nikola Tesla built the first radio communication system, and five years after that, Guglielmo Marconi transmitted the first wireless messages across the Atlantic. Soon came television, cell phones, and all the rest.
Clearly, calculus could not have done this alone. But equally clearly, none of it would have happened without calculus. Or, perhaps more accurately, it might have happened, but only much later, if at all.
The story of Maxwell illustrates a theme we’ll be seeing again and again. It’s often said that mathematics is the language of science. There’s a great deal of truth to that. In the case of electromagnetic waves, it was a key first step for Maxwell to translate the laws that had been discovered experimentally into equations phrased in the language of calculus.
But the language analogy is incomplete. Calculus, like other forms of mathematics, is much more than a language; it’s also an incredibly powerful system of reasoning. It lets us transform one equation into another by performing various symbolic operations on them, operations subject to certain rules. Those rules are deeply rooted in logic, so even though it might seem like we’re just shuffling symbols around, we’re actually constructing long chains of logical inference. The symbol shuffling is useful shorthand, a convenient way to build arguments too intricate to hold in our heads.
If we’re lucky and skillful enough — if we transform the equations in just the right way — we can get them to reveal their hidden implications. To a mathematician, the process feels almost palpable. It’s as if we’re manipulating the equations, massaging them, trying to relax them enough so that they’ll spill their secrets. We want them to open up and talk to us.
Creativity is required, because it often isn’t clear which manipulations to perform. In Maxwell’s case, there were countless ways to transform his equations, all of which would have been logically acceptable but only some of which would have been scientifically revealing. Given that he didn’t even know what he was searching for, he might easily have gotten nothing out of his equations but incoherent mumblings (or the symbolic equivalent thereof). Fortunately, however, they did have a secret to reveal. With just the right prodding, they gave up the wave equation.
At that point the linguistic function of calculus took over again. When Maxwell translated his abstract symbols back into reality, they predicted that electricity and magnetism could propagate together as a wave of invisible energy moving at the speed of light. In a matter of decades, this revelation would change the world.
It’s eerie that calculus can mimic nature so well, given how different the two domains are. Calculus is an imaginary realm of symbols and logic; nature is an actual realm of forces and phenomena. Yet somehow, if the translation from reality into symbols is done artfully enough, the logic of calculus can use one real-world truth to generate another. Truth in, truth out. Start with something that is empirically true and symbolically formulated (as Maxwell did with the laws of electricity and magnetism), apply the right logical manipulations, and out comes another empirical truth, possibly a new one, a fact about the universe that nobody knew before (like the existence of electromagnetic waves). In this way, calculus lets us peer into the future and predict the unknown. That’s what makes it such a powerful tool for science and technology.
But why should the universe respect the workings of any kind of logic, let alone the kind of logic that we puny humans can muster? This is what Einstein marveled at when he wrote, “The eternal mystery of the world is its comprehensibility.” And it’s what Eugene Wigner meant in his essay “On the Unreasonable Effectiveness of Mathematics in the Natural Sciences” when he wrote, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”
This sense of awe goes way back in the history of mathematics. According to legend, Pythagoras felt it around 550 BCE when he and his disciples discovered that music was governed by the ratios of whole numbers. For instance, imagine plucking a guitar string. As the string vibrates, it emits a certain note. Now put your finger on a fret exactly halfway up the string and pluck it again. The vibrating part of the string is now half as long as it used to be — a ratio of 1 to 2 — and it sounds precisely an octave higher than the original note (the musical distance from one do to the next in the do-re-mi-fa-sol-la-ti-do scale). If instead the vibrating string is ⅔ of its original length, the note it makes goes up by a fifth (the interval from do to sol; think of the first two notes of the Star Wars theme). And if the vibrating part is ¾ as long as it was before, the note goes up by a fourth (the interval between the first two notes of “Here Comes the Bride”). The ancient Greek musicians knew about the melodic concepts of octaves, fourths, and fifths and considered them beautiful. This unexpected link between music (the harmony of this world) and numbers (the harmony of an imagined world) led the Pythagoreans to the mystical belief that all is number. They are said to have believed that even the planets in their orbits made music, the music of the spheres.
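These ratios are easy to check numerically. For an ideal string, pitch (frequency) is inversely proportional to the vibrating length, so the intervals fall out of a few lines of arithmetic. A small sketch of mine, with concert A at 440 Hz as an arbitrary reference and the ideal-string assumption made explicit:

```python
# Pitch ratios of an ideal plucked string: frequency is proportional
# to 1 / (vibrating length). Illustrative sketch; real strings also
# have stiffness and damping, which we ignore here.

BASE_FREQ = 440.0  # reference pitch in Hz (concert A); an arbitrary choice

intervals = {
    "octave (1/2 of the string)": 1 / 2,
    "fifth  (2/3 of the string)": 2 / 3,
    "fourth (3/4 of the string)": 3 / 4,
}

for name, length_ratio in intervals.items():
    freq = BASE_FREQ / length_ratio  # shorter string, higher pitch
    print(f"{name}: frequency ratio {1 / length_ratio:.4f}, pitch {freq:.1f} Hz")
```

Halving the string doubles the frequency (a 2:1 ratio, the octave); the fifth and fourth come out as 3:2 and 4:3, whole-number ratios just as the Pythagoreans found.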
Ever since then, many of history’s greatest mathematicians and scientists have come down with cases of Pythagorean fever. The astronomer Johannes Kepler had it bad. So did the physicist Paul Dirac. As we’ll see, it drove them to seek, and to dream, and to long for the harmonies of the universe. In the end it pushed them to make their own discoveries that changed the world.
To help you understand where we’re headed, let me say a few words about what calculus is, what it wants (metaphorically speaking), and what distinguishes it from the rest of mathematics. Fortunately, a single big, beautiful idea runs through the subject from beginning to end. Once we become aware of this idea, the structure of calculus falls into place as variations on a unifying theme.
Alas, most calculus courses bury the theme under an avalanche of formulas, procedures, and computational tricks. Come to think of it, I’ve never seen it spelled out anywhere even though it’s part of calculus culture and every expert knows it implicitly. Let’s call it the Infinity Principle. It will guide us on our journey just as it guided the development of calculus itself, conceptually as well as historically. I’m tempted to state it right now, but at this point it would sound like mumbo jumbo. It will be easier to appreciate if we inch our way up to it by asking what calculus wants . . . and how it gets what it wants.
In a nutshell, calculus wants to make hard problems simpler. It is utterly obsessed with simplicity. That might come as a surprise to you, given that calculus has a reputation for being complicated. And there’s no denying that some of its leading textbooks exceed a thousand pages and weigh as much as bricks. But let’s not be judgmental. Calculus can’t help how it looks. Its bulkiness is unavoidable. It looks complicated because it’s trying to tackle complicated problems. In fact, it has tackled and solved some of the most difficult and important problems our species has ever faced.
Calculus succeeds by breaking complicated problems down into simpler parts. That strategy, of course, is not unique to calculus. All good problem-solvers know that hard problems become easier when they’re split into chunks. The truly radical and distinctive move of calculus is that it takes this divide-and-conquer strategy to its utmost extreme — all the way out to infinity. Instead of cutting a big problem into a handful of bite-size pieces, it keeps cutting and cutting relentlessly until the problem has been chopped and pulverized into its tiniest conceivable parts, leaving infinitely many of them. Once that’s done, it solves the original problem for all the tiny parts, which is usually a much easier task than solving the initial giant problem. The remaining challenge at that point is to put all the tiny answers back together again. That tends to be a much harder step, but at least it’s not as difficult as the original problem was.
Thus, calculus proceeds in two phases: cutting and rebuilding. In mathematical terms, the cutting process always involves infinitely fine subtraction, which is used to quantify the differences between the parts. Accordingly, this half of the subject is called differential calculus. The reassembly process always involves infinite addition, which integrates the parts back into the original whole. This half of the subject is called integral calculus.
This strategy can be used on anything that we can imagine slicing endlessly. Such infinitely divisible things are called continua and are said to be continuous, from the Latin roots con (together with) and tenere (hold), meaning uninterrupted or holding together. Think of the rim of a perfect circle, a steel girder in a suspension bridge, a bowl of soup cooling off on the kitchen table, the parabolic trajectory of a javelin in flight, or the length of time you have been alive. A shape, an object, a liquid, a motion, a time interval — all of them are grist for the calculus mill. They’re all continuous, or nearly so.
Notice the act of creative fantasy here. Soup and steel are not really continuous. At the scale of everyday life, they appear to be, but at the scale of atoms or superstrings, they’re not. Calculus ignores the inconvenience posed by atoms and other uncuttable entities, not because they don’t exist but because it’s useful to pretend that they don’t. As we’ll see, calculus has a penchant for useful fictions.
More generally, the kinds of entities modeled as continua by calculus include almost anything one can think of. Calculus has been used to describe how a ball rolls continuously down a ramp, how a sunbeam travels continuously through water, how the continuous flow of air around a wing keeps a hummingbird or an airplane aloft, and how the concentration of HIV particles in a patient’s bloodstream plummets continuously in the days after he or she starts combination-drug therapy. In every case the strategy remains the same: split a complicated but continuous problem into infinitely many simpler pieces, then solve them separately and put them back together.
Now we’re finally ready to state the big idea.
To shed light on any continuous shape, object, motion, process, or phenomenon — no matter how wild and complicated it may appear — reimagine it as an infinite series of simpler parts, analyze those, and then add the results back together to make sense of the original whole.
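Here is the principle in miniature as a computational sketch (my illustration, not part of the original argument). To find the area under a parabolic arc, slice the region into n thin rectangles, add up their areas, and watch the total settle toward the exact answer as n grows:

```python
# The Infinity Principle in miniature: approximate the area under
# the parabola y = x * (1 - x) on [0, 1] by summing n thin slices.
# Integral calculus gives the exact area as 1/6.

def sliced_area(n: int) -> float:
    """Total area of n rectangles of width 1/n under the parabola."""
    width = 1.0 / n
    total = 0.0
    for i in range(n):
        x_mid = (i + 0.5) * width          # height sampled mid-slice
        total += width * x_mid * (1.0 - x_mid)
    return total

for n in (4, 16, 256, 65536):
    print(f"{n:>6} slices: area ≈ {sliced_area(n):.10f}")
print(f" exact area:       {1 / 6:.10f}")
```

The finitely many slices only approximate the area; the exact value 1/6 is the limit they home in on.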
The rub in all of this is the need to cope with infinity. That’s easier said than done. Although the carefully controlled use of infinity is the secret to calculus and the source of its enormous predictive power, it is also calculus’s biggest headache. Like Frankenstein’s monster or the golem in Jewish folklore, infinity tends to slip out of its master’s control. As in any tale of hubris, the monster inevitably turns on its maker.
The creators of calculus were aware of the danger but still found infinity irresistible. Sure, occasionally it ran amok, leaving paradox, confusion, and philosophical havoc in its wake. Yet after each of these episodes, mathematicians always managed to subdue the monster, rationalize its behavior, and put it back to work. In the end, everything always turned out fine. Calculus gave the right answers, even when its creators couldn’t explain why. The desire to harness infinity and exploit its power is a narrative thread that runs through the whole twenty-five-hundred-year story of calculus.
All this talk of desire and confusion might seem out of place, given that mathematics is usually portrayed as exact and impeccably rational. It is rational, but not always initially. Creation is intuitive; reason comes later. In the story of calculus, more than in other parts of mathematics, logic has always lagged behind intuition. This makes the subject feel especially human and approachable, and its geniuses more like the rest of us.
The Infinity Principle organizes the story of calculus around a methodological theme. But calculus is as much about mysteries as it is about methodology. Three mysteries above all have spurred its development: the mystery of curves, the mystery of motion, and the mystery of change.
The fruitfulness of these mysteries has been a testament to the value of pure curiosity. Puzzles about curves, motion, and change might seem unimportant at first glance, maybe even hopelessly esoteric. But because they touch on such rich conceptual issues and because mathematics is so deeply woven into the fabric of the universe, the solution to these mysteries has had far-reaching impacts on the course of civilization and on our everyday lives. As we’ll see in the chapters ahead, we reap the benefits of these investigations whenever we listen to music on our phones, breeze through the line at the supermarket thanks to a laser checkout scanner, or find our way home with a GPS gadget.
It all started with the mystery of curves. Here I’m using the term curves in a very loose sense to mean any sort of curved line, curved surface, or curved solid — think of a rubber band, a wedding ring, a floating bubble, the contours of a vase, or a solid tube of salami. To keep things as simple as possible, the early geometers typically concentrated on abstract, idealized versions of curved shapes and ignored thickness, roughness, and texture. The surface of a mathematical sphere, for instance, was imagined to be an infinitesimally thin, smooth, perfectly round membrane with none of the thickness, bumpiness, or hairiness of a coconut shell. Even under these idealized assumptions, curved shapes posed baffling conceptual difficulties because they weren’t made of straight pieces. Triangles and squares were easy. So were cubes. They were composed of straight lines and flat pieces of planes joined together at a small number of corners. It wasn’t hard to figure out their perimeters or surface areas or volumes. Geometers all over the world — in ancient Babylon and Egypt, China and India, Greece and Japan — knew how to solve problems like these. But round things were brutal. No one could figure out how much surface area a sphere had or how much volume it could hold. Even finding the circumference and area of a circle was an insurmountable problem in the old days. There was no way to get started. There were no straight pieces to latch onto. Anything that was curved was inscrutable.
So this is how calculus began. It grew out of geometers’ curiosity and frustration with roundness. Circles and spheres and other curved shapes were the Himalayas of their era. It wasn’t that they posed important practical issues, at least not at first. It was simply a matter of the human spirit’s thirst for adventure. Like explorers climbing Mount Everest, geometers wanted to solve curves because they were there.
The breakthrough came from insisting that curves were actually made of straight pieces. It wasn’t true, but one could pretend that it was. The only hitch was that those pieces would then have to be infinitesimally small and infinitely numerous. Through this fantastic conception, integral calculus was born. This was the earliest use of the Infinity Principle. The story of how it developed will occupy us for several chapters, but its essence is already there, in embryonic form, in a simple, intuitive insight: If we zoom in closely enough on a circle (or anything else that is curved and smooth), the portion of it under the microscope begins to look straight and flat. So in principle, at least, it should be possible to calculate whatever we want about a curved shape by adding up all the straight little pieces. Figuring out exactly how to do this — no easy feat — took the efforts of the world’s greatest mathematicians over many centuries. Collectively, however, and sometimes through bitter rivalries, they eventually began to make headway on the riddle of curves. Spinoffs today, as we’ll see in chapter 2, include the math needed to draw realistic-looking hair, clothing, and faces of characters in computer-animated movies and the calculations required for doctors to perform facial surgery on a virtual patient before they operate on the real one.
The quest to solve the mystery of curves reached a fever pitch when it became clear that curves were much more than geometric diversions. They were a key to unlocking the secrets of nature. They arose naturally in the parabolic arc of a ball in flight, in the elliptical orbit of Mars as it moved around the sun, and in the convex shape of a lens that could bend and focus light where it was needed, as was required for the burgeoning development of microscopes and telescopes in late Renaissance Europe.
And so began the second great obsession: a fascination with the mysteries of motion on Earth and in the solar system. Through observation and ingenious experiments, scientists discovered tantalizing numerical patterns in the simplest moving things. They measured the swinging of a pendulum, clocked the accelerating descent of a ball rolling down a ramp, and charted the stately procession of planets across the sky. The patterns they found enraptured them — indeed, Johannes Kepler fell into a state of self-described “sacred frenzy” when he found his laws of planetary motion — because those patterns seemed to be signs of God’s handiwork. From a more secular perspective, the patterns reinforced the claim that nature was deeply mathematical, just as the Pythagoreans had maintained. The only catch was that nobody could explain the marvelous new patterns, at least not with the existing forms of math. Arithmetic and geometry were not up to the task, even in the hands of the greatest mathematicians.
The trouble was that the motions weren’t steady. A ball rolling down a ramp kept changing its speed, and a planet revolving around the sun kept changing its direction of travel. Worse yet, the planets moved faster when they got close to the sun and slowed down as they receded from it. There was no known way to deal with motion that kept changing in ever-changing ways. Earlier mathematicians had worked out the mathematics of the most trivial kind of motion, namely, motion at a constant speed where distance equals rate times time. But when speed changed and kept on changing continuously, all bets were off. Motion was proving to be as much of a conceptual Mount Everest as curves were.
As we’ll see in the middle chapters of this book, the next great advances in calculus grew out of the quest to solve the mystery of motion. The Infinity Principle came to the rescue, just as it had for curves. This time the act of wishful fantasy was to pretend that motion at a changing speed was made up of infinitely many, infinitesimally brief motions at a constant speed. To visualize what this would mean, imagine being in a car with a jerky driver at the wheel. As you anxiously watch the speedometer, it moves up and down with every jerk. But over a millisecond, even the jerkiest driver can’t make the speedometer needle move by much. And over an interval much shorter than that — an infinitesimal time interval — the needle won’t move at all. Nobody can tap the gas pedal that fast.
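That fantasy translates directly into a calculation. A sketch of mine, with v(t) = 10 + 5 sin t standing in for the jerky driver's speed (an arbitrary model chosen only for the demo): freeze the speed during each tiny slice of time, multiply by the slice's duration, and add up the pieces.

```python
# Distance traveled at an ever-changing speed, computed by pretending
# the speed is constant over each tiny time slice.

import math

def distance(t_end: float, n_slices: int) -> float:
    """Sum speed * dt over n_slices short intervals from 0 to t_end."""
    dt = t_end / n_slices
    total = 0.0
    for i in range(n_slices):
        t = (i + 0.5) * dt                     # speed sampled mid-slice
        total += (10 + 5 * math.sin(t)) * dt   # v(t) = 10 + 5 sin t, in m/s
    return total

t_end = 2.0
exact = 10 * t_end + 5 * (1 - math.cos(t_end))  # from integral calculus
for n in (10, 1000, 100000):
    print(f"{n:>6} slices: {distance(t_end, n):.8f} m  (exact {exact:.8f} m)")
```

The more finely time is sliced, the closer the sum creeps to the exact distance, just as the Infinity Principle promises.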
These ideas coalesced in the younger half of calculus, differential calculus. It was precisely what was needed to work with the infinitesimally small changes of time and distance that arose in the study of ever-changing motion as well as with the infinitesimal straight pieces of curves that arose in analytic geometry, the newfangled study of curves defined by algebraic equations that was all the rage in the first half of the 1600s. Yes, at one time, algebra was a craze, as we’ll see. Its popularity was a boon for all fields of mathematics, including geometry, but it also created an unruly jungle of new curves to explore. Thus, the mysteries of curves and motion collided. They were now both at the center stage of calculus in the mid-1600s, banging into each other, creating mathematical mayhem and confusion. Out of the tumult, differential calculus began to flower, but not without controversy. Some mathematicians were criticized for playing fast and loose with infinity. Others derided algebra as a scab of symbols. With all the bickering, progress was fitful and slow.
And then a child was born on Christmas Day. This young messiah of calculus was an unlikely hero. Born premature and fatherless and abandoned by his mother at age three, he was a lonesome boy with dark thoughts who grew into a secretive, suspicious young man. Yet Isaac Newton would make a mark on the world like no one before or since.
First, he solved the holy grail of calculus: he discovered how to put the pieces of a curve back together again — and how to do it easily, quickly, and systematically. By combining the symbols of algebra with the power of infinity, he found a way to represent any curve as a sum of infinitely many simpler curves described by powers of a variable x, like x², x³, x⁴, and so on. With these ingredients alone, he could cook up any curve he wanted by putting in a pinch of x and a dash of x² and a heaping tablespoon of x³. It was like a master recipe and a universal spice rack, butcher shop, and vegetable garden, all rolled into one. With it he could solve any problem about shapes or motions that had ever been considered.
Then he cracked the code of the universe. Newton discovered that motion of any kind always unfolds one infinitesimal step at a time, steered from moment to moment by mathematical laws written in the language of calculus. With just a handful of differential equations (his laws of motion and gravity), he could explain everything from the arc of a cannonball to the orbits of the planets. His astonishing “system of the world” unified heaven and earth, launched the Enlightenment, and changed Western culture. Its impact on the philosophers and poets of Europe was immense. He even influenced Thomas Jefferson and the writing of the Declaration of Independence, as we’ll see. In our own time, Newton’s ideas underpinned the space program by providing the mathematics necessary for trajectory design, the work done at NASA by African-American mathematician Katherine Johnson and her colleagues (the heroines of the book and hit movie Hidden Figures).
With the mysteries of curves and motion now settled, calculus moved on to its third lifelong obsession: the mystery of change. It’s a cliché, but it’s true all the same — nothing is constant but change. It’s rainy one day and sunny the next. The stock market rises and falls. Emboldened by the Newtonian paradigm, the later practitioners of calculus asked: Are there laws of change similar to Newton’s laws of motion? Are there laws for population growth, the spread of epidemics, and the flow of blood in an artery? Can calculus be used to describe how electrical signals propagate along nerves or to predict the flow of traffic on a highway?
By pursuing this ambitious agenda, always in cooperation with other parts of science and technology, calculus has helped make the world modern. Using observation and experiment, scientists worked out the laws of change and then used calculus to solve them and make predictions. For example, in 1917 Albert Einstein applied calculus to a simple model of atomic transitions to predict a remarkable effect called stimulated emission (which is what the s and e stand for in laser, an acronym for light amplification by stimulated emission of radiation). He theorized that under certain circumstances, light passing through matter could stimulate the production of more light at the same wavelength and moving in the same direction, creating a cascade of light through a kind of chain reaction that would result in an intense, coherent beam. A few decades later, the prediction proved to be accurate. The first working lasers were built in the early 1960s. Since then, they have been used in everything from compact-disc players and laser-guided weaponry to supermarket bar-code scanners and medical lasers.
The laws of change in medicine are not as well understood as those in physics. Yet even when applied to rudimentary models, calculus has been able to make lifesaving contributions. For example, in chapter 8 we’ll see how a differential-equation model developed by an immunologist and an AIDS researcher played a part in shaping the modern three-drug combination therapy for patients infected with HIV. The insights provided by the model overturned the prevailing view that the virus was lying dormant in the body; in fact, it was in a raging battle with the immune system every minute of every day. With the new understanding that calculus helped provide, HIV infection has been transformed from a near-certain death sentence to a manageable chronic disease — at least for those with access to combination-drug therapy.
Admittedly, some aspects of our ever-changing world lie beyond the approximations and wishful thinking inherent in the Infinity Principle. In the subatomic realm, for example, physicists can no longer think of an electron as a classical particle following a smooth path in the same way that a planet or a cannonball does. According to quantum mechanics, trajectories become jittery, blurry, and poorly defined at the microscopic scale, so we need to describe the behavior of electrons as probability waves instead of Newtonian trajectories. As soon as we do that, however, calculus returns triumphantly. It governs the evolution of probability waves through something called the Schrödinger equation.
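For the record (the notation is mine; the text only names the equation), the Schrödinger equation for a particle of mass m in a potential V is itself a differential equation, steering the probability wave ψ from each instant to the next:

$$ i\hbar\,\frac{\partial \psi}{\partial t} = -\,\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi + V\psi. $$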
It’s incredible but true: Even in the subatomic realm where Newtonian physics breaks down, Newtonian calculus still works. In fact, it works spectacularly well. As we’ll see in the pages ahead, it has teamed up with quantum mechanics to predict the remarkable effects that underlie medical imaging, from MRI and CT scans to the more exotic positron emission tomography.
It’s time for us to take a closer look at the language of the universe. Naturally, the place to start is at infinity.
THE BEGINNINGS OF mathematics were grounded in everyday concerns. Shepherds needed to keep track of their flocks. Farmers needed to weigh the grain reaped in the harvest. Tax collectors had to decide how many cows or chickens each peasant owed the king. Out of such practical demands came the invention of numbers. At first they were tallied on fingers and toes. Later they were scratched on animal bones. As their representation evolved from scratches to symbols, numbers facilitated everything from taxation and trade to accounting and census taking. We see evidence of all this in Mesopotamian clay tablets written more than five thousand years ago: row after row of entries recorded with the wedge-shaped symbols called cuneiform.
Along with numbers, shapes mattered too. In ancient Egypt, the measurement of lines and angles was of paramount importance. Each year surveyors had to redraw the boundaries of farmers’ fields after the summer flooding of the Nile washed the borderlines away. That activity later gave its name to the study of shape in general: geometry, from the Greek gē, “earth,” and metrēs, “measurer.”
At the start, geometry was hard-edged and sharp-cornered. Its predilection for straight lines, planes, and angles reflected its utilitarian origins — triangles were useful as ramps, pyramids as monuments and tombs, and rectangles as tabletops, altars, and plots of land. Builders and carpenters used right angles for plumb lines. For sailors, architects, and priests, knowledge of straight-line geometry was essential for surveying, navigating, keeping the calendar, predicting eclipses, and erecting temples and shrines.
Yet even when geometry was fixated on straightness, one curve always stood out, the most perfect of all: the circle. We see circles in tree rings, in the ripples on a pond, in the shape of the sun and the moon. Circles surround us in nature. And as we gaze at circles, they gaze back at us, literally. There they are in the eyes of our loved ones, in the circular outlines of their pupils and irises. Circles span the practical and the emotional, as wheels and wedding rings, and they are mystical too. Their eternal return suggests the cycle of the seasons, reincarnation, eternal life, and never-ending love. No wonder circles have commanded attention for as long as humanity has studied shapes.
Mathematically, circles embody change without change. A point moving around the circumference of a circle changes direction without ever changing its distance from a center. It’s a minimal form of change, a way to change and curve in the slightest way possible. And, of course, circles are symmetrical. If you rotate a circle about its center, it looks unchanged. That rotational symmetry may be why circles are so ubiquitous. Whenever some aspect of nature doesn’t care about direction, circles are bound to appear. Consider what happens when a raindrop hits a puddle: tiny ripples expand outward from the point of impact. Because they spread equally fast in all directions and because they started at a single point, the ripples have to be circles. Symmetry demands it.
Circles can also give birth to other curved shapes. If we imagine skewering a circle on its diameter and spinning it around that axis in three-dimensional space, the rotating circle makes a sphere, the shape of a globe or a ball. When a circle is moved vertically into the third dimension along a straight line at right angles to its plane, it makes a cylinder, the shape of a can or a hatbox. If it shrinks at the same time as it’s moving vertically, it makes a cone; if it expands as it moves vertically, it makes a truncated cone (the shape of a lampshade).
Circles, spheres, cylinders, and cones fascinated the early geometers, but they found them much harder to analyze than triangles, rectangles, squares, cubes, and other rectilinear shapes made of straight lines and flat planes. They wondered about the areas of curved surfaces and the volumes of curved solids but had no clue how to solve such problems. Roundness defeated them.
Calculus began as an outgrowth of geometry. Back around 250 BCE in ancient Greece, it was a hot little mathematical startup devoted to the mystery of curves. The ambitious plan of its devotees was to use infinity to build a bridge between the curved and the straight. The hope was that once that link was established, the methods and techniques of straight-line geometry could be shuttled across the bridge and brought to bear on the mystery of curves. With infinity’s help, all the old problems could be solved. At least, that was the pitch.
At the time, that plan must have seemed pretty far-fetched. Infinity had a dubious reputation. It was known for being scary, not useful. Worse yet, it was nebulous and bewildering. What was it exactly? A number? A place? A concept?
Nevertheless, as we’ll see soon and in the chapters to come, infinity turned out to be a godsend. Given all the discoveries and technologies that ultimately flowed from calculus, the idea of using infinity to solve difficult geometry problems has to rank as one of the best ideas anyone ever had.
Of course, none of that could have been foreseen in 250 BCE. Still, infinity did put some impressive notches in its belt right away. One of its first and finest was the solution of a long-standing enigma: how to find the area of a circle.
Before I go into the details, let me sketch the argument. The strategy is to reimagine the circle as a pizza. Then we’ll slice that pizza into infinitely many pieces and magically rearrange them to make a rectangle. That will give us the answer we’re looking for, since moving slices around obviously doesn’t change their area from what they were originally, and we know how to find the area of a rectangle: we just multiply its width times its height. The result is a formula for the area of a circle.
For the sake of this argument, the pizza needs to be an idealized mathematical pizza, perfectly flat and round, with an infinitesimally thin crust. Its circumference, abbreviated by the letter C, is the distance around the pizza, measured by tracing around the crust. Circumference isn’t something that pizza lovers ordinarily care about, but if we wanted to, we could measure C with a tape measure.
Another quantity of interest is the pizza’s radius, r, defined as the distance from its center to every point on its crust. In particular, r also measures how long the straight side of a slice is, assuming that all the slices are equal and cut from the center out to the crust.
Suppose we start by dividing the pie into four quarters. Here’s one way to rearrange them, but it doesn’t look too promising.
The new shape looks bulbous and strange with its scalloped top and bottom. It’s certainly not a rectangle, so its area is not easy to guess. We seem to be going backward. But as in any drama, the hero needs to get into trouble before triumphing. The dramatic tension is building.
While we’re stuck here, though, we should notice two things, because they are going to hold true throughout the proof, and they will ultimately give us the dimensions of the rectangle we’re seeking. The first observation is that half of the crust became the curvy top of the new shape, and the other half became the bottom. So the curvy top has a length equal to half the circumference, C/2, and so does the bottom, as shown in the diagram. That length is eventually going to turn into the long side of the rectangle, as we’ll see. The other thing to notice is that the tilted straight sides of the bulbous shape are just the sides of the original pizza slices, so they still have length r. That length is eventually going to turn into the short side of the rectangle.
The reason we aren’t seeing any signs of the desired rectangle yet is that we haven’t cut enough slices. If we make eight slices and rearrange them like so, our picture starts to look more nearly rectangular.
In fact, the pizza starts to look like a parallelogram. Not bad — at least it’s almost rectilinear. And the scallops on the top and bottom are a lot less bulbous than they were. They flattened out when we used more slices. As before, they have curvy length C/2 on the top and bottom and a slanted-side length r.
To spruce up the picture even more, suppose we cut one of the slanted end pieces in half lengthwise and shift that half to the other side.
Now the shape looks very much like a rectangle. Admittedly, it’s still not perfect because of the scalloped top and bottom caused by the curvature of the crust, but at least we’re making progress.
Since making more pieces seems to be helping, let’s keep slicing. With sixteen slices and the cosmetic sprucing-up of the end piece, as we did before, we get this result:
The more slices we take, the more we flatten out the scallops produced by the crust. Our maneuvers are producing a sequence of shapes that are magically homing in on a certain rectangle. Because the shapes keep getting closer and closer to that rectangle, we’ll call it the limiting rectangle.
The most innovative aspect of the proof is the way infinity came to the rescue. When we had only four slices, or eight, or sixteen, the best we could do was rearrange the pizza into an imperfect scalloped shape. After an unpromising start, the more slices we took, the more rectangular the shape became. But it was only in the limit of infinitely many slices that it became truly rectangular. That’s the big idea behind calculus. Everything becomes simpler at infinity.
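In symbols (the diagrams carry the argument; this algebra is my summary): the limiting rectangle has width C/2 and height r, so the circle's area is

$$ A = \frac{C}{2}\cdot r = \frac{2\pi r}{2}\cdot r = \pi r^{2}, $$

where the last step uses the relation C = 2πr, which is what the number π means.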
A limit is like an unattainable goal. You can get closer and closer to it, but you can never get all the way there.
For example, in the pizza proof we were able to make the scalloped shapes more and more nearly rectangular by cutting enough slices and rearranging them. But we could never make them genuinely rectangular. We could only approach that state of perfection. Fortunately, in calculus, the unattainability of the limit usually doesn’t matter. We can often solve the problems we’re working on by fantasizing that we can actually reach the limit and then seeing what that fantasy implies. In fact, many of the greatest pioneers of the subject did precisely that and made great discoveries by doing so. Logical, no. Imaginative, yes. Successful, very.
A limit is a subtle concept but a central one in calculus. It’s elusive because it’s not a common idea in daily life. Perhaps the closest analogy is the Riddle of the Wall. If you walk halfway to the wall, and then you walk half the remaining distance, and then you walk half of that, and on and on, will there ever be a step when you finally get to the wall?
The answer is clearly no, because the Riddle of the Wall stipulates that at each step, you walk halfway to the wall, not all the way. After you take ten steps or a million or any other number of steps, there will always be a gap between you and the wall. But equally clearly, you can get arbitrarily close to the wall. What this means is that by taking enough steps, you can get to within a centimeter of it, or a millimeter, or a nanometer, or any other tiny but nonzero distance, but you can never get all the way there. Here, the wall plays the role of the limit. It took about two thousand years for the limit concept to be rigorously defined. Until then, the pioneers of calculus got by just fine with intuition. So don’t worry if limits feel hazy for now. We’ll get to know them better by watching them in action. From a modern perspective, they matter because they are the bedrock on which all of calculus is built.
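In symbols (my condensation of the riddle): if you start a distance d from the wall, the gap remaining after n steps is

$$ d_{n} = \frac{d}{2^{n}}, $$

which can be made smaller than any tolerance you care to name by taking n large enough, yet never equals zero. That is a limit in a single formula.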
If the metaphor of the wall seems too bleak and inhuman (who wants to approach a wall?), try this analogy: Anything that approaches a limit is like a hero engaged in an endless quest. It’s not an exercise in total futility, like the hopeless task faced by Sisyphus, who was condemned to roll a boulder up a hill only to see it roll back down again over and over for eternity. Rather, when a mathematical process advances toward a limit (like the scalloped shapes homing in on the limiting rectangle), it’s as if a protagonist is striving for something he knows is impossible but for which he still holds out the hope of success, encouraged by the steady progress he’s making while trying to reach an unreachable star.
Until that moment, I’d never heard a grownup mention infinity. My parents certainly had no use for it. It seemed like a secret that only kids knew about. On the playground, it came up all the time in taunts and one-upmanship.
“You’re a jerk!”
“Yeah, well, you’re a jerk times two!”
“And you’re a jerk times infinity!”
“And you’re a jerk times infinity plus one!”
“That’s the same as infinity, you idiot!”
Those edifying sessions had convinced me that infinity did not behave like an ordinary number. It didn’t get bigger when you added one to it. Even adding infinity to it didn’t help. Its invincible properties made it great for finishing arguments in the schoolyard. Whoever deployed it first would win.
But no teacher had ever talked about infinity until Ms. Stanton brought it up that day. Everyone in our class already knew about finite decimals, the familiar kind used for amounts of money, like $10.28, with its two digits after the decimal point. By comparison, infinite decimals, which had infinitely many digits after the decimal point, seemed strange at first but appeared natural as soon as we started to discuss fractions.
We learned that the fraction ⅓ could be written as 0.333 . . . where the dot-dot-dots meant that the threes repeated indefinitely. That made sense to me, because when I tried to calculate ⅓ by doing the long-division algorithm on it, I found myself stuck in an endless loop: three doesn’t go into one, so pretend the one is a ten; then three goes into ten three times, which leaves a remainder of one; and now I’m back where I started, still trying to divide three into one. There was no way out of the loop. That’s why the threes kept repeating in 0.333 . . . .
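The endless loop is easy to capture in code (a sketch of mine, mimicking the grade-school algorithm just described):

```python
# Long division of 1 by 3, one decimal digit at a time, following the
# same loop described above: the remainder keeps coming back to 1.

def decimal_digits(numerator: int, denominator: int, n_digits: int) -> str:
    """First n_digits after the decimal point of numerator/denominator."""
    digits = []
    remainder = numerator % denominator
    for _ in range(n_digits):
        remainder *= 10                       # "pretend the one is a ten"
        digits.append(str(remainder // denominator))
        remainder %= denominator              # back where we started
    return "".join(digits)

print("1/3 = 0." + decimal_digits(1, 3, 12) + " ...")  # prints 0.333333333333 ...
```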
The three dots at the end of 0.333 . . . have two interpretations. The naive interpretation is that there are literally infinitely many 3s packed side by side to the right of the decimal point. We can’t write them all down, of course, since there are infinitely many of them, but by writing the three dots we signify that they are all there, at least in our minds. I’ll call this the completed infinity interpretation. The advantage of this interpretation is that it seems easy and commonsensical, as long as we are willing not to think too hard about what infinity means.
The more sophisticated interpretation is that 0.333 . . . represents a limit, just like the limiting rectangle does for the scalloped shapes in the pizza proof or like the wall does for the hapless walker. Except here, 0.333 . . . represents the limit of the successive decimals we generate by doing long division on the fraction ⅓. As the division process continues for more and more steps, it generates more and more 3s in the decimal expansion of ⅓. By grinding away, we can produce an approximation as close to ⅓ as we like. If we’re not happy with ⅓ ≈ 0.3, we can always go a step further to ⅓ ≈ 0.33, and so on. I’ll call this the potential infinity interpretation. It’s “potential” in the sense that the approximations can potentially go on for as long as desired. There’s nothing to stop us from continuing for a million or a billion or any other number of steps. The advantage of this interpretation is that we never have to invoke woolly-headed notions like infinity. We can stick to the finite.
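The potential-infinity interpretation can even be made exact (the formula is my addition). After n steps of long division, the approximation with n threes satisfies

$$ 0.\underbrace{33\ldots3}_{n\ \text{threes}} = \frac{1}{3}\left(1 - 10^{-n}\right), \qquad \frac{1}{3} - 0.\underbrace{33\ldots3}_{n\ \text{threes}} = \frac{1}{3\cdot 10^{n}}, $$

so the leftover error shrinks below any threshold we set, and only finite arithmetic is ever used.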
As a chastening example, suppose we put a certain number of dots on a circle, space them evenly, and connect them to one another with straight lines. With three dots, we get an equilateral triangle; with four, a square; with five, a pentagon; and so on, running through a sequence of rectilinear shapes called regular polygons.
Notice that the more dots we use, the rounder the polygons become and the closer they get to the circle. Meanwhile, their sides get shorter and more numerous. As we move progressively further through the sequence, the polygons approach the original circle as a limit.
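The approach to the circle is easy to watch numerically (my sketch, using the standard fact that a regular n-gon inscribed in a circle of radius r consists of n chords, each of length 2r sin(π/n)):

```python
# Perimeter of a regular n-gon inscribed in a circle of radius 1,
# compared with the circle's circumference 2 * pi.

import math

r = 1.0
for n in (3, 4, 6, 12, 96, 10000):
    perimeter = 2 * n * r * math.sin(math.pi / n)  # n chords of length 2r*sin(pi/n)
    print(f"n = {n:>5}: perimeter = {perimeter:.8f}")
print(f"circle:    circumference = {2 * math.pi * r:.8f}")
```

The n = 96 row is a nod to Archimedes, who pushed a hand calculation that far; even there the perimeter is within about 0.02 percent of the circumference.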
In this way, infinity is bridging two worlds again. This time it’s taking us from the rectilinear to the round, from sharp-cornered polygons to silky-smooth circles, whereas in the pizza proof, infinity brought us from round to rectilinear as it transformed a circle into a rectangle.
Of course, at any finite stage, a polygon is still just a polygon. It’s not yet a circle and it never becomes one. It gets closer and closer to being a circle, but it never truly gets there. We are dealing here with potential infinity, not completed infinity. So everything is airtight from the standpoint of logical rigor.
But what if we could go all the way to completed infinity? Would the resulting infinite polygon with infinitesimally short sides actually be a circle? It’s tempting to think so, because then the polygon would be smooth. All its corners would be sanded off. Everything would become perfect and beautiful.
There’s a general lesson here: Limits are often simpler than the approximations leading up to them. A circle is simpler and more graceful than any of the thorny polygons that approach it. So too for the pizza proof, where the limiting rectangle was simpler and more elegant than the scalloped shapes, with their unsightly bulges and cusps. And likewise for the fraction ⅓. It was simpler and more handsome than any of the ungainly fractions creeping up on it, with their big ugly numerators and denominators, like 3/10 and 33/100 and 333/1000. In all these cases, the limiting shape or number was simpler and more symmetrical than its finite approximators.
This is the allure of infinity. Everything becomes better there.
With that lesson in mind, let’s return to the parable of the infinite polygon. Should we take the plunge and say that a circle truly is a polygon with infinitely many infinitesimal sides? No. We mustn’t do that, mustn’t yield to that temptation. Doing so would be to commit the sin of completed infinity. It would condemn us to logical hell.
To see why, suppose we entertain the thought, just for a moment, that a circle is indeed an infinite polygon with infinitesimal sides. How long, exactly, are those sides? Zero length? If so, then infinity times zero — the combined length of all those sides — must equal the circumference of the circle. But now imagine a circle of double the circumference. Infinity times zero would also have to equal that larger circumference as well. So infinity times zero would have to be both the circumference and double the circumference. What nonsense! There simply is no consistent way to define infinity times zero, and so there is no sensible way to regard a circle as an infinite polygon.