The stone age, the iron age, the steam and electrical ages all saw the reach of humankind transformed by new technology. Now we are living in the quantum age, a revolution in everyday life led by our understanding of the very, very small. Quantum physics lies at the heart of every electronic device from smartphones to lasers; quantum superconductors allow levitating trains and MRI scanners, while superfast, ultra-secure quantum computers may soon be a reality. Yet quantum particles such as atoms, electrons and photons remain mysterious, acting totally unlike the objects we experience directly. With his trademark clarity and enthusiasm, acclaimed popular science author Brian Clegg reveals the amazing world of the quantum that lies all around us.




THE QUANTUM AGE

HOW THE PHYSICS OF THE VERY SMALL HAS TRANSFORMED OUR LIVES

BRIAN CLEGG

Published in the UK in 2014 by Icon Books Ltd, Omnibus Business Centre, 39–41 North Road, London N7 9DP

email: [email protected]

www.iconbooks.com

Sold in the UK, Europe and Asia by Faber & Faber Ltd, Bloomsbury House, 74–77 Great Russell Street, London WC1B 3DA or their agents

Distributed in the UK, Europe and Asia by TBS Ltd, TBS Distribution Centre, Colchester Road, Frating Green, Colchester CO7 7DW

Distributed in South Africa by Jonathan Ball, Office B4, The District, 41 Sir Lowry Road, Woodstock 7925

Distributed in Australia and New Zealand by Allen & Unwin Pty Ltd, PO Box 8500, 83 Alexander Street, Crows Nest, NSW 2065

Distributed in Canada by Penguin Books Canada, 90 Eglinton Avenue East, Suite 700, Toronto, Ontario M4P 2Y3

Distributed to the trade in the USA by Consortium Book Sales and Distribution, The Keg House, 34 Thirteenth Avenue NE, Suite 101, Minneapolis, Minnesota 55413-1007

ISBN: 978-184831-664-5

Text copyright © 2014 Brian Clegg

The author has asserted his moral rights.

No part of this book may be reproduced in any form, or by any means, without prior permission in writing from the publisher.

Typeset in Melior by Marie Doherty

Printed and bound in the UK by Clays Ltd, St Ives plc

Contents

Acknowledgements

Introduction

1. Enter the quantum

2. Quantum nature

3. The electron’s realm

4. QED

5. Light and magic

6. Super beams

7. Making light work

8. Resistance is futile

9. Floating trains and well-chilled SQUIDs

10. Spooky entanglement

11. From bit to qubit

12. It’s alive!

13. A quantum universe

Index

About the author

Science writer Brian Clegg studied physics at Cambridge University and specialises in making the strangest aspects of the universe – from infinity to time travel and quantum theory – accessible to the general reader. He is editor of www.popularscience.co.uk and a Fellow of the Royal Society of Arts. His previous books include Inflight Science, Build Your Own Time Machine, The Universe Inside You, Dice World and Introducing Infinity: A Graphic Guide.

www.brianclegg.net

For Gillian, Chelsea and Rebecca

Acknowledgements

With thanks as always to my editor, Duncan Heath, for his help and support, and to all those who have provided me with information and assistance – you know who you are.

One person I would like to mention by name is the late Richard Feynman, whose books enthralled me and who turned quantum theory from a confusing mystery to an exciting challenge.

Introduction

The chances are that most of the time you were at school your science teachers lied to you. Much of the science, and specifically the physics, they taught you was rooted in the Victorian age (which is quite probably why so many people find school science dull). Quantum theory, special and general relativity, arguably the most significant fundamentals of physics, were developed in the 20th century and yet these are largely ignored in schools, in part because they are considered too ‘difficult’ and in part because many of the teachers have little idea about these subjects themselves. And that’s a terrible pity, when you consider that in terms of impact on your everyday life, one of these two subjects is quite possibly the most important bit of scientific knowledge there is.

Relativity is fascinating and often truly mind-boggling, but with the exception of gravity, which I admit is rather useful, it has few applications that influence our experience. GPS satellites have to be corrected for both special and general relativity, but that’s about it, because the ‘classical’ physics that predates Einstein’s work is a very close approximation to what’s observed unless you travel at close to the speed of light, and is good enough to deal with everything from the acceleration of a car to planning a Moon launch. But quantum physics is entirely different. While it too is fascinating and mind-boggling, it also lies behind everything. All the objects we see and touch and use are made up of quantum particles. As is the light we use to see those objects. As are you. As is the Sun and all the other stars. What’s more, the process that fuels the Sun, nuclear fusion, depends on quantum physics to work.

That makes the subject interesting in its own right, something you really should have studied at school; but there is far more, because quantum science doesn’t just underlie the basic building blocks of physics: it is there in everyday practical applications all around you. It has been estimated that around 35 per cent of GDP in advanced countries comes from technology that makes use of quantum physics in an active fashion, not just in the atoms that make it up. This has not always been the case – we have undergone a revolution that just hasn’t been given an appropriate label yet.

This is not the first time that human beings have experienced major changes in the way they live as a result of the development of technology. Historians often highlight this by devising a technological ‘age’. So, for instance, we had the stone, bronze and iron ages as these newly workable materials made it possible to produce more versatile and effective tools and products. In the 19th century we entered the steam age, when applied thermodynamics transformed our ability to produce power, moving us from depending on the basic effort of animals and the unpredictable force of wind and water to the controlled might of steam. And though it is yet to be formally recognised as such, we are now in the quantum age.

It isn’t entirely clear when this era began. It is possible to argue that the use of current electricity was the first use of true quantum technology, as the flow of electricity through conductors is a quantum process, though of course none of the electrical pioneers were aware that this was the case. If that is a little too concealed a usage to be a revolution, then there can be no doubt that the introduction of electronics, a technology that makes conscious use of quantum effects, meant that we had moved into a new phase of the world. Since then we have piled on all sorts of explicitly quantum devices from the ubiquitous laser to the MRI scanner. Every time we use a mobile phone, watch TV, use a supermarket checkout or take a photograph we are making use of sophisticated quantum effects.

Without quantum physics there would be no matter, no light, no Sun … and most important, no iPhones.

I’ve already used the word ‘quantum’ thirteen times, not counting the title pages and cover. So it makes sense to begin by getting a feel for what this ‘quantum’ word means and to explore the weird and wonderful science that lies behind it.

CHAPTER 1

Enter the quantum

Until the 20th century it was assumed that matter was much the same on whatever scale you looked at it. When back in Ancient Greek times a group of philosophers imagined what would happen if you cut something up into smaller and smaller pieces until you reached a piece that was uncuttable (atomos), they envisaged that atoms would be just smaller versions of what we observe. A cheese atom, for instance, would be no different, except in scale, to a block of cheese. But quantum theory turned our view on its head. As we explore the world of the very small, such as photons of light, electrons and our modern understanding of atoms, they behave like nothing we can directly experience with our senses.

A paradigm shift

Realising that reality is very different at the quantum level was the kind of change to which historians of science like to give the pompous term ‘paradigm shift’. Suddenly, the way that scientists looked at the world became different. Before the quantum revolution it was assumed that atoms (if they existed at all – many scientists didn’t really believe in them before the 20th century) were just like tiny little balls of the stuff they made up. Quantum physics showed that they behaved so weirdly that an atom of, say, carbon has to be treated as if it is something totally different to a piece of graphite or diamond – and yet all that is inside that lump of graphite or diamond is a collection of these carbon atoms. The behaviour of quantum particles is strange indeed, but that does not mean that it is unapproachable without a doctorate in physics. I quite happily teach the basics of quantum theory to ten-year-olds. Not the maths, but you don’t need mathematics to appreciate what’s going on. You just need the ability to suspend your disbelief. Because quantum particles refuse to behave the way you’d expect.

As the great 20th-century quantum physicist Richard Feynman (we’ll meet him again in detail before long) said in a public lecture: ‘[Y]ou think I’m going to explain it to you so you can understand it? No, you’re not going to be able to understand it. Why, then, am I going to bother you with all this? Why are you going to sit here all this time, when you won’t be able to understand what I am going to say? It is my task to persuade you not to turn away because you don’t understand it. You see, my physics students don’t understand it either. This is because I don’t understand it. Nobody does.’

It might seem that Feynman had found a good way to turn off his audience before he had started by telling them that they wouldn’t understand his talk. And surely it’s ridiculous for me to suggest I can teach this stuff to ten-year-olds when the great Feynman said he didn’t understand it? But he went on to explain what he meant. It’s not that his audience wouldn’t be able to understand what took place, what quantum physics described. It’s just that no one knows why it happens the way it does. And because what it does defies common sense, this can cause us problems. In fact quantum theory is arguably easier for ten-year-olds to accept than adults, which is one of the reasons I think that it (and relativity) should be taught in junior school. But that’s the subject of a different book.

As Feynman went on to say: ‘I’m going to describe to you how Nature is – and if you don’t like it, that’s going to get in the way of your understanding it … The theory of quantum electrodynamics [the theory governing the interaction of light and matter] describes Nature as absurd from the point of view of common sense. And it agrees fully with experiment. So I hope you can accept Nature as she is – absurd.’ We need to accept and embrace the viewpoint of an unlikely enthusiast for the subject, the novelist D.H. Lawrence, who commented that he liked quantum theory because he didn’t understand it.

The shock of the new

Part of the reason that quantum physics proved such a shocking, seismic shift is that around the start of the 20th century, scientists were, to be honest, rather smug about their level of understanding – an attitude they had probably never had before, and certainly should never have had since (though you can see it creeping in with some modern scientists). The hubris of the scientific establishment is probably best summed up by the words of a leading physicist of the time, William Thomson, Lord Kelvin. In 1900 he commented, no doubt in rounded, self-satisfied tones: ‘There is nothing new to be discovered in physics. All that remains is more and more precise measurement.’ As a remark that he would come to bitterly regret, this is surely up there with the famous clanger of Thomas J. Watson Snr, who as chairman of IBM made the impressively non-prophetic remark in 1943: ‘I think there is a world market for maybe five computers.’

Within months of Kelvin’s pronouncement, his certainty was being undermined by a German physicist called Max Planck. Planck was trying to iron out a small irritant to Kelvin’s supposed ‘nothing new’ – a technical problem that was given the impressive nickname ‘the ultraviolet catastrophe’. We have all seen how things give off light when they are heated up. For instance, take a piece of iron and put it in a furnace and it will first glow red, then yellow, before getting to white heat that will become tinged with blue. The ‘catastrophe’ that the physics of the day predicted was that the power of the light emitted by a hot body should be proportional to the square of the frequency of that light. This meant that even at room temperature, everything should be glowing blue and blasting out even more ultraviolet light. This was both evidently not happening and impossible.

To fix the problem, Planck cheated. He imagined that light could not be given off in whatever-sized amounts you like, as you would expect if it were a wave. Waves could come in any size or wavelength – they were infinitely variable, rather than being broken into discrete components. (And everyone knew that light was a wave, just as you were taught at school in the Victorian science we still impose on our children.)

Instead, Planck thought, what if the light could come out only in fixed-sized chunks? This sorted out the problem. Limit light to chunks and plug it into the maths and you didn’t get the runaway effect. Planck was very clear – he didn’t think light actually did come in chunks (or ‘quanta’ as he called them, the plural of the Latin quantum which roughly means ‘how much’), but it was a useful trick to make the maths work. Why this was the case, he had no idea, as he knew that light was a wave because there were plenty of experiments to prove it.
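
For readers who like to see it in numbers, here is a minimal Python sketch (my own illustration, not the book’s – the formulas are the standard classical and Planck expressions for radiation from a hot body, and the temperature and frequencies are simply representative values) showing how the classical prediction runs away at high frequencies while Planck’s fixed-size chunks tame it:

```python
import math

# Physical constants in SI units
h  = 6.626e-34   # Planck's constant, joule seconds
kB = 1.381e-23   # Boltzmann's constant, joules per kelvin
c  = 3.0e8       # speed of light, metres per second

def classical_radiance(nu, T):
    """The pre-quantum prediction: grows with the square of the frequency - the 'catastrophe'."""
    return 2 * nu**2 * kB * T / c**2

def planck_radiance(nu, T):
    """Planck's formula: the exponential term shuts the radiation off at high frequencies."""
    return (2 * h * nu**3 / c**2) / math.expm1(h * nu / (kB * T))

T = 300  # roughly room temperature, in kelvin
for nu in (1e12, 1e14, 1e15):   # from far infrared up to the ultraviolet
    print(f"{nu:.0e} Hz  classical: {classical_radiance(nu, T):.2e}  Planck: {planck_radiance(nu, T):.2e}")
```

At room temperature the two formulas agree for low-frequency infrared light, but by the ultraviolet the classical version predicts wildly more radiation than Planck’s – which is why a cup of tea does not glow blue.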

Mr Young’s experiment

Perhaps the best-known example of these experiments, and one we will come back to a number of times, is Young’s slits, the masterpiece of polymath Thomas Young (1773–1829). This well-off medical doctor and amateur scientist was obviously remarkable from an early age. He taught himself to read when he was two, something his parents discovered only when he asked for help with some of the longer words in the Bible. By the time he was thirteen he was a fluent reader in Greek, Latin, Hebrew, Italian and French. This was a natural precursor to one of Young’s impressive claims to fame – he made the first partial translation of Egyptian hieroglyphs. But his language abilities don’t reflect the breadth of his interests, from discovering the concept of elasticity in engineering to producing mortality tables to help insurance companies set their premiums.

His big breakthrough in understanding light came while studying the effect of temperature on the formation of dewdrops – there really was nothing in nature that didn’t interest this man. While watching the effect of candlelight on a fine mist of water droplets he discovered that they produced a series of coloured rings when the light then fell on a white screen. Young suspected that this effect was caused by interactions between waves of light, proving the wave nature that Christiaan Huygens had proclaimed back in Newton’s time. By 1801, Young was ready to prove this with an experiment that has been the definitive demonstration that light is a wave ever since.

Young produced a sharp beam of light using a slit in a piece of card and shone this light onto two parallel slits, close together in another piece of card, finally letting the result fall on a screen behind. You might expect that each slit would project a bright line on the screen, but what Young observed was a series of alternating dark and light bands. To Young this was clear evidence that light was a wave. The waves from the two slits were interfering with each other. When the side-to-side ripples in both waves were heading in the same direction – say both up – at the point they met the screen, the result was a bright band. If the wave ripples were heading in opposite directions, one up and one down, they would cancel each other out and produce a dark band. A similar effect can be spotted if you drop two stones into still water near to each other and watch how the ripples interact – some waves reinforce, some cancel out. It is natural wave behaviour.

Fig. 1. Young’s slits.
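
As a rough illustration of the arithmetic (again my sketch, not the book’s – the wavelength, slit spacing and screen distance are invented but plausible values), the bright and dark bands fall out of simply adding the two waves and squaring the result:

```python
import cmath, math

wavelength  = 500e-9   # green light, in metres (illustrative value)
slit_gap    = 50e-6    # separation of the two slits, in metres (illustrative value)
screen_dist = 1.0      # distance from the slits to the screen, in metres

def brightness(x):
    """Relative brightness at position x on the screen: add the two waves, then square."""
    path_difference = slit_gap * x / screen_dist        # extra distance travelled from the farther slit
    phase = 2 * math.pi * path_difference / wavelength  # how far out of step the two waves arrive
    wave_sum = 1 + cmath.exp(1j * phase)                # equal-strength waves from the two slits
    return abs(wave_sum) ** 2                           # 0 for a dark band, 4 for a bright one

for x_mm in range(0, 31, 2):                            # scan across 30 millimetres of screen
    print(f"{x_mm:2d} mm  {'#' * round(brightness(x_mm * 1e-3))}")
```

With these values the bright bands repeat every 10 millimetres or so – exactly the sort of regular striping Young saw.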

It was this kind of demonstration that persuaded Planck that his quanta were nothing more than a workaround to make the calculations match what was observed, because light simply had to be a wave – but he was to be proved wrong by a man who was less worried about convention than the older Planck, Albert Einstein. Einstein was to show that Planck’s idea was far closer to reality than Planck would ever accept. This discrepancy in viewpoint was glaringly obvious when Planck recommended Einstein for the Prussian Academy of Sciences in 1913. Planck requested the academy to overlook the fact that Einstein sometimes ‘missed the target in his speculations, as for example, in his theory of light quanta …’.

The Einstein touch

That ‘speculation’ was made by Einstein in 1905 when he was a young man of 26 (forget the white-haired icon we all know: this was a dapper young man-about-town). For Einstein, 1905 was a remarkable year in which the budding scientist, who was yet to achieve a doctorate and was technically an amateur, came up with the concept of special relativity,1 showed how Brownian motion2 could be explained, making it clear that atoms really did exist, and devised an explanation for the photoelectric effect (see page 13) that turned Planck’s useful calculating method into a model of reality.

Einstein was never one to worry too much about fitting expectations. As a boy he struggled with the rigid nature of German schooling, getting himself a reputation for being lazy and uncooperative. By the time he was sixteen, when most students had little more on their mind than getting through their exams and getting on with the opposite sex, he decided that he could no longer tolerate being a German citizen. (Not that young Albert was the classic geek in finding it difficult to get on with the girls – quite the reverse.) Hoping to become a Swiss citizen, Einstein applied to the exclusive Federal Institute of Technology, the Eidgenössische Technische Hochschule or ETH, in Zürich. Certain of his own abilities in the sciences, Einstein took the entrance exam – and failed.

His problem was a combination of youth and very tightly focused interests. Einstein had not seen the point of spending much time on subjects outside the sciences, but the ETH examination was designed to pick out all-rounders. However, the principal of the school was impressed by young Albert and recommended he spend a year in a Swiss secondary school to gain a more appropriate education. The next year, Einstein applied again and got through. The ETH certainly allowed Einstein more flexibility to follow his dreams than the rigid German schools, though his headstrong approach made the head of the physics department, Heinrich Weber, comment to his student: ‘You’re a very clever boy, but you have one big fault. You will never allow yourself to be told anything.’

After graduating, Einstein tried to get a post by writing to famous scientists, asking them to take him on as an assistant. When this unlikely strategy failed, he took a position as a teacher, primarily to be able to gain Swiss citizenship, as he had already renounced his German nationality, so was technically stateless. Soon, though, he would get another job, one that would give him plenty of time to think. Einstein successfully applied for the post of Patent Officer (third class) in the Swiss Patent Office in Bern.

Electricity from light

It was while working there in 1905 that Einstein turned Planck’s useful trick into the real foundation of quantum theory, writing the paper that would win him the Nobel Prize. The subject was the photoelectric effect, the science behind the solar cells we see all over the place these days producing electricity from sunlight. By the early 1900s, scientists and engineers were well aware of this effect, although at the time it was studied only in metals, rather than the semiconductors that have made modern photoelectric cells viable. That the photoelectric effect occurred was no big surprise. It was known that light had an electrical component, so it seemed reasonable that it might be able to give a push to electrons3 in a piece of metal and produce a small current. But there was something odd about the way this happened.

A couple of years earlier, the Hungarian Philipp Lenard had experimented widely with the effect and found that it didn’t matter how bright the light was that was shone on the metal – the electrons freed from the metal by light of a particular colour always had the same energy. If you moved down the spectrum of light, you would eventually reach a colour where no electrons flowed at all, however bright the light was. But this didn’t make any sense if light was a wave. It was as if the sea could only wash something away if the waves came very frequently, while vast, towering waves with a low frequency could not move a single grain of sand.

Einstein realised that Planck’s quanta, his imaginary packets of light, would provide an explanation. If light were made up of a series of particles, rather than a wave, it would produce the effects that were seen. An individual particle of light4 could knock out an electron only if it had enough energy to do so, and as far as light was concerned, higher energy corresponded to being further up the spectrum. But the outcome had no connection with the number of photons present – the brightness of the light – as the effect was produced by an interaction between a single photon and an electron.
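
A little Python captures the logic (my sketch, not Einstein’s sums – the work function of about 2.3 electronvolts is an assumed, sodium-like value, and the frequencies are just representative colours):

```python
h  = 6.626e-34     # Planck's constant, joule seconds
eV = 1.602e-19     # one electronvolt, in joules

def freed_electron_energy(frequency, work_function_ev=2.3):
    """Energy (in eV) left over for an electron freed by a single photon, or None if none escape.

    Each photon carries an energy of h times its frequency. A lone photon must supply
    at least the metal's 'work function' to release one electron - and the brightness
    of the light (the number of photons) never enters into it.
    """
    photon_energy_ev = h * frequency / eV
    if photon_energy_ev < work_function_ev:
        return None                              # below the threshold colour: no electrons at all
    return photon_energy_ev - work_function_ev

for colour, frequency in [("red", 4.3e14), ("green", 5.6e14), ("violet", 7.3e14)]:
    print(colour, freed_electron_energy(frequency))
```

Red light, however intense, frees nothing; violet light always hands each escaping electron the same surplus energy – just what Lenard measured, and just what a pure wave picture cannot explain.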

Einstein had not only turned Planck’s useful mathematical cheat into a description of reality and explained the photoelectric effect, he had set the foundation for the whole of quantum physics, a theory that, ironically, he would spend much of his working life challenging. In less than a decade, Einstein’s concept of the ‘real’ quantum would be picked up by the young Danish physicist Niels Bohr to explain a serious problem with the atom. Because atoms really shouldn’t be stable.

Uncuttable matter

As we have seen, the idea of atoms goes all the way back to the Ancient Greeks. It was picked up by British chemist John Dalton (1766–1844) as an explanation for the nature of elements, but it was only in the early 20th century (encouraged by another of Einstein’s 1905 papers, the one on Brownian motion) that the concept of the atom was taken seriously as a real thing, rather than a metaphorical concept. The original idea of an atom was that it was the ultimate division of matter – that Greek word for uncuttable, atomos – but the British physicist Joseph John Thomson (usually known as J.J.) had discovered in 1897 that atoms could give off a smaller particle he called an electron, which seemed to be the same whatever kind of atom produced it. He deduced that the electron was a component of atoms – that atoms were cuttable after all.

The electron is negatively charged, while atoms have no electrical charge, so there had to be something else in there, something positive to balance it out. Thomson dreamed up what would become known as the ‘plum pudding model’ of the atom. In this, a collection of electrons (the plums in the pudding) are suspended in a sort of jelly of positive charge. Originally Thomson thought that all the mass of the atom came from the electrons – which meant that even the lightest atom, hydrogen, should contain well over a thousand electrons – but later work suggested that there was mass in the positive part of the atom too, and hydrogen, for example, had only the single electron we recognise today.

Bohr’s voyage of discovery

When 25-year-old physicist Niels Bohr won a scholarship to spend a year studying atoms away from his native Denmark he had no doubt where he wanted to go – to work on atoms with the great Thomson. And so in 1911 he came to Cambridge, armed with a copy of Dickens’ The Pickwick Papers and a dictionary in an attempt to improve his limited English. Unfortunately he got off to a bad start by telling Thomson at their first meeting that a calculation in one of the great man’s books was wrong. Rather than collaborating with Thomson as he had imagined, Bohr hardly saw the then star of Cambridge physics, spending most of his time on his least favourite activity, undertaking experiments.

Towards the end of 1911, though, two chance meetings changed Bohr’s future and paved the way for the development of quantum theory. First, on a visit to a family friend in Manchester, and again at a ten-course dinner in Cambridge, Bohr met the imposing New Zealand physicist Ernest Rutherford, then working at Manchester University. Rutherford had recently overthrown the plum pudding model by showing that most of the atom’s mass was concentrated in a positively charged lump – the tiny nucleus at the heart of the atom. Rutherford seemed a much more attractive person to work for than Thomson, and Bohr was soon heading for Manchester.

There Bohr put together his first ideas that would form the basis of the quantum atom. It might seem natural to assume that an atom with a (relatively) massive nucleus and a collection of smaller electrons on the outside was similar in form to a solar system, with the gravitational force that keeps the planets in place replaced by the electromagnetic attraction between the positively charged nucleus and the negatively charged electrons. But despite the fact that this picture is still often employed to illustrate the atom (it’s almost impossible to restrain illustrators from using it), it incorporates a fundamental problem. If an electron were to orbit around the nucleus it would spurt out energy and collapse into the centre, because an accelerating electrical charge gives off energy – and to keep in orbit, an electron would have to accelerate. Yet it was no better imagining that the electrons were fixed in position. There was no stable configuration where the electrons didn’t move. This presented Bohr with a huge challenge.

Inspired by discovering reports of experiments showing that when heated, atoms gave off light photons of distinct energies, Bohr suggested something radical. Yes, he decided, the electrons could be in orbits – but only if those orbits were fixed, more like railway tracks than the freely variable orbit of a satellite. And to move between two tracks required a fixed amount of energy, corresponding to absorbing or giving off a photon. Not only was light ‘quantised’, so was the structure of the atom. An electron could not drift from level to level, it could only jump from one distinct orbit to another.
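
To make the ‘railway tracks’ concrete, here is a tiny sketch using the standard textbook figure for hydrogen (the 13.6 electronvolt value is conventional physics, not something quoted in this chapter):

```python
def track_energy(n):
    """Energy, in electronvolts, of the n-th allowed 'track' for hydrogen's electron."""
    return -13.6 / n**2

def photon_energy(n_from, n_to):
    """Energy of the photon given off when the electron drops from one track to a lower one."""
    return track_energy(n_from) - track_energy(n_to)

# A drop from track 3 to track 2 releases a photon of about 1.9 eV - hydrogen's familiar red line
print(photon_energy(3, 2))
```

Because only whole-number tracks are allowed, only a handful of photon energies can ever appear – which is why heated atoms give off light of distinct energies rather than a continuous smear.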

Inside the atom

An atom is an amazing thing, so it is worth spending a moment thinking about what it appears to be like. That traditional picture of a solar system is still a useful starting point, despite the fatal flaw. To begin with, just like a solar system, the atom has a massive bit at the centre and much less massive bits on the outside. If we look at the simplest atom, hydrogen, it has a single positively charged particle – a proton – as a nucleus and a single negatively charged electron outside of it. The proton, the nucleus, is nearly 2,000 times more massive than the electron, just as the Sun is much more massive than the Earth. And like a solar system, an atom is mostly made up of empty space.

One of the earliest and still most effective illustrations of the amount of emptiness in an atom is that if you imagine the nucleus of an atom to be the size of a fly, the whole atom will be about the size of a cathedral – and apart from the vague presence of the electron(s) on the outside, all the rest is empty space. Now we need to move away from the solar system model, though. I’ve already mentioned that a true solar-system-style atom would collapse. Another difference is that, unlike the solar system, the electrons and the nucleus are attracted by electromagnetism rather than gravity. And here we come across a real oddity, with a Nobel Prize waiting for anyone who can explain it. The electron has exactly the same magnitude of charge (if opposite in sign) as the positive charge on a proton in the nucleus. No one has a clue why, but it’s rather handy in making atoms work the way they do. The solar system has no equivalent to this. Gravity comes in only one flavour.

The final reason we have to throw away the solar system model is that electrons simply don’t travel around nuclei in nice, well-defined orbits, the same way that planets travel around the Sun. They don’t even move around on the sort of rail tracks that Bohr first envisaged. As we will discover, quantum particles are never so considerate and predictable as to do something like this. A better picture of an electron is a sort of fuzzy cloud of probability spread around the outside of the atom, rather than those sweeping orbit lines so favoured by graphic designers – though that is much harder to draw. More on that in a moment.

Building on Bohr

It would be an exaggeration to say that Bohr’s idea for the structure of atoms transformed our view of physics on its own – apart from anything, his original model worked only for the simplest atom, hydrogen. But before long a group of young physicists – with de Broglie, Heisenberg, Schrödinger and Dirac to the fore – had picked up the baton and were pushing forward to build quantum theory into an effective description of the way that atoms and other quantum particles like photons behave. And their message was that they behave very badly indeed – at least if we expect them to carry on the way we expect ordinary everyday objects to act.

Louis de Broglie showed that Einstein’s transformation of the wavy nature of light into particles was a two-way street – because quantum objects we usually thought of as particles, like atoms and electrons, could just as happily behave as if they were waves. It was even possible to do a variant of the two-slit experiment with particles, producing interference patterns. Werner Heisenberg, meanwhile, was uncomfortable with Bohr’s orbits modelled on the ‘real’ observed world and totally abandoned the idea of trying to provide an explanation of quantum particles that could be envisaged. He developed a purely mathematical method of predicting the behaviour of quantum particles called matrix mechanics. The matrices (two-dimensional arrays of numbers) did not represent anything directly observable – they were simply values that, when manipulated the right way, produced the same results as were seen in nature.

Erwin Schrödinger, always more comfortable than Heisenberg with something that could be visualised, came up with an alternative formulation known as wave mechanics that it was initially hoped described the behaviour of de Broglie’s waves. Paul Dirac would eventually show that Schrödinger’s and Heisenberg’s approaches were entirely equivalent. But Schrödinger was mistaken if he believed he had tamed the quantum wildness. If his wave equation had truly described the behaviour of particles it would show that quantum particles gradually spread out over time, becoming immense. This was absurd. To make matters worse, the solutions of his wave equations contained imaginary numbers, which generally indicated there was something wrong with the maths.

Numbers that can’t be real

Imaginary numbers had been around as a concept since the 16th century. They were based on the idea of square roots. As you probably remember from school, the square root of a number is the value which, multiplied by itself, produces that original number. So, for instance, the square root of 4 is 2. Or, rather, 2 is one of 4’s square roots. Because it is also true that –2 multiplied by itself makes 4. The number 4 has two square roots, 2 and –2. But this leaves a bit of a gap in the square root landscape. What, for example, is the square root of –4? It can’t be 2, nor can it be –2, as both of those produce 4 when multiplied by themselves. So what can the square root of a negative number be? To deal with this, mathematicians invented an arbitrary value for the square root of –1, referred to as ‘i’. Once i exists, we can say the square roots of –4 are 2i and –2i. These numbers based on i are imaginary numbers.

This would seem to be the kind of thing mathematicians do in their spare time to amuse themselves – quite entertaining, but of no interest in the real world. But in fact complex numbers, which have both a real and an imaginary component, such as 3+2i, proved to be very useful in physics and engineering. This is because by representing a complex number as a point plotted on a graph, where the real numbers are on the x axis and the imaginary numbers on the y axis, a complex number provides a single value that represents a point in two dimensions. As long as the imaginary parts cancel out before coming up with a real world prediction, complex numbers proved a great tool. But in Schrödinger’s wave equation, the imaginary numbers did not politely go away, staying around to the embarrassment of all concerned.
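
Anyone with a computer can play with these numbers: Python happens to have imaginary numbers built in, written with a j rather than an i (the snippet below is simply my illustration of the ideas above):

```python
i = 1j                    # Python writes the imaginary unit as 1j
print(i * i)              # (-1+0j): i multiplied by itself really is -1
print((2j) ** 2)          # (-4+0j): so 2i is a square root of -4
print((-2j) ** 2)         # -4 again: -2i is the other square root

z = 3 + 2j                # a complex number: 3 along the real axis, 2 up the imaginary axis
print(z.real, z.imag)     # 3.0 2.0 - the point (3, 2) plotted on the graph
print(z * z.conjugate())  # (13+0j): pairing z with its mirror image leaves a purely real number
```

That last trick – combining a complex value with its mirror image to squeeze out something purely real – turns out to be exactly the move that rescues Schrödinger’s equation in the next section.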

Probability on the square

This mess was sorted out by Einstein’s good friend, Max Born. Born worked out that Schrödinger’s equation did not actually say how a particle like an electron or a photon behaved. Instead of showing the location of a particle, it showed the probability of a particle being in a particular location. To be more precise, the square of the equation showed the probability, handily disposing of those inconvenient imaginary numbers. Where it was inconceivable that the particle itself would spread out over time, it was perfectly reasonable that the probability of finding it in any location would spread out this way. But the price that was paid for Born’s fix was that probability became a central part of our description of reality. Born’s explanation of the equation worked wonderfully, though it had to be taken on trust – no one could say why, for instance, it was necessary to square the outcome.
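
In the standard notation (which the book itself avoids, with ψ the conventional symbol for the value Schrödinger’s wave equation assigns to a position x), Born’s recipe is usually written as:

$$ P(x) = |\psi(x)|^2, \qquad \text{where } |a + b\mathrm{i}|^2 = a^2 + b^2 $$

Since a² + b² is always a real number and never negative, ‘squaring’ in this sense is precisely the step that makes the troublesome imaginary parts vanish and leaves behind a sensible probability.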

There is nothing new in using probability to describe a level of uncertainty. I can demonstrate this if I put a dog in the middle of a park and close my eyes for ten seconds. I don’t know exactly where that dog will be when I open my eyes. I can say, though, that it will probably be within about 20 metres of where I left it, and the probability is higher that it will be near the lamppost than that it will be halfway up a beech tree or taking a ride on the roundabout. However, this use of probability in the ordinary world does not reflect reality, but rather the uncertainty in my knowledge. The dog will actually be in a particular location at all times with 100 per cent certainty – I just don’t know what that location is until I open my eyes.

If instead of a dog I was observing a quantum particle, Schrödinger’s equation, newly explained by Born, also gives me the probability of finding the particle in the different possible locations available to it. But the difference here is that there is no underlying reality of which I am unaware before I look. Until I make the measurement and produce a location for the particle, the probability is all that existed. The particle wasn’t ‘really’ in the place I eventually found it up until the point the measurement was made.

Taking this viewpoint requires a huge stretch of the imagination (which is probably why ten-year-olds cope with quantum theory better than grown-ups), but if you can overcome common sense’s attempt to put you straight, it throws away the problems we face when thinking, for instance, of how the Young’s slits experiment could possibly work with photons of light. If you remember, the traditional wave picture had waves passing through both slits and interfering with each other to create the pattern of fringes on the screen. But how could this work with photons (or electrons)? This difficulty is made particularly poignant if you consider that we can now fine-tune the production of these particles to the extent that they can be sent towards the slits one at a time – and yet still, over time, the interference pattern, caused by the interaction of waves of probability, builds up on the screen.

Where is that particle?

There is a very dangerous temptation that almost all science communicators fall into at this point. I have to admit I have done it frequently in the past. And I have heard TV scientist Brian Cox do it too, commenting on his radio show The Infinite Monkey Cage that the photon is in two places at once. In fact Cox’s book, The Quantum Universe (co-authored with Jeff Forshaw), even has a chapter entitled ‘Being in two places at once’. The tempting but faulty description is that quantum theory says that a photon can be in two places at once, so it manages to go through both slits and interferes with itself. However, this gives a misleading picture of what is really happening in the probabilistic world of the quantum.

What would be much more accurate would be to say that a photon in the Young’s slits experiment isn’t anywhere until it hits the screen and is registered. Up to that point all that exists is a series of probabilities for its location, described by the (square of the) wave equation. As these waves of probability encompass both slits, then the final result at the screen is that those probability waves interfere – but the waves are not the photon itself. If the experimenter puts a detector in one of the slits that lets a photon through but detects its passing, the interference pattern disappears. We have forced the photon to have a location and there is no opportunity for the probability waves to interfere.
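
A toy calculation makes the bookkeeping clear (my own illustration, with invented numbers): with both slits open you add the complex ‘probability wave’ values and then square the size of the total; put a detector in a slit and you square each one separately and then add.

```python
import cmath, math

def chance_at_screen(phase_difference, which_slit_known):
    """Probability of the photon landing at a given point on the screen (toy model)."""
    via_slit_1 = 1 / math.sqrt(2)                                  # amplitude for the route through slit 1
    via_slit_2 = cmath.exp(1j * phase_difference) / math.sqrt(2)   # route through slit 2, shifted in phase
    if which_slit_known:
        return abs(via_slit_1) ** 2 + abs(via_slit_2) ** 2   # detector present: always about 1, no pattern
    return abs(via_slit_1 + via_slit_2) ** 2                 # no detector: swings between 0 and 2 - fringes

for phase in (0.0, math.pi / 2, math.pi):
    print(f"phase {phase:.2f}  open: {chance_at_screen(phase, False):.2f}  watched: {chance_at_screen(phase, True):.2f}")
```

Wherever the photon actually ends up, the pattern of chances across the screen only shows fringes when there is no which-slit information to collapse those probability waves.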

It was this fundamental role for probability that so irritated Einstein, making him write several times to Max Born that this idea simply couldn’t be right, as God did not play dice. As Einstein put it, when describing one of the quantum effects that are controlled by probability: ‘In that case, I would rather be a cobbler, or even an employee in a gaming house, than a physicist.’

It was from the central role of probability that Heisenberg would deduce the famous Uncertainty Principle. He showed that quantum particles have pairs of properties – location and momentum, for instance, or energy and time – that are intimately related by probability. The more accurately you discover one of these pairs of values, the less accurately it is possible to know the other. If, for instance, you knew the exact momentum (mass times velocity) of a particle, then it could be located anywhere in the universe.
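
In the usual textbook symbols (not quoted in the text), the position–momentum version of the principle puts a hard floor under the product of the two uncertainties, with ħ standing for Planck’s constant divided by 2π:

$$ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} $$

Push Δp towards zero by pinning the momentum down exactly and Δx has to grow without limit – which is the sense in which such a particle could be located anywhere in the universe.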

The infernal cat

It is probably necessary also at this point to mention Schrödinger’s cat, not because it gives us any great insights into quantum theory, but rather because it is so often mentioned when quantum physics comes up that it needs putting into context. This thought experiment was dreamed up by Schrödinger to demonstrate how absurd he felt the probabilistic nature of quantum theory became when it was linked to the ‘macro’ world that we observe every day.

In the Young’s slits experiment, even single photons produce an interference pattern as described above – but if you check which slit a photon goes through, the probabilities collapse into an actual value and the pattern disappears. Quantum particles typically get into ‘superposed’ states until they are observed. (Superposition just says that a particle has simultaneous probabilities of being in a range of states, rather than having an actual unique state.) In the cat experiment, a quantum particle of a radioactive material is used to trigger the release of a deadly gas when the particle decays. The gas then kills a cat that is in a box. Because the radioactive particle is a quantum particle, until observed it is in a superposed state, merely a combination of the probabilities of it being decayed or not decayed. Which presumably leaves the cat in a superposed state of alive and dead. Which is more than a little weird.

In reality, the moggy doesn’t seem to have much to worry about, at least as far as being superposed goes – it can, of course, still die. As the experiment is described, it is assumed that the particle, and hence the cat, is in a superposed state until the box is opened. Yet in the Young’s slits experiment the mere presence of a detector is enough to collapse the states and produce an actual value for which slit the particle travelled through. So there is no reason to assume that the detector in the cat experiment that triggers the gas would not also collapse the states. But Schrödinger’s cat is such a favourite with science writers – if only because it gives illustrators something interesting to draw – that this flaw really needs highlighting.

Because it is so famous, the cat has a tendency to turn up in other quantum thought experiments. The original Schrödinger’s cat experiment is all about the fuzzy borderline between the quantum world of the very small and the classical world we observe around us. Experimenters are always trying to stretch that boundary, achieving superposition and other quantum effects for larger and larger objects. Until recently there was no good measure of just what ‘bigger’ meant in this context – how to measure how macroscopic or microscopic (and liable to quantum effects) an object was. However, in 2013 Stefan Nimmrichter and Klaus Hornberger of the University of Duisburg-Essen devised a mathematical measure that describes the minimum modification required in the appropriate Schrödinger’s equation to destroy a quantum state, giving a numerical measure of just how realistic a superposition would be.

This measure produces a value that compares any given superposition with a single electron’s ability to stay in a superposed state. For example, the biggest molecule that has been superposed to date has 356 atoms. The theorists calculated that this would have a ‘macroscopicity’ factor of 12, which means that keeping it superposed for a second is on a par with an electron staying superposed for 10¹² seconds. There is reasonable expectation that items with a factor of up to around 23 could be put into a superposed state. To put this into context, and in honour of Schrödinger, the theorists also calculated the macroscopicity of a cat.

They started with a classic physicist’s simplification by assuming that the cat was a 4-kilogram sphere of water, and that it managed to get into a superposition of being in two places 10 centimetres apart for one second. The result of the calculation was a factor of around 57 – it was the equivalent of putting an electron into a superposed state for 10⁵⁷ seconds, around 10³⁹ times the age of the universe, stressing just how unlikely this is – though it is worth noting that even the 10²³ expectation is longer than the lifetime of the universe. Unlikely things do happen (if rather infrequently), and quantum researchers are always careful never to say ‘never’.
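
The comparison with the age of the universe is easy to check (the 13.8-billion-year figure used below is the standard estimate, not something given in the text):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
age_of_universe = 13.8e9 * SECONDS_PER_YEAR      # roughly 4.4e17 seconds

print(1e57 / age_of_universe)    # about 2e39: the superposed-cat benchmark
print(1e23 / age_of_universe)    # about 2e5: even the realistic target outlasts the universe
```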

It is these weird aspects of quantum theory that make the field so counterintuitive … and so fascinating. And nowhere more so than when quantum effects crop up in the natural world. Quantum theory is not just something that is relevant to the lab, or even to high-tech engineering. It has a direct impact on the world around us, from the operation of the Sun that is so central to life on Earth, to some of the more subtle aspects of biology.

Footnotes

1. Einstein’s expansion of Galileo’s theory of relativity. Galileo had observed that all movement has to be measured relative to something, but Einstein added that light always travels at the same speed. This special relativity shows that time and space are linked and dependent on the observer’s motion.

2. The observation by the Scottish botanist Robert Brown (1773–1858) that pollen grains suspended in water danced around. Einstein showed how this could be caused by fast-moving water molecules colliding with the grains.

3. The electron is the negatively charged fundamental particle that occupies the outer reaches of atoms and carries electrical current.

4. They wouldn’t be known as photons until the 1920s when they were given the name by the American chemist Gilbert Lewis.