Quantum Computing - Brian Clegg - E-Book


Brian Clegg


Description

The ultimate non-technical guide to the fast-developing world of quantum computing.

Computer technology has improved exponentially over the last 50 years. But the headroom for bigger and better electronic solutions is running out. Our best hope is to engage the power of quantum physics.

'Quantum algorithms' had already been written long before hardware was built. These would enable, for example, a quantum computer to exponentially speed up an information search, or to crack the mathematical trick behind internet security. However, making a quantum computer is incredibly difficult. Despite hundreds of laboratories around the world working on them, we are only just seeing them come close to 'supremacy', where they can outperform a traditional computer.

In this approachable introduction, Brian Clegg explains algorithms and their quantum counterparts, explores the physical building blocks and quantum weirdness necessary to make a quantum computer, and uncovers the capabilities of the current generation of machines.

Page count: 217




QUANTUM COMPUTING

The Transformative Technology of the Qubit Revolution

BRIAN CLEGG

For Gillian, Chelsea and Rebecca

ACKNOWLEDGEMENTS

My thanks to the team at Icon Books who have helped shape this series, notably Duncan Heath, Robert Sharman and Andrew Furlow.

My interest in computing was shaped by my first exposure at the Manchester Grammar School, where a young teacher encouraged us to punch holes in cards (by hand), send them off to a computer facility in London by post and wait a week for the result to come back in the post. After mixed experiences at university, I fell in love with computing again at British Airways, under the guidance of two mentors, John Carney and Keith Rapley, sadly both no longer with us. It was there that I learned that computer programming is an amalgam of fun and frustration, combining as it does the challenges of puzzle solving and of writing.

Although my coding experience is long in the past, it helps me appreciate the ingenuity of those who attempt to solve the many problems facing anyone who wishes to harness quantum capabilities to bring in a new computer revolution.

CONTENTS

Title Page
Dedication
Acknowledgements
1 Instructions for a ghost engine
2 Making a world bit by bit
3 The soft touch
4 Quantum strangeness
5 Quantum algorithms
6 Quantum hardware
7 To infinity and beyond
Further Reading
Index
About the Author
Copyright

 

quantum (ˈkwɒntəm)

A minimum amount of a physical quantity that can exist due to the physical limits in nature, meaning that the item can only be varied in such units. Describes the properties of the particles that make up light and matter, which behave entirely differently from familiar objects, with probability at the heart of their behaviour. From the post-classical Latin ‘quantum’ meaning amount, quantity or determination of quantity.

 

computing (kəmˈpjuːtɪŋ)

The action or an example of calculation or counting. Since the 20th century, the use of computers, particularly electronic computers, to perform computations mechanically. From the Latin ‘computare’: to calculate, account, reckon or count up.

 

quantum computing (ˈkwɒntəm kəmˈpjuːtɪŋ)

Performing calculations with a device that makes use of the special properties of quantum particles, such as photons of light or electrons, in order to perform certain operations exponentially faster than is possible with a conventional computer.


1

INSTRUCTIONS FOR A GHOST ENGINE

Program one: 1843

In 1840, the British inventor and polymath Charles Babbage gave a number of lectures in Turin, taking as his subject an as-yet-unconstructed device, the Analytical Engine. Born in 1791, Babbage had a sufficiently large inheritance from his father – a goldsmith and banker – never to need to take gainful employment. He enjoyed the social life of the salon as much as his work. It is said that his inspiration to explore mechanical means of calculation was helping out his friend, the astronomer John Herschel, to check astronomical tables. The experience was tedious beyond measure and Babbage is said to have cried out, ‘I wish to God these calculations had been executed by steam!’

Had the Analytical Engine ever been built, it would arguably have been the world’s first computer in the modern sense of the word. Babbage had been playing the role of a computer for Herschel – that is, a person who undertook calculations. The terminology dates back at least to the seventeenth century – it was only in the mid-twentieth century that the term was shifted from human beings to machines. Although it was entirely mechanical, the Analytical Engine was intended to hold both its data and its programs* on punched cards, based on the cards that had been devised to produce intricate patterns on the Jacquard silk-weaving loom. Unlike its semi-constructed (but never finished by Babbage) predecessor, the Difference Engine, where the instructions on what to do with the data were built into the machinery, the Analytical Engine’s instructions could be varied to taste.

Two years later, in an impressive piece of internationalism, a minor Italian military engineer Luigi Federico Menabrea (later to unexpectedly become Prime Minister of Italy) published a write-up of Babbage’s Turin talks, written in French for a Swiss publication, the Bibliothèque Universelle de Genève. Left in that periodical, this memoir would no doubt have rapidly disappeared into obscurity. However, in 1843 it was translated into English by Ada King, the Countess of Lovelace. In truth, ‘translation’ is a distinctly weak term for the resultant document, as King added copious notes that tripled the length of the piece, speculating on the future use of the unbuilt Analytical Engine and describing how it could be programmed for a number of tasks.

It is thanks to this single document that Ada Lovelace, as King is usually known, has gained the reputation of being the world’s first computer programmer. There is no doubt that Lovelace succeeded in bringing the potential of the Engine to a wider audience, though the degree to which she was indeed the first programmer has been disputed. One certainty is that the machine these instructions were intended for was never built – realistically, it could not have been constructed with the mechanical tolerances of the time. And so, strictly speaking, we should say that the document contained algorithms, in the form of tables that reflected the structure of the Engine, rather than computer programs in the modern sense.

Algorithms are structured instructions that could be anything from the sequence of actions required to brew a cup of tea to complex manipulations of data to solve a mathematical problem. They don’t require any computer – they can be worked by hand – but can, as was the case here, be structured in such a way that they fit well with a computer’s architecture.
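To make this concrete, here is a small illustrative example (not one from Lovelace’s notes): Euclid’s method for finding the greatest common divisor of two numbers, one of the oldest known algorithms. It can be worked entirely by hand, but it is also structured in a way that maps naturally onto a computer:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, a mod b) until the remainder is zero; the survivor is the
    greatest common divisor. Every step is simple enough to do on
    paper - no computer required."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21
```

Tracing it by hand shows the algorithmic character: 1071 and 462 give remainder 147, then 462 and 147 give 21, then 147 and 21 give 0 – so the answer is 21. The same fixed sequence of steps works for any pair of numbers.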

Unfortunately, in the entirely desirable urge to provide good female role models from the past, Lovelace’s contribution has been exaggerated. Lovelace was the daughter of the poet Lord Byron and Annabella Milbanke. As a child, Lovelace was encouraged by her mother to study mathematics. She is often described as a mathematician, but it would be more accurate to describe her as an undergraduate-level maths student. From letters between Babbage and Lovelace, it seems well established that the notes that Lovelace added to Menabrea’s work were strongly influenced by Babbage. And even taking algorithms as programs, we know that Lovelace was not the first. This is because, as historian of science Thony Christie points out:

The Menabrea Memoir that Ada had translated already contained examples of programs for the Analytical Engine that Babbage had used to illustrate his Turin lectures and had actually developed several years before. The notes contain further examples from the same source that Babbage supplied to the authoress. The only new program example developed for the notes was the one to determine the so-called Bernoulli numbers.

We do know that Babbage, in his lectures, described algorithms that could have become programs for the Analytical Engine, had it ever been built. Usually with a totally new piece of technology like this we have to wait for some kind of prototype to be constructed before we can be certain of the device’s capabilities. But, remarkably, the Analytical Engine algorithms clearly showed the power the machine would have commanded. Had it ever been built, rather than waiting for programs to be developed, the Engine could have leapt instantly into action.

Having algorithms that were ready to go on a technology that was impossible to construct at the time was remarkable. That such a thing should happen twice seems even more surprising. Yet 153 years after the publication of Lovelace’s translation and notes, a very similar occurrence would play out. This time, the imagined engines in question would invoke the power of the quantum.

Program two: 1996

By 1996, electronic computers had been part of government and business establishments for decades, and had become relatively commonplace in homes, since personal computers moved from the realm of enthusiasts’ toys to commercial products in the 1980s. Unlike the Analytical Engine, electronic computers were too complex to be designed by a single person – and although some early programs were the work of an individual, many were constructed by teams.

Although the underlying concepts behind the electronic computer would continue to be developed, and these devices would continue to become increasingly powerful for decades to come, by the 1990s scientists were already aware of limitations in the way that such computers worked. Fifteen years before our key date, the physicist Richard Feynman had speculated about constructing computers where the basic unit of operation was not the traditional bit, which was limited to holding values of 0 or 1, but rather a ‘quantum bit’, based on a quantum particle such as a photon or electron, which could be in an intermediate state with a potentially infinite set of possibilities.
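To make the contrast concrete, here is a toy sketch in Python – purely illustrative, and nothing like how real quantum hardware is programmed. It represents a single qubit as a pair of amplitudes, a continuum of intermediate states, although any measurement still yields only 0 or 1:

```python
import math
import random

# A classical bit holds exactly 0 or 1. A qubit's state is a pair of
# complex amplitudes (a, b) with |a|^2 + |b|^2 = 1 - an infinite range
# of intermediate states. Measuring it, though, gives 0 with
# probability |a|^2 and 1 with probability |b|^2.
def measure(a: complex, b: complex) -> int:
    p0 = abs(a) ** 2
    assert math.isclose(p0 + abs(b) ** 2, 1.0), "amplitudes must be normalised"
    return 0 if random.random() < p0 else 1

# An equal superposition: 0 and 1 each come up with probability 1/2.
a = b = 1 / math.sqrt(2)
counts = [0, 0]
for _ in range(10_000):
    counts[measure(a, b)] += 1
print(counts)  # roughly [5000, 5000]
```

The quantum advantage does not come from this randomness alone, but from the way amplitudes of many qubits can interfere before a measurement is made – something this classical simulation cannot capture.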

By the 1990s, teams were thinking about or attempting to construct such quantum computers. The challenges they faced were huge. In 1996, the ability to manipulate individual quantum particles was in its infancy. The technology required to build a quantum computer was entirely impractical. Yet, just like the algorithms for the Analytical Engine, it proved possible to devise an algorithm for quantum computers that, should the machines ever work, could revolutionise the business of searching for data – a task that lies at the very heart of the computer business.

This particular quantum algorithm was devised by Lov Grover, then working at Bell Labs in America. Grover was born in 1961 in the North Indian city of Roorkee. His original intention was to become an electrical engineer – perhaps not a surprising ambition, as he lived in the city that was home to Asia’s first specialist technology establishment, founded in the 1840s to give training in engineering. However, Grover did not attend the University of Roorkee (now the Indian Institute of Technology Roorkee), but rather the Indian Institute of Technology in Delhi.

A career in electrical engineering was still Grover’s intention when he emigrated to the USA, but by 1985, when he completed a PhD in the subject at Stanford University, his interest had shifted towards the quantum side of physics – his doctorate concerned a device that typifies the oddities of the quantum world, the laser – and it was quantum physics that drove his thinking when he joined Bell Labs.

Then the research arm of the communication company AT&T, Bell Labs was not unlike the research department of a modern technology giant such as Google or Apple. Researchers at the laboratories were given an impressive degree of freedom to explore new ideas, as a result of which the company was rewarded with dramatic developments – often in and around computing. It was at Bell Labs, for example, that John Bardeen and Walter Brattain, under the direction of William Shockley, had come up with the transistor. In the 1970s, Bell had been responsible for developing UNIX, the operating system that is still central to much computing, and the C programming language which, with its derivatives, still dominates the world of conventional computer programming.

When Grover joined Bell, the idea of developing algorithms that could run on quantum computers was already in the air, with the first devised in 1994. The freedom to do what Grover described as ‘forward-looking research’ was still available at Bell and he almost immediately devised his quantum computing search algorithm. He would later write up his new idea in the journal Physical Review Letters as ‘Quantum Mechanics Helps in Searching for a Needle in a Haystack’. His idea could, in theory, transform the business of searching.

At the time, search engines† as we now know them were in their infancy. Prior to this point, if you were an early adopter of the internet and wanted to find your way around the World Wide Web, you would use a curated list of links – a manual index. The leading search engine in 1996, AltaVista, only started operation the year before. Google would begin life as a research project in 1996. But though search engines per se were a novelty, databases had been at the heart of much computing for decades – the need to quickly find and retrieve data was the driving force behind much commercial computer use, and anything that could speed up that process would clearly be attractive. What Grover realised is that with a quantum computer, he could not only speed up searching, he could supercharge it.

We can think of a database as an electronic version of a card index. Each database consists of a set of records, with a record being the equivalent of a single card. To find our way around the records, the database is indexed. In the card index, that is done simply by putting the cards in alphabetical order of the heading – but an electronic database can have multiple indices, enabling it to effectively re-order the cards depending on a whole range of pieces of data on the card. So, for example, you could find a customer by name, phone number, email, address or purchase history.
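The idea of multiple indices over the same records can be sketched in a few lines of Python. This is a deliberately toy example with made-up data, not how a real database engine is built, but it shows how the same set of ‘cards’ can be reached by different fields without physically re-ordering anything:

```python
# A toy 'database': each record is a dict (one card in the card index).
# All names, numbers and addresses here are invented for illustration.
records = [
    {"name": "Ada King", "phone": "555-0143", "email": "ada@example.com"},
    {"name": "Charles Babbage", "phone": "555-0101", "email": "cb@example.com"},
]

# Several indices over the same records: each maps one field straight
# to the record, the electronic equivalent of re-sorting the cards.
by_name = {r["name"]: r for r in records}
by_phone = {r["phone"]: r for r in records}
by_email = {r["email"]: r for r in records}

print(by_phone["555-0101"]["name"])          # Charles Babbage
print(by_email["ada@example.com"]["phone"])  # 555-0143
```

Real database engines use more sophisticated structures (such as B-trees) for their indices, but the principle is the same: pay a little extra storage and upkeep to make lookups on a chosen field fast.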

To perform this task, databases have clever get-arounds that enable them to get to information that isn’t stored in an ordered way faster than would be the case if they were simply to search through item by item until they reached the correct record. The same goes, on a tremendous scale, for a search engine like Google. It would be ridiculous to expect Google’s software to look through the entire content of the web, with well over a billion websites, some of which are enormous in their own right, every time we type in a query. According to Google, their index alone is over 100,000 terabytes in size (in bytes, that’s 10¹⁷ – 1 followed by 17 zeros) and is growing all the time. Searching through unstructured data like the net is a nightmare.

To get a feeling for the problems of dealing with unstructured data, think of an old-fashioned phone book. You’ve got a list of phone numbers ordered by the name of the people or businesses to whom they correspond. So, given that name, you can rapidly home in on the correct entry and pull out the phone number. But imagine you had the phone number and wanted to find the corresponding name. Making a search the other way round would be a nightmare. You would have to look at each entry in turn and see if you found a match. If, for example, your phone book had 1 million entries, on average you would need to look at 500,000 of these before hitting on the correct entry. If you were ridiculously lucky it could be the first item you checked, but if you were unlucky you might need to check every one of them.

If you are thinking how easy it is to look up a piece of information such as a phone number online, remember that’s only because someone has already done the hard work for you and indexed the information, effectively turning it from an unstructured collection of data into the equivalent of a phone book that can be sorted on any part of the entry. But with Grover’s algorithm running on a quantum computer, things would be radically different. Instead of potentially looking through the entire list, Grover’s algorithm gets to your result in a number of attempts of the order of the square root of the number of entries – in the case of our million-entry phone book, that’s around 1,000 tries. The algorithm’s advantage over an item-by-item search grows explosively as the data gets bigger. And that’s just the beginning – as we’ll discover later, in 2000 Grover came up with another quantum computing algorithm that makes it possible to do fuzzy searching, which is a whole new ballgame.
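The gap between the two approaches can be felt in a short classical experiment – a hypothetical sketch, not anything from the book itself. We do the reverse phone-book lookup the slow way, counting how many entries we examine, and compare that with the square-root figure a quantum computer running Grover’s algorithm would need:

```python
import math
import random

# A reverse lookup in an unsorted 'phone book' of a million entries.
# The integers here stand in for (name, number) cards; the data is
# invented purely for illustration.
n = 1_000_000
book = list(range(n))
random.shuffle(book)
target = random.randrange(n)

# Classical unstructured search: examine entries one by one.
checks = 0
for entry in book:
    checks += 1
    if entry == target:
        break

print(checks)             # typically somewhere around n/2 = 500,000
print(int(math.sqrt(n)))  # Grover's square-root scale: about 1,000
```

Any single run can be lucky or unlucky, but averaged over many lookups the classical search needs about half a million checks, while the square-root figure stays fixed at about a thousand – and the ratio keeps growing as the list gets longer.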

It’s no surprise that Google is now one of the biggest investors in quantum computing technology. This one algorithm alone could have a huge impact on their business – and it is not the only way that quantum computers could far outperform their traditional equivalents. Although a lot of work on developing quantum computers and quantum algorithms is taking place in universities, companies such as Google and IBM (something of a database specialist) are probably the biggest players.

Before we dive into the workings of quantum algorithms and see how Grover’s algorithm can perform this amazing high-speed searching (or, for that matter, how another early quantum algorithm, Shor’s factoring algorithm, could render current internet security useless), we need to take a few steps back. Most of us use computers on a daily basis without having a clue about what’s going on inside. To provide the necessary context, the next two chapters will give an introduction to the workings of computer hardware and computer algorithms and the associated programs. That, in effect, gives us the toolkit for the traditional computer that you may have on your desk or in your pocket in the guise of a smartphone.

We then need to get up to speed on quantum physics, and specifically how a quantum computer operates differently from a conventional electronic computer (which is a quantum device in its own right). Finally, we can see just what quantum computing algorithms are capable of, why it has taken so long to get to a workable quantum computer and what it may be able to do for us in the future.

Let’s get started, then, where Charles Babbage and Ada Lovelace left off – with the invention of the computer itself.

* For those who insist on UK English spelling, it should be noted that the worldwide standard spelling for a computer program is the American one, not ‘programme’.

† The term ‘search engine’ is such a part of modern life that we tend to forget that the anachronistic use of ‘engine’ in this context is an intentional reference back to Babbage’s engines.


2

MAKING A WORLD BIT BY BIT

It might seem entirely reasonable that the idea of Babbage’s Analytical Engine, combined with the programming possibilities described in the Menabrea memoir, would be sufficient to inspire others to construct computers. However, the usual picture of Babbage as the ‘father’ of computing is just as wildly over-romanticised as making Lovelace the first programmer. In reality, the Analytical Engine was a novelty concept, unachievable in practice, that had no more impact on real life than the heat rays in H.G. Wells’ novel The War of the Worlds did on the development of weaponry.

If anything, the next step along the road to modern computers involved taking a step back when the American inventor Herman Hollerith provided technology to help with the 1890 US census. As a result of the growth of population and far more information being collected than in the first censuses, it was taking longer and longer to process the data that had been collected. It took administrators a whole eight years to collect and tabulate the data from the 1880 census – the fear was that, before long, it would take longer to process the data than the ten-year gap between censuses being taken.

Like Babbage’s theoretical engine, Hollerith’s equipment made use of punched cards – rectangular cards with a grid of numbered positions on them, though Hollerith’s held significantly more data than Babbage had envisaged. Hollerith’s earliest cards were unprinted, but they soon had a grid of numbers to make it easier for humans to check them. The first cards were divided into twelve rows of 24 columns. Over time, the number of columns packed in was increased, in part by changing from circular holes to narrow rectangular ones, until they reached a standard of 80 columns. Although the size wasn’t initially consistent, the cards would end up 3¼ inches by 7⅜ inches, apparently based on the size of banknotes.

An IBM punched card from the 1960s.

Holes were punched in some of the available positions to record the data. Eventually, sophisticated typewriter-like card-punching machines were developed, though initially much of the punching was done by hand. Such cards would be very familiar to anyone who worked on computers in the 1960s and early 70s, when information was still entered into computers using a deck that could potentially contain thousands of cards. A line on the card was referred to as a ‘Hollerith string’ in the inventor’s honour.

There was a fundamental difference between Hollerith’s devices and the kind of computer that used the cards later on. Hollerith’s machines provided only two functions – counting and sorting cards, dependent on the position of the punched holes. Unlike the Analytical Engine or those twentieth-century computers, the algorithm – the logic of what to do – was fixed in the structure of the machines, hence the huge step back from the promise of the Analytical Engine. Hollerith’s cards could be used, for example, to count the number of children of a certain age in a city, or to provide a list of names in alphabetical order, but there was none of the flexibility that we (and Babbage) would expect.
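The whole repertoire of a Hollerith machine – counting and sorting on hole positions, with the logic fixed in the hardware – is small enough to mimic in a few lines of Python. The census ‘cards’ below are invented for illustration:

```python
# Hollerith-style fixed functions. The machines could only count and
# sort cards based on where the holes were punched - the algorithm was
# wired into the machinery, not supplied with the data.
# Hypothetical census cards: (name, age) pairs.
cards = [("Smith", 8), ("Jones", 34), ("Brown", 8), ("Davis", 21)]

# Counting: how many children of a certain age?
count = sum(1 for _, age in cards if age == 8)

# Sorting: a list of names in alphabetical order.
ordered = sorted(name for name, _ in cards)

print(count)    # 2
print(ordered)  # ['Brown', 'Davis', 'Jones', 'Smith']
```

What the machines could not do was anything like the `if`/`for` control flow used to express even this simulation – changing what was counted or how the cards were sorted meant changing the machinery itself.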

However, at the time, the relative simplicity was arguably something of a benefit. The cards and machines proved to be a huge success. The census was saved, and soon this data-processing revolution would be available to a whole range of businesses, government departments and universities. Hollerith’s Tabulating Machine Company later joined with three other companies to become the Computing-Tabulating-Recording Company, which was later renamed International Business Machines – the rather better-known IBM. But true computers, in the sense implied by the Analytical Engine, would not become a reality until the end of the Second World War.

Dr Turing’s universal machine

Mechanical computing could only go so far. Even though it became more practical to engineer the extremely fine gears to the tight tolerances required for Babbage’s design, and the Hollerith machines gradually added new ways to sort and shuffle the cards, it would take a more nimble approach to get to the kind of flexibility Babbage had envisaged. Mechanical calculators would continue to be used into the 1960s, but it was the switch to electronics which enabled the truly flexible computer to move from dream to reality.

There is considerable dispute over where the first truly programmable electronic computer was constructed. Colossus, built for the Bletchley Park codebreakers in the UK in 1943, and ENIAC, developed for artillery calculations in the US in 1945, both have reasonable claims to the label as a result of technical differences between the two. The important step forward was that where mechanical predecessors were limited by the physical difficulties of getting gears (or later electromechanical switches) to perform sufficient operations in a small enough space, Colossus and ENIAC made use of the abilities of electronic devices where the moving parts were electrons, enabling impressive miniaturisation.

That’s not to say that these early electronic computers were compact – quite the reverse. Despite having a tiny fraction of the capabilities of the most basic phone, they typically filled a large room and depended on electronic devices called valves (known as vacuum tubes in the US), which switched or amplified electrical signals to perform the basic functions that would be required to perform computations. However, much of the theory they depended on had been developed before these electronic devices were created, by two outstanding individuals – Alan Turing in the UK and John von Neumann in the US.

Born in London in 1912, Turing was educated at Cambridge, gaining a doctorate at Princeton before returning to the UK. Soon after getting back to Cambridge he would join the British cipher-cracking centre at Bletchley Park. For a long time, Turing was a hidden figure in British history. This was the result of a combination of security concerns from his involvement with Bletchley Park, the very existence of which was kept secret long after the war, and the British establishment’s reaction to Turing’s homosexuality. Turing was a wartime hero, now recognised as one of the greatest individuals of the twentieth century, and yet he was treated appallingly by the state, being criminally convicted and subjected to chemical treatment for his ‘crime’ of homosexuality.*

It is often said that Turing committed suicide as a result of the treatment he received. This, however, is pure speculation from a poorly conducted inquest. In fact, all the evidence from the time was that Turing was in good spirits shortly before he died and had recovered from the chemical abuse he received. Turing died at the age of 41 from cyanide poisoning and it was assumed that this was as a result of eating a deliberately poisoned apple, perhaps inspired by seeing the Disney film Snow White and the Seven Dwarfs. However, shockingly, the apple, which was found by his bed, was never tested for poison. And, at the time of his death, Turing had been running an experiment in a room adjacent to his bedroom which could have given off hydrogen cyanide fumes. Many now believe that his death was a tragic accident.

Turing would help build the Manchester Baby, the first stored-program electronic computer† in the late 1940s, but his most important contribution to the computing (and hence quantum computing) story came considerably earlier, in 1936, when he wrote a paper titled ‘On Computable Numbers’. In part, the paper was devised to address the Entscheidungsproblem (decision problem), a mathematical challenge from the 1920s which asked whether it was possible to have a mechanism that would be capable of deciding whether any mathematical statement was universally valid, given a set of axioms.

Axioms provide the foundations of mathematics. They are the basic assumptions that don’t need to be proved, and it’s impossible to do maths without them. Some examples are those for Euclidean geometry, such as ‘a (straight) line can be drawn from a point to any other point’ or those for set theory, which are necessary for all arithmetic to work. The mathematician Alonzo Church devised a mathematical solution to the Entscheidungsproblem