Navigate the quantum computing spectrum with this book, bridging the gap between abstract, math-heavy texts and math-avoidant beginner guides. Unlike intermediate-level books that often leave gaps in comprehension, this all-encompassing guide offers the missing links you need to truly understand the subject.
Balancing intuition and rigor, this book empowers you to become a master of quantum algorithms. No longer confined to canned examples, you'll acquire the skills necessary to craft your own quantum code. Quantum Computing Algorithms is organized into four sections to build your expertise progressively.
The first section lays the foundation with essential quantum concepts, ensuring that you grasp qubits, their representation, and their transformations. Moving to quantum algorithms, the second section focuses on pivotal algorithms — specifically, quantum key distribution and teleportation.
The third section demonstrates the transformative power of algorithms that outpace classical computation and makes way for the fourth section, helping you to expand your horizons by exploring alternative quantum computing models.
By the end of this book, quantum algorithms will cease to be mystifying as you make this knowledge your asset and enter a new era of computation, where you have the power to shape the code of reality.
You can read this e-book in Legimi apps or in any app that supports the following format:
Page count: 365
Year of publication: 2023
Quantum Computing Algorithms
Discover how a little math goes a long way
Barry Burd
BIRMINGHAM—MUMBAI
Copyright © 2023 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
Associate Group Product Manager: Gebin George
Publishing Product Manager: Kunal Sawant
Senior Content Development Editor: Rosal Colaco
Technical Editor: Jubit Pincy
Copy Editor: Safis Editing
Associate Project Manager: Manisha Singh
Proofreader: Safis Editing
Indexer: Pratik Shirodkar
Production Designer: Shyam Sundar Korumilli
Business Development Executive: Kriti Sharma
DevRel Marketing Coordinator: Sonia Chauhan
First published: August 2023
Production reference: 1310823
Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB, UK
ISBN 978-1-80461-737-3
www.packtpub.com
- Barry Burd
Barry Burd received a master’s degree in computer science at Rutgers University and a Ph.D. in mathematics at the University of Illinois. As a teaching assistant in Champaign–Urbana, Illinois, he was elected five times to the university-wide List of Teachers Ranked as Excellent by Their Students.
Since 1980, Dr. Burd has been a professor in the department of mathematics and computer science at Drew University in Madison, New Jersey. He has spoken at conferences in the United States, Europe, Australia, and Asia. In 2020, he was honored to be named a Java Champion.
Dr. Burd lives in Madison, New Jersey, USA, where he spends most of his waking hours in front of a computer screen.
I extend thanks (in reverse alphabetical order) to the following people for their help and support: George Zipperlen, James Weaver, Johan Vos, Ron Schreiner, Naria Rush, Anna Naden, Bob Miller, Terrill Frantz, Dhruv Erry, and Bambordé Baldé.
Rakshit Jain is an IBM Certified Associate Quantum Developer who is very passionate about operating systems, quantum computing, and cyber security (white hat, obviously). He’s currently in the penultimate year of his studies at the Hong Kong Polytechnic University, where he leads the Google Developer Student Club as the vice president. Driven by curiosity, he experiments with (and breaks) any bleeding-edge technology he can get his hands on. When he’s not writing scripts, he is DJing at events or climbing mountains.
Ron Schreiner has been fortunate to have worked in a broad range of technical disciplines during his career. Some of these areas include digital hardware design, embedded CPUs, silicon functional architecture (VLSI), computer compiler development, image/live video processing, and others. As a provider of medical software, he has recently delved into machine learning and most recently, quantum computing with an emphasis on quantum machine learning. Outside of the technical arena, Ron is a licensed pilot, certified open-water SCUBA diver, bicyclist, woodworker, and a once-in-a-while artist.
In this brief introduction to quantum computing, we’ll cover the following topics:
- What is quantum computing?
- Baby steps toward quantum computing
- Programming a quantum computer
- The future of quantum computing

Quantum reality is very strange.
Imagine a spinning wheel that’s turning neither clockwise nor counterclockwise. Just by looking at the wheel, you set up a relationship between yourself and the wheel, and this makes the wheel turn in one direction or another.
Imagine two friends who travel to opposite ends of the Milky Way galaxy. Whatever one randomly decides to say upon landing on a distant planet, the other feels compelled to say too.
That’s the world of quantum mechanics. It’s what makes quantum computing so fascinating.
Here’s how we distinguish quantum computing from classical computing:
Classical computing: A computational model in which the fundamental unit for calculation is a binary bit. Each bit’s value is either 0 or 1.

Every laptop, server, workstation, and smartphone is a kind of classical computer. Even the Frontier supercomputer with 600,000 cores in Oak Ridge, Tennessee is a classical computer. Almost all classical computers in use today are based on von Neumann’s architecture. With this design, the computer stores sets of instructions (commonly known as programs). A processing unit repeatedly fetches the next instruction and then executes it. (What counts as the “next” instruction may depend on the previous instruction’s outcome.)
Alternatives to von Neumann’s architecture include Turing machines, which serve mainly as theoretical models, and neural nets, which we will describe briefly in Chapter 10.
Quantum computing: A computational model in which the fundamental unit for calculation is a quantum bit, commonly known as a qubit. A qubit’s values can be 0, 1, or values that can roughly be considered “in between” 0 and 1.

Qubits behave according to the rules of quantum mechanics. Among other things, this means that the previous paragraph’s wording (“in between 0 and 1”) is a convenient but gross simplification. No matter how you combine a “½ qubit” with another “½ qubit,” you don’t naturally get “1 qubit.” The arithmetic that’s embodied in a collection of qubits bears little resemblance to our common, everyday ideas about the numbers between 0 and 1.
The idea for quantum computing came in 1981 with presentations by Paul Benioff and Richard Feynman at the First Conference on the Physics of Computation. Fast forward to 1998, when the world’s first quantum computer had only two qubits.
Tip
For more information about the First Conference, the two-qubit computer, and other topics in this Introduction, refer to this chapter’s Further reading section.

You wouldn’t buy a laptop whose chip could process only two bits. In the same way, you wouldn’t expect a two-qubit quantum computer to solve your puzzling mathematical problems.
By 2006, the world had 12-qubit quantum computers. And by 2017, we had 50-qubit computers. The number of qubits in most advanced quantum computers of the early 2020s is in the low-to-mid hundreds. Compare this with a typical laptop’s memory, which stores about 64 billion bits.
Of course, the answer you get when you ask for a count of qubits depends a lot on who you ask. Reports on trustworthy websites sometimes contradict one another. The answer also depends on what you mean when you ask the question. Here are some facts:
- Some specialized quantum computers (described briefly in Chapter 10) have 5,000 qubits.
- Today’s quantum computers have extra qubits for error checking and error correction, so counting qubits isn’t a straightforward task.
- Qubits are extremely fragile, so it’s unwise to draw conclusions based on a single qubit’s behavior. Quantum theory makes this abundantly clear. To compensate for this inconvenience, physicists and engineers combine the outcomes from several physical qubits to come up with an aggregate answer that they call a logical qubit. It’s like tallying the votes of several physical qubits to come up with one logical qubit representative. But, of course, the “voting” procedure is quite complicated.
This book is about the logic underlying quantum computing algorithms, not the engineering issues involved in implementing these algorithms on physical machines. So, throughout this book, the word qubit refers to a logical qubit. On some quantum computers, what we call a qubit might be realized using 10, 100, or even 1,000 physical qubits. A machine that’s touted as having 127 qubits may actually need thousands of physical qubits to do its work. As it is with any noisy system, the path to reliability requires some redundancy.
- Some tasks require relatively small numbers of qubits. As I write this chapter in 2023, companies around the world are preparing to make a form of quantum-based cryptography available commercially. The machines that implement this technology have fewer than 100 qubits. (See Chapter 5 to find out why, in theory, just one qubit at a time will suffice.)
- Most quantum algorithms aren’t useful unless you have thousands of logical qubits. Millions or billions of logical qubits are even better. So, for now, many of the algorithms described in this book are theoretical, not practical.
But don’t underestimate the importance of theoretical results. In 1843, William Hamilton defined a strange number system called the quaternions. Today, quaternions help us create 3D computer games. And in 1917, Johann Radon defined his Radon transform, which has become an important tool in medical imaging.
Quantum computers don’t run independently. They receive input from classical computers and provide output to classical computers. In a sense, there’s no such thing as completely independent quantum computing. All quantum computing is part of a larger technology called hybrid computing.
When you work with quantum computers, you write code that runs on a classical computer. Based on your code, the classical computer feeds instructions to the quantum computer. There are many programming platforms designed specifically for quantum computing. They include Q# from Microsoft, Cirq from Google, OpenQASM from IBM, Ocean from D-Wave, and PennyLane, which is maintained by Xanadu.
In this book, we program using Qiskit – an open source software development kit. Qiskit (pronounced KISS-kit) is part of IBM’s family of quantum computing initiatives. Using Qiskit, you can run code for quantum computers on many different devices. Some of these devices are real quantum computers living in the cloud. Other devices are classical computers that, for small-scale programs, mimic the behavior of real quantum computers.
As far as we know, we’ll never trade in all our classical computers for quantum computing models. Quantum computers aren’t good for performing the mundane tasks that we assign to most computers today. You wouldn’t want to program a simple spreadsheet on a quantum computer, even if you could find a way to do it.
But to solve certain kinds of problems, a quantum computer with sufficiently many qubits will leave classical computers in the dust. Chapter 9 shows you how sufficiently powerful quantum computers will be able to factor 2,048-bit numbers. According to some estimates, a factoring problem that would take classical computers 300 trillion years to solve will require only 10 seconds of a quantum computer’s time. If we can achieve an advantage of this kind using a real quantum computer, we call it quantum supremacy.
In 2019, a team at Google claimed to have demonstrated quantum supremacy. Its 53-qubit quantum computer took about 3 minutes to solve a problem that supposedly would have required 47 classical computing years. But in 2022, researchers in China showed how a classical computer could solve the same problem in about a minute. So much for Google’s claim!
In 2020, a group at the University of Science and Technology of China staked its own quantum supremacy claim. Using 76 qubits, it generated a particular set of random numbers that it said would take classical computers nearly 1 billion years to generate. As of mid-2023, the group’s claim has not been debunked.
With or without any kind of supremacy, the field of quantum computing presents exciting possibilities. Engineers, physicists, and computer scientists are hard at work finding new ways to leverage the technology’s capabilities. Market research firms predict that the industry’s growth rate will be at least 30 percent per year. Start-ups are preparing for applications in healthcare and finance. The last time I visited LinkedIn, I found more than 2,500 quantum computing job listings in the United States, with over 6,500 listings worldwide.
So, it’s time to learn more about quantum computing. If I were you, I’d move on to Chapter 1.
This part lays the foundation for the exploration of quantum computing. You’ll learn about two weird features of quantum reality: superposition and entanglement. You’ll reinforce what you learn by writing code for quantum computers and performing calculations to back up your intuitions.
This part has the following chapters:
- Chapter 1, New Ways to Think about Bits
- Chapter 2, What Is a Qubit?
- Chapter 3, Math for Qubits and Quantum Gates
- Chapter 4, Qubit Conspiracy Theories

Classical computing is everywhere. The phone in your pocket, the laptop on your desk, and the world’s fastest supercomputers use classical computing hardware. A classical computer codes everything in bits, and bits are quite simple. The rules for dealing with bits have been known since the mid-1800s. (George Boole wrote his ground-breaking work on logic in 1854.) The computer hardware that manipulates bits has developed steadily since the mid-1940s.
Before you read about quantum computing, you need to have explored certain mathematical concepts. With that in mind, this chapter shows you how math applies to ordinary, classical bits.
If you’ve done any coding, you’re already familiar with the and, or, and not operators, but you may not know how to represent these operators using matrices. This chapter explores the connection between bit operations and matrices. After a tour through the math, we walk through the creation of matrices using Jupyter notebooks with Python.
In this chapter, we’ll cover the following main topics:
- Bits and logic gates
- Working with matrices
- Matrix representation of bits and gates
- Jupyter notebooks
- Matrices in Python

Every chapter in this book requires that you have access to a web browser. You can download each chapter’s Python code from the following GitHub link: https://github.com/PacktPublishing/Quantum-Computing-Algorithms.
A classical computer codes information as sequences of bits, each bit having one of two values – 0 or 1. Of course, if you looked inside the circuitry, you wouldn’t see things shaped like ovals for 0s and vertical lines for 1s. Instead, you might observe electrical voltages. Any voltage below 0.8 volts might stand for a 0 bit, and any voltage over 2.0 volts might stand for a 1 bit. A voltage between 0.8 and 2.0 would be treated as an error condition.
As bits flow from one part of the computer to another, they pass through the circuitry’s gates. A gate is a piece of hardware (often made of transistors) that operates on bits. Each kind of gate is named after the operation that it performs.
Let’s see a few examples:
A NOT gate performs a NOT operation. The gate outputs 1 if and only if the gate’s input is not 1:

Figure 1.1 – Behavior of the NOT gate
An AND gate performs the AND operation with two inputs. The gate outputs 1 if and only if the first input is 1 and the second input is 1:

Figure 1.2 – Behavior of the AND gate
An OR gate performs the OR operation with two inputs. The gate outputs 1 if and only if the first input or the second input, or both inputs are 1:

Figure 1.3 – Behavior of the OR gate
Notice the very last OR gate example. In English, a sentence such as “It’s raining, or it’s cold” usually means “Either it’s raining, or it’s cold, but it’s not both raining and cold.” With classical gates, the word OR means “Either the topmost bit is 1 or the bottommost bit is 1, or both bits are 1.”
Taken together, the NOT, AND, and OR gates form a universal set. This means you can perform any logical operation using only NOT, AND, and OR gates.
For example, Figure 1.4 shows the addition of one-bit binary numbers:
Figure 1.4 – Binary addition
Figure 1.5 shows a circuit that adds one-bit binary numbers:
Figure 1.5 – Adding binary numbers using logic gates
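Since this chapter later moves to Python in Jupyter notebooks, here is a rough Python sketch of Figure 1.5’s circuit built from only NOT, AND, and OR operations. The function names are my own, not the book’s:

```python
# A half adder built from the universal set {NOT, AND, OR}.
# It adds two one-bit numbers, producing a carry bit and a sum bit.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def add_bits(a, b):
    carry = AND(a, b)                      # carry is 1 only for 1 + 1
    total = AND(OR(a, b), NOT(AND(a, b)))  # sum is 1 when exactly one input is 1
    return carry, total

for a in (0, 1):
    for b in (0, 1):
        carry, total = add_bits(a, b)
        print(f"{a} + {b} -> carry {carry}, sum {total}")
```

Running the loop reproduces the table in Figure 1.4: only 1 + 1 produces a carry.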
Full disclosure
As human beings, we tend to reason using NOT, AND, and OR operators, but using NOT, AND, and OR gates isn’t always the most efficient way to build computer circuitry. Many circuits make heavy use of NAND gates (think NOT BOTH gates) since the NAND gate alone forms a universal set. Another kind of gate, called a NOR gate, also forms a universal set (for a NOR gate, think NEITHER ... NOR).
Our Arabic number system is biased by the human mind’s affinity with the number 10. For example, to represent the value four-hundred-sixty-three, we write 463. Figure 1.6 shows you how it works:
Figure 1.6 – Decimal number representation
This way of representing numbers is called the decimal (base 10) system because the prefix dec comes from the Latin word for ten. The numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9 are called digits.
If we ever meet intelligent beings from another planet, they probably won’t use the decimal system. They may, in fact, use the binary (base 2) system. In the binary system, we represent four-hundred-sixty-three with the string 111001111, as shown in Figure 1.7:
Figure 1.7 – Binary number representation
This way of representing numbers is called the binary (base 2) system because the prefix bi comes from the Latin word for two. The numbers 0 and 1 are called binary digits, also known as bits.
So, when you see the characters 100, what do they mean? Are they the binary representation of the number four, or the decimal representation of the number one hundred? If there’s any doubt, I use a subscript to indicate the number’s base. For example, 100₂ is the binary representation of four, and 100₁₀ is the decimal representation of one hundred.
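Python’s built-ins can translate between these representations, which is a quick way to check the examples above for yourself:

```python
# int(text, base) reads a string of digits in the given base;
# format(n, "b") writes the number n in binary.
print(int("100", 2))     # the string 100, read in base 2, is the number four
print(int("100", 10))    # the same string, read in base 10, is one hundred
print(format(463, "b"))  # four-hundred-sixty-three written in binary
```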
The table in Figure 1.8 shows 0 to 15 in their decimal and binary representations. It’s handy if you can easily recognize these four-bit binary representations, but don’t bother memorizing the numbers in the table. Instead, familiarize yourself with these numbers by looking for patterns. Take some time to do it. It’s worth the effort.
Figure 1.8 – Decimal versus binary
The next few sections describe a way to represent bits and logic gates in a particular numeric form.
A matrix is a rectangle of numbers. For example:
The matrix shown previously has two rows and three columns, so we call it a 2×3 matrix (pronounced as two-by-three matrix).
Tip
The plural of matrix is matrices (pronounced as MAY-trih-sees). To sound like a pro, never say matrixes or matricies.
It’s common to use an uppercase letter to represent a matrix. You refer to the entries in a matrix using the entries’ row numbers and column numbers. Some authors start with row number 1 and column number 1, but, for our purposes, it’s better to start with row number 0 and column number 0.
When you talk about matrices, you need a name that refers to a single number – a number that isn’t inside of a matrix. For example, if you write the number 12 with no parentheses around it, you’re referring to one of these single numbers. A number of this kind is called a scalar.
So, what can you do with a scalar? One thing you can do is multiply a scalar by a matrix. To do so, you multiply the scalar by each of the matrix’s entries. For example:
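The later Matrices in Python section uses code for such calculations; as a preview, here is a sketch of scalar multiplication using NumPy (an assumption here; the chapter sets up its own notebook environment later):

```python
import numpy as np

# Multiplying a scalar by a matrix multiplies the scalar by every entry.
M = np.array([[1, 2, 3],
              [4, 5, 6]])
print(3 * M)  # each entry of M, tripled
```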
One way to work with bits and logic gates is to represent them as matrices. From this point of view, combining gates is the same as finding products of matrices. With that in mind, the rest of this section covers various kinds of matrix products.
When a matrix has only one row or only one column, we give it a special name. We call it a vector. Let’s see the following examples:
This matrix is a row vector:

And this matrix is a column vector:

It’s common to refer to a vector’s entries using the entries’ row numbers or column numbers. For our purposes, it’s best to start with row number 0 or column number 0:
In classical computing, we can do the following:
- Represent a bit with a column vector
- Represent a collection of bits with a column vector
- Represent an operation on bits with a matrix

The same is true for quantum computing. You can read more about this in the section entitled Matrix representation of bits and gates.
When you have two vectors of the same size (one row vector and one column vector), you can combine them by calculating the dot product. To do this, you zipper the two vectors together. You multiply the vectors’ corresponding entries and then add up the products. For example:
Important note
In order to take a dot product, you must start with two vectors of the same size. You can’t take the dot product of a four-entry row vector and a six-entry column vector.
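To make the zippering concrete, here is a small NumPy sketch (NumPy is an assumption; the book’s own notebook setup comes later in the chapter). The vectors are my own illustrations:

```python
import numpy as np

row = np.array([1, 2, 3])
col = np.array([4, 5, 6])

# Zipper the vectors together: 1*4 + 2*5 + 3*6 = 32
print(np.dot(row, col))
```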
You can think of a matrix as a combination of vectors. When you do, you can multiply matrices using repeated dot products. For example, start with the following two matrices:
Think of these matrices as combinations of vectors, as shown here:
Combine each row on the left with each column on the right (that is, find all the dot products). Use the dot products to form a new matrix. The new matrix’s entries go in positions corresponding to the row number and column numbers in the original two matrices.
For example, the dot product of Row 0 on the left and Column 0 on the right gives you the entry in the new matrix’s row 0, column 0 position:
Figure 1.9 – Multiply Row 0 by Column 0
In general, you get the new matrix’s row i, column j entry by taking the dot product of Row i on the left with Column j on the right:
Figure 1.10 – In general, multiply Row i by Column j
Important note
To multiply two matrices, the number of columns in one matrix (the matrix on the left) must equal the number of rows in the other matrix (the matrix on the right). You can multiply a 5×7 matrix by a 7×10 matrix, but you can’t take the product of a 5×7 matrix and a 6×10 matrix because 7 doesn’t equal 6.
At this point, I must emphasize how important it is to distinguish left from right. Remember that when you multiply matrices, you combine each row on the left with each column on the right. You don’t do it the other way around.
Figure 1.11 – Multiply the left matrix’s rows by the right matrix’s columns
Also, matrix multiplication isn’t commutative. In other words, order matters. Look at these two products:
Swapping the left and right matrices changes the final answer, and there’s no simple rule for turning one final answer into the other.
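You can confirm this non-commutativity in NumPy (an assumed setup; the matrices below are my own illustrations, not the book’s figures):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# @ is NumPy's ordinary matrix multiplication.
print(A @ B)  # swaps A's columns
print(B @ A)  # swaps A's rows -- a different answer
```

Swapping B to the other side of the product rearranges the result in a different way, just as the text describes.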
Important note
There are several ways to multiply matrices. The way that’s shown in this section is simply called multiplying or finding the product. For emphasis, I sometimes refer to this kind of multiplication as ordinary matrix multiplication or finding the dot product of two matrices. My terminology isn’t standard. Most authors refer only to the dot product of two vectors and the product of two matrices.
For a moment, let’s set ordinary matrix multiplication aside. Another way to multiply matrices goes by the name tensor product. To find the tensor product of two matrices, you perform a scalar multiplication of each entry in the left matrix with the entire matrix on the right. Here are some examples:
- To find the tensor product, multiply 3 by the matrix on the right and multiply 2 by the matrix on the right:
- To find the tensor product, multiply each entry in the left matrix by the matrix on the right:

Notice that the matrices in a tensor product don’t have any size requirements. You can find the tensor product of a 3×5 matrix with a 10×4 matrix. When you do, you get a 30×20 matrix.
Important note
Remember that when you find a tensor product, you start by spreading the leftmost matrix’s entries and keeping the rightmost matrix’s entries grouped. You don’t do it the other way around.
For example:
Figure 1.12 – Multiply each entry in the left matrix by the entire right matrix
Like ordinary matrix multiplication, taking a tensor product isn’t commutative. For example:
The entries in one answer are rearranged in the other.
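NumPy’s `kron` function computes tensor products, so you can check this rearrangement yourself (a sketch with my own example matrices, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, 2]])
B = np.array([[3, 4]])

# np.kron computes the tensor (Kronecker) product.
print(np.kron(A, B))  # spread A's entries over copies of B
print(np.kron(B, A))  # the same four numbers, rearranged
```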
What do matrix operations have to do with computer logic? When you create a circuit, there are two ways to combine gates:
When you combine gates in series, you apply the gates one after another to the same stream of bits. To do the math, you use ordinary matrix multiplication.

Figure 1.13 – Two operations in series
When you combine gates in parallel, you apply the gates side by side to different streams of bits. To do the math, you use a tensor product.

Figure 1.14 – Two operations in parallel
You can also apply a gate to its input bits. When you do, you use ordinary matrix multiplication.
Figure 1.15 – Sending bits through a gate
You can read more about this in the next section.
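Anticipating the next section’s matrix for the NOT gate, the series-versus-parallel rule can be sketched in NumPy (an assumed setup, not the book’s own code):

```python
import numpy as np

X = np.array([[0, 1],
              [1, 0]])  # the NOT gate's matrix (introduced in the next section)

# In series: two NOT gates, one after another, on the same bit stream.
# Ordinary multiplication gives the identity matrix -- NOT undoes NOT.
print(X @ X)

# In parallel: a NOT gate on each of two side-by-side bit streams.
# The tensor product gives a single 4x4 matrix acting on both bits at once.
print(np.kron(X, X))
```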
For an inkling of the way matrices work in computer logic, we introduce two new ways to represent bits:
In Dirac notation, the zero bit is |0⟩, and the one bit is |1⟩. The | ⟩ combination of characters is called a ket.
In vector notation, the zero bit is the column vector with a 1 on top and a 0 on the bottom, and the one bit is the column vector with a 0 on top and a 1 on the bottom.

These new ways to represent bits may seem cumbersome and redundant, but they’re really very helpful. If you like, think of the numbers in a vector as amounts ranging from zero to one. A vector’s top entry is an amount of zero-ness and the vector’s bottom entry is an amount of one-ness.
Figure 1.16 – The correspondence between vector notation and Dirac notation
This business about all zero-ness and all one-ness will make more sense when you read about qubits in the next chapter.
Disclaimer
Most authors reserve kets and vectors for qubits (quantum bits). For classical bits, they use the unadorned symbols 0 and 1. I’ll follow that rule in most chapters, but in this chapter, the Dirac and vector notations for bits are handy.
We represent a bit with a vector and represent an operator with a matrix. For example, the NOT operator is the 2×2 matrix with 0s on its main diagonal and 1s off the diagonal:
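A quick NumPy check (an assumed setup) shows that this matrix really behaves like NOT on the two bit vectors:

```python
import numpy as np

NOT = np.array([[0, 1],
                [1, 0]])
ket0 = np.array([1, 0])  # |0> in vector notation
ket1 = np.array([0, 1])  # |1> in vector notation

print(NOT @ ket0)  # [0 1] -- that is, |1>
print(NOT @ ket1)  # [1 0] -- that is, |0>
```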
To describe the AND and OR operators, we need ways of representing combinations of two bits, three bits, or any other number of bits:
In Dirac notation, there are four different combinations consisting of two bits. We can write the four combinations this way: |0⟩|0⟩, |0⟩|1⟩, |1⟩|0⟩, and |1⟩|1⟩. Or we can abbreviate the four combinations this way: |00⟩, |01⟩, |10⟩, and |11⟩.
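These two-bit combinations correspond to tensor products of the one-bit vectors; here is a hedged NumPy sketch (assumed setup) of one of them:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# |10> is the tensor product of |1> and |0>: a four-entry vector
# whose single 1 sits in position 2 (and 10 in binary is two).
print(np.kron(ket1, ket0))
```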