Dancing with Qubits, Second Edition
From qubits to algorithms, embark on the quantum computing journey shaping our future
Author: Robert S. Sutor
Publisher: Packt | Published: March 2024 | ISBN-13: 9781837636754 | Paperback, 684 pages
1.4 Applications to artificial intelligence

Artificial intelligence (AI) and one of its subsets, machine learning, are broad collections of data-driven techniques and models. They help find patterns in information, learn from the information, and automatically perform tasks more “intelligently.” They also give humans help and insight that might have been difficult to get otherwise.

Here is a way to start thinking about how quantum computing might apply to large, complicated, computation-intensive systems of processes such as those found in AI and elsewhere. These three cases are, in some sense, the “small, medium, and large” ways quantum computing might complement classical techniques:

  • There is a single mathematical computation somewhere in the middle of a software component that might be sped up via a quantum algorithm.
  • There is a well-described component of a classical process that could be replaced with a quantum version.
  • Quantum techniques let us avoid some classical components of the traditional method entirely, or we can replace the whole classical algorithm with a much faster or more effective quantum alternative.

At the time of writing, quantum computers are not “big data” machines. You cannot take millions of information records and provide them as input to a quantum calculation. Instead, quantum may be able to help where the number of inputs is modest, but the computations “blow up” as you start examining relationships or dependencies in the data. Quantum, with its exponentially growing working memory, as we saw in the caffeine example in section 1.2, may be able to control and work with the blow-up. (See section 2.7 for a discussion of exponential growth.)

In the future, however, quantum computers may be able to input, output, and process much more data. Even if it is only theoretical now, it makes sense to ask if there are quantum algorithms that can be useful in AI someday.

Let’s look at some data. I’m a big baseball fan, and baseball has a lot of statistics associated with it. The analysis of this even has its own name: “sabermetrics.”

 Figure 1.7: Baseball player statistics by year

Suppose I have a table of statistics for a baseball player given by year, as shown in Figure 1.7. We can make this look more mathematical by creating a matrix of the same data:

[Displayed equation: the statistics from Figure 1.7 arranged as a matrix]

Given such information, we can manipulate it using machine learning techniques to predict the player’s future performance or even how other similar players may do. These techniques use the matrix operations we discuss in Chapter 5, “Dimensions.”
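As a rough illustration of what this looks like in code, here is a minimal sketch that stores a small per-year statistics table as a matrix and fits a simple trend line to one column. The statistic columns and all the numbers are invented for this example; real sabermetric models are far more sophisticated.

```python
# A minimal sketch: store per-year statistics as a matrix and fit a simple
# trend to one column. The column choices and all numbers below are invented.
import numpy as np

# Rows are years; columns are (games, at bats, hits, home runs).
stats = np.array([
    [152, 580, 170, 21],
    [148, 562, 161, 24],
    [155, 591, 178, 27],
])
years = np.array([2021, 2022, 2023])

# Fit a straight line to the home run column and extrapolate one season ahead.
slope, intercept = np.polyfit(years, stats[:, 3], deg=1)
print(f"Predicted 2024 home runs: {slope * 2024 + intercept:.1f}")
```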

There are 30 teams in Major League Baseball in the United States. With their training and feeder “minor league” teams, each major league organization may have more than 400 players throughout its system. That would give us over 12,000 players, each with a complete playing history. There are more statistics than I have listed, so we can easily get more than 100,000 values in our matrix.

As another example, in the area of entertainment, it’s hard to estimate how many movies exist, but it is well above 100,000. For each movie, we can list features such as whether it is a comedy, drama, romance, or action film, and who each actor is. We might also know all members of the directorial and production staff, the geographic locations shown in the film, the languages spoken, and so on. There are hundreds of such features and millions of people who have watched the films!

For each person, we can also add features such as whether they like or dislike a type of movie, actor, scene location, or director. Using all this information, which film should I recommend to you on a Saturday night in December, based on what you and people similar to you like?

Think of each feature or each baseball player or film as a dimension. While you may think of two and three dimensions in nature, we might have thousands or millions of dimensions in AI.

Matrices like these used in AI can grow to millions of rows and entries. How can we make sense of them to get insights and see patterns? Beyond merely storing and manipulating that much information, can we do the math on classical computers quickly and accurately enough?

While scientists initially thought that quantum algorithms might offer exponential improvements to such classical recommender systems, in 2019 Ewin Tang showed a classical method that achieves a comparably large improvement. [213] An example of a process being exponentially faster is doing something in 6 days instead of 10⁶ = 1 million days. That’s approximately 2,740 years.

Tang’s work is a fascinating example of the interplay of progress in classical and quantum algorithms. People who develop algorithms for classical computing look to quantum computing and vice versa. Also, any particular solution to a problem may include classical and quantum components.

Nevertheless, many believe that quantum computing will greatly improve some matrix computations. One such example is the HHL algorithm, whose abbreviation comes from the first letters of the last names of its authors, Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd. This algorithm is an example of the first case above. [105] [63]

Algorithms such as these may find use in fields as diverse as economics and computational fluid dynamics. They also place requirements on the structure and density of the data and may use properties such as the condition number, which we discuss in section 5.13.
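For context, the classical problem HHL targets is solving a system of linear equations Ax = b. The sketch below states that problem with an arbitrary small matrix and also computes its condition number, the quantity mentioned above; it is only a classical reference point, not a quantum implementation.

```python
# The classical problem HHL addresses: given A and b, find x with A x = b.
# The matrix and vector are arbitrary illustrative values.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)      # classical solution of the linear system
kappa = np.linalg.cond(A)      # condition number (see section 5.13)

print("x =", x)
print("condition number =", round(float(kappa), 3))
```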

To learn more

Completing this book will equip you to read the original paper describing the HHL algorithm and more recent surveys about applying quantum computing to linear algebraic problems. [105]

An important problem in machine learning is classification. In its simplest form, a binary classifier separates items into one of two categories or buckets. Depending on the definitions of the categories, it may be more or less easy to do the classification.

Examples of binary categories include:

  • book you like or book you don’t like
  • comedy movie or dramatic movie
  • gluten-free or not gluten-free
  • fish dish or chicken dish
  • UK football team or Spanish football team
  • hot sauce or extremely hot sauce
  • cotton shirt or permanent press shirt
  • open-source or proprietary
  • spam email or valid email
  • American League baseball team or National League team

The second example of distinguishing between comedies and dramas may not be well designed, since there are movies that are both.

Mathematically, we can imagine taking some data value as input and classifying it as +1 or –1. We take a reasonably large data set and label each value by hand as a +1 or –1. We then learn from this training set how to classify future data.

Machine learning binary classification algorithms include random forests, k-nearest neighbors, decision trees, neural networks, naive Bayes classifiers, and support vector machines.

In the training phase, we have a list of pre-classified objects (books, movies, proteins, operating systems, baseball teams, etc.). We then use the above algorithms to learn how to put a new object in one bucket or another.

The support vector machine (SVM) is a straightforward approach with a precise mathematical description. In the two-dimensional case, we try to draw a line separating the objects (represented by points in the plot in Figure 1.8) into one category or the other.

 Figure 1.8: Points we wish to separate

The line should maximize the gap between the sets of objects.

 Figure 1.9: The points separated by a line

The plot in Figure 1.9 is an example of a line separating the dark gray points below from the light gray ones above.

Given a new point, we plot it and determine whether it is above or below the line. That classifies it as light gray or dark gray, respectively.
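If you would like to see this procedure in code, here is a minimal sketch using scikit-learn’s linear support vector classifier. The points, labels, and the new point are invented and are not the data plotted in Figures 1.8 and 1.9.

```python
# A minimal sketch of a two-dimensional linear SVM. The coordinates and
# labels are invented; they are not the data in Figures 1.8 and 1.9.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [2.0, 1.5], [1.5, 0.5],    # dark gray, below the line
              [1.0, 3.0], [2.0, 3.5], [1.5, 4.0]])   # light gray, above the line
y = np.array([-1, -1, -1, +1, +1, +1])

clf = SVC(kernel="linear").fit(X, y)

# Classify a new point by which side of the separating line it falls on.
print(clf.predict([[1.8, 2.8]]))    # expected: [1], the light gray side
```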

Suppose we know the new point belongs with those above the line and our classification agrees. We accept that and move on. If instead we misclassify the point, we add it to the training set and try to compute a new and better line. This may not always be possible.

 Figure 1.10: We cannot separate the points by a line

In the plot in Figure 1.10, I added a new light gray point close to 2 on the vertical axis. With this extra point, there is no line we can compute to separate the points.

Had we represented the objects in three dimensions, we would have tried to find a plane that separated the points with a maximum gap. We would need to compute, for each point, some new amount by which it sits above or below the plane. In geometric terms, if we have only x and y, we must somehow compute a z to work in that third dimension.

For a representation using n dimensions, we try to compute an (n – 1)-dimensional separating hyperplane. We look at two and three dimensions in Chapter 4, “Planes and Circles and Spheres, Oh My,” and the general case in Chapter 5, “Dimensions.”

 Figure 1.11: The points moved into three dimensions

In the three-dimensional plot in Figure 1.11, I take the same values from the last two-dimensional version and lay the coordinate plane flat. I then add a vertical dimension. I push the light gray points below the plane and the dark gray ones above. With this construction, the coordinate plane itself separates the values.

While we can’t separate the points in two dimensions, we can in three dimensions. This kind of mapping into a higher dimension is called a kernel trick. While the coordinate plane might not be the ideal separating hyperplane, it gives you an idea of what we are trying to accomplish. The benefit of kernel functions (as part of the similarly named “trick”) is that we can do far fewer explicit geometric computations than you might expect in these higher-dimensional spaces.
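The short sketch below illustrates the idea with an invented XOR-like arrangement of four points, which no straight line can separate. An SVM with a nonlinear (radial basis function) kernel handles them without our ever constructing the higher-dimensional coordinates explicitly; the particular kernel and parameter values are illustrative choices, not the ones behind Figure 1.11.

```python
# The kernel trick in miniature: four XOR-like points (invented for this
# example) cannot be separated by any line in two dimensions, but an SVM
# with a nonlinear kernel classifies them without explicitly building the
# higher-dimensional coordinates.
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [1.0, 1.0],     # dark gray
              [0.0, 1.0], [1.0, 0.0]])    # light gray
y = np.array([-1, -1, +1, +1])

rbf = SVC(kernel="rbf", gamma=2.0).fit(X, y)
print(rbf.predict(X))    # all four training points are classified correctly
```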

It’s worth mentioning now that we don’t need to try quantum methods on small problems that we can handle quite well using traditional means. We won’t see any quantum advantage until the problems are big enough to overcome the quantum circuit overhead versus classical circuits. Also, if we come up with a quantum approach that we can simulate efficiently on a classical computer, we don’t need a quantum computer.

A quantum computer with 1 qubit provides us with a two-dimensional working space. Every time we add a qubit, we double the number of dimensions. This is due to the properties of superposition and entanglement that I introduce in Chapter 7, “One Qubit.” For 10 qubits, we get 2¹⁰ = 1,024 dimensions. Similarly, for 50 qubits, we get 2⁵⁰ = 1,125,899,906,842,624 dimensions.
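To make the doubling concrete, here is a small calculation of the dimension 2ⁿ for a few values of n.

```python
# The working space of an n-qubit register has dimension 2**n.
for n in (1, 2, 10, 50):
    print(f"{n:2d} qubits -> {2**n:,} dimensions")
```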

Remember all those dimensions for the features, baseball players, and films? We want a sufficiently large quantum computer to perform the AI calculations in a quantum feature space. This is the main point: handle the large number of dimensions coming out of the data in a large quantum feature space.

There is a quantum approach that can generate the separating hyperplane in the quantum feature space. There is another that skips the hyperplane step and produces a highly accurate classifying kernel function. As the ability to entangle more qubits increases, the successful classification rate also improves. [106] This is an active area of research: how can we use entanglement, which does not exist classically, to find new or better patterns than we can with strictly traditional methods?
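To give a flavor of what a classifying kernel function is, the sketch below computes a small kernel (Gram) matrix classically: each point is mapped by a feature map to a normalized vector, and each kernel entry is the squared overlap between two such vectors. The feature map and the data points are invented; on quantum hardware, the feature map would instead prepare states on entangled qubits and the overlaps would be estimated by measurement.

```python
# A classical stand-in for a quantum kernel: map each point x to a normalized
# "feature state" phi(x) and take kernel entries |<phi(x_i), phi(x_j)>|^2.
# The feature map and the data points are invented for illustration.
import numpy as np

def phi(x):
    v = np.array([1.0, x[0], x[1], x[0] * x[1]])   # hypothetical feature map
    return v / np.linalg.norm(v)

X = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]])
K = np.array([[abs(phi(a) @ phi(b)) ** 2 for b in X] for a in X])
print(np.round(K, 3))    # the kernel (Gram) matrix a classifier would use
```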

To learn more

Researchers are writing an increasing number of papers connecting quantum computing with machine learning and other AI techniques, but the results are somewhat fragmented. [236] We return to this topic in Chapter 13, “Introduction to Quantum Machine Learning.” I warn you again that quantum computers cannot process much data now!

For an advanced application of machine learning for quantum computing and chemistry, see Torlai et al. [216] I introduce machine learning and classification methods in Chapter 15 of Dancing with Python. [211]
