(Picture: János Pach)
The Quantum Computer – A Miracle or Mirage
Inaugural address of an honorary member of the MTA,
Budapest, 15 June, 2022, 15:00
Abstract: On February 12, 2002, Michel Devoret’s lecture entitled “The Quantum Computers: Miracle or Mirage” kicked off the 150th Anniversary Celebration of the Yale School of Engineering. In his abstract, Devoret asserted that while quantum mechanics had often been viewed as limiting the information one could extract from a physical system, “recent discoveries show that fundamental properties of quantum mechanics could actually be used to perform computations that would be impossible on a standard ‘classical’ computer.” Devoret’s question of whether the quantum computer is a miracle or a mirage is, to this day, one of the fascinating clear-cut open scientific questions of our time. In my inaugural lecture I will explain the question and the discoveries from the 1990s that suggested that quantum computers could perform miracles, and I will present my theory as to why the quantum computer is a mirage.
János Pach told me that the name of the room where my lecture was given, “Felolvaso terem,” means something like “Hall for reading papers”. This is a rarely used, somewhat archaic expression that refers to the act of reading papers out loud to an audience. Quite untypically for mathematical lectures, this was precisely the style of my lecture. Here is the text of my lecture:
I was very honored and happy to be elected as an honorary member of the Hungarian Academy of Sciences. I am especially happy to come to the beautiful city of Budapest to give this inaugural address and the Turán memorial lectures, and I am thankful for the opportunity to meet friends and colleagues of many decades as well as a new generation of wonderful mathematicians.
Our mathematical life and the inherent difficulties and uncertainties of mathematics and science are for us islands of stability and solace in times of great concerns and difficulties.
Quantum computers are a new type of computer based on quantum physics. When it comes to certain computational objectives, the computational ability of quantum computers is tens, and even hundreds, of orders of magnitude faster than that of the digital computers we are familiar with, and their construction would enable us to break most current cryptosystems. While quantum computers represent a future technology that captivates the hearts and imaginations of many, there is also an ongoing dispute over the very possibility of their existence.
My theory asserts that quantum computers are inherently noisy. Their robust components represent a low-level (classical) computational ability, and they necessarily exhibit chaotic behavior.
The classical computer can be seen as a device that contains n “bits” where every bit is in one of two positions of either “zero” or “one.” The computer operates through “gates” that perform logical operations on either a single or two bits. Two simple types of gates are sufficient to describe all forms of computation: the NOT gate, which reverses the value of a single bit, and the AND gate, which receives two bits as input and produces “one” if and only if the value of each of the bits in the input is “one.” The model presented here for computers is called the Boolean circuit model and is close to the 1940s models of Church, Turing, and others, which preceded the advent of the digital computer and the computer revolution in the second half of the twentieth century.
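As a toy illustration (my own, not part of the lecture), the universality of the NOT and AND gates can be seen by building other standard gates out of them:

```python
# A minimal sketch of the Boolean circuit model: every computation can be
# expressed with NOT and AND gates alone. As an illustration, OR and XOR
# are built from these two gates (function names are mine).

def NOT(a: int) -> int:
    return 1 - a

def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    # De Morgan's law: a OR b = NOT(NOT a AND NOT b)
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a: int, b: int) -> int:
    # a XOR b = (a OR b) AND NOT(a AND b)
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, OR(a, b), XOR(a, b))
```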
In the 1960s and 1970s, computer science researchers came to an important insight regarding the inherent limitations of computers. There are certain computational tasks that are very easy to formulate (and even to check whether a suggested answer to them is correct), but which are nonetheless impossible to perform because they require too many computational steps. For example, if the number of computational steps for a problem described by an input of n bits is 2^n, then this computation will not be feasible for the most powerful computers, even when n = 100. To describe feasible (or “efficient”) computations, that is, computations that can actually be performed, an important assumption was added, according to which the number of computational steps (i.e., the number of gates) does not exceed a constant power (e.g., n^3) in the number n of bits describing the input. This definition represents an important step in the theory of computational complexity, while providing the central tool for the analysis of the computational power of computational models, algorithms, and physical computational systems.
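A back-of-the-envelope calculation (my own numbers, assuming a hypothetical machine performing 10^12 steps per second) makes the gap between 2^n and n^3 at n = 100 concrete:

```python
# Comparing an exponential-time computation with a polynomial one at n = 100,
# assuming a (hypothetical) machine performing 10**12 steps per second.
steps_exponential = 2**100   # about 1.27e30 steps
steps_polynomial = 100**3    # 1,000,000 steps

SECONDS_PER_YEAR = 3.156e7
years = steps_exponential / 1e12 / SECONDS_PER_YEAR
print(f"2^100 steps take about {years:.1e} years")           # ~4e10 years
print(f"100^3 steps take {steps_polynomial / 1e12:.1e} s")   # a microsecond
```

Even at a trillion steps per second, 2^100 steps would take tens of billions of years, while 100^3 steps are done in a microsecond.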
The main lens through which we analyze the difficulty of computation is the asymptotic lens. For most computational tasks we cannot hope for a full understanding of the number of required computational steps as a function of the number of bits of the input, but we can gain understanding (at times rough and at times just conjectural) of the computational difficulty by studying how the number of computational steps asymptotically depends on the input size. Experience gained in recent decades indicates that asymptotic analysis provides a good understanding of the practical difficulty of computational tasks.
We will now enrich the picture by adding probability. A single bit has two possible values, “zero” and “one,” and it is possible to extend the Boolean circuit model by allowing the bit to have the value “zero” with probability p and the value “one” with probability 1-p. In this way, the classical computer equipped with probabilistic bits can describe a sample from a probability space of sequences of zeros and ones. The model of probabilistic classical computers is an important model in the theory of computational complexity and it also constitutes a convenient introduction to the concept of quantum computers. Probabilistic Boolean circuits can be realized rather easily by digital computers.
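A probabilistic bit of this kind is easy to realize on a digital computer; here is a minimal sketch (the function name and parameter are my own):

```python
# A probabilistic bit: value 0 with probability p and value 1 with
# probability 1 - p, realized with a standard pseudo-random generator.
import random

def probabilistic_bit(p: float) -> int:
    return 0 if random.random() < p else 1

# A sample from a probability space of 0-1 sequences of length 10:
sample = [probabilistic_bit(0.75) for _ in range(10)]
print(sample)
```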
The model of quantum computers is a computational model based on quantum mechanics and was first introduced in the 1980s. In quantum computers, the classical bit is replaced by the basic computational element called a qubit.
The state of a single qubit is described by a unit vector in a complex two-dimensional vector space (namely, it is described by four real numbers whose sum of squares equals 1). The uncertainty principle asserts that we cannot identify the precise state of the qubit, but rather we can measure it and obtain a probabilistic bit. One way to think about it is as follows: there are two basic states for the qubit, which we denote by |0⟩ and |1⟩, while the general state of the qubit is a “superposition” of these two basic states, namely, a linear combination of the form

a|0⟩ + b|1⟩,

where a and b are complex numbers satisfying |a|² + |b|² = 1.

The state of the qubit can be measured, and when a qubit in this state is measured, we obtain a probabilistic bit with value 0 with probability |a|² and value 1 with probability |b|². One possible physical realization of the two basic states of a single qubit is with the two energy levels of a hydrogen atom, with quantum physics allowing the atom to be in a superposition of these basic states as described above.
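The measurement rule above is easy to simulate classically for a single qubit (a sketch using Python's built-in complex numbers; the equal-superposition example is my own choice):

```python
# Simulating the measurement of a single qubit in state a|0> + b|1>:
# outcome 0 with probability |a|^2, outcome 1 with probability |b|^2.
import random

def measure(a: complex, b: complex) -> int:
    p0 = abs(a) ** 2  # probability of outcome 0
    assert abs(p0 + abs(b) ** 2 - 1) < 1e-9, "state must be a unit vector"
    return 0 if random.random() < p0 else 1

# Equal superposition: a = b = 1/sqrt(2), so each outcome has probability 1/2.
amp = 1 / 2 ** 0.5
outcomes = [measure(amp, amp) for _ in range(10000)]
print(sum(outcomes) / len(outcomes))  # close to 0.5
```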
The computer acts on n qubits through “quantum gates,” which are the basic quantum computation operations. (The gates are described mathematically by “unitary operators.”) At the end of the computation process, the state of the computer is measured, and this yields a single sample from a probability distribution of 0-1 strings of length n. In this case too, the assumption is that for each efficient computation the number of gates is at most polynomial in n. The crucial fact about quantum computers is that they would allow us to efficiently achieve samples that cannot be efficiently achieved by classical computers.
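To make the notion of a gate as a unitary operator concrete, here is a sketch (in pure Python, my own illustration) applying the standard Hadamard gate to a single-qubit state vector:

```python
# A quantum gate is a unitary operator acting on the state vector.
# The Hadamard gate H sends the basis state |0> = (1, 0) to the equal
# superposition (1/sqrt(2), 1/sqrt(2)).
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    # Matrix-vector multiplication: new_state[i] = sum_j gate[i][j] * state[j]
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

state = apply_gate(H, [1.0, 0.0])
print(state)                          # [0.7071..., 0.7071...]
probs = [abs(x) ** 2 for x in state]
print(probs)                          # measuring yields 0 or 1, each with probability 1/2
```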
In the 1990s, Peter Shor discovered that quantum computers would allow the execution of a certain computational task – factoring an integer into its prime components – hundreds of orders of magnitude faster than regular computers, and would enable us to break most current cryptosystems.
Quantum Computers and Noise
Quantum systems are by nature “noisy” and unstable. The term “noise” refers to a deviation of the computer from the planned program; in the case of a quantum computer, any unwanted interaction or leakage of information leads to such a deviation. The model of a noisy quantum computer adds a component of noise to the description of the quantum computer. Noisy intermediate-scale quantum (NISQ) computers are simply noisy quantum circuits with at most (say) 200 qubits.
It was around that time that early doubts concerning the model also surfaced: quantum systems are inherently noisy, and in order to reduce the noise level, quantum computers must be isolated and protected from their environment. The key to a possible solution of the noise problem is quantum error-correcting codes, which will enable noisy quantum computers to perform the same computations conducted by the abstract noiseless model, provided that the level of noise can be reduced to below a certain threshold denoted by α.
It is a common opinion that the construction of quantum computers is possible, that the remaining challenge is mostly engineering-related, that such computers will be constructed in the next few decades, and that quantum error-correcting codes of the required quality will be developed in labs in the years to come. My standpoint is that it will be fundamentally impossible to reduce the noise level to below the required threshold. As a result, it will be impossible to develop the quantum codes required for quantum computation, nor will it be possible to reach the target of “quantum computation supremacy,” whereby a quantum computer performs computation that is extremely hard or even impossible for a classical computer.
The argument against quantum computers
My argument for the impossibility of quantum computers lies within the scope of quantum mechanics and does not deviate from its principles. In essence, the argument is based on computational complexity and its interpretation, and it is discussed in-depth in my papers which also include a discussion of general conclusions that derive from my argument and relate to quantum physics, alongside suggestions of general laws of nature that express the impossibility of quantum computation.
My argument mostly deals with understanding quantum computers on the intermediate scale (known as NISQ computers, an abbreviation of Noisy Intermediate-Scale Quantum), that is, quantum computers of up to at most several hundred qubits. It is expected that on this scale we will be able to construct quantum codes of a quality sufficient for the construction of bigger quantum computers. It is further expected that on this scale the quantum computer will achieve computations far beyond the ability of powerful classical computers, that is, will achieve quantum computational supremacy. Google’s Sycamore computer is an example of a noisy intermediate-scale quantum computer.
As specified later, it is my argument that NISQ computers cannot be controlled. Hence:
- Such systems cannot demonstrate significant quantum computational advantage.
- Such systems cannot be used for the creation of quantum error-correcting codes.
- Such systems lead to non-stationary and even chaotic distributions.
Regarding the first item, let me remind the audience that computational complexity theory provides tools for studying the computational power of models and physical computational devices. The reason NISQ computers cannot support quantum supremacy is that when we use computational complexity tools to understand the computational power of NISQ computers, we discover that they describe a very low-level computational class. This low-level computational class does not allow for any complicated computations, much less computational supremacy. My analysis draws computational conclusions for NISQ computers based on their mathematical model’s asymptotic behavior.
Regarding the second item, the reason it is impossible to build quantum error-correcting codes is that it requires an even lower noise level than that required for demonstrating quantum supremacy. The meaning of the infeasibility of quantum error-correcting codes is that even a quantum computer operating on a single qubit is inherently noisy. It is to be noted that the argument that the noise level required for error-correcting codes is lower than the level required for quantum supremacy is generally accepted by both theoreticians and experimental physicists.
Wiener Chaos and Lorenz Chaos
When I speak here of chaotic behavior, I refer to a system (deterministic, probabilistic, or quantum) that is so sensitive to its defining parameters that its behavior (or a large component of its behavior) cannot be determined, not even probabilistically. This notion is related to the mathematical theory of “noise sensitivity” (Benjamini, Kalai, and Schramm 1999) that we mentioned before, and to the related mathematical theory of “black noise” (Tsirelson and Vershik 1998). Both these theories have their early roots in Wiener’s chaos expansion (Wiener 1938). Chaos in this sense is sometimes called “Knightian uncertainty,” a term that originates in economics. The first application of the theory of noise sensitivity to quantum computers was in my 2014 paper with Guy Kindler on “Boson Sampling.” (There we use the Hermite–Fourier expansion.)
The term “chaotic system” in mathematical chaos theory (Lorenz 1963) refers to nonlinear classical deterministic systems, whose development very much depends on their initial conditions, and since these conditions are not known precisely, the system’s development in the long run cannot be predicted.
It is plausible that natural chaotic phenomena (like the weather) reflect both Lorenz chaos and Wiener chaos.
Here I thank Bernard Chazelle for raising an appealing argument for why Lorenz chaos and traditional dynamical notions of chaos are insufficient to describe chaos in nature.
A Brief Look at the Mathematical Analysis: The Fourier Expansion
Following is a brief look at a central technical tool that applies to all three components of the argument, namely, the Fourier expansion of functions. In our case we consider real-valued functions defined on sequences of zeros and ones of length n, and every such function can be written as a linear combination of a special set of functions, known as Fourier–Walsh functions. Fourier–Walsh functions can be sorted according to their degree, which is a natural number between 0 and n. Much as the regular Fourier expansion makes it possible to describe a musical sound as a combination of pure high and low tones, in the Fourier–Walsh expansion, too, the degrees can be seen as analogous to the heights of the pure tones.
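The Fourier–Walsh coefficients of a function on 0-1 sequences can be computed directly from the definition; here is a brute-force sketch (my own illustration, using the two-bit AND function as the example):

```python
# Fourier-Walsh expansion of a real-valued function on {0,1}^n.
# The character chi_S(x) = prod_{i in S} (-1)^{x_i} has degree |S|, and
# its coefficient is the average of f(x) * chi_S(x) over all 2^n inputs.
from itertools import combinations, product

def walsh_coefficient(f, n, S):
    total = 0.0
    for x in product((0, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= (-1) ** x[i]
        total += f(x) * chi
    return total / 2 ** n

n = 2
AND = lambda x: x[0] & x[1]
for k in range(n + 1):                    # degree k = 0, 1, 2
    for S in combinations(range(n), k):
        print(S, walsh_coefficient(AND, n, S))
```

For the AND function the coefficients are 1/4 at degree 0, -1/4 on each degree-1 set, and 1/4 at degree 2.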
Our first step is to study the Walsh–Fourier transform of the probability distribution of the 0-1 sequences in an ideal noiseless quantum computer, and the second step is to study the effect of the noise. The main technical component of my argument is that the noise will exponentially lower the coefficients of the higher-degree Fourier–Walsh functions. This leads to a mathematical distinction (Benjamini, Kalai, and Schramm 1999) between distributions that can be described by low-degree Fourier coefficients, referred to as “noise stable,” and distributions that are supported by high-degree Fourier coefficients, referred to as “noise sensitive.” A general distribution can have both noise-stable and noise-sensitive components. Consequently, on the one hand, the stable component of the noisy distribution is described by low Fourier–Walsh levels and hence represents an extremely low computational power; on the other hand, if the unstable part, namely, the contribution of the higher-degree Fourier–Walsh functions, remains substantial, this contribution will be chaotic.
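The exponential suppression of high-degree terms can be illustrated with a toy calculation (my own numbers): under the standard noise operator, a degree-k Fourier–Walsh coefficient is multiplied by ρ^k, where 1 − ρ plays the role of the noise level.

```python
# Toy illustration: the noise operator multiplies the degree-k
# Fourier-Walsh coefficient by rho**k, so high-degree ("noise sensitive")
# contributions decay exponentially while low-degree ("noise stable")
# contributions survive. The coefficients below are made up for illustration.
rho = 0.9
coefficients = {0: 0.25, 1: -0.25, 5: 0.30, 20: 0.30}  # degree -> coefficient

noisy = {k: rho ** k * c for k, c in coefficients.items()}
for k in sorted(noisy):
    print(f"degree {k:2d}: {coefficients[k]:+.3f} -> {noisy[k]:+.4f}")
```

With ρ = 0.9, the degree-20 coefficient shrinks by a factor 0.9^20 ≈ 0.12 while the degree-0 coefficient is untouched.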
On Classical Information and Classical Computation
The question “why does this argument fail for classical computers?” is an appropriate critique of any argument asserting that quantum computers are not possible. Regarding my argument, the answer to this question is as follows: the path to large-scale quantum computers requires building quantum error-correcting codes on NISQ systems, and my theory asserts that this is impossible. At the same time, the primitive computational power represented by NISQ systems still allows for stable classical information, and subsequently even for classical computation.
The Weak Link in My Argument
The mathematical analysis that leads to the conclusion that noisy intermediate-scale quantum computers have primitive computational power is asymptotic. That is, a mathematical model of these computers is described and analyzed when the noise level is fixed, and the number of qubits increases. The conclusion I draw from this asymptotic behavior concerns the lowest noise level engineers can reach. I argue that it would be impossible to lower the noise level in such a way that enables us to create computers whose computational ability is low in an asymptotic analysis, but very high – indeed beyond the ability of classical computers – for several dozen qubits. The move from an asymptotic argument regarding the model’s computational complexity to a concrete claim regarding engineering-related limitations of intermediate-scale computers is hardly standard, and my argument clashes with the strong intuition of experts in the field, according to whom an engineering effort would make it possible in principle (as well as in practice) to lower the noise level as much as we would like. Indeed, the multiple resources invested in building quantum computers, especially noisy intermediate-scale quantum computers, are based on the common position that there is no fundamental obstacle obstructing this effort.
Recent experimental results
In 2019, researchers from Google claimed to achieve in 300 seconds a sampling task that would require 10,000 years on a supercomputer.
However, since the Google paper appeared, the estimate of 10,000 years has been reduced to (roughly) 300 seconds by better classical algorithms.
In a joint work with Yosi Rinott and Tomer Shoham we also examine other aspects of the Google methods and claims.
In 2020, researchers from USTC, China, claimed to achieve in 300 seconds a sampling task that would require a billion years on a supercomputer.
However, my 2014 paper with Guy Kindler on “Boson Sampling” (and subsequent works) largely refute these fantastic claims.
(Picture: János Pach)
Quantum computers are among the most important scientific developments of our time. In my judgement it is a serious possibility that quantum computers are not possible. This is what I expect, and I have been studying this possibility since 2005.
Understanding noisy quantum systems and potentially even the failure of quantum computers is related to the fascinating mathematics of noise stability and noise sensitivity and its connections to the theory of computing. Exploring this avenue may have important implications to various areas of quantum physics.
Evaluating the recent fantastic experimental claims is an exciting scientific matter on its own.
For the readers of the blog let me add some references to my papers:
The argument against quantum computers
(i) G. Kalai, The argument against quantum computers, Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence (M. Hemmo and O. Shenker, eds.), pp. 399–422, Springer, 2019.
(ii) G. Kalai, The quantum computer puzzle, Notices AMS, May 2016
(iii) G. Kalai, The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims, The Intercontinental Academia Laws: Rigidity and Dynamics (M. J. Hannon and E. Z. Rabinovici, eds.), World Scientific, 2020; arXiv:2008.05188.
(iv) G. Kalai, Conjecture C still stands, arXiv, 2022.
This recent paper responds to a 2012 paper of Steve Flammia and Aram Harrow (see this post) regarding a certain “Conjecture C” discussed in my 2011 debate with Aram.
In the wider context of “mathematical computer science: theory and practice”
(v) G. Kalai, Three puzzles on mathematics, computation, and games, Proc. ICM 2018.
(vi) G. Kalai and G. Kindler, Gaussian Noise Sensitivity and BosonSampling, 2014.
Our study of Boson Sampling provides the basis for understanding general NISQ systems.
(vii) G. Kalai and G. Kindler, Concerns about recent claims of a huge quantum computational advantage via Gaussian boson sampling.
This was a discussion paper for the 2020-2021 email exchange with the USTC photonic team and other researchers regarding the boson sampling claims.
On the Google 2019 experiment
(viii) Y. Rinott, T. Shoham, and G. Kalai, Statistical Aspects of the Quantum Supremacy Demonstration, Statistical Science (2022).
(ix) G. Kalai, Y. Rinott, and T. Shoham, Google’s 2019 “Quantum Supremacy” Claims: Data, Documentation, & Discussion.
See also Sections 5 and 6 in reference (iii).
A lecture in Bandung, Indonesia, with the same title.
And my lecture in Lahore, Pakistan, is now on YouTube.
There was an interesting Facebook discussion over my (personal) statement that mathematics and academic activity is a source of solace in difficult and uncertain times.