Gil’s Collegial Quantum Supremacy Skepticism FAQ

The first 15 samples of Google’s 53-qubit flagship quantum supremacy experiment!

After the sensationally successful Scott’s Supreme Quantum Superiority FAQ and Boaz’s inferior classical inferiority FAQ, let me add my contribution explaining my current skeptical view. (I was actually asked many of the questions below.) I also recommend Lipton and Regan’s post on the Google paper.

While much of the post will be familiar, let me mention right away a new critique of the Google supremacy claim: one of the central claims in the Google experiment – that the fidelity (quality) of a complex circuit is very close to the product of the fidelities of its basic components, qubits and gates – seems very improbable, and this may cast serious doubt on the validity of the experiment and its conclusions.

Update (27/11): For quantum-computer-skeptic poetry on Twitter, see this thread by Peter Shor and this tweet by ⟨dl|yonge|mallo⟩. (See the end of the post.)

Before we start, a few links: For the amazing news on the threshold of random discrete structures, see this post. Here is my first post on the Google matter. Let me recommend the paper From Operator Algebras to Complexity Theory and Back by Thomas Vidick. It is about a problem by Boris Tsirelson (related to various deep mathematics) and about connections to quantum computation. And just fresh on the arXiv, Quantum speedups need structure by Nathan Keller and Ohad Klein, resolving the Aaronson-Ambainis Conjecture. (Update: here is a blog post on Shtetl-Optimized.) Congrats to Nathan and Ohad! (Dec 2: unfortunately a gap in the proof was found; see comment below.)

And now, let’s start.

So what is quantum supremacy? And what other things do we need to know in order to understand the claims regarding quantum computers?

Quantum supremacy is the ability of quantum computers to demonstrate computations that classical computers cannot demonstrate (or that are very, very hard for classical computers to demonstrate).

Quantum error-correcting codes are certain quantum gadgets that a quantum computer needs to create, which will serve as building blocks for larger quantum computers.

A sampling task is a task where the computer (quantum or classical) produces samples from a certain probability distribution D. Each sample is a 0-1 vector of length n.

John Martinis


What did the Google team do?

The Google team produced a sample of a few million 0-1 vectors of length 53, based on a certain “ideal” probability distribution D. They made two crucial claims regarding their sample:

A) The statistical test for how close their sample is to the ideal distribution D will give a result above t=1/10,000

B) Producing a sample with similar statistical properties will require 10,000 years on a supercomputer.

The probability distribution D depends on a quantum computation process (or, in technical jargon, a quantum circuit), denoted later by C.

What is the meaning of the statistical statement in part A)?

Google’s quantum computers (like any other current quantum computers) are very “noisy,” so what the computer produces are not samples from D but rather a noisy version, which could roughly be described as follows: a fraction t of the samples are from D and a fraction (1-t) of the samples are from the uniform distribution. The statistical test allows one to estimate the value of t, which is referred to as the fidelity.

Could they directly verify claims A) and B) ?

No, it was only possible to give indirect evidence for both these claims.

What is the logic of Google’s quantum supremacy argument?

For claim A) regarding the success of the statistical test on the sample they have two arguments:

  1. Analysis based on the fidelity of the components of the quantum computer – qubits and gates (see Formula (77) below),
  2. Experiments that support the analysis in the regime where they can be tested by a classical computer.

According to the paper, the fidelity of entire circuits agrees perfectly with the prediction of the simple mathematical Formula (77), with deviations under 10-20 percent. There are several reported experiments in the classically tractable regime, including experiments on simplified circuits (that are easier to simulate on classical computers), to support the assumption that the prediction given by Formula (77) for the fidelity applies to the 53-qubit circuit in the supremacy regime.

For claim B), regarding the classical difficulty, they rely on:

  1. Extrapolation from the running time of a specific algorithm that they use.
  2. Computational complexity support for the assertion that the task they consider is asymptotically difficult.

What are the weak points in this logic?

A main weakness of the experiment (which is crucial in my mind) is that the experimental support from the regime where the experiments can be tested by a classical computer is too sparse. Much more could have been done and should have been done.

In my opinion, a major weakness of the Google experiment is that the support from experiments in the classically tractable regime (blue in the picture) is much too sparse and is unconvincing.

Another weakness is that the arguments for classical difficulty were mainly based on the performance of a specific algorithm.

Sources: The link to the Google paper in Nature. A videotaped lecture by John Martinis at Caltech. A short Google video “demonstrating quantum supremacy”.


What is your assessment of the Google claims, Gil?

I think that the claims are incorrect. Specifically, I find the evidence for the claim “the statistical test applied to the 53-qubit sample will give a result above 1/10,000” too weak, and I expect that this claim and other related claims in the paper will not stand up to closer scrutiny and further experiments in the classically tractable regime. I also doubt the perfect proximity between predictions based on the 1- and 2-qubit fidelities and the circuit fidelity.

The Google experiment represents a very large leap in several aspects of human ability to control noisy quantum systems, and accepting its claims requires very careful evaluation of the experiments and, of course, successful replications.

The “most amazing thing” – is it real?

Do you want to tell us more about Formula (77)?

Formula (77)

Yes, thank you. Here again is Formula (77) and its explanation in the paper. The fact that the fidelity of entire circuits agrees with the prediction of the simple mathematical Formula (77) is “most amazing,” according to John Martinis (videotaped lecture at Caltech). Indeed, the deviation according to the paper is at most 10-20 percent. This perfect agreement can be seen in various other parts of the paper. The authors’ interpretation of this finding is that it validates the digital error model and shows that there are no new mechanisms for errors.

John explains the significance of Formula (77) at Caltech. Amazing big surprises are often false.

And what do you think about it, Gil?

I completely agree that this is most amazing and, as a matter of fact, there are reasons to consider the predictions based on Formula (77) as too good to be true, even if qubit and gate fidelities account for all the errors in the system. The issue is that Formula (77) itself is very sensitive to noise: it estimates the fidelity as the product of hundreds of contributions from individual qubits and gates, and fairly small errors in estimating the individual terms can have a large cumulative effect, well beyond the 10%-20% margin.
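To illustrate this sensitivity, here is a minimal numerical sketch. The gate count and error rate below are hypothetical round numbers chosen for illustration, not figures from the Google paper:

```python
# Hypothetical numbers for illustration only (not taken from the
# Google paper): a circuit with 430 two-qubit gates, each with a
# 0.6% error rate.  In the spirit of Formula (77), the predicted
# circuit fidelity is the product of the per-gate fidelities.
n_gates = 430
e = 0.006
predicted = (1 - e) ** n_gates
print(f"predicted fidelity: {predicted:.4f}")        # about 0.075

# A small *systematic* misestimation of the individual error rates
# compounds multiplicatively over hundreds of terms: here each rate
# is off by only 10% relative (0.006 estimated as 0.0066).
biased = (1 - 1.1 * e) ** n_gates
shift = 1 - biased / predicted
print(f"with a 10% bias in each rate: {biased:.4f}") # about 0.058
print(f"relative shift of the product: {shift:.0%}") # about 23%
```

With these numbers, a mere 10% systematic bias in the individual error rates shifts the predicted product by roughly 23%, already outside the 10-20% margin; independent (unbiased) estimation errors, by contrast, largely average out across the terms.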

Anyway, this matter deserves further study.


This matters because the authors consider it a major discovery, and, looking skeptically at the Google papers, it appears to be a “miracle” orthogonal to the main supremacy claim.

How to proceed

Let’s go back to your overall assessment. What could change your mind, Gil?

Here goes:


A) An independent verification of the statistical test outcomes for the experiments in the regime where the Google team classically computed the probabilities. This looks to me like a crucial step in the verification of such an important experiment.

A more difficult verification that I’d also regard as necessary at a later stage would be to independently check the probability distributions for the circuits given by the Google computation.

Further analysis and crucial improvements

B) Experiments with the quantum computers giving sufficiently many samples to understand the noisy probability distribution for circuits in the 10-25 qubit range. See my first post, this comment by Ryan O’Donnell, and this earlier one. We need to understand the noisy probability distribution produced by the Google quantum computer, the actual fidelity, and the quality of the statistical tests used by the Google team.


C) Experiments in the 10-30 qubit range on the IBM (and other) quantum computers. It is quite possible that experimentation of this kind with random quantum circuits has already been carried out.

D) Since improved classical algorithms were found by the IBM team (though analyzing the 53-qubit samples still seems practically beyond reach), Google can produce samples for 41-49 qubits for which IBM (or others) can compute the probabilities quickly and test Google’s prediction for the fidelity.


E) Success in demonstrating distance-3 and distance-5 surface codes and other quantum error-correcting codes.

So what precisely will convince you and what is the time-schedule that you expect for matters to be clarified?

A successful and convincing combination of three or more of A), B), C), D), or E) will go a long way toward convincing me. The verification part A) is important, and I don’t expect problems there: I expect that the Google claims will be verified, and I consider it very important that the data will be public and that various groups will verify the claims. This may take several months, and it certainly should take less than a year.

At present, I expect parts B)-D) will not support Google’s supremacy claims. So outcomes of experiments in the next couple of years both by the Google group and by other groups will be crucial. One direction that I do not regard, at present, as useful for strengthening the quantum supremacy claims is increasing the number of qubits of the quantum computer.

What is required for the (easy) verification stage?

(1) Right now the raw samples of Google’s sampling experiments are public. There are altogether 300 files with samples.

(2) For every circuit that they experimented with, the Google team also plans to upload the 2^n probabilities that they obtained by the Schrödinger-Feynman algorithm. This will allow verification of their statistical tests, subsequent analysis, and testing by other researchers of other algorithms for computing the same probabilities.

(3) A convenient form of the data from (2) is a file that gives, for every experiment, the probabilities that the Google team computed for the samples. (For large n these are much smaller files.)

(4) For each of the 300 experiments, the estimated fidelity that Formula (77) gave and the contribution of each qubit and gate to the right-hand side of (77).

Do you plan to take part yourself?

I plan to get involved myself with the “easy” verification and analysis of the “raw data” once it becomes available. I do expect that the statistical tests will agree with the assertions in the Google paper, and, at the same time, as I said, I think it is important that this and other aspects of the experiments be double-checked and triple-checked. This basic data already allows interesting analysis, and indeed Google’s supplementary paper describes such analysis (which the Google people kindly pointed me to) of how the data fits the theory and of their statistical tests. See Figures S32-S36, Table V, and the associated material around pages 37-40.


What did the IBM rebuttal paper show?

Recall that the Google claim is based on two assertions:

A) The statistical test applied to the sample will give a result above 1/10,000

B) Producing a sample with similar statistical properties will require 10,000 years on a supercomputer.

The IBM team described a different algorithm (on an even stronger current supercomputer) that would take only 2.5 days rather than 10,000 years.

Can the 2.5 days be further reduced?

As far as I can see, the IBM claim is about a full computation of all 2^53 probabilities. It is reasonable to think that producing a sample (or even a complete list of 2^53 probabilities) with fidelity t reduces the classical effort linearly with t. (This is the claim about the specific algorithm used by the Google team.) If this holds for the IBM algorithm, then the 2.5 days will go down to less than a minute. (This would still be a notable “quantum speedup” in terms of the number of operations.) I don’t have an opinion as to whether we should expect classical algorithms considerably better than IBM’s for computing the exact probabilities.

But the lack of enthusiasm and the skepticism of researchers from IBM about the Google paper appear to go beyond this particular point of the 2.5 computing days. Do you think that the objection by IBM people is motivated by fierce competition or envy?

No, I tend to think that there is a genuine interest by researchers who question the Google paper to understand the scientific matter and to carefully, critically, and skeptically examine Google’s claims. Google’s claims may seem, to other researchers working on quantum computers with superconducting qubits, remote from their own experimental experience, and this may give a strong reason for skepticism. It is also possible that in time people at IBM and elsewhere will change their minds and become more enthusiastic about the Google results.

IBM paper and blog post responding to Google’s announcement. (Update (Feb 24, 20) about classical simulation in this comment)


Tell us a little more about noise

Here is a nice toy model (which I think is quite realistic) for understanding what the noise is. Suppose that you run a circuit C on your quantum computer with n qubits and the ideal probability distribution is D_C. The fidelity-t noisy version of D_C will be N_C(x) = t D_C(x) + (1-t) S_C(x), where S_C(x) is the average (or a weighted average) of the values D_C(y) over vectors y in the neighborhood of x.

Here is a concrete version: we look at the expected value of D_C(y), where y is a new vector with y_i = x_i with probability p and y_i = 1 - x_i with probability 1-p, independently. We choose p so that p^n = t. There are cases where positive correlation of errors for 2-qubit gates leads to correlated errors. (This is especially relevant in the context of quantum error correction.) To add this effect to the toy noise model, replace the word “independently” by “positively correlated.”
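This toy model is easy to put on a computer. The sketch below uses hypothetical parameters (n = 10 qubits, fidelity t = 0.1) and a random exponential-weight distribution standing in for D_C; it computes the exact noisy distribution under independent bit flips and measures how far it is from the simple mixture t D_C + (1-t) U:

```python
import random

random.seed(1)
n, t = 10, 0.1
N = 2 ** n
p = t ** (1 / n)          # per-bit "keep" probability, so p**n == t

# A random exponential-weight ("Porter-Thomas-style") ideal
# distribution standing in for D_C.
w = [random.expovariate(1.0) for _ in range(N)]
s = sum(w)
D = [v / s for v in w]

# Probability that noise maps y to x as a function of the Hamming
# distance d: each bit is kept with probability p, flipped with 1-p.
flip = [p ** (n - d) * (1 - p) ** d for d in range(n + 1)]

def hamming(a, b):
    return bin(a ^ b).count("1")

# Exact noisy distribution N_C(x) = sum_y D_C(y) * Pr[y -> x].
noisy = [sum(D[y] * flip[hamming(x, y)] for y in range(N))
         for x in range(N)]

# The mixture model assumed in the paper: t*D_C + (1-t)*uniform.
mixture = [t * D[x] + (1 - t) / N for x in range(N)]

tv = 0.5 * sum(abs(a - b) for a, b in zip(noisy, mixture))
print(f"total variation distance from the mixture: {tv:.4f}")
```

The identity component of the noise has weight exactly p^n = t, so the part aligned with D_C matches the mixture model; the small but nonzero discrepancy comes from the smoothing term S_C, which is only approximately uniform.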

Your argument, Gil

Why do you think that quantum supremacy is not possible at all?

The gist of my argument against the possibility of achieving quantum supremacy by noisy intermediate-scale quantum (NISQ) computers is quite simple: “NISQ devices can’t outperform classical computers, for the simple reason that they are primitive classical computers.”

(Note the similarity to Scott Aaronson’s critique of a paper published by Nature claiming the implementation of a Shor-like algorithm on a classical device based on “p-bits.” Scott offered a very quick debunking: “‘p-bit’ devices can’t scalably outperform classical computers, for the simple reason that they are classical computers.”)

If Google’s claims are correct – does it falsify your argument?

Yes! I predict that probability distributions described (robustly) by a noisy quantum circuit represent a polynomial-time algorithm in terms of the description of the circuit. And by a polynomial-time algorithm I mean one with small degree and modest constants. The Google claim, if true, appears to falsify this prediction. (And for this you do not need quantum supremacy in the strongest form of the term.)

But is there no way that Google’s huge (or at least large) computational advantage coexists with what you say?

There is an issue that I myself am not completely sure about, regarding the possibility of chaotic behavior of quantum computers. Here is the classical analog: suppose you have n bits of memory inside a (classical) computer of m bits, and ask about the complexity of the evolution on the n bits, which may be chaotic. Of course, we cannot expect that this chaotic computer can lead to a computation that requires thousands of years on a supercomputer. But can it lead to a robust computation which is superpolynomial in n (but polynomial in m)?

I don’t know the general answer but, in any case, I don’t think that it changes the situation here. If the Google claims stand, I would regard it as a very strong argument against my theory (even if the noisy distributions themselves are not robust). In any case, the question of whether the samples in the experiments represent robust distributions or are chaotic could and should be tested. (I discussed it in this post.)

If Google’s claims do not stand, will it confirm or give a strong support to your position that quantum supremacy and quantum error correction are impossible?

Failure of the Google claim would mainly support the position that quantum supremacy and quantum error correction require a substantial improvement in the quality of qubits and gates. It would give noteworthy support to my position (and would probably draw some attention to it), but I would not regard it as decisive support. Let me mention that various specific predictions that I made can be tested on the Google, IBM, and other systems.

OK, so why do you think that the quality of qubits and gates cannot be improved?

Yes, this is the crucial point. One argument (that I already mentioned) for thinking that there is a barrier to the quality of gates and qubits is computational-theoretic: computationally speaking, NISQ devices are primitive classical computing devices, and this gives a strong reason to think that it will not be possible to reduce the error rate to the level allowing computational supremacy. But there is an additional argument: for a wide range of lower levels of noise, reducing the noise will have the effect of making the system more chaotic. So the first argument tells us that there is only a small range of error rates that we can hope to achieve, and the second argument tells us that for a large range of lower error rates all we gain is chaos!

Links to my work: Three puzzles on mathematics, computation, and games, Proc. ICM 2018; The argument against quantum computers, to appear in Itamar Pitowsky’s memorial volume; The quantum computer puzzle, Notices AMS, May 2016.

Slides from my 2019 CERN lecture. My ICM 2018 videotaped lecture.

Correlated errors and quantum error correction

People mainly refer to your conjectures about correlated errors.

Yes, this reflects my work between 2005 and 2013 (and was a central issue in my debate with Aram Harrow), and I think it is an important part of the overall picture. But this issue is different from my argument against quantum computers, which represents my work between 2014 and 2019. I think that my earlier work on error correlation is a key (or a starting point) to the question: what do we learn from the failure of quantum computers about general properties of quantum noise? Indeed, there are various consequences; some of them are fairly intuitive, some of them are counter-intuitive, and some of them are both. The basic intuition is that once your computation really makes use of a large portion of the Hilbert space, so will the error!

The major challenge is to put this intuition into formal mathematical terms and to relate it to the mathematics and physics of quantum physics.

I expressed a similar idea in a comment to Dave Bacon in 2006, when I wrote, “I believe that you may be able to approximate a rank-one matrix up to a rank-one error. I do not believe that you will be able to approximate an arbitrary matrix up to a rank one matrix,” to which Dave replied, “I will never look at rank one matrices the same 😉”. Dave Bacon is among the authors of the new Google paper.

What is the connection between the ability to achieve quantum supremacy and the ability to achieve quantum error-correction?

One of the main claims in my recent work is that quantum supremacy is an easier task than creating good-quality quantum error-correcting codes. In the attempted experiments by Google, we see a clear demonstration that achieving good-quality quantum error correction is harder than demonstrating quantum supremacy: the low-fidelity circuits that Google claims to achieve are far from sufficient for quantum error correction. The other claim in my argument is that quantum supremacy cannot be achieved without quantum error correction (and, in particular, not at all in the NISQ regime), and this claim is, of course, challenged by the Google claims.

You claim that without quantum error correction to start with we cannot reach quantum supremacy. But maybe John Martinis’ experimental methods have some seeds of quantum error correction inside them?

Maybe 🙂  See this 2017 cartoon from this post.

(Here is a nice overview video from 2014 about my stance and earlier work.)

The Google experiment: concerns and attacks

Besides the critique of the experimental evidence that could be tested, did you find some concrete issues with the Google experiment?

Perhaps even too many 🙂 . In the first post and comments I raised quite a few objections. Some of them are relevant and some of them turned out to be irrelevant or incorrect. Anyway, here, taken from my first post, are some of my concerns and attempted attacks on the Google experiment:

  1. Not enough experiments with full histograms; not enough experiments in the regime where they can be directly tested
  2. Classical supremacy argument is overstated and is based on the performance of a specific algorithm
  3. Error correlation may falsify the Google noise model
  4. Low degree Fourier coefficients may fool the statistical test
  5. (Motivated by a comment by Ryan.) It is easier to optimize toward the new statistical test (“linear cross-entropy”) than toward the old logarithmic one.
  6. “System calibration” may reflect an optimization towards the specific required circuit.
  7. The interpolation argument is unjustified (because of the calibration issue).
  8. (I forgot about this one; added Nov 27.) The paper assumes that the noisy distribution is a mixture of the correct distribution and the uniform distribution. But (as seen, e.g., in our toy model) this is not precisely the case.

We talked about items 1 and 2; what about 3-5? In particular, are correlated errors relevant to the Google experiment?

No! (As far as I can see.) Correlated errors mean that in the smoothing the flipped coordinates are positively correlated. But for a random circuit and the (Porter-Thomas) distribution D_C this makes no difference!

As for item 4, it turns out (and this was essentially known from the work of Gao and Duan) that in the case of random circuits (unlike the case of BosonSampling) there are no low-degree Fourier coefficients to fool the statistical test.

As for item 5 (and also item 8), the answer is “nice observation, but so what?” (Let me note that the supplementary paper of the Google team compares and analyzes the linear and logarithmic statistical measures.) I expect that observation 8 may play a role in analyzing the Google experiment. Dec 2: Let me add that I do expect point 8 to be important in figuring out the Google supremacy demonstration.

What about the calibration? You got a little overworked about it, no?

In almost every scientific experiment there could be concerns that there will be some sort of biased data selection toward the hoped-for result.

Based on the description of the calibration method I got the impression that part of the calibration/verification process (“system calibration”) was carried out towards the experimental outcome for a specific circuit, and that this does not improve the fidelity as the authors thought but rather mistakenly tweaked the experimental outcomes toward a specific probability distribution. This type of calibration would be a major (yet innocent) flaw in the experiment. However, this possibility was excluded by a clear assertion of the researchers regarding the nature of the calibration process, and also by a more careful reading of the paper itself by Peter Shor and Greg Kuperberg. I certainly was, for a short while, way overconfident about this theory.

One nice (and totally familiar) observation is that a blind experiment can largely eliminate the concern of biased data selection.

So how do you feel, Gil?

When did you hear about the Google claim?

There were certainly some reasons to think that Google’s quantum supremacy announcement was coming: for example, a Quanta Magazine article by Kevin Hartnett entitled “Quantum Supremacy Is Coming: Here’s What You Should Know” and another article about Neven’s double exponential law. Also, Scott Aaronson gave some hints about it.

On September 18, I met Thomas Vidick at a very nice conference of the Israeli and US academies on the future of computer science (it was mentioned in this post; links to all videos will be added here, and Naftali Tishby’s lecture is especially recommended). Thomas told me about the new expected Google paper. Later that day I got involved in an amusing Facebook discussion about related matters. (See Barry Simon’s first comment on Preskill’s post and the subsequent 15 comments.)

When I introduced Thomas to Menachem Yaari (who was the president of the Israeli Academy), describing the former as a young superstar in quantum computation, Menachem’s reaction was: “but you do not believe in quantum computers.” I replied that I believe it is a fascinating intellectual area, and that perhaps I am even wrong about them being infeasible. Thomas said: “our area needs more people like Gil.” (!)

What about Scott?

Scott and I have been on friendly terms for many years and share a lot of interests and values. We are deeply divided regarding quantum computers and, naturally, I think that I am right and that Scott is wrong. In the context of the Google paper, Scott’s references to me and my stance were peculiar and even a little hostile, which was especially strange since at that time I did not have access to the paper and Scott was the referee of the paper.

Gil, how do you envision a situation where you are proven wrong?

If my theory of quantum computation being derailed by noise inherent in quantum gates is proven wrong, then physicists will say that I am a mathematician and mathematicians will say that I am a combinatorialist.

And how do you envision a situation where you are proven right?

If my theory of quantum computation being derailed by noise inherent in quantum gates is  proven successful, then physicists will say that I am a mathematician and mathematicians will say that I am a combinatorialist. 🙂

And what would you say if your theory  prevails?

Where I have seen further than others, it is because I stood on Peter Shor’s shoulders and looked in the opposite direction. 🙂


One last thing, Gil. Nick Read just commented that experimental evidence is gradually pointing toward you being wrong on the matter of topological quantum qubits.

Nick is a great guy and topological quantum computing is a great topic. The general situation is quite simple, and it applies to topological quantum computing like any other form of quantum computing. The way I see it, gradual experimental progress will hit a barrier, and non-gradual experimental progress will be falsified.

(See this 2014 videotaped lecture of mine on topological quantum computing, and also Section 3.5 of The argument against quantum computers. Feb. ’21: a related article regarding a 2018 paper, an “editorial expression of concern,” and a new paper with an attached note.)

Appendix: The probability distribution and the statistical tests

Let’s have an Appendix question. Can you try to briefly describe the probability distribution and the statistical test used in the Google paper?

Update (Nov 27): Here is a very nice post by Greg Kuperberg on the mathematics of the probability distributions described by random quantum circuits going back to Archimedes.

Let me try. We start with the exponential probability distribution, described by the density function e^{-z} supported on \mathbb R_+. Now we consider our set X of 0-1 vectors of length n.

We draw a random probability distribution p(x) on X: p(x) = q(x)/2^n, where the value of q(x) is drawn at random from the exponential probability distribution. (A very slight normalization may still be needed.) A probability distribution of this kind on \{0,1\}^n is called a Porter-Thomas distribution.

A random quantum circuit C leads to a (deterministic) probability distribution D_C of this kind. A classical computer can compute the probabilities based on the description of the quantum circuit, but this becomes increasingly hard with n. A quantum computer can easily sample x \in \{0,1\}^n according to D_C.

We are given t samples x_1,\dots, x_t that our noisy quantum computer drew.

Our research hypothesis is that the samples are drawn from \rho D_C + (1-\rho) U, where U is the uniform distribution; \rho is called the fidelity. The null hypothesis is that the samples were drawn uniformly at random. (There is also a finer description of the noisy distribution, with a Gaussian lower-order term depending on C; this can be seen already from the noise toy model above, but I will not discuss it here.)

The main test used in the Google paper is an estimator for \rho:

\frac {2^n}{t} \sum_{i=1}^t D_C(x_i) -1.

They also considered a logarithmic version.
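To see the estimator in action, here is a minimal simulation with toy parameters (n = 12, \rho = 0.02, 200,000 samples, and a synthetic exponential-weight distribution standing in for D_C; nothing here is taken from the actual Google data):

```python
import random

random.seed(2)
n, rho, m = 12, 0.02, 200_000
N = 2 ** n

# A synthetic Porter-Thomas-style distribution standing in for D_C:
# exponential weights, normalized.
w = [random.expovariate(1.0) for _ in range(N)]
s = sum(w)
D = [v / s for v in w]

# Draw m samples from the model rho * D_C + (1 - rho) * U.
ideal = random.choices(range(N), weights=D, k=m)
samples = [x if random.random() < rho else random.randrange(N)
           for x in ideal]

# The linear estimator for rho: (2^n / m) * sum_i D_C(x_i) - 1.
est = N * sum(D[x] for x in samples) / m - 1
print(f"estimated fidelity: {est:.4f}  (true rho = {rho})")
```

For a Porter-Thomas distribution, E[2^n D_C(x)] is roughly 2 when x is drawn from D_C and exactly 1 when x is uniform, so the expectation of the estimator is close to \rho.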

The samples in the experiment (at least when n is large) are too sparse to identify the probability distribution on \{0,1\}^n. (This was one of my concerns, also endorsed by Ryan.) But once you compute the probability distribution D_C, you can study statistically the set of probabilities obtained for your sample. The Google paper offers some interesting statistical studies, and in particular a statistical analysis of the set of values \{D_C(x_i): i=1,\dots,t\}.

A poem by Peter Shor

A Poem for Quantum Computing Skeptics

Quantum computers may at first sight seem
To be impossible. How dare we dream
Of solving problems that else would take more time
Than has passed since the cosmos’s Big Bang!

But is there any reason to believe
The universe computes the way we do?
Nature is profligate with galaxies.
Why shouldn’t her extravagance hold true
For other things? Why mightn’t she achieve
Prodigious feats of computation, too?

Updates (May 2020): I wrote a new paper (first posted in this post), “The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims.” (August 20: slightly revised version.) Remarks are most welcome. Another paper, on statistical aspects of Google’s experiment, with Yosi Rinott, is close to completion. (July 2020: slides for a mid-July talk at USTLC.)

Regarding the classical difficulty of Google’s supremacy task: there are various interesting further papers suggesting that the task is not as classically difficult as originally believed. Y. Zhou, E. M. Stoudenmire, and X. Waintal, What limits the simulation of quantum computers?, arXiv:2002.07730; Efficient classical simulation of random shallow 2D quantum circuits, by John Napp, Rolando L. La Placa, Alexander M. Dalzell, Fernando G. S. L. Brandao, and Aram W. Harrow; and Spoofing Linear Cross-Entropy Benchmarking in Shallow Quantum Circuits, by Boaz Barak, Chi-Ning Chou, and Xun Gao. See this nice post by Scott Aaronson mentioning the last two (and other striking papers).

Speaking of Scott, there was another blog post he wrote earlier, in mid-February: “My ‘Quantum Supremacy: Skeptics Were Wrong’ 2020 World Speaking Tour.” The original title mentioned me directly, and indeed the videotaped lectures of Scott’s talks refer at some length, in a stand-up style, to skeptics and mainly to me. Here is the lecture at Harvard (see especially 39:45-42:53), and here the one at Princeton (see especially 28:00-32:48). Scott’s world speaking tour was apparently interrupted by the recent pandemic.

Another impressive paper on classical simulation is Classical Simulation of Quantum Supremacy Circuits by a team from Alibaba:  Cupjin Huang, Fang Zhang, Michael Newman, Junjie Cai, Xun Gao, Zhengxiong Tian, Junyin Wu, Haihong Xu, Huanjun Yu, Bo Yuan, Mario Szegedy, Yaoyun Shi, Jianxin Chen.

In the post above I criticized the Google team for not having enough experiments (and long enough samples) in the classically tractable regime.  Another critique is that they did not perform experiments in the range between 43 and 52 qubits, so it is still difficult for the improving classical algorithms to test the outcomes directly. (Update, March 2021: I was incorrect here; actually, there is data in this range as well.)  Of course, more experimental data from the Google chip in both regimes will be welcome and will allow testing the data via the improved classical algorithms.

Let me refer also to my Dec 2019 post describing my spontaneous lecture on Google’s supremacy demo and the nice panel we had on the topic later the same afternoon. (I discovered that my lecture was also videotaped, so I may add a link when it becomes available.) Here is a more recent panel discussion at the Simons Institute.

Update (August 2020): Here is my paper with Yosi Rinott and Tomer Shoham, Statistical aspects of the quantum supremacy demonstration (slightly revised version).

This entry was posted in Combinatorics, Physics, Quantum. Bookmark the permalink.

26 Responses to Gil’s Collegial Quantum Supremacy Skepticism FAQ

  1. Dear Gil Kalai,

    First of all, I would like to thank you for your work and for sharing your thoughts about quantum computing and noise sensitivity. I find it extremely stimulating and interesting.

    I should however also probably start by saying that I don’t fully grasp why quantum computational supremacy could be proven impossible, given the definition that has been coined by John Preskill.
    Indeed it seems to me that reaching quantum supremacy depends strongly both on quantum and on classical computing technologies, and may therefore not be captured solely by reasoning about asymptotics, unless we believe (but can we prove such a thing?) that technology has reached some asymptotic limit. For example, if today’s classical computers were running on vacuum tubes, we could certainly be confident that (today’s) Google’s quantum computer would be computationally more powerful than any such classical computer.

    Now coming back to Google’s paper, I also have some doubts about the central claim, i.e.- as you also have pointed out- about the validity of the claim that for n=53 (number of qubits) and m=20 (circuit depth) the experimental data (that Google has released but could not verify) actually corresponds to F_XEB ~ 0.002 (and not to F_XEB ~ 0).

    The main reason for this doubt is that the F_XEB ~ 0.002 value is inferred from measurements that have been taken in an experimental regime that significantly differs from the targeted regime assumed to be hard to simulate classically, namely:
    {n=53, m=20, and « supremacy » circuit}.

    Instead, experimental F_XEB data has been taken:
    a) for smaller values of n (typically for n ≤ 43)
    b) for smaller circuit depth (14 and not 20)
    c) for different circuits

    a) and b) may be justified (however only to some extent) by the need to be able to rapidly run simulations of the RCS experiments on a classical computer, in order to actually compute F_XEB.

    The discrepancy of circuits, i.e. c), seems however more problematic:

    There are four circuits variants (cf SI tables III and IV): “patch”, “elided”, “full”, “supremacy”.
    The claimed F_XEB ~ 0.002 for {n=53, m=20, circuit=supremacy} is inferred from elided (green) F_XEB data. This inference is justified in the article by the fact that “full” and “elided” circuit have similar behaviour for n≤38 and m=14 (Fig S24).

    This inference hence raises at least two methodological concerns:

    1) Full and elided experimental F_XEB data are only given for m=14. This does not seem to be justified by computational limits and hence casts some doubts on the experimentally achievable F_XEB for m=20.

    2) Much more problematic: no experimental data is presented to justify that « full » and «supremacy » circuits have similar F_XEB behaviour.

    « Supremacy » circuits are indeed different from « full » circuits: their two-qubit gates are re-scheduled, in order to minimize entanglement. This scheduling difference is also likely to affect (reduce) the noise, and therefore improve the F_XEB.

    This hence questions the experimentally achievable F_XEB for the « supremacy » circuit. In particular, F_XEB could be significantly lower for the “supremacy” circuit than for the « full » and « elided » circuits, which would jeopardize the validity of the F_XEB inference made in the article. We can moreover note that some experimental measurements of F_XEB for the supremacy circuit in the classically verifiable region should be doable.

    • Gil Kalai says:

      Thanks for the comment, Romain.

      Just a general remark about comments – in view of the delicate issue regarding Formula (77), comments will now be moderated.

      The notion of quantum supremacy is indeed somewhat elusive, but I think it is nevertheless a useful notion. (Some people understandably don’t like the term `quantum supremacy’.) It is certainly more elusive (and open to false positive claims) than the ability to create good quantum error-correcting codes. In addition to its own importance, demonstrations of quantum supremacy of various qualities are an intermediate goal toward creating good quantum error-correcting codes. For example, achieving high-fidelity states on 100+ qubits, which is what is needed for a distance-5 surface code, would allow a level of supremacy that would be hard to dispute.

      I certainly agree that one weakness of the Google experiment is that the support from experiments in the classically tractable regime is much too sparse and is therefore unconvincing.

      • Peter Shor says:

        What is the delicate issue regarding formula (77)? As far as I can tell from reading the paper, (77) is based on a hand-waving plausibility argument and on experimental evidence, and nobody is claiming it is rigorous. And like all experimental findings in computational complexity, it is not to be entirely trusted: it might break down at higher qubit numbers (I doubt it), and might break down for non-random circuits (I think this is likely). Finally it seems to indicate that your hypothesis about there being unavoidable noise at a rate too high for fault tolerance to work is wrong. I don’t believe any of these statements is controversial, so what delicate issue are you referring to?

        The real question remains whether quantum fault tolerance works, and how well, which I believe is one of the next items on the experimentalists’ list.

      • Gil Kalai says:

        Hi Peter, why do you think that the fidelity formula (77) will break down for non-random circuits? In any case, since the researchers themselves regard the quality of the prediction of (77) as most amazing and considered this discovery a big surprise, and since my own intuition is also that it is too good to be true, this is one aspect of the experiments that we should carefully think about and check. Having said that, let me add that the Google team has a statistical argument that justifies the remarkable performance of (77). (If there are problems with (77), this may be delicate.)

        “Finally it seems to indicate that your hypothesis about there being unavoidable noise at a rate too high for fault tolerance to work is wrong.”

        Right, as I said in the post, if the Google claims are correct then my argument that there is unavoidable noise at a rate too high for fault tolerance to work is wrong. Even by formula (77), you will need better (even substantially better) gate performance for good-quality quantum error-correcting codes, and certainly for quantum fault tolerance. But my argument that this is impossible is based on the claim that even achieving the noise rate needed for quantum supremacy is impossible.

        Nice poem!

      • Peter Shor says:

        Why do I think it will break down? If there’s no correlation between the different gates (like there is in a random circuit), then it seems quite reasonable there won’t be any correlation between the errors, which is how you get formula (77). If there’s correlation, it seems less reasonable to me, although I have to admit that this is a very hand-wavy argument.

      • Gil Kalai says:

        Hi Peter, I don’t understand your claim that for random circuits there is “no correlation between the gates” and for other circuits “there are correlation between the gates.” Also the paper shows evidence that formula (77) works under various variations of random circuits with different computation complexity and entanglement behavior. Indeed correlation is one reason to doubt formula (77) and also systematic errors may have large effect.

        In my opinion, the general context to think about Formula (77) is not necessarily “experimental findings in computational complexity” but rather in theory and practice of reliability theory. A crucial question is also if other implementations of quantum computers (like IBM’s) exhibit a similar behavior.

        (A small correction to my earlier remark. There is also some possibility that the computational task exhibited by the Google experiment is much easier computationally than what we expect at present. But, my intuition is that this is not the issue with the supremacy claim.)

  2. Daniel Bilar says:

    Hi Prof Kalai

    Thanks for the efforts keeping us updated, and in language accessible to lay people. This has been v helpful to me and other interested watchers.

    Thanks for Vidick Nov 2019 “Operator Algebras to Complexity Theory and Back”

    For a bit of background for “Conjectured complexity-theoretic counterpoint to Tsirelson’s problem: The class MIP∗ contains undecidable problems.” IMHO readers may benefit from:

    0) Vidick (talks, videos) on NEEXP ⊆ MIP∗
    Re “NEEXP stands for nondeterministic doubly exponential time [..] by querying two provers sharing entanglement, a polynomial-time verifier has the ability to verify the validity of a proof of doubly exponential length”
    See proof systems w entangled provers >= exponentially more powerful than classical provers; Vidick: “Randomness & interaction brought us 1 exponential; entanglement gives us another”

    1) Grilo (paper) NONHALT ∈ PZK-MIP*_(1,1)[4, 1]
    Re “the complexity-theoretic conjecture states that the class MIP∗ of problems that can be decided by a polynomial-time verifier interacting with quantum provers sharing entanglement contains undecidable languages”
    Grilo: “zero knowledge proof for the (non)halting problem (albeit with vanishing promise gap) ..every non-local game (including the NEEXP one!) can be converted into another game where the referee “learns nothing””
    and NEXP ⊆ ZK-MIP* : spatial isolation suffices to unconditionally achieve zero knowledge even in presence of quantum entanglement

    Thanks for your clear expositions, appreciated for a long time. An early kernel for me was 12–13 years ago, when I attended a small CS colloquium at BU. L Levin was in the audience and I cornered him and discussed his “Polynomial Time and Extravagant Models”, sec. 2 The Tale of One-Way Functions in Probl. Inf. Transm. 39(1), 2003.

  3. Craig says:

    Gil, I recently discovered a paper by Draper “Addition on a quantum computer” that was talked about on Scott Aaronson’s blog.

    If a quantum computer of say 100 qubits can perform this algorithm of adding two numbers and get the right answer, then I would be convinced that quantum computing works, even though I could do the same thing easily on a classical computer. The reason I would be convinced is that the algorithm uses quantum Fourier transforms, which are also used in Shor’s algorithm and seem to be the main trick that gives Shor’s algorithm its power to factor integers.

    Would you be convinced?

  4. gentzen says:

    Thomas said: “our area needs more people like Gil.” (!)

    He might be right in the long term, when the field reaches the point where gradual experimental progress hits a barrier and non-gradual experimental progress gets falsified on a regular basis.

    But at the current moment, the field progresses nicely, so why unnecessarily disrupt that progress? OK, if non-gradual experimental progress calls for attempts to falsify it, people willing to invest their time and standing into those efforts might be needed indeed.

    I personally am unwilling to invest my non-existing standing into those efforts. I am unsure how much time I should invest. I seem to be able to read such stuff, and understand some bits of what is said. I recently gave the following example of my ambivalent position:

    15 mK * k_B / h = 312.5 MHz, where k_B is the Boltzmann constant, h is the Planck constant, and 15 mK is the temperature of 15 millikelvin at which Sycamore was operated. Assume I would try to explain why those 312.5 MHz are a lower bound on how fast the quantum bits must be operated (or at least controlled) for extended quantum computations. The main paper contains sentences like: “We execute single-qubit gates by driving 25-ns microwave pulses resonant with the qubit frequency while the qubit–qubit coupling is turned off.” or “We perform two-qubit iSWAP-like entangling gates by bringing neighbouring qubits on-resonance and turning on a 20-MHz coupling for 12 ns, which allows the qubits to swap excitations.” So on the surface, it looks like my claim must be false. There are also sentences like “In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GHz) for complete control of the quantum processor.” in the main paper or “The microwave AWG provides signals with arbitrary spectral content within ±350 MHz of the local oscillator (LO)” in the supplementary material. So maybe there might still be a grain of truth in my claim, but my claim would have to be explained well enough to make sense and allow agreement.
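As a sanity check of the arithmetic above (this snippet is only illustrative and not part of the original comment), converting 15 mK to a frequency via f = k_B · T / h does indeed give roughly 312.5 MHz:

```python
import math

# Convert Sycamore's operating temperature (15 mK) to a frequency
# scale via E = k_B * T and E = h * f, i.e. f = k_B * T / h.
k_B = 1.380649e-23   # Boltzmann constant, J/K (CODATA)
h = 6.62607015e-34   # Planck constant, J*s (CODATA)
T = 0.015            # 15 mK, in kelvin

f = k_B * T / h      # thermal-energy scale expressed as a frequency, in Hz
print(round(f / 1e6, 1))  # -> 312.5 (MHz)
```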

    Surprisingly, the actual trigger for me to write such stuff was a recent paper by Aram Harrow. (And that fact in the end tipped the scale to post this seemingly unrelated comment below your post.) In Classical algorithms, correlation decay, and complex zeros of partition functions of quantum many-body systems by Aram Harrow, Saeed Mehraban, and Mehdi Soleimanifar

    a quasi-polynomial time classical algorithm that estimates the partition function of quantum many-body systems at temperatures above the thermal phase transition point

    is presented. Ever since I read Lienhard Pagel’s book, my expectation was that quantum computation might be possible, but only at extremely low temperatures. And Aram Harrow’s paper seems to somehow reinforce such expectations.

    My own explanation is that the difference between the energy levels of your states must be sufficiently bigger than 15 mK * k_B to be safe from random thermal state flips, and those differences in energy levels lead to phase shifts that would become too big if you couldn’t control your qubits faster (or at least better) than 312.5 MHz.

    Lienhard Pagel’s explanation

    What are the advantages of quantum computing? First, some technical advantages should be mentioned:

    3. Quantum systems operating at room temperature must prevail over the thermal noise of the environment. As shown in Figure 4.4, only relatively high-energy quanta are possible at room temperature because they would otherwise be destroyed by the thermal energy of the environment. Because of ΔEΔt ≈ h, these systems can only be fast, even as electronic systems.

    To me, Pagel’s explanation is both disappointing and fascinating at the same time. The explanation itself is not very different from my sketched explanation, only less concrete. (In a certain sense, Pagel’s explanation is stretched out over different sections and examples, which makes it hard for me to reproduce it here.) What is fascinating is that Pagel considers his claim to be an obvious advantage of quantum computing (that doesn’t need serious justification), rather than as a serious technical obstacle for practical implementation of quantum computing (that might not apply to all possible implementations).

  5. Pingback: Not Quite Getting Up to Speed - Coder Buzz

  6. Gil Kalai says:

    The following somewhat simplified version of formula (77) is already supposed to give a pretty good approximation for the fidelity:

    F \approx \exp(-e_1 g_1 - e_2 g_2 - e_r n)

    Here, n is the number of qubits, g_1 is the number of 1-qubit gates, g_2 is the number of 2-qubit gates, and e_1, e_2, e_r are the error rates for 1-qubit gates, 2-qubit gates, and readout, respectively. For the full circuit with m layers, g_1 = n(m+1) and g_2 = nm/2. (For the simplified “elided” and “patch” circuits the number of gates is somewhat different.)

    The formula expresses independence of the fidelity from computational complexity and from entanglement. (This assertion actually agrees with my own intuition: for example, I don’t expect a difference between noisy boson sampling and noisy fermion sampling which in the noiseless case are very different computationally. I expect strong effect of entanglement on error correlation but not on the fidelity.)
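To illustrate the estimate (this sketch and its error rates are mine, not the commenter’s; the rates below are merely of the order reported in the Google paper, not the official values), one can plug the gate counts for the full circuit into the simplified formula:

```python
import math

def estimated_fidelity(n, m, e1, e2, er):
    """Simplified product-of-fidelities estimate in exponential form.

    n  : number of qubits
    m  : number of layers (cycles)
    e1 : error rate per 1-qubit gate
    e2 : error rate per 2-qubit gate
    er : readout error rate per qubit
    """
    g1 = n * (m + 1)   # number of 1-qubit gates in the full circuit
    g2 = n * m // 2    # number of 2-qubit gates in the full circuit
    return math.exp(-e1 * g1 - e2 * g2 - er * n)

# Hypothetical error rates of roughly the right order of magnitude:
F = estimated_fidelity(n=53, m=20, e1=0.0016, e2=0.0062, er=0.038)
print(F)  # a small number of order 1e-3, comparable to the claimed F_XEB
```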

  7. Pingback: The story of Poincaré and his friend the baker | Combinatorics and more

  8. Pingback: TYI 41: How many steps does it take for a simple random walk on the discrete cube to reach the uniform distribution? | Combinatorics and more

  9. Gil Kalai says:

    Piet Hein wrote “Problems worthy of attack prove their worth by fighting back” and unfortunately this is now the case for the Aaronson-Ambainis Conjecture – a serious flaw in the proof was found by Paata Ivanishvili. Nathan and Ohad have withdrawn the paper but they are still working to fix the flaw.

  10. Pingback: Quantum DNA sequencing & the ultimate hardness hypothesis – Theory Dish

  11. Gil Kalai says:

    Regarding the IBM/Google competition and debate. (Three quotes and a brief opinion.)

    From the IBM blog posts: “We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.”  

    From the Google paper: “One may wonder to what extent algorithmic innovation can enhance classical simulations. Our assumption, based on insights from complexity theory, is that the cost of this algorithmic task is exponential in circuit size. Indeed, simulation methods have improved steadily over the past few years. We expect that lower simulation costs than reported here will eventually be achieved, but we also expect that they will be consistently outpaced by hardware improvements on larger quantum processors.”

    Scott’s assessment (Comment #52 this post):  “From what I know, there are two reasons why Google “won”:
    (1) The Martinis group has apparently been a little bit ahead of IBM in terms of the circuit fidelities they can achieve. (Having said this, I don’t actually know the details and would love to hear from experts.)

    (2) Even more important, the Google group decided years ago that clearly demonstrating quantum supremacy was going to be a central goal, whereas IBM decided that … well, you can read their recent blog post if you want to understand their perspective. It wasn’t what they chose to focus on. As a result, Google crossed a major finish line that its closest competitor in superconducting qubits knew about, but was somehow barely even racing toward!” 

    In my view the situation is different. Crucial questions are how good the circuit fidelities of the IBM machines are, and how well one can estimate the fidelity of states constructed by the IBM machines based on the qubit and gate fidelities (formula (77)). If the Martinis group is “a little bit ahead” as Scott describes, or even “considerably ahead”, then this may explain why Google “won”. But if the circuit fidelities by Google and their estimation (77) based on the basic components are drastically or fantastically ahead (even for 20-30 qubits), then this may indicate a serious problem with the Google experiments.

  12. Pingback: The Google Quantum Supremacy Demo. | Combinatorics and more

  13. Gil Kalai says:

    Let me point to a very interesting paper on the arXiv, “What limits the simulation of quantum computers?” by Yiqing Zhou, E. Miles Stoudenmire, and Xavier Waintal.

    One result that I found interesting is that there is a sort of threshold \epsilon_0 for the error rate: the computational complexity is polynomial for higher error rates and exponential in \epsilon_0/\epsilon for lower rates \epsilon.
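Schematically, the claimed scaling can be sketched as follows (this is only an illustration of the threshold behavior; the threshold value and the polynomial factor below are hypothetical, not taken from the paper):

```python
import math

def simulation_cost(eps, eps0=0.08, n=54):
    """Schematic cost model: polynomial in n above the threshold eps0,
    exponential in eps0/eps below it. Illustrative only; eps0 and the
    n**3 factor are placeholders, not the paper's actual cost model."""
    if eps >= eps0:
        return float(n) ** 3                       # polynomial regime
    return float(n) ** 3 * math.exp(eps0 / eps)    # exponential blow-up as eps -> 0

# The cost grows rapidly as the error rate drops below the threshold:
print(simulation_cost(0.1) < simulation_cost(0.01) < simulation_cost(0.001))
```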

    The paper gives a quick classical simulation of a certain 54-qubit, 20-layer circuit with fidelity 2/1000, in a running time only 1-2 orders of magnitude above Google’s reported running time. The circuit is simpler than the Google circuit, so the promise of quicker classical simulation of the Google circuit is not clear. It certainly gives new ideas for classically “attacking” the Google circuit. (Aaronson claims on SO that there were other classical simulations with similar performance.)

    This new result gives some support to the feeling (that I expressed in the post) that the IBM algorithm can be improved by several orders of magnitude if, instead of a perfect running of the quantum circuit, the target is only to sample with fidelity 0.002. Perhaps the improvement is more than linear in 1/fidelity.

    This may also be related to a theoretical suggestion I raised in some of my papers about the limitation of cooling for classes of quantum states.

  14. Pingback: 구글의 양자컴퓨터 vs 고전 시뮬레이션, 근황 – Pseudorandom Things

  18. Pingback: The Argument Against Quantum Computers – A Very Short Introduction | Combinatorics and more

  19. Pingback: Is HQCA Possible? A conversation with Michael Brooks | Combinatorics and more

  20. Pingback: Ordinary computers can beat Google’s quantum computer after all | Combinatorics and more

  21. Pingback: Greatest Hits 2015-2022, Part II | Combinatorics and more
