The first 15 samples of Google’s 53-qubit flagship quantum supremacy experiment!
After Scott’s sensationally successful Supreme Quantum Superiority FAQ and Boaz’s inferior classical inferiority FAQ, let me add my own contribution explaining my current skeptical view. (I was actually asked many of the questions below.) I also recommend Lipton and Regan’s post on the Google paper.
While much of the post will be familiar, let me mention right away a new critique of the Google supremacy claim: one of the central claims in the Google experiment – that the fidelity (quality) of a complex circuit is very close to the product of the fidelities of its basic components, qubits and gates – seems very improbable, and this may cast serious doubt on the validity of the experiment and on its conclusions.
Update (27/11): For quantum computer skeptics’ poetry on Twitter, see this thread by Peter Shor and this tweet by ⟨dl|yonge|mallo⟩. (See the end of the post.)
Before we start, a few links: for the amazing news on the threshold of random discrete structures, see this post. Here is my first post on the Google matter. Let me recommend the paper From Operator Algebras to Complexity Theory and Back by Thomas Vidick. It is about a problem by Boris Tsirelson (related to various deep mathematics) and about connections to quantum computation. And, just fresh on the arXiv, Quantum speedups need structure by Nathan Keller and Ohad Klein, resolving the Aaronson-Ambainis conjecture. (Update: here is a blog post on Shtetl-Optimized.) Congrats to Nathan and Ohad! (Dec 2: unfortunately a gap in the proof was found; see the comment below.)
And now, let’s start.
So what is quantum supremacy? And what other things do we need to know in order to understand the claims regarding quantum computers?
Quantum supremacy is the ability of quantum computers to perform computations that classical computers cannot perform (or that are very, very hard for classical computers to perform).
Quantum error-correcting codes are quantum gadgets that a quantum computer needs to create; they serve as building blocks for larger quantum computers.
A sampling task is a task where the computer (quantum or classical) produces samples from a certain probability distribution D. Each sample is a 0-1 vector of length n.
What did the Google team do?
The Google team produced a sample of a few million 0-1 vectors of length 53, based on a certain “ideal” probability distribution D. They made two crucial claims regarding their sample:
A) The statistical test for how close their sample is to the ideal distribution D will give a result above t = 1/10,000.
B) Producing a sample with a similar statistical property would require 10,000 years on a supercomputer.
The probability distribution D depends on a quantum computation process (or, in technical jargon, a quantum circuit) denoted later by C.
What is the meaning of the statistical statement in part A)?
Google’s quantum computers (like any other current quantum computers) are very “noisy,” so what the computer produces are not samples from D but rather a noisy version, which could roughly be described as follows: a fraction t of the samples are from D and a fraction (1-t) of the samples are from the uniform distribution. The statistical test allows one to estimate the value of t, which is referred to as the fidelity.
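To make this concrete, here is a minimal sketch (my own toy illustration, not the Google team’s code; the number of qubits, the sample size, and the fidelity value are made up) of the noise model just described and of the linear cross-entropy estimate of t:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 12                       # toy number of qubits (Google used 53)
N = 2 ** n                   # number of possible 0-1 vectors
m = 200_000                  # number of samples

# A toy "ideal" distribution D; for random circuits the ideal output
# probabilities are approximately Porter-Thomas (exponentially) distributed.
p = rng.exponential(scale=1.0 / N, size=N)
p /= p.sum()

# Noisy samples: a fraction t from D and a fraction (1 - t) uniform.
t_true = 0.05
from_D = rng.random(m) < t_true
samples = np.where(from_D,
                   rng.choice(N, size=m, p=p),
                   rng.integers(0, N, size=m))

# Linear cross-entropy estimate of the fidelity:
# t_hat = 2^n * (average ideal probability of the observed samples) - 1.
t_hat = N * p[samples].mean() - 1
print(f"true t = {t_true}, estimated t = {t_hat:.4f}")
```

Under the Porter-Thomas assumption this estimator averages to t, which is the sense in which the statistical test “estimates the fidelity.”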
Could they directly verify claims A) and B)?
No, it was only possible to give indirect evidence for both these claims.
What is the logic of Google’s quantum supremacy argument?
For claim A) regarding the success of the statistical test on the sample they have two arguments:
- Analysis based on the fidelity of the components of the quantum computer – qubits and gates (see Formula (77), discussed below),
- Experiments that support the analysis in the regime where they can be tested by a classical computer.
According to the paper, the fidelity of entire circuits agrees perfectly with the prediction of the simple mathematical formula (77), with deviations under 10-20 percent. There are several reported experiments in the classically tractable regime, including on simplified circuits (that are easier to simulate on classical computers), to support the assumption that the prediction given by formula (77) for the fidelity applies to the 53-qubit circuit in the supremacy regime.
For claim B), regarding the classical difficulty, they rely on:
- Extrapolation from the running time of a specific algorithm that they use.
- Computational complexity support for the assertion that the task they consider is asymptotically difficult.
What are the weak points in this logic?
A main weakness of the experiment (which is crucial in my mind) is that the experimental support from the regime where the experiments can be tested by a classical computer (blue in the picture) is much too sparse and is unconvincing. Much more could have been done and should have been done.
Another weakness is that the arguments for classical difficulty were mainly based on the performance of a specific algorithm.
Sources: The link to the Google paper in Nature. A videotaped lecture by John Martinis at Caltech. A short Google video “demonstrating quantum supremacy”.
What is your assessment of the Google claims, Gil?
I think that the claims are incorrect. Specifically, I find the evidence for the claim “the statistical test applied to the 53-qubit sample will give a result above 1/10,000” too weak, and I expect that this claim and other related claims in the paper will not stand after closer scrutiny and further experiments in the classically tractable regime. I also doubt the perfect proximity between the predictions based on the 1- and 2-qubit fidelities and the circuit fidelity.
The Google experiment represents a very large leap in several aspects of human ability to control noisy quantum systems, and accepting its claims requires very careful evaluation of the experiments and, of course, successful replications.
The “most amazing thing” – is it real?
Do you want to tell us more about Formula (77)?
Yes, thank you. Here again is Formula (77) and its explanation in the paper. The fact that the fidelity of entire circuits agrees with the prediction of the simple mathematical Formula (77) is “most amazing” according to John Martinis (videotaped lecture at Caltech). Indeed, the deviation according to the paper is at most 10-20 percent. This perfect agreement can be seen in various other parts of the paper. The authors’ interpretation of this finding is that it validates the digital error model and shows that there are no new mechanisms for errors.
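The formula does not reproduce well in this post, so let me transcribe its rough form (my transcription; I may be slightly off on the exact notation used in the supplementary material): the fidelity of the whole circuit is modeled as a product over all components,

F = \prod_{g \in G_1} (1 - e_g) \cdot \prod_{g \in G_2} (1 - e_g) \cdot \prod_{q \in Q} (1 - e_q),

where G_1 and G_2 are the sets of single- and two-qubit gates, Q is the set of measured qubits, and the e’s are the individually measured error rates.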
John explains the significance of Formula (77) at Caltech. Amazing big surprises are often false.
And what do you think about it, Gil?
I completely agree that this is most amazing and, as a matter of fact, there are reasons to consider the predictions based on Formula (77) as too good to be true even if qubit and gate fidelities account for all the errors in the system. The issue is that Formula (77) itself is very sensitive to noise. The formula estimates the fidelity as the product of hundreds of contributions from individual qubits and gates. Fairly small errors in estimating the individual terms can have a large cumulative effect, well beyond the 10%-20% margin.
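Here is a toy calculation (my own, with made-up numbers) of the kind of sensitivity I have in mind: a small systematic misestimate of each individual error rate shifts the predicted product by far more than 10-20 percent.

```python
import numpy as np

# Toy setup: ~1,000 components (gates and qubits), each with an error
# rate of about 0.5%; a formula-(77)-style prediction is the product
# of (1 - e_i) over all components.
num_components = 1000
e_true = np.full(num_components, 0.005)
fidelity_true = np.prod(1 - e_true)        # roughly exp(-5), about 0.0067

# Suppose each individual error rate is systematically underestimated
# by just 10% (0.005 -> 0.0045)...
e_biased = 0.9 * e_true
fidelity_biased = np.prod(1 - e_biased)

# ...then the predicted circuit fidelity changes by about 65%.
print(f"true product:   {fidelity_true:.5f}")
print(f"biased product: {fidelity_biased:.5f}")
print(f"ratio:          {fidelity_biased / fidelity_true:.2f}")   # ~1.65
```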
Anyway, this matter deserves further study.
It deserves study because the authors consider it a major discovery, and because, when looking skeptically at the Google papers, it appears to be a “miracle” orthogonal to the main supremacy claim.
How to proceed
Let’s go back to your overall assessment. What could change your mind, Gil?
A) An independent verification of the statistical test outcomes for the experiments in the regime where the Google team classically computed the probabilities. This looks to me like a crucial step in the verification of such an important experiment.
A more difficult verification that I’d also regard as necessary at a later stage would be to independently check the probability distributions for the circuits given by the Google computation.
Further analysis and crucial improvements
B) Experiments with the quantum computers giving sufficiently many samples to understand the noisy probability distribution for circuits in the 10-25 qubit range. See my first post, this comment by Ryan O’Donnell, and this earlier one. We need to understand the noisy probability distribution produced by the Google quantum computer, the actual fidelity, and the quality of the statistical tests used by the Google team.
C) Experiments in the 10-30 qubit range on the IBM (and other) quantum computers. It is quite possible that experiments of this kind with random quantum circuits have already been carried out.
D) Since improved classical algorithms were found by the IBM team (though analyzing the 53-qubit samples still seems practically beyond reach), Google can produce samples for 41-49 qubit circuits for which IBM (or others) can compute the probabilities quickly and test Google’s prediction for the fidelity.
E) Success in demonstrating distance-3 and distance-5 surface codes and other quantum error-correcting codes.
So what precisely will convince you and what is the time-schedule that you expect for matters to be clarified?
A successful and convincing combination of three or more of A), B), C), D), or E) will go a long way toward convincing me. The verification part A) is important, and I don’t expect problems there; I expect that the Google claims will be verified, and I consider it very important that the data be public and that various groups verify the claims. This may take several months and certainly should take less than a year.
At present, I expect parts B)-D) will not support Google’s supremacy claims. So outcomes of experiments in the next couple of years both by the Google group and by other groups will be crucial. One direction that I do not regard, at present, as useful for strengthening the quantum supremacy claims is increasing the number of qubits of the quantum computer.
What is required for the (easy) verification stage?
(1) Right now the raw samples of Google’s sampling experiments are public. There are altogether 300 files with samples.
(2) For every circuit that they experimented with, the Google team also plans to upload the 2^n probabilities that they obtained by the Schrödinger-Feynman algorithm. This will allow verifying their statistical tests, making subsequent analyses, and letting other researchers test other algorithms for computing the same probabilities.
(3) A convenient form of the data from (2) is a file that gives, for every experiment, the probabilities that the Google team computed for the samples; see the sketch after this list. (For large n those are much smaller files.)
(4) For each of the 300 experiments, the estimated fidelity that formula (77) gives and the contribution of each qubit and gate to the right-hand side of (77).
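When the data of item (3) becomes available, the “easy” verification could look something like the following sketch (the file format is my own guess – one computed probability per sampled bitstring, one per line – and the file name is hypothetical):

```python
import numpy as np

def xeb_fidelity_from_file(path, n):
    """Linear cross-entropy fidelity estimate from a file that lists, for
    one experiment, the ideal probability computed for each sampled
    bitstring (hypothetical format: one number per line).
    Returns the estimate and its standard error."""
    probs = np.loadtxt(path)
    scores = (2 ** n) * probs - 1          # per-sample contributions
    return scores.mean(), scores.std(ddof=1) / np.sqrt(len(scores))

# Hypothetical usage for one of the 53-qubit supremacy circuits:
# t_hat, se = xeb_fidelity_from_file("circuit_53q_sample_probs.txt", n=53)
# print(t_hat, "+/-", se)   # to be compared with the formula (77) prediction
```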
Do you plan to take part yourself?
I plan to get involved myself with the “easy” verification and analysis of the “raw data” once they become available. I do expect that the statistical tests will agree with the assertions in the Google paper, and, at the same time, as I said, I think it is important that this and other aspects of the experiments be double-checked and triple-checked. This basic data already allows interesting analysis, and indeed Google’s supplementary paper describes such analysis (which the Google people kindly pointed me to) on how the data fits the theory and on their statistical tests. See Figures S32-S36, Table V, and the associated material around pages 37-40.
What did the IBM rebuttal paper show?
Recall that the Google claim is based on two assertions:
A) The statistical test applied to the sample will give a result above 1/10,000.
B) Producing a sample with a similar statistical property would require 10,000 years on a supercomputer.
The IBM team described a different algorithm (on an even stronger current supercomputer) that would take only 2.5 days rather than 10,000 years.
Can the 2.5 days be further reduced?
As far as I can see, the IBM claim is about a full computation of all the 2^53 probabilities. It is reasonable to think that producing a sample (or even a complete list of 2^53 probabilities) with fidelity t reduces the classical effort linearly with t. (This is the claim about the specific algorithm used by the Google team.) If this holds for the IBM algorithm, then the 2.5 days will go down to less than a minute. (This will still be a notable “quantum speedup” in terms of the number of operations.) I don’t have an opinion as to whether we should expect considerably better classical algorithms than IBM’s for computing the exact probabilities.
But the lack of enthusiasm and the skepticism of researchers from IBM about the Google paper appear to go beyond this particular point of the 2.5 computing days. Do you think that the objection by IBM people is motivated by fierce competition or envy?
No, I tend to think that there is a genuine interest by researchers who question the Google paper to understand the scientific matter, and to carefully, critically, and skeptically examine Google’s claims. Google’s claims might also seem, to other researchers working on quantum computers with superconducting qubits, remote from their own experimental experience, and this may give a strong reason for skepticism. It is also possible that in time people at IBM and elsewhere will change their minds and become more enthusiastic about the Google results.
IBM’s paper and blog post responding to Google’s announcement. (Update (Feb 24, 2020): see this comment about classical simulation.)
Tell us a little more about noise
Here is a nice toy model (which I think is quite realistic) for understanding what the noise is. Suppose that you run a circuit C on your quantum computer with n qubits and the ideal probability distribution is P(x). The fidelity-t noisy version of P will be Q(x) = t·P(x) + (1-t)·S(x), and here S(x) is the average (or weighted average) of the values P(y), where y is a vector in the neighborhood of x.
Here is a concrete version: we take Q(x) to be the expected value of P(y), where y is a new vector with y_i = x_i with probability 1-p and y_i = 1-x_i with probability p, independently over the coordinates, and we choose p so that (1-p)^n = t. There are cases where positive correlation of errors for 2-qubit gates leads to correlated errors. (This is especially relevant in the context of quantum error correction.) To add this effect to the toy noise model, replace the word “independently” by “positively correlated.”
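Here is a small sketch (mine, just to make the toy model concrete for a handful of qubits; the numbers are arbitrary) that computes the noisy distribution Q from an ideal distribution P by independent bit flips, with p chosen so that (1-p)^n = t:

```python
import numpy as np

def toy_noisy_distribution(P, n, t):
    """Toy noise model: Q(x) is the expected value of P(y), where y is
    obtained from x by flipping each coordinate independently with
    probability p, and p is chosen so that (1 - p)**n = t (so a fraction
    t of the weight stays exactly on the ideal outcome)."""
    p = 1 - t ** (1.0 / n)
    Q = np.zeros_like(P)
    for x in range(2 ** n):
        for y in range(2 ** n):
            k = bin(x ^ y).count("1")        # number of flipped coordinates
            Q[x] += P[y] * (p ** k) * ((1 - p) ** (n - k))
    return Q

# Tiny example with n = 8 toy qubits and a random "ideal" distribution.
rng = np.random.default_rng(2)
n = 8
P = rng.exponential(size=2 ** n)
P /= P.sum()
Q = toy_noisy_distribution(P, n, t=0.05)
print(Q.sum())     # ~1.0: Q is again a probability distribution
```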
Your argument, Gil
Why do you think that quantum supremacy is not possible at all?
The gist of my argument against the possibility of achieving quantum supremacy by noisy intermediate-scale quantum (NISQ) computers is quite simple: “NISQ devices can’t outperform classical computers, for the simple reason that they are primitive classical computers.”
(Note the similarity to Scott Aaronson’s critique of a paper published by Nature claiming an implementation of a Shor-like algorithm on a classical device called “p-bits”. Scott offered a very quick debunking: “‘p-bit’ devices can’t scalably outperform classical computers, for the simple reason that they are classical computers.”)
If Google’s claims are correct, does this falsify your argument?
Yes! I predict that probability distributions described (robustly) by a noisy quantum circuit represent a polynomial-time algorithm in terms of the description of the circuit. And by a polynomial-time algorithm I mean one with small degree and modest constants. The Google claim, if true, appears to falsify this prediction. (And for this you do not need quantum supremacy in the strongest sense of the term.)
But is there no way that Google’s huge (or at least large) computational advantage coexists with what you say?
There is an issue that I myself am not completely sure about regarding the possibility of chaotic behavior of quantum computers. Here is the classical analogue: suppose you have n bits of memory inside a (classical) computer of m bits, and you ask about the complexity of the evolution of the n bits, which may be chaotic. Of course, we cannot expect that this chaotic computer can lead to a computation that requires thousands of years on a supercomputer. But can it lead to a robust computation which is superpolynomial in n (but polynomial in m)?
I don’t know the general answer but, in any case, I don’t think that it changes the situation here. If the Google claims stand, I would regard them as a very strong argument against my theory (even if the noisy distributions themselves are not robust). In any case, the question of whether the samples in the experiments represent robust distributions or are chaotic could and should be tested. (I discussed it in this post.)
If Google’s claims do not stand, will it confirm or give a strong support to your position that quantum supremacy and quantum error correction are impossible?
Failure of the Google claim will mainly support the position that quantum supremacy and quantum error correction require a substantial improvement in the quality of qubits and gates. It would give noteworthy support to my position (and probably would draw some attention to it), but I would not regard it as decisive support. Let me mention that various specific predictions that I made can be tested on the Google, IBM, and other systems.
OK, so why do you think that the quality of qubits and gates cannot be improved?
Yes, this is the crucial point. One argument (that I already mentioned) for thinking that there is a barrier to the quality of gates and qubits is computational-theoretic: computationally speaking, NISQ devices are primitive classical computing devices, and this gives a strong reason to think that it will not be possible to reduce the error rate to the level allowing computational supremacy. But there is an additional argument: for a wide range of lower levels of noise, reducing the noise will have the effect of making the system more chaotic. So the first argument tells us that there is only a small range of error rates that we can hope to achieve, and the second argument tells us that for a large range of lower error rates all we gain is chaos!
Links to my work: Three puzzles on mathematics, computation, and games, Proc. ICM 2018; The argument against quantum computers, to appear in Itamar Pitowsky’s memorial volume; The quantum computer puzzle, Notices of the AMS, May 2016.
Slides from my 2019 CERN lecture. My ICM 2018 videotaped lecture.
Correlated errors and quantum error correction
People mainly refer to your conjectures about correlated errors.
Yes, this reflects my work between 2005 and 2013 (and was a central issue in my debate with Aram Harrow), and I think it is an important part of the overall picture. But this issue is different from my argument against quantum computers, which represents my work between 2014 and 2019. I think that my earlier work on error correlation is a key (or a starting point) to the question: what do we learn from the failure of quantum computers about general properties of quantum noise? Indeed there are various consequences; some of them are fairly intuitive, some of them are counter-intuitive, and some of them are both. The basic intuition is that once your computation really makes use of a large portion of the Hilbert space, so will the error!
The major challenge is to put this intuition into formal mathematical terms and to relate it to the mathematics and physics of quantum systems.
I expressed a similar idea in a comment to Dave Bacon in 2006 when I wrote “I believe that you may be able to approximate a rank-one matrix up to a rank-one error. I do not believe that you will be able to approximate an arbitrary matrix up to a rank one matrix.” to which Dave replied “I will never look at rank one matrices the same 😉”. Dave Bacon is among the authors of the new Google paper.
What is the connection between the ability to achieve quantum supremacy and the ability to achieve quantum error-correction?
One of the main claims in my recent works is that quantum supremacy is an easier task than creating good-quality quantum error-correcting codes. From the attempted experiments by Google, we see a clear demonstration that achieving good-quality quantum error correction is harder than demonstrating quantum supremacy: the low-fidelity circuits that Google claims to achieve are far from sufficient for quantum error correction. The other claim in my argument is that quantum supremacy cannot be achieved without quantum error correction (and, in particular, not at all in the NISQ regime), and this claim is, of course, challenged by the Google claims.
You claim that without quantum error correction to start with we cannot reach quantum supremacy. But maybe John Martinis’ experimental methods have some seeds of quantum error correction inside them?
Maybe 🙂 See this 2017 cartoon from this post.
(Here is a nice overview video from 2014 about my stance and earlier work.)
The Google experiment: concerns and attacks
Besides the critique regarding the experimental evidence in the testable regime, did you find some concrete issues with the Google experiment?
Perhaps even too many 🙂. In the first post and its comments I raised quite a few objections. Some of them are relevant and some of them turned out to be irrelevant or incorrect. Anyway, here, taken from my first post, are some of my concerns and attempted attacks on the Google experiment:
1. Not enough experiments with full histograms; not enough experiments in the regime where they can be directly tested.
2. The argument for classical difficulty is overstated and is based on the performance of a specific algorithm.
3. Error correlation may falsify the Google noise model.
4. Low-degree Fourier coefficients may fool the statistical test.
5. (Motivated by a comment by Ryan.) It is easier to optimize toward the new statistical test, the “linear cross-entropy” measure, compared to the old logarithmic one.
6. “System calibration” may reflect an optimization toward the specific required circuit.
7. The interpolation argument is unjustified (because of the calibration issue).
8. (I forgot about this one; added Nov 27.) The paper assumes that the noisy distribution is a mixture of the correct distribution and the uniform distribution. But (as seen, e.g., in our toy model) this is not precisely the case.
We talked about items 1 and 2; what about items 3-5? In particular, are correlated errors relevant to the Google experiment?
No! (As far as I can see.) Correlated errors mean that in the smoothing the flipped coordinates are positively correlated. But for the random circuit and the (Porter-Thomas) distribution this makes no difference!
As for item 4, it turns out (and this was essentially known from the work of Gao and Duan) that in the case of random circuits (unlike the case of Boson Sampling) there are no low-degree Fourier coefficients to fool the statistical test.
As for item 5 (and also item 8), the answer is “nice observation, but so what?” (Let me note that the supplementary paper of the Google team compares and analyzes the linear and logarithmic statistical measures.) I expect that observation 8 may play a role in analyzing the Google experiment. Dec 2: Let me add that I do expect point 8 to be important in figuring out the Google supremacy demonstration.
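For concreteness, here is how I read the two statistical measures (my own transcription with one common normalization, under which samples from the ideal Porter-Thomas distribution score about 1 and uniform samples score about 0; the exact constants in the papers may differ):

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def linear_xeb(probs, n):
    """Linear cross-entropy measure: 2^n * <p(x_i)> - 1."""
    return (2 ** n) * np.mean(probs) - 1

def log_xeb(probs, n):
    """Logarithmic cross-entropy measure: <log(2^n * p(x_i))> + gamma."""
    return np.mean(np.log((2 ** n) * probs)) + EULER_GAMMA

# Sanity check with toy Porter-Thomas probabilities: ideal samples score
# roughly 1 under both measures, uniform samples roughly 0.
rng = np.random.default_rng(3)
n = 16
N = 2 ** n
p = rng.exponential(scale=1.0 / N, size=N)
p /= p.sum()
ideal = rng.choice(N, size=100_000, p=p)
uniform = rng.integers(0, N, size=100_000)
for name, s in [("ideal", ideal), ("uniform", uniform)]:
    print(name, round(linear_xeb(p[s], n), 3), round(log_xeb(p[s], n), 3))
```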
What about the calibration? You got a little overworked about it, no?
In almost every scientific experiment there could be concerns that there will be some sort of biased data selection toward the hoped-for result.
Based on the description of the calibration method, I got the impression that part of the calibration/verification process (“system calibration”) was carried out toward the experimental outcome for a specific circuit, and that this did not improve the fidelity as the authors thought, but rather mistakenly tweaked the experimental outcomes toward a specific probability distribution. This type of calibration would be a major (yet innocent) flaw in the experiment. However, this possibility was excluded by a clear assertion of the researchers regarding the nature of the calibration process, and also by a more careful reading of the paper itself by Peter Shor and Greg Kuperberg. I certainly was, for a short while, way overconfident about this theory.
One nice (and totally familiar) observation is that a blind experiment can largely eliminate the concern of biased data selection.
So how do you feel, Gil?
When did you hear about the Google claim?
There were certainly some reasons to think that Google’s quantum supremacy was coming, for example a Quanta Magazine article by Kevin Hartnett entitled Quantum Supremacy Is Coming: Here’s What You Should Know and another article about Neven’s double exponential law. Also, Scott Aaronson gave some hints about it.
On September 18, I met Thomas Vidick at a very nice conference of the Israeli and US academies on the future of computer science. (It was mentioned in this post; links to all the videos will be added here; Naftali Tishby’s lecture is especially recommended.) Thomas told me about the new expected Google paper. Later that day I got involved in an amusing Facebook discussion about related matters. (See Barry Simon’s first comment on Preskill’s post and the subsequent 15 comments.)
When I introduced Thomas to Menachem Yaari (who was the president of the Israeli Academy), describing the former as a young superstar in quantum computation, Menachem’s reaction was: “but you do not believe in quantum computers.” I replied that I believe it is a fascinating intellectual area, and that perhaps I am even wrong about them being infeasible. Thomas said: “our area needs more people like Gil.” (!)
What about Scott?
Scott and I have been on friendly terms for many years and share a lot of interests and values. We are deeply divided regarding quantum computers and, naturally, I think that I am right and that Scott is wrong. In the context of the Google paper, Scott’s references to me and my stance were peculiar and even a little hostile, which was especially strange since at that time I did not have access to the paper and Scott was the referee of the paper.
Gil, how do you envision a situation where you are proven wrong?
If my theory of quantum computation being derailed by noise inherent in quantum gates is proven wrong, then physicists will say that I am a mathematician and mathematicians will say that I am a combinatorialist.
And how do you envision a situation where you are proven right? Continue reading