Combinatorics and more

Quantum computers: amazing progress (Google & IBM), and extraordinary but probably false supremacy claims (Google).


A 2017 cartoon from this post.

After the embargo update (Oct 25): Now that I have some answers from the people involved, let me make a quick update: 1) I still find the paper unconvincing; specifically, the verifiable experiments (namely, experiments that can be tested on a classical computer) cannot serve as a basis for the unverifiable fantastic claims. 2) Many of my attempts to poke holes in the experiments and methodology are also incorrect. 3) In particular, my suggestion regarding the calibration goes against the description in the supplement and the basic strategy of the researchers. 4) I will come back to the matter in a few weeks. Meanwhile, I hope that some additional information will become available. The post has been rearranged chronologically.

Some of the main issues described in this post, as well as this critique, were brought to the attention of John Martinis and some of the main researchers of the experiment on October 9, 2019.

(Oct 28, 2019) The paper refers to an external site with all the experimental results. Here is the link: https://datadryad.org/stash/dataset/doi:10.5061/dryad.k6t1rj8. In my view, it would be valuable (especially for an experiment of this magnitude) to have an independent verification of the statistical computations.

(October 23, 2019). The paper (with a supplementary long paper) is now published in Nature. The published versions look similar to the leaked versions.

Original post: For my fellow combinatorialists, here is the link to the previous post on the Kahn-Park result, isoperimetry, Talagrand, nice figures and links. Quick update (Sept. 27): Avi Wigderson’s book is out!

Recent quantum computer news

You have already heard by now that Google (informally) announced it achieved the milestone of “quantum supremacy” on a 53-qubit quantum computer. IBM announced that it will launch in October a quantum computer, also with 53 qubits. Google’s claim of quantum supremacy is given in two papers from August that were accidentally leaked to the public. Here are links to the short main paper and to the long supplementary paper. The paper correctly characterizes achieving quantum supremacy as an achievement of the highest caliber.

Putting 50+ qubits together and allowing good-quality quantum gates to operate on them is a big deal. So, in my view, Google’s and IBM’s noisy quantum circuits represent a remarkable achievement. Of course, the big question is whether they can do some interesting computation reliably, but bringing us to the point where this can be tested at all is, in itself, a big deal!

Of course, demonstrating quantum supremacy is an even bigger deal, but I expect that Google’s claim will not stand. As you know, I expect that quantum supremacy cannot be achieved at all. (See this post, this paper A, this paper B, and these slides of my recent lecture at CERN.) My specific concerns expressed in this post are, of course, related to my overall skeptical stance as well as to some technical points that I made in my papers, but they could (and should) also have been made by responsible quantum computer believers.

Achieving quantum supremacy via sampling

In the last decade there have been several suggestions to demonstrate the superior computational power of quantum circuits via sampling tasks. The quantum computer creates (approximately) a probability distribution D on 0-1 strings of length n (or on other combinatorial objects) that there are good computational-complexity reasons to think classical computers cannot sample from efficiently. In our case, D is the probability distribution obtained by measuring the outcome of a fixed pseudo-random quantum circuit.

By creating the distribution we mean sampling from D sufficiently many times, enough to show that the sampled (empirical) distribution is close to D. Because of the imperfection (noise) of qubits and gates (and perhaps some additional sources of noise) we actually do not sample from D but from another distribution D’. However, if D’ is close enough to D, the conclusion that classical computers cannot efficiently sample according to D’ is plausible.
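To make the comparison concrete, here is a minimal sketch (my own illustration, not taken from the papers) of how one can turn repeated samples into an empirical distribution and measure its distance to the ideal distribution D, assuming n is small enough that D can be computed classically. The data below is randomly generated stand-in data, not experimental output.

```python
import numpy as np

def empirical_distribution(samples, n):
    """Histogram of sampled bitstrings, encoded as integers in [0, 2**n)."""
    counts = np.bincount(samples, minlength=2**n)
    return counts / counts.sum()

def total_variation(p, q):
    """Total variation distance between two distributions on the same set."""
    return 0.5 * np.abs(p - q).sum()

# Illustration with made-up data: n = 9 qubits, 500,000 samples.
n = 9
rng = np.random.default_rng(0)
ideal = rng.exponential(size=2**n)   # stand-in for the ideal distribution D
ideal /= ideal.sum()
samples = rng.choice(2**n, size=500_000, p=ideal)   # stand-in for the experiment
print(total_variation(empirical_distribution(samples, n), ideal))
```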

How to run a supremacy sampling experiment

You compare the empirical distribution E obtained in the experiment to the ideal distribution D for increasingly large values of n. If there is a good match, this supports your supremacy claims.

There are two important caveats:

  1. A single run of the quantum computer gives you only one sample from D’ so to get a meaningful description of the target distribution you need to have many samples.
  2. Computing the ideal distribution D is computationally hard so as n increases you need a bigger and bigger computational effort to compute D.

Still, if you can show that D’ is close enough to D before you reach the supremacy regime, and you can carry out the sampling in the supremacy regime, then this gives you good reason to think that your experiments in the supremacy regime demonstrate “quantum supremacy”.
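For concreteness, the kind of statistic Google’s papers use for this match is the linear cross-entropy benchmarking fidelity, which only needs the ideal probabilities of the bitstrings that were actually sampled. A minimal sketch (the function name and data handling are my own, for illustration); note that computing the ideal probabilities is exactly the classically hard part of caveat 2 above:

```python
import numpy as np

def linear_xeb_fidelity(sampled_bitstrings, ideal_probs, n):
    """Linear cross-entropy benchmark: F = 2**n * <P(x_i)> - 1.
    Roughly 1 for ideal sampling from a random circuit, roughly 0 for
    uniform noise. `sampled_bitstrings` are integers in [0, 2**n);
    `ideal_probs` is the array of ideal output probabilities."""
    return (2**n) * np.mean(ideal_probs[sampled_bitstrings]) - 1.0
```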

Earlier experiments

The Google group itself ran this experiment for 9 qubits in 2017. One concern I have with this experiment is that I did not see quantitative data indicating how close D’ is to D. Those are distributions on 512 strings that can be described very accurately.  (There were also some boson sampling experiments with 6 bosons and 2 modes and 5 bosons with 5 modes. In this case, supremacy requires something like 50 bosons with 100 modes.)

The twist in Google’s approach

The twist in Google’s approach is that they try to compute D’ based mainly on the 1-qubit, 2-qubit, and readout errors, and then run an experiment on 53 qubits where they can neither compute D nor verify that they sample from D’. In fact, they gather samples of 0-1 strings of length 53, so this is probably much too sparse to distinguish between D and the uniform distribution even if we had unlimited computational power. (Actually, as several people mentioned, the last sentence is incorrect.)
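Roughly speaking, the fidelity prediction in the papers is a product of the individually measured component fidelities. Here is a sketch with made-up error rates of the same order of magnitude as those reported; whether such a product formula captures the real device noise (for instance, correlated errors) is exactly what is at issue.

```python
def predicted_fidelity(n_1q_gates, n_2q_gates, n_qubits,
                       e_1q=0.0016, e_2q=0.0062, e_readout=0.038):
    """Predicted circuit fidelity as a product of (1 - error) over all
    single-qubit gates, two-qubit gates, and readouts. The error rates here
    are illustrative placeholders, not the measured values."""
    return ((1 - e_1q) ** n_1q_gates
            * (1 - e_2q) ** n_2q_gates
            * (1 - e_readout) ** n_qubits)

# A 53-qubit circuit with on the order of a thousand 1-qubit gates and a few
# hundred 2-qubit gates gives a predicted fidelity of a fraction of a percent.
print(predicted_fidelity(1100, 430, 53))   # roughly 0.0015
```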

The missing part in the experiment.

The obvious missing part in the experiment is to run it on random circuits with 9-25 qubits and to test the research assumption about D’. I find it a bit surprising that apparently this was not carried out. What is needed are experiments to understand the probability distributions obtained by pseudorandom circuits on 9, 15, 20, and 25 qubits: how close they are to the ideal distribution D, and how robust they are (namely, what is the gap between the experimental distributions obtained from two runs of the experiment).
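Here is a sketch of the robustness check suggested above: the gap, in total variation distance, between the empirical distributions from two independent runs of the same pseudorandom circuit (again my own illustration; a small gap would indicate a stable D’, a large gap chaotic behavior).

```python
import numpy as np

def run_to_run_gap(samples_run1, samples_run2, n):
    """Total variation distance between the empirical distributions of two
    runs; samples are bitstrings encoded as integers in [0, 2**n)."""
    p1 = np.bincount(samples_run1, minlength=2**n) / len(samples_run1)
    p2 = np.bincount(samples_run2, minlength=2**n) / len(samples_run2)
    return 0.5 * np.abs(p1 - p2).sum()
```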

Actually, as I noted in the introduction to Paper A (subsection on near-term plans for “quantum supremacy” and Figure 3), you can put the hypothesis of quantum supremacy via pseudo-random circuits to the test already in the 10-30 qubit regime, without even building 53- or 72-qubit devices. (I can certainly see the importance of building larger circuits, which are necessary for good-quality error-correcting codes.)

The main potential mistake and the matter of correlation.

(Oct 18): In hindsight this section about correlation is not relevant to the Google supremacy story.

Let me also indicate a potential mistake in computing D’ while relying just on the behavior of individual qubits and gates. This is related to correlated errors for 2-qubit gates, which I studied mainly before 2014.

The general picture regarding correlated qubit errors is the following:

(1) Errors for gated qubits (for a CNOT gate) are (substantially) positively correlated.

Here, you can think about the extreme form of “bad” noise where with a small probability t  both gated qubits are corrupted and with probability (1-t) nothing happens. (“Good” noise is when each qubit is corrupted with probability t, independently. Real life 2-qubit noise is a certain mixture of good and bad noise.)
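A toy simulation (my own illustration, not from the papers) of the difference: under “bad” (correlated) noise both gated qubits are hit together with probability t, while under “good” (independent) noise both are hit only with probability t**2.

```python
import numpy as np

def both_qubits_corrupted(t, n_trials, correlated, seed=0):
    """Empirical probability that both gated qubits are corrupted."""
    rng = np.random.default_rng(seed)
    if correlated:                        # "bad" noise: one coin for the pair
        hit = rng.random(n_trials) < t
        q1, q2 = hit, hit
    else:                                 # "good" noise: independent coins
        q1 = rng.random(n_trials) < t
        q2 = rng.random(n_trials) < t
    return np.mean(q1 & q2)

t = 0.01
print("good (independent):", both_qubits_corrupted(t, 1_000_000, correlated=False))  # ~t**2 = 1e-4
print("bad (correlated):  ", both_qubits_corrupted(t, 1_000_000, correlated=True))   # ~t    = 1e-2
```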

(2) Therefore (unless you are below the threshold level and apply a quantum fault-tolerance scheme), qubits in cat states that were created indirectly will have correlated noise of this kind.

(3) Therefore (and going from (2) to (3) is the mathematical part), probability distributions described by pseudorandom circuits will exhibit a strong effect of synchronized errors.

What is most devastating about correlated errors is that the accumulated error-rate in terms of qubit errors becomes quadratic in the number of rounds instead of linear. See Remark 4.2 about “model error-rate” and “effective error-rate” in Paper A.

My predictions.

Let me mention that Paper B describes fairly detailed predictions about the probability distributions obtained by pseudo-random circuits, and these predictions can be tested in the 9-25 qubit range. In particular, they suggest that robust outcomes will be easy to simulate classically, as they belong to a very low-level computational complexity class (LDP), and that noise sensitivity will lead to chaotic outcomes which are far from the ideal distribution.

A problem with the supremacy statement itself.

The paper states: “On Google Cloud servers, we estimate that performing the same task for m = 20 with 0.1% fidelity using the SFA algorithm would cost 50 trillion core-hours and consume 1 petawatt hour of energy. To put this in perspective, it took 600 seconds to sample the circuit on the quantum processor 3 million times, where sampling time is limited by control hardware communications; in fact, the net quantum processor time is only about 30 seconds.”

It is not clear where these fantastic numbers come from. They refer to a specific algorithm, and there might be some vague reason to think that this algorithm cannot be improved for D or nearby distributions. But, for example, for the probability distribution I predict, this algorithm is extremely inefficient while the sampling can be carried out efficiently.

Distance-3 surface code

In 2013 I was one of the organizers of Qstart, a conference celebrating our then-new Quantum Science Center. John Martinis gave a lecture where he mentioned building distance-3 surface codes on 20+ qubits as a (then) near-term task. My argument against quantum computers does not give a definite prediction on whether distance-3 surface codes are within or beyond reach, and it would be interesting to examine this.

Conclusion

It looks like the Martinis group’s announcement of supremacy, while based on remarkable experimental progress, was premature and, in my view, mistaken. (To be fair, the papers themselves were prematurely released.) This post was also written rather quickly, so I will certainly have to think about matters further, perhaps also in view of a more careful reading of the papers, comments by members of the Google group themselves and by other people, and the post that Scott Aaronson promised to write about it.

 

Links and updates

My papers and early posts

Paper A: Three puzzles on mathematics, computation, and games, Proc. Int. Congress Math. 2018, Rio de Janeiro, Vol. 1, pp. 551–606.

Paper B: The argument against quantum computers, to appear in: M. Hemmo and O. Shenker (eds.), Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence, Springer Nature (2019), forthcoming.

Paper C: The quantum computer puzzle, Notices AMS, May 2016

My ICM 2018 videotaped lecture (also about other things).

My videotaped CERN 2019 lecture (I am not sure how well it works) and the slides.

A cartoon-post from 2017: If quantum computers are not possible why are classical computers possible?

The authors

Whether it stands or is refuted, the Google paper represents a serious multidisciplinary effort, an important moment for the scientists involved in the project, and a notable event in science!

Other links and updates

Scott Aaronson’s enthusiastic blog post gives his take on the new development and also mentions several key researchers behind the pursuit of quantum supremacy on NISQ systems. Among the interesting comments on his post: Camille (the effect of non-uniform errors; the tension with Alibaba’s classical simulation capabilities); Ryan O’Donnell (proposing to demonstrate the situation for 20+ and 30+ qubits; Ryan made several additional interesting comments).

Comparing various types of circuits (Oct 2): From what I could see, one new aspect of the methodology is the comparison between various types of circuits – the full circuit on the one hand, and some simplified versions of it that are easier to simulate for a large number of qubits on the other.

There is extensive media coverage of the claims by Google’s researchers. Let me mention a nice Quanta Magazine article by John Preskill, “Why I called it quantum supremacy,” on inventing the terms “quantum supremacy” and “NISQ”.

Another blog post by Scott on Shtetl-Optimized is about a paper published by Nature claiming an implementation of a Shor-like algorithm on a classical device. Scott offers a very quick debunking: “‘p-bit’ devices can’t scalably outperform classical computers, for the simple reason that they are classical computers.” The gist of my argument against the possibility of achieving quantum supremacy by NISQ devices is quite similar: “NISQ devices can’t outperform classical computers, for the simple reason that they are primitive classical computers.”

Updates (Oct 16):

1) Here is a remark from Oct 10 (I thought this was a big deal for a while, but it then turned out to rest on a wrong understanding of the system calibration process): Upon more careful reading of the papers, it seems that there are major issues with two elements of the experiments: the calibration and the extrapolation.

The main mistake is that the researchers calibrate parameters of the QC to improve the outcomes of the statistical tests. This calibration is flawed for two reasons. First, it invalidates the statistical test, since the rejection of the null hypothesis reflects the calibration process. Second, the calibration requires computational power larger than that of the task they set for the QC. (So it invalidates any claim of quantum advantage.)

The second, related mistake is that the researchers extrapolate from the behavior of experiments that they can calibrate (30 or so qubits for the full circuit, or 53 qubits for a simplified circuit) to the regime where they cannot calibrate (53 qubits for the full circuit). So even if the calibration itself were kosher, the extrapolation is false, and there is no reason to think that the statistical test on the 53 qubits sampled for the full circuit will do as well as they expect.

Let me add some detail on the main mistake: the crucial mistake in the supremacy claims is that what the researchers viewed as a calibration method toward a better-quality quantum computer was in reality a tuning of the device toward a specific statistical goal for a specific circuit. If the effect of the calibration with respect to one random circuit C were to improve the fidelity of the QC, then they could indeed run the calibrated device on another random circuit D. But there is no claim or evidence in the paper (or an earlier one) that this should lead to a good match for D. (This can certainly be tested on a small number of qubits.) Without such a claim, the logic behind the calibration process is fundamentally flawed.

2) Scott Aaronson initiated (Sept 25, by email; see also his comment) an email discussion between Ryan O’Donnell and me and John Martinis and some of his team. John noted that, because of the press embargo, they would not like to discuss this much more until the paper is published, and raised the concern that discussions before the embargo is lifted might be leaked. I wrote two emails to John and the group, on October 7 and 9.

3) In the same comment Scott wrote, regarding the request for full distributions that he passed to John: “John Martinis wrote back, pointing out that they already did this for 9 qubits in their 2018 Science paper.” Indeed, this paper and a supplementary slide (that John kindly attached) show an impressive match between the empirical and theoretical distributions for 9 qubits. The 2018 Science paper gives a detailed description of the calibration method.

4) It would be valuable if John Martinis and his team would clarify (publicly), even before the press embargo is lifted, whether and in what way the calibration process depends on the target circuit. (And clarifying this for the 2018 Science experiments does not even violate any obligation regarding the press embargo on the current paper.)

5) The whole notion of a press embargo was new to me, and the inability of the scientific community to openly discuss scientific claims before they are published raises some interesting issues. I tend to think that an obligation toward the publisher does not cancel other obligations the scientists might have, especially (but not only) in a case where an early version of the paper has become publicly accessible.

