Yosi Rinott, Tomer Shoham, and I wrote a manuscript about our study of the Google 2019 supremacy experiment. It is still a draft, and comments or corrections are most welcome. (The paper already incorporates a few comments from the Google team. October 11, 2022: a new version is posted, based on excellent comments and many corrections by Carsten Voelkmann. October 23, 2022: here is a new version; we thank several colleagues for useful comments.)

### Google’s 2019 “Quantum Supremacy” Claims: Data, Documentation, & Discussion

by Gil Kalai, Yosef Rinott, and Tomer Shoham

**Abstract:** In October 2019, *Nature* published a paper describing experimental work that took place at Google. The paper claims to demonstrate quantum (computational) supremacy on a 53-qubit quantum computer. Since September 2019 the authors have been involved in a long-term project to study various statistical aspects of the Google experiment. In particular, we have been trying to gather the relevant data and information, to reconstruct and verify those parts of the Google 2019 supremacy experiment that are based on classical computations (unless they require too heavy a computation), and to subject the data to statistical analysis. We have now (August 2022) concluded the part relating to gathering the data and information needed for our study, and this document describes the available data and information for the Google 2019 experiment, along with some of our results and plans.

The manuscript describes the stage of gathering the data and information needed for our study; our analysis based on these data will be described separately. A statistical analysis of the Google experiment is already given in our Statistical Science paper, *Statistical Aspects of the Quantum Supremacy Demonstration* (in that paper we rely mainly on data from 12-qubit and 14-qubit circuits). Some preliminary statistical analysis is also given in Sections 6 and 7 of my paper *The argument against quantum computers, the quantum laws of nature, and Google's supremacy claims*.

### *Quo vadis*, random circuit sampling?

Here are three concrete questions about random circuit sampling for a quantum circuit C of the kind discussed in the Google paper, **with 22 qubits and depth 14**. These three questions refer, of course, to the ability of current quantum computers; the tasks are quite easy to achieve with classical simulations.

1. Can humanity at present produce samples that are good approximations of the Google noise model, or of any other specific noise model?

2. Has humanity reached the ability to produce samples for the quantum circuit C whose fidelity, according to the linear cross-entropy fidelity estimator, is above 0.15?

3. Has humanity reached the ability to predict, with good accuracy, the linear cross-entropy fidelity estimator for a quantum circuit C based on the fidelities of the individual components of the circuit?
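As a concrete illustration (my own toy sketch, not part of the paper), the linear cross-entropy fidelity estimator in questions 2 and 3 can be computed in a few lines of Python. The exponential weights below merely mimic the Porter-Thomas statistics of random-circuit output probabilities; they are a stand-in for an actual circuit simulation.

```python
import random

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmarking estimator:
    F_XEB = 2^n * (average ideal probability of the observed
    bitstrings) - 1.  Near 1 for ideal sampling, near 0 for noise."""
    d = 2 ** n_qubits
    return d * sum(ideal_probs[s] for s in samples) / len(samples) - 1

random.seed(0)
n = 12                       # toy size; the questions above concern 22 qubits
d = 2 ** n
# Exponential weights imitate the Porter-Thomas distribution of output
# probabilities of a random circuit (a stand-in, not a real simulation).
w = [random.expovariate(1.0) for _ in range(d)]
tot = sum(w)
ideal_probs = [x / tot for x in w]

# A noiseless sampler draws bitstrings from the ideal distribution ...
good = random.choices(range(d), weights=ideal_probs, k=50000)
# ... while a fully depolarized device samples uniformly at random.
bad = [random.randrange(d) for _ in range(50000)]

print(linear_xeb(good, ideal_probs, n))   # close to 1
print(linear_xeb(bad, ideal_probs, n))    # close to 0
```

The 0.15 threshold of question 2 asks whether a physical device can produce samples whose estimator value, computed this way against the ideal circuit probabilities, exceeds 0.15.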

The findings of our Statistical Science paper indicate that the answer to the first question is negative. The Google supremacy paper itself and subsequent confirmations present a strong case for a positive answer to the other two questions (even for larger circuits). However, there are remaining doubts and concerns that need to be carefully checked, and not enough replications to regard the answer as a solid yes.
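The "Google noise model" of question 1 takes the device's output distribution to be the mixture F·p_C(x) + (1−F)·2^(−n) of the ideal circuit distribution and uniform noise. Under that model the fidelity F can be recovered by maximum likelihood; here is a hedged toy sketch (hypothetical code, again using exponential weights in place of a real circuit, and not the authors' actual analysis):

```python
import math
import random

random.seed(1)
n = 12
d = 2 ** n
# Toy stand-in for the ideal output distribution of a random circuit.
w = [random.expovariate(1.0) for _ in range(d)]
tot = sum(w)
p_ideal = [x / tot for x in w]

# Draw samples from the noise model with true fidelity F = 0.3:
# with probability F sample from the ideal distribution, else uniformly.
F_true = 0.3
k = 20000
from_ideal = random.choices(range(d), weights=p_ideal, k=k)
samples = [from_ideal[i] if random.random() < F_true else random.randrange(d)
           for i in range(k)]

def log_likelihood(F):
    """Log-likelihood of the samples under the mixture F*p + (1-F)/d."""
    return sum(math.log(F * p_ideal[s] + (1 - F) / d) for s in samples)

# Maximum-likelihood estimate of F over a coarse grid.
F_hat = max((g / 100 for g in range(101)), key=log_likelihood)
print(F_hat)   # close to the true fidelity 0.3
```

Question 1 asks something stronger than recovering F: whether a physical device can produce samples that are actually well described by this mixture (or any other specific noise model), not merely samples whose estimated F is positive.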

## How to check a 20-qubit quantum computer?

It was announced recently that Israel is going to build a quantum computer. An interesting question is to find a methodology for confirming that a 10-qubit, 20-qubit, or 50-qubit quantum computer genuinely performs quantum computation, rather than the experimental data representing classical computation. In our Statistical Science paper we proposed a certain blind-experiment mechanism, but it still requires that classical simulation be much slower than the quantum computation.

In the new paper we also propose a mechanism that lets other groups test calibration methods (a crucial ingredient in such experiments) on the Sycamore quantum computer or other NISQ computers. In our discussions, the Google team endorsed both of these proposals for future experiments. (See Section 5 of the new manuscript.)

I already mentioned the elementary thermodynamic speed limit on QCs that follows from basic considerations of E-T uncertainty and Boltzmann's law [(h*S^2)/(k*T)], but one would ask for a better explanation of quantum speedups to ascertain the possibility of any form of error correction. It seems to me that linearity and unitarity have little to do with it, given, say, a classical random-field interpretation. For instance, we find that violations of Bell inequalities occur due to contextual non-Kolmogorov probability and are seen with Brownian motion (Allahverdyan, 2005), classical electrodynamics (numerous papers), and even water waves (Papatryfonos, 2022). Setting aside the absence of any true "global basis": if speedups derive from interacting amplitudes, then error-correcting to the Hilbert basis will *destroy* the quantum evolution rather than aid it. That would not be a mere problem of gate fidelity. In other words, what real evidence do we have that QCs aren't like analog computers?
