After the sensationally successful Scott’s Supreme Quantum Superiority FAQ and Boaz’s inferior classical inferiority FAQ let me add my contribution, explaining my current skeptical view. (I was actually asked many of the questions below.) I also recommend Lipton and Regan’s post on the Google paper.

While much of the post will be familiar, let me mention right away a new critique of the Google supremacy claim: one of the central claims in the Google experiment – that the fidelity (quality) of a complex circuit is very close to the product of the fidelities of its basic components, qubits and gates – seems very improbable, and this may shed serious doubt on the validity of the experiment and on its conclusions.

Before we start, a few links: For the amazing news on the threshold of random discrete structures, see this post. Here is my first post on the Google matter. Let me recommend the paper From Operator Algebras to Complexity Theory and Back by Thomas Vidick. It is about a problem by Boris Tsirelson (related to various deep mathematics) and about connections to quantum computation. And just fresh on the arXiv, Quantum speedups need structure by Nathan Keller and Ohad Klein, resolving the Aaronson-Ambainis Conjecture. (Update: here is a blog post on Shtetl-Optimized.) Congrats to Nathan and Ohad!

And now, let's start.

So what is quantum supremacy? And what other things do we need to know in order to understand the claims regarding quantum computers?

**Quantum supremacy** is the ability of quantum computers to perform computations that classical computers cannot perform (or that are very, very hard for classical computers).

**Quantum error-correcting codes** are certain quantum gadgets that a quantum computer needs to create, which will be used as building blocks for larger quantum computers.

**A sampling task** is a task where the computer (quantum or classical) produces samples from a certain probability distribution **D**. Each sample is a 0-1 vector of length n.


What did the Google team do?

The Google team produced a sample of a few million 0-1 vectors of length 53, based on a certain “ideal” probability distribution **D**. They made two crucial claims regarding their sample:

A) The statistical test for how close their sample is to the ideal distribution **D** will give a result above t = 1/10,000.

B) Producing a sample with a similar statistical property will require 10,000 years on a supercomputer.

The probability distribution **D** depends on a quantum computation process (or by the technical jargon, a **quantum circuit**) denoted later by C.

What is the meaning of the statistical statement in part A)?

Google’s quantum computers (like any other current quantum computers) are very “noisy”, so what the computer produces are not samples from **D** but rather a noisy version, which could roughly be described as follows: a fraction *t* of the samples are from **D** and a fraction *(1-t)* of the samples are from the uniform distribution. The statistical test allows one to estimate the value of *t*, which is referred to as the fidelity.
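To make this concrete, here is a minimal sketch of my own (not code from the paper) of how such a fidelity-*t* noisy sampler could be simulated; the distribution **D** below is a hypothetical toy example:

```python
import random

def noisy_sampler(ideal_probs, t, n, rng):
    """Sample one n-bit string from the fidelity-t mixture: with
    probability t from the ideal distribution D (a dict mapping
    bitstrings to probabilities), otherwise uniformly at random."""
    if rng.random() < t:
        # sample from the ideal distribution D by inverse CDF
        r, acc = rng.random(), 0.0
        for x, p in ideal_probs.items():
            acc += p
            if r < acc:
                return x
        return x  # guard against floating-point rounding
    # uniform noise component
    return format(rng.randrange(2 ** n), "0{}b".format(n))

# toy example: n = 2 with an exaggerated fidelity t = 0.5
rng = random.Random(0)
D = {"00": 0.5, "01": 0.3, "10": 0.15, "11": 0.05}
samples = [noisy_sampler(D, 0.5, 2, rng) for _ in range(10_000)]
# "00" is now over-represented relative to the uniform share of 1/4
```

In the actual experiment *t* is tiny (of order 1/500), so almost all samples come from the noise component, and detecting the signal requires many samples and a statistical test.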

Could they directly verify claims A) and B) ?

No, it was only possible to give indirect evidence for both these claims.

What is the logic of Google’s quantum supremacy argument?

For claim A) regarding the success of the statistical test on the sample they have two arguments:

- Analysis based on the fidelity of the components of the quantum computer – qubits and gates (see Formula (77) below),
- Experiments that support the analysis in the regime where they can be tested by a classical computer.

According to the paper, the fidelity of entire circuits agrees remarkably well with the prediction of the simple mathematical Formula (77), with deviation under 10-20 percent. There are several reported experiments in the classically tractable regime, including on simplified circuits (that are easier to simulate on classical computers), to support the assumption that the prediction given by Formula (77) for the fidelity applies to the 53-qubit circuit in the supremacy regime.

For claim B), regarding the classical difficulty, they rely on:

- Extrapolation from the running time of a specific algorithm that they use.
- Computational complexity support for the assertion that the task they consider is asymptotically difficult.

What are the weak points in this logic?

A main weakness of the experiment (crucial, in my mind) is that the experimental support from the regime where the experiments can be tested by a classical computer – the classically tractable regime – is much too sparse and unconvincing. Much more could have been done and should have been done.

Another weakness is that the arguments for classical difficulty were mainly based on the performance of a specific algorithm.

Sources: The link to the Google paper in *Nature*; a videotaped lecture by John Martinis at Caltech.

What is your assessment of the Google claims, Gil?

I think that the claims are incorrect. Specifically, I find the evidence for the claim “the statistical test applied to the 53-qubit sample will give a result above 1/10,000” too weak, and I expect that this claim and other related claims in the paper will not stand after closer scrutiny and further experiments in the classically tractable regime. I also doubt the perfect proximity between predictions based on the 1- and 2-qubit fidelities and the circuit fidelity.

The Google experiment represents a very large leap in several aspects of human ability to control noisy quantum systems and accepting their claims requires very careful evaluation of the experiments and, of course, successful replications.

Do you want to tell us more about Formula (77)?

Formula (77) estimates the fidelity of the entire circuit as the product of the fidelities of its individual components: F = ∏_g (1 − e_g) · ∏_q (1 − e_q), where e_g is the error rate of gate g and e_q the measurement error rate of qubit q.

Yes, thank you. Here again is Formula (77) and its explanation in the paper. The fact that the fidelity of entire circuits agrees with the prediction of the simple mathematical Formula (77) is “most amazing”, according to John Martinis (videotaped lecture at Caltech). Indeed, the deviation according to the paper is at most 10-20 percent. This near-perfect agreement can be seen in various other parts of the paper. The authors’ interpretation of this finding is that it validates the digital error model and shows that there are no new mechanisms for errors.

John explains the significance of Formula (77) at Caltech. Amazing big surprises are often false.

And what do you think about it, Gil?

I completely agree that this is most amazing and, as a matter of fact, there are reasons to consider the predictions based on Formula (77) as **too good to be true**, even if qubit and gate fidelities account for all the errors in the system. The issue is that Formula (77) itself is very sensitive to noise. The formula estimates the fidelity as the product of hundreds of contributions from individual qubits and gates. Fairly small errors in estimating the individual terms can have a large cumulative effect, well beyond the 10%-20% margin.
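To illustrate the accumulation effect (a sketch of my own, with hypothetical numbers): a systematic 10% relative error in estimating each of 400 individual component error rates shifts the predicted product fidelity by more than 20%:

```python
def product_fidelity(errors):
    """Formula-(77)-style estimate: the product of (1 - e) over all
    components (gates and qubits)."""
    f = 1.0
    for e in errors:
        f *= 1.0 - e
    return f

# hypothetical circuit: 400 components, each with a 0.5% error rate
true_errors = [0.005] * 400
F_true = product_fidelity(true_errors)  # about exp(-2), i.e. ~0.135

# a systematic 10% relative under-estimate of every individual error...
biased_errors = [e * 0.9 for e in true_errors]
F_biased = product_fidelity(biased_errors)

# ...shifts the predicted circuit fidelity by over 20%
relative_shift = F_biased / F_true - 1
```

Independent, zero-mean estimation errors largely cancel in the product, but any systematic bias in the individual terms is amplified by the number of components.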

Anyway, this matter deserves further study.

Why?

Because this is considered by the authors to be a major discovery, and when looking skeptically at the Google papers this appears to be a “miracle” orthogonal to the main supremacy claim.

Let’s go back to your overall assessment. What could change your mind, Gil?

Here goes:

A) An independent verification of the statistical test outcomes for the experiments in the regime where the Google team classically computed the probabilities. This looks to me like a crucial step in the verification of such an important experiment.

A more difficult verification that I’d also regard as necessary at a later stage would be to independently check the probability distributions for the circuits given by the Google computation.

B) Experiments with the quantum computers giving sufficiently many samples to understand the noisy probability distribution for circuits in the 10-25 qubit range. See my first post, this comment by Ryan O’Donnell, and this earlier one. We need to understand the noisy probability distribution produced by the Google quantum computer, the actual fidelity, and the quality of the statistical tests used by the Google team.

C) Experiments in the 10-30 qubit range on the IBM (and other) quantum computers. It is quite possible that experimentation of this kind with random quantum circuits has already been carried out.

D) Since improved classical algorithms were found by the IBM team (though analyzing the 53-qubit samples still seems practically beyond reach), Google can produce samples for 41-49 qubits for which IBM (or others) can compute the probabilities quickly and test Google’s prediction for the fidelity.

E) Success in demonstrating distance-3 and distance-5 surface codes and other quantum error-correcting codes.

So what precisely will convince you and what is the time-schedule that you expect for matters to be clarified?

A successful and convincing combination of **three or more** of A), B), C), D), and E) will go a long way toward convincing me. The verification part A) is important, and I don’t expect problems there: I expect that the Google claims will be verified, and I consider it very important that the data be public and that various groups verify the claims. This may take several months and certainly should take less than a year.

At present, I expect that parts B)-D) will not support Google’s supremacy claims. So the outcomes of experiments in the next couple of years, both by the Google group and by other groups, will be crucial. One direction that I *do not* regard, at present, as useful for strengthening the quantum supremacy claims is increasing the number of qubits of the quantum computer.

What is required for the (easy) verification stage?

(1) Right now the raw samples of Google’s sampling experiments are public. There are altogether 300 files with samples.

(2) For every circuit that they experimented with, the Google team also plans to upload the 2^n probabilities that they obtained by the Schrödinger-Feynman algorithm. This will allow others to verify their statistical tests, to carry out subsequent analysis, and to test other algorithms for computing the same probabilities.

(3) A convenient form of the data from (2) is a file that gives, for every experiment, the probabilities that the Google team computed for the samples. (For large *n* these are much smaller files.)

(4) For each of the 300 experiments, the estimated fidelity that Formula (77) gave, and the contribution of each qubit and gate to the right-hand side of (77).

Do you plan to take part yourself?

I plan to get involved myself in the “easy” verification and analysis of the “raw data” once it becomes available. I do expect that the statistical tests will agree with the assertions in the Google paper; at the same time, as I said, I think it is important that this and other aspects of the experiments be double-checked and triple-checked. This basic data already allows interesting analysis, and indeed Google’s supplementary paper describes such analysis (which the Google people kindly pointed me to) of how the data fits the theory and of their statistical tests. See Figures S32-S36, Table V, and the associated material around pages 37-40.

What did the IBM rebuttal paper show?

Recall that the Google claim is based on two assertions:

A) The statistical test applied to the sample will give a result above 1/10,000

B) Producing a sample with a similar statistical property will require 10,000 years on a supercomputer.

The IBM team described a different algorithm (on an even stronger current supercomputer) that would take only 2.5 days rather than 10,000 years.

Can the 2.5 days be further reduced?

As far as I can see, the IBM claim is about a full computation of all 2^53 probabilities. It is reasonable to think that producing a sample (or even a complete list of 2^53 probabilities) with fidelity t reduces the classical effort linearly in t. (This is the claim about the specific algorithm used by the Google team.) If this holds for the IBM algorithm, then the 2.5 days would go down to less than a minute. (This would still be a notable “quantum speedup” in terms of the number of operations.) I don’t have an opinion as to whether we should expect classical algorithms considerably better than IBM’s for computing the exact probabilities.

But the lack of enthusiasm and the skepticism of researchers from IBM about the Google paper appear to go beyond this particular point of the 2.5 computing days. Do you think that the objection by IBM people is motivated by fierce competition or envy?

No, I tend to think that there is a genuine interest by the researchers who question the Google paper to understand the scientific matter and to carefully, critically, and skeptically examine Google’s claims. Google’s claims might also seem, to some other researchers who work on quantum computers with superconducting qubits, remote from their own experimental experience, and this may give a strong reason for skepticism. It is also possible that in time people at IBM and elsewhere will change their minds and become more enthusiastic about the Google results.

IBM paper and blog post responding to Google’s announcement.

Tell us a little more about noise

Here is a nice toy model (which I think is quite realistic) for understanding what the noise is. Suppose that you run a circuit C on your quantum computer with n qubits, and the ideal probability distribution is P_C. The fidelity-*t* noisy version of P_C will be t · P_C + (1−t) · S(P_C). And here S(P_C)(x) is the average (or a weighted average) of the values P_C(y), where y is a vector in the neighborhood of x.

Here is a concrete version: we look at the expected value of P_C(y), where y is a new vector with y_i = x_i with probability 1−p and y_i = 1−x_i with probability p, **independently** for each coordinate. We choose p so that the resulting fidelity is t (roughly, (1−p)^n = t). There are cases where positive correlation of errors for 2-qubit gates leads to correlated errors. (This is especially relevant in the context of quantum error correction.) To add this effect to the toy noise model, replace the word “independently” by “positively correlated”.
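A minimal numerical sketch of this toy model (my own illustration, using a hypothetical two-bit distribution): flip each coordinate of a sample independently with probability p, and estimate the resulting smoothed distribution by Monte Carlo.

```python
import random

def flip_noise(x, p, rng):
    """Flip each coordinate of the 0-1 tuple x independently with probability p."""
    return tuple(b ^ 1 if rng.random() < p else b for b in x)

def smoothed_probs(ideal_probs, p, rng, trials=100_000):
    """Monte-Carlo estimate of the smoothed (noisy) distribution:
    sample x from the ideal distribution, then apply independent bit flips."""
    counts = {}
    xs, ws = zip(*ideal_probs.items())
    for _ in range(trials):
        x = rng.choices(xs, weights=ws)[0]
        y = flip_noise(x, p, rng)
        counts[y] = counts.get(y, 0) + 1
    return {y: c / trials for y, c in counts.items()}

rng = random.Random(0)
D = {(0, 0): 0.7, (1, 1): 0.3}   # a toy ideal distribution on 2 bits
noisy = smoothed_probs(D, 0.1, rng)
# (0,1) and (1,0) now carry probability mass that D assigned zero
```

The correlated variant of the model would flip the coordinates in a positively correlated way instead of independently.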

Why do you think that quantum supremacy is not possible at all?

The gist of my argument against the possibility of achieving quantum supremacy by noisy intermediate-scale quantum (NISQ) computers is quite simple: “*NISQ devices can’t outperform classical computers, for the simple reason that they are primitive classical computers.*”

(Note the similarity to Scott Aaronson’s critique of a paper published by *Nature* claiming implementation of a Shor-like algorithm on a classical device called “p-bits”. Scott offered a very quick debunking: “*‘p-bit’ devices can’t scalably outperform classical computers, for the simple reason that they are classical computers.*”)

If Google’s claims are correct – does that falsify your argument?

Yes! I predict that probability distributions described (robustly) by a noisy quantum circuit represent a polynomial-time algorithm in terms of the description of the circuit. And by a polynomial-time algorithm I mean one with small degree and modest constants. The Google claim, if true, appears to falsify this prediction. (And for this you do not need quantum supremacy in the strongest form of the term.)

But is there no way that Google’s huge (or at least large) computational advantage coexists with what you say?

There is an issue that I myself am not completely sure about regarding the possibility of chaotic behavior of quantum computers. Here is the classical analog: suppose you have *n* bits of memory inside a (classical) computer of *m* bits, and ask about the complexity of the induced evolution on the *n* bits, which may be chaotic. Of course, we cannot expect that this chaotic computer can lead to a computation that requires thousands of years on a supercomputer. But can it lead to a robust computation which is superpolynomial in *n* (but polynomial in *m*)?

I don’t know the general answer but, in any case, I don’t think that it changes the situation here. If the Google claims stand, I would regard it as a very strong argument against my theory. (Even if the noisy distributions themselves are not robust.) In any case, the question of whether the samples in the experiments represent robust distributions or are chaotic could and should be tested. (I discussed it in this post.)

If Google’s claims do not stand, will it confirm or give a strong support to your position that quantum supremacy and quantum error correction are impossible?

Failure of the Google claim will mainly support the position that quantum supremacy and quantum error correction require substantial improvement of the quality of the qubits and gates. It would give noteworthy support to my position (and probably would draw some attention to it), but I would not regard it as decisive support. Let me mention that various specific predictions that I made can be tested on the Google, IBM, and other systems.

OK, so why *do you think* that the quality of qubits and gates *cannot* be improved?

Yes, this is the crucial point. One argument (that I already mentioned) for thinking that there is a barrier to the quality of gates and qubits is computational-theoretic: computationally speaking, NISQ devices are primitive classical computing devices, and this gives a strong reason to think that it will not be possible to reduce the error rate to the level allowing computational supremacy. But there is an additional argument: for a wide range of lower levels of noise, reducing the noise will have the effect of making the system more chaotic. So the first argument tells us that there is only a small range of error rates that we can hope to achieve, and the second argument tells us that for a large range of lower error rates all we gain is chaos!

Links to my work: Three puzzles on mathematics, computation, and games, Proc. ICM 2018; The argument against quantum computers, to appear in Itamar Pitowsky’s memorial volume; The quantum computer puzzle, Notices of the AMS, May 2016.

Slides from my 2019 CERN lecture. My ICM 2018 videotaped lecture.

People mainly refer to your conjectures about correlated errors.

Yes, this reflects my work between 2005 and 2013 (and was a central issue in my debate with Aram Harrow), and I think it is an important part of the overall picture. But this issue is different from my argument against quantum computers, which represents my work between 2014 and 2019. I think that my earlier work on error correlation is a key (or a starting point) to the question: what do we learn from the failure of quantum computers about general properties of quantum noise? Indeed there are various consequences; some of them are fairly intuitive, some of them are counter-intuitive, and some of them are both. The basic intuition is that once your computation really makes use of a large portion of the Hilbert space, so will the error!

The major challenge is to put this intuition into formal mathematical terms and to relate it to the mathematics and physics of quantum physics.

I expressed a similar idea in a comment to Dave Bacon in 2006, when I wrote: “I believe that you may be able to approximate a rank-one matrix up to a rank-one error. I do not believe that you will be able to approximate an arbitrary matrix up to a rank one matrix”, to which Dave replied, “I will never look at rank one matrices the same”. Dave Bacon is among the authors of the new Google paper.

What is the connection between the ability to achieve quantum supremacy and the ability to achieve quantum error-correction?

One of the main claims in my recent work is that quantum supremacy is an easier task than creating good-quality quantum error-correcting codes. In the attempted experiments by Google, we see a clear demonstration that achieving good-quality quantum error correction is harder than demonstrating quantum supremacy: the low-fidelity circuits that Google claims to achieve are far from sufficient for quantum error correction. The other claim in my argument is that quantum supremacy cannot be achieved without quantum error correction (and, in particular, not at all in the NISQ regime), and this claim is, of course, challenged by the Google claims.

You claim that without quantum error correction to start with we cannot reach quantum supremacy. But maybe John Martinis’ experimental methods have some seeds of quantum error correction inside them?

Maybe! See this 2017 cartoon from this post.

(Here is a nice overview video from 2014 about my stance and earlier work.)

Besides the critique of experimental evidence that could be tested, did you find some concrete issues with the Google experiment?

Perhaps even too many. In the first post and its comments I raised quite a few objections. Some of them are relevant and some turned out to be irrelevant or incorrect. Anyway, here, taken from my first post, are some of my concerns and attempted attacks on the Google experiment:

- Not enough experiments with full histograms; not enough experiments in the regime where they can be directly tested
- Classical supremacy argument is overstated and is based on the performance of a specific algorithm
- Error correlation may falsify the Google noise model
- Low degree Fourier coefficients may fool the statistical test
- (Motivated by a comment by Ryan.) It is easier to optimize toward the new statistical test “linear cross-ratio entropy” compared to the old logarithmic one.
- “System calibration” may reflect an optimization towards the specific required circuit.
- The interpolation argument is unjustified (because of the calibration issue).

We talked about items 1 and 2; what about items 3-5? In particular, are correlated errors relevant to the Google experiment?

No! (As far as I can see.) Correlated errors mean that in the smoothing the flipped coordinates are positively correlated. But for the random circuit and the (Porter Thomas) distribution this makes no difference!

As for item 4, it turns out (and this was essentially known from the work of Gao and Duan) that in the case of random circuits (unlike the case of BosonSampling) there are no low-degree Fourier coefficients that could fool the statistical test.

As for item 5, the answer is “nice observation, but so what?” (Let me note that the supplementary paper of the Google team compares and analyzes the linear and logarithmic statistical measures.)

What about the calibration? You got a little overworked about it, no?

In almost every scientific experiment there could be concerns that there will be some sort of biased data selection toward the hoped-for result.

Based on the description of the calibration method, I got the impression that part of the calibration/verification process (“system calibration”) was carried out toward the experimental outcome for a specific circuit, and that this did not improve the fidelity as the authors thought but rather mistakenly tweaked the experimental outcomes toward a specific probability distribution. This type of calibration would have been a major (yet innocent) flaw in the experiment. However, this possibility was excluded by a clear assertion of the researchers regarding the nature of the calibration process, and also by a more careful reading of the paper itself by Peter Shor and Greg Kuperberg. I certainly was, for a short while, way overconfident about this theory.

One nice (and totally familiar) observation is that a blind experiment can largely eliminate the concern of biased data selection.

When did you hear about the Google claim?

There were certainly some reasons to think that Google’s quantum supremacy was coming, for example a Quanta Magazine article by Kevin Hartnett entitled “Quantum Supremacy Is Coming: Here’s What You Should Know” and another article about Neven’s double exponential law. Also, Scott Aaronson gave some hints about it.

On September 18, I met Thomas Vidick at a very nice conference of the Israeli and US academies on the future of computer science (it was mentioned in this post; links to all the videos will be added here; Naftali Tishby’s lecture is especially recommended). Thomas told me about the new expected Google paper. Later that day I got involved in an amusing Facebook discussion about related matters. (See Barry Simon’s first comment to Preskill’s post and the subsequent 15 comments.)

When I introduced Thomas to Menachem Yaari (who was the president of the Israeli Academy), describing the former as a young superstar in quantum computation, Menachem’s reaction was: “but you do not believe in quantum computers.” I replied that I believe it is a fascinating intellectual area, and that perhaps I am even wrong about them being infeasible. Thomas said: “our area needs more people like Gil.” (!)

What about Scott?

Scott and I have been on friendly terms for many years and share a lot of interests and values. We are deeply divided regarding quantum computers and, naturally, I think that I am right and that Scott is wrong. In the context of the Google paper, Scott’s references to me and my stance were peculiar and even a little hostile, which was especially strange since at that time I did not have access to the paper and Scott was the referee of the paper.

Gil, how do you envision a situation where you are proven wrong?

If my theory of quantum computation being derailed by noise inherent in quantum gates is proven wrong, then physicists will say that I am a mathematician and mathematicians will say that I am a combinatorialist.

And how do you envision a situation where you are proven right?

If my theory of quantum computation being derailed by noise inherent in quantum gates is proven successful, then physicists will say that I am a mathematician and mathematicians will say that I am a combinatorialist.

And what would you say if your theory prevails?

Where I *have seen further* than others, it is because I stood on Peter Shor’s shoulders and looked in the opposite direction.

One last thing, Gil. Nick Read just commented that experimental evidence is gradually pointing toward you being wrong on the matter of topological quantum qubits.

Nick is a great guy and topological quantum computing is a great topic. The general situation is quite simple and it applies to topological quantum computing like any other form of quantum computing. The way I see it, gradual experimental progress will hit a barrier and non-gradual experimental progress will be falsified.

(See this 2014 videotaped lecture of mine on topological quantum computing, and also Section 3.5 of The argument against quantum computers.)

Let’s have an Appendix question. Can you try to briefly describe the probability distribution and the statistical test used in the Google paper?

Let me try. We start with the exponential distribution, given by the density function e^{−s} supported on [0, ∞). Now we consider our set *X* of the 2^n 0-1 vectors of length *n*.

We draw a random probability distribution P on X: the value of 2^n · P(x) is drawn at random from the exponential distribution, independently for each x. (A very slight normalization may still be needed.) A probability distribution of this kind on X is called a Porter-Thomas distribution.
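As an illustration (a sketch of my own, not from the paper), such a random Porter-Thomas distribution can be generated by drawing 2^n independent exponential values and normalizing:

```python
import random

def porter_thomas(n, rng):
    """Draw a random Porter-Thomas probability distribution on the 2**n
    bitstrings: i.i.d. exponential values, then normalize so that the
    probabilities sum to 1 (the 'slight normalization')."""
    values = [rng.expovariate(1.0) for _ in range(2 ** n)]
    total = sum(values)
    return [v / total for v in values]

P = porter_thomas(10, random.Random(0))
# the probabilities have mean 2**-n but large fluctuations: many values
# fall far below the mean, while a few are several times larger
largest_over_mean = max(P) * (2 ** 10)
```

The heavy fluctuations (the largest probability is several times the mean) are exactly what the linear cross-entropy test exploits.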

A random quantum circuit leads to a (deterministic) probability distribution P of this kind. A classical computer can compute the probabilities P(x) based on the description of the quantum circuit, but this becomes increasingly hard as n grows. A quantum computer can easily sample according to P.

We are given samples that our noisy quantum computer drew.

Our research hypothesis is that the samples are drawn from t · P + (1−t) · U, where U is the uniform distribution on X; t is called the fidelity. The null hypothesis is that the samples were drawn uniformly at random. (There is also a finer description of the noisy distribution with a Gaussian lower-order term depending on n. This can be seen already from the noise toy model above, but I will not discuss it here.)

The main test used in the Google paper is the following estimator for t, based on samples x_1, x_2, …, x_m:

t̂ = 2^n · (P(x_1) + P(x_2) + … + P(x_m))/m − 1.

They also considered a logarithmic version.
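As a sanity check (my own sketch, on synthetic data), the linear estimator recovers the fidelity t from samples drawn from the mixture t · P + (1−t) · U:

```python
import itertools
import random

def linear_xeb(probs, samples, n):
    """Linear cross-entropy estimator for the fidelity t: 2^n times the
    mean ideal probability of the observed samples, minus 1. It is ~0
    for uniform samples and ~1 for samples drawn from probs itself."""
    return (2 ** n) * sum(probs[s] for s in samples) / len(samples) - 1

rng = random.Random(0)
n = 10
# a synthetic Porter-Thomas-style ideal distribution: normalized exponentials
values = [rng.expovariate(1.0) for _ in range(2 ** n)]
total = sum(values)
probs = [v / total for v in values]
cum = list(itertools.accumulate(probs))  # cumulative weights for sampling

# draw samples from t*P + (1-t)*U with t = 0.2
t = 0.2
indices = range(2 ** n)
samples = [rng.choices(indices, cum_weights=cum)[0] if rng.random() < t
           else rng.randrange(2 ** n)
           for _ in range(50_000)]

estimate = linear_xeb(probs, samples, n)  # roughly t = 0.2, up to noise
```

With uniform samples the estimate would be close to 0, which is the null hypothesis the Google test is designed to reject.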

The samples in the experiment (at least when n is large) are too sparse to identify the probability distribution on X. (This was one of my concerns, which was also endorsed by Ryan.) But once you compute the probabilities you can study statistically the set of probabilities that you obtained for your sample. The Google paper offers some interesting statistical studies and, in particular, a statistical comparison between the empirical distribution of the values P(x_i) and its theoretical prediction.

Let me mention that important updates on applying the intermediate value theorem to football (or soccer, as it is referred to in the US), discussed in this 2009 post, were added to that post. For readers interested in the Google quantum supremacy news, here is a link to my main post on the matter.

This morning the following paper appeared on the arXiv: Thresholds versus fractional expectation-thresholds by Keith Frankston, Jeff Kahn, Bhargav Narayanan, and Jinyoung Park.

**Abstract:** Proving a conjecture of Talagrand, a fractional version of the ‘expectation-threshold’ conjecture of Kalai and the second author, we show for any increasing family **F** on a finite set **X** that p_c(**F**) = O(q_f(**F**) · log ℓ(**F**)), where p_c(**F**) and q_f(**F**) are the threshold and ‘fractional expectation-threshold’ of **F**, and ℓ(**F**) is the largest size of a minimal member of **F**. This easily implies various heretofore difficult results in probabilistic combinatorics, e.g. thresholds for perfect hypergraph matchings (Johansson-Kahn-Vu) and bounded-degree spanning trees (Montgomery). We also resolve (and vastly extend) one version of the ‘random multi-dimensional assignment’ problem of Frieze and Sorkin. Our approach builds on recent breakthrough work of Alweiss, Lovett, Wu and Zhang on the Erdős-Rado ‘sunflower’ conjecture.

The 2006 expectation-threshold conjecture gives a justification for a naive way to estimate the threshold probability of a random graph property. Suppose that you are asked about the critical probability for a random graph in G(n,p) to have a perfect matching (or a Hamiltonian cycle). You compute the expected number of perfect matchings and find that when p is C/n this expected number equals 1/2. (For Hamiltonian cycles it will be C’/n.) Of course, if the expectation is one half, the probability of a perfect matching can still be very low; indeed, in this case an isolated vertex is quite likely, but when there are no isolated vertices the expected number of perfect matchings is rather large. Our 2006 conjecture boldly asserts that the gap between the value given by such a naive computation and the true threshold value is at most logarithmic in the number of vertices. Jeff and I tried hard to find a counterexample, but instead we managed to find more general and stronger forms of the conjecture that we could not disprove.
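The naive computation is easy to carry out numerically (a sketch; the expected number of perfect matchings in G(n,p) is n!/(2^{n/2} (n/2)!) · p^{n/2}): the solution of “expected count = 1/2” sits near p = e/n, while the true threshold is log n / n, illustrating the logarithmic gap.

```python
import math

def naive_threshold_pm(n):
    """Solve E[#perfect matchings in G(n, p)] = 1/2 for p, where the
    expected count is n! / (2**(n/2) * (n/2)!) * p**(n/2).
    Work with log-gamma to avoid huge factorials."""
    assert n % 2 == 0
    log_count = (math.lgamma(n + 1) - (n / 2) * math.log(2)
                 - math.lgamma(n / 2 + 1))
    # p**(n/2) = (1/2) / count  =>  log p = (log(1/2) - log_count) / (n/2)
    return math.exp((math.log(0.5) - log_count) / (n / 2))

for n in (100, 1000, 10000):
    p_naive = naive_threshold_pm(n)
    p_true = math.log(n) / n   # the Erdős–Rényi threshold
    # n * p_naive approaches e, while p_true / p_naive grows like (log n)/e
    print(n, n * p_naive, p_true / p_naive)
```

The ratio p_true / p_naive grows with n, which is exactly the (at most logarithmic) gap the conjecture bounds.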

The expectation-threshold conjecture had some connections with a 1995 paper of Michel Talagrand entitled “Are all sets of positive measure essentially convex?” In a 2010 STOC paper, “Are Many Small Sets Explicitly Small?”, Michel formulated a weaker, fractional version of the expectation-threshold conjecture, which suffices for the various applications of the original conjecture. This conjecture (as well as a stronger form also posed by Talagrand) is now verified in the new paper!

In our 2006 paper we tried to relate the expectation threshold conjecture to various questions of independent interest related to stability theorems for discrete isoperimetric inequalities. This direction did not play a role in the new paper. Let me note that the isoperimetric problems served as partial motivation for the recent breakthrough results by Peter Keevash, Noam Lifshitz, Eoin Long, and Dor Minzer that are reported in this October 2018 post. See their paper Hypercontractivity for global functions and sharp thresholds.

- The threshold value for perfect matching – this was proved already by Erdos and Renyi (1960) and it follow from the new results. The same goes for the threshold for connectivity.
- The threshold value for Hamiltonian circuits – posed as a problem by Erdős and Rényi, it was solved by Korshunov (1976) and by Pósa (1976).
- The threshold for perfect matchings in 3-uniform hypergraphs – posed by Schmidt and Shamir (1983) and settled by Johansson, Kahn, and Vu. (It was one of the motivations for my 2006 paper with Jeff.)
- The threshold for bounded-degree spanning trees, which was open for a long time and was settled by Montgomery (2019).

Let me mention that in various cases the gap between the (fractional) expectation threshold and the threshold is a smaller power of log *n*, or is a constant, or has different behavior. A general theory that explains this is still missing.

What did play a major role in the new development was the recent breakthrough work of Alweiss, Lovett, Wu and Zhang on the Erdős-Rado ‘sunflower’ conjecture. (See this post.) I expected that the method of the sunflower paper would have major applications, but this particular application took me by surprise.


Repeats every Sunday until Saturday, February 1, 2020

Location: Ross 70

See also: Seminar announcement; previous post Symplectic Geometry, Quantization, and Quantum Noise.

The Google supremacy claims are discussed (with updates from time to time) in this earlier post. Don’t miss our previous post on combinatorics.

1. Mathematical models of classical and quantum mechanics.

2. Correspondence principle and quantization.

3. Classical and quantum computation: gates, circuits, algorithms (Shor, Grover), Solovay-Kitaev, some ideas of cryptography.

4. Quantum noise and measurement, and rigidity of the Poisson bracket.

5. Noisy classical and quantum computing and error correction, threshold theorem- quantum fault tolerance (small noise is good for quantum computation). Kitaev’s surface code.

6. Quantum speed limit/time-energy uncertainty vs symplectic displacement energy.

7. Time-energy uncertainty and quantum computation (Dorit or her student?)

8. Berezin transform, Markov chains, spectral gap, noise.

9. Adiabatic computation, quantum PCP (probabilistically checkable proofs) conjecture [? under discussion]

10. Noise stability and noise sensitivity of Boolean functions, noisy boson sampling

11. Connection to quantum field theory (Guy?).

Literature: Aharonov, D. Quantum computation, In “Annual Reviews of Computational Physics” VI, 1999 (pp. 259-346). https://arxiv.org/abs/quant-ph/9812037

Kalai, G., Three puzzles on mathematics, computation, and games, Proc. Int. Congress Math. 2018, Rio de Janeiro, Vol. 1, pp. 551–606. https://arxiv.org/abs/1801.02602

Nielsen, M.A., and Chuang, I.L., Quantum computation and quantum information. Cambridge University Press, Cambridge, 2000.

Polterovich, L., Symplectic rigidity and quantum mechanics, European Congress of Mathematics, 155–179, Eur. Math. Soc., Zürich, 2018. https://sites.google.com/site/polterov/miscellaneoustexts/symplectic-rig…

Polterovich L., and Rosen D., Function theory on symplectic manifolds. American Mathematical Society; 2014. [Chapters 1,9] https://sites.google.com/site/polterov/miscellaneoustexts/function-theor…

Wigderson, A., Mathematics and computation, Princeton Univ. Press, 2019. https://www.math.ias.edu/files/mathandcomp.pdf

**Original post (edited):**

Here is a little update on the Google supremacy claims that we discussed in this earlier post. Don’t miss our previous post on combinatorics.

Recall that a quantum supremacy demonstration would be an experiment where a quantum computer can compute in 100 seconds something that requires a classical computer 10 hours. (Say.)

In the original version I claimed that: “The Google experiment actually showed that a quantum computer running for 100 seconds PLUS a classical computer that runs 1000 hours can compute something that requires a classical computer 10 hours. (So, of course, this has no computational value; the Google experiment is a sort of stone soup.)” and that: “The crucial mistake in the supremacy claims is that the researchers’ illusion of a calibration method toward a better quality of the quantum computer was in reality a tuning of the device toward a specific statistical goal for a specific circuit.” However, it turned out that this critique of the calibration method is unfounded. (I quote it since we discussed it in the comment section.) I remarked that “the mathematics of this (alleged) mistake seems rather interesting and I plan to come back to it” (see the end of the post for a brief tentative account), noted that Google’s calibration method is an interesting piece of experimental physics, and expressed the hope that, in spite of what appeared (to me then) to be a major setback, Google will maintain **and enhance** its investments in quantum information research, since we are still at a basic-science stage where we can expect the unexpected.

Now, how can we statistically test for such a flaw in a statistical experiment? This is also an interesting question, and it reminded me of the following legend (see also here (source of the pictures below), here, and here) about Poincaré and the baker, which is often told in the context of using statistics for detection. I first heard it from Maya Bar-Hillel in the late 90s. Since this story never really happened, I tell it here a little differently. Poincaré did in fact testify as an expert in a famous trial, and his testimony was on matters related to statistics.

“My friend the baker,” said Poincaré, “I weighed every loaf of bread that I bought from you in the last year and the distribution is Gaussian with mean 950 grams. How can you claim that your average loaf is 1 kilogram?”

“You are so weird, dear Henri,” the baker replied, “but I will take what you say into consideration.”*

A year later the two pals meet again

“How are you doing dear Henri” asked the baker “are my bread loaves heavy enough for you?”

“Yes, for me they are,” answered Poincaré “but when I weighed all the loaves last year I discovered that your mean value is still 950 grams.”

“How is this possible?” asked the baker

“I weighed your loaves all year long and I discovered that the weights represent a Gaussian distribution with mean 950 grams truncated at 1 kilogram. You make the same bread loaves as before but you keep the heavier ones for me!”

“Ha ha ha,” said the baker, “touché!”** And the baker continued: “I also have something that will surprise you, Henri. I think there is a gap in your proof that a 3-manifold with the homology of a sphere is a sphere. So if you don’t tell the bread police, I won’t tell the wrong-mathematical-proofs police :)” joked the baker.

The rest of the story is history, the baker continued to bake bread loaves with an average weight of 950 grams and Poincaré constructed his famous Dodecahedral sphere and formulated the Poincaré conjecture. The friendship of Poincaré and the baker continued for the rest of their lives.

* “There are many bakeries in Paris,” thought the baker, “and every buyer can weigh the quality, weight, cost, and convenience.”

** While the conversation was originally in French, here, the French word touché is used in its English meaning.

The ideal distribution **D** for a fixed (“frozen”) random circuit C can be seen as given by exponentially distributed probabilities p(x) that depend on C.

The first-order effect of the noise is to replace **D** by a convex combination f**D** + (1-f)**U** with the uniform distribution **U**. (For low fidelity, the coefficient f is rather small.)

The second-order effect of the noise is an added Gaussian fluctuation described by Gaussian-distributed probabilities g(x). Like p(x), these probabilities also depend on the circuit C.

For low fidelity, as in our case, the calibration mainly works in the range where the uniform part (with its Gaussian fluctuation) is dominant, and the calibration (slightly) “cancels” this Gaussian fluctuation. This does not calibrate the quantum computer but rather tweaks it toward the specific Gaussian contribution that depends on the circuit C.

Technical update (Nov 18): Actually, some calculation shows that even with a hypothetical calibration toward the noisy distribution of a specific circuit, the contribution to the statistical test from instances that represent the Gaussian part of the noisy distribution is rather small. So (under the hypothetical, no-longer-relevant calibration scenario that I raised) Peter Shor’s interpretation of a computationally-heavy proof-of-concept of some value (rather than a valueless stone soup) is reasonable.
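The first-order noise picture above (exponentially distributed ideal probabilities mixed with the uniform distribution) can be sketched numerically. All concrete choices below (10 qubits, fidelity 0.3, a linear cross-entropy style estimator) are my own illustrations, not numbers from the Google papers:

```python
import random

random.seed(0)
N = 2 ** 10   # outcome strings of a hypothetical 10-qubit circuit
f = 0.3       # assumed fidelity

# Ideal (Porter-Thomas-like) probabilities: exponential, then normalized.
p = [random.expovariate(1.0) for _ in range(N)]
total = sum(p)
p = [x / total for x in p]

# First-order noise: convex combination with the uniform distribution.
q = [f * x + (1 - f) / N for x in p]

# Draw samples from the noisy distribution and estimate the fidelity:
# E_q[p(x)] = f * sum(p^2) + (1-f)/N, so f can be read off linearly.
samples = random.choices(range(N), weights=q, k=200_000)
mean_p = sum(p[i] for i in samples) / len(samples)
sum_p2 = sum(x * x for x in p)
f_hat = (N * mean_p - 1) / (N * sum_p2 - 1)
print(f_hat)  # close to the assumed fidelity 0.3
```

The point of the sketch: a low fidelity f means the samples are mostly uniform noise, yet f can still be estimated from many samples when the ideal probabilities p(x) are known.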

Answer to trivia question (Nov 15, 2019).

The famous story is The Happy Prince by Oscar Wilde.

“He looks just like an angel,” said the Charity Children as they came out of the cathedral in their bright scarlet cloaks and their clean white pinafores.

“How do you know?” said the Mathematical Master, “you have never seen one.”

“Ah! but we have, in our dreams,” answered the children; and the Mathematical Master frowned and looked very severe, for he did not approve of children dreaming.

A famous sentence from the story is:

“Swallow, Swallow, little Swallow,” said the Prince, “will you not stay with me for one night, and be my messenger?”


Gérard Cornuéjols

Gérard Cornuéjols’s beautiful (and freely available) book from 2000, Optimization: Packing and Covering, is about an important area of combinatorics which is lovingly described in the preface to the book:

The integer programming models known as set packing and set covering have a wide range of applications, such as pattern recognition, plant location and airline crew scheduling. Sometimes, due to the special structure of the constraint matrix, the natural linear programming relaxation yields an optimal solution that is integer, thus solving the problem. Sometimes, both the linear programming relaxation and its dual have integer optimal solutions. Under which conditions do such integrality properties hold? This question is of both theoretical and practical interest. Min-max theorems, polyhedral combinatorics and graph theory all come together in this rich area of discrete mathematics. In addition to min-max and polyhedral results, some of the deepest results in this area come in two flavors: “excluded minor” results and “decomposition” results. In these notes, we present several of these beautiful results. Three chapters cover min-max and polyhedral results. The next four cover excluded minor results. In the last three, we present decomposition results.

The last sentence of the preface gives this post some urgency

In particular, we state 18 conjectures. For each of these conjectures, we offer $5000 as an incentive for the first correct solution or refutation before December 2020.

The book starts with König’s theorem, the first figure is the Petersen graph, and among the other mathematical heroes mentioned in the book are Edmonds, Johnson, Seymour, Lovász, Lehman, Camion, Tutte, and Truemper.

The title of this post refers to the baker’s dozen. In the 13th century, bakers who were found to have shortchanged customers could be liable to severe punishment, and to guard against the punishment of losing a hand to an axe, a baker would give 13 for the price of 12, to be certain of not being known as a cheat (Wikipedia). In this post we mention a 19th problem for which Gérard offered 5000 dollars. (I am not sure if there is a time limit for that problem. I am thankful to Maria Chudnovsky for telling me about the problem.)

Perhaps the most difficult problem on the list was solved first: two of the problems on the list were about perfect graphs and were settled with the solution of the strong perfect graph conjecture by Chudnovsky, Robertson, Seymour, and Thomas. Three of the problems were about balanced bipartite graphs; they were solved by Chudnovsky and Seymour in 2006. Conjecture 4.14 in Chapter 4 was solved by Jonathan Wang (2010). 30,000 dollars were thus collected and 60,000 dollars are still offered (until Dec 2020).

Balanced bipartite graphs are a sort of bipartite analog of perfect graphs. They are bipartite graphs in which every induced cycle has length divisible by four. Gérard’s 19th prize-money problem is also about balanced bipartite graphs.

**Conjecture:** Let G be balanced. Then there is an edge e such that G \ e is a balanced graph.

In other words, every balanced bipartite graph contains an edge which is not the unique chord of any cycle.

This conjecture is Conjecture 5.20 in

M. Conforti, G. Cornuéjols, K. Vušković, Balanced matrices.

In that paper, this conjecture is attributed to:

M. Conforti and M. R. Rao, “Structural properties and decomposition of linear balanced matrices”, Mathematical Programming 55 (1992) 129-168.
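As an aside, the defining condition of balancedness (every induced cycle has length divisible by four) can be checked by brute force on tiny bipartite graphs. The sketch below only illustrates the definition, not the conjecture; all function names are mine:

```python
from itertools import combinations

def induced_cycle_lengths(vertices, edges):
    # An induced cycle corresponds to a vertex subset whose induced subgraph
    # is connected and 2-regular; enumerate all subsets (fine for tiny graphs).
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    lengths = []
    for k in range(3, len(vertices) + 1):
        for sub in combinations(vertices, k):
            s = set(sub)
            if any(len(adj[v] & s) != 2 for v in sub):
                continue
            seen, stack = set(), [sub[0]]   # connectivity check by DFS
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend((adj[v] & s) - seen)
            if len(seen) == k:
                lengths.append(k)
    return lengths

def is_balanced(vertices, edges):
    return all(l % 4 == 0 for l in induced_cycle_lengths(vertices, edges))

def cycle_graph(n):
    return list(range(n)), [(i, (i + 1) % n) for i in range(n)]

print(is_balanced(*cycle_graph(8)))  # True: the only induced cycle has length 8
print(is_balanced(*cycle_graph(6)))  # False: an induced cycle of length 6
```

So the 8-cycle is balanced while the 6-cycle, although bipartite, is not.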

On an unrelated matter, I just heard Shachar Lovett’s very beautiful TCS+ lecture on the sunflower conjecture (see this post on the Alweiss, Lovett, Wu, and Zhang breakthrough). You can see the lecture and many others on the TCS+ YouTube channel.

Slide 30 from my August ’19 CERN lecture: predictions of near-term experiments. (Here is the full PowerPoint presentation.) In this post we mainly discuss **point b) about chaotic behavior**. See also my paper: The argument against quantum computers.

Consider an experiment aimed at establishing quantum supremacy: your quantum computer produces a sample which is a 0-1 string of length n from a certain distribution D_i. The research assumption is that D_i is close enough to a fixed distribution **D** (which accounts for the computing process and the noise) that is very hard to demonstrate on a classical computer. By looking at a large number of samples you can perform a statistical test to verify that they were (approximately) sampled from **D**, or at least that they were sampled from a probability distribution that is very hard to compute on a classical computer!

But is it possible that all the distributions D_i are very different? Namely, that each sample is taken from a completely different distribution? More formally, is it possible that under a correct modeling of the device, for two different samples i and j, D_i has a very small correlation with D_j? In this case we say that the experiment outcomes are **not robust** and that the situation is **chaotic**.

Here are a couple of questions that I propose to think about:

- How do we test robustness?
- Do the supremacy experiments require that the experiment is robust?
- If, after many samples, you reach a probability distribution that requires exponential time on a classical computer, should you worry about whether the experiment is robust?
- Do the 10,000,000 samples for the Google 53-qubit experiment represent a robust sampling experiment?
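One way to probe the robustness question numerically: compare the empirical distributions of two separate batches of samples. Under a robust model the batches correlate; under a chaotic model, where each batch comes from a freshly drawn distribution, they need not. All the modeling choices in this toy sketch are mine:

```python
import random

random.seed(1)
N, m = 256, 50_000   # outcome space size and samples per run (toy values)

def correlation(a, b):
    # Pearson correlation of two equal-length lists of frequencies.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

base = [random.expovariate(1.0) for _ in range(N)]

def empirical_run(chaotic_run):
    # A chaotic run samples from a freshly drawn distribution each time;
    # a robust run always samples from the same underlying distribution.
    w = [random.expovariate(1.0) for _ in range(N)] if chaotic_run else base
    counts = [0] * N
    for i in random.choices(range(N), weights=w, k=m):
        counts[i] += 1
    return [c / m for c in counts]

robust = correlation(empirical_run(False), empirical_run(False))
chaotic = correlation(empirical_run(True), empirical_run(True))
print(robust, chaotic)  # robust close to 1, chaotic close to 0
```

The statistic is crude, but it shows that robustness is testable from samples alone, without computing the ideal distribution.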


Domotorp got the answer right. Congratulations, Domotorp!

To all our readers:

Shana Tova Umetuka – שנה טובה ומתוקה – Happy and sweet (Jewish) new year.

Yesterday, September 28, 2019, I was celebrating a major event by hinting at a small personal corner of it, and asked: watch the video (click on the picture) and answer TYI 40: What Are We Celebrating on September 28, 2019?

Solution to TYI39 is below the fold.

For every correct answer, as well as a creative incorrect answer, you will earn a glass of beer (or coffee) at our next meeting! Answers are welcome, but to avoid spoiling please make your answer zero-knowledge, namely such that it reveals that you know the answer and no additional information. (Like in this post.) You can also record your answer (as an additional answer) in the following poll. (And for the prize – add also your name.)

The answer will be revealed in one week.

A class of children is moving to new classes. Each child lists three friends, and the assignment of children into classes ensures that each child will have at least one of these three friends in his class. We asked: is there a strategy for five of the children that will ensure that all five are assigned to the same class?

The answer is negative: there is no such strategy. See the manuscript by Noga Alon, High School Coalitions. The question was asked by Ruthi Shaham in a Facebook group focusing on mathematics. It is related to some interesting results and problems in graph theory.

Of course, if we only want a strategy that gives five friends a high probability of being in the same class, the situation may change. Actually, when I told the problem to my family, my wife told me that 25 years ago one of my children and four of his friends faced a similar situation; one of the mothers planned a strategy for the five and they all ended up in the same class.


A 2017 cartoon from this post.

**After the embargo update** (Oct 25): Now that I have some answers from the people involved, let me make a quick update: 1) I still find the paper unconvincing; specifically, the verifiable experiments (namely, experiments that can be tested on classical computers) cannot serve as a basis for the unverifiable fantastic claims. 2) Many of my attempts to poke holes in the experiments and methodology are also incorrect. 3) In particular, my suggestion regarding the calibration goes against the description in the supplement and the basic strategy of the researchers. 4) I will come back to the matter in a few weeks. Meanwhile, I hope that some additional information will become available. The post is rearranged chronologically.

Some main issues described in the post, as well as this critique, were brought to the attention of John Martinis and some of the main researchers of the experiment on October 9, 2019.

(Oct 28, 2019) The paper refers to an external site with all the experimental results. Here is the link https://datadryad.org/stash/dataset/doi:10.5061/dryad.k6t1rj8. In my view, it would be valuable (especially for an experiment of this magnitude) to have an independent verification of the statistical computations.

(October 23, 2019). The paper (with a supplementary long paper) is now published in Nature. The published versions look similar to the leaked versions.

**Original post:** For my fellow combinatorialists here is the link to the previous post on Kahn-Park’s result, isoperimetry, Talagrand, nice figures and links. Quick update (Sept. 27) Avi Wigderson’s book is out!

You already heard by now that Google (informally) announced it achieved the milestone of “quantum supremacy” on a 53-qubit quantum computer. IBM announced launching in October a quantum computer, also with 53 qubits. Google’s claim of quantum supremacy is given in two papers from August that were accidentally leaked to the public. Here are links to the short main paper and to the long supplementary paper. The paper correctly characterizes achieving quantum supremacy as an achievement of the highest caliber.

Putting 50+ qubits together and allowing good quality quantum gates to operate on them is a big deal. So, in my view, Google and IBM’s noisy quantum circuits represent a remarkable achievement. Of course, the big question is if they can do some interesting computation reliably, but bringing us to the place that this can be tested at all is, in my view, a big deal!

Of course, demonstrating quantum supremacy is even a much bigger deal, but I expect that Google’s claim **will not stand**. As you know, I expect that quantum supremacy cannot be achieved at all. (See this post, this paper A, this paper B, and these slides of my recent lecture at CERN.) My specific concerns expressed in this post are, of course, related to my overall skeptic stance as well as to some technical points that I made in my papers, but they could (and should) have also been made by responsible quantum computer believers.

In the last decade there were several suggestions to demonstrate the computational superior power of quantum circuits via sampling tasks. The computer creates (approximately) a probability distribution **D** on 0-1 strings of length n (or other combinatorial objects) that we have good computational complexity reasons to think that classical computers cannot achieve. In our case, **D** is the probability distribution obtained by measuring the outcome of a fixed pseudo-random quantum circuit.

By creating a 0-1 distribution we mean sampling sufficiently many times from that distribution **D**, enough to show that the sampled distribution is close to **D**. Because of the imperfection (noise) of qubits and gates (and perhaps some additional sources of noise), we actually do not sample from **D** but from another distribution **D’**. However, if **D’** is close enough to **D**, the conclusion that classical computers cannot efficiently sample according to **D’** is plausible.

You compare the distribution **E** obtained by the experiment to the ideal distribution **D** for increasingly larger values of n. If there is a good match, this supports your supremacy claims.

There are two important caveats:

- A single run of the quantum computer gives you only one sample from **D’**, so to get a meaningful description of the target distribution you need to have many samples.
- Computing the ideal distribution **D** is computationally hard, so as *n* increases you need a bigger and bigger computational effort to compute **D**.

Still, if you can show that **D’** is close enough to **D** before you reach the supremacy regime, and you can carry out the sampling in the supremacy regime then this gives you good reason to think that your experiments in the supremacy regime demonstrate “quantum supremacy”.
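For sizes where the ideal distribution **D** is still computable, the comparison can be made concrete, for instance via the total variation distance between the empirical distribution and **D**. A sketch with a made-up ideal distribution (the sizes 512 strings / 100,000 samples are illustrative choices of mine):

```python
import random

random.seed(2)
N, m = 512, 100_000   # 9 "qubits": 512 strings; many samples

def tv_distance(a, b):
    # Total variation distance between two distributions on the same set.
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

# A made-up "ideal" distribution with exponentially spread probabilities.
ideal = [random.expovariate(1.0) for _ in range(N)]
total = sum(ideal)
ideal = [x / total for x in ideal]

# Empirical distribution from samples drawn from the ideal one.
counts = [0] * N
for i in random.choices(range(N), weights=ideal, k=m):
    counts[i] += 1
empirical = [c / m for c in counts]

uniform = [1 / N] * N
print(tv_distance(empirical, ideal))  # small: good match with the ideal D
print(tv_distance(uniform, ideal))    # much larger: the uniform baseline
```

With enough samples the empirical distribution sits well inside the gap separating **D** from the uniform distribution, which is the kind of quantitative statement one would like to see reported.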

The Google group itself ran this experiment for 9 qubits in 2017. One concern I have with this experiment is that I did not see quantitative data indicating how close **D’** is to **D**. Those are distributions on 512 strings that can be described very accurately. (There were also some boson sampling experiments with 6 bosons and 2 modes and 5 bosons with 5 modes. In this case, supremacy requires something like 50 bosons with 100 modes.)

The twist in Google’s approach is that they try to compute **D’** based mainly on the 1-qubit and 2-qubit gate errors (and readout errors), and then run an experiment on 53 qubits where they can neither compute **D** nor verify that they sample from **D’**. In fact, they draw samples of 0-1 strings of length 53, so this is probably much too sparse to distinguish between **D** and the uniform distribution even if we have unlimited computational power. (Actually, as several people mentioned, the last sentence is incorrect.)

The obvious missing part of the experiment is to run it on random circuits with 9-25 qubits and to test the research assumption about **D’**. I find it a bit surprising that apparently this was not carried out. What is needed are experiments to understand probability distributions obtained by pseudorandom circuits on 9, 15, 20, and 25 qubits: how close they are to the ideal distribution **D**, and how robust they are (namely, what is the gap between experimental distributions obtained by two runs of the experiment).

Actually, as I noted in the introduction to Paper A (subsection on near-term plans for “quantum supremacy” and Figure 3), you can put the hypothesis of quantum supremacy via pseudo-random circuits to the test already in the 10-30 qubit regime, without even building 53- or 72-qubit devices. (I can certainly see the importance of building larger circuits, which are necessary for good quality error correcting codes.)

**(Oct 18):** In hindsight this section about correlation is not relevant to the Google supremacy story.

Let me also indicate a potential mistake in the computation of **D’** relying just on the behavior of qubits and gates. This is related to correlated (2-qubit gate) errors that I studied mainly before 2014.

The general picture regarding correlated qubit errors is the following:

(1) Errors for gated qubits (for a CNOT gate) are (substantially) positively correlated.

Here, you can think about the extreme form of “bad” noise where, with a small probability t, both gated qubits are corrupted, and with probability (1-t) nothing happens. (“Good” noise is when each qubit is corrupted with probability t, independently. Real-life 2-qubit noise is a certain mixture of good and bad noise.)

(2) Therefore, (unless you are below the threshold level and apply quantum fault tolerance scheme) qubits in cat states that were created indirectly will have correlated noise of this kind.

(3) Therefore (and going from (2) to (3) is a mathematical part) probability distributions described by pseudorandom circuits will have a strong effect of synchronized errors.

What is most devastating about correlated errors is that the accumulated error-rate in terms of qubit errors becomes quadratic in the number of rounds instead of linear. See Remark 4.2 about “model error-rate” and “effective error-rate” in Paper A.
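The good-versus-bad noise distinction can be made concrete with a toy model of my own (nothing here is from the papers): in each round, good noise hits each of the two gated qubits independently with probability t, while bad noise hits both together with probability t. The probability that both qubits of a pair get corrupted over r rounds is then roughly quadratic in rt for good noise but linear in rt for bad noise:

```python
import random

random.seed(3)
t, rounds, trials = 0.01, 20, 200_000

def pair_corrupted(correlated):
    # Track one gated pair over `rounds` noise rounds; report whether
    # both qubits were hit at least once.
    hit = [False, False]
    for _ in range(rounds):
        if correlated:
            if random.random() < t:        # "bad" noise: both hit together
                hit = [True, True]
        else:
            for i in (0, 1):               # "good" noise: independent hits
                if random.random() < t:
                    hit[i] = True
    return all(hit)

good = sum(pair_corrupted(False) for _ in range(trials)) / trials
bad = sum(pair_corrupted(True) for _ in range(trials)) / trials
print(good, bad)  # roughly 0.03 versus 0.18
```

In this toy model the correlated ("bad") noise produces joint corruptions at a rate linear in the number of rounds, a first-order effect, while for independent noise the same event is second-order.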

Let me mention that Paper B describes fairly detailed predictions about probability distributions obtained by pseudo-random circuits and these predictions can be tested in the 9-25 qubits range. In particular, they suggest that robust outcomes will be easy to simulate classically, as they belong to a very low level (**LDP**) computational complexity class, and that noise sensitivity will lead to chaotic outcomes which are far from the ideal distribution.

On Google Cloud servers, we estimate that performing the same task for m = 20 with 0.1% fidelity using the SFA algorithm would cost 50 trillion core-hours and consume 1 petawatt hour of energy. To put this in perspective, it took 600 seconds to sample the circuit on the quantum processor 3 million times, where sampling time is limited by control hardware communications; in fact, the net quantum processor time is only about 30 seconds.

It is not clear where these fantastic numbers come from. They refer to a specific algorithm, and there might be some vague reason to think that this algorithm cannot be improved for **D** or nearby distributions. But, for example, for the probability distribution that I predict, this algorithm is extremely inefficient while the sampling can be carried out efficiently.

In 2013 I was one of the organizers of the conference Qstart celebrating our then-new Quantum Science Center. John Martinis gave a lecture where he mentioned building distance-3 surface codes on 20+ qubits as a (then) near-term task. My argument against quantum computers does not give a definite prediction of whether distance-3 surface codes are within or beyond reach, and it would be interesting to examine it.

It looks as if the Martinis group’s announcement of supremacy, while based on remarkable experimental progress, was premature and, in my view, mistaken. (To be fair, the papers themselves were prematurely released.) This post was also written rather quickly, so I will certainly have to think about matters further, perhaps also in view of a more careful reading of the papers and some comments by members of the Google group themselves and other people; also, Scott Aaronson promised to write about it.

Paper A: Three puzzles on mathematics, computation, and games, Proc. Int. Congress Math. 2018, Rio de Janeiro, Vol. 1, pp. 551–606.

Paper B: The argument against quantum computers, to appear in: Hemmo, Meir and Shenker, Orly (eds.), Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence, Springer Nature (2019), forthcoming.

Paper C: The quantum computer puzzle, Notices AMS, May 2016

My ICM 2018 videotaped lecture (also about other things)

My videotaped CERN 2019 lecture (I am not sure how well it works) and the slides.

A cartoon-post from 2017: If quantum computers are not possible why are classical computers possible?

The authors

Whether it stands or is refuted, the Google paper represents a serious multidisciplinary effort, an important moment for the scientists involved in the project, and a notable event in science!

Other links and updates

Scott Aaronson’s enthusiastic blog post gives his take on the new development and also mentions several key researchers behind the pursuit of quantum supremacy on NISQ systems. Among the interesting comments on his post: Camille (the effect of non-uniform errors; the tension with Alibaba’s classical simulation capabilities); Ryan O’Donnell (proposing to demonstrate the situation for 20+ and 30+ qubits; Ryan made several additional interesting comments).

Comparing various types of circuits: (Oct 2) From what I could see, one new aspect of the methodology is the comparison between various types of circuits – the full circuit on the one hand, and some simplified versions of it that are easier to simulate for a large number of qubits on the other.

There is large media coverage of the claims by Google’s researchers. Let me mention a nice Quanta Magazine article by John Preskill, “Why I called it quantum supremacy”, on inventing the terms “quantum supremacy” and “NISQ”.

Another blog post by Scott on Shtetl-Optimized is on a paper published by *Nature* claiming the implementation of a Shor-like algorithm on a classical device. Scott offers a very quick debunking: “*‘p-bit’ devices can’t scalably outperform classical computers, for the simple reason that they are classical computers.*” The gist of my argument against the possibility of achieving quantum supremacy by NISQ devices is quite similar: “*NISQ devices can’t outperform classical computers, for the simple reason that they are primitive classical computers.*”

**Updates** (Oct 16):

1) Here is a remark from Oct 10. I thought this was a big deal for a while, but it then turned out to rest on a wrong understanding of the system calibration process: upon a more careful reading of the papers, it seemed that there were major issues with two elements of the experiments, the calibration and the extrapolation.

The main mistake is that the researchers calibrate parameters of the QC to improve the outcomes of the statistical tests. This calibration is flawed for two reasons. First, it invalidates the statistical test, since the rejection of the null hypothesis reflects the calibration process. Second, the calibration requires computational power which is larger than the task they have for the QC. (So it invalidates any claim for quantum advantage.)

The second, related mistake is that the researchers extrapolate from behaviors of experiments that they can calibrate (30 or so qubits for the full circuit, or 53 qubits for a simplified circuit) to the regime where they cannot calibrate (53 qubits for the full circuit). So even if the calibration itself were kosher, the extrapolation is unjustified, and there is no reason to think that the statistical test on the 53-qubit samples for the full circuit will do as well as they expect.

Let me add some detail on the main mistake: the crucial mistake in the supremacy claims is that the researchers’ illusion of a calibration method toward a better quality of the quantum computer was in reality a tuning of the device toward a specific statistical goal for a specific circuit. If the effect of the calibration w.r.t. one random circuit C was to improve the fidelity of the QC, then they could indeed run it after calibration on another random circuit D. But there is no claim or evidence in the paper (or an earlier one) that this should lead to a good match for D. (This certainly can be tested on a small number of qubits.) Without such a claim, the logic behind the calibration process is fundamentally flawed.

2) Scott Aaronson initiated (by email; see also his comment) (Sept 25) an email discussion between Ryan O’Donnell and me and John Martinis and some of his team. John noted that because of the press embargo they would not like to discuss this much more until the paper is published, and raised the concern that discussions before the embargo is lifted would be leaked. I wrote two emails to John and the group, on October 7 and 9.

3) In the same comment, Scott wrote, regarding the request for full distributions that he passed to John: “John Martinis wrote back, pointing out that they already did this for 9 qubits in their 2018 *Science* paper.” Indeed, this paper and a supplementary slide (that John kindly attached) show an impressive match between the empirical and theoretical distributions for 9 qubits. The 2018 Science paper gives a detailed description of the calibration method.

4) It would be valuable if John Martinis and his team clarified (publicly), even before the press embargo is lifted, whether and in what way the calibration process depends on the target circuit. (And making this clear for the 2018 Science experiments would not even violate any obligation regarding the press embargo of the current paper.)

5) The whole notion of a press embargo was new to me, and the inability of the scientific community to discuss scientific claims openly before they are published raises some interesting issues. I tend to think that an obligation toward the publisher does not cancel other obligations the scientists might have, especially (but not only) in a case where an early version of the paper has become publicly accessible.

Three isoperimetric papers by Michel Talagrand (see the end of the post)

Discrete isoperimetric relations are of great interest on their own and today I want to tell you about a new isoperimetric inequality by Jeff Kahn and Jinyoung Park which leads to a solution to an old problem on the number of maximal independent sets. (I am thankful to Nathan Keller who told me about the new papers on the arXiv and to Jeff and Jinyoung who told me about the results a few months ago.)

Unrelated news: In connection with a forthcoming announcement by Google regarding quantum computers, Scott Aaronson commented on his blog: “*Gil Kalai’s blog will indeed be an extremely interesting one to watch … you might get to witness the real-time collapse of a worldview!*” Thanks for the publicity, Scott! Stay tuned, folks! For my current worldview see this post.

The number of maximal independent sets in the Hamming cube

**Abstract:** Let $Q_n$ be the *n*-dimensional Hamming cube and $N = 2^n$. We prove that the number of maximal independent sets in $Q_n$ is asymptotically $2n2^{N/4}$,

as was conjectured by Ilinca and the first author in connection with a question of Duffus, Frankl and Rödl.

The value $2n2^{N/4}$ is a natural lower bound derived from a connection between maximal independent sets and induced matchings. The proof that it is also an upper bound draws on various tools, among them “stability” results for maximal independent set counts and old and new results on isoperimetric behavior in $Q_n$.

An isoperimetric inequality for the Hamming cube and some consequences

**Abstract:** (Slightly modified.)

Our basic result, an isoperimetric inequality for the Hamming cube $Q_n$, can be written:

$\mathbb{E}\, h_S^{\beta} \ge 2\mu(S)(1-\mu(S))$, where $\beta = \log_2(3/2)$.

Here $\mu$ is the uniform measure on $V = \{0,1\}^n$ and, for $S \subseteq V$ and $x \in V$, $h_S(x)$ is zero if $x \notin S$, and is the number of neighbors of $x$ not in $S$ if $x \in S$.

This implies inequalities involving mixtures of edge and vertex boundaries, with related stability results, and suggests some more general possibilities. One application, a stability result for the set of edges connecting two disjoint subsets of $V$ of size roughly $|V|/2$, is a key step in showing that the number of maximal independent sets in $Q_n$ is $(1+o(1))\,2n2^{2^{n-2}}$. This asymptotic statement, whose proof will appear separately, was the original motivation for the present work.

Asymptotic enumeration of independent sets (and numbers of colorings) in various graphs is a big topic with very remarkable techniques and methods, and there are quite a few results when the graph in question is that of the discrete $n$-cube. The maximal size of an independent set is $2^{n-1}$, and you may recall that the recently solved (by Hao Huang) sensitivity conjecture asked for the properties of an induced subgraph on $2^{n-1}+1$ vertices.

How many independent sets are there? Well, we can consider all the subsets of the two independent sets of size $2^{n-1}$, so this gives us roughly $2 \cdot 2^{2^{n-1}}$. Korshunov and Sapozhenko proved in 1983 that the number is $(1+o(1))\,2\sqrt{e}\,2^{2^{n-1}}$, and here is a link to a paper of Galvin describing a beautiful proof by Sapozhenko of this result. As an exercise you can try to figure out where the $\sqrt{e}$ term comes from.

Now, let’s talk about maximal independent sets. The Kahn–Park asymptotic formula is very clean: try to figure out the lower bound!
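As a sanity check (mine, not from the papers), here is a small brute-force Python computation of the numbers of independent sets and maximal independent sets in $Q_n$ for tiny $n$; the asymptotic formulas, of course, only kick in for large $n$.

```python
# Brute-force counts of independent sets and maximal independent sets
# in the n-dimensional Hamming cube Q_n (feasible only for tiny n).
from itertools import combinations

def hamming_cube_edges(n):
    """Edges of Q_n: pairs of 0/1-vectors (encoded as ints) differing in one bit."""
    return [(x, x ^ (1 << i)) for x in range(2 ** n)
            for i in range(n) if not x >> i & 1]

def count_independent_sets(n):
    """Return (number of independent sets, number of maximal independent sets)."""
    edges = hamming_cube_edges(n)
    V = range(2 ** n)
    total = maximal = 0
    for r in range(2 ** n + 1):
        for S in combinations(V, r):
            Sset = set(S)
            if any(u in Sset and v in Sset for u, v in edges):
                continue  # S contains an edge, so it is not independent
            total += 1
            # S is maximal iff every vertex outside S has a neighbor in S
            if all(any((x ^ (1 << i)) in Sset for i in range(n))
                   for x in V if x not in Sset):
                maximal += 1
    return total, maximal

for n in range(1, 4):
    print(n, count_independent_sets(n))
# For n = 3 this gives 35 independent sets, 6 of them maximal
# (the two bipartition classes and the four antipodal pairs).
```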

Two related results in this spirit: Erdős, Kleitman and Rothschild (1976): triangle-free graphs are almost always bipartite. The solution of Dedekind’s problem on the number of antichains of subsets of an $n$-element set by Korshunov (1980) and by Sapozhenko (1991).

Let $A$ be a subset of the vertices of the discrete $n$-cube. For every vertex $x$ in $A$ we define $h_A(x)$ as the number of neighbors of $x$ that are not in $A$. We define $h_A(x) = 0$ if $x$ does not belong to $A$. ($h_A$ is a variation on the sensitivity function of $A$.)

Recall that $\mu$ denotes the uniform probability measure on the discrete cube. A classic discrete isoperimetric inequality that goes back (in a stronger form) to Harper and others asserts that $\mathbb{E}\, h_A \ge 2\mu(A)(1-\mu(A))$.

(The left-hand side is 1/2 times the average sensitivity, aka total influence, of $A$.)

Talagrand proved that $\mathbb{E}\,\sqrt{h_A} \ge K\,\mu(A)(1-\mu(A))$ for some universal constant $K > 0$. This result is sharp up to a multiplicative constant both for subcubes and for Hamming balls. We discussed Talagrand’s result in this post.

Let $\beta = \log_2(3/2) \approx 0.585$. Kahn and Park proved that $\mathbb{E}\, h_A^{\beta} \ge 2\mu(A)(1-\mu(A))$.

Note that the exponent $\beta \approx 0.585$ is higher than Talagrand’s exponent $1/2$. The new inequality is sharp on the nose for subcubes of codimension one and two. **Let’s check it**: for codimension 1, $h_A$ is constant 1 on $A$, so $\mathbb{E}\,h_A^\beta$ is $1/2$ and this equals $2\mu(A)(1-\mu(A)) = 2\cdot\tfrac12\cdot\tfrac12 = \tfrac12$. When $A$ is a codimension-2 subcube, $h_A$ is constant 2 on $A$. Now, by the definition of $\beta$, $2^\beta = 3/2$. Thus $\mathbb{E}\,h_A^\beta = \tfrac14\cdot\tfrac32 = \tfrac38$ and $2\mu(A)(1-\mu(A)) = 2\cdot\tfrac14\cdot\tfrac34 = \tfrac38$, **walla!**
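Both inequalities can be verified exhaustively on a tiny cube. Here is a quick (unofficial) brute-force check in Python over all $256$ subsets of $Q_3$, confirming the Harper-type inequality $\mathbb{E}\,h_A \ge 2\mu(A)(1-\mu(A))$ and the Kahn–Park inequality $\mathbb{E}\,h_A^{\beta} \ge 2\mu(A)(1-\mu(A))$, with equality in the latter for subcubes of codimension two such as an edge.

```python
# Exhaustive check of the Harper-type and Kahn-Park inequalities on Q_3.
from itertools import combinations
from math import log2

n = 3
V = range(2 ** n)
beta = log2(3 / 2)  # Kahn-Park exponent, approx 0.585

def h(A, x):
    """h_A(x): number of neighbors of x outside A, or 0 if x is not in A."""
    if x not in A:
        return 0
    return sum((x ^ (1 << i)) not in A for i in range(n))

worst_harper = worst_kp = float("inf")
for r in range(2 ** n + 1):
    for S in combinations(V, r):
        A = set(S)
        mu = len(A) / 2 ** n
        rhs = 2 * mu * (1 - mu)
        E_h = sum(h(A, x) for x in V) / 2 ** n          # E h_A
        E_hb = sum(h(A, x) ** beta for x in V) / 2 ** n  # E h_A^beta
        worst_harper = min(worst_harper, E_h - rhs)
        worst_kp = min(worst_kp, E_hb - rhs)

print(worst_harper >= -1e-9, worst_kp >= -1e-9)  # both inequalities hold

# Equality for a codimension-2 subcube, e.g. the edge A = {000, 001}:
A = {0, 1}
mu = len(A) / 8
print(abs(sum(h(A, x) ** beta for x in V) / 8 - 2 * mu * (1 - mu)) < 1e-9)
```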

For the relation between the Kahn-Park isoperimetric inequality and the Kahn-Park theorem on counting maximal independent sets I refer you to the original paper. The two papers are beautifully written.

Here are drawings and links regarding Talagrand’s three isoperimetric papers and subsequent papers, and I hope to come back to discuss them in the future.

Three important papers on discrete isoperimetric inequalities by Talagrand

Concentration of measure and isoperimetric inequalities in product spaces (Publ IHES 1995)

On Russo’s approximate zero-one law (Ann of Probability, 1994)

Isoperimetry, logarithmic Sobolev inequalities on the discrete cube, and Margulis’ graph connectivity theorem (GAFA, 1993)

Eight very recommended papers by Talagrand and Talagrand’s prize money problems

Michel Talagrand page: Become RICH with my prizes

An incomplete larger picture


[Image: a figure that was used for the official T-shirt for Jean-François Le Gall’s birthday conference.]

See also this Quanta Magazine article by Kevin Hartnett.
