Tomorrow: Boolean functions day at the TAU theory fest

As part of the 2019/2020 TAU theory fest, tomorrow, Friday, January 3, 2020,  is a Boolean function day at Tel Aviv University. The five speakers are Esty Kelman, Noam Lifschitz, Renan Gross, Ohad Klein, and Naomi Kirshner.

For more (though probably not all) mathematical activities in Israel these days, including Abel in Jerusalem, see this post.

The Abel in Jerusalem site now has the titles and abstracts of all talks. The talks are aimed at a level that advanced undergraduate students can learn from and enjoy. (The conference celebrating Boris Solomyak’s birthday was just added to the post on current math activities in Israel.)

Windows in Theory reports from the theory fest on an exciting scientific bet between Elchanan Mossel and Subhash Khot (raising again the question of whether scientific bets are a good idea). Also on Windows in Theory, a report by Dorit Aharonov on the winter school on the mathematics of quantum computing at HUJI.


The Google Quantum Supremacy Demo and the Jerusalem HQCA debate.

Below are 10 annotated slides from a spontaneous informal talk that I gave at the school on mathematics of quantum computing a week ago. (PowerPoint presentation.) Later in the afternoon we had a panel/debate on quantum supremacy (click for the video) moderated by Sandy Irani and featuring Scott Aaronson, Dorit Aharonov, Boaz Barak, Sergio Boixo, Adam Bouland, Umesh Vazirani, and me. It was a thoughtful and interesting discussion. (The presentation was initially prepared for the afternoon debate, but in the end I gave it as a separate talk and presented just one slide at the panel itself. The plan was also to post it as background material before the discussion, but, not untypically, I was too slow, so here it is a week later.) I am thankful to Dorit for inviting me to the panel (and for organizing a great school!), to Sandy for her excellent moderation, and to all the panelists for their good spirit. (Update: A blog post by Dorit at Windows on Theory)

My assessment continues to be that both the Google supremacy claims and their other central claim about fidelity estimation are incorrect.

(Dec 23, 2019: Following some renewed discussion in recent days on the terminology, I proposed to replace the term “quantum supremacy” with “HQCA – Huge Quantum Computational Advantage”, and I myself may try to follow my new term in my own lectures/papers in the future and see how it goes. In fact, I already used HQCA in our Sunday’s Kazhdan seminar, where I lectured about noise stability and sensitivity for Boolean functions, Boson Sampling, and quantum circuits (lecture notes).)

My Presentation on the Google Quantum Supremacy Demonstration

Note: you may click on a slide to see it in full screen.

My lecture is not about quantum computers and quantum supremacy in general but about the Google quantum supremacy claims. (If you want to know more about my general argument against quantum computers, quantum supremacy and quantum error correction go to the previous post Gil’s Collegial Quantum Supremacy Skepticism FAQ and to several of my papers and presentations linked there.)

In this presentation I will concentrate on aspects of the Google experiment that are “too good to be true”. (Which is a bad sign, not a good sign.)

There is an amazing agreement between the fidelity as estimated from the experimental data and a very simple high-school model based on the probabilities of individual qubits and gates malfunctioning. (An even more simplified version, below in purple, requires just three parameters: the number n of qubits, the number g1 of 1-qubit gates, and the number g2 of 2-qubit gates.)
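For concreteness, here is a minimal sketch (in Python) of this kind of product estimate. The error rates below are hypothetical placeholders of roughly the right order of magnitude, not the values reported in the paper; the point is only the shape of the formula: every component is assumed to fail independently, and the circuit fidelity is taken to be the product of all the individual success probabilities.

```python
# Sketch of the simplified "high-school" fidelity estimate.
# e1, e2, e_meas are hypothetical per-component error rates (placeholders),
# not the calibrated values from the Google paper.
def estimated_fidelity(n, g1, g2, e1=0.0016, e2=0.0062, e_meas=0.038):
    """Product formula: each 1-qubit gate succeeds with probability (1-e1),
    each 2-qubit gate with probability (1-e2), and each of the n qubit
    readouts with probability (1-e_meas)."""
    return ((1 - e1) ** g1) * ((1 - e2) ** g2) * ((1 - e_meas) ** n)

# A hypothetical 53-qubit circuit with 1113 one-qubit and 430 two-qubit gates:
print(estimated_fidelity(n=53, g1=1113, g2=430))   # ~0.0015 with these rates
```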

Formula (77) itself is not surprising, but the terrific success of Formula (77) may serve as a smoking gun for the claim that Google’s experiment has serious methodological problems.

I was not the only one to be amazed. Here is what John Martinis, the scientific leader of the Google supremacy experiment, said about it (Nov 1, 2019). Read carefully!

The accuracy of Formula (77) is the most amazing thing about the data, something that came as a big surprise, and that (jokingly) will let quantum scientists keep their jobs.

And here is what I think

No, no: you cannot estimate, with a precision of 10-20%, the probability of failure of a physical system with 1000 interacting elements as the product of 1000 error probabilities.

This remarkable agreement is a major new discovery, and it is not needed for building quantum computers. It is only needed for the extrapolation argument leading the Google team to supremacy.

Even if quantum computers are eventually built, this is something that we are not going to witness. It is interesting to check whether we see anything remotely like this for the IBM quantum computers.

Here on the left you see, for various values of n (the number of qubits), two experiments with different circuits that lead to entirely different probability distributions. Since each pair has a similar number of gates, Formula (77) leads to very close fidelity estimates, which are so accurate that the green and blue lines coincide! We do not expect to see such close agreement in experimental data.

 

Blue, orange and green analytic smooth curves representing three distributions. For all three you also see empirical samples from the distributions. For two of them the samples are perfect; for the other one you see a sample based on a complicated physical experiment.

Test your intuition: can you tell the difference? Can you tell which is which?

(This test-your-intuition slide, by the way, was the single slide presented at the afternoon debate.)

Blind tests are standard in scientific experiments and are quite easy to implement here.

The meaning of the surprising statistical success of Formula (77) should be carefully examined.

Two extra slides

To put my view in context,  I devote one slide for my general argument against quantum computers, quantum supremacy and quantum error correction.

I mentioned that most experts do not agree with my argument or do not even understand it, and for some of them this may give reasons for skepticism also about my critique of the Google experiment. I personally think that my general argument is good, but I don’t think it is ironclad, and I am pleased to see this matter explored experimentally (properly). I also think that my critique of the Google demo stands well on its own.

And what may convince me to change my mind? Poetry!

Let me add that all my critique as described here and elsewhere was first presented to the Google team.

From right to left: Sandy Irani, me, Boaz Barak, Adam Bouland, Dorit Aharonov, and on the left Umesh Vazirani, Scott Aaronson, and Sergio Boixo. (Collage; more pictures below.)



Four Great Numberphile Graph Theory Videos

A quick link to our previous post: Gil Bor, Luis Hernández-Lamoneda, Valentín Jiménez-Desantiago, and Luis Montejano-Peimbert: On the isometric conjecture of Banach.

 

We have mentioned the Numberphile video channel before (here, here, and here). It has amazing videos about mathematics for the very general public. And the general public is interested! The channel has more than 3 million subscribers, and a Numberphile video usually gets well over 100,000 views. While the videos are for very general audiences, it is quite common that I learn new mathematics while watching them. Today I mention four recent Numberphile videos on graph theory.

Before that, let me mention that, while combinatorics has a special appeal for general audiences, it is a hard challenge to explain to a very large audience (or even to the large audience of Quanta Magazine) some very abstract areas of mathematics. It is not impossible: here is Kevin Hartnett’s article in Quanta Magazine on ∞-categories and Jacob Lurie’s work, and here is a 2016 article by Erica Klarreich on perfectoid spaces and Peter Scholze’s work. I hope to see Numberphile moving into these areas as well.

A breakthrough in Graph theory:  Yaroslav Shitov’s counterexample to Hedetniemi’s conjecture as told by Erica Klarreich.

For more on this development see this post. The video contains nice pictures of Stephen Hedetniemi and his 1966 thesis. An exciting open question: given two graphs of chromatic number n, is the chromatic number of their tensor product at least f(n), for some function f(n) that tends to infinity with n?
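As a small illustration of the objects in this question (my own sketch, not from the video), here is Python code that builds the tensor (categorical) product of two graphs and computes chromatic numbers by exhaustive search; it is only practical for tiny graphs such as K_3 × K_3.

```python
from itertools import product

def tensor_product(edges1, edges2):
    """Tensor (categorical) product: (u1,u2) ~ (v1,v2) iff u1~v1 in G1 and u2~v2 in G2."""
    edges = set()
    for u1, v1 in edges1:
        for u2, v2 in edges2:
            edges.add(frozenset(((u1, u2), (v1, v2))))
            edges.add(frozenset(((u1, v2), (v1, u2))))
    return [tuple(e) for e in edges]

def chromatic_number(vertices, edges):
    """Smallest k admitting a proper k-coloring (exhaustive search, tiny graphs only)."""
    for k in range(1, len(vertices) + 1):
        for colors in product(range(k), repeat=len(vertices)):
            col = dict(zip(vertices, colors))
            if all(col[u] != col[v] for u, v in edges):
                return k

K3_vertices = [0, 1, 2]
K3_edges = [(0, 1), (1, 2), (0, 2)]
prod_vertices = [(i, j) for i in K3_vertices for j in K3_vertices]
prod_edges = tensor_product(K3_edges, K3_edges)
print(chromatic_number(K3_vertices, K3_edges))      # 3
print(chromatic_number(prod_vertices, prod_edges))  # 3 for this small example
```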

Maria Chudnovsky – perfect graphs

Starting with the definition and ending with the strong perfect graph theorem (conjectured by Claude Berge), proved by Chudnovsky, Robertson, Seymour, and Thomas.

An exciting open question: Find a good definition for perfect r-uniform hypergraphs.

We discussed perfect graphs and related notions in this post, this one and this one.

(Remark: Numberphile misguidedly presented this video as a continuation of the “planar graph” video. The truth of the matter is that these are separate topics, and the perfect graph video should have been a separate Numberphile video.)

Maria Chudnovsky – planar graphs

Planar graphs are deep and fascinating (and miraculous, and always surprising) mathematical objects.  In Hebrew we can say about the topic that it is “Olam Umelo’o” (עולם ומלואו) (=a whole world with all its content). There are many fascinating properties of planar graphs and many open problems about them.

There are also many fascinating generalizations. Let me mention Abby Thompson’s problem: Suppose that G is the graph of a simple d-polytope with n vertices. Suppose also that n is even (this is automatic if d is odd). Can we always properly color the edges of G with d colors? (For d=3, the answer is yes and this is the four color theorem.)
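To make the d=3 case concrete, here is a small sketch (my own illustration, not part of the original problem statement) that looks for a proper 3-edge-coloring of the graph of the 3-dimensional cube, a simple 3-polytope with 8 vertices, by straightforward backtracking.

```python
# The graph of the 3-cube: vertices are 3-bit strings, edges join vertices
# differing in one coordinate (a simple 3-polytope with 8 vertices, 12 edges).
d = 3
vertices = range(1 << d)
edges = [(v, v ^ (1 << i)) for v in vertices for i in range(d) if v < (v ^ (1 << i))]

def proper_edge_coloring(edges, num_colors):
    """Backtracking search for an edge coloring in which edges sharing a
    vertex get different colors; returns a coloring or None."""
    coloring = {}
    def clashes(e, c):
        return any(coloring[f] == c and set(e) & set(f) for f in coloring)
    def extend(i):
        if i == len(edges):
            return True
        for c in range(num_colors):
            if not clashes(edges[i], c):
                coloring[edges[i]] = c
                if extend(i + 1):
                    return True
                del coloring[edges[i]]
        return False
    return coloring if extend(0) else None

print(proper_edge_coloring(edges, d) is not None)   # True: the 3-cube is 3-edge-colorable
```

For the cube there is of course an obvious coloring (color each edge by the coordinate in which its endpoints differ), but the same search can be tried on any small simple polytope graph.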

(We discussed planar graphs here and on MO, many times.)

Discussing vertex covers and Alcuin numbers for graphs – and a famous old puzzle. Filmed at MSRI with Annie Raymond.

We discussed here and in other posts the important notion of vertex covers, but the mathematics of the video was completely new to me.


Gil Bor, Luis Hernández-Lamoneda, Valentín Jiménez-Desantiago, and Luis Montejano-Peimbert: On the isometric conjecture of Banach

Stefan Banach and one of his famous quotes. Is it really true? A commentator (troll?) named Gina tried to challenge it in a heated discussion at the n-Category Café.

This post briefly describes a May 2019 paper On the isometric conjecture of Banach written by Gil Bor, Luis Hernández-Lamoneda, Valentín Jiménez-Desantiago, and Luis Montejano-Peimbert. (Thanks to Imre Barany who told me about it.)

Luis Montejano-Peimbert

The isometric conjecture of Banach

Stefan Banach asked the following question in 1932: Let V be a Banach space, real or complex, finite or infinite dimensional, all of whose n-dimensional subspaces, for some fixed integer n, 2 ≤ n < dim(V), are isometrically isomorphic to each other. Is it true that V is a Hilbert space?

The question has been answered affirmatively in the following cases.

(1) In 1935, Auerbach, Mazur and Ulam gave a positive answer in case V is a real 3-dimensional Banach space and n = 2.

(2) In 1959, A. Dvoretzky proved a theorem from which an affirmative answer follows for all real infinite dimensional V and n ≥ 2.

(3) Dvoretzky’s theorem was extended in 1971 to the complex case by V. Milman.

(4) In 1967, M. Gromov gave an affirmative answer in case V is finite dimensional, real or complex, except when n is odd and dim(V) = n + 1 in the real case, or n is odd and n < dim(V) < 2n in the complex case.

The beautiful new theorem by Gil Bor, Luis Hernández-Lamoneda, Valentín Jiménez-Desantiago, and Luis Montejano-Peimbert reads:

Theorem: Let B\subset \mathbb R^{n+1} , n = 4k + 1, ~ k \ge 1, n \ne 133, be a convex symmetric body, all of whose sections by n-dimensional subspaces are linearly equivalent. Then B is an ellipsoid.

The reason for the strange exception n \ne 133 has to do with the fact that 133 is the dimension of the exceptional Lie group E_7.

Gil Bor with students

E7 (source: Wikipedia): the 126 vertices of the 2_31 polytope represent the root vectors of E7, shown here in a Coxeter plane projection, together with the Coxeter–Dynkin diagram.


Abel in Jerusalem – SUNDAY, January 12, 2020, and other events

Update: Boris Solomyak’s birthday conference (January 13-14, Bar-Ilan University) is now added.

I would like to report on seven, eight, nine, ten mathematical events taking place in Jerusalem, the Tel Aviv area, and Haifa in the next few weeks. (I probably missed quite a few other mathematical events; feel free to add them in the comment section.)

1. Abel in Jerusalem: Sunday, January 12, 2020, 09:45-19:00, featuring: Christopher Hacon, Ulrike Tillmann, Claire Voisin, and François Labourie.

2. Winter School: The Mathematics of Quantum Computing (starting Dec. 15, 2019, program): the Rabin lecture by Boaz Barak, Monday, Dec. 16, 14:00, CS building; also including a panel/debate on quantum supremacy moderated by Sandy Irani, Dec. 19, 2019, 16:00, CS building. (I am one of the seven panelists; for an introduction to my stance, see this post.) There are great videotaped lectures!

3. TAU Theory Fest, December 29, 2019 – January 3, 2020. Great lineup of speakers, and various workshops.

4. Boolean function day, Friday, January 3, 2020, featuring Esty Kelman, Noam Lifschitz, Renan Gross, Ohad Klein, and Naomi Kirshner.

5. Zabrodsky lectures, Paul Seidel, December 19, 23, 24, 2019, the Hebrew University of Jerusalem. (See this previous post.)

6. Dvoretzky lectures, Sylvia Serfaty, January 15-16, 2020, the Hebrew University of Jerusalem. (See this post about Aryeh Dvoretzky.)

7. Distinguished lectures by Sergey Tabachnikov, January 6, 8, 9, at the Technion.

8. The Twenty-fifth Israeli Mini-Workshop in Applied and Computational Mathematics, Wednesday, December 25, Bar-Ilan University. Great lineup of speakers.

9. Games, Optimization and Optimism: in Honor of Uri Feige, January 14-15, 2020.

Abel In Jerusalem, Sunday, January 12, 2020.

The Einstein Institute of Mathematics is happy to host the Abel in Jerusalem conference. The conference will take place in Manchester House, Lecture Hall 2. Each year the Abel Committee holds a symposium where members of the Abel Committee and some of the world’s top mathematicians speak. “Abel in Jerusalem” will be the 10th such one-day conference, with lectures aimed at a mathematically educated and interested audience, with the objective of increasing public awareness of mathematics and of the Abel Prize.

The day starts (09:40) with opening remarks by Hans Petter Graver, President of the Norwegian Academy of Science and Letters, followed by lectures by Christopher Hacon (University of Utah) 10:00-11:00; Ulrike Tillmann (Oxford University) 11:30-12:30; Claire Voisin (Collège de France) 14:30-15:30; and François Labourie (Université Côte d’Azur) 16:00-17:00. Participation, including lunch and a reception at the King David Hotel, is free, but registration is required. Organizers: Gil Kalai and Tamar Ziegler. For more details see this page. If you are interested in attending please register here.


Test Your Intuition 42: How Much do you Gain by Knowing The Game You Play?

There are two players, the row player and the column player, and there are two possible zero-sum games, described in the picture below. In each of these games the row player has two strategies, T (top) and B (bottom), and the column player has two strategies, L (left) and R (right). The entries in each game are the payoffs that the column player pays the row player; so if the game on the left is played, the row player chooses T and the column player chooses L, then the column player pays 1 to the row player.

The repeated game scenario

One of these two games is chosen at random, each with probability 1/2, and neither player knows which one was chosen. Once this is done, the game is repeated infinitely many times. The players do not know the payoffs in each round! But after each round each player does know how the other player played. Both players want to guarantee as high an average payoff as possible (as the number of repetitions tends to infinity).

We first ask what payoff each player can guarantee. If the row player plays the mixed strategy of choosing T and B with equal probabilities, he can guarantee that his expected average gain is 1/4. The column player can also choose a mixed 1/2-1/2 strategy and guarantee that the expected payment in each round is at most 1/4. So this is the value of the game.
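The payoff matrices themselves appear only in the picture, so in the sketch below I use hypothetical stand-in matrices that are consistent with the verbal description (the left game pays 1 only at (T, L); for the right game I assume, symmetrically, a payoff of 1 only at (B, R)). Since neither player knows which game was chosen, without further information they are effectively playing the average game, whose value the code computes.

```python
# Hypothetical stand-in payoff matrices (what the column player pays the row player);
# rows are T, B and columns are L, R.  The actual matrices are shown only in the picture.
GAME_LEFT  = [[1.0, 0.0],
              [0.0, 0.0]]
GAME_RIGHT = [[0.0, 0.0],
              [0.0, 1.0]]

def value_2x2(m):
    """Value of a 2x2 zero-sum game for the row (maximizing) player."""
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:                      # saddle point in pure strategies
        return maximin
    return (a * d - b * c) / (a + d - b - c)    # value with fully mixed strategies

# With no information, the players face the average of the two games.
average_game = [[(x + y) / 2 for x, y in zip(r1, r2)]
                for r1, r2 in zip(GAME_LEFT, GAME_RIGHT)]
print(value_2x2(average_game))   # 0.25, matching the 1/4 stated above
```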

Enter one-sided information

Now comes the twist: Suppose that the row player is told which game is played.

TYI: How much on average can the row player guarantee?

 

In the picture above you see Bob Aumann discussing this game during a celebration for his former student Professor Shmuel Zamir.

Happy birthday, Shmuel!


Old lecture notes by Alex Zabrodsky

The next post will announce several mathematical events taking place in the next few weeks in Israel. One of these events is the Zabrodsky lectures at the Hebrew University of Jerusalem, which are given this year by Paul Seidel, starting Thursday, December 19, at 14:30. Recently, I was moved to discover Hebrew lecture notes that Alex prepared for his algebraic topology course 45 years ago. Alex was a beloved faculty member and a great algebraic topologist who was killed in a car accident while driving to Rochester after giving a lecture at Cornell.

Above: Alex Zabrodsky (1936-1986); below: 45-year-old lecture notes from Alex’s course in algebraic topology that I recently found.


TYI 41: How many steps does it take for a simple random walk on the discrete cube to reach the uniform distribution?

Ariel Yadin’s homepage contains great lecture notes on harmonic functions on groups and on various other topics.

I have a lot of things to discuss and to report: exciting developments in the analysis of Boolean functions; much to report on algebraic, geometric and probabilistic combinatorics following our Oberwolfach summer meeting; much to tell about our Kazhdan seminar on quantum computation and symplectic geometry; a lot of exciting math and TCS activities in Israel; exciting things that Avi Wigderson told me about noncommutative optimization and moment maps; and, of course, last but not least, the exciting Google supremacy demonstration that I most recently wrote about in my post Gil’s Collegial Quantum Supremacy Skepticism FAQ. In particular, the unbelievable local-to-global fidelity Formula (77), and (NEW) a poem by Peter Shor for quantum computer skeptics. More poems are most welcome!

With all these excitements, plans, and blog duties, it looks like this is the right time to pause for a Test Your Intuition post. (Based on chats with Asaf Nachmias, Jonathan Hermon and Itai Benjamini.)

Approaching the uniform distribution on the discrete cube with a lazy simple random walk.

Consider the discrete cube as a graph: the vertices are all 0-1 vectors of length n, and two vertices are adjacent if they differ in exactly one coordinate.

A (lazy) simple random walk is described as follows: you start at the all-0 vertex. At each step, when you are at a vertex v, you stay where you are with probability 1/2 and, with probability 1/2, you move to a neighbor of v chosen uniformly at random.

Your position after t steps is a random variable describing a probability distribution D_t on the vertices of the discrete cube. Now, let’s fix once and for all the value of \epsilon to be 0.1.

Test your intuition: How many steps T(n) does it take until D_t is \epsilon-close to the uniform distribution U in total variation distance? (The total variation distance is 1/2 of the L^1 distance.)
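If you would like to experiment numerically before answering, here is a short sketch that computes D_t exactly for small n. By symmetry, D_t depends only on the Hamming weight of the vertex, so it is enough to track the distribution over weights 0, 1, ..., n; the code reports the first t at which the total variation distance to uniform drops below \epsilon = 0.1. (It is meant only as a small-n experiment, not as an answer to the poll.)

```python
from math import comb

def tv_mixing_time(n, eps=0.1, max_steps=10**6):
    """Lazy simple random walk on the n-cube, started at the all-0 vertex.
    By symmetry D_t depends only on the Hamming weight, so we track the
    weight distribution and return the first t whose total variation
    distance to the uniform distribution is below eps."""
    P = [0.0] * (n + 1)
    P[0] = 1.0                                      # start at Hamming weight 0
    uniform = [comb(n, w) / 2 ** n for w in range(n + 1)]
    for t in range(1, max_steps + 1):
        Q = [0.0] * (n + 1)
        for w, p in enumerate(P):
            Q[w] += p / 2                           # lazy step: stay put
            if w < n:
                Q[w + 1] += (p / 2) * (n - w) / n   # flip a 0-coordinate to 1
            if w > 0:
                Q[w - 1] += (p / 2) * w / n         # flip a 1-coordinate to 0
        P = Q
        tv = sum(abs(P[w] - uniform[w]) for w in range(n + 1)) / 2
        if tv < eps:
            return t

for n in (5, 10, 20, 40):
    print(n, tv_mixing_time(n))
```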

Other measures of proximity

We can also ask: How many steps M(n) does it take until D_t(x) is close to the uniform distribution for every x? Namely, for every x,

(1-\epsilon)/2^n \le D_t(x) \le (1+\epsilon)/2^n

For this question there is a simple analysis based on the coupon collector problem.

We can also consider intermediate measures of proximity, like the entropy:

How many steps H(n) does it take until the entropy of D_t is \epsilon-close to n, the entropy of the uniform distribution?

Let me try now to test your more detailed intuition: For the public opinion poll below we say that X behaves like Y if their ratio tends to one as n tends to infinity, and that X is really smaller than Y if their ratio X/Y tends to a limit smaller than 1.

NEW: Share your knowledge

Let’s try something new: “Share your knowledge (SYK):” What other distances between probability distributions do you recommend? Tell us about them!

A theorem by Ajtai-Komlos-Szemeredi and a problem by Itai Benjamini

Ajtai, Komlós and Szemerédi proved that when you choose every edge of the discrete n-cube with probability greater than 1/n, a giant component emerges! Now, choose every edge with probability 2/n and start a simple random walk from a vertex of the giant component. Itai Benjamini conjectured that it will take roughly n^2 steps to approximately reach the stationary distribution. This seems very difficult.

 


Gil’s Collegial Quantum Supremacy Skepticism FAQ

The first 15 samples of Google’s 53-qubit flagship quantum supremacy experiment!

After the sensationally successful Scott’s Supreme Quantum Superiority FAQ and Boaz’s inferior classical inferiority FAQ let me add my contribution, explaining my current skeptical view. (I was actually asked many of the questions below.) I also recommend Lipton and Regan’s post on the Google paper.

While much of the post will be familiar, let me mention right away a new critique of the Google supremacy claim: one of the central claims in the Google experiment, that the fidelity (quality) of a complex circuit is very close to the product of the fidelities of its basic components (qubits and gates), seems very improbable, and this may shed serious doubt on the validity of the experiment and on its conclusions.

Update (27/11): For quantum computer skeptics poetry on Twitter see  this thread by Peter Shor, and this tweet by ⟨dl|yonge|mallo⟩. (See at the end of the post.)

Before we start, a few links: for the amazing news on the threshold of random discrete structures, see this post. Here is my first post on the Google matter. Let me recommend the paper From Operator Algebras to Complexity Theory and Back by Thomas Vidick. It is about a problem by Boris Tsirelson (related to various deep mathematics) and about connections to quantum computation. And just fresh on the arXiv: Quantum speedups need structure by Nathan Keller and Ohad Klein, resolving the Aaronson-Ambainis Conjecture. (Update: here is a blog post on Shtetl-Optimized.) Congrats to Nathan and Ohad! (Dec 2: unfortunately a gap in the proof was found; see the comment below.)

And now, let’s start.

So what is quantum supremacy? And what other things do we need to know in order to understand the claims regarding quantum computers?

Quantum supremacy is the ability of quantum computers to demonstrate computations that classical computers cannot demonstrate (or that are very, very hard for classical computers to demonstrate).

Quantum error correcting codes are certain quantum gadgets that a quantum computer needs to create that will be used as building blocks for larger quantum computers.

A sampling task is a task where the computer (quantum or classical) produces samples from a certain probability distribution D. Each sample is a 0-1 vector of length n.

John Martinis

Google

What did the Google team do?

The Google team produced a sample of a few million 0-1 vectors of length 53, based on a certain “ideal” probability distribution D. They made two crucial claims regarding their sample:

A) The statistical test for how close their sample is to the ideal distribution D will give a result above t=1/10,000

B) Producing a sample with similar statistical properties would require 10,000 years on a supercomputer.

The probability distribution D depends on a quantum computation process (or by the technical jargon, a quantum circuit) denoted later by C.

What is the meaning of the statistical statement in part A)?

Google’s quantum computers (like all other current quantum computers) are very “noisy”, so what the computer produces are not samples from D but rather samples from a noisy version that can roughly be described as follows: a fraction t of the samples come from D and a fraction (1-t) come from the uniform distribution. The statistical test allows one to estimate the value of t, which is referred to as the fidelity.
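To see the shape of such a test, here is a rough sketch of the linear cross-entropy estimator used in the Google paper, under the “fraction t from D, fraction 1-t uniform” picture above. The sketch assumes you are handed the ideal probabilities D(x_i) of the observed samples (from a classical simulation); for Porter-Thomas-like ideal probabilities, the sum of D(x)^2 over all x is roughly 2/2^n, which makes the expectation of the estimator approximately t.

```python
import random

def linear_xeb_fidelity(ideal_probs_of_samples, n):
    """Linear cross-entropy estimate of the fidelity t: 2^n * mean(D(x_i)) - 1,
    where D(x_i) are the ideal probabilities of the observed samples."""
    m = len(ideal_probs_of_samples)
    return (2 ** n) * sum(ideal_probs_of_samples) / m - 1

# Tiny synthetic check with a Porter-Thomas-like ideal distribution.
random.seed(0)
n, t = 12, 0.3
N = 1 << n
weights = [random.expovariate(1.0) for _ in range(N)]
total = sum(weights)
D = [w / total for w in weights]                         # "ideal" probabilities
noisy = [t * d + (1 - t) / N for d in D]                 # fidelity-t noisy distribution
samples = random.choices(range(N), weights=noisy, k=200_000)
print(linear_xeb_fidelity([D[x] for x in samples], n))   # close to t = 0.3
```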

Could they directly verify claims A) and B)?

No, it was only possible to give indirect evidence for both these claims.

What is the logic of Google’s quantum supremacy argument?

For claim A) regarding the success of the statistical test on the sample they have two arguments:

  1. Analysis based on the fidelity of the components of the quantum computer – qubits and gates (see formula (77) above),
  2. Experiments that support the analysis in the regime where they can be tested by a classical computer.

According to the paper, the fidelity of entire circuits agrees perfectly with the prediction of the simple mathematical formula (77), with deviation under 10-20 percent. There are several reported experiments in the classically tractable regime, including on simplified circuits (that are easier to simulate on classical computers), to support the assumption that the prediction given by formula (77) for the fidelity applies to the 53-qubit circuit in the supremacy regime.

For claim B), regarding the classical difficulty, they rely on:

  1. Extrapolation from the running time of a specific algorithm that they use.
  2. Computational complexity support for the assertion that the task they consider is asymptotically difficult.

What are the weak points in this logic?

A main weakness (which is crucial in my mind) of the experiment is that the experimental support from the regime where the experiments can be tested by a classical computer is too sparse. Much more could have been done and should have been done.

In my opinion, a major weakness of the Google experiment is that the support from experiments in the classically tractable regime (blue in the picture) is much too sparse and is unconvincing.

Another weakness is that the arguments for classical difficulty were mainly based on the performance of a specific algorithm.

Sources: The link to the Google paper in Nature. A videotaped lecture by John Martinis at Caltech. A short Google video “demonstrating quantum supremacy”.

Assessment

What is your assessment of the Google claims, Gil?

I think that the claims are incorrect. Specifically, I find the evidence for the claim “the statistical test applied to the 53-qubit sample will give a result above 1/10,000” too weak, and I expect that this claim and other related claims in the paper will not stand after closer scrutiny and further experiments in the classically tractable regime. I also doubt the perfect proximity between the predictions based on the 1- and 2-qubit fidelities and the circuit fidelity.

The Google experiment represents a very large leap in several aspects of human ability to control noisy quantum systems and accepting their claims  requires very careful evaluation of the experiments and, of course, successful replications.

The “most amazing thing” – is it real?

Do you want to tell us more about Formula (77)?

Formula (77)

Yes, thank you. Here again is Formula (77) and its explanation in the paper. The fact that the fidelity of entire circuits agrees with the prediction of the simple mathematical Formula (77) is “most amazing” according to John Martinis (videotaped lecture at Caltech). Indeed, the deviation according to the paper is at most 10-20 percent. This perfect agreement can be seen in various other parts of the paper. The authors’ interpretation of this finding is that it validates the digital error model and shows that there are no new mechanisms for errors.

John explains the significance of Formula (77) at Caltech. Amazing big surprises are often false.

And what do you think about it, Gil?

I completely agree that this is most amazing and, as a matter of fact, there are reasons to consider the predictions based on Formula (77) as too good to be true even if qubit and gate fidelities account for all the errors in the system. The issue is that Formula (77) itself is very sensitive to noise. The formula estimates the fidelity as the product of hundreds of contributions from individual qubits and gates. Fairly small errors in estimating the individual terms can have a large cumulative effect, well beyond the 10%-20% margin.
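Here is a small sketch, with made-up numbers, of the kind of sensitivity I have in mind: if every one of roughly a thousand per-component error rates is misestimated by the same modest relative amount, the product formula drifts far outside a 10-20% window.

```python
# Made-up illustration: sensitivity of a ~1000-term product formula to a small
# systematic misestimate of the individual error rates.
errors = [0.005] * 1000                       # hypothetical per-component error rates

def product_fidelity(errs):
    f = 1.0
    for e in errs:
        f *= 1.0 - e
    return f

true_value = product_fidelity(errors)
for rel_bias in (-0.10, -0.05, 0.05, 0.10):   # every rate off by -10% .. +10%
    est = product_fidelity([(1 + rel_bias) * e for e in errors])
    print(f"rates off by {rel_bias:+.0%}: predicted/true = {est / true_value:.2f}")
```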

Anyway, this matter deserves further study.

Why?

Because this is considered by the authors as a major discovery, and when looking skeptically at the Google papers it appears to be a “miracle” orthogonal to the main supremacy claim.

How to proceed

Let’s go back to your overall assessment. What could change your mind, Gil?

Here goes:

Verification

A) An independent verification of the outcomes of the statistical tests for the experiments in the regime where the Google team classically computed the probabilities. This looks to me like a crucial step in the verification of such an important experiment.

A more difficult verification that I’d also regard as necessary at a later stage would be to independently check the probability distributions for the circuits given by the Google computation.

Further analysis and crucial improvements

B) Experiments with the quantum computers giving sufficiently many samples to understand the noisy probability distribution for circuits in the 10-25 qubit range.  See my first post, this comment by Ryan O’Donnell, and this earlier one.  We need to understand the noisy probability distribution produced by the Google quantum computer, the actual fidelity, and the quality of the statistical tests used by the Google team.

Replications

C) Experiments in the 10-30 qubit range on the IBM (and other) quantum computers. It is quite possible that experimentation of this kind with random quantum circuits has already been carried out.

D) Improved classical algorithms were found by the IBM team (though analyzing the 53-qubit samples still seems practically beyond reach). Google can therefore produce samples for 41-49 qubits for which IBM (or others) can compute the probabilities quickly and test Google’s prediction for the fidelity.

Follow-ups

E) Success in demonstrating distance-3 and distance-5 surface codes and other quantum error-correcting codes.

So what precisely will convince you and what is the time-schedule that you expect for matters to be clarified?

A successful and convincing combination of three or more of A), B), C), D), and E) will go a long way toward convincing me. The verification part A) is important, and I don’t expect problems there; I expect that the Google claims will be verified, and I consider it very important that the data will be public and that various groups will verify the claims. This may take several months, and it certainly should take less than a year.

At present, I expect parts B)-D) will not support Google’s supremacy claims. So outcomes of experiments in the next couple of years both by the Google group and by other groups will be crucial. One direction that I do not regard, at present, as useful for strengthening the quantum supremacy claims is increasing the number of qubits of the quantum computer.

What is required for the (easy) verification stage?

(1) Right now the raw samples of Google’s sampling experiments are public. There are altogether 300 files with samples.

(2) For every circuit that they experimented with, the Google team also plans to upload the 2^n probabilities that they obtained by the Schrödinger-Feynman algorithm. This will allow verification of their statistical tests, subsequent analysis, and testing by other researchers of other algorithms for computing the same probabilities.

(3) A convenient form of the data from (2) is a file that will give for every experiment the probabilities that the Google team computed for the samples. (For a large n those are much smaller files.)

(4) For each of the 300 experiments the estimated fidelity that formula (77) gave and the contribution of each qubit and gate to the RHS of (77).

Do you plan to take part yourself?

I plan to get involved myself with the “easy” verification and analysis of the “raw data” once it becomes available. I do expect that the statistical tests will agree with the assertions in the Google paper, and, at the same time, as I said, I think it is important that this and other aspects of the experiments be double checked and triple checked. This basic data already allows interesting analysis, and indeed Google’s supplementary paper describes such analysis (that the Google people kindly pointed me to) of how the data fits the theory and of their statistical tests. See Figures S32-S36, Table V and the associated material around pages 37-40.

IBM

What did the IBM rebuttal paper show?

Recall that the Google claim is based on two assertions:

A) The statistical test applied to the sample will give a result above 1/10,000

B) Producing a sample with similar statistical properties would require 10,000 years on a supercomputer.

The IBM team described a different algorithm (on an even stronger current supercomputer) that would take only 2.5 days rather than 10,000 years.

Can the 2.5 days be further reduced?

As far as I can see, the IBM claim is about a full computation of all 2^53 probabilities. It is reasonable to think that producing a sample (or even a complete list of 2^53 probabilities) with fidelity t reduces the classical effort linearly with t. (This is the claim about the specific algorithm used by the Google team.) If this holds for the IBM algorithm, then the 2.5 days will go down to less than a minute. (This will still be a notable “quantum speedup” in terms of the number of operations.) I don’t have an opinion as to whether we should expect classical algorithms considerably better than IBM’s for computing the exact probabilities.

But the lack of enthusiasm and the skepticism of researchers from IBM about the Google paper appear to go beyond this particular point of the 2.5 computing days. Do you think that the objection by IBM people is motivated by fierce competition or envy?

No, I tend to think that there is a genuine interest by researchers who question the Google paper to understand the scientific matter, and to carefully, critically and skeptically examine Google’s claims. Google’s claims might seem, to some other researchers who are working on quantum computers with superconducting qubits, remote from their own experimental experience, and this may give a strong reason for skepticism. It is also possible that in time people at IBM and elsewhere will change their minds and become more enthusiastic about the Google results.

IBM paper and blog post responding to Google’s announcement. (Update (Feb 24, 20) about classical simulation in this comment)

Noise

Tell us a little more about noise

Here is a nice toy model (which I think is quite realistic) for understanding what the noise is. Suppose that you run a circuit C on your quantum computer with n qubits and the ideal probability distribution is D_C. The fidelity-t noisy version of D_C will be N_C(x) = t D_C(x) + (1-t) S_C(x), where S_C(x) is the average (or a weighted average) of the values D_C(y) over vectors y in a neighborhood of x.

Here is a concrete version: we take S_C(x) to be the expected value of D_C(y), where y is a new vector with y_i = x_i with probability p and y_i = 1 - x_i with probability 1-p, independently over the coordinates. We choose p so that p^n = t. There are cases where positive correlation of errors for 2-qubit gates leads to correlated errors. (This is especially relevant in the context of quantum error correction.) To add this effect to the toy noise model, replace the word “independently” by “positively correlated”.
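A minimal implementation of this toy model for small n (my own sketch): D is the ideal distribution given as a list of 2^n probabilities indexed by bitstrings-as-integers, and the neighborhood average S_C(x) is computed by brute force over all y.

```python
import random

def toy_noisy_distribution(D, n, t):
    """Toy noise model: N(x) = t*D(x) + (1-t)*S(x), where S(x) is the expected
    value of D(y) when each coordinate of x is kept with probability p and
    flipped with probability 1-p, independently, and p is chosen so p^n = t."""
    p = t ** (1.0 / n)
    N = [0.0] * (1 << n)
    for x in range(1 << n):
        s = 0.0
        for y in range(1 << n):
            k = bin(x ^ y).count("1")          # number of flipped coordinates
            s += D[y] * (p ** (n - k)) * ((1 - p) ** k)
        N[x] = t * D[x] + (1 - t) * s
    return N

# Quick sanity check on a random "ideal" distribution for n = 3.
random.seed(0)
n = 3
w = [random.random() for _ in range(1 << n)]
D = [x / sum(w) for x in w]
N = toy_noisy_distribution(D, n, t=0.2)
print(round(sum(N), 6))                        # 1.0 -- still a probability distribution
```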

Your argument, Gil

Why do you think that quantum supremacy is not possible at all?

The gist of my argument against the possibility of achieving quantum supremacy by Noisy intermediate scale quantum computers is quite simple: “NISQ devices can’t outperform classical computers, for the simple reason that they are primitive classical computers.” 

(Note the similarity to Scott Aaronson’s critique of a paper published by Nature claiming an implementation of a Shor-like algorithm on a classical device called “p-bits”. Scott offered a very quick debunking: “‘p-bit’ devices can’t scalably outperform classical computers, for the simple reason that they are classical computers.”)

If Google’s claim are correct – does it falsify your argument?

Yes! I predict that probability distributions described (robustly) by a noisy quantum circuit represent a polynomial time algorithm in terms of the description of the circuit. And by a polynomial time algorithm I mean one of small degree and with modest constants. The Google claim, if true, appears to falsify this prediction. (And for this you do not need quantum supremacy in the strongest form of the term.)

But is there no way that Google’s huge (or at least large) computational advantage coexists with what you say?

There is an issue that I myself am not completely sure about, regarding the possibility of chaotic behavior of quantum computers. Here is the classical analog: suppose you have n bits of memory inside a (classical) computer of m bits, and you ask about the complexity of the evolution of the n bits, which may be chaotic. Of course, we cannot expect that this chaotic computer can lead to a computation that requires thousands of years on a supercomputer. But can it lead to a robust computation which is superpolynomial in n (but polynomial in m)?

I don’t know the general answer but, in any case, I don’t think that it changes the situation here. If the Google claims stand, I would regard it as a very strong argument against my theory (even if the noisy distributions themselves are not robust). In any case, the question of whether the samples in the experiments represent robust distributions or are chaotic could and should be tested. (I discussed it in this post.)

If Google’s claims do not stand, will it confirm or give a strong support to your position that quantum supremacy and quantum error correction are impossible?

Failure of the Google claim will mainly support the position that quantum supremacy and quantum error correction require a substantial improvement in the quality of qubits and gates. It would give noteworthy support to my position (and probably would draw some attention to it), but I would not regard it as decisive support. Let me mention that various specific predictions that I made can be tested on the Google, IBM and other systems.

OK, so why do you think that the quality of qubits and gates cannot be improved?

Yes, this is the crucial point. One argument (that I already mentioned) for thinking that there is a barrier to the quality of gates and qubits is computational-theoretic: computationally speaking, NISQ devices are primitive classical computing devices, and this gives a strong reason to think that it will not be possible to reduce the error rate to the level allowing computational supremacy. But there is an additional argument: for a wide range of lower levels of noise, reducing the noise will have the effect of making the system more chaotic. So the first argument tells us that there is a small range of error rates that we can hope to achieve, and the second argument tells us that for a large range of lower error rates all we gain is chaos!

Links to my work: Three puzzles on mathematics, computation, and games, Proc. ICM 2018; The argument against quantum computers, to appear in Itamar Pitowsky’s memorial volume; The quantum computer puzzle, Notices AMS, May 2016.

Slides from my 2019 CERN lecture. My ICM 2018 videotaped lecture.

Correlated errors and quantum error correction

People mainly refer to your conjectures about correlated errors.

Yes, this reflects my work between 2005 and 2013 (and was a central issue in my debate with Aram Harrow), and I think it is an important part of the overall picture. But this issue is different from my argument against quantum computers, which represents my work between 2014 and 2019. I think that my earlier work on error correlation is a key (or a starting point) to the question: what do we learn from the failure of quantum computers about general properties of quantum noise? Indeed there are various consequences; some of them are fairly intuitive, some of them are counter-intuitive, and some of them are both. The basic intuition is that once your computation really makes use of a large portion of the Hilbert space, so will the error!

The major challenge is to put this intuition into formal mathematical terms and to relate it to the mathematics and physics of quantum physics.

I expressed a similar idea in a comment to Dave Bacon in 2006, when I wrote “I believe that you may be able to approximate a rank-one matrix up to a rank-one error. I do not believe that you will be able to approximate an arbitrary matrix up to a rank one matrix,” to which Dave replied “I will never look at rank one matrices the same 😉”. Dave Bacon is among the authors of the new Google paper.

What is the connection between the ability to achieve quantum supremacy and the ability to achieve quantum error-correction?

One of the main claims in my recent work is that quantum supremacy is an easier task than creating good-quality quantum error-correcting codes. For the attempted experiments by Google, we see a clear demonstration that achieving good-quality quantum error correction is harder than demonstrating quantum supremacy: the low-fidelity circuits that Google claims to achieve are far from sufficient for quantum error correction. The other claim in my argument is that quantum supremacy cannot be achieved without quantum error correction (and, in particular, not at all in the NISQ regime), and this claim is, of course, challenged by the Google claims.

You claim that without quantum error correction to start with we cannot reach quantum supremacy. But maybe John Martinis’ experimental methods have some seeds of quantum error correction inside them?

Maybe 🙂  See this 2017 cartoon from this post.

(Here is a nice overview video from 2014 about my stance and earlier work.)

The Google experiment: concerns and attacks

Besides the critique of experimental evidence that could be tested, did you find some concrete issues with the Google experiment?

Perhaps even too many 🙂 . In the first post and comments I raised quite a few objections. Some of them are relevant and some of them turned out to be irrelevant or incorrect. Anyway, here, taken from my first post, are some of my concerns and attempted attacks on the Google experiment:

  1. Not enough experiments with full histograms; not enough experiments in the regime where they can be directly tested
  2. Classical supremacy argument is overstated and is based on the performance of a specific algorithm
  3. Error correlation may falsify the Google noise model
  4. Low degree Fourier coefficients may fool the statistical test
  5. (Motivated by a comment by Ryan.) It is easier to optimize toward the new statistical test, the “linear cross-entropy” measure, compared to the old logarithmic one.
  6. “System calibration” may reflect an optimization towards the specific required circuit.
  7. The interpolation argument is unjustified (because of the calibration issue).
  8. (I forgot about this one; added Nov 27.) The paper assumes that the noisy distribution is a mixture of the correct distribution and the uniform distribution. But (as seen, e.g., by our toy model) this is not precisely the case.

We talked about items 1 and 2; what about 3-5? In particular, are correlated errors relevant to the Google experiment?

No! (As far as I can see.) Correlated errors mean that in the smoothing the flipped coordinates are positively correlated. But for a random circuit and the (Porter-Thomas) distribution D_C this makes no difference!

As for item 4, it turns out (and this was essentially known from the work of Gao and Duan) that in the case of random circuits (unlike the case of Boson Sampling) there are no low-degree Fourier coefficients to fool the statistical test.

As for item 5 (and also item 8), the answer is “nice observation, but so what?” (Let me note that the supplementary paper of the Google team compares and analyzes the linear and logarithmic statistical measures.) I expect that observation 8 may play a role in analyzing the Google experiment. (Dec 2: Let me add that I do expect point 8 to be important in figuring out the Google supremacy demonstration.)

What about the calibration? You got a little overworked about it, no?

In almost every scientific experiment there can be concerns about some sort of biased data selection toward the hoped-for result.

Based on the description of the calibration method, I got the impression that part of the calibration/verification process (“system calibration”) was carried out toward the experimental outcome for a specific circuit, and that this does not improve the fidelity, as the authors thought, but rather mistakenly tweaks the experimental outcomes toward a specific probability distribution. This type of calibration would be a major (yet innocent) flaw in the experiment. However, this possibility was excluded by a clear assertion of the researchers regarding the nature of the calibration process, and also by a more careful reading of the paper itself by Peter Shor and Greg Kuperberg. I certainly was, for a short while, way overconfident about this theory.

One nice (and totally familiar) observation is that a blind experiment can largely eliminate the concern of biased data selection.

So how do you feel, Gil?

When did you hear about the Google claim?

There were certainly some reasons to think that Google’s quantum supremacy was coming, for example a Quanta Magazine article by Kevin Hartnett entitled “Quantum Supremacy Is Coming: Here’s What You Should Know” and another article about Neven’s double-exponential law. Also, Scott Aaronson gave some hints about it.

On September 18, I met Thomas Vidick at a very nice conference of the Israeli and US academies on the future of computer science (it was mentioned in this post; links to all videos will be added here; Naftali Tishby’s lecture is especially recommended). Thomas told me about the expected new Google paper. Later that day I got involved in an amusing Facebook discussion about related matters. (See Barry Simon’s first comment on Preskill’s post and the subsequent 15 comments.)

When I introduced Thomas to Menachem Yaari (who was the president of the Israeli Academy), describing the former as a young superstar in quantum computation, Menachem’s reaction was: “but you do not believe in quantum computers.” I replied that I believe it is a fascinating intellectual area, and that perhaps I am even wrong about them being infeasible. Thomas said: “our area needs more people like Gil.” (!)

What about Scott?

Scott and I have been on friendly terms for many years and share a lot of interests and values. We are deeply divided regarding quantum computers and, naturally, I think that I am right and that Scott is wrong. In the context of the Google paper, Scott’s references to me and my stance were peculiar and even a little hostile, which was especially strange since at that time I did not have access to the paper and Scott was the referee of the paper.

Gil, how do you envision a situation where you are proven wrong?

If my theory of quantum computation being derailed by noise inherent in quantum gates is proven wrong, then physicists will say that I am a mathematician and mathematicians will say that I am a combinatorialist.

And how do you envision a situation where you are proven right?


Amazing! Keith Frankston, Jeff Kahn, Bhargav Narayanan, Jinyoung Park: Thresholds versus fractional expectation-thresholds

This post describes a totally unexpected breakthrough about expectations and thresholds. The result by Frankston, Kahn, Narayanan, and Park has many startling applications, and it builds on the recent breakthrough work of Alweiss, Lovett, Wu and Zhang on the sunflower conjecture. Warm congratulations to Keith, Jeff, Bhargav, and Jinyoung!

Let me mention that important updates on the matter of applying the intermediate value theorem to football (or soccer, as it is referred to in the US), which was discussed in this 2009 post, were added to that post. For readers interested in Google’s quantum supremacy news, here is a link to my main post on the matter.

Thresholds versus fractional expectation-thresholds

This morning the following paper appeared on the arXiv: Thresholds versus fractional expectation-thresholds by Keith Frankston, Jeff Kahn, Bhargav Narayanan, and Jinyoung Park.

Abstract: Proving a conjecture of Talagrand, a fractional version of the ‘expectation-threshold’ conjecture of Kalai and the second author, we show for any increasing family F on a finite set X that p_c(F)=O(q_f(F) \log \ell(F)), where p_c(F) and q_f(F) are the threshold and ‘fractional expectation-threshold’ of F, and \ell(F) is the largest size of a minimal member of F. This easily implies various heretofore difficult results in probabilistic combinatorics, e.g. thresholds for perfect hypergraph matchings (Johansson-Kahn-Vu) and bounded-degree spanning trees (Montgomery). We also resolve (and vastly extend) one version of the ‘random multi-dimensional assignment’ problem of Frieze and Sorkin. Our approach builds on recent breakthrough work of Alweiss, Lovett, Wu and Zhang on the Erdős-Rado ‘sunflower’ conjecture.

The expectation-threshold conjecture

The 2006 expectation-threshold conjecture gives a justification for a naive way to estimate the threshold probability of a random graph property. Suppose that you are asked about the critical probability for a random graph in G(n,p) to have a perfect matching (or a Hamiltonian cycle). You compute the expected number of perfect matchings and realize that when p is C/n this expected number equals 1/2. (For Hamiltonian cycles it will be C’/n.) Of course, if the expectation is one half, the probability of a perfect matching can still be very low; indeed, in this case an isolated vertex is quite likely, but when there are no isolated vertices the expected number of perfect matchings is rather large. Our 2006 conjecture boldly asserts that the gap between the value given by such a naive computation and the true threshold value is at most logarithmic in the number of vertices. Jeff and I tried hard to find a counterexample, but instead we managed to find more general and stronger forms of the conjecture that we could not disprove.
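To spell out the naive computation for perfect matchings (a back-of-the-envelope version of the example above, using only Stirling’s formula): the expected number of perfect matchings in G(n,p), for n even, is

\mathbb{E}[\#\text{perfect matchings}] = \frac{n!}{2^{n/2}(n/2)!}\, p^{n/2} = (n-1)!!\, p^{n/2}, \qquad (n-1)!! \approx \sqrt{2}\,(n/e)^{n/2}.

Setting this expectation to 1/2 gives p \approx e/n, i.e. p = \Theta(1/n), while the true threshold for a perfect matching is about \log n/n, so the gap in this example is indeed only a factor of order \log n, as the conjecture predicts.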

Conjectures by Talagrand

The expectation-threshold conjecture had some connections with a 1995 paper of Michel Talagrand entitled Are all sets of positive measure essentially convex? In a 2010 STOC paper, Are Many Small Sets Explicitly Small?, Michel formulated a weaker, fractional version of the expectation-threshold conjecture, which suffices for the various applications of the original conjecture. This conjecture (as well as a stronger form also posed by Talagrand) is now verified in the new paper!

Connection to isoperimetry

In our 2006 paper we tried to relate the expectation-threshold conjecture to various questions of independent interest related to stability theorems for discrete isoperimetric inequalities. This direction did not play a role in the new paper. Let me note that the isoperimetric problems served as partial motivation for the recent breakthrough results by Peter Keevash, Noam Lifshitz, Eoin Long, and Dor Minzer that are reported in this October 2018 post. See their paper Hypercontractivity for global functions and sharp thresholds.

A sample from the applications

  1. The threshold value for a perfect matching – this was already proved by Erdős and Rényi (1960), and it follows from the new results. The same goes for the threshold for connectivity.
  2. The threshold value for Hamiltonian circuits – posed as a problem by Erdős and Rényi, it was solved by Korshunov (1976) and by Pósa (1976).
  3. The threshold for perfect matchings in 3-uniform hypergraphs – posed by Schmidt and Shamir (1983) and settled by Johansson, Kahn, and Vu. (It was one of the motivations for my 2006 paper with Jeff.)
  4. The threshold for bounded-degree spanning trees – open for a long time and settled by Montgomery (2019).

Let me mention that in various cases the gap between the (fractional) expectation threshold and the threshold is a smaller power of log n, or a constant, or exhibits different behavior. A general theory explaining this is still missing.

Connection to the sunflower breakthrough

What did play a major role in the new development was the recent breakthrough work of Alweiss, Lovett, Wu and Zhang on the Erdős-Rado ‘sunflower’ conjecture. (See this post.) I expected that the method of the sunflower paper would have major applications, but this application took me by surprise.

 
