A couple of weeks ago, a few days after I heard an excellent lecture about it by Alexey Pokrovskiy in Oberwolfach, the paper A proof of Ringel’s Conjecture by Richard Montgomery, Alexey Pokrovskiy, and Benny Sudakov was arXived. Congratulations to Richard, Alexey, and Benny!

Here is the abstract:

A typical decomposition question asks whether the edges of some graph *G* can be partitioned into disjoint copies of another graph *H*. One of the oldest and best known conjectures in this area, posed by Ringel in 1963, concerns the decomposition of complete graphs into edge-disjoint copies of a tree. It says that any tree with *n* edges packs 2n+1 times into the complete graph K_{2n+1}. In this paper, we prove this conjecture for large *n*.

Previous posts: Test your intuition 43 – What will be the distribution of anabraists and algasisits; MIP*=RE.

First the authors construct an approximate decomposition (they divide the problem into two cases) and then they apply a powerful method called **absorption**. Here is what they write:

To prove our finishing lemmas, we use an absorption strategy. Absorption has its origins in work by Erdős, Gyárfás and Pyber [5] and Krivelevich [14], but was codified by Rödl, Ruciński and Szemerédi [20] as a versatile technique for extending approximate results into exact ones. For both Case A and Case B we use a new implementation of distributive absorption, a method introduced by the first author in [16].

Absorption is one of the most powerful current methods in extremal combinatorics. It is important in the construction of designs (see here, here, here, and here, and my own survey), and in the solution of several other important conjectures.

Here are links to the pioneering papers mentioned above.

P. Erdős, A. Gyárfás and L. Pyber [5] Vertex coverings by monochromatic cycles and trees. Journal of Combinatorial Theory, Series B, 51:90–95, 1991.

M. Krivelevich [14] Triangle factors in random graphs. Combinatorics, Probability and Computing, 6: 337–347, 1997.

V. Rödl, A. Ruciński and E. Szemerédi [20] A Dirac-type theorem for 3-uniform hypergraphs. Combinatorics, Probability and Computing, 15:229–251, 2006.

And Montgomery’s paper on distributive absorption:

R. Montgomery [16] Spanning trees in random graphs, Advances in Mathematics, 2019

A related paper with an amusing name and an amusing abstract is: E. Szemerédi, “Is laziness paying off? (‘Absorbing’ method).” The abstract of the paper reads “We are going to mention some embedding problems. The main ideas of most of the solution of these problems are that do not work too much. Finish the job almost, then use some kind of absorbing configuration (that is what we call laziness).”

Montgomery, Pokrovskiy, and Sudakov’s paper concludes with two related famous conjectures:

Motivated by Ringel’s conjecture, in 1967 Alexander Rosa conjectured that every tree *T* with *n* edges admits a **graceful labeling**, namely a labeling of the vertices by the numbers 0,1,…,n such that if we label an edge by the absolute value of the difference of the labels for its two vertices, then all the edge labels are different!
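To make the definition concrete, here is a minimal sketch of a graceful-labeling checker; the labeling below is the standard “zigzag” labeling of a small path, a hypothetical example of my own choosing.

```python
def is_graceful(edges, labels):
    """Check whether a vertex labeling of a tree is graceful: vertex labels
    lie in {0, ..., n} (n = number of edges), are distinct, and the induced
    edge labels |labels[u] - labels[v]| are exactly {1, ..., n}."""
    n = len(edges)
    if any(not (0 <= labels[v] <= n) for v in labels):
        return False
    if len(set(labels.values())) != len(labels):
        return False
    edge_labels = {abs(labels[u] - labels[v]) for (u, v) in edges}
    return edge_labels == set(range(1, n + 1))

# The path 0-1-2-3-4 (n = 4 edges) with the zigzag labeling 0, n, 1, n-1, ...
path_edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
zigzag = {0: 0, 1: 4, 2: 1, 3: 3, 4: 2}
print(is_graceful(path_edges, zigzag))  # True: edge labels are 4, 3, 2, 1
```

The zigzag labeling shows that paths are graceful; the conjecture asserts the same for every tree.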

This famous conjecture (stronger than Ringel’s conjecture) is known as the **Graceful Tree Conjecture** and also as the **Ringel–Rosa–Kotzig conjecture**. It is still open.
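Why is gracefulness stronger than Ringel’s conjecture? A classical observation of Rosa: place a gracefully labeled tree with *n* edges on the vertex set Z_{2n+1} and rotate it; the 2n+1 rotated copies are edge-disjoint and decompose K_{2n+1}. A minimal sketch of this rotation argument for a small gracefully labeled path:

```python
def rotation_copies(n, labeled_edges):
    """Given a tree with n edges whose endpoints are graceful labels in
    {0, ..., n}, return its 2n+1 rotations on the vertex set Z_{2n+1}.
    Gracefulness guarantees the copies are pairwise edge-disjoint."""
    m = 2 * n + 1
    return [{frozenset(((a + r) % m, (b + r) % m)) for (a, b) in labeled_edges}
            for r in range(m)]

n = 4
# A gracefully labeled path: vertices carry labels 0, 4, 1, 3, 2 in order
labeled_path = [(0, 4), (4, 1), (1, 3), (3, 2)]
copies = rotation_copies(n, labeled_path)
all_edges = set().union(*copies)
# (2n+1) copies x n edges = n(2n+1) = C(2n+1, 2), so disjointness
# means the copies exactly decompose K_9:
print(len(all_edges) == (2 * n + 1) * n)  # True
```

The point is that each graceful edge label d gives one edge in each “difference class” of Z_{2n+1}, so rotations never collide.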

Gyárfás conjecture (1976): Let T_1, T_2, …, T_n be trees with T_i having i vertices for each i. Then the edges of K_n can be decomposed into n trees which are isomorphic to T_1, T_2, …, T_n, respectively.


In the community of mamathetitians in a certain country there are mamathetitians in two areas: Anabra (fraction ** p** of the mamathetitians) and Algasis (fraction

**The objective value** * v(x)* of every mamathetitian

The top-ranked university hires the best two fresh Ph.D.s according to average value in the eyes of the faculty, the second university hires the second-best two students, etc.

Let’s assume that, to start, *p = 0.25*, so

We can also ask about variations to the model:

Further test your intuition, imagination, and programming skill: How does the answer change if you change the model in various ways? Change the value of *p*? Make the initial distribution random across departments? Add noise to the subjective value? Make the bias in subjective evaluation **asymmetric**? Make different assumptions on the fresh Ph.D.s? Etc.

References to similar models considered in the literature are most welcome.

Disclaimer: I do not know the answers.

Here are very interesting outcomes by Joshua Paik. We see a sort of zebra-stripe pattern between Algasis-dominated departments and Anabra-dominated departments.

Artist: Heidi Buck, **“Catch a Dragon by the Tail”** (source)

A few months ago we briefly described the solution to the infamous upper tail problem for random graphs. More recently, Gady Kozma and Wojciech Samotij solved the infamous lower tail problem.

*Remark: I chose only a few advances, mainly from the last few months and mainly in probabilistic and extremal combinatorics. In the lecture I will briefly mention others as well.*

Colloquium: Gil Kalai (HUJI) – Some recent advances in combinatorics

I will discuss some recent advances in combinatorics, among them the disproof of Hedetniemi’s conjecture by Shitov, the proof of the sensitivity conjecture by Huang, the progress on the Erdős–Rado sunflower conjecture by Alweiss, Lovett, Wu, and Zhang, and the progress on the expectation threshold conjecture by Frankston, Kahn, Narayanan, and Park.

Location:

Manchester Building (Hall 2), Hebrew University Jerusalem

Combinatorics Seminar HUJI

When: Monday Jan 20, 10:00-11:45

Where: C-400, CS building

Speaker: Wojciech Samotij (TAU)

Title: The lower tail for triangles in random graphs

Abstract:

Let X denote the number of triangles in the random graph G(n,p). The problem of determining the asymptotics of the logarithmic lower tail probability of X has attracted considerable attention of both the combinatorics and the probability communities. In this talk, we explain how one can bound this probability from above by solving a natural combinatorial optimisation problem that generalises Mantel’s/Turán’s theorem. Moreover, we will prove a matching lower bound in a certain range of the parameters. Our argument, which employs ideas from an earlier joint work with Gady Kozma, Tom Meyerovitch, and Ron Peled, exploits connections between entropy and independence.

This is joint work with Gady Kozma.

A tweet by Ryan

The new paper dramatically improved the 2019 result by Anand Natarajan and John Wright asserting that “NEEXP in MIP*”.

In this post I will talk a little about two corners of this work and neglect many others: I will describe a few conjectures about infinite groups related to the new result, and I will give a very gentle beginning-of-an-introduction to interactive proofs. I will also give some useful links to papers, presentations and blog posts.

**Boris Tsirelson is one of my mathematical heroes. **The new paper gives a negative answer to an important problem posed by Tsirelson. (Here is a great interview with Boris.)

My Qstate: a master project (Vidick’s memories of the road since Julia Kempe offered him the problem 14 years ago; with a lot of scientific content). Older related posts on My Qstate I, II, III.

A Notices AMS article by Vidick: From Operator Algebras to Complexity Theory and Back.

Shtetl Optimized: MIP*=RE; GLL (Ken Regan) Halting Is Poly-Time Quantum Provable ; Other posts on blogs: WIT (Boaz Barak); CC (Lance Fortnow); WN; Posts on an earlier 2019 result MIP* contains NEEXP. Updates: A post on GLL on halting, two excellent posts in Hebrew I, II.

Quanta Magazine: An article by Kevin Hartnett about an earlier result MIP*=NEEXP; An article about the new result.

Older posts here: about Vidick’s 2012 paper (among various updates); a 2008 post mentioning sofic groups (among various updates).

Videotaped lectures from our recent winter school: Thomas Vidick on quantum protocols, Video 1, Video 2, Video 3.

(I am thankful to Alex Lubotzky for telling me about the algebra background.)

*Links: Finitary approximations of groups and their applications, by Andreas Thom, Andreas’ ICM 2018 videotaped lecture. And a great video: Best of Andreas Thom. See also this paper Stability, cohomology vanishing, and non-approximable groups by Marcus De Chiffre, Lev Glebsky, Alex Lubotzky, and Andreas Thom.*

And (thanks Mikael de la Salle!) a recent Book by Gilles Pisier:

Tensor products of C*-algebras and operator spaces: The Connes–Kirchberg problem

The assertion of Connes’ embedding conjecture refuted in the MIP*=RE paper would imply several (outrageous :)) stronger conjectures that are still open. One is the conjecture of Connes that every group is “hyperlinear.” Another famous conjecture (an affirmative answer to a question posed by Gromov) is that every group is sofic. As sofic groups are hyperlinear, we can now expect (even more than before) that non-sofic and even non-hyperlinear groups will be found. Here is a rough short explanation of what these conjectures are about. ~~(Kirchberg’s conjecture is another group-theoretic question of this nature.)~~

Every finite group is a permutation group and is a linear group. This is not the case for infinite groups, and there are various interesting notions of “approximately a permutation group” (this is “sofic”) and “approximately linear” (this is called “hyperlinear”).

Given a group Γ we want to find a sequence of functions f_n

- from Γ to the symmetric groups S_n,
- or from Γ to the unitary groups *U(n)*,

such that asymptotically, as *n* grows, these functions are “almost homomorphisms” with respect to certain metrics *DIST* on S_n or U(n), respectively. This means that for every two elements g, h ∈ Γ,

DIST(f_n(gh), f_n(g) f_n(h))

tends to zero when *n* tends to infinity.

Now,

- **Sofic group** refers to the normalized Hamming metric on the symmetric groups.
- **Hyperlinear group** refers to metrics given by the normalized Hilbert–Schmidt norm on the unitary groups.
- **MF-group**: again the unitary groups, but with the operator norm this time.
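To make the first of these metrics concrete, here is a minimal sketch of the normalized Hamming distance on S_n; the two permutations are arbitrary small examples of my own. A sofic approximation is a sequence f_n for which the distance between f_n(gh) and the composition f_n(g)∘f_n(h) tends to zero.

```python
def hamming_dist(p, q):
    """Normalized Hamming distance on S_n: the fraction of points of
    {0, ..., n-1} on which the two permutations (as tuples) disagree."""
    n = len(p)
    return sum(p[i] != q[i] for i in range(n)) / n

def compose(p, q):
    """Composition of permutations: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

p = (1, 2, 3, 4, 0)   # a 5-cycle
q = (0, 1, 2, 4, 3)   # a transposition of the last two points
print(hamming_dist(p, q))  # 0.8 -- they disagree on 4 of 5 points
# An "almost homomorphism" would make hamming_dist(f(gh), compose(f(g), f(h)))
# small for all pairs g, h.
```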

And there are various other metrics that were also considered. The assertion of the famous embedding conjecture of Connes on von Neumann algebras (now refuted by the new result) implies that every group is hyperlinear.

**A remaining wish list: find a non-sofic group; find a non-hyperlinear group; refute Kirchberg’s conjecture (if it was not already refuted).**

*Links: here are slides of a great talk by Yael Kalai: The evolution of proof in computer science; a blog post on this topic by Yael Kalai; and a post here about Yael’s 2018 ICM paper and lecture.*

A decision problem is in P if there is a polynomial-time algorithm (in terms of the input size) to decide whether the answer is yes or no. A decision problem is in NP if there is a proof for a YES answer that can be verified in polynomial time.

Here are two examples: the question of whether a graph has a perfect matching is in P. The question of whether a graph has a Hamiltonian cycle is in NP: if the answer is yes, a prover can give a proof that requires the verifier only a polynomial number of steps to verify.
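As a small illustration of the NP verification step, here is a hypothetical polynomial-time verifier for a claimed Hamiltonian cycle: finding the cycle may be hard, but checking a proposed one is easy.

```python
def verify_hamiltonian_cycle(n, edges, cycle):
    """Polynomial-time verifier: `cycle` is a claimed ordering of all n
    vertices; accept iff it visits every vertex exactly once and every
    consecutive pair (cyclically) is an edge of the graph."""
    if sorted(cycle) != list(range(n)):
        return False
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((cycle[i], cycle[(i + 1) % n])) in edge_set
               for i in range(n))

# A 4-cycle graph: the certificate [0, 1, 2, 3] checks out, [0, 2, 1, 3] fails
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(verify_hamiltonian_cycle(4, edges, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_cycle(4, edges, [0, 2, 1, 3]))  # False: {0,2} is no edge
```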

IP is a complexity class based on a notion of interactive proof where, based on a protocol of questions and answers, the prover can convince the verifier (with arbitrarily high probability) that the answer is yes. Following a sequence of startling developments, Adi Shamir proved that IP equals the quite large complexity class PSPACE. When we consider several non-interacting provers (two provers suffice), the computational power, denoted MIP, is even larger: László Babai, Lance Fortnow, and Carsten Lund proved that MIP=NEXP! NEXP is the class of decision problems for which, if the answer is yes, a prover can give a proof that requires the verifier (at most) an exponential number of steps to verify.

We replace the model of classical computation with quantum computation. Each of the two provers, Prover1 and Prover2, has access to a separate set of *m* qubits, but they can prepare in advance a complicated quantum state on those *2m* qubits. When we run the verification protocol each prover has access only to its own *m* qubits and, as in the classical case, the two provers cannot communicate. These types of verification protocols define the complexity class MIP*. In 2012 Tsuyoshi Ito and Thomas Vidick proved that MIP* contains NEXP. In this 2012 post I reported an unusual seminar we ran on the problem.

Interactive quantum lecture: We had an unusual quantum seminar presentation by Michael Ben-Or on the work A multi-prover interactive proof for NEXP sound against entangled provers by Tsuyoshi Ito and Thomas Vidick. Michael ran Vidick’s recent videotaped talk on the matter, and from time to time he and Dorit acted as a pair of provers with the rest of the audience as verifier. (Michael was one of the authors of the very first paper introducing multi-prover interactive proofs.)

Let me mention also a related 2014 paper by Yael Kalai, Ran Raz, and Ron Rothblum: How to delegate computations: the power of no-signaling proofs. They considered two provers that are limited by the “no-signaling principle” and showed that the power of such interactive proofs is exactly EXP. (Here is a videotaped lecture by Ran Raz.)

In April 2019, Anand Natarajan and John Wright uploaded a paper with a proof that MIP* contains NEEXP. (NEEXP is the class of decision problems for which, if the answer is yes, a prover can give a proof that requires the verifier (at most) a doubly exponential number of steps to verify.)

Here is a nice quote from Hartnett’s Quanta article regarding the Natarajan–Wright breakthrough:

Some problems are too hard to solve in any reasonable amount of time. But their solutions are easy to check. Given that, computer scientists want to know: How complicated can a problem be while still having a solution that can be verified?

Turns out, the answer is: Almost unimaginably complicated.

In a paper released in April, two computer scientists dramatically increased the number of problems that fall into the hard-to-solve-but-easy-to-verify category. They describe a method that makes it possible to check answers to problems of almost incomprehensible complexity. “It seems insane,” said Thomas Vidick, a computer scientist at the California Institute of Technology who wasn’t involved in the new work.

*Now with the new result, I wonder if this bold philosophical interpretation is sensible: There is a shared quantum state that will allow two non-interacting provers (with unlimited computational power) to convince a mathematician if a given mathematical statement has a proof, and also to convince a historian or a futurist about any question regarding the past or future evolution of the universe.*

(I forgot to explain what RE is. Here is the description from the paper itself.)

RE is the class of recursively enumerable languages, i.e. languages *L* such that there exists a Turing machine *M* such that *x ∈ L* if and only if *M* halts and accepts on input *x*. Note that, in addition to containing all decidable languages, this class also contains undecidable problems such as the Halting problem, which is to decide whether a given Turing machine eventually halts.
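As a toy illustration of the “halts and accepts exactly on members” flavor of RE, here is a hypothetical semi-decider sketch for the set of starting values whose 3n+1 trajectory reaches 1. A true semi-decider would have no step budget and would simply run forever on a non-member; the budget below only keeps the demo finite.

```python
def collatz_semi_decider(n, budget=10**6):
    """Toy semi-decider for L = {n >= 1 : the 3n+1 trajectory reaches 1}.
    Halts and accepts on members; on a hypothetical non-member it would
    run forever (the budget is only a safeguard for this demo)."""
    steps = 0
    while n != 1 and steps < budget:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return n == 1

print(collatz_semi_decider(27))  # True: 27 reaches 1 (in 111 steps)
```

Whether this particular L equals all of the positive integers is exactly the Collatz problem; the semi-decider works either way, which is the point of RE.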

The negative answer to Tsirelson’s problem asserts, roughly, that there are types of correlations that can be produced by an infinite quantum system but cannot even be approximated by a finite system. Connes’ 1976 embedding conjecture (now refuted) from the theory of von Neumann algebras asserts that “every type II₁ von Neumann factor embeds into an ultrapower of the hyperfinite II₁ factor.”

The abstract of the new paper mentions a few other works that are important for the new proof.

Anand Natarajan and Thomas Vidick. Low-degree testing for quantum states, and a quantum entangled games PCP for QMA. (FOCS, 2018.)

Anand Natarajan and John Wright. NEEXP ⊆ MIP∗ (FOCS 2019) (We mentioned it above.)

Joe Fitzsimons, Zhengfeng Ji, Thomas Vidick, and Henry Yuen. Quantum proof systems for iterated exponential time, and beyond.

The abstract also mentions two papers about the connections with Tsirelson’s problem and the Connes embedding conjecture:

Tobias Fritz. Tsirelson’s problem and Kirchberg’s conjecture. Reviews in Mathematical Physics, 24(05):1250012, 2012. (A few enlightening comments by Fritz on SO: I, II)

Marius Junge, Miguel Navascues, Carlos Palazuelos, David Perez-Garcia, Volkher B Scholz, and Reinhard F Werner. Connes’ embedding problem and Tsirelson’s problem. Journal of Mathematical Physics, 52(1):012102, 2011.

Let me also mention

Narutaka Ozawa. About the Connes embedding conjecture. Japanese Journal of Mathematics, 8(1):147–183, 2013.

From left: **Christopher Hacon, Claire Voisin, Ulrike Tillmann, François Labourie**

**Update:** This was a great event with four great inspiring talks.

“* Abel in Jerusalem*” will be the 10th one-day conference with lectures aimed at a mathematically educated and interested audience, with the objective of increasing public awareness of mathematics and of the Abel Prize.

**9:40-10:00** Opening Remarks by **Hans Petter Graver**, President of the Norwegian Academy of Science and Letters

**10:00-11:00** **Christopher Hacon** (University of Utah): *Geometry of complex algebraic varieties* Abstract

11:00-11:30 Coffee Break

**11:30-12:30** **Ulrike Tillmann** (Oxford University): *Manifolds via cobordisms: old and new* Abstract

12:30-14:30 Lunch Break

**14:30-15:30** **Claire Voisin** (Collège de France): *Diagonals in algebraic geometry* Abstract

15:30-16:00 Coffee Break

**16:00-17:00** **François Labourie** (Université Côte d’Azur): *Counting curves and building surfaces: some works of Maryam Mirzakhani* Abstract

17:00-17:15 Closing remarks

17:15 – Transport to a reception at the King David Hotel

17:30 – Reception

19:00 – End of program

Abstract: Algebraic varieties are geometric objects defined by polynomial equations. The minimal model program (MMP) is an ambitious program that aims to classify algebraic varieties. According to the MMP, there are 3 building blocks: Fano varieties, Calabi-Yau varieties and varieties of general type, which are higher-dimensional analogs of Riemann surfaces of genus 0, 1, and greater than or equal to 2. In this talk I will recall the general features of the MMP and discuss recent advances in our understanding of Fano varieties and varieties of general type.

Abstract: Manifolds are a fundamental mathematical structure of central importance to geometry. The notion of cobordism has played an important role in their classification since Thom’s work in the 1950s. In a different way, cobordisms are key to Atiyah’s axiomatic formulation of topological quantum field theory. We will explain how the two seemingly unrelated appearances of cobordisms have come together to give us a new approach to study the topology of manifolds and their diffeomorphisms. In addition to my own work, the talk will draw on results by Madsen, Weiss, Galatius and Randal-Williams.

Abstract: The diagonal of a manifold appears naturally in topology, for example in the Hopf formula. Furthermore, the Künneth decomposition for the class of the diagonal controls the torsion in integral cohomology. In the context of algebraic geometry, I will discuss a weaker notion of decomposition of the diagonal, which has important applications in the study of rationality of algebraic varieties.

Abstract: I will use some works of Maryam Mirzakhani as a thread to explain very basic and elementary facts of geometry : how to build surfaces, how to count curves on surfaces, what is hyperbolic geometry. The talk will be elementary and most of it will be targeting undergraduate students.

Lior’s talk reminded me that Peter Frankl and Vojta Rödl told me in the late 80s about strong variants of the Brown–Erdős–Sós problem that imply Szemerédi’s theorem. With the kind help of Vojta Rödl, who is here, I can tell you about two such variants. (But I am solely responsible for all mistakes.)

Let *f(n,v,e)* be the largest number of edges in a 3-uniform hypergraph with *n* vertices in which no *v* vertices span *e* or more edges.

**Brown–Erdős–Sós Conjecture:** f(n, e+3, e) = o(n²), for every e ≥ 3.

For e = 3 the BES conjecture is the famous Ruzsa–Szemerédi theorem (which implies Roth’s theorem on 3-term arithmetic progressions). For e = 4 the conjecture is painfully open (and is stronger than Szemerédi’s theorem for 4-term APs).

Here is an even stronger conjecture that implies Szemerédi’s theorem for arithmetic progressions of arbitrary length.

Consider the following 3-uniform hypergraph *H(v)* with *v+3* vertices and *v* edges: start with a path with v+1 vertices (and v edges). Color the edges of the path alternately red and brown and add a red vertex and a brown vertex. Now form a 3-uniform hypergraph whose edges are the red edges of the path joined with the red vertex and the brown edges of the path joined with the brown vertex.
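The construction is concrete enough to code. A small sketch (the vertex naming is my own convention: path vertices 0, …, v, then the red vertex v+1 and the brown vertex v+2):

```python
def build_H(v):
    """Construct the 3-uniform hypergraph H(v): a path 0-1-...-v with edges
    colored alternately red/brown, a red vertex v+1 joined to the red path
    edges, and a brown vertex v+2 joined to the brown path edges."""
    red, brown = v + 1, v + 2
    hyperedges = []
    for i in range(v):                       # path edge {i, i+1}
        extra = red if i % 2 == 0 else brown  # alternate colors along the path
        hyperedges.append(frozenset((i, i + 1, extra)))
    return hyperedges

H = build_H(5)
print(len(H))                # 5 hyperedges (= v)
print(len(set().union(*H)))  # 8 vertices (= v + 3)
```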

**Conjecture:** Every *H(v)*-free 3-uniform hypergraph with *n* vertices has o(n²) edges.

For another strengthening of the BES conjecture that implies Szemerédi’s theorem see the paper A remark on Pisier-type theorems of Erdős, Nešetřil and Rödl. Famously, the celebrated hypergraph regularity lemma (and the associated counting lemma) implies another Turán-type theorem for hypergraphs that in turn implies Szemerédi’s theorem and its high-dimensional extensions.

Although the theorem is about 3-uniform hypergraphs, the proof relies on the hypergraph regularity lemma for higher-dimensional hypergraphs.

With Vojta in 1985 (or 86) and today (2020)

As part of the 2019/2020 TAU theory fest, tomorrow, **Friday, January 3, 2020**, is a Boolean function day at Tel Aviv University. The five speakers are **Esty Kelman, Noam Lifschitz, Renan Gross, Ohad Klein**, and **Naomi Kirshner**.

For more (and probably not all) mathematical activities in Israel these days including Abel in Jerusalem see this post.

The Abel in Jerusalem site now has the titles and abstracts of all talks. The talks are aimed at a level from which advanced undergraduate students can learn and enjoy them. (The conference celebrating Boris Solomyak’s birthday was just added to the post on current math activities in Israel.)

Windows in theory reports from the theory fest on an exciting scientific bet between Elchanan Mossel and Subhash Khot. (Raising again the question of whether scientific bets are good.) Also in Windows in theory: a report by Dorit Aharonov on the winter school on mathematics of quantum computing at HUJI.

**My assessment continues to be that both the Google supremacy claims and their other central claim about fidelity estimation are incorrect.**

(Dec 23, 2019: Following some renewed discussion in the last days on the terminology I proposed to replace the term “quantum supremacy” with **“HQCA – Huge Quantum Computational Advantage”** and I myself may try to follow my new term in my own lectures/papers in the future and see how it goes. In fact, I already used HQCA in our Sunday’s Kazhdan seminar where I lectured about noise stability and sensitivity for Boolean functions, Boson Sampling, and quantum circuits (lecture notes).)

**Note:** you may click on a slide to see it in full screen.

My lecture is not about quantum computers and quantum supremacy in general but about the Google quantum supremacy claims. (If you want to know more about my general argument against quantum computers, quantum supremacy and quantum error correction go to the previous post Gil’s Collegial Quantum Supremacy Skepticism FAQ and to several of my papers and presentations linked there.)

In this presentation I will concentrate on aspects of the Google experiment that are “too good to be true”. (Which is a bad sign, not a good sign.)

There is an amazing agreement between the fidelity as estimated from the experimental data and a very simple high-school model based on the probabilities of individual qubits and gates malfunctioning. (An even more simplified version, below in purple, requires just three parameters: the number n of qubits, the number g1 of 1-qubit gates, and the number g2 of 2-qubit gates.)

Formula (77) itself is not surprising, but the terrific success of Formula (77) may serve as a smoking gun for the claim that Google’s experiment has serious methodological problems.
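To make the structure of such an estimate concrete, here is a minimal sketch of a Formula-(77)-style prediction: treat every component as failing independently, so the circuit fidelity is the product of the individual success probabilities. The error rates and gate counts below are hypothetical placeholders of my own, not the values measured in the Google experiment.

```python
def predicted_fidelity(n, g1, g2, e1=0.0016, e2=0.0062, er=0.038):
    """Simplified Formula-(77)-style estimate: the circuit fidelity is the
    product of per-component success probabilities, assuming every 1-qubit
    gate (error e1), 2-qubit gate (error e2), and readout (error er) fails
    independently. All rates here are illustrative stand-ins."""
    return (1 - e1) ** g1 * (1 - e2) ** g2 * (1 - er) ** n

# A hypothetical 53-qubit circuit with 1113 1-qubit gates and 430 2-qubit gates
F = predicted_fidelity(53, 1113, 430)
print(F)  # a small number on the order of 1e-3
```

The debate in the post is precisely whether such a naive independence product should predict an experiment with 10–20% precision.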

I was not the only one to be amazed. Here is what John Martinis, the scientific leader of the Google supremacy experiment, said about it (Nov 1, 2019). Read carefully!

The accuracy of Formula (77) is the most amazing thing about the data, something that came as a big surprise, and that (jokingly) will let quantum scientists keep their jobs.

And here is what I think

No, no: you cannot estimate with a precision of 10–20% the probability of failure of a physical system with 1,000 interacting elements as the product of 1,000 error probabilities.

**This remarkable agreement is a major new discovery, and it is not needed for building quantum computers. It is only needed for the extrapolation argument leading the Google team to supremacy.**

Even if quantum computers are eventually built, this is something that we are not going to witness. It is interesting to check whether we see anything remotely like this for the IBM quantum computers.

Here on the left you see, for various values of *n* (the number of qubits), two experiments with different circuits that lead to entirely different probability distributions. Since each pair has a similar number of gates, Formula (77) leads to very close fidelity estimates, which are so accurate that the green and blue lines coincide! We do not expect to see such close agreement in experimental data.

Blue, orange and green analytic smooth curves representing three distributions. For all three you also see empirical samples from the distributions. For two of them the samples are perfect; for the other one you see a sample based on a complicated physical experiment.

**Test your intuition:** can you tell the difference? Can you tell which is which?

(You can also test your intuition which was the single slide presented at the afternoon debate.)

**Blind tests** are standard in scientific experiments and are quite easy to implement here.

The meaning of the surprising statistical success of Formula (77) should be **carefully examined**.

To put my view in context, I devote one slide for my general argument against quantum computers, quantum supremacy and quantum error correction.

I mentioned that most experts do not agree with my argument or do not even understand it, and for some of them this may give reasons for skepticism also about my critique of the Google experiment. I personally think that my general argument is good, but I don’t think it is ironclad, and I am pleased to see this matter explored experimentally (properly). I also think that my critique of the Google demo stands well on its own.

And what may convince me to change my mind? Poetry!

Let me add that all my critique as described here and elsewhere was first presented to the Google team.

From right to left: Sandy Irani, me, Boaz Barak, Adam Bouland, Dorit Aharonov, and on the left Umesh Vazirani, Scott Aaronson, and Sergio Boixo. (Collage; More pictures – below.)

**Answer to Test your intuition:** The orange graph is the one that comes from the experiments.

At the beginning of the debate, I nudged Sergio to upload some (additional; promised) data that is needed for verification and analysis of their results.

Formula (77) was discussed in some detail in the Q&A of the informal talk and also in the panel/debate. (I raised the question of whether the Google experiment is correct, and specifically the issue with Formula (77), at 35:00, and Formula (77) was discussed for the next 25 minutes.) Of course, the surprising issue is the accurate predictions that Formula (77) gives.

Sergio briefly explained the statistical rationale for such precision. The way I see it, the quality of the prediction of (77) requires two assumptions: (1) no systematic errors, (2) statistical independence. Both of these assumptions seem utterly unrealistic. No systematic errors means that the average error estimations, say for read-out errors or for gate errors, are precise and fluctuations are symmetric. This is very unrealistic. The assumption of statistical independence also seems unrealistic. (Positive correlations would actually lead to better fidelity.) Dorit, Boaz and several other people (and also Peter Shor earlier here on the blog) proposed that the remarkable predictive power of (77) and, in particular, the required statistical independence may follow from having a random circuit. I don’t understand that.

Besides the concrete attempts to understand the predictive power of Formula (77) there were some general comments about it. Adam opined that *in physics* the data fits the models very well. Scott joked that here I am arguing that the results fit the model too well, and he could imagine me arguing in another universe that the results don’t fit the model well enough. At another point he said, “First the data is not good enough, and now the data is too good; what will make you happy, Gil?” and I answered that the data should be *reliable*.

Does “supremacy” (or HQCA), in general and in the context of Google’s experiment, violate the extended Church–Turing thesis (ECTT)? Dorit claimed that the answer is negative since (roughly) the experiments cannot be scaled. Sergio sort of agreed and said they only claim to *challenge* the ECTT. (Indeed the fidelity goes exponentially to zero, and even if producing such samples is beyond the power of a classical computer, the sample sizes need to be exponential; it is an interesting question whether every scalable supremacy demonstration requires quantum fault tolerance.) A point that I made is that if we further assume a certain “naturalness” principle, namely that, in practice, the constants in computational complexity asymptotics are mild, then an astronomical speed-up, as claimed by Google, can be considered as violating the ECTT.

Actually, in the earlier lecture I presented another slide about my initial feeling towards the Google (then leaked) paper. I was surprised by the trillions-fold speed-up, and regarded the 300,000,000,000 speed-up as expressing some “supremacy fever”: why not start with a more modest speed-up but with convincing direct evidence?

Why should the trillions speed-up not surprise us? Adam Bouland commented that this is to be expected when we think about exponential speed-ups. BTW, Adam gave very nice lectures on quantum algorithms and quantum supremacy.

- I was slowly losing my voice, so in the earlier lecture Or Sattath read the quote from Martinis, and Adam read the poem by Renan. (I should try that also in talks where I am not losing my voice.)
- Sandy asked about other reasons for skepticism regarding the Google claims and Scott mentioned the possibility that much better classical algorithms will be found.
- Both me and Scott (and perhaps others in the panel as well) emphasized the importance of replications.
- Adam was Scott’s Ph.D. student and Scott was Umesh’s Ph.D. student, so we had three academic generations on the panel!
- Toward the end Scott described briefly the decade-old history, theory+experiment, of quantum supremacy.
- There were various other interesting issues that were raised.
- For history: the video flipped left and right. In reality Sandy was farthest to the right and I was second from the right.

Dorit asked about quantum error correction and I asked specifically about distance-3 surface code and Sergio said that the quality of gates is insufficient for that. If I understood correctly the error rate needs to be reduced by a factor of five. (The fact that good quality quantum error correction requires achieving lower error rate than what is required for quantum supremacy is an important ingredient of my general argument against quantum computers.)

Dorit and others raised the issue of the noise model and discussed some toy noise models. There was some discussion on correlated errors. (This is unrelated to the statistical dependence of (77).) I pointed out that correlated errors are irrelevant to random circuits (but relevant to quantum error-correction). Later in the debate I mentioned the concern regarding not having large enough samples to determine the empirical noisy distribution.

At some point Umesh referred to another issue that I raised, which is that the noise model in the Google paper seems oversimplified. I came back to it briefly but we did not discuss it further. (At the Q/A part of my earlier lecture Boaz made some interesting comments about identifying the effect of the errors in the last round of the computation.)

I also repeated my suggestion of “Beckner noise” (and correlated variants) as a good toy noise model for noisy quantum circuits.
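To make the suggestion concrete, here is a minimal sketch (the function name and interface are mine, not taken from any paper under discussion) of a Beckner-type noise operator acting on a distribution over *n*-bit strings: each bit is independently flipped with probability (1−ρ)/2, which is equivalent to multiplying the degree-*k* Fourier–Walsh coefficients of the distribution by ρ^k.

```python
def beckner_noise(p, rho):
    """Apply a Beckner-type noise operator T_rho to a distribution p
    over n-bit strings (p is a list of length 2**n, indexed by bitstring).

    Each bit is independently kept with probability (1 + rho)/2 and
    flipped with probability (1 - rho)/2; equivalently, the degree-k
    Fourier-Walsh coefficients of p are damped by rho**k.
    """
    n = (len(p) - 1).bit_length()
    assert len(p) == 1 << n, "length of p must be a power of two"
    keep, flip = (1 + rho) / 2, (1 - rho) / 2
    q = list(p)
    for i in range(n):                      # single-bit channel on bit i
        r = [0.0] * len(q)
        for x in range(len(q)):
            r[x] += keep * q[x]             # bit i unchanged
            r[x ^ (1 << i)] += flip * q[x]  # bit i flipped
        q = r
    return q
```

At ρ = 1 the distribution is untouched, and at ρ = 0 it collapses to the uniform distribution; the correlated variants mentioned above would replace the independent per-bit channel with a joint one.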

At the panel I mentioned my general argument against quantum computers, quantum supremacy and quantum error correction only in a few sentences.

BTW, one way to think about the argument is as comparing four thresholds for the level of noise (on gates): $t_1 > t_2 > t_3$, and $t_4$. $t_1$ is what is required for **HQCA** (quantum supremacy); $t_2$ is what is required for good quality quantum error correction (say, distance-5 surface code); $t_3$ is what is required for universal quantum computing; $t_4$ is the lowest error rate that can realistically be achieved. I have a computational complexity argument (based on naturalness) that $t_4 > t_1$. I also rely on $t_1 > t_2$.

Nobody in the afternoon panel discussion (to the best of my memory) repeated earlier enthusiastic comparisons between Google’s experiment and the first flight in 1903, the first landing on the moon in 1969 (well, I referred to that), Fermi’s 1942 nuclear chain reaction, the first vacuum-tube computers, the discovery of the Higgs boson, etc.

The beach of Tel Aviv

Major advances in the human ability to simulate quantum physics and quantum chemistry are expected both if quantum supremacy can be demonstrated and quantum computers can be built, and also if quantum supremacy cannot be demonstrated and quantum computers cannot be built.

Debate picture collage

The audience was happy


We have mentioned the Numberphile video channel before (here, and here, and here). It has amazing videos about mathematics for the very general public. And the general public is interested! The channel has more than three million subscribers, and a Numberphile video usually gets well over 100,000 views. While the videos are for very general audiences, it is quite common that I learn new mathematics while watching them. Today I mention four recent Numberphile videos on graph theory.

Before that, let me mention that, while combinatorics has a special appeal for general audiences, it is a hard challenge to explain some very abstract areas of mathematics to a very large audience (or even to the large audience of Quanta Magazine). It is not impossible: here is Kevin Hartnett’s article in Quanta Magazine on ∞-categories and Jacob Lurie’s work, and here is a 2016 article by Erica Klarreich on perfectoid spaces and Peter Scholze’s work. I hope to see Numberphile moving into these areas as well.

For more on this development see this post. The video contains nice pictures of Stephen Hedetniemi and his 1966 thesis. An exciting **open question** is: Given two graphs of chromatic number *n*, is the chromatic number of their tensor product at least *f(n)*, for some function *f(n)* that tends to infinity with *n*?
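For tiny graphs, the inequality behind the question, χ(G×H) ≤ min(χ(G), χ(H)) (the projections are homomorphisms), can be checked directly. The sketch below (function names are mine) computes chromatic numbers by exhaustive search and builds the tensor product.

```python
from itertools import product

def chromatic_number(vertices, edges):
    """Smallest k admitting a proper k-coloring (exhaustive search; tiny graphs only)."""
    vertices = list(vertices)
    for k in range(1, len(vertices) + 1):
        for assignment in product(range(k), repeat=len(vertices)):
            color = dict(zip(vertices, assignment))
            if all(color[u] != color[v] for u, v in edges):
                return k
    return 0  # only reached for the empty graph

def tensor_product(v1, e1, v2, e2):
    """Tensor (categorical) product: (a,b) ~ (c,d) iff a ~ c and b ~ d."""
    verts = [(a, b) for a in v1 for b in v2]
    edges = []
    for a, c in e1:
        for b, d in e2:
            edges.append(((a, b), (c, d)))
            edges.append(((a, d), (c, b)))
    return verts, edges
```

For example, the tensor product of two triangles contains the diagonal triangle, so its chromatic number is exactly 3, matching the upper bound.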

Starting with the definition and ending with the strong perfect graph theorem (conjectured by Claude Berge), proved by Chudnovsky, Robertson, Seymour, and Thomas.
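Recall that a graph is perfect if every induced subgraph has chromatic number equal to its clique number. For very small graphs this can be verified by brute force; the sketch below (names mine) does exactly that, confirming for instance that the 5-cycle, the smallest odd hole, is not perfect while the 4-cycle is.

```python
from itertools import combinations, product

def _chi(verts, edges):
    """Chromatic number by exhaustive search (tiny graphs only)."""
    for k in range(1, len(verts) + 1):
        for c in product(range(k), repeat=len(verts)):
            col = dict(zip(verts, c))
            if all(col[u] != col[v] for u, v in edges):
                return k
    return 0

def _omega(verts, edges):
    """Clique number: size of the largest complete subgraph."""
    eset = {frozenset(e) for e in edges}
    best = 1 if verts else 0
    for r in range(2, len(verts) + 1):
        for sub in combinations(verts, r):
            if all(frozenset(p) in eset for p in combinations(sub, 2)):
                best = r
    return best

def is_perfect(verts, edges):
    """True iff chi equals omega for every induced subgraph."""
    verts = list(verts)
    for r in range(1, len(verts) + 1):
        for sub in combinations(verts, r):
            s = set(sub)
            sube = [e for e in edges if e[0] in s and e[1] in s]
            if _chi(list(sub), sube) != _omega(list(sub), sube):
                return False
    return True
```

By the strong perfect graph theorem, this exhaustive check could be replaced by searching only for induced odd holes and odd antiholes.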

An exciting **open question**: Find a good definition for perfect *r*-uniform hypergraphs.

We discussed perfect graphs and related notions in this post, this one and this one.

(**Remark:** Numberphile misguidedly regarded this video as a continuation of the “planar graph” video. The truth of the matter is that these are separate topics, and the perfect graph video should be a separate Numberphile video.)

Planar graphs are deep and fascinating (and miraculous, and always surprising) mathematical objects. In Hebrew we can say about the topic that it is “Olam Umelo’o” (עולם ומלואו) (=a whole world with all its content). There are many fascinating properties of planar graphs and many open problems about them.

There are also many fascinating generalizations. Let me mention Abby Thompson’s **problem:** Suppose that *G* is the graph of a simple *d*-polytope with *n* vertices. Suppose also that *n* is even (this is automatic if *d* is odd). Can we always properly color the edges of *G* with *d* colors? (For *d=3* the answer is yes; by a theorem of Tait, this is equivalent to the four color theorem.)

(We discussed planar graphs here and on MO, many times.)

We discussed here and in other posts the important notion of vertex covers, but the mathematics of the video was completely new to me.
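As a reminder of the basic notion (the code is my own illustration, not anything from the video): a vertex cover is a set of vertices touching every edge, and for small graphs the minimum one can be found by brute force.

```python
from itertools import combinations

def min_vertex_cover(vertices, edges):
    """Smallest set of vertices meeting every edge (exhaustive search)."""
    vertices = list(vertices)
    for r in range(len(vertices) + 1):  # try covers of increasing size
        for sub in combinations(vertices, r):
            s = set(sub)
            if all(u in s or v in s for u, v in edges):
                return s
    return set(vertices)
```

For example, the middle vertex alone covers the path on three vertices, while the 5-cycle needs three vertices (its independence number is 2).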

Stefan Banach and one of his famous quotes. Is it really true? A commentator (troll?) named Gina tried to challenge it in a heated discussion at the n-Category Café.

This post briefly describes a May 2019 paper On the isometric conjecture of Banach written by Gil Bor, Luis Hernández-Lamoneda, Valentín Jiménez-Desantiago, and Luis Montejano-Peimbert. (Thanks to Imre Barany who told me about it.)

Luis Montejano-Peimbert

Stefan Banach asked in 1932 the following question: Let *V* be a Banach space, real or complex, finite or infinite dimensional, all of whose *n*-dimensional subspaces, for some fixed integer *n*, *2 ≤ n < dim(V)*, are isometrically isomorphic to each other. Is it true that *V* is a Hilbert space?

The question has been answered affirmatively in the following cases.

(1) In 1935, Auerbach, Mazur and Ulam gave a positive answer in case *V* is a real 3-dimensional Banach space and *n = 2*.

(2) In 1959, A. Dvoretzky proved a theorem from which an affirmative answer follows for all real infinite dimensional *V* and *n ≥ 2*.

(3) Dvoretzky’s theorem was extended in 1971 to the complex case by V. Milman.

(4) In 1967, M. Gromov gave an affirmative answer in case *V* is finite dimensional, real or complex, except when *n* is odd and *dim(V) = n + 1* in the real case, or *n* is odd and *n < dim(V) < 2n* in the complex case.

The beautiful new theorem by Gil Bor, Luis Hernández-Lamoneda, Valentín Jiménez-Desantiago, and Luis Montejano-Peimbert reads:

**Theorem:** Let $B \subset \mathbb{R}^{n+1}$, $n \equiv 1 \pmod 4$, $n \neq 133$, be a convex symmetric body, all of whose sections by $n$-dimensional subspaces are linearly equivalent. Then *B* is an ellipsoid.

The reason for the strange exception $n = 133$ has to do with the fact that 133 is the dimension of the exceptional Lie group $E_7$.

Gil Bor with students

$E_7$ (source: Wikipedia). The 126 vertices of the $2_{31}$ polytope represent the root vectors of $E_7$, as shown in this Coxeter plane projection; Coxeter–Dynkin diagram.