
Anurag Bishnoi wrote this beautiful post on a problem going back to a 1988 paper of Körner and Marton, and on a recent lovely paper by Anurag Bishnoi, Jozefien D’haeseleer, Dion Gijswijt, and Aditya Potukuchi, Blocking sets, minimal codes and trifferent codes.

What is the largest possible size of a set $latex C$ of ternary strings of length $latex n$, with the property that for any three distinct strings in $latex C$, there is a position where they all differ?

Let $latex T(n)$ denote this largest size. Trivially, $latex T(1) = 3^1 = 3$, and after some playing around you can perhaps prove that $latex T(2) = 4$ (I encourage you to try it so that you understand the problem). With a bit more effort, and perhaps the help of a computer, you might also be able to show that $latex T(3) = 6$, and $latex T(4) = 9$. For example, here is a set of nine ternary strings showing that $latex T(4) \geq 9$: $latex 0000, 0111, 2012, 2201, 2120, 0111, 1012, 1101, 1210$. You should check that for any three strings from these nine, there is at least one position…
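For readers who want to play with the problem before reading on, here is a small brute-force sketch (my own illustrative Python, not from the post; the function names are made up) that checks the trifference property and recovers the tiny values $latex T(1)=3$ and $latex T(2)=4$ by exhaustive search:

```python
from itertools import combinations, product

def is_trifferent(strings):
    """True iff every three distinct strings in the set differ
    simultaneously in at least one position."""
    return all(
        any(len({x, y, z}) == 3 for x, y, z in zip(a, b, c))
        for a, b, c in combinations(strings, 3)
    )

def T(n):
    """Exhaustively find the largest trifferent set of ternary strings
    of length n (only feasible for very small n)."""
    strings = ["".join(s) for s in product("012", repeat=n)]
    for size in range(len(strings), 0, -1):
        if any(is_trifferent(sub) for sub in combinations(strings, size)):
            return size
    return 0
```

For example, `T(2)` returns 4, and `is_trifferent(["00", "01", "12", "22"])` exhibits a witnessing set; already for $latex n = 3$ the subset search explodes and smarter methods are needed.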


Test your intuition 38 was contributed in March 2019 by my youngest son Lior. TYI38 Lior Kalai: Monty Hall Meets Survivor. Aside from the problem itself, the post contained beautiful pictures of my family, and, two days after the post was published, I added the startling breaking news about Karen Uhlenbeck being awarded the Abel Prize.

Also in March 2019, the post 10 Milestones in the History of Mathematics according to Nati and Me drew many views and many comments. And in April 2019 I made an unusual invitation: An Invitation to a Conference: Visions in Mathematics towards 2000, with links to videos and many pictures of us 20 years younger. This was followed by a post discussing a bold statement made at the conference by Misha Gromov: Are Natural Mathematical Problems Bad Problems?

In August 2019 I presented a new paper and a CERN colloquium: The Argument against Quantum Computers – a CERN Colloquium and a New Paper. A month later, on September 22, 2019, a paper by a team from Google was leaked with a demonstration of quantum supremacy, which, if true, would refute my argument. I found the leaked paper rather unconvincing and raised some concerns about it in this post: Quantum computers: amazing progress (Google & IBM), and extraordinary but probably false supremacy claims (Google).

Two weeks later I wrote The story of Poincaré and his friend the baker, where I raised another concern regarding the calibration process of the paper, and this concern was amply discussed in the comment section. Following the appearance of the Google paper itself on October 21, 2019, and some responses from the Google team, I wrote on November 11, 2019 a new post Gil’s Collegial Quantum Supremacy Skepticism FAQ. Some further updates and discussion can be found in the Dec. 27 post The Google Quantum Supremacy Demo and the Jerusalem HQCA debate.

The mathematical news that was most exciting to me in 2019 was reported in August. Amazing: Ryan Alweiss, Shachar Lovett, Kewen Wu, Jiapeng Zhang made dramatic progress on the Sunflower Conjecture. This was a problem I had first heard about as an undergraduate; I thought about it over the years, and we ran Polymath10 to attack it. I was also excited about other possible applications, but did not imagine it would be related to the expectation threshold conjecture of Jeff Kahn and me. But indeed it was related: Amazing! Keith Frankston, Jeff Kahn, Bhargav Narayanan, Jinyoung Park: Thresholds versus fractional expectation-thresholds.

Other great results were: Amazing: Hao Huang Proved the Sensitivity Conjecture! (July); A sensation in the morning news – Yaroslav Shitov: Counterexamples to Hedetniemi’s conjecture (May); Another sensation – Annika Heckel: Non-concentration of the chromatic number of a random graph (June); Konstantin Tikhomirov: The Probability that a Bernoulli Matrix is Singular (February); Jeff Kahn and Jinyoung Park: Maximal independent sets and a new isoperimetric inequality for the Hamming cube; Matan Harel, Frank Mousset, and Wojciech Samotij and the “infamous upper tail” problem; Attila Por’s Universality Result for Tverberg Partitions; Isabella Novik and Hailun Zheng: Neighborly centrally symmetric spheres exist in all dimensions!; Danny Nguyen and Igor Pak: Presburger Arithmetic Problem Solved! And in number theory 8866128975287528³+(-8778405442862239)³+(-2736111468807040)³.

We had two great guest posts: Dan Romik on the Riemann zeta function, and Imre Bárány: Limit shape.

In 2019 I also wrote about Jean Bourgain and about The last paper of Catherine Rényi and Alfréd Rényi: Counting k-Trees.

A drawing explaining Attila Por’s construction

Jean Bourgain and Joram Lindenstrauss

The post about mathematical news that excited both the readers and me the most appeared in January: Amazing: Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright, and Henry Yuen proved that MIP* = RE and thus disproved Connes’ 1976 Embedding Conjecture, and provided a negative answer to Tsirelson’s problem. My final correspondence with Boris Tsirelson was about this breakthrough, and I wrote about him in the post Trees not Cubes! Memories of Boris Tsirelson.

Two other exciting and popular posts with 2020 major mathematical news came in July and December. The July post was To cheer you up in difficult times 7: Bloom and Sisask just broke the logarithm barrier for Roth’s theorem!; And in December To Cheer You Up in Difficult Times 15: Yuansi Chen Achieved a Major Breakthrough on Bourgain’s Slicing Problem and the Kannan, Lovász and Simonovits Conjecture.

To cheer you up in difficult times 3: A guest post by Noam Lifshitz on the new hypercontractivity inequality of Peter Keevash, Noam Lifshitz, Eoin Long and Dor Minzer; To cheer you up in difficult times 5: A New Elementary Proof of the Prime Number Theorem by Florian K. Richter; To cheer you up in difficult times 6: Play Rani Sharim’s two-player games of life, read Maya Bar-Hillel presentation on catching lies with statistics, and more; To Cheer you up in Difficult Times 8: Nathan Keller and Ohad Klein Proved Tomaszewski’s Conjecture on Randomly Signed Sums; To cheer you up in difficult times 9: Alexey Pokrovskiy proved that Rota’s Basis Conjecture holds asymptotically; To cheer you up in difficult times 12: Asaf Ferber and David Conlon found new lower bounds for diagonal Ramsey numbers; Three games to cheer you up.

The first post which explicitly aimed at cheering up our readers was from mid-March, when it had become clear that the Covid outbreak was going to overshadow our lives: To cheer you up in complicated times – A book proof by Rom Pinchasi and Alexandr Polyanskii for a 1978 Conjecture by Erdős and Purdy!

Other mathematical news included: Ringel Conjecture, Solved! Congratulations to Richard Montgomery, Alexey Pokrovskiy, and Benny Sudakov; In February, Remarkable New Stochastic Methods in ABF: Ronen Eldan and Renan Gross Found a New Proof for KKL and Settled a Conjecture by Talagrand; Kelman, Kindler, Lifshitz, Minzer, and Safra: Towards the Entropy-Influence Conjecture; Noam Lifshitz: A new hypercontractivity inequality — The proof!

In December 2020 I wrote about an attempt to establish quantum supremacy using a photonic device: Photonic Huge Quantum Advantage ??? This time, an algorithm from a 2014 paper by Guy Kindler and me sufficed to refute the fantastic claims. A few days later I wrote The Argument Against Quantum Computers – A Very Short Introduction, which has become one of the blog’s blockbusters.

**Discussion about Diversity:** the post To Cheer You Up in Difficult Times 31: Federico Ardila’s Four Axioms for Cultivating Diversity, led to heated discussions mainly on Facebook.

**Celebrations and discussions:** The post Cheerful News in Difficult Times: The Abel Prize is Awarded to László Lovász and Avi Wigderson, contains a lot of pictures of Avi and Laci and their families (and in a few cases of my family), and it was followed by the most viewed 2021 post presenting a debate between Avi Wigderson and me, which goes deeply into the connection between (computer science) theory, practice, and mathematics: The probabilistic proof that 2^400-593 is a prime: a revolutionary new type of mathematical proof, or not a proof at all?
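To make the object of that debate concrete, here is a minimal sketch of the Miller-Rabin test, the kind of probabilistic primality test behind such a “probabilistic proof” (my own illustrative code, not from the post or the debate):

```python
import random

def miller_rabin(n, rounds=40):
    """Miller-Rabin probabilistic primality test. A composite n passes a
    random round with probability at most 1/4, so surviving `rounds`
    rounds leaves error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick trial division
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness of compositeness
    return True  # probably prime

print(miller_rabin(2**400 - 593))
```

Forty successful rounds leave an error probability below $latex 4^{-40}$; whether such a computation constitutes a mathematical proof is exactly the question of the debate.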

**Mathematical news from 2021:** To cheer you up in difficult times 17: Amazing! The Erdős-Faber-Lovász conjecture (for large n) was proved by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus!; To cheer you up in difficult times 20: Ben Green presents super-polynomial lower bounds for off-diagonal van der Waerden numbers W(3,k); To cheer you up in difficult times 21: Giles Gardam lecture and new result on Kaplansky’s conjectures; To Cheer you up in difficult times 30: Irit Dinur, Shai Evra, Ron Livne, Alex Lubotzky, and Shahar Mozes Constructed Locally Testable Codes with Constant Rate, Distance, and Locality; To cheer you up in difficult times 34: Ringel Circle Problem solved by James Davies, Chaya Keller, Linda Kleist, Shakhar Smorodinsky, and Bartosz Walczak; Amazing: Simpler and more general proofs for the g-theorem by Stavros Argyrios Papadakis and Vasiliki Petrotou, and by Karim Adiprasito, Stavros Argyrios Papadakis, and Vasiliki Petrotou.

**Two collections of startling mathematical news:** To cheer you up in difficult times 25: some mathematical news! (Part 2) and To cheer you up in difficult times 22: some mathematical news! (Part 1)

**Memorabilia**: Nostalgia corner: John Riordan’s referee report of my first paper; To cheer you up in difficult times 23: the original hand-written slides of Terry Tao’s 2015 Einstein Lecture in Jerusalem

**Quantum matters:** Amazing: Feng Pan and Pan Zhang Announced a Way to “Spoof” (Classically Simulate) Google’s Quantum Supremacy Circuit! This work and several related works largely (but not fully) refuted Google’s 2019 quantum supremacy claim.

**Free will:** To cheer you up in difficult times 29: Free will, predictability and quantum computers

**A post about my papers**: Let me tell you about three of my recent papers;

**Open problems:** The 3ᵈ problem; Turan type theorems for simplicial complexes.

**Are we up for a new polymath project?**: Possible future Polymath projects (2009, 2021)

**Blogging from ICM 2022:** I slowly blogged about various events and lectures from ICM 2022. The most popular post about ICM 2022 was about verifying mathematical proofs using computers, and especially LEAN: ICM 2022. Kevin Buzzard: The Rise of Formalism in Mathematics. I attempted to report on ICM 2022: Langlands Day, which I followed on Zoom, and I gave a live report from Helsinki on ICM 2022 awarding ceremonies (1).

Because of Russia’s invasion of Ukraine, the ICM originally planned for St. Petersburg turned into a virtual event, but the opening ceremonies and the lectures of the medalists took place with a live audience in Helsinki.

The Israeli delegation to the 2022 general assembly of the IMU. With Tammy Ziegler and Mina Teicher.

**The mathematical news that caught most attention:** A conjecture of Jeff Kahn and me is now fully resolved: Amazing: Jinyoung Park and Huy Tuan Pham settled the expectation threshold conjecture! … and major progress on Frankl’s conjecture: Amazing: Justin Gilmer gave a constant lower bound for the union-closed sets conjecture

**Other news**: Oliver Janzer and Benny Sudakov Settled the Erdős-Sauer Problem; Bo’az Klartag and Joseph Lehec: The Slice Conjecture Up to Polylogarithmic Factor!

**News about polytopes and combinatorial geometry:** Joshua Hinman proved Bárány’s conjecture on face numbers of polytopes, and Lei Xue proved a lower bound conjecture by Grünbaum; James Davies: Every finite colouring of the plane contains a monochromatic pair of points at an odd distance from each other; Chaim Even-Zohar, Tsviqa Lakrec, and Ran Tessler present: The Amplituhedron BCFW Triangulation; Alexander A. Gaifullin: Many 27-vertex Triangulations of Manifolds Like the Octonionic Projective Plane (Not Even One Was Known Before).

**Quantum matters:** Inaugural address at the Hungarian Academy of Sciences: The Quantum Computer – A Miracle or Mirage.

**Quick updates:** In recent months I took part in two unusual conferences (for me). One conference, in the city of Nazareth, was about “free will”, and the participants included neurologists, brain scientists, biologists, philosophers, psychologists, physicists, and experts in law and ethics. The other conference, in Tel Aviv, was about the Many Worlds Interpretation of quantum mechanics, with a variety of researchers interested in the foundations of physics. In the last few weeks I gave (by Zoom) two physics colloquia, one at Rutgers University and one at the Perimeter Institute in Waterloo, and in both cases the lecture was followed by a very nice discussion. The lecture and discussion at the Perimeter Institute were videotaped and uploaded to PIRSA (Perimeter Institute Recorded Seminars Archive) and can be found at this link. The very interesting discussion included Ray Laflamme, Lee Smolin, Debbie Leung, Latham Boyle (my host, who, against all odds, perfectly pronounced my name) and Dominique.

Here is a link to the (Hebrew) lectures and discussions from the Free Will conference, where I gave a short response to Yakir Aharonov’s lecture. And here is the YouTube link to the Many Worlds Interpretation conference.

There were not many great hits in 2015 (since March), and the most viewed post from 2015 was the opening post for Polymath10: Polymath10: The Erdos Rado Delta System Conjecture.

The most viewed post from 2016 and one of the most viewed posts ever was A Breakthrough by Maryna Viazovska Leading to the Long Awaited Solutions for the Densest Packing Problem in Dimensions 8 and 24.

The most popular post in 2017 (and among the most popular ever) was Elchanan Mossel’s Amazing Dice Paradox (your answers to TYI 30). (The post posing the question was also very popular.)

2018’s blockbuster was: Aubrey de Grey: The chromatic number of the plane is at least 5. A 2018/2019 blockbuster was Amazing: Karim Adiprasito proved the g-conjecture for spheres!

2019 was the year with maximum views ever. There were many posts with news about breakthroughs, and the most popular one was Amazing: Hao Huang Proved the Sensitivity Conjecture! Another hit was my first post about Google’s quantum supremacy claims: Quantum computers: amazing progress (Google & IBM), and extraordinary but probably false supremacy claims (Google).

The most popular post in 2020 was also about quantum computers: The Argument Against Quantum Computers – A Very Short Introduction. In April 2020 we introduced a new corner, “To cheer you up in difficult times”.

The 2021 post with the most views was a little debate between Avi Wigderson and me: The probabilistic proof that 2^400-593 is a prime: a revolutionary new type of mathematical proof, or not a proof at all?

The most viewed post in 2022 was: Amazing: Jinyoung Park and Huy Tuan Pham settled the expectation threshold conjecture!

Here are other popular and favorite posts from 2015–2018; the second part will be devoted to 2019–2022.

The post Updates and plans III contains a lot of startling mathematical and personal news. Among the mathematical news was the Adiprasito-Huh-Katz solution to the Heron-Rota conjecture. On the personal side are pictures from my 60th birthday conference.

The most viewed post from 2015 was the opening post for Polymath10: Polymath10: The Erdos Rado Delta System Conjecture. Polymath10 started in November 2015 and concluded in January 2017. In the second post, Polymath10, Post 2: Homological Approach, I proposed a homological approach to the conjecture and we discussed (and improved) it, among other things, in several subsequent posts.

Another popular post was NogaFest, NogaFormulas, and Amazing Cash Prizes announcing Noga’s birthday conference.

For the first (and last) time I used the blog as a political platform: I wrote a few posts in Hebrew criticizing our (then) prime minister Netanyahu and endorsing the labor opposition party for the 2015 elections. There were interesting comments (mostly negative).

I reported about several advances in combinatorics and computer science: Choongbum Lee proved the Burr-Erdős conjecture, More Reasons for Small Influence, New Isoperimetric Results for Testing Monotonicity, and The Simplex, the Cyclic polytope, the Positroidron, the Amplituhedron, and Beyond. The last three posts are a mixture of reporting new results, explaining some classical results, and raising some ideas of my own. Also in 2015 there was a detailed report on Midrasha Mathematicae #18: In And Around Combinatorics, and a celebration following the solution of the Erdős discrepancy problem: EDP Reflections and Celebrations.

On the skeptical side, we had a post with an Angry Birds Update.

In 2015, my beloved mother Carmela Kalai passed away and I devoted a post to her and her art.

The most viewed post from 2016 and one of the most viewed posts ever was A Breakthrough by Maryna Viazovska Leading to the Long Awaited Solutions for the Densest Packing Problem in Dimensions 8 and 24. The second most popular post in 2016 involved the US elections, influence, noise sensitivity, social choice and Hex. The US Elections and Nate Silver: Information Aggregation, Noise Sensitivity, HEX, and Quantum Elections.

The startling news on the cap set problem came in 2016: Mind Boggling: Following the work of Croot, Lev, and Pach, Jordan Ellenberg settled the cap set problem! Dion Gijswijt also reached the same result. This post followed an earlier one, More Math from Facebook, where we talked about Croot, Lev and Pach’s paper and other results. In December 2016 Jordan and I ran a Polynomial Method Workshop in Jerusalem. Also in 2016: The Erdős Szekeres polygon problem – Solved asymptotically by Andrew Suk.
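As a toy illustration (my own, not from the posts): a cap set in $latex \mathbb{F}_3^n$ is a set of vectors no three of which sum to zero coordinatewise (three distinct points of $latex \mathbb{F}_3^n$ form a line exactly when they sum to zero), and for tiny $latex n$ the largest cap can be found by brute force.

```python
from itertools import combinations, product

def is_cap(vectors):
    """True iff no three distinct vectors in F_3^n sum to zero in every
    coordinate (i.e., the set contains no affine line of AG(n,3))."""
    return all(
        any((x + y + z) % 3 for x, y, z in zip(a, b, c))
        for a, b, c in combinations(vectors, 3)
    )

def max_cap(n):
    """Exhaustive search for the largest cap set in F_3^n (tiny n only)."""
    pts = list(product(range(3), repeat=n))
    for size in range(len(pts), 0, -1):
        if any(is_cap(sub) for sub in combinations(pts, size)):
            return size
```

For instance, `max_cap(2)` returns 4: the four corners of a 2×2 square in the 3×3 grid form a cap, and no five points do. The Croot-Lev-Pach/Ellenberg-Gijswijt theorem shows the maximum grows like $latex c^n$ for some $latex c < 3$.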

Also in 2016, a beautiful guest post, Stefan Steinerberger: The Ulam Sequence, and a report on my Notices paper: The Quantum Computer Puzzle @ Notices of the AMS. Imre Bárány’s beautiful presentation on Jiří Matoušek is given in Jirka.

2017 marked new attempts for humorous stories. Layish – a true story from 1995 about Avi W. and me. Proofs that will make you smile: The seventeen camels riddle, and Noga Alon’s camel proof and algorithms; From camels to lice: Micha A. Perles’ Proof By Lice!; Touching Simplices and Polytopes: Perles’ argument (with additional mathematical fantasies and humor); Micha Perles’ Geometric Proof of the Erdos-Sos Conjecture for Caterpillars; Test Your Intuition 33: The Great Free Will Poll; and a humorous quantum cartoons post: If Quantum Computers are not Possible Why are Classical Computers Possible?

A favorite post of mine was: The World of Michael Burt: When Architecture, Mathematics, and Art meet. As already mentioned the most popular posts featured Elchanan Mossel’s dice problem. 2017’s most viewed mathematical news was R(5,5) ≤ 48. Also popular was a thorough discussion Around the Garsia-Stanley’s Partitioning Conjecture. We discussed the question Is Heads-Up Poker in P? And here is a post on Edmund Landau and the Early Days of the Hebrew University of Jerusalem.

The post Where were we? described several updates including, most excitingly, a nice personal greeting by Benjamin Netanyahu (Israeli PM in 2017) to a talk I gave in the Technion. Another nice eclectic mathematical update was Updates (belated) Between New Haven, Jerusalem, and Tel-Aviv.

Many startling mathematical news items: In December 2018, definite progress on the problem I have worked on the most: Amazing: Karim Adiprasito proved the g-conjecture for spheres! In January, Subhash Khot, Dor Minzer and Muli Safra completed the proof of the 2-to-2 Games Conjecture. Also in January, Cody Murray and Ryan Williams’ new ACC breakthrough: Updates from Oded Goldreich’s Choices. In February, Serge Vlăduţ: Lattices with exponentially large kissing numbers. In April, Aubrey de Grey: The chromatic number of the plane is at least 5. Also in April, Cohen, Haeupler, and Schulman: Explicit Binary Tree-Codes & Cancellations, and Nathan Rubin Improved the Bound for Planar Weak ε-Nets and Other News From Ein-Gedi. In May, Duncan Dauvergne and Bálint Virág Settled the Random Sorting Networks Conjectures, and Zur Luria on the n-Queens Problem. Also in December, Nima Anari, Kuikui Liu, Shayan Oveis Gharan, and Cynthia Vinzant Solved the Mihail-Vazirani Conjecture for Matroids!

Toward the g-conjecture announcement I wrote about “what’s next”: Beyond the g-conjecture – algebraic combinatorics of cellular spaces I. I wrote another related post A Mysterious Duality Relation for 4-dimensional Polytopes.

Blogging from ICM 2018 Rio: I gave a plenary lecture at ICM 2018, and I was not the only Kalai in the congress: Yael Tauman Kalai’s ICM2018 Paper, My Paper, and Cryptography. I also slowly blogged about various events and lectures from the Rio congress. The most popular post from Rio was ICM 2018 Rio (3) – Coifman, Goldstein, Kronheimer and Mrowka, and the Four Color Theorem. I devoted much effort to writing ICM 2018 Rio (4): Huh; Balog & Morris; Wormald.

Also in 2018, Ricky and Branko was an obituary to Ricky Pollack and Branko Grünbaum, and Coloring Problems for Arrangements of Circles (and Pseudocircles), was a post much in Ricky and Branko’s spirit.

2018 was the year of the first post in Alef’s corner: Alef’s Corner: There is Still a Small Gap in the Proof. Other popular posts were: Test Your Intuition 35: What is the Limiting Distance?; Conference in Singapore, Vietnam, Appeasement, Restorative Justice, Laws of History, and Neutrinos.

Tel Aviv University Theory Fest, December 26-28, 2022.

Cryptography workshop @ TAU TheoryFest December 29, 2022


The Theory of Computing was born as a purely mathematical investigation into the notion of computation. From the start, it has been a cornerstone of computer science and has been used to tackle fundamental computing research problems. It was soon recognized as a useful paradigm of thought that facilitates research in other disciplines as well, such as economics and biology.

The purpose of the conference is to present current research in the Theory of Computing as well as to discuss its future in many areas. These include complexity theory, Boolean functions, cryptography, learning, algorithmic game theory, and discrete mathematics.

Eitan Bachmat: An old disk scheduling algorithm revisited

Michal Feldman: Algorithmic Contract Design

Yael Tauman Kalai: Recent Advancement in SNARGs

Pravesh Kothari: The Kikuchi Matrix Method

Noam Lifshitz: Product free sets in the alternating group

Shachar Lovett: The monomial structure of Boolean functions

Assaf Naor: Reverse isoperimetry

Aviad Rubinstein: Approximate maximum matching, and fantasies about SETH, obfuscation, and you

Amir Yehudayoff: Characterizations of Learnability

Cryptography seminar @ TAU TheoryFest, December 29, 2022

Shafi Goldwasser: The Right to Deny

Alon Rosen: Public-Key Encryption, Local Pseudorandom Generators, and the Low-Degree Method

Ran Canetti: On the Computational Hardness Needed for Quantum Cryptography

Moni Naor (Weizmann): Timing Attacks on Cryptographic and Privacy Preserving Schemes: Revisiting the Resistance


As a follow up to my previous post about Gilmer’s breakthrough regarding Frankl’s conjecture, here is a very nice example (from the paper of Zachary Chase and Shachar Lovett) related to the conjecture.

Let

Consider the following families of subsets of

, and

.

Now let .

Here are some observations:

- .
- The number of pairs whose union is also in is .
- For every , as grows, no element belongs to more than sets in .

The first assertion holds because is exponentially larger (in a fractional power of ) than .

For the second assertion we need to show that a typical union of a pair of sets in belongs to . Note that the intersection of two random subsets of of size is and hence their union is of size . As it happens the solution of the equation is precisely our . So letting we get that a typical union of two sets from is in .

The third assertion follows from the fact that an element belongs to a fraction of sets in .

This shows that a natural stability version of Frankl’s conjecture is incorrect, and gives some hint on the appearance of the parameter .

Such a stability version is correct when we replace with . Chase and Lovett improved Gilmer’s method and proved that

**Theorem:** If is a -approximate union closed set system, where , then there is an element which is contained in a fraction of sets in , where

I will be happy to see, based on Gilmer’s paper and the five follow-up papers, a detailed, as simple as possible, explanation of Frankl’s conjecture for the parameter , to learn what is involved in Sawin’s improvement that goes beyond , and to understand the counterexamples for Gilmer’s proposal towards the full conjecture, as well as thoughts, ideas and remarks of various kind on the problem.

- Justin Gilmer, A constant lower bound for the union-closed sets conjecture
- Will Sawin, An improved lower bound for the union-closed set conjecture
- Zachary Chase, Shachar Lovett, Approximate union closed conjecture
- Ryan Alweiss, Brice Huang, Mark Sellke, Improved Lower Bound for the Union-Closed Sets Conjecture
- David Ellis, Note: a counterexample to a conjecture of Gilmer which would imply the union-closed conjecture
- Luke Pebody, Extension of a Method of Gilmer
- (Dec 9.) Lei Yu, Dimension-Free Bounds for the Union-Closed Sets Conjecture

Justin Gilmer just proved an amazing weaker form of the conjecture asserting that there always exists an element that belongs to at least a 0.01 fraction of the sets in the family.

This is amazing progress! Congratulations, Justin.

The breakthrough paper, just posted on the arXiv, is:

A constant lower bound for the union-closed sets conjecture by Justin Gilmer

**Abstract:** We show that for any union-closed family $latex \mathcal{F} \subseteq 2^{[n]}$, $latex \mathcal{F} \neq \{\emptyset\}$, there exists an $latex i \in [n]$ which is contained in a 0.01 fraction of the sets in $latex \mathcal{F}$.

This is the first known constant lower bound, and improves upon the bounds of Knill and Wójcik.

Our result follows from an information theoretic strengthening of the conjecture. Specifically, we show that if $latex A, B$ are independent samples from a distribution over subsets of $latex [n]$ such that $latex \Pr[i \in A] < 0.01$ for all $latex i$ and $latex H(A) > 0$, then $latex H(A \cup B) > H(A)$.
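To see concretely what the theorem is about, here is a small illustrative sketch (mine, not Gilmer’s information-theoretic argument; the function names are made up) that checks the union-closed property and computes the most frequent element, the quantity the conjecture bounds:

```python
from itertools import combinations

def is_union_closed(family):
    """True iff the union of any two members is again in the family."""
    fam = {frozenset(s) for s in family}
    return all(a | b in fam for a, b in combinations(fam, 2))

def best_element_fraction(family):
    """Return (element, fraction) for the element contained in the largest
    fraction of sets. The family must contain a nonempty set.
    Frankl's conjecture predicts fraction >= 1/2; Gilmer proves >= 0.01."""
    fam = [frozenset(s) for s in family]
    ground = set().union(*fam)
    elem = max(ground, key=lambda x: sum(x in s for s in fam))
    return elem, sum(elem in s for s in fam) / len(fam)
```

For example, the power set of {1, 2} is union closed and every element hits exactly half the sets, showing the conjectured constant 1/2 would be best possible.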

______

Mike Saks who first told me about the breakthrough wrote “the bound comes from a simple clever idea (using information theory) and 5 pages of gentle technical calculations.” (I thank Mike, Ryan Alweiss, and Nati Linial who wrote me about it.)

We mentioned Frankl’s conjecture several times including here, here, here, and here. Polymath11 on Tim Gowers’s blog was devoted to the conjecture. Below the fold: What it will take to prove the conjecture in its full strength and another beautiful conjecture by Peter Frankl.

**Update:** A few days after Gilmer’s paper was posted there were developments by ~~four~~ six groups of researchers. Four papers, by Ryan Alweiss, Brice Huang, and Mark Sellke; Zachary Chase and Shachar Lovett; Will Sawin; and Luke Pebody, settled a conjecture by Gilmer that allows pushing the bound to $latex \frac{3-\sqrt{5}}{2}=0.381966\dots$. In his paper, Will Sawin improved this lower bound by an additional small constant. Chase and Lovett found in their paper a related variant of Frankl’s conjecture for which this bound is optimal. Will Sawin and, independently, David Ellis found counterexamples to Gilmer’s conjecture that would imply Frankl’s conjecture.

Justin Gilmer mentions that proving a tight bound for Lemma 1 in the paper would push the 0.01 bound to $latex \frac{3-\sqrt{5}}{2}=0.381966\dots$. (This has by now been proved.) He also presents an appealing information-theoretic strengthening of the conjecture which could provide a path toward a proof. (This has by now been disproved.)

To face a possible risk that Frankl’s “union closed” conjecture will be solved here is another beautiful conjecture by Peter Frankl.

A family $latex \mathcal{F}$ of sets is convex if whenever $latex A \subseteq B \subseteq C$ and $latex A, C \in \mathcal{F}$, then also $latex B \in \mathcal{F}$.

Conjecture (**P. Frankl**): Let be a convex family of subsets of . Then there exists an antichain such that

Barnabás Janzer studied the following question:

*Suppose we have a convex body $latex K$ in $latex \mathbb{R}^n$ that contains a copy of a convex body $latex S$ in every orientation. Is it always possible to move any one copy of $latex S$ to another copy of $latex S$, keeping inside $latex K$?*

I will let you test your intuition about what the answer should be. First, some background.

A Kakeya set is a set that contains a unit interval in every direction. A famous open problem is the conjecture that every Kakeya set in $latex \mathbb{R}^n$ has Hausdorff dimension $latex n$. In 2008, Zeev Dvir found a simple and remarkable proof for a finite field analog of the conjecture. Finding possible connections between the finite field problem and the Euclidean problem is an exciting problem. Can we use the finite field result to prove the Euclidean result? Can we use or refine the finite field methods for the Euclidean problem? Here is a recent Quanta Magazine article about exciting “intermediate results”.
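In the finite field model the definitions are easy to experiment with. Here is a small sketch (my own, restricted to the two-dimensional case for simplicity) that checks whether a subset of $latex \mathbb{F}_q^2$ contains a full line in every direction:

```python
from itertools import product

def is_kakeya(points, q):
    """True iff `points` (a subset of F_q^2) contains a complete affine
    line in every direction: the q slopes y = m*x + b, plus vertical."""
    pts = set(points)
    directions = [(1, m) for m in range(q)] + [(0, 1)]
    for dx, dy in directions:
        # look for a base point whose whole line in this direction is in pts
        if not any(
            all(((x0 + t * dx) % q, (y0 + t * dy) % q) in pts
                for t in range(q))
            for x0, y0 in product(range(q), repeat=2)
        ):
            return False
    return True

# The whole plane F_3^2 is trivially a Kakeya set.
plane = list(product(range(3), repeat=2))
```

Dvir’s theorem says any such set must have at least $latex c_n q^n$ points; small searches with this checker show how much smaller than the whole plane a Kakeya set can be.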

The question about the connection between finite field analogs and the corresponding questions over the reals or the integers can be asked about other problems. One example is the Roth problem (relevant posts I, II, III) vs. the cap set problem (relevant posts I, II, III).

Going back to our problem (posed by H. T. Croft) about convex Kakeya sets:

Suppose we have a convex body $latex K$ in $latex \mathbb{R}^n$ that contains a copy of a convex body $latex S$ in every orientation. Is it always possible to move any one copy of $latex S$ to another copy of $latex S$, keeping inside $latex K$?

Barnabás Janzer proved:

Here is the link to the paper:

(Picture: János Pach)

inaugural address of

honorary member of the MTA,

Budapest, 15 June, 2022, 15:00

Abstract: On February 12, 2002, Michel Devoret’s lecture entitled “The Quantum Computers: Miracle or Mirage” kicked off the 150th Anniversary Celebration of the Yale School of Engineering. In his abstract, Devoret asserted that while quantum mechanics had often been viewed as limiting the information one could extract from a physical system, “recent discoveries show that fundamental properties of quantum mechanics could actually be used to perform computations that would be impossible on a standard ‘classical’ computer.” Devoret’s question of whether the quantum computer is a miracle or a mirage is, to this day, one of the fascinating clear-cut open scientific questions of our time. In my inaugural lecture I will explain the question and the discoveries from the 1990s that suggested that quantum computers could perform miracles, and I will present my theory as to why the quantum computer is a mirage.

Janos Pach told me that the name of the room where my lecture was given, “Felolvasó terem”, means something like “Hall for reading papers”. This is a rarely used, somewhat archaic expression that refers to the act of reading papers out loud to an audience. Quite untypically for mathematical lectures, this was precisely the style of my lecture. Here is the text of my lecture:

_________________________________________________

I was very honored and happy to be elected as an honorary member of the Hungarian Academy of Sciences, and I am especially happy to come to the beautiful city of Budapest to give this inaugural address and the Turán memorial lectures. I am thankful for the opportunity to meet friends and colleagues of many decades as well as a new generation of wonderful mathematicians.

**Our mathematical life and the inherent difficulties and uncertainties of mathematics and science are for us islands of stability and solace in times of great concerns and difficulties.**

Quantum computers are a new type of computer based on quantum physics. When it comes to certain computational objectives, the computational ability of quantum computers is tens and even hundreds of orders of magnitude faster than that of the digital computers we are familiar with, and their construction would enable us to break most of the current cryptosystems. While quantum computers represent a future technology that captivates the hearts and imaginations of many, there is also an ongoing dispute over the very possibility of their existence.

My theory asserts that quantum computers are inherently noisy. Their robust components represent a low-level (classical) computational ability, and they necessarily exhibit chaotic behavior.

The classical computer can be seen as a device that contains n “bits” where every bit is in one of two positions of either “zero” or “one.” The computer operates through “gates” that perform logical operations on either a single or two bits. Two simple types of gates are sufficient to describe all forms of computation: the NOT gate, which reverses the value of a single bit, and the AND gate, which receives two bits as input and produces “one” if and only if the value of each of the bits in the input is “one.” The model presented here for computers is called the Boolean circuit model and is close to the 1930s and 1940s models of Church, Turing, and others, which preceded the advent of the digital computer and the computer revolution in the second half of the twentieth century.
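As a small illustration (my own, not part of the lecture), here is how further logical operations can be assembled from NOT and AND gates alone, in the spirit of the Boolean circuit model:

```python
# Two gate types suffice: NOT on one bit, AND on two bits.
def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

# OR and XOR built purely from NOT and AND (De Morgan's law):
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        assert XOR(a, b) == (a + b) % 2   # agrees with addition mod 2
print("XOR built from NOT/AND gates agrees with addition mod 2")
```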

In the 1960s and 1970s, computer science researchers came to an important insight regarding the inherent limitations of computers. There are certain computational tasks that are very easy to formulate (and even to check whether a suggested answer to them is correct), but which are nonetheless impossible to perform because they require too many computational steps. For example, if the number of computational steps for a problem described by an input of *n* bits is *2^n*, then this computation will not be feasible for the most powerful computers, even when *n* = 100. To describe feasible (or “efficient”) computations, that is, computations that can actually be performed, an important assumption was added, according to which the number of computational steps (i.e., the number of gates) does not exceed a constant power of *n* (e.g., *n^2* or *n^3*).
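A rough back-of-the-envelope computation (my own numbers, purely for illustration) shows the gap between exponential and polynomial step counts:

```python
# Compare 2^n steps with n^3 steps for n = 100, assuming a generous
# classical machine performing 10^18 operations per second.
SECONDS_PER_YEAR = 3.156e7

def years_needed(steps, ops_per_second=1e18):
    return steps / ops_per_second / SECONDS_PER_YEAR

print(f"2^100 steps: {years_needed(2**100):.1e} years")   # tens of thousands of years
print(f"100^3 steps: {years_needed(100**3):.1e} years")   # a negligible instant
```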

The main lens through which we analyze the difficulty of computation is the asymptotic lens. For most computational tasks we cannot hope for a full understanding of the number of required computational steps as a function of the number of bits of the input, but we can gain understanding (at times rough and at times just conjectural) of the computational difficulty by studying how the number of computational steps asymptotically depends on the input size. Experience gained in recent decades indicates that asymptotic analysis provides a good understanding of the practical difficulty of computational tasks.

We will now enrich the picture by adding probability. A single bit has two possible values, “zero” and “one,” and it is possible to extend the Boolean circuit model by allowing the bit to have the value “zero” with probability *p* and the value “one” with probability *1-p*. In this way, the classical computer equipped with probabilistic bits can describe a sample from a probability space of sequences of zeros and ones. The model of probabilistic classical computers is an important model in the theory of computational complexity and it also constitutes a convenient introduction to the concept of quantum computers. Probabilistic Boolean circuits can be realized rather easily by digital computers.
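Indeed, a probabilistic bit is easy to realize on a digital computer; a minimal sketch:

```python
import random

def probabilistic_bit(p, rng):
    """Return 0 with probability p and 1 with probability 1 - p."""
    return 0 if rng.random() < p else 1

# Sample many probabilistic bits with p = 0.7 and check the frequency.
rng = random.Random(0)
samples = [probabilistic_bit(0.7, rng) for _ in range(100_000)]
print(f"empirical frequency of 0: {samples.count(0) / len(samples):.3f}")
```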

The model of quantum computers is a computational model based on quantum mechanics and was first introduced in the 1980s. In quantum computers, the classical bit is replaced by the basic computational element called a qubit.

The state of a single qubit is described by a unit vector in a complex two-dimensional vector space (namely, it is described by four real numbers whose sum of squares equals 1). The uncertainty principle asserts that we cannot identify the precise state of the qubit, but rather we can measure it and obtain a probabilistic bit. One way to think about it is as follows: there are two basic states for the qubit, which we denote by $latex |0\rangle$ and $latex |1\rangle$, while the general state of the qubit is a “superposition” of these two basic states, namely, a linear combination of the form

$latex a|0\rangle + b|1\rangle$,

where $latex a$ and $latex b$ are complex numbers satisfying $latex |a|^2+|b|^2=1$.

The state of the qubit can be measured, and when a qubit in this state is measured, we obtain a probabilistic bit with value 0 with probability $latex |a|^2$ and value 1 with probability $latex |b|^2$. One possible physical realization of the two basic states of a single qubit is with the two energy levels of a Hydrogen atom, with quantum physics allowing the atom to be in a superposition of these basic states as described above.
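In code, this measurement rule for a single qubit looks as follows (an illustrative sketch, not a physical simulation):

```python
import random
from math import sqrt

def measure_qubit(a, b, rng):
    """Measure the state a|0> + b|1>: outcome 0 with probability |a|^2."""
    assert abs(abs(a)**2 + abs(b)**2 - 1) < 1e-9   # unit vector in C^2
    return 0 if rng.random() < abs(a)**2 else 1

# The superposition (|0> + i|1>)/sqrt(2) yields each outcome with probability 1/2.
rng = random.Random(1)
outcomes = [measure_qubit(1 / sqrt(2), 1j / sqrt(2), rng) for _ in range(20_000)]
print(f"empirical probability of outcome 0: {outcomes.count(0) / len(outcomes):.3f}")
```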

The computer acts on *n* qubits through “quantum gates,” which are the basic quantum computation operations. (The gates are described mathematically by “unitary operators.”) At the end of the computation process, the state of the computer is measured, and this yields a single sample from a probability distribution of 0-1 strings of length n. In this case too, the assumption is that for each efficient computation the number of gates is at most polynomial in *n*. The crucial fact about quantum computers is that they would allow us to efficiently achieve samples that cannot be efficiently achieved by classical computers.
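For concreteness, here is a toy statevector simulation (my own illustration, not from the lecture) of a two-qubit circuit: unitary gates act on the state, and the final measurement samples a 0-1 string according to the squared amplitudes:

```python
import numpy as np

n = 2
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                    # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate (unitary)
CNOT = np.eye(4)[[0, 1, 3, 2]]                    # controlled-NOT as a permutation

state = np.kron(H, np.eye(2)) @ state             # H on the first qubit
state = CNOT @ state                              # entangle: (|00> + |11>)/sqrt(2)

# Measuring yields a 0-1 string with the Born-rule probabilities.
probs = np.abs(state)**2
rng = np.random.default_rng(0)
sample = rng.choice(2**n, p=probs)
print(f"probabilities: {np.round(probs, 3)}, sampled string: {sample:0{n}b}")
```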

In the 1990s, Peter Shor discovered that quantum computers would allow the execution of a certain computational task – factoring an integer into its prime components – hundreds of orders of magnitude faster than regular computers, and would enable us to break most current cryptosystems.

Quantum systems are by nature “noisy” and unstable. The term “noise” refers to a deviation of the computer from the planned program; in the case of a quantum computer, any unwanted interaction or leakage of information leads to such a deviation. The model of a noisy quantum computer adds a component of noise to the description of the quantum computer. Noisy intermediate-scale quantum (NISQ) computers are simply noisy quantum circuits with at most 200 (say) qubits.

It was around that time that early doubts concerning the model also surfaced. In order to reduce the noise level, quantum computers must be isolated and protected from their environment. The key to a possible solution of the noise problem is quantum error-correcting codes, which would enable noisy quantum computers to perform the same computations conducted by the abstract noiseless model, provided that the level of noise can be reduced to below a certain threshold denoted by **α**.

It is a common opinion that the construction of quantum computers is possible, that the remaining challenge is mostly engineering-related, that such computers will be constructed in the next few decades, and that quantum error-correcting codes of the required quality will be developed in labs in the years to come. My standpoint is that it will be fundamentally impossible to reduce the noise level to below the required threshold. As a result, it will be impossible to develop the quantum codes required for quantum computation, nor will it be possible to reach the target of “quantum computation supremacy,” whereby a quantum computer performs computation that is extremely hard or even impossible for a classical computer.

My argument for the impossibility of quantum computers lies within the scope of quantum mechanics and does not deviate from its principles. In essence, the argument is based on computational complexity and its interpretation, and it is discussed in-depth in my papers which also include a discussion of general conclusions that derive from my argument and relate to quantum physics, alongside suggestions of general laws of nature that express the impossibility of quantum computation.

My argument mostly deals with understanding quantum computers on the intermediate scale (known as NISQ computers, an abbreviation of Noisy Intermediate-Scale Quantum), that is, quantum computers of up to at most several hundred qubits. It is expected that on this scale we will be able to construct quantum codes of a quality sufficient for the construction of bigger quantum computers. It is further expected that on this scale the quantum computer will achieve computations far beyond the ability of powerful classical computers, that is, will achieve quantum computational supremacy. Google’s Sycamore computer is an example of a noisy intermediate-scale quantum computer.

As specified later, it is my argument that NISQ computers cannot be controlled. Hence:

- Such systems cannot demonstrate significant quantum computational advantage.
- Such systems cannot be used for the creation of quantum error-correcting codes.
- Such systems lead to non-stationary and even chaotic distributions.

Regarding the first item, let me remind the audience that computational complexity theory provides tools for studying the computational power of models and physical computational devices. The reason NISQ computers cannot support quantum supremacy is that when we use computational complexity tools to understand the computational power of NISQ computers, we discover that they describe a very low-level computational class. This low-level computational class does not allow for any complicated computations, much less computational supremacy. My analysis draws computational conclusions for NISQ computers based on their mathematical model’s asymptotic behavior.

Regarding the second item, the reason it is impossible to build quantum error-correcting codes is that it requires an even lower noise level than that required for demonstrating quantum supremacy. The meaning of the infeasibility of quantum error-correcting codes is that even a quantum computer operating on a single qubit is inherently noisy. It is to be noted that the argument that the noise level required for error-correcting codes is lower than the level required for quantum supremacy is generally accepted by both theoreticians and experimental physicists.

When I speak here of chaotic behavior, I refer to a system (deterministic, probabilistic, or quantum) that is so sensitive to its defining parameters that its behavior (or a large component of its behavior) cannot be determined, not even probabilistically. This notion is related to the mathematical theory of “noise sensitivity” (Benjamini, Kalai, and Schramm 1999) that we mentioned before, and to the related mathematical theory of “black noise” (Tsirelson and Vershik 1998). Both these theories have their early roots in Wiener’s chaos expansion (Wiener 1938). Chaos in this sense is sometimes called “Knightian uncertainty,” a term that originates in economics. The first use of the theory of noise sensitivity for quantum computers was in my 2014 paper with Guy Kindler on “Boson Sampling.” (There we use the Hermite–Fourier expansion.)

The term “chaotic system” in mathematical chaos theory (Lorenz 1963) refers to nonlinear classical deterministic systems, whose development very much depends on their initial conditions, and since these conditions are not known precisely, the system’s development in the long run cannot be predicted.
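To illustrate Lorenz chaos (a standard textbook demonstration, not specific to the lecture), two numerical trajectories of the Lorenz system started an imperceptible distance apart end up far from each other:

```python
# Crude Euler integration of the Lorenz (1963) system with the classical
# parameters; two runs start 1e-9 apart in the x coordinate.
def lorenz_step(x, y, z, dt=0.005, s=10.0, r=28.0, b=8/3):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

u, v = (1.0, 1.0, 1.0), (1.0 + 1e-9, 1.0, 1.0)
for _ in range(10_000):                 # 50 time units of integration
    u, v = lorenz_step(*u), lorenz_step(*v)

# The initial 1e-9 separation has been amplified enormously.
gap = max(abs(a - b) for a, b in zip(u, v))
print(f"separation after integration: {gap:.3f}")
```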

It is plausible that natural chaotic phenomena (like the weather) reflect both Lorenz chaos and Wiener chaos.

Here I thank Bernard Chazelle for raising an appealing argument for why Lorenz chaos and traditional dynamical-systems notions of chaos are insufficient to describe chaos in nature.

Following is a brief look at a central analytic tool that applies to all three components of the argument, namely, the Fourier expansion of functions. In our case we consider real-valued functions defined on sequences of length *n* of zeros and ones, and every such function can be written as a linear combination of a special set of functions, known as Fourier–Walsh functions. Fourier–Walsh functions can be sorted according to their degree, which is a natural number between 0 and *n*. Much as the regular Fourier expansion makes it possible to describe a musical sound as a combination of pure high and low tones, in the Fourier–Walsh expansion, too, the degrees can be seen as analogous to the heights of the pure tones.
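Concretely, the Fourier–Walsh coefficients of a function on 0-1 sequences can be computed by brute force, with each coefficient indexed by a subset of coordinates whose size is its degree (a small illustrative sketch; the majority example is mine):

```python
import itertools

def walsh_coefficients(f, n):
    """Fourier-Walsh coefficients of f: {0,1}^n -> R.

    The character for a subset S of coordinates is (-1)^(sum of x_i, i in S);
    the degree of the coefficient indexed by S is |S|.
    """
    coeffs = {}
    for S in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        coeffs[S] = sum(f(x) * (-1)**sum(x[i] for i in S)
                        for x in itertools.product((0, 1), repeat=n)) / 2**n
    return coeffs

def maj(x):
    """Majority of 3 bits, with +/-1 output values."""
    return 1 if sum(x) >= 2 else -1

# Majority has only odd-degree terms: degree-1 and degree-3 coefficients.
c = walsh_coefficients(maj, 3)
print({S: round(v, 3) for S, v in c.items() if abs(v) > 1e-9})
```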

Our first step is to study the Walsh–Fourier transform of the probability distribution of the 0-1 sequences in an ideal noiseless quantum computer, and the second step is to study the effect of the noise. The main technical component of my argument is that the noise will exponentially lower the coefficients of the higher-degree Fourier–Walsh functions. This leads to a mathematical distinction (Benjamini, Kalai, and Schramm 1999) between distributions that can be described by low-degree Fourier coefficients, referred to as “noise stable,” and distributions that are supported on high-degree Fourier coefficients, referred to as “noise sensitive.” A general distribution can have both noise-stable and noise-sensitive components. This is the reason that, on the one hand, the stable component of the noisy distribution is described by low Fourier–Walsh levels, and hence represents extremely low computational power, and that, on the other hand, if the unstable part, namely, the contribution of the higher-degree Fourier–Walsh functions, remains substantial, this contribution will be chaotic.
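The exponential damping of high-degree coefficients can be sketched in a few lines: the standard noise operator with rate ε multiplies each degree-k coefficient by (1-ε)^k (the toy coefficients below are made up for illustration):

```python
# Apply the noise operator to a dictionary of Fourier-Walsh coefficients,
# keyed by the subset S of coordinates; the degree is len(S).
def apply_noise(coeffs, eps):
    return {S: v * (1 - eps)**len(S) for S, v in coeffs.items()}

# Toy coefficients: one of degree 0, one of degree 1, one of degree 5.
coeffs = {(): 0.1, (0,): 0.3, (0, 1, 2, 3, 4): 0.5}
noisy = apply_noise(coeffs, eps=0.2)
for S, v in noisy.items():
    # Low degrees barely move; the degree-5 term shrinks by 0.8^5.
    print(f"degree {len(S)}: {coeffs[S]:.3f} -> {v:.5f}")
```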

The question “why does this argument fail for classical computers?” is an appropriate critique of any argument asserting that quantum computers are not possible. Regarding my argument, the answer to this question is as follows: the path to large-scale quantum computers requires building quantum error-correcting codes on NISQ systems, and my theory asserts that this is impossible. At the same time, the primitive computational power represented by NISQ systems still allows for stable classical information, and subsequently even for classical computation.

The mathematical analysis that leads to the conclusion that noisy intermediate-scale quantum computers have primitive computational power is asymptotic. That is, a mathematical model of these computers is described and analyzed when the noise level is fixed, and the number of qubits increases. The conclusion I draw from this asymptotic behavior concerns the lowest noise level engineers can reach. I argue that it would be impossible to lower the noise level in such a way that enables us to create computers whose computational ability is low in an asymptotic analysis, but very high – indeed beyond the ability of classical computers – for several dozen qubits. The move from an asymptotic argument regarding the model’s computational complexity to a concrete claim regarding engineering-related limitations of intermediate-scale computers is hardly standard, and my argument clashes with the strong intuition of experts in the field, according to whom an engineering effort would make it possible in principle (as well as in practice) to lower the noise level as much as we would like. Indeed, the multiple resources invested in building quantum computers, especially noisy intermediate-scale quantum computers, are based on the common position that there is no fundamental obstacle obstructing this effort.

In 2019, researchers from **Google** claimed to achieve in 300 seconds a sampling task that would require 10,000 years for a supercomputer.

However, since the Google paper appeared, better classical algorithms have reduced the “10,000 years” to roughly 300 seconds.

In a joint work with Yosi Rinott and Tomer Shoham we also examine other aspects of the Google methods and claims.

In 2020 researchers from **USTC**, China claimed to achieve in 300 seconds a sampling task that requires a billion years for a supercomputer.

However, my 2014 paper with Guy Kindler on “Boson Sampling” (and subsequent works) largely refute these fantastic claims.

(Picture: János Pach)

Quantum computers are among the most important scientific developments of our time. In my judgement it is a serious possibility that quantum computers are not possible. This is what I expect, and I have been studying this possibility since 2005.

Understanding noisy quantum systems and potentially even the failure of quantum computers is related to the fascinating mathematics of noise stability and noise sensitivity and its connections to the theory of computing. Exploring this avenue may have important implications to various areas of quantum physics.

Evaluating the recent fantastic experimental claims is an exciting scientific matter on its own.

For the readers of the blog let me add some references to my papers:

The argument against quantum computers

(i) G. Kalai, The argument against quantum computers, *Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence* (M. Hemmo and O. Shenker, eds.), pp. 399–422, Springer, 2019.

(ii) G. Kalai, The quantum computer puzzle, Notices AMS, May 2016

(iii) G. Kalai, The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims, *The Intercontinental Academia Laws: Rigidity and Dynamics* (M. J. Hannon and E. Z. Rabinovici, eds.), World Scientific, 2020. arXiv:2008.05188.

(iv) G. Kalai, Conjecture C still stands, arXiv, 2022.

This recent paper responds to a 2012 paper of Steve Flammia and Aram Harrow (see this post) regarding a certain “Conjecture C” discussed in my 2011 debate with Aram.

In the wider context of “mathematical computer science: theory and practice”

(v) G. Kalai, Three puzzles on mathematics, computation, and games, Proc. ICM 2018.

Boson sampling

(vi) G. Kalai and G. Kindler, Gaussian Noise Sensitivity and BosonSampling, 2014.

*Our study of Boson Sampling gives the basis to understanding of general NISQ systems.*

(vii) G. Kalai and G. Kindler, Concerns about recent claims of a huge quantum computational advantage via Gaussian boson sampling.

This was a discussion paper for the 2020-2021 email exchange with the USTC photonic team and other researchers regarding the boson sampling claims.

On the Google 2019 experiment

(viii) Y. Rinott, T. Shoham, and G. Kalai, Statistical Aspects of the Quantum Supremacy Demonstration, Statistical Science (2022)

(ix) G. Kalai, Y. Rinott and T. Shoham, Google’s 2019 “Quantum Supremacy” Claims: Data, Documentation, & Discussion

See also Sections 5 and 6 in reference (iii).

A lecture in Bandung, Indonesia with the same title

And my lecture in Lahore, Pakistan now on youtube

Here is the abstract:

Demonstrating quantum advantage requires experimental implementation of a computational task that is hard to achieve using state-of-the-art classical systems. One approach is to perform sampling from a probability distribution associated with a class of highly entangled many-body wavefunctions. It has been suggested that this approach can be certified with the Linear Cross-Entropy Benchmark (XEB). We critically examine this notion. First, in a “benign” setting where an honest implementation of noisy quantum circuits is assumed, we characterize the conditions under which the XEB approximates the fidelity. Second, in an “adversarial” setting where all possible classical algorithms are considered for comparison, we show that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics. We present an efficient classical algorithm that, with 1 GPU within 2s, yields high XEB values, namely 2-12% of those obtained in experiments. By identifying and exploiting several vulnerabilities of the XEB, we achieve high XEB values without full simulation of quantum circuits. Remarkably, our algorithm features better scaling with the system size than noisy quantum devices for commonly studied random circuit ensembles. To quantitatively explain the success of our algorithm and the limitations of the XEB, we use a theoretical framework in which the average XEB and fidelity are mapped to statistical models. We illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures, with different gate choices, and in the presence of noise. Our results show that XEB’s utility as a proxy for fidelity hinges on several conditions, which must be checked in the benign setting but cannot be assumed in the adversarial setting. Thus, the XEB alone has limited utility as a benchmark for quantum advantage. We discuss ways to overcome these limitations.

**F** – the fidelity. If $latex |\psi\rangle$ is the ideal state and $latex \rho$ is the noisy state, then the fidelity is defined by $latex F = \langle \psi | \rho | \psi \rangle$.

**XEB** – the linear cross-entropy estimator for the fidelity.

**P(No err)** – the probability of no errors (denoted in the paper by a different symbol).

a) The fidelity **F** cannot be read from the distribution of the samples produced by the quantum computer. Suppose we are given an unlimited number of samples (or a large number of samples for which the empirical distribution is a good approximation to the noisy distribution), what is the best way to estimate the fidelity?

b) If we have a number of samples that is polynomial in 1/F, what are the best ways to estimate the fidelity?

c) What in general are the relations between **F**, **XEB**, and **P(No err)?**
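For readers who want to see the estimator itself: the linear XEB of samples $latex x_1,\dots,x_m$ with respect to the ideal output probabilities $latex p(x)$ is commonly computed as $latex 2^n \cdot \text{mean}(p(x_i)) - 1$, which is 0 for uniformly random samples and close to 1 for faithful samples from a Porter–Thomas-like distribution. A quick sketch (the stand-in "ideal" distribution below is artificial):

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n):
    """Linear cross-entropy estimator: 2^n * mean ideal probability - 1."""
    return 2**n * np.mean(ideal_probs[samples]) - 1

rng = np.random.default_rng(0)
n = 10
p = rng.dirichlet(np.ones(2**n))     # a stand-in exponentially-shaped distribution

# Samples drawn from p score close to 1; uniform samples score close to 0.
ideal_samples = rng.choice(2**n, size=200_000, p=p)
uniform_samples = rng.integers(0, 2**n, size=200_000)
print(f"XEB (ideal samples):   {linear_xeb(ideal_samples, p, n):.3f}")
print(f"XEB (uniform samples): {linear_xeb(uniform_samples, p, n):.3f}")
```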

A basic observation of the paper is that when you apply depolarizing noise to the gates, the resulting distribution has a positive correlation with the ideal distribution. (Hence this leads to a positive XEB value.) The basic idea (as I see it) is simple: let’s consider a 1-gate, which is a certain unitary operator U.

The space of such operators is spanned by I, X, Y, and Z. Let us assume that applying a unitary error, say Y, to U leads to a new quantum circuit which gives uncorrelated samples and fidelity estimator 0. However, applying (I+X+Y+Z), which replaces the qubit with a maximal-entropy state, is a very basic form of noise (depolarizing noise on the qubit on which the gate acts), and this form of noise is expected to slash the fidelity estimator to a quarter of its value rather than send it to zero. (For 2-gates the story is similar, but this time there are 16 basic possibilities for unitary noise, so we can expect that depolarizing noise will slash the linear cross-entropy estimator to 1/16 of its value rather than to zero.)
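The claim that averaging over I, X, Y, and Z replaces the qubit with a maximal-entropy state can be checked directly (a short verification; the sample density matrix is arbitrary):

```python
import numpy as np

# The four Pauli matrices spanning single-qubit operators.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

# Some single-qubit density matrix (trace 1, positive semidefinite).
rho = np.array([[0.8, 0.3], [0.3, 0.2]])

# Averaging rho over conjugation by I, X, Y, Z (the full depolarizing
# channel) yields I/2, the maximally mixed state.
depolarized = sum(P @ rho @ P.conj().T for P in (I, X, Y, Z)) / 4
print(np.round(depolarized.real, 3))
```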

The paper by Gao et al. describes various additional reasons for which the effect of gate errors will lead to positive correlations with the ideal distribution, and in general will lead to strict inequalities

**(1) XEB > P(No err) **

and

**(2)** **F > P(No err)**

First, it turns out that even a single unitary gate error can contribute to an increase of **XEB** (and also of **F**), and, moreover, the effect of two (or more) gate errors can also lead to an increased **XEB** (and an increased **F**).

In expectation, we can actually expect

**(3) XEB > F > P(No err) **

This is demonstrated by Figure 1 of the paper.

The way Gao, Kalinowski, Chou, Lukin, Barak, and Choi used this observation is by applying such depolarization noise on a set of 2-gates that would split the circuit into two parts. This will lead to a sort of “patched” circuit for which one can make the computation separately on every patch, which gives much quicker classical algorithms.

One interesting aspect of the paper is that (as far as I could understand) when the number of qubits grows, the “quantum advantage”, namely the advantage of the quantum algorithm over the classical algorithms, diminishes. As Gao et al. write:

“Remarkably, the XEB value of our algorithm generally improves for larger quantum circuits, whereas that of noisy quantum devices quickly deteriorates. Such scaling continues to hold when the number of qubits is increased while the depth of the circuit and the error-per-gate are fixed…”

Remark: This conclusion assumes that you need enough samples to verify the fidelity. Philosophically, one can claim that the quantum advantage may apply to producing *one* sample; after all, the argument is based anyway on extrapolation, and for supremacy experiments you cannot verify even the individual amplitudes. (I made this claim in a discussion with Daniel Lidar regarding the very nice paper by Zlokapa, Boixo, and Lidar, Boundaries of quantum supremacy via random circuit sampling, but I couldn’t persuade Daniel.)

The paper is relevant to two important aspects of my statistical science paper with Yosi Rinott and Tomer Shoham and to our work which puts the Google 2019 experiment under scrutiny.

- The fact that **XEB** is systematically larger than **P(No err)** may support the concern that the precise agreement of the **XEB** estimator with the **P(No err)** computation (Formula (77)) is “too good to be true,” namely, it fits too well with the researchers’ expectations rather than with physical reality.
- The basic observation (III) implies that the exponential decay of the Fourier coefficients with the degrees, which we attributed only to readout errors, is also caused by gate errors. Subsequently, the Fourier description of the data that we regarded as providing confirmation of the Google claim (see Figure 2 in our recent paper) actually appears to show that the empirical data does not fit the theory.

Apropos Fourier methods and Xun Gao: the first time I heard of Xun Gao’s work was in connection with his excellent early work with Duan, Efficient classical simulation of noisy quantum computation, which used Fourier methods to study quantum circuits.

1. There is a very interesting paper by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani, A polynomial-time classical algorithm for noisy random circuit sampling. Here the noise model is based on an arbitrarily small constant amount of depolarizing noise applied to each qubit at each step. The analysis builds on Xun Gao and Luming Duan’s paper mentioned above and seems related to the Fourier expansion. Let me note that under the assumption of fixed-rate readout errors, my simple algorithm with Guy Kindler applies and shows that approximate sampling can be achieved by low-degree polynomials, and hence lies in a very low-level computational subclass of P (and even in a much weaker class). I don’t know whether this conclusion applies to the model of Aharonov et al.’s recent paper, and this is an interesting question.

Here is the abstract of Aharonov et al.’s paper

We give a polynomial time classical algorithm for sampling from the output distribution of a noisy random quantum circuit in the regime of anti-concentration to within inverse polynomial total variation distance. This gives strong evidence that, in the presence of a constant rate of noise per gate, random circuit sampling (RCS) cannot be the basis of a scalable experimental violation of the extended Church-Turing thesis. Our algorithm is not practical in its current form, and does not address finite-size RCS based quantum supremacy experiments.

2. There is a nice blog post by Scott Aaronson on several Sycamore matters. On quantum supremacy, Scott expresses an optimistic view:

“Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.”

Since Scott and I agree that classical algorithms are now ten orders of magnitude quicker than those used by the Google team in 2019, I don’t see much reason to debate the extra 2-3 orders of magnitude of supremacy in terms of electricity costs. But there is one technical matter worth noting regarding the 10% LXEB value that Gao et al.’s method achieved. (Above we use XEB for LXEB.)

As we explained in the post, the basic observation of Gao et al. is that when we apply depolarizing noise on a set of edges that separates the circuit into two parts, there is still a positive correlation with the ideal distribution, and there is a better classical algorithm, which allows Gao et al. to achieve an asymptotically better LXEB for a given running time. In the Google 2019 paper, the Google team talked about **patch circuits**, obtained by deleting the 2-gates that allow separating the full circuit into two parts, and also about **elided circuits**, where only part of these “boundary” 2-gates are deleted and quick classical algorithms are still available. It is plausible that using these elided circuits (or, alternatively, even leaving alive a single 2-gate between the two parts) will allow quick classical algorithms that bridge the 10% gap that Scott mentioned.

**Further update:** After a long break I took part in the interesting discussion over SO. Here is a brief summary from the discussion of one of the pillars of my argument against quantum computers:

“Computational supremacy not supported by asymptotic analysis cannot be reached in practice”

This principle applies to NISQ systems, for which there are by now various results (including the recent result by Aharonov et al.) showing that they represent asymptotically efficient classical computation.

3. Regarding the other news, the Sycamore wormhole simulation: my general view is that a demonstration of a quantum state or evolution on a few qubits could be impressive even if the classical computations required for confirming the claims are not computationally hard. For example, it would be very impressive to reach a “dense” cloud of quantum states with 3-6 qubits. (I once asked about it here.) Putting the wormholes aside, I would be very interested to hear what precise circuit the Sycamore ran, and how the outcomes were classically verified.

The distinction between classical simulation and quantum simulation that plays a role in Scott’s post was discussed several times in my 2012 debate with Aram Harrow. In fact our very last round of comments (Nov. 2012) was about this issue:

**(Aram)** (emphasis added): The same story can be told about the exponential-time classical emulation, except that now *the encoding is no longer a unitary, or indeed linear, map*.

I agree that the quantum emulation feels closer to the original physics than the classical emulation does. But this seems like a rather imprecise statement, and *not something that can be formalized*.

**(Gil)** This is great, Aram, in the last paragraph you say that the distinction cannot be formalized and in the previous paragraph you offer a way to formalize it!

**(Aram)** good point.

**A little more (Dec 9):** Regarding the Sycamore wormholes simulation here are links to the paper, a related commentary paper (both from *Nature*), articles in Quanta Magazine, NYT, and a nice video produced by Quanta Magazine. Peter Woit also wrote several critical blog posts about it. The scientific issue involves the Sachdev–Ye–Kitaev (SYK) model, and a well known principle “ER=EPR” proposed by Maldacena and Susskind a decade ago. Here is an article by Gali Weinstein with an historical perspective on Einstein’s work on ER and EPR. The ER=EPR conjecture is part of an ambitious effort to relate quantum gravity and quantum information (“it from qubit”). As Wikipedia puts it, “a grander conjecture that the geometry of space, time and gravity is determined by entanglement.” Let me mention that Peter Shor (and others) had early ideas to relate quantum gravity with quantum error correction, and let me also mention a grand conjecture of my own that “the impossibility of quantum error correction and quantum fault tolerance is precisely what enables time and geometry to emerge in our physical world.”

It is unfortunate that the discussion regarding the new Sycamore experiment is centered around “meta” matters and “hype” and there is almost no discussion of the technical issues themselves.
