A Nice Example Related to the Frankl Conjecture

The example

As a follow up to my previous post about Gilmer’s breakthrough regarding Frankl’s conjecture, here is a very nice example (from the paper of Zachary Chase and Shachar Lovett) related to the conjecture.

Let \psi=\frac {3-\sqrt 5}{2} = 0.38..

Consider the following families of subsets of [n]

{\cal F}_1=\{S \subset [n]: |S|=\psi n + n^{2/3}\}, and

{\cal F}_2 = \{ T \subset [n]: |T|>(1-\psi) n \}.

Now let {\cal F}={\cal F}_1 \cup {\cal F}_2.

Here are some observations:

  1. |{\cal F}_2| = o(|{\cal F}_1|).
  2. The number of pairs (S,T), S,T \in {\cal F}, whose union is also in {\cal F} is (1-o(1)) |{\cal F}|^2.
  3. For every \epsilon >0, as n grows, no element k \in [n] belongs to more than (\psi+\epsilon) |{\cal F}| sets in {\cal F}.

The first assertion holds because {{n} \choose {\psi n +n^{2/3}}} is exponentially larger (in a fractional power of n) than {{n} \choose {(1-\psi )n}}.

For the second assertion we need to show that a typical union of a pair of sets in {\cal F}_1 belongs to {\cal F}_2. Note that the intersection of two random subsets of [n] of size \phi n has size roughly \phi^2 n, and hence their union has size roughly (2\phi - \phi^2)n. As it happens, the solution of the equation 2\phi-\phi^2 = 1-\phi is precisely our \psi=\frac {3-\sqrt 5}{2}. So letting \phi=\psi + n^{-1/3} we get that a typical union of two sets from {\cal F}_1 is in {\cal F}_2.
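To see the second assertion numerically, here is a minimal Python sketch (my own illustration, with an arbitrarily chosen n, not taken from the paper) that samples pairs of random sets of size roughly \psi n + n^{2/3} and checks that their union almost always lands in {\cal F}_2:

```python
import math
import random

n = 10_000
psi = (3 - math.sqrt(5)) / 2            # psi = 0.381966...
k = round(psi * n + n ** (2 / 3))       # size of the sets in F_1

def random_member_of_F1():
    return set(random.sample(range(n), k))

trials = 200
hits = sum(len(random_member_of_F1() | random_member_of_F1()) > (1 - psi) * n
           for _ in range(trials))
print(f"fraction of unions landing in F_2: {hits / trials:.2f}")
# The expected union size is about (2*phi - phi**2) * n with phi = k/n,
# and the n^(2/3) excess pushes the typical union above the (1 - psi) * n threshold.
```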

The third assertion follows from the fact that every element k \in [n] belongs to a (\psi + n^{-1/3}) fraction of the sets in {\cal F}_1, while {\cal F}_2 is negligible by the first assertion.

This shows that a natural stability version of Frankl’s conjecture is incorrect, and gives some hint on the appearance of the parameter \psi.

Stability result

Such a stability version is correct when we replace 1/2 with \psi. Chase and Lovett improved Gilmer’s method and  proved that

Theorem: If  \cal F is a (1-\epsilon)-approximate union closed set system, where \epsilon <1/2, then there is an element which is contained in a (\psi-\delta) fraction of sets in \cal F, where

\delta=2\epsilon (1+\frac {\log (1/\epsilon)}{\log |{\cal F}|}).
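As a quick numerical illustration (with hypothetical values of \epsilon and |{\cal F}| chosen by me, not taken from the paper), the loss \delta is already small for large families:

```python
import math

psi = (3 - math.sqrt(5)) / 2
eps, family_size = 0.01, 2 ** 100       # hypothetical values for illustration
delta = 2 * eps * (1 + math.log(1 / eps) / math.log(family_size))
print(psi - delta)                      # about 0.3606: the guaranteed frequency
```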

An invitation for further discussion

I will be happy to see, based on Gilmer’s paper and the five follow-up papers, a detailed and as simple as possible explanation of the proof of Frankl’s conjecture with the parameter \psi; to learn what is involved in Sawin’s improvement that goes beyond \psi; and to understand the counterexamples to Gilmer’s proposed route to the full conjecture, as well as thoughts, ideas and remarks of various kinds on the problem.

Links to the papers

  1. Justin Gilmer, A constant lower bound for the union-closed sets conjecture
  2. Will Sawin, An improved lower bound for the union-closed set conjecture
  3.  Zachary Chase, Shachar Lovett, Approximate union closed conjecture
  4. Ryan Alweiss, Brice Huang, Mark Sellke, Improved Lower Bound for the Union-Closed Sets Conjecture 
  5. David Ellis, Note: a counterexample to a conjecture of Gilmer which would imply the union-closed conjecture
  6. Luke Pebody, Extension of a Method of Gilmer


Amazing: Justin Gilmer gave a constant lower bound for the union-closed sets conjecture

Frankl’s conjecture (aka the union-closed sets conjecture) asserts that if \cal F is a family of subsets of [n] (:= \{1,2,\dots,n\}) which is closed under union, then there is an element k such that

|\{S \in {\cal F}: k \in S\}| \ge \frac {1}{2}|{\cal F}|.

Justin Gilmer just proved an amazing weaker form of the conjecture asserting that there always exists an element k such that

|\{S \in {\cal F}: k \in S\}| \ge  0.01 |{\cal F}|.
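Here is a minimal sketch (a toy example of my own, not from Gilmer’s paper) of what the statement says: for a family closed under union, find the element contained in the largest fraction of sets.

```python
from itertools import combinations

# A toy union-closed family over {1,...,5}, represented by frozensets.
family = {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset({1, 2, 3}),
          frozenset({4, 5}), frozenset({1, 4, 5}), frozenset({2, 3, 4, 5}),
          frozenset({1, 2, 3, 4, 5})}

# Check that the family is indeed closed under union.
assert all(a | b in family for a, b in combinations(family, 2))

# Frequency of the most common element.
ground_set = set().union(*family)
best = max(ground_set, key=lambda k: sum(k in s for s in family))
freq = sum(best in s for s in family) / len(family)
print(best, freq)   # Frankl's conjecture predicts freq >= 1/2; Gilmer proves freq >= 0.01.
```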

This is amazing progress! Congratulations, Justin.

The breakthrough paper, just posted on the arXiv is:

A constant lower bound for the union-closed sets conjecture by Justin Gilmer

Abstract: We show that for any union-closed family  \mathcal{F} \subseteq 2^{[n]}, \mathcal{F} \neq \{\emptyset\} there exists an i \in [n]  which is contained in a 0.01 fraction of the sets in \mathcal F.

This is the first known constant lower bound, and improves upon the \Omega(\log_2(|\mathcal{F}|)^{-1}) bounds of Knill and Wójcik.

Our result follows from an information theoretic strengthening of the conjecture. Specifically, we show that if A,B are independent samples from a distribution over subsets of [n]  such that Pr[i \in A] < 0.01 for all i and H(A)>0, then H(A \cup B)> H(A).
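Here is a minimal numerical illustration of the information-theoretic statement (my own toy distribution, not taken from the paper): subsets of [n] are encoded as bitmasks, and the distribution of A \cup B for independent samples A, B is obtained by convolving the distribution with itself under bitwise OR.

```python
import numpy as np

n = 8

# A toy distribution over subsets of [n]: each element appears independently
# with probability 0.009, so Pr[i in A] < 0.01 for every i and H(A) > 0.
p = np.zeros(2 ** n)
for mask in range(2 ** n):
    k = bin(mask).count("1")
    p[mask] = 0.009 ** k * 0.991 ** (n - k)
p /= p.sum()

def entropy(q):
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

# Distribution of A ∪ B for independent A, B ~ p.
q = np.zeros(2 ** n)
for a in range(2 ** n):
    for b in range(2 ** n):
        q[a | b] += p[a] * p[b]

print(entropy(p), entropy(q))   # here H(A ∪ B) > H(A), as Gilmer's lemma predicts
```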

______

Mike Saks, who first told me about the breakthrough, wrote “the bound comes from a simple clever idea (using information theory) and 5 pages of gentle technical calculations.” (I thank Mike, Ryan Alweiss, and Nati Linial who wrote me about it.)

We mentioned Frankl’s conjecture several times including here, here, here, and here. Polymath11 on Tim Gowers’s blog was devoted to the conjecture. Below the fold: What it will take to prove the conjecture in its full strength and another beautiful conjecture by Peter Frankl.

Update: A few days after Gilmer’s paper was posted there were developments by five groups of researchers. Four papers, by Ryan Alweiss, Brice Huang, and Mark Sellke; Zachary Chase and Shachar Lovett; Will Sawin; and Luke Pebody, settled a conjecture by Gilmer that allows pushing the bound to \frac{3-\sqrt 5}{2}=0.381966… . In his paper, Will Sawin improved this lower bound by an additional small constant. Chase and Lovett found in their paper a related variant of Frankl’s conjecture for which this bound is optimal. Will Sawin and independently David Ellis found counterexamples to Gilmer’s conjecture that would imply Frankl’s conjecture.



Barnabás Janzer: Rotation inside convex Kakeya sets

Barnabás Janzer studied the following question:

Suppose we have a convex body K in \mathbb R^n that contains a copy of a convex body S in every orientation. Is it always possible to move any one copy of S to another copy of S while staying inside K?

I will let you test your intuition about what the answer should be. First, some background.

A Kakeya set is a set that contains a unit interval in every direction. A famous open problem is the conjecture that every Kakeya set in \mathbb R^d has Hausdorff dimension d. In 2008, Zeev Dvir found a remarkably simple proof of a finite field analog of the conjecture. Finding possible connections between the finite field problem and the Euclidean problem is an exciting challenge. Can we use the finite field result to prove the Euclidean result? Can we use or refine the finite field methods for the Euclidean problem? Here is a recent Quanta Magazine article about exciting “intermediate results”.

The question about the connection between finite field analogs and questions over \mathbb Z or \mathbb R can be asked about other problems. One example is the Roth problem (relevant posts I, II, III) vs. the cap set problem (relevant posts I, II, III).



Inaugural address at the Hungarian Academy of Science: The Quantum Computer – A Miracle or Mirage


(Picture: János Pach)

The Quantum Computer – A Miracle or Mirage

inaugural address of

Gil Kalai

honorary member of the MTA,

Budapest, 15 June, 2022, 15:00

Abstract: On February 12, 2002, Michel Devoret’s lecture entitled “The Quantum Computers: Miracle or Mirage” kicked off the 150th Anniversary Celebration of the Yale School of Engineering. In his abstract, Devoret asserted that while quantum mechanics had often been viewed as limiting the information one could extract from a physical system, “recent discoveries show that fundamental properties of quantum mechanics could actually be used to perform computations that would be impossible on a standard ‘classical’ computer.” Devoret’s question of whether the quantum computer is a miracle or a mirage is, to this day, one of the fascinating clear-cut open scientific questions of our time. In my inaugural lecture I will explain the question and the discoveries from the 1990s that suggested that quantum computers could perform miracles, and I will present my theory as to why the quantum computer is a mirage.

Janos Pach told me that the name of the room where my lecture was given, “Felolvaso terem”, means something like “Hall for reading papers”. This is a rarely used, somewhat archaic expression that refers to the act of reading papers aloud to an audience. Quite untypically for mathematical lectures, this was precisely the style of my lecture. Here is the text of my lecture:

_________________________________________________

I was very honored and happy to be elected as an honorary member of the Hungarian Academy of Sciences, and I am especially happy to come to the beautiful city of Budapest to give this inaugural address and the Turán memorial lectures. I am thankful for the opportunity to meet friends and colleagues of many decades as well as a new generation of wonderful mathematicians.

Our mathematical life and the inherent difficulties and uncertainties of mathematics and science are for us islands of stability and solace in times of great concerns and difficulties.

Preface

Quantum computers are a new type of computer based on quantum physics. For certain computational objectives, quantum computers would be tens and even hundreds of orders of magnitude faster than the digital computers we are familiar with, and their construction would enable us to break most of the current cryptosystems. While quantum computers represent a future technology, which captivates the hearts and imaginations of many, there is also an ongoing dispute over the very possibility of their existence.

My theory asserts that quantum computers are inherently noisy. Their robust components represent a low-level (classical) computational ability, and they necessarily exhibit chaotic behavior.

Classical Computers

The classical computer can be seen as a device that contains n “bits” where every bit is in one of two positions of either “zero” or “one.” The computer operates through “gates” that perform logical operations on either a single bit or two bits. Two simple types of gates suffice to describe all forms of computation: the NOT gate, which reverses the value of a single bit, and the AND gate, which receives two bits as input and produces “one” if and only if the value of each of the bits in the input is “one.” The model presented here for computers is called the Boolean circuit model and is close to the 1930s and 1940s models of Church, Turing, and others, which preceded the advent of the digital computer and the computer revolution in the second half of the twentieth century.
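As a minimal illustration (my own toy example, not part of the lecture), here is how the two gates compose to compute other Boolean functions, for instance OR and XOR:

```python
def NOT(a: int) -> int:
    return 1 - a

def AND(a: int, b: int) -> int:
    return a & b

# OR and XOR built only from NOT and AND gates.
def OR(a, b):
    return NOT(AND(NOT(a), NOT(b)))

def XOR(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```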

In the 1960s and 1970s, computer science researchers came to an important insight regarding the inherent limitations of computers. There are certain computational tasks that are very easy to formulate (and even to check whether a suggested answer to them is correct), but which are nonetheless impossible to perform because they require too many computational steps. For example, if the number of computational steps for a problem described by an input of n bits is 2^n, then this computation will not be feasible for the most powerful computers, even when n = 100. To describe feasible (or “efficient”) computations, that is, computations that can actually be performed, an important assumption was added, according to which the number of computational steps (i.e., the number of gates) does not exceed a constant power (e.g., n^3) of the number n of bits describing the input. This definition represents an important step in the theory of computational complexity, while providing the central tool for the analysis of the computational power of computational models, algorithms, and physical computational systems.

The main lens through which we analyze the difficulty of computation is the asymptotic lens. For most computational tasks we cannot hope for a full understanding of the number of required computational steps as a function of the number of bits of the input, but we can gain understanding (at times rough and at times just conjectural) of the computational difficulty by studying how the number of computational steps asymptotically depends on the input size. Experience gained in recent decades indicates that asymptotic analysis provides a good understanding of the practical difficulty of computational tasks.

We will now enrich the picture by adding probability. A single bit has two possible values, “zero” and “one,” and it is possible to extend the Boolean circuit model by allowing the bit to have the value “zero” with probability p and the value “one” with probability 1-p. In this way, the classical computer equipped with probabilistic bits can describe a sample from a probability space of sequences of zeros and ones. The model of probabilistic classical computers is an important model in the theory of computational complexity and it also constitutes a convenient introduction to the concept of quantum computers. Probabilistic Boolean circuits can be realized rather easily by digital computers.

Quantum Computers

The model of quantum computers is a computational model based on quantum mechanics and was first introduced in the 1980s. In quantum computers, the classical bit is replaced by the basic computational element called a qubit.

The state of a single qubit is described by a unit vector in a complex two-dimensional vector space (namely, it is described by four real numbers whose sum of squares equals 1). The uncertainty principle asserts that we cannot identify the precise state of the qubit, but rather we can measure it and obtain a probabilistic bit. One way to think about it is as follows: there are two basic states for the qubit and we denote them by \left|0\right\rangle   and \left|1\right\rangle while the general state of the qubit is a “superposition” of these two basic states, namely, the general state of a qubit is a linear combination of the form

z_1\left|0\right\rangle +z_2\left|1\right\rangle

where z_1  and z_2  are complex numbers  satisfying

|z_1|^2+|z_2|^2=1.

The state of the qubit can be measured, and when a qubit in this state is measured we obtain a probabilistic bit with value 0 with probability |z_1|^2 and value 1 with probability |z_2|^2. One possible physical realization of the two basic states of a single qubit is with the two energy levels of a hydrogen atom, with quantum physics allowing the atom to be in a superposition of these basic states as described above.

The computer acts on n qubits through “quantum gates,” which are the basic quantum computation operations. (The gates are described mathematically by “unitary operators.”) At the end of the computation process, the state of the computer is measured, and this yields a single sample from a probability distribution of 0-1 strings of length n.  In this case too, the assumption is that for each efficient computation the number of gates is at most polynomial in n. The crucial fact about quantum computers is that they would allow us to efficiently achieve samples that cannot be efficiently achieved by classical computers.
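Here is a minimal state-vector sketch (my own illustration, not tied to any particular hardware or to the lecture) of a two-qubit circuit: a Hadamard gate followed by a CNOT creates an entangled state, and measuring yields samples from a distribution over 0-1 strings of length 2.

```python
import numpy as np

# The state of n qubits is a complex unit vector of length 2**n (here n = 2).
state = np.zeros(4, dtype=complex)
state[0] = 1.0                                    # start in |00>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate (a unitary operator)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)      # first qubit controls the second

# Apply H to the first qubit, then the CNOT gate.
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state

# Measuring all qubits yields a 0-1 string with probability |amplitude|^2.
probs = np.abs(state) ** 2
rng = np.random.default_rng(0)
print(rng.choice(["00", "01", "10", "11"], size=10, p=probs))
# Only "00" and "11" appear, each with probability 1/2.
```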

In the 1990s, Peter Shor discovered that quantum computers would allow the execution of a certain computational task – factoring an integer into its prime factors – hundreds of orders of magnitude faster than regular computers, and would enable us to break most current cryptosystems.

Quantum Computers and Noise

Quantum systems are by nature “noisy” and unstable. The term “noise” refers to a deviation of the computer from the planned program, and in the case of a quantum computer any unwanted interaction or leakage of information leads to such a deviation. The model of a noisy quantum computer adds a component of noise to the description of the quantum computer. Noisy intermediate-scale quantum (NISQ) computers are simply noisy quantum circuits with at most 200 (say) qubits.

It was around that time that early doubts concerning the model also surfaced, precisely because quantum systems are inherently noisy and unstable. In order to reduce the noise level, quantum computers must be isolated and protected from their environment. The key to a possible solution of the noise problem is quantum error-correcting codes, which would enable noisy quantum computers to perform the same computations conducted by the abstract noiseless model, provided that the level of noise can be reduced to below a certain threshold denoted by α.

It is a common opinion that the construction of quantum computers is possible, that the remaining challenge is mostly engineering-related, that such computers will be constructed in the next few decades, and that quantum error-correcting codes of the required quality will be developed in labs in the years to come. My standpoint is that it will be fundamentally impossible to reduce the noise level to below the required threshold. As a result, it will be impossible to develop the quantum codes required for quantum computation, nor will it be possible to reach the target of “quantum computation supremacy,” whereby a quantum computer performs computation that is extremely hard or even impossible for a classical computer.

The argument against quantum computers

My argument for the impossibility of quantum computers lies within the scope of quantum mechanics and does not deviate from its principles. In essence, the argument is based on computational complexity and its interpretation, and it is discussed in-depth in my papers which also include a discussion of general conclusions that derive from my argument and relate to quantum physics, alongside suggestions of general laws of nature that express the impossibility of quantum computation.

My argument mostly deals with understanding quantum computers on the intermediate scale (known as NISQ computers, an abbreviation of Noisy Intermediate-Scale Quantum), that is, quantum computers of up to at most several hundreds of qubits. It is expected that on this scale we will be able to construct quantum codes of a quality sufficient for the construction of bigger quantum computers. It is further expected that on this scale the quantum computer will achieve computations far beyond the ability of powerful classical computers, that is, will achieve quantum computational supremacy. Google’s Sycamore computer is an example of a noisy intermediate-scale quantum computer.

As specified later, it is my argument that NISQ computers cannot be controlled. Hence:

  1. Such systems cannot demonstrate significant quantum computational advantage.
  2. Such systems cannot be used for the creation of quantum error-correcting codes.
  3. Such systems lead to non-stationary and even chaotic distributions.

Regarding the first item, let me remind the audience that computational complexity theory provides tools for studying the computational power of models and physical computational devices. The reason NISQ computers cannot support quantum supremacy is that when we use computational complexity tools to understand the computational power of NISQ computers, we discover that they describe a very low-level computational class. This low-level computational class does not allow for any complicated computations, much less computational supremacy.  My analysis draws computational conclusions for NISQ computers based on their mathematical model’s asymptotic behavior.

Regarding the second item, the reason it is impossible to build quantum error-correcting codes is that it requires an even lower noise level than that required for demonstrating quantum supremacy. The meaning of the infeasibility of quantum error-correcting codes is that even a quantum computer operating on a single qubit is inherently noisy. It is to be noted that the argument that the noise level required for error-correcting codes is lower than the level required for quantum supremacy is generally accepted by both theoreticians and experimental physicists.

A picture by Alef

Wiener Chaos and Lorenz Chaos

When I speak here of chaotic behavior, I refer to a system (either deterministic, probabilistic, or quantum) that is so sensitive to its defining parameters that its behavior (or a large component of its behavior) cannot be determined, not even probabilistically. This notion is related to the mathematical theory of “noise sensitivity” (Benjamini, Kalai, and Schramm 1999) that we mentioned before, and to the related mathematical theory of “black noise” (Tsirelson and Vershik 1998). Both these theories have their early roots in Wiener’s chaos expansion (Wiener 1938). Chaos in this sense is sometimes called “Knightian uncertainty”, a term that originates in economics. The first use of the theory of noise sensitivity for quantum computers was in my 2014 paper with Guy Kindler on “Boson Sampling”. (There we use the Hermite–Fourier expansion.)

The term “chaotic system” in mathematical chaos theory (Lorenz 1963) refers to nonlinear classical deterministic systems, whose development very much depends on their initial conditions, and since these conditions are not known precisely, the system’s development in the long run cannot be predicted.

It is plausible that natural chaotic phenomena (like the weather) reflect both the Lorenz chaos and the Wiener chaos.

Here I thank Bernard Chazelle for raising an appealing argument for why Lorenz chaos and traditional dynamical-systems notions of chaos are insufficient to describe chaos in nature.

A Brief Look at the Mathematical Analysis: The Fourier Expansion

Following is a brief look at a central technical analytic tool that applies to all three components of the argument, namely, the Fourier expansion of functions. In our case we consider real-valued functions defined on sequences of length n of zeros and ones, and every such function can be written (as a linear combination) by means of a special set of functions, known as Fourier–Walsh functions. Fourier–Walsh functions can be sorted according to their degree, which is a natural number between 0 and n. Much as the regular Fourier expansion makes it possible to describe a musical sound as a combination of pure high and low tones, in Fourier–Walsh functions, too, the degrees can be seen as analogous to the heights of the pure tones.

Our first step is to study the Walsh–Fourier transform of the probability distribution of the 0-1 sequences in an ideal noiseless quantum computer, and the second step is to study the effect of the noise. The main technical component of my argument is that the noise will exponentially lower the coefficients of the higher-degree Fourier–Walsh functions. This leads to a mathematical distinction (Benjamini, Kalai, and Schramm 1999) between distributions that can be described by low-degree Fourier coefficients and are referred to as “noise stable” and distributions that are supported by high-degree Fourier coefficients and are referred to as “noise sensitive.” A general distribution can have both noise-stable and noise-sensitive components.  This is the reason that, on the one hand, the stable component in the noisy distribution is described by low Fourier–Walsh levels, and hence this component represents an extremely low computational power, and that, on the other hand, if the unstable part, namely, the contributions of the higher-degree Fourier–Walsh functions remain substantial, this contribution will be chaotic. 
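Here is a minimal sketch (my own illustration) of the mechanism: the Fourier–Walsh coefficients of a function on 0-1 sequences of length n, and a noise operator that multiplies each degree-d coefficient by (1-2\epsilon)^d, so the high-degree part is exponentially suppressed.

```python
import numpy as np
from itertools import product

n = 4
eps = 0.1                                    # noise rate per bit

def walsh_coefficients(f):
    """Fourier-Walsh coefficients, indexed by subsets S of {0,...,n-1} given as bitmasks."""
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for S in range(2 ** n):
        chi = [(-1) ** sum(x[i] for i in range(n) if S >> i & 1) for x in points]
        coeffs[S] = sum(f(x) * c for x, c in zip(points, chi)) / 2 ** n
    return coeffs

# Example: the parity of the first three bits, a degree-3 (noise-sensitive) function.
f = lambda x: (-1) ** (x[0] ^ x[1] ^ x[2])

coeffs = walsh_coefficients(f)
noisy = {S: c * (1 - 2 * eps) ** bin(S).count("1") for S, c in coeffs.items()}
for S, c in coeffs.items():
    if abs(c) > 1e-9:
        d = bin(S).count("1")
        print(f"degree {d}: coefficient {c:+.3f} -> after noise {noisy[S]:+.3f}")
```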

On Classical Information and Classical Computation

The question “why does this argument fail for classical computers?” is an appropriate critique of any argument asserting that quantum computers are not possible. Regarding my argument, the answer to this question is as follows: the path to large-scale quantum computers requires building quantum error-correcting codes on NISQ systems, and my theory asserts that this is impossible. At the same time, the primitive computational power represented by NISQ systems still allows for stable classical information, and subsequently even for classical computation.

The Weak Link in My Argument

The mathematical analysis that leads to the conclusion that noisy intermediate-scale quantum computers have primitive computational power is asymptotic. That is, a mathematical model of these computers is described and analyzed when the noise level is fixed, and the number of qubits increases. The conclusion I draw from this asymptotic behavior concerns the lowest noise level engineers can reach. I argue that it would be impossible to lower the noise level in such a way that enables us to create computers whose computational ability is low in an asymptotic analysis, but very high – indeed beyond the ability of classical computers – for several dozen qubits.  The move from an asymptotic argument regarding the model’s computational complexity to a concrete claim regarding engineering-related limitations of intermediate-scale computers is hardly standard, and my argument clashes with the strong intuition of experts in the field, according to whom an engineering effort would make it possible in principle (as well as in practice) to lower the noise level as much as we would like. Indeed, the multiple resources invested in building quantum computers, especially noisy intermediate-scale quantum computers, are based on the common position that there is no fundamental obstacle obstructing this effort.

Recent experimental results

In 2019, researchers from Google claimed to achieve in 300 seconds a sampling task that would require 10,000 years for a supercomputer.

However, since the Google paper appeared, the “10,000 years” estimate has been reduced to (roughly) 300 seconds by better classical algorithms.

In a joint work with Yosi Rinott and Tomer Shoham we also examine other aspects of the Google methods and claims.

In 2020 researchers from USTC, China claimed to achieve in 300 seconds a sampling task that requires a billion years for a supercomputer.

However, my 2014 paper with Guy Kindler on “Boson Sampling” (and subsequent works) largely refute these fantastic claims.


(Picture: János Pach)

Conclusion

Quantum computers are among the most important scientific developments of our time. In my judgement it is a serious possibility that quantum computers are not possible. This is what I expect, and I have been studying this possibility since 2005.

Understanding noisy quantum systems and potentially even the failure of quantum computers is related to the fascinating mathematics of noise stability and noise sensitivity and its connections to the theory of computing. Exploring this avenue may have important implications to various areas of quantum physics.

Evaluating the recent fantastic experimental claims is an exciting scientific matter on its own.

______________________________________________

Some references

For the readers of the blog let me add some references to my papers:

The argument against quantum computers

(i) G. Kalai, The argument against quantum computers, Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence (M. Hemmo and O. Shenker, eds.), pp. 399–422, Springer, 2019.

(ii) G. Kalai, The quantum computer puzzle, Notices AMS, May 2016

(iii) G. Kalai, The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims, The Intercontinental Academia Laws: Rigidity and Dynamics (M. J. Hannon and E. Z. Rabinovici, eds.), World Scientific, 2020. arXiv:2008.05188.

(iv) G. Kalai, Conjecture C still stands, arXiv, 2022.

This recent paper responds to a 2012 paper of Steve Flammia and Aram Harrow (see this post) regarding a certain “Conjecture C” discussed in my 2011 debate with Aram.

In the wider context of “mathematical computer science: theory and practice”

(v) G. Kalai, Three puzzles on mathematics, computation, and games, Proc. ICM 2018.

Boson sampling

(vi) G. Kalai and G. Kindler, Gaussian Noise Sensitivity and BosonSampling, 2014.

Our study of Boson Sampling provides a basis for understanding general NISQ systems.

(vii) G. Kalai and G. Kindler, Concerns about recent claims of a huge quantum computational advantage via Gaussian boson sampling,

This was a discussion paper for the 2020-2021 email exchange with the USTC photonic team and other researchers regarding the boson sampling claims.

On the Google 2019 experiment

(viii) Y. Rinott, T. Shoham, and G. Kalai, Statistical Aspects of the Quantum Supremacy Demonstration, Statistical Science  (2022)

(ix) G. Kalai, Y. Rinott and T. Shoham, Google’s 2019 “Quantum Supremacy” Claims: Data, Documentation, & Discussion

See also Sections 5 and 6 in reference (iii).

A lecture in Bandung, Indonesia with the same title

And my lecture in Lahore, Pakistan now on youtube


Remarkable: “Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage,” by Xun Gao, Marcin Kalinowski, Chi-Ning Chou, Mikhail D. Lukin, Boaz Barak, and Soonwon Choi

In this post I would like to report about an important paper (posted Dec. 2021) by Xun Gao, Marcin Kalinowski, Chi-Ning Chou, Mikhail D. Lukin, Boaz Barak, and Soonwon Choi. (I am thankful to Xun Gao and  Boaz Barak for helpful discussions).

Limitations of Linear Cross-Entropy as a Measure for Quantum Advantage

Here is the abstract:

Demonstrating quantum advantage requires experimental implementation of a computational task that is hard to achieve using state-of-the-art classical systems. One approach is to perform sampling from a probability distribution associated with a class of highly entangled many-body wavefunctions. It has been suggested that this approach can be certified with the Linear Cross-Entropy Benchmark (XEB). We critically examine this notion. First, in a “benign” setting where an honest implementation of noisy quantum circuits is assumed, we characterize the conditions under which the XEB approximates the fidelity. Second, in an “adversarial” setting where all possible classical algorithms are considered for comparison, we show that achieving relatively high XEB values does not imply faithful simulation of quantum dynamics. We present an efficient classical algorithm that, with 1 GPU within 2s, yields high XEB values, namely 2-12% of those obtained in experiments. By identifying and exploiting several vulnerabilities of the XEB, we achieve high XEB values without full simulation of quantum circuits. Remarkably, our algorithm features better scaling with the system size than noisy quantum devices for commonly studied random circuit ensembles. To quantitatively explain the success of our algorithm and the limitations of the XEB, we use a theoretical framework in which the average XEB and fidelity are mapped to statistical models. We illustrate the relation between the XEB and the fidelity for quantum circuits in various architectures, with different gate choices, and in the presence of noise. Our results show that XEB’s utility as a proxy for fidelity hinges on several conditions, which must be checked in the benign setting but cannot be assumed in the adversarial setting. Thus, the XEB alone has limited utility as a benchmark for quantum advantage. We discuss ways to overcome these limitations.

I. Three parameters for noisy quantum circuits:

  1. F – The fidelity. If \phi is the ideal (pure) state and \rho is the noisy state, then the fidelity is defined by F=\langle \phi | \rho | \phi \rangle,
  2. XEB – the linear cross-entropy estimator for the fidelity (a minimal sketch of this estimator is given right after this list),
  3. P(No err) – The probability of no errors (denoted in the paper by p_{no~err}).
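Here is a minimal sketch of the linear cross-entropy estimator (my own code, following the standard definition XEB = 2^n \cdot mean(p_{ideal}(x_j)) - 1 rather than the paper’s implementation), with a Porter–Thomas-like toy ideal distribution:

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n_qubits):
    """Linear cross-entropy benchmark: 2^n * E[p_ideal(x)] - 1 over the observed samples.
    samples: observed bitstrings encoded as integers in [0, 2^n);
    ideal_probs: the 2^n ideal (noiseless) output probabilities."""
    return 2 ** n_qubits * np.mean(ideal_probs[samples]) - 1

rng = np.random.default_rng(1)
n = 10
ideal = rng.exponential(size=2 ** n)          # Porter-Thomas-like statistics
ideal /= ideal.sum()

perfect = rng.choice(2 ** n, size=20_000, p=ideal)   # samples from the ideal distribution
uniform = rng.integers(0, 2 ** n, size=20_000)       # fully depolarized (uniform) samples

print(linear_xeb(perfect, ideal, n))   # close to 1
print(linear_xeb(uniform, ideal, n))   # close to 0
```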

II. Some issues:

a) The fidelity F cannot be read off directly from the distribution of the samples produced by the quantum computer. Suppose we are given an unlimited number of samples (or a large number of samples for which the empirical distribution is a good approximation to the noisy distribution); what is the best way to estimate the fidelity?

b) If we have a number of samples that is polynomial in 1/F, what are the best ways to estimate the fidelity?

c) What in general are the relations between F, XEB, and P(No err)?

III. A basic observation:

A basic observation of the paper is that when you apply depolarizing noise to the gates, the resulting distribution has a positive correlation with the ideal distribution. (Hence this leads to a positive XEB value.) The basic idea (as I see it) is simple: consider a single-qubit gate, which is a certain unitary operator U.
The space of such operators is spanned by I, X, Y, and Z. Let us assume that applying to U a unitary noise, say Y, leads to a new quantum circuit which gives uncorrelated samples and fidelity estimator 0. However, averaging over conjugation by I, X, Y, and Z (depolarizing noise on the qubit on which the gate acts), which replaces that qubit with a maximally mixed state, is a very basic form of noise, and this form of noise is expected to slash the fidelity estimator by a factor of four rather than send it to zero. (For two-qubit gates the story is similar, but this time there are 16 basic possibilities for unitary noise, so we can expect that depolarizing noise will slash the linear cross-entropy estimator to 1/16 of its value rather than to zero.)
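A minimal numerical check of this point (my own sketch): averaging the conjugations of a single-qubit state by I, X, Y, and Z gives the maximally mixed state, i.e. complete depolarization of that qubit, which is why such an error only divides the single-gate contribution by four rather than killing it.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Any single-qubit density matrix; here the pure state |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

# Averaging over the four Pauli conjugations is the fully depolarizing channel.
depolarized = sum(P @ rho @ P.conj().T for P in (I, X, Y, Z)) / 4
print(np.round(depolarized, 6))    # equals I/2, the maximally mixed state
```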

IV. Additional observations

The paper by Gao et al. describes various additional reasons for which the effect of gate errors will lead to positive correlations with the ideal distribution, and in general will lead to strict inequalities

(1)   XEB   >  P(No err)

and

(2)   F > P(No err)

First, it turns out that even a single unitary gate error can contribute to an increase of XEB (and also to an increase of F), and, moreover, the effect of two (or more) gate errors can also lead to an increased XEB (and an increased F).

In expectation we can actually expect:

(3)    XEB > F > P(No err)

This is demonstrated by Figure 1 of the paper.


V. The idea behind the spoofing algorithm

The way Gao, Kalinowski, Chou, Lukin, Barak, and Choi used this observation is by applying such depolarizing noise on a set of two-qubit gates whose removal splits the circuit into two parts. This leads to a sort of “patched” circuit for which the computation can be carried out separately on every patch, which gives much quicker classical algorithms.

VI. The asymptotic behavior

One interesting aspect of the paper is that (as far as I could understand) when the number of qubits grows, the “quantum advantage”, namely the advantage of the quantum algorithm over the classical algorithm, diminishes. As Gao et al. write:

“Remarkably, the XEB value of our algorithm generally improves for larger quantum circuits, whereas that of noisy quantum devices quickly deteriorates. Such scaling continues to hold when the number of qubits is increased while the depth of the circuit and the error-per-gate are fixed…”

Remark: This conclusion assumes that you need enough samples to verify the fidelity. Philosophically one can claim that the quantum advantage may apply for producing *one* sample; After all, your argument is based anyway on extrapolation, and for supremacy experiments you cannot verify even the individual amplitudes. (I made this claim in a discussion with Daniel Lidar regarding the very nice paper by Zlokapa, Boixo, and Lidar, Boundaries of quantum supremacy via random circuit sampling, but I couldn’t persuade Daniel.)

VII. Relevance to the statistical analysis and the concerns regarding the Google 2019 experiment

The paper is relevant to two important aspects of my Statistical Science paper with Yosi Rinott and Tomer Shoham and to our work that puts the Google 2019 experiment under scrutiny.

  1. The fact that XEB is systematically larger than P(No err) may support the concern that the precise agreement of the XEB estimator with the P(No err) computation (Formula (77)) is “too good to be true,” namely, it fits too well with the researcher’s expectations rather than with physical reality.
  2. The basic observation (III) implies that the exponential decay for the Fourier coefficients with the degrees, that we attributed only to readout errors is also caused by gate errors. Subsequently, the Fourier description of the data that we regarded as providing confirmation to the Google claim (see, Figure 2 in our recent paper) actually appears to show that the empirical data does not fit the theory.

Apropos Fourier methods and Xun Gao: the first time I heard of Xun Gao’s work was in connection with his excellent early paper with Duan, Efficient classical simulation of noisy quantum computation, which used Fourier methods to study quantum circuits.

Update (Dec. 2022):

1. There is a very interesting paper by Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani, A polynomial-time classical algorithm for noisy random circuit sampling. Here the noise model is based on an arbitrarily small constant amount of depolarizing noise applied to each qubit at each step. The analysis builds on Xun Gao and Luming Duan’s paper mentioned above and it seems related to the Fourier expansion. Let me note that under the assumption of fixed-rate readout errors, my simple algorithm with Guy Kindler applies and shows that approximate sampling can be achieved by low-degree polynomials, and hence it is in a very low-level computational subclass of P (and even in AC^0). I don’t know if this conclusion applies to the model of Aharonov et al.’s recent paper, and this is an interesting question.

Here is the abstract of Aharonov et al.’s paper

We give a polynomial time classical algorithm for sampling from the output distribution of a noisy random quantum circuit in the regime of anti-concentration to within inverse polynomial total variation distance. This gives strong evidence that, in the presence of a constant rate of noise per gate, random circuit sampling (RCS) cannot be the basis of a scalable experimental violation of the extended Church-Turing thesis. Our algorithm is not practical in its current form, and does not address finite-size RCS based quantum supremacy experiments. 

2. There is a nice blog post by Scott Aaronson on several Sycamore matters. On quantum supremacy Scott expresses an optimistic view:

“Stepping back, what is the current status of quantum supremacy based on Random Circuit Sampling? I would say it’s still standing, but more precariously than I’d like—underscoring the need for new and better quantum supremacy experiments. In more detail, Pan, Chen, and Zhang have shown how to simulate Google’s 53-qubit Sycamore chip classically, using what I estimated to be 100-1000X the electricity cost of running the quantum computer itself (including the dilution refrigerator!). Approaching from the problem from a different angle, Gao et al. have given a polynomial-time classical algorithm for spoofing Google’s Linear Cross-Entropy Benchmark (LXEB)—but their algorithm can currently achieve only about 10% of the excess in LXEB that Google’s experiment found.”

Since Scott and I are in agreement that classical algorithms are now ten orders of magnitude quicker than those used by the Google team in 2019, I don’t see much reason to debate the extra 2-3 orders of magnitude of supremacy in terms of electricity costs. But there is one technical matter worth noting regarding the 10% LXEB value that Gao et al.’s method achieved. (Above we use XEB for LXEB.)

As we explain in the post, the basic observation of Gao et al. is that when we apply depolarizing noise on a set of gates that separates the circuit into two parts, there is a positive correlation with the ideal distribution, and there is a better classical algorithm which allows Gao et al. to achieve an asymptotically better LXEB for a given running time. In the Google 2019 paper, the Google team talked about patch circuits, obtained by deleting the two-qubit gates whose removal separates the full circuit into two parts, and also about elided circuits, where only part of these “boundary” two-qubit gates are deleted and quick classical algorithms are still available. It is plausible that using these elided circuits (or alternatively even leaving alive a single two-qubit gate between the two parts) will allow quick classical algorithms that bridge the 10% gap that Scott mentioned.

Further update: After a long break I took part in the interesting discussion over SO. Here is a brief summary from the discussion of one of the pillars of my argument against quantum computers:

“Computational supremacy not supported by asymptotic analysis cannot be reached in practice”

This principle applies to NISQ systems, for which there are by now various results (including the recent result by Aharonov et al.) showing that asymptotically they represent efficient classical computation.

3. Regarding the other news, the Sycamore wormhole simulation, my general view is that a demonstration of a quantum state or evolution on a few qubits could be impressive even if the classical computations required for confirming the claims are not computationally hard. For example, it would be very impressive to reach a “dense” cloud of quantum states with 3-6 qubits. (I once asked about it here.) Putting the wormholes aside, I would be very interested to hear what precise circuit the Sycamore ran, and how the outcomes were classically verified.

The distinction between classical simulation and quantum simulation that plays a role in Scott’s post was discussed several times in my 2012 debate with Aram Harrow. In fact our very last round of comments (Nov. 2012) was about this issue:

(Aram) (emphasis added): The same story can be told about the exponential-time classical emulation, except that now the encoding is no longer a unitary, or indeed linear, map.

I agree that the quantum emulation feels closer to the original physics than the classical emulation does. But this seems like a rather imprecise statement, and not something that can be formalized.

(Gil) This is great, Aram, in the last paragraph you say that the distinction cannot be formalized and in the previous paragraph you offer a way to formalize it! :)

(Aram) good point. :)


James Davies: Every finite colouring of the plane contains a monochromatic pair of points at an odd distance from each other.

Here is a lovely piece of news: the following paper by James Davies was posted on the arXiv a few weeks ago. The paper uses spectral methods to settle an old question of Moshe Rosenfeld, posed in 1994.

(See this 2009 post over here and this 2008 paper by Moshe. Rosenfeld’s problem first appeared in print in a 1994 paper by Paul Erdős, and Alexander Soifer also wrote in several places about this problem.)

Odd distances in colorings of the plane

Theorem (Davies):

Every finite colouring of the plane contains a monochromatic pair of points at an odd (integral) distance from each other.

The paper proves a more general result and discovers a new class of triangle-free graphs with large chromatic number. Congratulations, James.


Challenge: what is the smallest odd distance between a monochromatic pair in the Hoffman-Soifer coloring?


Bo’az Klartag and Joseph Lehec: The Slice Conjecture Up to Polylogarithmic Factor!


Bo’az Klartag (right) and Joseph Lehec (left)

In December 2020, we reported on Yuansi Chen’s breakthrough result on Bourgain’s slicing problem and the Kannan–Lovász–Simonovits conjecture. It is a pleasure to report on further fantastic progress on these problems.

Bourgain’s slicing problem (1984):  Is there c > 0 such that for any dimension n and any centrally symmetric convex body K ⊆ \mathbb R^n of volume one, there exists a hyperplane H such that the (n − 1)-dimensional volume of K ∩ H is at least c?

Some time ago we reported on Yuansi Chen’s startling result that c can be taken as n^{-o(1)}. More precisely, Chen proved:

Theorem (Chen, 2021): For any dimension n and any centrally symmetric convex body K ⊆ \mathbb R^n of volume one, there exists a hyperplane H such that the (n − 1)-dimensional volume of K ∩ H is at least \frac {1}{L_n} where

L_n=C \exp ( \sqrt {\log n} \sqrt {3\log \log n} ).

A major improvement was recently achieved by Bo’az Klartag and Joseph Lehec:

Theorem (Klartag and Lehec, 2022): For any dimension n and any centrally symmetric convex body K ⊆ \mathbb R^n of volume one, there exists a hyperplane H such that the (n − 1)-dimensional volume of K ∩ H is at least \frac {1}{L_n} where

L_n = C (\log n)^4.

Klartag and Lehec’s argument (like Chen’s earlier argument) relies on Ronen Eldan’s stochastic localization, with a new ingredient being the functional-analytic approach from a paper by Klartag and Eli Putterman.

This is fantastic progress. Congratulations Bo’az and Joseph!

Update: I was happy to learn that Arun Jambulapati, Yin Tat Lee, and Santosh S. Vempala further improved in this paper the exponent from 4 to 2.2226.


Alef’s Corner: “It won’t work, sorry”


Alef thanks Nicolas Curien for the triangulation. Click here for other posts with Alef’s art.


Test Your intuition 51

Suppose that K and L are two compact convex sets in space. Suppose that K contains L. Now consider two quantities:

  • X is the average volume of a simplex formed by four points in K drawn uniformly at random.
  • Y is the average volume of a simplex formed by four points in L drawn uniformly at random.

TYI: Is it always the case that X ≥ Y?
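For readers who want to experiment before answering, here is a minimal Monte Carlo sketch (my own, with a pair of concentric balls as a purely hypothetical example of K and L) that estimates the average simplex volume for a convex body given by a membership oracle:

```python
import numpy as np

def average_simplex_volume(inside, lo, hi, trials=5000, seed=0):
    """Monte Carlo estimate of the expected volume of the simplex spanned by
    four uniform random points in a 3-dimensional convex body.
    inside: membership oracle for the body; [lo, hi]^3 is an enclosing box."""
    rng = np.random.default_rng(seed)

    def sample_point():
        while True:                                   # rejection sampling
            x = rng.uniform(lo, hi, size=3)
            if inside(x):
                return x

    total = 0.0
    for _ in range(trials):
        p0, p1, p2, p3 = (sample_point() for _ in range(4))
        total += abs(np.linalg.det(np.stack([p1 - p0, p2 - p0, p3 - p0]))) / 6
    return total / trials

# Hypothetical example pair: K = unit ball, L = a smaller ball contained in K.
ball = lambda r: (lambda x: np.dot(x, x) <= r ** 2)
X = average_simplex_volume(ball(1.0), -1.0, 1.0)
Y = average_simplex_volume(ball(0.5), -1.0, 1.0)
print(X, Y)
```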


The Google Supremacy Experiment: Data, Information, Discussions, and Three Questions.

Yosi Rinott, Tomer Shoham, and I wrote a manuscript regarding our study of the Google 2019 supremacy experiment. This is still a draft and comments or corrections are most welcome. (The paper already incorporates a few comments by the Google team. October 11, 2022: a new version is posted, based on excellent comments and many corrections by Carsten Voelkmann. October 23, 2022: here is a new version; we thank several colleagues for useful comments.)

Google’s 2019 “Quantum Supremacy” Claims: Data, Documentation, & Discussion

by Gil Kalai, Yosef Rinott, and Tomer Shoham

Abstract: In October 2019, Nature published a paper describing an experimental work that took place at Google. The paper claims to demonstrate quantum (computational) supremacy on a 53-qubit quantum computer. Since September 2019 the authors have been involved in a long-term project to study various statistical aspects of the Google experiment. In particular, we have been trying to gather the relevant data and information, to reconstruct and verify those parts of the Google 2019 supremacy experiments that are based on classical computations (unless they require too heavy computation), and to put the data under statistical analysis. We have now (August 2022) concluded the part relating to the gathering of data and information needed for our study of the 2019 Google experiment, and this document describes the available data and information for the Google 2019 experiment and some of our results and plans.

The manuscript describes the stage of gathering data and information needed for our study and our analysis based on this data will be described separately. Statistical analysis of the Google experiment is already described in our Statistical Science paper  Statistical Aspects of the Quantum Supremacy Demonstration. (In this paper we mainly rely on data of 12-qubit and 14-qubit circuits.) Some preliminary statistical analysis is also given in Sections 6 and 7 of my paper  The argument against quantum computers, the quantum laws of nature, and Google’s supremacy claims.

Quo Vadis random circuit sampling?

Here are three concrete questions about random circuit sampling for a quantum circuit C of the kind discussed in the Google paper, with 22 qubits and depth 14. These three questions refer, of course, to the ability of current quantum computers (the tasks are quite easy to achieve with classical simulations).


1. Can humanity produce at present samples which are good approximations of the Google noise model or any other specific noise model?

2. Has humanity reached the ability to produce samples for quantum circuit C with fidelity according to the linear cross entropy fidelity-estimator above 0.15?

3. Has humanity reached the ability to predict, for a quantum circuit C, with good accuracy, the linear cross entropy fidelity estimator based on the fidelity of the individual components of this circuit?

The findings of our Statistical Science paper indicate that the answer to the first question is negative. The Google supremacy paper itself and subsequent confirmations present a strong case for a positive answer to the other two questions (even for larger circuits). However, there are remaining doubts and concerns that need to be carefully checked, and not enough replications to regard the answer as a solid yes.

How to check a 20-qubit quantum computer?

It was announced recently that Israel is going to build a quantum computer. It is an interesting question to find a methodology to confirm that a 10-qubit, 20-qubit or 50-qubit quantum computer genuinely performs quantum computation or rather that the experimental data represents classical computation. In our Statistical Science paper we offered a certain blind experiment mechanism, but it still requires that classical simulation be much slower than quantum computing.

In the new paper we also propose a mechanism for letting other groups test calibration methods (which are crucial ingredients in such experiments) on the Sycamore QC or other NISQ computers. In our discussions, the Google team endorsed both these proposals for future experiments. (See Section 5 of the new manuscript.)

 
