“If there’s so much shit around, there has to be a pony somewhere.”

Arundhati Roy, The God of Small Things

and

Give me a couple of years free from other duties, and I shall complete the task – the task of copying the whole book of nature, and of writing the new science

Francis Bacon, according to Karl Popper

Francis Bacon drawn by Neta Kalai

The hard choice between family, teaching and committee work on the one hand, and writing the whole book of nature on the other

Blogged again (original 2008, and Chapter 23 in “Gina Says”), to cheer you up in difficult times.

There has to be a pony somewhere

From Arundhati Roy’s book “The God of Small Things”

Here is a tale from Arundhati Roy’s book “The God of Small Things”. In the book Margaret Kochamma tells the following joke to Chacko in Oxford, England, where the two meet: A man had twin sons… Pete and Stuart. Pete was an Optimist and Stuart was a Pessimist… On their thirteenth birthday their father gave Stuart – the Pessimist – an expensive watch, a carpentry set and a bicycle… And Pete’s – the Optimist’s – room he filled with horse dung… When Stuart opened his present he grumbled all morning.

Left: Gowers’s book Mathematics: A Very Short Introduction. Right: C. elegans; Boson Sampling can be seen as the C. elegans of quantum computing. (See this paper.)

Today, I will briefly explain the main argument in my theory regarding quantum computers. In this post I will use the term HQCA (“Huge Quantum Computational Advantage”) instead of “quantum supremacy” that Google uses and “quantum advantage” that IBM and Jiuzhang Boson Sampling group use. Before we move to the main topic of the post, here are two related updates.

Boson Sampling update

A group of researchers, mainly from USTC in Hefei, China, claimed to achieve a fantastic quantum computational advantage using Jiuzhang, a photonic device for “Boson Sampling”. It now seems plausible that the level-k approximation from my paper with Guy Kindler, Gaussian Noise Sensitivity and BosonSampling, will “spoof” the claims, namely, will show an efficient way to produce samples of similar fidelity on a digital computer. (There are other related approaches for spoofing the claim, mainly one by Jelmer Renema, and there are interesting connections between Jelmer’s approximations and ours. Sergio Boixo, John Martinis, and other researchers also raised concerns regarding the new HQCA claims.) While it will probably take several weeks or months to clarify matters further, it is already clear, in view of my 2014 results with Kindler, that the claims of achieving in 200 seconds samples that would require millions of computing years on current supercomputers are unfounded.

For more details see my post about the recent HQCA Boson Sampling experiment, and also Scott Aaronson’s recent post.

Scott Aaronson’s apology

Let me mention that Scott Aaronson’s recent post issued the following apology addressed to me, which I gladly welcome:

The argument against quantum computers: a very short introduction

Our starting point is the following recent assertion by Scott Aaronson:

Many colleagues told me they thought experimental BosonSampling was a dead end, because of photon losses and the staggering difficulty of synchronizing 50-100 single-photon sources. They said that a convincing demonstration of quantum supremacy would have to await the arrival of quantum fault-tolerance—or at any rate, some hardware platform more robust than photonics. I always agreed that they might be right.

My argument for why all attempts to reach HQCA as well as quantum error-correction via NISQ systems will fail is based on two claims.

A) “A convincing demonstration of HQCA would have to await the arrival of quantum fault-tolerance”

B) The error rate required for quantum error correcting codes needed for quantum fault tolerance is even lower than that required for HQCA.

If claims A) and B) are both correct, then it is not possible to reach either HQCA or good-quality quantum error-correction via NISQ systems, and therefore all attempts to reach them via NISQ systems will fail.

As Scott Aaronson himself asserted, many people in the quantum computing community agree (or used to agree) with claim A). And there is also wide agreement and experimental evidence for claim B). But fewer people see the logical consequence of claims A) and B) put together.

A little more

The basis for Claim A

We need to put claim A on theoretical grounds. Indeed, it is based on a computational complexity assertion about the computational power of NISQ systems, plus a heuristic principle (“naturalness”) about the relation between asymptotic statements and the behaviour of intermediate-scale systems.

My work with Guy Kindler shows that for a constant level of noise, NISQ systems represent a very low-level computational power that we call LDP (low-degree polynomials). (We studied BosonSampling specifically, but the argument extends.)

The naturalness principle is quite common when you relate theory to practice: E.g., when Scott Aaronson and Sam Gunn argue that “spoofing” the Google statistical test for 53 qubits is probably very hard for digital computers, they make a similar leap from asymptotic insights to hardness insights for intermediate-scale systems.

Non-stationary and chaotic behaviour

There are several facts that strengthen the argument for claim A. Let me mention one: We already mentioned that for constant level of noise, NISQ systems represent very low-level computational power. Our work also asserts that for a wide range of lower levels of noise the empirical distribution from sampling based on NISQ systems will be very noise-sensitive: we can expect non-stationary and even chaotic empirical outcomes. This is a great thing to test empirically!

But how is it that classical computation is possible?

This is a beautiful piece of the puzzle. The very low computational complexity class describing the power of NISQ systems does support a very rudimentary form of classical error-correction. This is the reason that robust classical information is possible, so that I can write this post for you to read. (See also this cartoon post from 2017.)
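The classical side of this claim can be made concrete with the simplest error-correcting device: a repetition code decoded by majority vote, an operation of very low computational complexity. The following minimal Python sketch is my illustration, not code from the post; the parameters (5-fold repetition, 10% noise) are arbitrary choices.

```python
import random

def encode(bit, r=5):
    """Repetition code: send each bit r times."""
    return [bit] * r

def transmit(codeword, p, rng):
    """Flip each transmitted bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in codeword]

def decode(received):
    """Majority vote -- a very low-complexity (low-degree) operation."""
    return 1 if sum(received) > len(received) / 2 else 0

rng = random.Random(0)
p = 0.1          # per-bit noise rate
trials = 10000
errors = sum(decode(transmit(encode(1), p, rng)) != 1 for _ in range(trials))
print(errors / trials)  # far below p: the noise is suppressed, not amplified
```

With a 10% per-bit error rate, five-fold repetition already cuts the decoded error rate by roughly an order of magnitude, which is the sense in which very low-level computation suffices for robust classical information.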

Topological quantum computing (4 Nick)

Topological quantum computing is a proposed alternative path to very stable quantum qubits and quantum computing. Accepting claims A) and B) does not directly imply that topological quantum computing will fail, but there are good reasons that the argument does extend.

Was HQCA achieved already?

There are two papers claiming it was! One describing the Sycamore Random Circuit Sampling experiment was published in Nature in 2019, and one describing the Jiuzhang Boson Sampling experiment was published in Science in 2020. I find both these claims unconvincing.

An Open Problem

One question that certainly comes to mind is whether the technological advances represented by Jiuzhang, and earlier by Sycamore and by other superconducting and ion-trap NISQ systems, could bear technological and scientific fruit even without achieving a computational advantage.

This is especially interesting if my theory is correct, but it is also interesting if it is not.

More information

A few of my papers and lectures

The argument against quantum computers, Quantum, Probability, Logic: Itamar Pitowsky’s Work and Influence (M. Hemmo and O. Shenker, eds.), pp. 399–422, Springer, 2019. (Section 3.5 refers to topological quantum computing.)

Boson Sampling was studied in a 2014 paper by Guy Kindler and me, Gaussian Noise Sensitivity and BosonSampling. Our study of Boson Sampling provides the basis for understanding general NISQ systems.

Boson Sampling as the C. elegans of quantum computing

Caenorhabditis elegans, or C. elegans for short, is a species of free-living roundworm whose biological study has shed much light on more complicated organisms. In my paper in Itamar Pitowsky’s memorial volume I regard BosonSampling as the C. elegans of quantum computing. Several technical and conceptual aspects of quantum computation are becoming much clearer for this model.

A remark about the terminology

John Preskill’s term “quantum supremacy” is problematic because the word “supremacy” (e.g. “the supremacy of the king”) often refers to superiority or dominance across the board, which is not the case for quantum computation. It is also unfortunate in terms of its connotations. I agree with Preskill that “advantage” is too weak: e.g., when it is claimed that the new Boson Sampling experiment can achieve in 200 seconds what would take an ordinary supercomputer millions of years, this is not merely an “advantage.” This is why I propose “huge quantum computational advantage” (HQCA).

Robert Aumann, Benjamin Netanyahu, and quantum computers

This is a continuation of our series of posts on the HUJI seminar 2020 open problems. This time the post was kindly written by Eitan Bachmat who proposed the problem.

My summary: understanding the distribution of longest increasing subsequences of random permutations, and the combinatorics of longest increasing subsequences of general permutations, is among the most fascinating stories in modern mathematics. When you also add weights, this is largely uncharted territory.

Heaviest increasing subsequences

Setup: Let X be a bounded distribution with support on an interval [1,T]. Given a permutation π of size N, an increasing subsequence is a set of indices i_1 < i_2 < … < i_k such that π(i_1) < π(i_2) < … < π(i_k). If in addition we attach to each index i a weight w_i, the weight of the subsequence is the sum w_{i_1} + … + w_{i_k}. We are interested in the asymptotic behavior, as N tends to infinity, of the random variable given by the weight of the heaviest increasing subsequence, where π is chosen uniformly among permutations of size N and the w_i are sampled i.i.d. from X.

A standard sub-additivity argument shows that with high probability, as N tends to infinity, the weight of the heaviest increasing subsequence, normalized by the square root of N, approaches a constant H_X. The theorem of Vershik and Kerov on the longest increasing subsequence in a random permutation states that if X is the Dirac distribution which always takes the value a, then H_X = 2a. From this result we can conclude that for any X, H_X ≥ 2 (take the longest increasing subsequence, disregarding the weights; every weight is at least 1). Using similar elementary arguments it can also be shown that

Which means that is a much better estimate of than ( can be as large as for distributions with support in [1,T]).
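For readers who want to experiment with these quantities, here is a small Python sketch (my code and notation, not from the post): an O(N²) dynamic program for the heaviest increasing subsequence, used to Monte Carlo estimate the normalized constant for one choice of X.

```python
import math
import random

def heaviest_increasing_subsequence(perm, weights):
    """O(N^2) dynamic program: best[i] is the maximum weight of an
    increasing subsequence of perm that ends at index i."""
    best = [0.0] * len(perm)
    for i in range(len(perm)):
        best[i] = weights[i] + max(
            (best[j] for j in range(i) if perm[j] < perm[i]), default=0.0
        )
    return max(best, default=0.0)

# Monte Carlo estimate of the normalized weight for one choice of X
# (here X = Uniform[1, T]); the printed value estimates the limiting constant.
rng = random.Random(1)
N, T = 400, 4
perm = list(range(N))
rng.shuffle(perm)
weights = [rng.uniform(1, T) for _ in range(N)]
print(heaviest_increasing_subsequence(perm, weights) / math.sqrt(N))
```

Averaging over many samples and increasing N gives numerical estimates of the constant for any distribution X one cares to plug in.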

Problem 1: Is it true that for all X

Numerical experimentation with many distributions has so far failed to find a counterexample. Even the weaker bound

for some constant is unknown to the best of our knowledge and would also be very nice to have.

We can conjecture a stronger statement, by considering the space of all distributions (say on [1,T]) with its convex structure given by mixtures of distributions. We say that is a mixture of and , if for all , .

Problem 2: Is it true that is a concave function with respect to mixtures, i.e., for any mixture we have

The first problem is essentially this statement for the mixture which expresses X as a generalized mixture of the extreme points, which are Dirac measures. We don’t have counterexamples for the stronger statement either.

The third problem concerns the upper bound.

Problem 3: Is there a constant such that, for all bounded X? We believe this is false. In particular, consider the family of Bounded Pareto distributions with parameter, with density given by for and the normalizing constant. We believe that the ratio for this particular family is unbounded (and in some moral sense, that’s the only case).

Problem 4: Compute the constant for ANY distribution which is not a Dirac distribution. The best chance seems to be the exponential or geometric distributions, where Kurt Johansson answers the corresponding last passage percolation problem: https://arxiv.org/pdf/math/9903134.pdf

Motivation

The problems above are motivated by the study of the asymptotic behavior of certain finite particle (interval) systems which we call “airplane boarding”. The system has a parameter w. At any given moment, the system consists of closed moving intervals, all of length w, which we call “passengers” and which can only overlap at endpoints. The index of an interval/passenger is called its “queue location”. The ordering of the intervals according to queue locations never changes, i.e., passengers cannot pass each other while “boarding”. The interval is called the “airplane aisle”. Each interval has a target “row” in the airplane aisle, and the assignment of rows to passengers is a permutation (the system can be generalized to handle more than one “passenger” per “row”). To each “passenger” we also attach an “aisle clearing time”. Initially, at time 0, all the intervals are left of the aisle, in descending order of their index. At any given time point, each interval/passenger moves at infinite speed towards its destination “row”, and if it is not blocked by other intervals to its right, it positions itself with its left endpoint at its row. When a passenger reaches its target “row”, it waits its aisle clearing time before leaving the system and allowing the passengers behind it to advance further towards their rows. The “boarding time” is the time when all intervals have left the system and the process terminates.

From this description, one can check that the boarding time when w = 0 is simply the heaviest increasing subsequence for the row permutation with the aisle clearing times as weights, so it is clear that the problems above are relevant to the analysis of “airplane boarding”. To motivate them more specifically, let’s introduce one more element, a “boarding policy”, which makes the boarding time into a random variable. The simplest policy, which we have already encountered, is Random boarding. In Random boarding we assume that the queue position, the row, and the weights are all independent random variables. The first two are chosen uniformly, generating a uniformly random permutation, and we think of the weights as drawn i.i.d. from a distribution X, which is the aisle clearing time distribution of all passengers as a single group. We observe that the constant discussed above is a normalized asymptotic indicator of the boarding time of the Random policy.

Suppose now that we have two (or, in general, more) subpopulations with different aisle clearing times: say, passengers with no hand luggage and the other passengers, or passengers who need assistance or are travelling with children and the other passengers as a second group. If we denote by Y and Z the aisle clearing times of the first and second subpopulations and by p the portion of passengers from the first subpopulation, then the aisle clearing time of the whole population will be the mixture pY + (1 − p)Z. We use it as a measure of the slowness or quickness of a group and will assume that the first group, with aisle clearing time distribution Y, is the group of “slow passengers”. Given this scenario, we can naturally define two policies which are practiced by airlines. The first policy is the “Slow First” (SF) policy, where we use a uniformly random permutation, but for the weights, those of the first passengers in the queue are drawn i.i.d. from Y and the others are drawn i.i.d. from Z. The other policy is “Fast First” (FF) which, as the name suggests, draws the weights of the first passengers i.i.d. from Z and the others from Y.
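In the w = 0 case, boarding time reduces to a heaviest increasing subsequence, so the three policies can be compared in simulation. The sketch below is mine: the policy names and the role of p follow the text, but the specific values of p, the slow/fast clearing times, and the sizes are illustrative assumptions (and I deliberately print no ranking, to avoid spoiling the test-your-intuition question).

```python
import math
import random

def boarding_time(rows, times):
    """w = 0 case: boarding time equals the heaviest increasing
    subsequence of the row sequence, weighted by aisle clearing times."""
    best = [0.0] * len(rows)
    for i in range(len(rows)):
        best[i] = times[i] + max(
            (best[j] for j in range(i) if rows[j] < rows[i]), default=0.0
        )
    return max(best, default=0.0)

def simulate(policy, n=200, p=0.2, slow=3.0, fast=1.0, reps=50, seed=0):
    """Average normalized boarding time under a policy; only the
    assignment of clearing times to queue positions differs."""
    rng = random.Random(seed)
    k = int(p * n)  # number of slow passengers
    total = 0.0
    for _ in range(reps):
        rows = list(range(n))
        rng.shuffle(rows)  # uniformly random row permutation in all policies
        if policy == "random":
            times = [slow if rng.random() < p else fast for _ in range(n)]
        elif policy == "slow_first":
            times = [slow] * k + [fast] * (n - k)
        else:  # "fast_first"
            times = [fast] * (n - k) + [slow] * k
        total += boarding_time(rows, times)
    return total / (reps * math.sqrt(n))

for policy in ["random", "slow_first", "fast_first"]:
    print(policy, simulate(policy))
```

Running it (and increasing n) lets you test your intuition on the question below before reading the answer.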

Now we come to the test-your-intuition portion of the post. Spoiler alert: please test your intuition before continuing to read below the fold. Assuming “normal” coach-class conditions in a full-occupancy airplane, with a realistic portion of slow passengers, say p = 0.2, and a realistic effective aisle clearing time ratio, rank the 3 policies, Random, Slow First and Fast First, from fastest to slowest in terms of average asymptotic boarding time (ascending boarding time order).

Yuansi Chen gave almost constant bounds for Bourgain’s 1984 slicing problem and for the Kannan-Lovász-Simonovits 1995 conjecture. In this post I will describe these conjectures.

Bourgain’s slicing problem (1984): Is there c > 0 such that for any dimension n and any centrally symmetric convex body K ⊆ R^n of volume one, there exists a hyperplane H such that the (n − 1)-dimensional volume of K ∩ H is at least c?

Vitali Milman wrote: “I was told once by Jean that he had spent more time on this problem and had dedicated more efforts to it than to any other problem he had ever worked on.”

A positive answer to the problem is sometimes referred to as the hyperplane conjecture. Bourgain himself proved in the late 1980s that the answer is yes if c is allowed to decay like n^{-1/4}/log n, and twenty years later Boaz Klartag shaved away the log n factor. For more on the problem see this article by Boaz Klartag and Vitali Milman, and for the history see the moving foreword by Vitali Milman.

Yuansi Chen’s result (combined with an earlier result of Klartag and Ronen) asserts that c can be taken to be n^{-o(1)}.

Computing the volume of convex sets and the 1995 Kannan-Lovasz-Simonovits conjecture

Kannan, Lovász and Simonovits (KLS) conjectured in 1995 that for any log-concave distribution, the Cheeger isoperimetric coefficient equals, up to a universal constant factor, that achieved by half-spaces. The crucial point here is that the constant does not depend on the dimension.

Let K be a convex body of volume one in R^n and consider the uniform probability distribution on K. Given a subset A of K of volume t ≤ 1/2, we can ask what is the surface area S(A) of the part of the boundary of A in the interior of K. (We don’t want to count points in the surface of K itself.) Let f(t) be the minimum value of S(A) over subsets A of volume t. The Cheeger constant, or expansion constant, is the minimum value of f(t)/t. The KLS conjecture asserts that up to a universal constant independent of the dimension, the Cheeger constant is realized by separating K with a hyperplane! (The conjecture is also called the Kannan, Lovász and Simonovits hyperplane conjecture.)
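In symbols (my notation; the post’s formulas were rendered as images): writing f(t) for the minimal boundary area S(A) over subsets A ⊆ K of volume t, the Cheeger constant and the KLS conjecture read

```latex
% Cheeger (expansion) constant of a convex body K with vol(K) = 1:
h(K) \;=\; \min_{0 < t \le 1/2} \frac{f(t)}{t},
\qquad
f(t) \;=\; \min_{\substack{A \subseteq K \\ \operatorname{vol}(A) = t}} S(A).

% KLS conjecture: half-space cuts are optimal up to a universal constant C
% (the minimum below runs over half-spaces H with vol(K \cap H) <= 1/2):
h(K) \;\ge\; \frac{1}{C}\,
\min_{H} \frac{S(K \cap \partial H)}{\operatorname{vol}(K \cap H)}.
```

Since a half-space cut is one admissible A, the reverse inequality holds trivially; the content of the conjecture is the dimension-free constant C.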

The motivation for the KLS conjecture came from a major 1990 breakthrough in the theory of algorithms. Dyer, Frieze, and Kannan found a polynomial-time randomized algorithm to approximate the volume of a convex set K in R^n. Kannan, Lovász and Simonovits (later joined by Santosh Vempala) wrote a series of ingenious papers where they improved the exponents of the polynomial and in the process introduced new methods for proving rapid mixing of Markov chains and new Euclidean isoperimetric results and conjectures.

As it turned out, the KLS conjecture implies Bourgain’s hyperplane conjecture and also the “thin-shell conjecture”, which I will not describe here.

Yuansi Chen proved an almost constant lower bound on the isoperimetric coefficient in the KLS conjecture!

This also gives faster mixing time bounds of Markov chain Monte Carlo (MCMC) sampling algorithms on log-concave measures.

Ronen Eldan’s stochastic localization scheme and the strategy proposed by Yin Tat Lee and Santosh Vempala

Yuansi Chen’s proof relies on Ronen Eldan’s stochastic localization scheme, introduced in this 2013 paper.

There is not much more I can tell you now about Ronen’s theory, about Yin Tat and Santosh’s strategy, about Yuansi’s proof, and about further intermediate results. (This post here might be somewhat related to Ronen’s method.) But let me very briefly mention a few related matters.

It is a very interesting question (that I heard from Moshe Vardi) to understand the connection between practice and theory regarding Markov chain Monte Carlo (MCMC) sampling algorithms, e.g., for approximating permanents and volumes.

The Busemann-Petty problem from 1956 asks whether a symmetric convex body with larger central hyperplane sections necessarily has a larger volume. That is, if K and L are centrally symmetric convex bodies and all hyperplane sections of K (through the origin) have larger volume than those of L, is it true that the volume of K is larger than that of L? Busemann and Petty showed that the answer is positive if K is a ball. (A positive answer when L is a ball would have given a very strong version of the slicing conjecture.) But the answer is negative for dimensions larger than or equal to 5.

Here is a brief history:

Larman and Rogers constructed a counterexample in dimension 12; in their example L was a ball and K was a perturbation of a ball. Ball showed that taking L to be a ball and K the standard cube gives a counterexample in dimension 10; Giannopoulos and Bourgain gave counterexamples in dimension 7, and Papadimitrakis finally found a counterexample in dimension 5. Gardner gave a positive answer for dimension 3, and Zhang gave a positive solution for dimension 4. Erwin Lutwak’s theory of intersection bodies plays an important role in the solution. Gardner, Koldobsky, and Schlumprecht presented a unified solution in all dimensions.

This is a really beautiful story, and we saw a slice of it in our second “test your intuition”. In the late 90s I had a “what is new” corner on my homepage and I remember devoting it to some of these developments.

Here are the two problems presented by Ehud Friedgut. The first arose in work of Friedgut, Kindler, and me in the context of studying Lionel Levine’s infamous hat problem. The second is Lionel Levine’s infamous hat problem itself.

Problem 1: Independence numbers for random induced subgraphs

For a graph G, let α(G) be its independence number, namely the maximum size of an independent set of vertices in G. Now consider a random subset U of the set of vertices of G, where each vertex belongs to U with probability 1/2, independently, and let G[U] be the subgraph induced from G on U. Denote the expected value of α(G[U]) over all such random induced subgraphs by ᾱ(G).

Conjecture: For every real number , there is such that if then .

If we look at the example of the complete graph on 1/τ vertices, we get that the expected value is 1 − 2^{−1/τ}. The reason is that the independence number of the induced subgraph is always 1, except in the one case, occurring with probability 2^{−1/τ}, where the subgraph is empty.
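The complete-graph calculation can be checked exactly by brute force. The sketch below is mine (with m = 4 standing in for 1/τ): it averages the independence number over all induced subgraphs of K_m and recovers 1 − 2^{−m}.

```python
from itertools import combinations

def independence_number(vertices, edges):
    """Brute-force alpha(G); fine for small graphs."""
    vs = list(vertices)
    for size in range(len(vs), 0, -1):
        for S in combinations(vs, size):
            Sset = set(S)
            if all(not (u in Sset and v in Sset) for (u, v) in edges):
                return size
    return 0  # the empty graph

def average_alpha(vertices, edges):
    """Exact expectation of alpha over a uniform random induced subgraph."""
    vs = list(vertices)
    total = 0
    for r in range(len(vs) + 1):
        for U in combinations(vs, r):
            Uset = set(U)
            sub = [(u, v) for (u, v) in edges if u in Uset and v in Uset]
            total += independence_number(U, sub)
    return total / 2 ** len(vs)

# Complete-graph example from the text: alpha of any nonempty induced
# subgraph of K_m is 1, so the average is 1 - 2^{-m}.
m = 4
K4_edges = [(i, j) for i in range(m) for j in range(i + 1, m)]
print(average_alpha(range(m), K4_edges))  # 0.9375 = 1 - 1/16
```

The same brute-force routine can be used to probe the conjecture on other small graphs.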

An example coming from a random graph gives that and the strong conjecture is that can be taken to be . (Here, are constants.)

LIONEL LEVINE

Problem 2: Lionel Levine’s infamous hat problem.

Warm up problem:

Each of the players is assigned a white or black hat at random. Each person sees the hats of all the others and needs to guess the color of her own hat. They can decide in advance on a guessing strategy, and their aim is to maximize the probability that they all guess correctly. How well can they do?

It turns out that there is a strategy with a 50% chance of winning: all the players guess under the assumption that there is overall an even number of black hats.
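The parity strategy can be verified by exhaustive enumeration. This small Python check is mine (encoding 1 = black): each player guesses the bit that would make the total number of black hats even, and the team wins on exactly the assignments with even parity.

```python
from itertools import product

def all_correct(hats):
    """Player i sees every hat but her own and guesses the bit that
    makes the total number of 1s (black hats) even."""
    guesses = [sum(hats[:i] + hats[i + 1:]) % 2 for i in range(len(hats))]
    return guesses == list(hats)

n = 4
wins = sum(all_correct(h) for h in product([0, 1], repeat=n))
print(wins / 2 ** n)  # 0.5: the team wins iff the parity assumption holds
```

Note that when the parity assumption fails, every single guess is wrong: the strategy correlates the players’ errors so that they all succeed or all fail together.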

The real infamous hat problem by Lionel Levine

Now every player has an infinite tower of hats on her head and, again, each hat is taken to be white or black at random. Again each player sees all the hats of the other players, and again the players can plan a strategy for success in advance. This time each player has to point to one of the hats on her head, and the players WIN if all these hats are black.

Again, based on the hats of all the others, the k-th player says something like “I think the 7th hat on my head is black” or “I think the 3rd hat on my head is black”, and the players win if all these statements are correct.

Conjecture: As the number of players grows the probability for success tends to zero.

Let p_n be the success probability for n players. (Remark: instead of having an infinite number of hats we can assume that the number of hats h is fixed, each player has h hats, and we then let h go to infinity.)

Ehud explained why . He challenged the audience to prove that which is unknown.

One of the best-known applications of Helly’s theorem (that is, actually, equivalent to Helly’s theorem, see [Bo]) is the following theorem, due to Krasnoselskii [K]:

Theorem 1 (Krasnoselskii): Let S be an infinite compact set in R^d. If every d+1 points of S are visible from a common point, then S is starshaped, that is, there exists some point of S that sees all points in S.

Krasnoselskii’s theorem does not hold for non-compact sets. However, all known counter-examples satisfy the weaker requirement of being finitely starlike, namely that any finite subset of the set is visible from a common point. This led Peterson [P] to ask the following:

Does there exist a Krasnoselskii number k(d) such that for any infinite set S in R^d, if every k(d) points of S are visible from a single point, then S is finitely starlike?

Marilyn Breen obtained positive results for special cases of sets that satisfy additional topological conditions [Br1,Br2,Br3], e.g., Krasnoselskii number 3 for closed sets in the plane. On the other hand, she proved that if exists, then it must be larger than .

Recently, Micha Perles and I (Chaya Keller) proved that with no restriction on the set, the Krasnoselskii number does not exist. More precisely, we proved that for any k, there exists a set such that any k points in it are visible from a common point, but there exist finitely many points in it that are not visible from a common point. However, this counterexample is absolutely not constructive (it is based on transfinite induction), and the resulting set admits no reasonable topological structure.

This leads to the following two problems:

Problem 1: Can you show constructively that the Krasnoselskii number does not exist?

Problem 2: Can you find more natural topological conditions under which the Krasnoselskii number exists?

References

[Bo] Borwein, J., A proof of the equivalence of Helly’s and Krasnosselsky’s theorems,Canad. Math. Bull., 20 (1977), 35-37.

[Br1] M. Breen, Clear visibility, starshaped sets, and finitely starlike sets, J. of Geometry, 19 (1982), 183-196.

[Br2] M. Breen, Some Krasnoseleskii numbers for finitely starlike sets in the plane, J. of Geometry, 32 (1988), 1-12.

[Br3] M. Breen, Finitely starlike sets whose F-stars have positive measure, J. of Geometry, 35 (1989), 19-25.

[K] M. A. Krasnoselskii, Sur un Critère pour Qu’un Domaine Soit Étoilé, Math. Sb., 19 (1946), 309-310.

[P] B. Peterson, Is there a Krasnoselskii theorem for finitely starlike sets?, Convexity and Related Combinatorial Geometry, Marcel Dekker, New York, 1982, 81-84.

The paper announces achieving quantum advantage (aka “quantum supremacy”) using a photonic implementation of BosonSampling. (I heard about it from an SO post that contains further links.) The claimed advantage is huge, and clearly I will have to look carefully at the results, the data, and the interpretation. The idea that this could be done was raised ten years ago by Aaronson and Arkhipov, and we discussed it here and in several other posts, along with the idea that it cannot be done.

Boson Sampling was studied in a 2014 paper by Guy Kindler and me, Gaussian Noise Sensitivity and BosonSampling. Our paper, and the connection it makes with noise sensitivity and the Fourier description, is the basis for my argument against quantum computers. Of course, a demonstration of a huge quantum advantage, as claimed in the new paper, would, if valid, refute my theory.

The crux of the matter is whether the statistical performance of the photonic samples produced in the experiment could be achieved by classical sampling. (This is referred to as “spoofing.”) My paper with Guy proposes a very simple way to try to do it, based on the low-degree Hermite-Fourier truncation of the Boson Sampling distribution.

The easiest way to implement it is as follows: given an n by m matrix, you draw at random (with the appropriate weights based on repeated columns) an n by n minor M (with repeated columns allowed), then compute the degree-k approximation X to |permanent(M)|^2 (based on formula (8) from our paper), and then keep the sample with probability according to the value of X and toss it away otherwise. This may work even for the degree-2 truncation. (Rather than the straight truncation we can also try the “Beckner-noise” version, but I don’t think this will make much difference.)
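A skeleton of this rejection-sampling recipe might look as follows. Everything below is my illustration, not code from the paper: the exact |permanent|^2 (via Ryser’s formula) stands in for the cheap degree-k truncation of formula (8), the combinatorial weighting for repeated columns is omitted, and the acceptance bound c, the Gaussian stand-in matrix A, and all parameter values are assumptions of mine.

```python
import numpy as np

def permanent(M):
    """Ryser's formula, O(2^n * n); fine for the small n used here."""
    n = M.shape[0]
    total = 0.0 + 0.0j
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if mask >> j & 1]
        rowsums = M[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(rowsums)
    return (-1) ** n * total

def spoof_sample(A, n, rng, c=1.0):
    """Rejection sampling of n-column multisets of A with probability
    proportional to X.  Here X = |permanent|^2 exactly; in the actual
    proposal X would be the degree-k Hermite-Fourier truncation, which
    is cheap to compute.  c is an assumed upper bound on X."""
    m = A.shape[1]
    while True:
        cols = sorted(rng.choice(m, size=n, replace=True))  # repeated columns allowed
        X = abs(permanent(A[:, cols])) ** 2
        if rng.random() < min(1.0, X / c):
            return cols

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8)) / np.sqrt(8)  # stand-in for the experiment's matrix
print(spoof_sample(A, n=3, rng=rng))
```

Replacing the exact permanent by a low-degree truncation is exactly what makes the real proposal efficient at the sizes of the experiment; the exact version above is only for illustrating the sampling loop at toy scale.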

Since my paper with Guy Kindler offers an “off the shelf” classical algorithm that may “spoof” the claims, I propose to test it. (And I am a little surprised that this has not been tried already.)

Abstract: In 1973, Brown, Erdős and Sós proved that if H is a 3-uniform hypergraph on n vertices which contains no triangulation of the sphere, then H has at most O(n^{5/2}) edges, and this bound is best possible up to a constant factor. Resolving a conjecture of Linial, also reiterated by Keevash, Long, Narayanan, and Scott, we show that the same result holds for triangulations of the torus. Furthermore, we extend our result to every closed orientable surface.

Remarks:

1) When the surface S is fixed, Nati proved that there is a simplicial complex with n vertices and of order n^{5/2} 2-faces that does not contain a triangulation of S.

2) It is not known if there is an example with n vertices and of order n^{5/2} 2-faces which does not contain a triangulation of any orientable closed surface.

We can ask the same question also for pseudomanifolds: what is the number of 2-faces that guarantees a subcomplex with the property that every 1-face is included in precisely two 2-faces?

3) Nati conjectured that for every 2-dimensional simplicial complex S there is a constant C_S such that a 2-dimensional simplicial complex with n vertices and C_S n^{5/2} 2-faces always contains a homeomorph of S. The paper by Andrey, Alexandr, István, and Dmitriy asserts that proving the exponent 5/2 for the real projective plane would imply the same exponent for all nonorientable closed surfaces.

4) The paper also mentions an unpublished result by Friedgut and Linial that 2-faces suffice for a torus.

5) Here is another problem I am curious about: How many 2-faces guarantee a subcomplex K with and ? (You can choose the field of coefficients.) Without the second requirement the answer is roughly .

Universal exponent for homeomorphs.

Nati’s Problem 2: Given a k-dimensional complex S, how many facets can a k-complex on n vertices have if it contains no topological copy of S? (Topological copy = homeomorphic image.)

Abstract: Our first main result is a uniform bound, in every dimension k, on the topological Turán numbers of k-dimensional simplicial complexes: for each k, there is a λ_k such that for any k-dimensional simplicial complex S, every k-complex on n vertices with at least n^{λ_k} facets contains a homeomorphic copy of S.

This was previously known only in dimensions one and two, both by highly dimension-specific arguments: the existence of λ_1 is a result of Mader from 1967, and the existence of λ_2 was suggested by Linial in 2006 and recently proved by Keevash-Long-Narayanan-Scott.

Jason Long, Bhargav Narayanan, and Corrine Yap deduce this theorem from a very interesting purely combinatorial result about trace-bounded hypergraphs.

Here is the link to the aforementioned paper by Peter Keevash, Jason Long, Bhargav Narayanan, and Alex Scott: A universal exponent for homeomorphs. The main theorem asserts that the exponent can be taken to be 14/5.

Let me also mention a 2018 result by Agelos Georgakopoulos, John Haslegrave, Richard Montgomery, and Bhargav Narayanan in their paper Spanning surfaces in 3-graphs, where they prove a topological extension of Dirac’s theorem about Hamiltonian cycles in graphs, proposed by Gowers in 2005.

Finally, finding the correct value for would be very interesting.

In November 2020 we had a very nice open problem session in our weekly combinatorics seminar at HUJI, so I thought to have a series of posts describing the problems presented there. This is the first post in the series. Later I will add a link to the Zoom recording of the session, and to a write-up of all the problems.

While talking about open problems, let me mention that in connection with the OPAC 2020 conference (now delayed to May 2021), a community blog to discuss open problems in algebraic combinatorics was created. Everybody is invited to check out the problems posted there, to subscribe, and to contribute their own problems. And, of course, to solve the problems! The list of problems so far is impressive.

Nati’s problem

Problem 1: Given a 2-manifold M, what is the number of 2-faces of a 2-dimensional simplicial complex S on n vertices that guarantees that S contains a triangulation of M?

The case where M is the two-dimensional sphere goes back to a theorem of Brown, Erdős and Sós. They proved that the answer is of order n^{5/2}. Nati proved that the lower bound applies to every 2-manifold M.

Problem 2: Given a k-complex S, how many facets can a k-complex on n vertices have if it contains no topological copy of S? (Topological copy = homeomorphic image)

Nati mentioned some further background and recent progress regarding these problems and I will mention them in a separate post.

“High dimensional combinatorics”

Problems 1 and 2 are part of “Linial’s programme” of studying “high dimensional combinatorics” systematically. Here are links to a few papers, slides of talks, and videotaped talks. Among the highlights of this theory are the study, starting with a paper by Linial and Roy Meshulam, of an analogue of the Erdős-Rényi model for simplicial complexes (which is closely related to high dimensional expanders), the study of high dimensional permutations, and high dimensional tournaments.

P3: What is the maximum density of a set A in F_3^n without a 3-term AP? (AP = arithmetic progression.)

This is the celebrated cap set problem, and we reported here in 2016 on the breakthrough results of Croot-Lev-Pach-Ellenberg-Gijswijt (CLPEG) asserting that a 3-AP-free subset of F_3^n has size at most (2.756)^n.
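For tiny n the cap set numbers can be computed directly by brute force. The sketch below is mine; it uses the fact that in F_3^n three distinct points x, y, z with x + z = 2y form a line satisfying x + y + z = 0 (since 2 = −1 mod 3).

```python
from itertools import combinations, product

def is_cap(points):
    """True if no three distinct points sum to 0 coordinatewise mod 3,
    i.e. the set contains no 3-term AP (no line) in F_3^n."""
    pts = list(points)
    return not any(
        all((x[i] + y[i] + z[i]) % 3 == 0 for i in range(len(x)))
        for x, y, z in combinations(pts, 3)
    )

def max_cap(n):
    """Largest 3-AP-free subset of F_3^n; brute force, feasible only for n <= 2."""
    space = list(product(range(3), repeat=n))
    for size in range(len(space), 0, -1):
        if any(is_cap(S) for S in combinations(space, size)):
            return size
    return 0

print(max_cap(1), max_cap(2))  # 2 4
```

The values grow quickly (9 for n = 3, 20 for n = 4, …), and the whole point of the CLPEG bound is that this growth is exponentially slower than 3^n.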

P4: What is the maximum density of a set A in without a 4-term AP?

P5: What is the maximum density of a set A in without a 5-term AP?

Moving from 3-term APs to 4-term APs represents a major leap for such problems. In some cases moving from 4 to 5 also represents a major leap. Usually the techniques that work for 5 continue to work for larger lengths; that is, I am not aware of cases where moving from 5 to 6 also represents a major leap. In other words, we have no reason to think, so far, that progress on P6 (below) will be considerably harder than progress on P5.

P6: What is the maximum density of a set A in without a 6-term AP?

And, of course, we can move on to P7, P8, P9,….,

It is known that the density of a set without an m-term AP, for every fixed m, goes to zero with n. This was proved by Furstenberg and Katznelson using ergodic-theoretic tools. Following the CLPEG breakthrough we can certainly believe that the density that guarantees an m-term AP, for every fixed m, is exponentially small in n. I vaguely remember that the best known bounds for P4 are polylogarithmically small in n and were achieved by Green and Tao using tools from Gowers’ work on 4-AP-free sets over the integers.

A little more before we move on

To have some general notation let denote the maximal size of a set with no k distinct elements in arithmetic progression. It is reasonable to believe that for some , that is that sets in avoiding terms AP are exponentially small. To be more precise let

We can believe that the limits exist, that , and we may also guess (wildly but safely) that they satisfy .

These problems, as well as the analogous problems over the integers (Roth’s and Szemerédi’s theorems and later developments), and the related density Hales-Jewett problem, form a holy grail of additive combinatorics, and they have attracted a lot of attention also on this blog.

Here is a partial list of posts: 2008 Pushing the Behrend bound; 2009A, 2009B, 2009C attempts to relate the problem to the Frankl-Wilson theorem (and the polynomial method) and to the Frankl-Rödl theorem, and also to find counterexamples; 2009D polymath1; 2009P public opinion poll about the true behavior; 2010 the logarithmic bound for Roth is reached; 2011 cap sets and matrix multiplications; 2016a, 2016b the CLPEG breakthrough; 2016W a polynomial method workshop; 2020 the logarithmic bound for Roth is broken.

Let’s move on to the new result by Péter Pál Pach and Richárd Palincza

The absolutely stunning result by Péter Pál Pach and Richárd Palincza