In November 2020 we had a very nice open problem session in our weekly combinatorics seminar at HUJI, so I decided to write a series of posts describing the problems presented there. This is the first post in the series. Later I will add a link to the zoom recording of the session, and to a writeup of all the problems.

While we are on the subject of open problems, let me mention that in connection with the OPAC 2020 conference (now delayed to May 2021), a community blog to discuss open problems in **algebraic combinatorics** was created. Everybody is invited to check out the problems posted there, to subscribe, to contribute their own problems, and, of course, to solve the problems! The list of problems so far is impressive.

**Problem 1:** Given a 2-manifold M, what is the smallest number of 2-faces of a 2-dimensional simplicial complex S on n vertices that guarantees that S contains a triangulation of M?

The case where M is the two-dimensional sphere goes back to a theorem of Brown, Erdős and Sós. They proved that the answer is of order $n^{5/2}$. Nati proved that the lower bound of $\Omega(n^{5/2})$ applies for every 2-manifold M.

**Problem 2:** Given a k-complex S, how many facets can a k-complex on n vertices have if it contains no topological copy of S? (Topological copy = homeomorphic image)

Nati mentioned some further background and **recent progress** regarding these problems and I will mention them in a separate post.

Problems 1 and 2 are part of “Linial’s programme” of systematically studying “high dimensional combinatorics”. Here are links to a few papers, slides of talks, and videotaped talks. Among the highlights of this theory are the study, starting with a paper by Nati Linial and Roy Meshulam, of an Erdős–Rényi model for simplicial complexes (which is closely related to high dimensional expanders), the study of high dimensional permutations, and high dimensional tournaments.

Challenges of high-dimensional combinatorics (slides), László Lovász 70th birthday conference, Budapest, July 2018; video (covering 2/3 of the slides).

What is high-dimensional combinatorics?, Random-Approx, August 2008.

On high-dimensional acyclic tournaments, Nati Linial and Avraham Morgenstern, Discrete and Computational Geometry 50 (2013), no. 4, 1085–1100.

Homological connectivity of random 2-dimensional complexes, N. Linial and R. Meshulam, *Combinatorica* 26 (2006), 475–487.

Random simplicial complexes – around the phase transition, Nati Linial and Yuval Peled, in: A Journey Through Discrete Mathematics, a tribute to Jiří Matoušek, 543–570 (2017).

(See Nati’s homepage for more.)

Consider the following problems:

**P3:** What is the maximum density of a set A in $\mathbb{Z}_3^n$ without a 3-term AP? (AP = arithmetic progression.)

This is the celebrated Cap Set problem, and we reported here in 2016 the breakthrough results by Croot–Lev–Pach–Ellenberg–Gijswijt (CLPEG) asserting that the maximum size is at most $(2.756)^n$.

**P4:** What is the maximum density of a set A in $\mathbb{Z}_4^n$ without a 4-term AP?

**P5:** What is the maximum density of a set A in $\mathbb{Z}_5^n$ without a 5-term AP?

Moving from 3-term APs to 4-term APs represents a major leap for such problems. In some cases moving from 4 to 5 also represents a major leap. Usually the techniques that work for $k=5$ continue to work for $k \ge 6$; that is, I am not aware of cases where moving from 5 to 6 also represents a major leap. In other words, we do not have reasons to think, so far, that progress on P6 (below) will be considerably harder than progress on P5.

**P6:** What is the maximum density of a set A in $\mathbb{Z}_6^n$ without a 6-term AP?

And, of course, we can move on to **P7, P8, P9, …**

It is known that the density of a subset of $\mathbb{Z}_m^n$ without an m-term AP goes to zero with n for every fixed m. This was proved by Furstenberg and Katznelson using ergodic-theoretic tools. Following the CLPEG breakthrough we can certainly believe that the density that guarantees an m-term AP in $\mathbb{Z}_m^n$ is exponentially small in n for every fixed m. I vaguely remember that the best known bounds for P4 are only polylogarithmically small in n and were achieved by Green and Tao using tools from Gowers’ work on 4AP-free sets over the integers.

To have some general notation, let $r_k(\mathbb{Z}_m^n)$ denote the maximal size of a subset of $\mathbb{Z}_m^n$ with no *k* distinct elements in arithmetic progression. It is reasonable to believe that $r_k(\mathbb{Z}_k^n) \le c_k^n$ for some $c_k < k$, that is, that sets in $\mathbb{Z}_k^n$ avoiding k-term APs are exponentially small. To be more precise, let $c_k = \lim_{n \to \infty} \left( r_k(\mathbb{Z}_k^n) \right)^{1/n}$.

We can believe that the limits exist, that $c_k < k$ for every $k \ge 3$, and we may also guess (wildly but safely) that the ratios $c_k/k$ are nondecreasing in $k$.
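For tiny cases the quantities $r_k(\mathbb{Z}_m^n)$ can be computed by brute force. The following sketch is my own illustration (not from the sources above): it finds the exact maximum size of a k-AP-free subset of $\mathbb{Z}_m^n$ by exhaustive search, recovering for instance that the largest cap set in $\mathbb{Z}_3^2$ has 4 points.

```python
from itertools import combinations, product

def has_k_ap(S, k, m, n):
    """Does S (a collection of points of Z_m^n) contain k distinct
    points in arithmetic progression?"""
    Sset = set(S)
    for a in S:
        for d in product(range(m), repeat=n):
            if all(x == 0 for x in d):
                continue  # skip the zero difference
            pts = {tuple((a[i] + j * d[i]) % m for i in range(n)) for j in range(k)}
            if len(pts) == k and pts <= Sset:
                return True
    return False

def max_ap_free(m, n, k):
    """Exact maximum size of a k-AP-free subset of Z_m^n
    (exponential search; only feasible for very small m and n)."""
    points = list(product(range(m), repeat=n))
    for size in range(len(points), 0, -1):
        if any(not has_k_ap(S, k, m, n) for S in combinations(points, size)):
            return size
    return 0

print(max_ap_free(3, 2, 3))  # 4: the largest cap set in Z_3^2
print(max_ap_free(6, 1, 6))  # 5: the only 6-term AP in Z_6 is the whole line
```

Of course, such a search says nothing about the exponential rate $c_k$; it only makes the definitions concrete.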

These problems, as well as the analogous problems over the integers (Roth’s and Szemerédi’s theorems and later developments), and the related density Hales–Jewett problem, form a holy grail of additive combinatorics, and they have attracted a lot of attention also on this blog.

Here is a partial list of posts: 2008 Pushing the Behrend bound; 2009A, 2009B, 2009C attempts to relate the problem to the Frankl–Wilson theorem (and the polynomial method) and to the Frankl–Rödl theorem, and also to find counterexamples; 2009D polymath1; 2009P a public opinion poll about the true behavior; 2010 the logarithmic bound for Roth is reached; 2011 cap sets and matrix multiplication; 2016a, 2016b the CLPEG breakthrough; 2016W a polynomial method workshop; 2020 the logarithmic bound for Roth is broken.

Let’s move on to the new result by Péter Pál Pach and Richárd Palincza.

As we already mentioned, so far we do not have reasons to expect that proving density results for 6-term APs in $\mathbb{Z}_6^n$ will be much harder than the corresponding question for 5-term APs in $\mathbb{Z}_5^n$. **However**, what Pach and Palincza proved is as follows:

**Theorem**: Sets avoiding 6-term arithmetic progressions in $\mathbb{Z}_6^n$ have size at most $5.709^n$.

The big surprise is that problem P6 over the integers modulo six is **much easier** than P4 and P5. The proof is based on a simple combinatorial argument that leverages the known exponential bounds for 3-term APs in $\mathbb{Z}_3^n$.

The paper, arXived in September 2020, is

Péter Pál Pach and Richárd Palincza: Sets avoiding six-term arithmetic progressions in $\mathbb{Z}_6^n$ are exponentially small.

Click and enjoy this beautiful, stunning and **very simple** proof.

Congratulations to Péter and Richárd!

The news: In 1981, Paul Erdős and András Hajnal asked whether the sum of the reciprocals of the odd cycle lengths in a graph with infinite chromatic number is necessarily infinite. Hong Liu and Richard Montgomery have just proved that the answer is positive! They also proved a variety of other long-standing conjectures. This is remarkable. Congratulations!

Before going on, let me pose an even stronger conjecture:

**Conjecture:** The sum of the reciprocals of the odd **induced** cycle lengths in a *triangle-free* graph with infinite chromatic number is necessarily infinite.

The same conclusion should apply to graphs with bounded clique numbers; this is related to the general area of $\chi$-boundedness, see this post and this recent survey by Alex Scott and Paul Seymour.

Returning to Liu and Montgomery’s breakthrough, let me mention two other conjectures that Hong and Richard proved in the same paper:

A) (A conjecture of Erdős from 1984) There is a constant *d* such that any graph with average degree at least *d* has a cycle whose length is a power of 2.

B) (A conjecture of Carsten Thomassen from 1984) For every *k*, there is some *d* so that every graph with average degree at least *d* has a subdivision of the complete graph $K_k$ in which each edge is subdivided the same number of times.

I am thankful to Benny Sudakov for telling me about the new result.

A key ingredient of the proof is the study of expanders that you can always find in graphs. Komlós and Szemerédi showed in 1996 that every graph *G* contains an expander with average degree comparable to that of *G*, and since their work, with greater and greater sophistication, finding expanders in general graphs has become an important tool for a large number of extremal problems.

**Abstract:**

In 1981, Erdős and Hajnal asked whether the sum of the reciprocals of the odd cycle lengths in a graph with infinite chromatic number is necessarily infinite. Let $\mathcal{C}(G)$ be the set of cycle lengths in a graph $G$ and let $\mathcal{C}_{\text{odd}}(G)$ be the set of odd numbers in $\mathcal{C}(G)$. We prove that, if $G$ has chromatic number $k$, then $\sum_{\ell \in \mathcal{C}_{\text{odd}}(G)} 1/\ell \ge (1/2 - o_k(1)) \log k$. This solves Erdős and Hajnal’s odd cycle problem, and, furthermore, this bound is asymptotically optimal.

In 1984, Erdős asked whether there is some $d$ such that each graph with chromatic number at least $d$ (or perhaps even only average degree at least $d$) has a cycle whose length is a power of 2. We show that an average degree condition is sufficient for this problem, solving it with methods that apply to a wide range of sequences in addition to the powers of 2.

Finally, we use our methods to show that, for every $k$, there is some $d$ so that every graph with average degree at least $d$ has a subdivision of the complete graph $K_k$ in which each edge is subdivided the same number of times. This confirms a conjecture of Thomassen from 1984.
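As a quick sanity check (my own illustration, not from the paper), complete graphs show why the bound above is asymptotically sharp: $K_n$ has chromatic number $n$ and contains cycles of every length from 3 to $n$, so the sum of the reciprocals of its odd cycle lengths is $\tfrac{1}{2}\log n$ up to a bounded additive constant.

```python
from math import log

def odd_cycle_reciprocal_sum_Kn(n):
    # K_n contains cycles of every length 3..n, so its odd cycle
    # lengths are exactly 3, 5, 7, ..., up to n.
    return sum(1 / L for L in range(3, n + 1, 2))

for n in [10, 100, 1000, 100000]:
    print(n, round(odd_cycle_reciprocal_sum_Kn(n), 3), round(0.5 * log(n), 3))
# the two columns differ by a bounded additive constant as n grows
```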

Wolfgang Kühnel

Today I want to talk about a new result in a newly arXived paper: A subexponential size triangulation of $\mathbb{RP}^n$, by Karim Adiprasito, Sergey Avvakumov, and Roman Karasev. Sergey Avvakumov gave a **great** zoom seminar talk about the result in our combinatorics seminar. Here is the link.

The question is very simple: **What is the smallest number of vertices required to triangulate the n-dimensional real projective space?**

The short paper exhibits a triangulation with $\exp(O(\sqrt{n}\log n))$ vertices. This is the first construction with a subexponential number of vertices in the dimension. The best known lower bound, by Pierre Arnoux and Alexis Marin (from 1999), is quadratic, so there is quite a way to go with the problem. I am thankful to Ryan Alweiss, who was the first to tell me about the new result.

Representing a topological space using simplicial complexes arose in the early days of algebraic topology, but there are certainly more “efficient” representations. What is the reason to care specifically about representations via simplicial complexes? Here are my answers:

- Constructions of triangulated manifolds with few vertices are occasionally amazing mathematical objects.
- Simplicial complexes arise in many combinatorial and algebraic contexts.
- The study of face numbers of simplicial complexes with various topological properties is often related to deep questions in algebra, geometry, and topology.
- We also care about other types of representation.

You may all be familiar with the 6-vertex triangulation of the real projective plane $\mathbb{RP}^2$. It is obtained by identifying opposite faces of the icosahedron. It is 2-neighborly, namely every pair of vertices spans an edge of the triangulation. (It also played a role in my high-dimensional extension of Cayley’s tree-counting formula.) The existence of 2-neighborly triangulations of other closed surfaces is essentially the Heawood 1890 conjecture, solved by Ringel and Youngs in the 1960s.
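The basic properties of this triangulation can be verified directly. Here is a small sketch (my own illustration; the face list below is one standard labeling of the antipodal quotient of the icosahedron):

```python
from itertools import combinations

# One standard face list for the 6-vertex triangulation of RP^2
# (the antipodal quotient of the icosahedron), with vertices 1..6.
faces = [(1,2,3),(1,3,4),(1,4,5),(1,5,6),(1,2,6),
         (2,3,5),(3,4,6),(2,4,5),(3,5,6),(2,4,6)]

vertices = {v for f in faces for v in f}
edges = {e for f in faces for e in combinations(sorted(f), 2)}

# 2-neighborly: every pair of the 6 vertices spans an edge
assert edges == set(combinations(sorted(vertices), 2))

# closed-surface check: every edge lies in exactly two triangles
assert all(sum(set(e) <= set(f) for f in faces) == 2 for e in edges)

print(len(vertices), len(edges), len(faces),
      len(vertices) - len(edges) + len(faces))  # 6 15 10 1  (Euler characteristic 1)
```

The Euler characteristic 1 is the signature of $\mathbb{RP}^2$ among closed surfaces.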

In 1980 Wolfgang Kühnel set out to construct a 3-neighborly triangulation of the complex projective plane $\mathbb{CP}^2$ with 9 vertices. The construction, achieved in a paper by Kühnel and Lassmann (who also proved uniqueness), and discussed in a paper of the same year by Kühnel and Banchoff, is, in my view, among the most beautiful constructions in mathematics, with additional hidden structures and many connections. Some motivation for the work came from the study of tight embeddings of smooth manifolds and from Morse theory.

Ulrich Brehm and Kühnel proved that if a d-manifold has fewer than 3⌈d/2⌉+3 vertices then it is homeomorphic to a sphere, and if it has exactly 3d/2+3 vertices and is not a sphere, then d=2, 4, 8 or 16 and the manifold has a nondegenerate height function with exactly three critical points. The 6-vertex $\mathbb{RP}^2$ and the 9-vertex $\mathbb{CP}^2$ are examples for dimensions 2 and 4. We can expect further such examples for projective planes over the quaternions and octonions. Brehm and Kühnel constructed three 8-dimensional 5-neighborly “manifolds” which are not combinatorially isomorphic. It is conjectured but not known that they all triangulate the quaternionic projective plane.

Here is a brief description of Sergey Avvakumov’s lecture. (As far as I could see, the proof of their result is fully presented, along with three other proofs of basic related results.)

**06:00** The lecture starts. A simple inductive construction of a triangulation of $\mathbb{RP}^n$.

**08:00** Equivalent statement: we seek a centrally symmetric triangulation of the n-sphere with diameter 3.

**09:00** The 6-vertex triangulation of $\mathbb{RP}^2$.

**11:00** The construction: triangulate the positive facet of the cross-polytope and, symmetrically, the negative facet. Follow a recipe to triangulate the side facets. Crucial question: what is needed from the triangulation T of the positive facet? Answer: **no edge between two disjoint faces**.

**23:00** Spherical interpretation and Delaunay triangulations of a certain configuration V

**31:00** A sufficient combinatorial condition

**39:00** Proof of sufficiency

**54:00** The construction!

**1:09:00** Counting the number of vertices completes the proof of the theorem.

Additional matters

**1:11:00** A proof of a quadratic lower bound by Kühnel.

**1:21:00** Proof of the facet lower bound (Bárány and Lovász) via Borsuk–Ulam.

Of course, a main question is the minimum number of vertices required for a triangulation of the n-dimensional real projective space. What about the n-dimensional complex projective space? What is the situation in low dimensions? See also the paper by Sonia Balagopalan, On a vertex-minimal triangulation of $\mathbb{RP}^4$.

*Here is a popular account by Itai Benjamini and Elchanan Mossel from 2000 written shortly after the 2000 US presidential election. Elchanan and Itai kindly agreed that I will publish it here, for the first time, 20 years later! I left the documents as is so affiliations and email addresses are no longer valid. *

__________________________________________________

This is a popular report by Dr. I. Benjamini and Dr. E. Mossel from Microsoft Research at Redmond on some recent mathematical studies which are relevant to the U.S. presidential elections.

For more information contact: Dr. Itai Benjamini, Microsoft Research, e-mail: itai@microsoft.com, Tel: 1-425-7057024

Do the results of the recent election accurately reflect public opinion? Or are they the result of some minor local manipulations and a few random mistakes, such as voter confusion or counting errors?

There are many potential schemes for electing a president, each with its own features. One important feature of a scheme is the agreement of the actual outcome of the election with the outcome of an ideal implementation of the election, which disregards potential mistakes and mischief. If it is probable that the ideal and the actual outcomes differ, then frequently the outcome of the election will be in doubt.

In recent years, some mathematical efforts have been devoted to trying to understand which procedures are “stable” to these potential perturbations and which are “sensitive”. While the motivation for these studies came from questions in probability and statistical physics, these mathematical studies can shed some light on the current presidential election. In particular, a recent paper by Professors Itai Benjamini and Oded Schramm from Microsoft Research and the Weizmann Institute and Gil Kalai from the Hebrew University of Jerusalem, soon to be published in the prestigious French journal Publications I.H.E.S., suggests that the “popular vote” method is much more stable against mistakes than other voting methods.

One example of a decision procedure that we encounter in nature is the neuron. The neuron has to make its decisions based on many inputs; each represents the strength of the electrical currents at the synapses entering the neuron. Based on these inputs, the neuron should decide whether to fire its axon. It is quite likely that some small perturbations occur at each of the input synapses. It is therefore expected that the decision procedure of the neuron will be “stable” to these perturbations.

Experimental evidence indicates that neurons may be modeled by the following simple procedure. The neuron looks at some weighted sum of its inputs and makes a decision according to how large this sum is. If the sum is large, then one decision is made, while if it is small another decision is made.

The counterpart of this procedure for elections would be counting the votes and deciding according to the majority of the votes. Mathematical proofs have been given to show that this decision procedure is the most stable among all decision procedures.

More complicated neural networks consist of a hierarchy of neurons, where the outputs of neurons in lower levels of the hierarchy are the inputs to neurons in the higher levels. Some of these networks have been proven to be much more sensitive to noise than a single neuron.

The counterpart of a neural network in the political system may sound familiar: the voters are divided into groups (states) and each group chooses its candidate based on the majority vote. Then some weighted majority of the votes of the states determines the elected president. The mathematical theorems imply that this system is much more sensitive to minor mistakes.

If the overwhelming majority of the population is voting for the same candidate then it doesn’t really matter which voting scheme we use. All of the natural schemes are “stable”. On the other hand, when the population is almost evenly split, different schemes behave differently.

The mathematical reasoning behind the stability of the majority vote goes back to Abraham de Moivre and Pierre-Simon Laplace (18th century). This reasoning implies, for example, that in a population of 98,221,798 votes, if there is a bias of 222,880 for one candidate, then this candidate will be chosen, even if for each voter there is a small chance that his or her vote is not counted or is counted wrongly (the results of the last election as of 11/13/00). As long as mistakes for both sides are equally likely, the result will correctly reflect the bias in the public opinion.

Thus, in this presidential election, while the gap between the two candidates is only 0.2% of the votes, it is still large enough that the outcome is immune even if a fairly large percentage (10%, for example) of the votes were counted mistakenly (assuming the mistakes were random and independent).

On the other hand, the gap of 388 votes among the 6,000,000 votes in Florida (about 0.006% of the votes) may well be too small to overcome the effect of random mistakes in counting the votes, even if the chance of a mistake is fairly small (1%, say). It can well be argued that just because of the (unavoidable) mistakes in counting the votes in Florida (putting aside all other controversies surrounding the vote there) we will never be able to know who got the majority of votes among the voters of Florida. To understand why the picture is so different for the popular vote in the entire nation and the popular vote in the state of Florida, we should note that the stability against mistakes increases dramatically as the number of voters rises. (And also, of course, as the gap between the candidates rises.)
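The de Moivre–Laplace reasoning above can be turned into a two-line estimate. In the sketch below (my own illustration, using the simplistic model in which each vote is independently flipped with probability p), a flipped vote changes the margin by 2, so the noise has standard deviation about $2\sqrt{Np(1-p)}$; the national margin survives easily while Florida's does not:

```python
from math import erf, sqrt

def flip_probability(n_votes, margin, p):
    """Chance that independent miscounts (each vote flipped with
    probability p, symmetrically) reverse a raw margin, in a normal
    approximation: each flip moves the margin by 2."""
    mean = (1 - 2 * p) * margin
    std = 2 * sqrt(n_votes * p * (1 - p))
    # P(new margin < 0) for a N(mean, std^2) variable
    return 0.5 * (1 + erf(-mean / (std * sqrt(2))))

print(flip_probability(98_221_798, 222_880, 0.10))  # essentially 0: the national margin survives
print(flip_probability(6_000_000, 388, 0.01))       # about 0.22: Florida is a toss-up
```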

The discrepancy between our ability to call the winner of the popular vote and our inability to call the winner in the electoral college (which is the winner of the election according to the constitution) is not unexpected. The new results by Benjamini, Schramm and Kalai show that for many models majority is the scheme which is least sensitive to noise. In these models it is assumed that voters make their decisions independently and that the mistakes are symmetric and independent for different voters. It is shown then that for models resembling the current voting scheme in the U.S.A., if the population is almost evenly split, the scheme is much more sensitive to noise than the majority scheme. In particular, a tiny fraction of mistakes is very likely to reverse the ideal outcome of the election. Moreover, if the elections are almost balanced, then the results are too close to call.

If the outcomes do not show a significant bias towards one of the candidates, then in the “popular-vote” method random independent mistakes affecting one in every A votes have a chance of about one in the square root of A of switching the outcome of the election. In a method like the “electoral college”, the chance increases to something like one in the fourth root of A.

If the popular vote is significantly tilted towards one of the candidates, then the effect of mistakes becomes smaller for all voting methods, but much more so for the popular-vote method.

The study of the sensitivity of voting procedures to small errors in such models is based on mathematical tools from “harmonic analysis”, and provides a classification of the voting schemes which are very sensitive to small amounts of noise and those which are more stable. In particular, among all symmetric voting procedures (i.e., those in which all voters have the same power), the popular majority is the most robust.

For further technical reading and the precise formulation of the mathematical theorems see http://front.math.ucdavis.edu/math.PR/9811157

A word of warning is in place. The precise expected effect of mistakes will depend on the specific statistical model for the voting patterns and for the noise created by counting errors and biases. For example, more recent work by Claire Kenyon from Orsay University, Elchanan Mossel from Microsoft Research and Yuval Peres from Berkeley and the Hebrew University gives quite a different picture for different models based on the famous “Ising model” from physics. We expect that the qualitative conclusion of the research by Benjamini, Kalai and Schramm applies to the case of U.S. presidential elections, and that the popular vote method is much more stable to noise and biases than the electoral college method. For definite conclusions, however, some statistical work on actual voting data should be carried out. Choosing the most appropriate voting method involves, of course, many, primarily non-mathematical, considerations.

Itai Benjamini, Microsoft Research and the Weizmann Institute of Science

Elchanan Mossel, Microsoft Research

And for our main topic:

Four years ago we had a long post demonstrating some notions regarding voting with Nate Silver’s famous site FiveThirtyEight. This time, with the flow of corona-related statistics, I am less addicted to Silver’s site, but I noticed an interesting new feature. You can explore the ways Trump or Biden could win the election: you decide the outcome in some states and see the forecast for the entire election based on that.

Earlier today Nate Silver’s site gave Biden an 88% probability to win and Trump a 12% probability to win.

Now, if you force a Trump victory in the state of Maine, with only two electoral votes, Trump’s chances of winning the entire election go up to 40%.

**Test your intuition:** what is the reason for Maine’s huge influence on the election’s outcome?

**Silver’s prediction earlier today: Biden: 88, Trump: 12**

**The prediction when you give Maine to Trump: Biden: 59, Trump: 40.**
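One way to see how such a jump can arise (my own toy model, with made-up numbers, not Silver's actual model) is that state outcomes are strongly correlated through a shared national polling error, so a win in a state where a candidate trails badly is strong evidence of a huge nationwide error:

```python
import random

def win_probs(trials=200_000, seed=0):
    random.seed(seed)
    wins = maine = both = 0
    for _ in range(trials):
        swing = random.gauss(0, 3)       # shared national polling error, in points
        trump_wins = swing > 4           # hypothetical: ~4 points flip the tipping-point state
        trump_maine = swing + random.gauss(0, 2) > 10  # hypothetical ~10-point deficit in a state
        wins += trump_wins
        maine += trump_maine
        both += trump_wins and trump_maine
    return wins / trials, both / max(maine, 1)

p, p_given_state = win_probs()
print(p, p_given_state)  # the conditional probability is far larger than the unconditional one
```

Conditioning on the unlikely state result forces the shared error to be large, which then flips the tipping-point state as well.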

I've rolled a die and not looked at it yet. The statement, "If the number I rolled equals 2+2 then it equals 5," is …

— Timothy Gowers (@wtgowers) October 18, 2020

Here is a tweet from Tim Gowers. It is a poll. The question is:

I’ve rolled a die and not looked at it yet. The statement, “If the number I rolled equals 2+2 then it equals 5,” is …

The two answers:

a) Probably true

b) Definitely false


Four-in-a-row with HEX board by: Daniella Drori, Margot Guetta, Eyal Magen and Aiden Platt

Playing 3-player coalition games by: Barak Aharoni and Boris Borshevsky

Network congestion simulation by Eyal Magen, Iddo Yadlin and Daniel Korat (Thread on twitter)

Update: If you find it easy to win the Four-in-a-row with HEX board game, challenge yourself: to force a win with an m-in-a-row line, for m=5, 6, …? To force as many winning moves as possible.

Do you have more brothers than sisters? Or more sisters than brothers? Or the same number?

Then you separate the boys’ answers from the girls’ answers.

Which of the following is true?

- The answers from boys will be tilted toward having MORE sisters and FEWER brothers
- The answers from boys will be tilted toward having MORE brothers and FEWER sisters
- There will be no (statistically significant) difference.

**The reason for answer 1 is:** Since the ratio of boys to girls is 50/50, and you don’t count yourself in the answer, boys will tend to have more sisters and girls will tend to have more brothers.

**The reason for answer 2 is:** Some families are uneven and have more boys. In these families you will see more boys having more brothers than sisters and fewer girls having more brothers than sisters. Some families are uneven and have more girls. In these families you will see more girls having more sisters than brothers and fewer boys having more sisters than brothers.

I thank Oded Margalit for this question given to me as a birthday gift.

The Ramsey number *r(t; ℓ)* is the smallest natural number *n* such that every ℓ-coloring of the edges of the complete graph $K_n$ contains a monochromatic $K_t$. (*r(t; 2)* is often denoted by *R(t,t)*, and *r(t; 3)* by *R(t,t,t)*, etc.) Famously, *R(3,3)=6*, *R(4,4)=18*, and *R(3,3,3)=17*. Understanding *R(t,t)* is among the most famous problems in combinatorics. (Understanding whether *r(3; ℓ)* is exponential or superexponential in *ℓ* is also a very famous problem.)
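R(3,3)=6 is small enough to verify exhaustively. A quick sketch (my own illustration): check all 2-colorings of the edges of $K_5$ and $K_6$ for a monochromatic triangle.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring: dict mapping each edge (a, b), a < b, to color 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def ramsey_check(n):
    """True iff every 2-coloring of K_n's edges has a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    for bits in product((0, 1), repeat=len(edges)):
        if not has_mono_triangle(n, dict(zip(edges, bits))):
            return False  # found a triangle-free 2-coloring
    return True

print(ramsey_check(5), ramsey_check(6))  # False True, i.e. R(3,3) = 6
```

For K_5 the pentagon and its complement give the triangle-free coloring; for K_6 all 2^15 colorings fail, as Ramsey's theorem demands.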

It has been known since the 1940s that $\sqrt{2}^{\,t} \le R(t,t) \le 4^t$.

Lower bounds for *R(t,t)* were used by Lefmann in 1987 to give lower bounds on *r(t; ℓ)* for *ℓ* > 2. Asaf Ferber and David Conlon have now given an **exponential** improvement. This is truly remarkable, and the paper is just 4 pages long! Congratulations Asaf and David!

Expect more cheering news of a discrete geometry nature from Oberwolfach. (I am taking part remotely in the traditional meeting on Discrete and Computational Geometry; see pictures below.)

Update (Sept 24): An excellent blog post on Anurag’s math blog. Anurag describes the construction in detail, describes the connections with finite geometries, and improves the construction to get a better result. (See also there many cool posts, e.g., an earlier post with some connections of finite geometries and different Ramsey problems: Heisenberg groups, irreducible cubics and minimal Ramsey.)
