Two Delightful Major Simplifications


Arguably mathematics is getting harder, although some people claim that even in the old days parts of it were hard and known only to a few experts before major simplifications changed matters. Let me report here on two recent remarkable simplifications of major theorems. I am thankful to Nati Linial, who told me about the first, and to Itai Benjamini and Gady Kozma, who told me about the second. Enjoy!

Random regular graphs are nearly Ramanujan: Charles Bordenave gives a new proof of Friedman’s second eigenvalue theorem and its extension to random lifts

Here is the paper. Abstract: It was conjectured by Alon and proved by Friedman that a random d-regular graph has nearly the largest possible spectral gap, more precisely, the largest absolute value of the non-trivial eigenvalues of its adjacency matrix is at most 2\sqrt{d-1} +o(1) with probability tending to one as the size of the graph tends to infinity. We give a new proof of this statement. We also study related questions on random n-lifts of graphs and improve a recent result by Friedman and Kohler.
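Here is a quick numerical sanity check of the bound (my own Python sketch, assuming networkx and numpy are available; it is of course no substitute for the proofs): sample a random d-regular graph and compare the largest absolute value of a non-trivial eigenvalue to 2\sqrt{d-1}. The values of d and n are arbitrary choices.

```python
# A numerical sanity check of the Alon-Friedman bound (a sketch; d and n
# are arbitrary choices, and d*n must be even for a d-regular graph to exist).
import networkx as nx
import numpy as np

d, n = 4, 2000
G = nx.random_regular_graph(d, n)
A = nx.to_numpy_array(G)
eigs = np.linalg.eigvalsh(A)        # ascending order; eigs[-1] is the trivial eigenvalue d
nontrivial = max(abs(eigs[0]), abs(eigs[-2]))
print(f"largest non-trivial |eigenvalue| = {nontrivial:.4f}")
print(f"2*sqrt(d-1)                      = {2 * np.sqrt(d - 1):.4f}")
```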

A simple proof of the theorems of Aizenman–Barsky and of Menshikov: Hugo Duminil-Copin and Vincent Tassion give a new proof of the sharpness of the phase transition for Bernoulli percolation on \mathbb Z^d

Here is the paper. Abstract: We provide a new proof of the sharpness of the phase transition for nearest-neighbour Bernoulli percolation. More precisely, we show that: for p<p_c, the probability that the origin is connected by an open path to distance n decays exponentially fast in n; for p>p_c, the probability that the origin belongs to an infinite cluster satisfies the mean-field lower bound \theta(p)\ge\tfrac{p-p_c}{p(1-p_c)}. This note presents the argument of an earlier paper by the same authors, which is valid for long-range Bernoulli percolation (and for the Ising model) on arbitrary transitive graphs, in the simpler framework of nearest-neighbour Bernoulli percolation on \mathbb Z^d.
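For readers who like to experiment, here is a small Monte Carlo illustration of the subcritical exponential decay in the first statement (my own sketch, with arbitrarily chosen parameters; recall p_c=1/2 for bond percolation on the planar grid):

```python
# Monte Carlo sketch: for p < p_c = 1/2, estimate the probability that the
# origin is connected by an open path to distance n (bond percolation on Z^2).
import random
from collections import deque

def reaches_distance(n, p, rng):
    """BFS from the origin over open bonds; True if the cluster exits [-n,n]^2."""
    open_edge = {}                              # bond states, sampled lazily and cached
    def is_open(u, v):
        e = (min(u, v), max(u, v))
        if e not in open_edge:
            open_edge[e] = rng.random() < p
        return open_edge[e]
    seen, queue = {(0, 0)}, deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        if max(abs(x), abs(y)) >= n:
            return True
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if v not in seen and is_open((x, y), v):
                seen.add(v)
                queue.append(v)
    return False

rng = random.Random(0)
p, trials = 0.35, 2000
for n in (5, 10, 20, 40):
    hits = sum(reaches_distance(n, p, rng) for _ in range(trials))
    print(f"n = {n:3d}   P(0 <-> distance n) ~ {hits / trials:.4f}")   # decays roughly exponentially
```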

Analysis of Boolean Functions week 5 and 6

Lecture 7

First passage percolation

1) Models of percolation.

We talked about percolation, introduced by Broadbent and Hammersley in 1957. The basic model is a model of random subgraphs of a grid in n-dimensional space. (Other graphs were considered later as well.) Here, a grid is a graph whose vertices have integer coordinates and where two vertices are adjacent if their Euclidean distance is one. Every edge of the grid-graph is taken (or is “open” in the percolation jargon) with the same probability p, independently. We mentioned some basic questions: is there an infinite component? How many infinite components are there? What is the probability that the origin belongs to such an infinite component, as a function of p?
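Here is a small simulation sketch of the model (my own illustration; the grid size and the values of p are arbitrary choices): on a finite piece of the planar grid, the largest open cluster’s share of the vertices jumps near p=1/2.

```python
# Simulation sketch of the phase transition: sample bond percolation on an
# L x L grid and report the largest cluster's share of the vertices.
import random
from collections import Counter

def largest_cluster_fraction(L, p, rng):
    parent = list(range(L * L))                 # union-find over the grid's vertices
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]       # path halving
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    for x in range(L):
        for y in range(L):
            if x + 1 < L and rng.random() < p:  # horizontal bond is open
                union(x * L + y, (x + 1) * L + y)
            if y + 1 < L and rng.random() < p:  # vertical bond is open
                union(x * L + y, x * L + y + 1)
    sizes = Counter(find(v) for v in range(L * L))
    return max(sizes.values()) / (L * L)

rng = random.Random(1)
for p in (0.30, 0.45, 0.50, 0.55, 0.70):
    frac = largest_cluster_fraction(200, p, rng)
    print(f"p = {p:.2f}   largest cluster fraction ~ {frac:.3f}")
```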

I mentioned two results. The first is Kesten’s celebrated result that the critical probability for planar percolation is 1/2. The other, by Burton and Keane, is that in very general situations there is almost surely a unique infinite component or none at all. This was a good point to mention a famous conjecture, the dying percolation conjecture (especially in dimension 3), which asserts that at the critical probability there is no infinite component.

We will come back to this basic model of percolation later in the course, but for now we moved on to a related, more recent model.

2) First passage percolation

We talked about first passage percolation, introduced by Hammersley and Welsh in 1965. Again we consider the infinite grid graph, and this time we let the length of every edge be 1 with probability 1/2 and 2 with probability 1/2 (independently). These weights define a random metric on this infinite graph that we wish to understand. We consider two vertices (0,0) and (v,0) (in higher dimensions the second entry can be a (d-1)-dimensional vector, but we can restrict our attention to d=2) and we let D(x) be the distance between these two vertices, as a function of the vector x of edge lengths. We explained how D is an integer-valued function on a discrete cube with Lipschitz constant 1. The question we want to address is: what is the variance of D?
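Here is a simulation sketch of the model (my own illustration; restricting to a finite box only approximates the true distance D, and the box size and sample count are arbitrary choices):

```python
# First passage percolation on a strip of Z^2: each edge has length 1 or 2
# with probability 1/2, and D is the distance from (0,0) to (v,0).
import heapq
import random
import statistics

def fpp_distance(v, width, rng):
    """Dijkstra from (0,0) to (v,0) inside the box [0,v] x [-width,width];
    each edge length is sampled once and cached."""
    weights = {}
    def w(u, nb):
        e = (min(u, nb), max(u, nb))
        if e not in weights:
            weights[e] = rng.choice((1, 2))
        return weights[e]
    dist = {(0, 0): 0}
    heap = [(0, (0, 0))]
    while heap:
        d, u = heapq.heappop(heap)
        if u == (v, 0):
            return d
        if d > dist[u]:
            continue
        x, y = u
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] <= v and abs(nb[1]) <= width:
                nd = d + w(u, nb)
                if nd < dist.get(nb, float("inf")):
                    dist[nb] = nd
                    heapq.heappush(heap, (nd, nb))

rng = random.Random(2)
v, samples = 60, 200
ds = [fpp_distance(v, 20, rng) for _ in range(samples)]
print(f"mean(D) ~ {statistics.mean(ds):.2f}   var(D) ~ {statistics.variance(ds):.2f}")
```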

Why do we study the variance, when we do not know exactly the expectation, you may ask? (I remember Larry Shepp asking this when I talked about it at Bell Labs in the early 90s.) One answer is that we know that the expectation of D grows linearly, while we do not know how the variance behaves. Second, we expect that the precise expectation will depend on the model, while the way the variance grows, and perhaps D‘s limiting distribution, will be universal (say, for dimension 2). And third, we do not give up on the expectation either.

Here is what we showed:

1) From the inequality var(D)=\sum_{S\ne \emptyset}\hat D^2(S)\le\sum \hat D^2(S)|S| we derived Kesten’s bound var(D)=O(v). (A small numerical illustration of this Fourier identity appears right after this list.)

2) We considered the value s so that \mu(D>s)=t, and showed by the basic inequality above that the variance of D conditioned on D>s is also O(v). This corresponds to an exponential tail estimate proved by Kesten.

3) Using hypercontractivity we showed that the variance of D conditioned on D>s is actually bounded above by v/\log(1/t), which corresponds to Talagrand’s sub-Gaussian tail estimate.

4) Almost finally, based on a certain very plausible lemma, we used hypercontractivity to show that most of the Fourier coefficients of D are above the \log v level, improving the variance upper bound to O(v/\log v).

5) Since the plausible lemma is still open (see this MO question), we showed how to “shortcut” the lemma and prove the upper bound without it.
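As promised in item 1, here is a small numerical illustration (on a toy function, not on the percolation functional D; my own sketch) of the identity and inequality var(f)=\sum_{S\ne\emptyset}\hat f^2(S)\le\sum \hat f^2(S)|S| behind the first step:

```python
# Toy check of var(f) = sum over nonempty S of fhat(S)^2 <= sum of fhat(S)^2 * |S|.
from itertools import product

n = 4
points = list(product((0, 1), repeat=n))
f = {x: sum(x) // 2 for x in points}        # an integer-valued function with Lipschitz constant 1

def fhat(S):
    # Fourier-Walsh coefficient: E[ f(x) * (-1)^(sum of x_i for i in S) ]
    return sum(f[x] * (-1) ** sum(x[i] for i in S) for x in points) / len(points)

subsets = [tuple(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]
mean = sum(f.values()) / len(points)
var = sum((val - mean) ** 2 for val in f.values()) / len(points)
parseval = sum(fhat(S) ** 2 for S in subsets if S)
weighted = sum(fhat(S) ** 2 * len(S) for S in subsets)
print(f"var(f) = {var:.4f}   sum fhat^2 (S nonempty) = {parseval:.4f}   sum fhat^2 |S| = {weighted:.4f}")
```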

The major open question

It is an open question to give an upper bound of v^{1-\epsilon}, or even v^{2/3}, which is the expected answer in dimension two. Michel Ledoux wisely proposes to prove this first for directed percolation in the plane (where all edges are directed up and to the right), from (0,0) to (v,v), where the edge lengths are Gaussian or Bernoulli.
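Here is a sketch of Ledoux’s directed variant (my own illustration, with Bernoulli edge lengths in \{1,2\} and illustrative parameters): since all edges point up or to the right, the passage time satisfies a simple dynamic-programming recursion.

```python
# Directed first passage percolation from (0,0) to (v,v): every step goes up
# or to the right; each edge has length 1 or 2 with probability 1/2.
import random
import statistics

def directed_passage_time(v, rng):
    INF = float("inf")
    T = [[INF] * (v + 1) for _ in range(v + 1)]   # T[x][y] = passage time to (x,y)
    T[0][0] = 0
    for x in range(v + 1):
        for y in range(v + 1):
            if x > 0:
                T[x][y] = min(T[x][y], T[x - 1][y] + rng.choice((1, 2)))
            if y > 0:
                T[x][y] = min(T[x][y], T[x][y - 1] + rng.choice((1, 2)))
    return T[v][v]

rng = random.Random(3)
for v in (20, 40, 80):
    ts = [directed_passage_time(v, rng) for _ in range(200)]
    print(f"v = {v:3d}   var(T) ~ {statistics.variance(ts):.2f}")   # conjecturally of order v^(2/3)
```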

Lecture 8

Three Further Applications of Discrete Fourier Analysis (without hypercontractivity)

The next three topics will use Fourier analysis but not hypercontractivity. We start with a brief overview of each of them.

1) The cap-set problem, some perspective and a little more extremal combinatorics

We talked about Roth’s theorem, the density Hales–Jewett theorem, and the Erdős–Rado delta-system theorem and conjecture. We mentioned linearity testing.

2) Upper bounds for error-correcting codes

This was a good place to mention (and easily prove) a fundamental property used in both of these applications: the Fourier transform of the convolution of two functions f and g is the product of the Fourier transforms of f and g.
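Here is a quick numerical verification of this property over the group (\mathbb Z/2)^n (my own toy check with random functions; n and the functions are arbitrary):

```python
# Toy verification over (Z/2)^n: the Fourier transform of f * g is the
# pointwise product of the transforms of f and g.
import random
from itertools import product

n = 3
rng = random.Random(4)
points = list(product((0, 1), repeat=n))
f = {x: rng.uniform(-1, 1) for x in points}
g = {x: rng.uniform(-1, 1) for x in points}

def xor(x, y):
    return tuple(a ^ b for a, b in zip(x, y))

def conv(f, g):
    # (f * g)(x) = E_y[ f(y) g(x + y) ], with addition in (Z/2)^n
    return {x: sum(f[y] * g[xor(x, y)] for y in points) / len(points) for x in points}

def fourier(h, S):
    return sum(h[x] * (-1) ** sum(x[i] for i in S) for x in points) / len(points)

h = conv(f, g)
for m in range(2 ** n):
    S = tuple(i for i in range(n) if (m >> i) & 1)
    assert abs(fourier(h, S) - fourier(f, S) * fourier(g, S)) < 1e-12
print("checked: (f*g)^hat(S) = fhat(S) * ghat(S) for every S")
```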

3) Social choice and Arrow’s theorem

The Fourier-theoretic proof of Arrow’s theorem uses only Parseval’s formula, so we are going to start with that.

Fourier-theoretic proof of Arrow’s theorem and related results.

We talked a little about Condorcet (we will later give a more detailed introduction to social choice). We mentioned Condorcet’s paradox, Condorcet’s Jury Theorem, and the notion of a Condorcet winner.

Next we formulated Arrow’s theorem. Lecture 9 was devoted to a Fourier-theoretic proof of Arrow’s theorem (in the balanced case). You can find it discussed in this blog post by Noam Nisan. Lecture 10 mentioned a few further applications of the Fourier method related to Arrow’s theorem, as well as a simple combinatorial proof of Arrow’s theorem in full generality. For the Fourier proof of Arrow’s theorem we showed that a Boolean function with all its non-zero Fourier coefficients on levels 0 and 1 is a constant, a dictatorship, or an anti-dictatorship. We also formulated the FKN theorem and showed how it implies a stability version of Arrow’s theorem in the neutral case.
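Here is a brute-force check of that lemma in the smallest interesting case (n=3; my own sketch): among all 256 Boolean functions on three variables, exactly eight, namely the two constants, the three dictatorships, and the three anti-dictatorships, have all their Fourier weight on levels 0 and 1.

```python
# Brute-force check for n = 3: which Boolean functions f: {-1,1}^3 -> {-1,1}
# have all their Fourier weight on levels 0 and 1?
import math
from itertools import product

n = 3
points = list(product((-1, 1), repeat=n))
subsets = [tuple(i for i in range(n) if (m >> i) & 1) for m in range(2 ** n)]

def fhat(f, S):
    return sum(f[x] * math.prod(x[i] for i in S) for x in points) / len(points)

count = 0
for bits in product((-1, 1), repeat=len(points)):
    f = dict(zip(points, bits))
    high_weight = sum(fhat(f, S) ** 2 for S in subsets if len(S) >= 2)
    if high_weight == 0:                  # coefficients here are exact multiples of 1/8
        count += 1
print(f"functions supported on levels 0 and 1: {count}")   # 2 constants + 3 dictators + 3 anti-dictators = 8
```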

A Problem on Planar Percolation

Conjecture (Gady Kozma): Prove that the critical probability for planar percolation on a Cayley graph of the group \mathbb Z^2 is always an algebraic number.

Gady mentioned this conjecture in his talk here about percolation on infinite Cayley graphs. (Update, April 30: Today Gady mentioned to me an even bolder question: to show that for every group \Gamma, either all critical probabilities of its Cayley graphs are algebraic, or none is!! I recall a similarly bold conjecture regarding the property of being an expander, which turned out to be false but was fruitful nevertheless.)


Noise Sensitivity Lecture and Tales

 

A lecture about noise sensitivity

Several of my recent research projects are related to noise, and noise was also the topic of a recent, somewhat philosophical post. My oldest and perhaps most respectable noise-related project was the work with Itai Benjamini and Oded Schramm on noise sensitivity of Boolean functions. Recently I gave lectures at several places about noise sensitivity and noise stability of Boolean functions, starting with the definitions of these terms and their Fourier descriptions, continuing with the result of Itai, Oded, and myself that the crossing event of planar critical percolation is noise-sensitive, and ending with two recent breakthrough results. One is the work by Garban, Pete, and Schramm that described the scaling limit of the spectral distribution for the crossing event in critical percolation. The second is the “majority is stablest” theorem of Mossel, O’Donnell, and Oleszkiewicz and its connections to hardness of approximation.

A fun way to explain noise sensitivity (especially after the 2000 US election) is in terms of the probability that small mistakes in counting the votes in an election will change the outcome. Here we assume that there are two candidates, that every voter votes with probability 1/2 for each candidate (independently), and that when we count each ballot there is a small probability t that the voter’s intention is reversed.
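Here is a simulation of this story for the simple majority rule (my own sketch; the number of voters, the noise level t, and the number of trials are illustrative). Since majority is noise-stable, the estimated probability stays small, essentially independently of the number of voters:

```python
# Election with noisy counting: n voters vote +-1 uniformly; each recorded
# ballot is flipped with probability t; does the majority outcome change?
import random

def outcome_reversed(n, t, rng):
    votes = [rng.choice((-1, 1)) for _ in range(n)]
    recorded = [-v if rng.random() < t else v for v in votes]
    return (sum(votes) > 0) != (sum(recorded) > 0)

rng = random.Random(5)
n, t, trials = 1001, 0.01, 5000          # odd n avoids ties
flips = sum(outcome_reversed(n, t, rng) for _ in range(trials))
print(f"P(majority outcome reversed) ~ {flips / trials:.3f}")
```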

Noise sensitivity is one of the reasons to reject the HEX voting rule, but perhaps not the main one :). (Noise stability/sensitivity and the related notion of influence play some role in the recent polymath1 project.)

Here is the PowerPoint presentation. I hope to blog at a later time about the analysis of Boolean functions and about noise sensitivity and noise stability.

Meanwhile, let me just give a few general comments and tales:

Some noise sensitivity tales:

1. We started working on noise sensitivity in 1995. At that time Itai was expecting a son, so a friend lent him a pager. By the time we wrote the paper, Alon was already four years old.


2. The paper by Boris Tsirelson and Anatoly Vershik (which also describes work spanning many years) contains a similar notion. Their motivation was entirely different. One difficulty in translating between the two formulations is that “Boolean function” (or rather “a sequence of Boolean functions plus some uniformity condition”) in our work is translated to “noise” in Tsirelson’s terminology, and “noise sensitive” is translated to “black,” “non-Fock,” or “non-classical” in their work.

3. After the 2000 elections, Itai Benjamini and Elchanan Mossel wrote a popular piece about the relevance of this work for elections. Mathematician and writer Barry Cipra wrote a lovely small article about it. (Looking at it now, I find it very impressive how Barry packed so much information, in such a vivid way, into such a short article.)

Here is one paragraph from Cipra’s article:

“Three researchers—Itai Benjamini and Oded Schramm, both of Microsoft Research, and Gil Kalai of the Hebrew University in Jerusalem—have turned the question around: How close does a race have to be in order for errors in counting to have a non-negligible chance of reversing the outcome? Their analysis indicates that a simple, nationwide popular vote would be more stable against mistakes than the beleaguered Electoral College system. Indeed, they find, straightforward majority vote is more stable than any other voting method.”