First passage percolation
1) Models of percolation.
We talked about percolation, introduced by Broadbent and Hammersley in 1957. The basic model is a model of random subgraphs of a grid in n-dimensional space. (Other graphs were considered later as well.) Here, a grid is a graph whose vertices have integer coordinates and where two vertices are adjacent if their Euclidean distance is one. Every edge of the grid-graph is taken (or is “open” in the percolation jargon) with the same probability p, independently. We mentioned some basic questions: Is there an infinite component? How many infinite components are there? What is the probability that the origin belongs to an infinite component, as a function of p?
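For concreteness, here is a minimal simulation sketch of the model (our own illustration, not from the lectures; the function name origin_cluster_size and the parameters are ours), which opens each edge of a finite n-by-n piece of the planar grid with probability p and measures the open cluster of the origin:

```python
import random
from collections import deque

def origin_cluster_size(n=100, p=0.6, seed=0):
    """Open each edge of an n x n grid independently with probability p
    and return the size of the open cluster containing the origin."""
    rng = random.Random(seed)
    open_edge = {}  # lazily sampled edge states

    def is_open(u, v):
        key = (min(u, v), max(u, v))
        if key not in open_edge:
            open_edge[key] = rng.random() < p
        return open_edge[key]

    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:  # BFS along open edges
        x, y = queue.popleft()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nb[0] < n and 0 <= nb[1] < n and nb not in seen \
                    and is_open((x, y), nb):
                seen.add(nb)
                queue.append(nb)
    return len(seen)

print(origin_cluster_size(p=0.6))  # try p on both sides of 1/2
```

Running this for p slightly below and slightly above 1/2 already hints at the sharp transition pinpointed by Kesten’s theorem mentioned next.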
I mentioned two results: The first is Kesten’s celebrated result that the critical probability for planar percolation is 1/2. The other, by Burton and Keane, is that in very general situations there is almost surely a unique infinite component or none at all. This was a good point to mention a famous conjecture – the “dying percolation” conjecture (open especially in dimension 3), which asserts that at the critical probability there is almost surely no infinite component.
We will come back to this basic model of percolation later in the course, but for now we moved on to a related, more recent model.
2) First passage percolation
We talked about first passage percolation, introduced by Hammersley and Welsh in 1965. Again we consider the infinite grid graph, and this time we let the length of every edge be 1 with probability 1/2 and 2 with probability 1/2, independently. These weights describe a random metric on this infinite graph that we wish to understand. We consider the two vertices (0,0) and (v,0) (in higher dimensions the second entry can stand for a (d-1)-dimensional vector, but we can restrict our attention to d=2), and we let D(x) be the distance between these two vertices, regarded as a function of the vector x of edge lengths. We explained how D is an integer-valued function on a discrete cube with Lipschitz constant 1. The question we want to address is: What is the variance of D?
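Here is a rough simulation sketch of this random metric (our own illustration; the names and parameters are ours). Since every edge length is at least 1 and the straight path costs at most 2v, any geodesic uses at most 2v edges, so Dijkstra’s algorithm can safely be truncated to a finite box:

```python
import heapq
import random

def fpp_distance(v, seed=None):
    """Sample the first passage percolation distance D from (0,0) to (v,0)
    on the planar grid, with i.i.d. edge lengths uniform on {1, 2}."""
    rng = random.Random(seed)
    weight = {}  # lazily sampled edge lengths

    def w(a, b):
        key = (a, b) if a <= b else (b, a)
        if key not in weight:
            weight[key] = rng.choice((1, 2))
        return weight[key]

    def in_box(pt):  # any geodesic stays within 2v steps of the origin
        return -2 * v <= pt[0] <= 3 * v and -2 * v <= pt[1] <= 2 * v

    dist = {(0, 0): 0}
    heap = [(0, (0, 0))]
    while heap:  # Dijkstra's algorithm
        d, u = heapq.heappop(heap)
        if u == (v, 0):
            return d
        if d > dist.get(u, float('inf')):
            continue
        x, y = u
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if in_box(nb):
                nd = d + w(u, nb)
                if nd < dist.get(nb, float('inf')):
                    dist[nb] = nd
                    heapq.heappush(heap, (nd, nb))

samples = [fpp_distance(30) for _ in range(100)]
mean = sum(samples) / len(samples)
print(mean, sum((s - mean) ** 2 for s in samples) / len(samples))
```

The sample mean grows linearly in v, while the sample variance should grow much more slowly – which is exactly the phenomenon discussed below.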
Why do we study the variance when we do not know the expectation exactly, you may ask? (I remember Larry Shepp asking this when I talked about it at Bell Labs in the early 90s.) One answer is that we know the expectation of D grows linearly in v, while for the variance we do not even know the order of growth. Second, we expect that determining the expectation precisely will depend on the fine details of the model, while the way the variance grows, and perhaps D’s limiting distribution, will be universal (say, for dimension 2). And third, we have not given up on the expectation either.
Here is what we showed:
1) From the basic inequality var(f) ≤ I(f), bounding the variance by the sum of the influences, we derived Kesten’s bound var(D) = O(v).
2) We considered the value s so that Pr(D ≥ s) = t, and showed by the basic inequality above that the variance of D conditioned on D ≥ s is also bounded by v. This corresponds to the exponential tail estimate proved by Kesten.
3) Using hypercontractivity we showed that the variance of D conditioned on D ≥ s is actually bounded above by v/log(1/t), which corresponds to Talagrand’s sub-Gaussian tail estimate.
4) Almost finally, based on a certain very plausible lemma, we used hypercontractivity to show that most of the Fourier weight of D lies above the log v level, improving the variance upper bound to O(v/log v). (A toy computation of the Fourier weight of D by level appears after this list.)
5) Since the plausible lemma is still open (see this MO question), we showed how we can “shortcut” the lemma and prove the upper bound without it.
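To make these Fourier-theoretic statements concrete, here is a toy brute-force computation (our own illustration, not from the lectures): on a tiny 3-by-2 grid with 7 edges we view D as a function on the discrete cube {-1,1}^7 (an edge has length 1 or 2 according to its coordinate) and tabulate its Fourier weight at each level:

```python
from itertools import product

# vertices and edges of a small 3 x 2 grid
V = [(x, y) for x in range(3) for y in range(2)]
E = [((x, y), (x + 1, y)) for x in range(2) for y in range(2)] \
    + [((x, 0), (x, 1)) for x in range(3)]
m = len(E)  # 7 edges, so D is a function on {-1,1}^7

def distance(weights):
    """Graph distance from (0,0) to (2,0) by Floyd-Warshall."""
    d = {(u, v): (0 if u == v else float('inf')) for u in V for v in V}
    for (u, v), wt in zip(E, weights):
        d[(u, v)] = d[(v, u)] = wt
    for k in V:
        for i in V:
            for j in V:
                if d[(i, k)] + d[(k, j)] < d[(i, j)]:
                    d[(i, j)] = d[(i, k)] + d[(k, j)]
    return d[((0, 0), (2, 0))]

cube = list(product((-1, 1), repeat=m))
f = {x: distance([1 if xe == -1 else 2 for xe in x]) for x in cube}

level_weight = [0.0] * (m + 1)
for S in product((0, 1), repeat=m):  # subsets of edges as 0/1 vectors
    coeff = 0.0
    for x in cube:
        chi = 1
        for i in range(m):
            if S[i]:
                chi *= x[i]
        coeff += f[x] * chi
    level_weight[sum(S)] += (coeff / len(cube)) ** 2

for k, wt in enumerate(level_weight):
    print(k, round(wt, 4))  # Fourier weight of D at each level
```

Of course, at this size nothing asymptotic can be seen; the point is only to make the objects (the cube, the levels, the Fourier weight) tangible.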
The major open question
It is an open question to give an upper bound of v^{1-ε}, or even v^{2/3}, which is the expected answer in dimension two. Michel Ledoux wisely proposes to prove it first for directed percolation in the plane (where all edges are directed up and to the right) from (0,0) to (v,v), where the edge lengths are Gaussian or Bernoulli.
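In the directed model all paths move monotonically up and to the right, so the passage time can be computed exactly by dynamic programming, which makes experimentation easy. A quick sketch with Gaussian edge lengths (the function name is ours):

```python
import random

def directed_passage_time(v, seed=None):
    """Directed first passage time from (0,0) to (v,v): every edge points up
    or right and carries an i.i.d. standard Gaussian length; since paths are
    monotone, a simple dynamic program computes the minimum over all paths."""
    rng = random.Random(seed)
    right = [[rng.gauss(0, 1) for _ in range(v)] for _ in range(v + 1)]  # (x,y)->(x+1,y)
    up = [[rng.gauss(0, 1) for _ in range(v + 1)] for _ in range(v)]     # (x,y)->(x,y+1)
    T = [[0.0] * (v + 1) for _ in range(v + 1)]  # T[y][x] = passage time to (x,y)
    for y in range(v + 1):
        for x in range(v + 1):
            if x == 0 and y == 0:
                continue
            best = float('inf')
            if x > 0:
                best = min(best, T[y][x - 1] + right[y][x - 1])
            if y > 0:
                best = min(best, T[y - 1][x] + up[y - 1][x])
            T[y][x] = best
    return T[v][v]

print(directed_passage_time(100))
```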
Three Further Applications of Discrete Fourier Analysis (without hypercontractivity)
The next three topics will use Fourier analysis but not hypercontractivity. We start by briefly describing them.
1) The cap-set problem, some perspective and a little more extremal combinatorics
We talked about Roth’s theorem, the density Hales–Jewett theorem, and the Erdős–Rado delta-system theorem and conjecture. We also mentioned linearity testing.
2) Upper bounds for error-correcting codes
This was a good place to mention (and easily prove) a fundamental property used in both of these applications: the Fourier transform of the convolution of two functions f and g is the product of the Fourier transforms of f and g.
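Here is a quick numerical verification of this property over the group Z_2^n (a toy check of our own; the normalizations – an expectation in both the transform and the convolution – are one standard choice):

```python
import random

n = 4
N = 1 << n  # identify Z_2^n with bitmasks 0..2^n-1
chi = lambda S, x: -1 if bin(S & x).count('1') % 2 else 1  # the character chi_S

def fourier(f):
    """fhat(S) = E_x f(x) chi_S(x)."""
    return [sum(f[x] * chi(S, x) for x in range(N)) / N for S in range(N)]

def convolve(f, g):
    """(f * g)(x) = E_y f(y) g(x + y), addition in Z_2^n being XOR."""
    return [sum(f[y] * g[x ^ y] for y in range(N)) / N for x in range(N)]

rng = random.Random(1)
f = [rng.uniform(-1, 1) for _ in range(N)]
g = [rng.uniform(-1, 1) for _ in range(N)]

lhs = fourier(convolve(f, g))
rhs = [a * b for a, b in zip(fourier(f), fourier(g))]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
print("convolution theorem verified on Z_2^%d" % n)
```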
3) Social choice and Arrow’s theorem
The Fourier-theoretic proof of Arrow’s theorem uses only Parseval’s formula, so we are going to start with that.
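As a warm-up, here is a one-screen numerical check of Parseval’s formula, E_x f(x)^2 = sum_S fhat(S)^2, in the same setting as above (our own toy example):

```python
import random

n = 4
N = 1 << n
rng = random.Random(2)
f = [rng.uniform(-1, 1) for _ in range(N)]  # a random real function on Z_2^n
chi = lambda S, x: -1 if bin(S & x).count('1') % 2 else 1
fhat = [sum(f[x] * chi(S, x) for x in range(N)) / N for S in range(N)]
# Parseval: the (normalized) squared l2-norm equals the sum of squared coefficients
assert abs(sum(v * v for v in f) / N - sum(c * c for c in fhat)) < 1e-9
print("Parseval verified")
```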
Fourier-theoretic proof of Arrow’s theorem and related results.
We talked a little about Condorcet (we will later give a more detailed introduction to social choice). We mentioned Condorcet’s paradox, Condorcet’s Jury Theorem, and the notion of a Condorcet winner.
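The paradox itself fits in a few lines: with three voters holding cyclic preferences, every candidate loses some pairwise majority vote, so there is no Condorcet winner (a toy illustration of our own):

```python
# Condorcet's paradox: three voters with cyclic preferences over {A, B, C}
ballots = [('A', 'B', 'C'), ('B', 'C', 'A'), ('C', 'A', 'B')]

def majority_prefers(x, y):
    """True if a majority of ballots ranks x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

for x, y in [('A', 'B'), ('B', 'C'), ('C', 'A')]:
    print("majority prefers %s over %s: %s" % (x, y, majority_prefers(x, y)))
# all three lines print True: the majority relation is cyclic
```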
Next we formulated Arrow’s theorem. Lecture 9 was devoted to a Fourier-theoretic proof of Arrow theorem (in the balanced case). You can find it discussed in this blog post by Noam Nisan. Lecture 10 mentioned a few further application of the Fourier method related to Arrow’s theorem, as well as a simple combinatorial proof of Arrow’s theorem in full generality. For the Fourier proof of Arrow’s theorem we showed that a Boolean function with all its non-zero Fourier coefficients on levels 0 and 1 is constant, dictatorship or anti-dictatorship. This time we formulated FKN theorem and showed how it implies a stability version of Arrow’s theorem in the neutral case.