Analysis of Boolean Functions – Week 7

Lecture 11

The Cap Set problem

We presented Meshulam's bound 3^n/n for the maximum number of elements in a subset A of (\mathbb{Z}/3Z)^n not containing a triple x,y,z of distinct elements whose sum is 0.

The theorem is analogous to Roth's theorem for 3-term arithmetic progressions and, in fact, it is a sort of purified analog of Roth's proof, as some difficulties present over the integers do not arise here. There are two ingredients in the proof: one can be referred to as the "Hardy-Littlewood circle method" and the other is the "density increasing" argument.

We first talked about the density-increasing method and showed how KKL's theorem for the influence of sets follows from KKL's theorem for the maximum individual influence. I mentioned what is known about influences of large sets and what is still open. (I will devote a separate post to this topic.)

Then we went over Meshulam's proof in full detail. A good place to see a detailed sketch of the proof is this post on Gowers's blog.

Let me copy Tim’s sketch over here:

Sketch of proof (from Gowers’s blog).

Next, here is a brief sketch of the Roth/Meshulam argument. I am giving it not so much for the benefit of people who have never seen it before, but because I shall need to refer to it. Recall that the Fourier transform of a function f:\mathbb{F}_3^n\to\mathbb{C} is defined by the formula
\hat{f}(r)=\mathbb{E}_xf(x)\omega^{r.x},
where \mathbb{E} is short for 3^{-n}\sum, \omega stands for \exp(2\pi i/3) and r.x is short for \sum_ir_ix_i. Now

\mathbb{E}_{x+y+z=0}f(x)f(y)f(z).
(Here \mathbb{E} stands for 3^{-2n}\sum, since there are 3^{2n} solutions of x+y+z=0.) By the convolution identity and the inversion formula, this is equal to \sum_r\hat{f}(r)^3.

Now let f be the characteristic function of a subset A\subset\mathbb{F}_3^n of density \delta. Then \hat{f}(0)=\delta. Therefore, if A contains no solutions of x+y+z=0 (apart from degenerate ones — I’ll ignore that slight qualification for the purposes of this sketch as it makes the argument slightly less neat without affecting its substance) we may deduce that

\sum_{r\ne 0}|\hat{f}(r)|^3\geq\delta^3.

Now Parseval’s identity tells us that

\sum_r|\hat{f}(r)|^2=\mathbb{E}_x|f(x)|^2=\delta,
from which it follows that \max_{r\ne 0}|\hat{f}(r)|\geq\delta^2, since \sum_{r\ne 0}|\hat{f}(r)|^3\leq\max_{r\ne 0}|\hat{f}(r)|\cdot\sum_r|\hat{f}(r)|^2.

Recall that \hat{f}(r)=\mathbb{E}_xf(x)\omega^{r.x}. The function x\mapsto\omega^{r.x} is constant on each of the three hyperplanes r.x=b (here I interpret r.x as an element of \mathbb{F}_3). From this it is easy to show that there is a hyperplane H such that \mathbb{E}_{x\in H}f(x)\geq\delta+c\delta^2 for some absolute constant c. (If you can’t be bothered to do the calculation, the basic point to take away is that if \hat{f}(r)\geq\alpha then there is a hyperplane perpendicular to r on which A has density at least \delta+c\alpha, where c is an absolute constant. The converse holds too, though you recover the original bound for the Fourier coefficient only up to an absolute constant, so non-trivial Fourier coefficients and density increases on hyperplanes are essentially the same thing in this context.)

Thus, if A contains no arithmetic progression of length 3, there is a hyperplane inside which the density of A is at least \delta+c\delta^2. If we iterate this argument 1/c\delta times, then we can double the (relative) density of A. If we iterate it another 1/2c\delta times, we can double it again, and so on. The number of iterations is at most 2/c\delta, so by that time there must be an arithmetic progression of length 3. This tells us that we need to lose only 2/c\delta dimensions, so for the argument to work we need n\geq 2/c\delta, or equivalently \delta\geq C/n.
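The Fourier identity at the heart of the sketch, \mathbb{E}_{x+y+z=0}f(x)f(y)f(z)=\sum_r\hat{f}(r)^3, can be checked numerically. Here is a minimal Python sanity check (the set A and all variable names are mine, chosen for illustration) for a random subset of \mathbb{F}_3^4:

```python
import itertools, cmath, random

n = 4
omega = cmath.exp(2j * cmath.pi / 3)           # primitive cube root of unity
points = list(itertools.product(range(3), repeat=n))
A = set(random.sample(points, 20))             # an arbitrary subset of F_3^n
f = {x: 1.0 if x in A else 0.0 for x in points}  # characteristic function

def fhat(r):
    # \hat f(r) = E_x f(x) omega^{r.x}, with E short for 3^{-n} * sum
    return sum(f[x] * omega ** (sum(ri * xi for ri, xi in zip(r, x)) % 3)
               for x in points) / 3**n

# Left side: average of f(x)f(y)f(z) over the 3^{2n} solutions of x+y+z=0
# (z is determined by x and y, namely z = -(x+y) coordinatewise mod 3).
lhs = sum(f[x] * f[y] * f[tuple((-xi - yi) % 3 for xi, yi in zip(x, y))]
          for x in points for y in points) / 3**(2 * n)

# Right side: sum of cubes of the Fourier coefficients.
rhs = sum(fhat(r) ** 3 for r in points)

assert abs(lhs - rhs) < 1e-9
```

The agreement is exact up to floating-point error, for any choice of A, which is just the convolution identity plus Fourier inversion from the sketch.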


Lecture 12

Error-Correcting Codes

We discussed error-correcting codes. A binary code C is simply a subset of the discrete n-dimensional cube. This is a familiar object, but in coding theory we ask different questions about it. A code is linear if it forms a linear subspace of (Z/2Z)^n, i.e., a vector space over Z/2Z. The minimal distance of a code is the minimum Hamming distance between two distinct elements, and in the case of linear codes it is simply the minimum weight of a non-zero element of the code. We mentioned codes over larger alphabets, spherical codes, and even codes in more general metric spaces. Error-correcting codes are among the most glorious applications of mathematics and their theory is related to many topics in pure mathematics and theoretical computer science.
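The fact that for a linear code the minimal distance equals the minimum weight of a non-zero codeword is easy to verify directly. Here is a small Python sketch using one standard (assumed) systematic generator matrix for the [7,4] Hamming code:

```python
import itertools

# A generator matrix for the [7,4] Hamming code in systematic form:
# columns 1-4 carry the message bits, columns 5-7 are parity checks.
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]

def encode(msg):
    # codeword = msg * G over Z/2Z
    return tuple(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

codewords = [encode(m) for m in itertools.product((0, 1), repeat=4)]

# Minimum Hamming distance over all distinct pairs of codewords...
min_dist = min(sum(a != b for a, b in zip(c, d))
               for c, d in itertools.combinations(codewords, 2))

# ...equals the minimum weight of a non-zero codeword, by linearity:
# the difference (= sum, over Z/2Z) of two codewords is again a codeword.
min_weight = min(sum(c) for c in codewords if any(c))

assert min_dist == min_weight == 3
```

The minimal distance 3 means the Hamming code can correct any single bit error.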

1) An extremal problem for codes: What is the maximum size of a binary code of length n with minimal distance d? We mentioned the volume (or Hamming) upper bound and the Gilbert-Varshamov lower bound. We concentrated on the case of codes of positive rate.
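Both bounds are volume computations with Hamming balls. A minimal Python sketch (function names are mine): the Hamming bound packs disjoint balls of radius (d-1)/2 around codewords, while Gilbert-Varshamov greedily adds words, each excluding a ball of radius d-1.

```python
from math import comb

def ball(n, r):
    # number of points in a Hamming ball of radius r in {0,1}^n
    return sum(comb(n, i) for i in range(r + 1))

def hamming_upper(n, d):
    # sphere-packing bound: balls of radius (d-1)//2 around codewords
    # are pairwise disjoint, so they cannot overfill the cube
    return 2**n // ball(n, (d - 1) // 2)

def gv_lower(n, d):
    # Gilbert-Varshamov: greedily pick codewords; the process can only
    # stop once balls of radius d-1 cover the whole cube
    return -(-2**n // ball(n, d - 1))   # ceiling division

# For n = 7, d = 3 the Hamming bound gives 16, attained by the
# Hamming code (a perfect code); the GV bound only guarantees 5.
assert hamming_upper(7, 3) == 16
assert gv_lower(7, 3) == 5
```

For fixed rate the gap between the two bounds is exponential, which is exactly why the positive-rate regime is interesting.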

2) Examples of codes: We mentioned the Hamming code and the Hadamard code and considered some of their basic properties. Then we mentioned the long code, which is very important in the study of hardness of computation.

3) Linearity testing: Linearity testing is closely related to the Hadamard code. We described the Blum-Luby-Rubinfeld linearity test and analyzed it. This is very similar to the Fourier-theoretic formula and argument we saw last time for the cap set problem.
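The BLR test queries f at random x, y and at x+y, and accepts if f(x)+f(y)=f(x+y). Here is a minimal Python simulation (an illustration with my own function names, computing the exact rejection probability by enumeration rather than by sampling):

```python
import itertools, random

n = 5
xs = list(itertools.product((0, 1), repeat=n))

def blr_reject_prob(f):
    # exact Pr_{x,y}[f(x) XOR f(y) != f(x+y)] over all pairs (x, y)
    bad = sum(f[x] ^ f[y] != f[tuple(a ^ b for a, b in zip(x, y))]
              for x in xs for y in xs)
    return bad / len(xs) ** 2

# A linear function f(x) = r.x over Z/2Z passes the test with probability 1.
r = (1, 0, 1, 1, 0)
linear = {x: sum(a * b for a, b in zip(r, x)) % 2 for x in xs}
assert blr_reject_prob(linear) == 0.0

# A random function, which is far from linear, is rejected noticeably often.
random.seed(0)
rand_f = {x: random.randint(0, 1) for x in xs}
assert blr_reject_prob(rand_f) > 0.1
```

The Fourier analysis of the test expresses the acceptance probability as 1/2 + (1/2)\sum_r \hat{f}(r)^3 (for f written with ±1 values), the same cube-sum that appeared in the cap set argument.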

We started to describe Delsarte's linear programming method, to be continued next week.

Tentative Plans and Belated Updates II


Elementary school reunion: Usually I don't write about personal matters on the blog, but having (a few weeks ago) an elementary school reunion after 42 years was such a moving and exciting event that I considered making an exception. For now, here is a picture:


Jirka’s Miraculous year

It looks like a lot is happening. From time to time I think that I should write on my blog about exciting new things I hear about, but this is quite a difficult task. Perhaps I should at least post updates about progress on problems I discussed earlier, but even this is not easy. Jirka Matousek wrote a paper entitled The dawn of an algebraic era in discrete geometry? The paper starts as follows:

To me, 2010 looks as annus mirabilis, a miraculous year, in several areas of my mathematical interests. Below I list seven highlights and breakthroughs, mostly in discrete geometry, hoping to share some of my wonder and pleasure with the readers.

The paper lists seven startling new results. A few of these results were discussed here, a few others I have planned to discuss later, and yet a few others (like the recent solution by June Huh of the famous unimodality conjecture for the coefficients of chromatic polynomials of graphs) caught me by complete surprise. (Here is a link to a follow-up paper by June Huh and Eric Katz.) Let me add one additional item, namely the solution (in the negative) by Boris Bukh of Eckhoff's partition conjecture.

Other wonderful combinatorics news

These are also good times for other areas of combinatorics. I described some startling developments (e.g., here and here and here) and there is more. There were a few posts (here and here) on the Cap Set Problem. Recently Michael Bateman and Nets Katz improved, after many years, the Roth-Meshulam bound. See these two posts on Gowers's blog (I, II). Very general theorems discovered independently by David Conlon and Tim Gowers and by Mathias Schacht show that many theorems (such as Ramsey's theorem or Turan's theorem) continue to hold for substructures of sparse random sets.

Louis Esperet, Frantisek Kardos, Andrew King, Daniel Kral, and Serguei Norine proved the Lovasz-Plummer conjecture. They showed that every cubic bridgeless graph G has at least 2^(|V(G)|/3656) perfect matchings. The concept of flag algebras, discovered by Razborov, is an extremely useful tool for extremal combinatorics. It has led to solutions of several problems and seems to bring us close to a solution of Turan's Conjecture (which we discussed here and here). For example, it led to the solution by Hamed Hatami, Jan Hladký, Daniel Král, Serguei Norine, and Alexander Razborov of the question on the maximum number of pentagons in triangle-free graphs.

Hamed Hatami found a structure theorem for Boolean functions with coarse thresholds w.r.t. small probabilities. This extends and sharpens results by Ehud Friedgut and Jean Bourgain. I finally caught up (thanks to Reshef Meir) with the Moser-Tardos result giving a new algorithmic proof of the Lovasz local lemma. Amazing! You can read about it here.

Some updates on my Internet questions

Imre Leader and Eoin Long wrote a paper entitled Tilted Sperner families, which solves a question I raised in the context of polymath1. Imre and Eoin give additional results and conjectures. My motivation was to try to come up (eventually) with very, very general conjectures which include density Hales-Jewett as a very special case and are also related to error-correcting codes. Raman Sanyal discovered a dual form of Tverberg's theorem in terms of families of fans. (We asked about it here.) There is a new paper on the Entropy-Influence conjecture entitled The Fourier Entropy-Influence Conjecture for certain classes of Boolean functions, by Ryan O'Donnell, John Wright, and Yuan Zhou. The paper contains a proof of the conjecture for symmetric Boolean functions and various other cases. This is the first new result on the conjecture in many years. Also there is nice progress on the AC^0-prime number conjecture, asked about in a previous post and in a subsequent MathOverflow question (where I will keep updating matters). Ben Green solved the conjecture! Jean Bourgain settled the more general MO question and also found results on certain AC(2) circuits.

Newton Institute and Oberwolfach

And it seems that things are moving along nicely in other areas close to my heart. A week ago (this was actually two months ago) I participated in a workshop at the Newton Institute on discrete harmonic analysis. And in the first week of February we had our traditional Oberwolfach meeting on geometric and topological combinatorics. Many interesting results!

A visit to IQI

In the last week of January I visited Caltech. I missed the IPAM meeting scheduled a week before because my visa arrived too late, but I still made it to IQI. This was a very nice opportunity, as most of my time at Caltech was devoted to quantum information/quantum computation issues related to my own work on quantum fault tolerance. So I gave an "informal" seminar describing my point of view (and gave it again the next day at USC). Here are the slides. My lecture was followed by a two-hour discussion of the more technical details of my conjectures, seeking weak points and counterexamples in what I said, and trying to connect it with physics. There followed further discussions about some aspects of quantum fault tolerance and more general questions of quantum information with John Preskill, Leonard Schulman, Daniel Lidar, and a few other people. (A lot of brilliant young people!) I learned quite a lot and was happy with this opportunity. (Of course, I did talk a little with Caltechian old and new friends about bona fide combinatorics questions.)

Three jokes for a dollar

On the weekend I took the LA metro (whose mere existence surprised me) and visited downtown LA and Hollywood. Next to Pershing station a person stopped me and asked if I wanted to buy three jokes for one dollar. At first I said no, but then I reconsidered, called him back, and gave him a dollar. To his disappointment and mine I did not understand the first joke, but the other two (perhaps adjusted to my revealed level of understanding) were quite good.

Hectic semester at HUJI

Here at HUJI things are as hectic as always, with 10-15 weekly research seminars this week. (This was two months ago; the semester has just ended.)