Category Archives: Computer Science and Optimization

More Reasons for Small Influence


Readers of the big-league ToC blogs have already heard about the breakthrough paper An average-case depth hierarchy theorem for Boolean circuits by Benjamin Rossman, Rocco Servedio, and Li-Yang Tan. Here are blog reports on Computational Complexity, on Shtetl-Optimized, and on Gödel's Lost Letter and P=NP. Let me mention one of the applications: refuting a 1999 conjecture by Benjamini, Schramm, and me.

Update: Li-Yang Tan explained matters in an excellent comment below, starting with: “In brief, we believe that an average-case depth hierarchy theorem rules out the possibility of a converse to Hastad-Boppana-LMN when viewed as a statement about the total influence of *constant*-depth circuits. However, while the Inf(f) <= (O(\log(S)))^{d-1} bound is often applied in the setting where d is constant, it in fact holds for all values of d. It would be interesting to explore the implications of our result in regimes where d is allowed to be super-constant.”

Let me add that the bounded-depth case is an important case (that I referred to here), that there might be some issues with the conjecture failing for non-constant depth “for the wrong reasons”, and that I see good prospects that RST's work and techniques will refute the BKS conjecture in full, also for non-bounded depth.

Update: Rossman, Servedio, and Tan refuted some important variations of our conjecture, while other variations remain open. My description was not so accurate, and in hindsight I could also have explained the background and motivation better. So rather than keep updating this post, I will write a new one in a few weeks.


Theorem: If f is described by a bounded-depth circuit of size s and depth d, then I(f), the total influence of f, is at most (\log s)^{d-1}.

The total influence I(f) of f is defined as follows: for an input x write I(x) for the number of neighbors y of x with f(y) \ne f(x). Then I(f) = \mathbb E I(x).
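As a side remark, for a small number of variables the total influence can be computed directly from this definition. Here is a minimal brute-force sketch in Python (my own illustration; the majority function is just an example):

from itertools import product

def total_influence(f, n):
    # I(f) = E[I(x)], where I(x) counts neighbors y of x with f(y) != f(x)
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i+1:]  # flip the i-th coordinate
            if f(y) != f(x):
                total += 1
    return total / 2 ** n  # average over all 2^n inputs

maj3 = lambda x: int(sum(x) >= 2)  # majority of three bits
print(total_influence(maj3, 3))    # 1.5: three variables of influence 1/2 each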

The history of this result, as I remember it, is as follows. It relies in a crucial way on Hastad's switching lemma, going back to Hastad's 1986 thesis, and for monotone functions one can use an even earlier 1984 result by Boppana. It was first proved (with exponent d) in 1993 by Linial, Mansour, and Nisan, as a consequence of their theorem on the decay of Fourier coefficients of AC0 functions (also based on the switching lemma). With the correct exponent d-1 it was derived from the switching lemma in a short clean argument in a 1997 paper by Ravi Boppana; and finally it was extended by Hastad (2001) to a sharpening of the LMN result about the spectral decay.


Mike Sipser

Inverse Boppana-Hastad-LMN

Conjecture (Benjamini, Kalai, and Schramm, 1999): Every Boolean function f is “close” to some depth-d size-s circuit with (\log s)^{d-1} not much larger than I(f).

Of course, the exponent d-1 is the strongest possible, but replacing it with some constant times d is also of interest. (Also, the monotone case already captures much interest.)

As we will see, the conjecture is false even if the exponent d-1 is replaced by a constant times d. I do not know what the optimal function u(d) is, if any, for which you can replace the exponent d-1 by u(d).

Update: Following some comments by Boaz Barak, I am not sure that the new examples and the results regarding them indeed lead to a disproof of our conjecture. The remarkable part of RST's paper is that the RST example cannot be approximated by a circuit of smaller depth – even by one. (This has various important applications.) In order to disprove our conjecture one needs to show that the influence of the example is smaller than what Boppana's inequality ((\log size)^{depth-1}) gives. This is not proved in the paper (but it may be true).

RST's result does say that if the influence is (say) \log n (where n is the number of variables), and the function depends on a small number of variables, then it still need not be correlated with a function in AC0.

Anyway, I will keep you posted.

In 2007, O’Donnell and Wimmer showed that our inverse conjecture is false as stated. They took a Boolean function which is a tribes function on half the variables and “anti-tribes” on the rest. This still left the possibility that the exponent d-1 could be replaced by d, or that “close” could be replaced by a weaker conclusion of substantial correlation.

Rossman, Servedio, and Tan give a genuinely new reason for small influence! Their example, named after Mike Sipser, is based on the AND-OR tree – a Boolean formula with alternating AND and OR levels and carefully designed parameters. The crucial part is to show that you cannot approximate this function by lower-depth circuits. The theorem proved by RST is amazingly strong and does not allow reducing the depth even by one! Their novel technique of random projections for proving it is very exciting.
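To get a feel for the construction (though not for the hard part, which is the inapproximability lower bound), here is a toy Python sketch evaluating a tree with alternating OR and AND levels. The fan-ins, which in the Sipser function are chosen very delicately level by level, are given here by a hypothetical list fanins, not the actual RST parameters:

def and_or_tree(bits, fanins, level=0):
    # Evaluate an alternating OR/AND tree: even levels are ORs, odd levels ANDs.
    # fanins[k] is the fan-in at depth k; len(bits) must equal their product.
    if level == len(fanins):
        return bits[0]  # a single input variable at a leaf
    size = len(bits) // fanins[level]  # number of leaves in each subtree
    children = [and_or_tree(bits[i * size:(i + 1) * size], fanins, level + 1)
                for i in range(fanins[level])]
    return max(children) if level % 2 == 0 else min(children)  # OR vs AND

# OR of two ANDs of two ORs of pairs of variables (8 variables in total)
print(and_or_tree([0, 1, 0, 0, 1, 1, 0, 1], [2, 2, 2]))  # 1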

It is still possible (I think) that such inverse theorems hold when the individual influences of all variables are below polylog(n)/n, where n is the number of variables. Let me pose it as a conjecture:

Conjecture: Every Boolean function f with n variables and individual influences below polylog(n)/n is close to a function g in AC0 of size s and depth d, where (\log s)^d is polylog(n).

And here is a post on TCS Stackexchange with a question about “monotone vs positive” for the class P. Similar questions for AC0 and TC0 were asked in this post.

New Isoperimetric Results for Testing Monotonicity


Muli, Dor, and Subhash, Jerusalem, May 21, 2015.


Michel Talagrand and Gregory Margulis

Property testing

In this post I will tell you about a new paper by Subhash Khot, Dor Minzer and Muli Safra entitled On Monotonicity Testing and Boolean Isoperimetric Type Theorems. This remarkable paper gives a definite answer to one of the main open problems on property testing, proves some wonderful new isoperimetric inequalities, and sheds some light on the still mysterious property of monotonicity.

Property testing is an exciting area which lies between mathematics and the theory of computing. (It is closely related to PCPs and to some modern aspects of error-correcting codes.) Here is a post about it on Terry Tao’s blog.

Suppose that you have an unknown graph G and you want to distinguish between two possibilities:

1) The graph G contains a triangle;

2) We can remove 1% of the edges of G so that no triangle remains.

It turns out that by looking at a bounded number of edges drawn at random, we can either demonstrate that G contains a triangle or demonstrate that with high probability the second possibility holds! Moreover, this is a deep mathematical result.

Property testing somewhat resembles election polls (and, more generally, statistical hypothesis testing): we can estimate the outcome of an election with good accuracy and high probability by probing the votes of a bounded number of voters.

Testing monotonicity

Given a Boolean function f, what is the smallest number of queries needed to be able to say that

1) The function f is not monotone; or

2) With high probability f is at most ε-far from monotone, namely, it can be made monotone by flipping at most \epsilon 2^n of its values.

We will denote by ε(f) the normalized Hamming distance of a Boolean function f from the set of monotone functions.

A brief incomplete history of monotone testing

1) Oded Goldreich, Shafi Goldwasser, Eric Lehman, Dana Ron, and Alex Samorodnitsky. Testing monotonicity. Combinatorica, 20(3):301–337, 2000.

This paper considered the following tester: choose x and y at random such that y > x, and test whether f(y) < f(x). (A toy sketch of such a tester appears right after this brief history.)

Using the basic edge-isoperimetric inequality it was proved that n queries suffice.

It took 15 years, a new tester, and a new isoperimetric inequality to break this record and show that n^{7/8} polylog(n) queries suffice.

2)  Deeparnab Chakrabarty and C. Seshadhri. A o(n) monotonicity tester for boolean functions over the hypercube. In Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages 411–418, 2013.

An improvement to n^{5/6} polylog(n) was achieved here:

3)  Xi Chen, Rocco Servedio, and Li-Yang Tan. New algorithms and lower bounds for monotonicity testing. In Annual Symposium on Foundations of Computer Science, FOCS ’14, 2014.

The paper by Khot, Minzer, and Safra reduces this to the optimal (apart from powers of \log n) \sqrt{n} polylog(n). They use yet another new tester and a new isoperimetric inequality.

I did not talk about the adaptive version of the problem, about lower bounds, or about the dependence on \epsilon. Khot, Minzer, and Safra also give (up to powers of \log) optimal dependence on \epsilon.
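To make the discussion concrete, here is a toy Python sketch of the tester from item 1) above: sample a random comparable pair x ≤ y and look for a violation f(x)=1, f(y)=0. This is only an illustration under my own simplifications; it is not the KMS tester, and the query counts quoted above come from much more careful analyses.

import random

def find_violation(f, n):
    # Sample a comparable pair x <= y and check for a monotonicity violation
    x = [random.randint(0, 1) for _ in range(n)]
    y = [xi | random.randint(0, 1) for xi in x]  # y dominates x coordinatewise
    return f(x) == 1 and f(y) == 0

def looks_monotone(f, n, queries=1000):
    return not any(find_violation(f, n) for _ in range(queries))

# A non-monotone example: majority with the first coordinate negated
f = lambda x: int((1 - x[0]) + sum(x[1:]) > len(x) / 2)
print(looks_monotone(f, 9))  # usually False: a violation is found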

Isoperimetric results: Margulis, Talagrand, and beyond

Margulis’s Theorem: Let f be a Boolean function on \Omega_n=\{0,1\}^n, and let \mu be the uniform probability measure on \Omega_n. For x \in \Omega_n define

I(x)=|\{y \in \Omega_n : f(x)=1, d(x,y)=1, f(y) \ne f(x)\}|.

Now, the edge boundary (or total influence) of f is defined by

I(f)=\mathbb E I(x); it is also denoted by B_e(f).

The vertex boundary of f is defined by \Gamma(f)=\mu(\{x \in \Omega_n: I(x) \ne 0\}).

The classic edge-isoperimetric result asserts that

(1)  I(f) \ge 2 var (f),

Here var(f) is the variance of f. (For a Boolean function f with \mu(f=1)=t we have var(f)=t(1-t).)

Margulis’ theorem asserts that for some constant C:

(2) I(f)\cdot \Gamma(f)\ge C (var (f))^2.

In fact, Margulis proved the result in greater generality, for Bernoulli probability measures defined by \mu_p(x)=p^k(1-p)^{n-k}, where k=x_1+x_2+\dots +x_n.
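For small n all the quantities in (2) can be computed by brute force. The following Python sketch (my own illustration, following the definition of I(x) above, so only points with f(x)=1 contribute) checks the inequality for the majority function:

from itertools import product

def margulis_data(f, n):
    # Return (I(f), Gamma(f), var(f)) for f on {0,1}^n with the uniform measure
    points = list(product([0, 1], repeat=n))
    edge, vertex, mean = 0, 0, 0
    for x in points:
        ix = 0
        if f(x) == 1:  # as in the definition of I(x) above
            ix = sum(f(x[:i] + (1 - x[i],) + x[i+1:]) != f(x) for i in range(n))
        edge += ix
        vertex += (ix > 0)
        mean += f(x)
    N = len(points)
    mean /= N
    return edge / N, vertex / N, mean * (1 - mean)  # var(f) = t(1-t)

maj = lambda x: int(sum(x) > len(x) / 2)
I_f, Gamma_f, var_f = margulis_data(maj, 5)
print(I_f * Gamma_f >= var_f ** 2)  # here (2) happens to hold even with C = 1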

Talagrand’s theorem

Talagrand’s theorem:

(3) \mathbb E(\sqrt{I(x)})\ge C'var(f).

Again, Talagrand’s theorem extends to the Bernoulli case. Talagrand’s theorem implies Margulis’ theorem by a simple application of the Cauchy-Schwarz inequality: since \sqrt{I(x)} vanishes unless I(x) \ne 0, we get \mathbb E(\sqrt{I(x)}) \le \sqrt{I(f)}\cdot\sqrt{\Gamma(f)}, so (3) implies (2) with C=C'^2. Continue reading

Next Week in Jerusalem: Special Day on Quantum PCP, Quantum Codes, Simplicial Complexes and Locally Testable Codes

Special Quantum PCP and/or Quantum Codes: Simplicial Complexes and Locally Testable Codes Day

24 Jul 2014 – 09:30 to 17:00

room B-220, 2nd floor, Rothberg B Building

On Thursday, the 24th of July, we will host an SC-LTC (simplicial complexes and classical and quantum locally testable codes) day at the Hebrew University, Rothberg building B, room 202 (second floor), in the Givat Ram campus. Please join us; we are hoping for a fruitful and enjoyable day with lots of interactions. Coffee and refreshments will be provided throughout the day, as well as free “tickets” for lunch on campus.
There is no registration fee, but please email us, preferably by next Tuesday, if there is a reasonable probability that you will attend, so that we have some estimate of the number of people for food planning.

Program: SC-LTC day – simplicial complexes and locally testable classical and quantum codes – Rothberg building B202
9:00 gathering: coffee and refreshments

9:30 Irit Dinur: Locally testable codes, a bird’s eye view

10:15: coffee break

10:45 Tali Kaufman, High dimensional expanders and property testing

11:30 15-minute break

11:45 Dorit Aharonov, quantum codes and local testability

12:30 lunch break

2:00 Alex Lubotzky: Ramanujan complexes

2:50 coffee break

3:15 Lior Eldar: Open questions about quantum locally testable codes and quantum entanglement

3:45 Guy Kindler: Direct sum testing and relations to simplicial complexes (based on David, Dinur, Goldenberg, Kindler, and Shinkar, 2014)

4:15-5 free discussion, fruit and coffee


Influence, Threshold, and Noise



My dear friend Itai Benjamini told me that he won’t be able to make it to my Tuesday talk on influence, threshold, and noise, and asked if I already have the slides. So it occurred to me that perhaps I can practice the lecture on you, my readers, not just with the slides (here they are) but also with roughly what I plan to say, some additional information, and some pedagogical hesitations. Of course, remarks can be very helpful.

I can also briefly report that there are plenty of exciting things happening around that I would love to report on – hopefully later in my travel-free summer. One more thing: while chatting with Yuval Rabani and Daniel Spielman I realized that there are various exciting things happening in algorithms (and not reported so much in blogs). Much progress has been made on basic questions: TSP, bin packing, flows and bipartite matching, market equilibria, and k-servers, to mention a few, and there are also new directions and methods. I am happy to announce that Yuval kindly agreed to write an algorithmic column here from time to time, and Daniel is considering contributing a guest post as well.

The second AMS-IMU meeting

Since the early 70s, I have been a devoted participant in the annual meetings of the Israel Mathematical Union (IMU), and this year we will have the second joint meeting with the American Mathematical Society (AMS). Here is the program. There are many exciting lectures. Let me mention that Eran Nevo, this year’s Erdős prize winner, will give a lecture about the g-conjecture. Congratulations, Eran! Among the 22 exciting special sessions there are a few related to combinatorics, and even one organized by me, on Wednesday and Thursday.

Contact person: Gil Kalai,
TAU, Dan David building, Room 103
 Wed, 10:50-11:30 Van H. Vu (Yale University) Real roots of random polynomials (abstract)
Wed, 11:40-12:20 Oriol Serra (Universitat Politecnica de Catalunya, Barcelona)  Arithmetic Removal Lemmas (abstract)
 Wed, 12:30-13:10 Tali Kaufman (Bar-Ilan University)  Bounded degree high dimensional expanders (abstract)
 Wed, 16:00-16:40 Rom Pinchasi (Technion)  On the union of arithmetic progressions (abstract)
Wed, 16:50-17:30  Isabella Novik (University of Washington, Seattle) Face numbers of balanced spheres, manifolds, and pseudomanifolds (abstract)
 Wed, 17:40-18:20 Edward Scheinerman (Johns Hopkins University, Baltimore) On Vertex, Edge, and Vertex-Edge Random Graphs (abstract)
 Thu, 9:20-10:00 Yael Tauman Kalai (MSR, New England) The Evolution of Proofs in Computer Science (abstract)
 Thu, 10:10-10:50 Irit Dinur (Weizmann Institute) Lifting locally consistent solutions to global solutions (abstract)
 Thu, 11:00-11:40 Benny Sudakov (ETH, Zurich) The minimum number of nonnegative edges in hypergraphs (abstract)


And now for my own lecture.

Influence, Threshold, and Noise:

Continue reading

Navier-Stokes Fluid Computers


Smart fluid

Terry Tao posted a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper Finite time blowup for an averaged three-dimensional Navier-Stokes equation.

The paper proves a remarkable negative answer to the regularity conjecture for a certain variant of the NS equations; more precisely, the main theorem demonstrates finite-time blowup for an averaged Navier-Stokes equation. (This already suffices to show that certain approaches for a positive answer to the real problem are not viable.) The introduction ends with the following words.

“This suggests an ambitious (but not obviously impossible) program (in both senses of the word) to achieve the same effect for the true Navier-Stokes equations, thus obtaining a negative answer to Conjecture 1.1 (the regularity conjecture for the 3D NS equation)… Somewhat analogously to how a quantum computer can be constructed from the laws of quantum mechanics [here Tao links to Benioff’s 1982 paper: “Quantum mechanical Hamiltonian models of Turing machines”], or a Turing machine can be constructed from cellular automata such as “Conway’s Game of Life”, one could hope to design logic gates entirely out of ideal fluid (perhaps by using suitably shaped vortex sheets to simulate the various types of physical materials one would use in a mechanical computer). If these gates were sufficiently “Turing complete”, and also “noise-tolerant”, one could then hope to combine enough of these gates together to “program” a von Neumann machine consisting of ideal fluid that, when it runs, behaves qualitatively like the blowup solution used to establish Theorem 1.4 [the paper’s main theorem]. Note that such replicators, as well as the related concept of a universal constructor, have been built within cellular automata such as the “Game of Life.”

Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program are of a “software engineering” nature, and would be in principle achievable, even if the details could be extremely complicated in practice. The main mathematical difficulty in executing this “fluid computing” program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties. In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful.” (Emphasis mine.)

Interesting idea!

And what Tao does goes well beyond an idea; he essentially implements this program for a close relative of the NS equation! I am not sure whether universal computing is established for these systems, but the proof of the finite-time blowup theorem certainly uses some computational-looking gadgets, and also, as Terry explains, some form of fault-tolerance.

Somewhat related ideas (unsupported by any results, of course) appeared in the seventh post, “Quantum repetition”, of my debate with Aram Harrow on quantum computing. (See, e.g., this remark, and this one, and this one.) The thread also contains interesting links, e.g., to Andy Yao’s paper “Classical physics and the Church-Turing Thesis.” In addition to the interesting question:

Does the NS equation in three dimensions support universal (classical) computation?

we can also ask: what about two dimensions?

Can NS equations in two dimensions be approximated at any scale by bounded-depth circuits?

One general question suggested there was the following: “It can be of interest (and perhaps harder compared to the quantum case) to try to describe classical evolutions that do not enable/hide fault tolerance and (long) computation.”

Another interesting comment by Arie Israel is: “I was surprised to learn that experimental fluid mechanics people had thought of this analogy before. Apparently the key name is ‘Fluidics’ and those ideas date back at least to the sixties.”


Update: Here is the first paragraph from a nice article by Erica Klarreich entitled A Fluid New Path in Grand Math Challenge about this development in Quanta Magazine:

In Dr. Seuss’s book “The Cat in the Hat Comes Back,” the Cat makes a stain he can’t clean up, so he calls upon the help of Little Cat A, a smaller, perfect replica of the Cat who has been hiding under the Cat’s hat. Little Cat A then calls forth Little Cat B, an even smaller replica hidden under Little Cat A’s hat. Each cat in turn lifts his hat to reveal a smaller cat who possesses all the energy and good cheer of the original Cat, just crammed into a tinier package. Finally, Little Cat Z, who is too small to see, unleashes a VOOM like a giant explosion of energy, and the stain disappears.

And here is a follow-up post on Tao’s blog (and a few more: II, III), and a post on Shtetl-Optimized.

The flip side

Update (June 14): It is worth noting that while the purpose of Tao’s program is to show finite-time blowup of the 3D Navier-Stokes equations, these lines of ideas (as is often the case) can potentially be useful also toward a positive solution of the regularity conjecture. Specifically, one can try to show that the 3D Navier-Stokes equations do not support universal classical computation, and, even more specifically, do not support classical fault-tolerance and error correction. Also here some analogy with quantum computation can be useful: it is expected that for adiabatic processes computation requires a “spectral gap”, and that gapped evolutions with local Hamiltonians support only bounded-depth computation. Something analogous may apply to NS equations in bounded dimensions.

There are many caveats, of course: the quantum results are not proved for D>1, the NS equations are non-linear, which weakens the analogy, and showing that the evolution does not support computation does not imply, as far as we know, regularity.

Three more remarks: 1) On the technical level, an important relevant technical tool for the results on gapped systems with local Hamiltonians is the Lieb-Robinson inequality (see, e.g., this review paper). 2) For classical evolutions, a repetition mechanism (or the “majority function”) seems crucial for robust computation, and it will be interesting specifically to test whether 3D Navier-Stokes supports it. 3) If computation is not possible beyond bounded depth, this fact may lead to additional conserved quantities for NS, beyond the classical ones. (One more, June 28): It looks to me that the crucial question is whether the NS equations support only bounded computation or not, so this distinction captures places where circuit complexity gives clear mathematical distinctions.

NatiFest is Coming


The conference poster, as designed by Rotem Linial

A conference celebrating Nati Linial’s 60th birthday will take place in Jerusalem on December 16-18. Here is the conference’s web-page. To celebrate the event, I will reblog my very early 2008 post “Nati’s influence”, which was also the title of my lecture in the workshop celebrating Nati’s 50th birthday.

Nati’s Influence

When do we say that one event causes another? Causality is a topic of great interest in statistics, physics, philosophy, law, economics, and many other places. Now, if causality is not complicated enough, we can ask what influence one event has on another. Michael Ben-Or and Nati Linial wrote a paper in 1985 where they studied the notion of influence in the context of collective coin flipping. The title of the post refers also to Nati’s influence on my work, since he got me and Jeff Kahn interested in a conjecture from this paper.


The word “influence” (dating back, according to the Merriam-Webster dictionary, to the 14th century) is close to the word “fluid”. The original definition of influence is: “an ethereal fluid held to flow from the stars and to affect the actions of humans.” The modern meaning (according to Wiktionary) is: “The power to affect, control or manipulate something or someone.”

Ben-Or and Linial’s definition of influence

Collective coin flipping refers to a situation where n processors or agents wish to agree on a common random bit. Ben-Or and Linial considered very general protocols for reaching a single random bit, and also studied the simple case where the collective random bit is described by a Boolean function f(x_1,x_2,\dots,x_n) of n bits, one contributed by every agent. If all agents act appropriately the collective bit will be ‘1’ with probability 1/2. The purpose of collective coin flipping is to create a random bit R which is immune, as much as possible, against attempts by one or more agents to bias it towards ‘1’ or ‘0’. Continue reading

Analysis of Boolean Functions – Week 7

Lecture 11

The Cap Set problem

We presented Meshulam’s bound 3^n/n for the maximum number of elements in a subset A of (\mathbb{Z}/3\mathbb{Z})^n not containing a triple x,y,z of distinct elements whose sum is 0.

The theorem is analogous to Roth’s theorem for 3-term arithmetic progressions and, in fact, it is a sort of purified analogue of Roth’s proof, as some difficulties over the integers are not present here. There are two ingredients in the proof: one can be referred to as the “Hardy-Littlewood circle method” and the other is the “density increasing” argument.

We first talked about the density-increasing method and showed how KKL’s theorem for the influence of sets follows from KKL’s theorem for the maximum individual influence. I mentioned what is known about influences of large sets and what is still open. (I will devote a separate post to this topic.)

Then we went over Meshulam’s proof in full detail. A good place to see a detailed sketch of the proof is this post on Gowers’ blog.

Let me copy Tim’s sketch over here:

Sketch of proof (from Gowers’s blog).

Next, here is a brief sketch of the Roth/Meshulam argument. I am giving it not so much for the benefit of people who have never seen it before, but because I shall need to refer to it. Recall that the Fourier transform of a function f:\mathbb{F}_3^n\to\mathbb{C} is defined by the formula

\hat{f}(r)=\mathbb{E}_xf(x)\omega^{r.x},
where \mathbb{E} is short for 3^{-n}\sum, \omega stands for \exp(2\pi i/3) and r.x is short for \sum_ir_ix_i. Now consider

\mathbb{E}_{x+y+z=0}f(x)f(y)f(z).
(Here \mathbb{E} stands for 3^{-2n}\sum, since there are 3^{2n} solutions of x+y+z=0.) By the convolution identity and the inversion formula, this is equal to \sum_r\hat{f}(r)^3.

Now let f be the characteristic function of a subset A\subset\mathbb{F}_3^n of density \delta. Then \hat{f}(0)=\delta. Therefore, if A contains no solutions of x+y+z=0 (apart from degenerate ones — I’ll ignore that slight qualification for the purposes of this sketch as it makes the argument slightly less neat without affecting its substance) we may deduce that

\sum_{r\ne 0}|\hat{f}(r)|^3\geq\delta^3.

Now Parseval’s identity tells us that

\sum_r|\hat{f}(r)|^2=\mathbb{E}_x|f(x)|^2=\delta,
from which it follows that \max_{r\ne 0}|\hat{f}(r)|\geq\delta^2.

Recall that \hat{f}(r)=\mathbb{E}_xf(x)\omega^{r.x}. The function x\mapsto\omega^{r.x} is constant on each of the three hyperplanes r.x=b (here I interpret r.x as an element of \mathbb{F}_3). From this it is easy to show that there is a hyperplane H such that \mathbb{E}_{x\in H}f(x)\geq\delta+c\delta^2 for some absolute constant c. (If you can’t be bothered to do the calculation, the basic point to take away is that if \hat{f}(r)\geq\alpha then there is a hyperplane perpendicular to r on which A has density at least \delta+c\alpha, where c is an absolute constant. The converse holds too, though you recover the original bound for the Fourier coefficient only up to an absolute constant, so non-trivial Fourier coefficients and density increases on hyperplanes are essentially the same thing in this context.)

Thus, if A contains no arithmetic progression of length 3, there is a hyperplane inside which the density of A is at least \delta+c\delta^2. If we iterate this argument 1/c\delta times, then we can double the (relative) density of A. If we iterate it another 1/2c\delta times, we can double it again, and so on. The number of iterations is at most 2/c\delta, so by that time there must be an arithmetic progression of length 3. This tells us that we need lose only 2/c\delta dimensions, so for the argument to work we need n\geq 2/c\delta, or equivalently \delta\geq C/n.
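The whole argument hinges on the identity (number of solutions of x+y+z=0 in A) = 3^{2n}\sum_r\hat{f}(r)^3. Here is a small Python sketch (my own illustration) verifying this identity numerically for a random subset A of \mathbb{F}_3^n:

import cmath, random
from itertools import product

n = 4
omega = cmath.exp(2j * cmath.pi / 3)
points = list(product(range(3), repeat=n))
A = set(random.sample(points, 20))
f = {x: float(x in A) for x in points}

# Direct count of solutions of x + y + z = 0 with x, y, z in A
direct = sum(1 for x in A for y in A
             if tuple((-a - b) % 3 for a, b in zip(x, y)) in A)

# Fourier side: fhat(r) = E_x f(x) omega^{r.x}
def fhat(r):
    return sum(f[x] * omega ** sum(ri * xi for ri, xi in zip(r, x))
               for x in points) / 3 ** n

fourier = 3 ** (2 * n) * sum(fhat(r) ** 3 for r in points)
print(direct, round(fourier.real))  # the two counts agree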


Lecture 12

Error-Correcting Codes

We discussed error-correcting codes. A binary code C is simply a subset of the discrete n-dimensional cube. This is a familiar object, but in coding theory we ask different questions about it. A code is linear if it forms a vector subspace of (\mathbb{Z}/2\mathbb{Z})^n. The minimal distance of a code is the minimum Hamming distance between two distinct elements; in the case of linear codes it is simply the minimum weight of a non-zero element of the code. We mentioned codes over larger alphabets, spherical codes, and even codes in more general metric spaces. Error-correcting codes are among the most glorious applications of mathematics and their theory is related to many topics in pure mathematics and theoretical computer science.
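As a toy illustration of these definitions, here is a Python sketch (my own) computing the minimal distance of the [7,4] Hamming code, mentioned below, by enumerating all sixteen codewords; G is one standard choice of generator matrix:

from itertools import product

G = [[1, 0, 0, 0, 0, 1, 1],   # generator matrix of the [7,4] Hamming code
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]

def min_distance(G):
    # For a linear code the minimal distance equals the minimum Hamming
    # weight of a non-zero codeword; enumeration is fine for small codes.
    best = len(G[0])
    for msg in product([0, 1], repeat=len(G)):
        if any(msg):
            word = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
            best = min(best, sum(word))
    return best

print(min_distance(G))  # 3, so the Hamming code corrects one error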

1) An extremal problem for codes: What is the maximum size of a binary code of length n with minimal distance d? We mentioned the volume (or Hamming) upper bound and the Gilbert-Varshamov lower bound. We concentrated on the case of codes of positive rate.

2) Examples of codes: We mentioned the Hamming code and the Hadamard code and considered some of their basic properties. Then we mentioned the long code, which is very important in the study of hardness of computation.

3) Linearity testing: Linearity testing is closely related to the Hadamard code. We described the Blum-Luby-Rubinfeld linearity test and analyzed it. This is very similar to the Fourier-theoretic formula and argument we saw last time for the cap-set problem.
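Here is a minimal Python sketch of the BLR test (my own illustration): pick x and y uniformly at random and check that f(x)+f(y)=f(x+y) over \mathbb{Z}/2\mathbb{Z}. Linear functions always pass, while the analysis we did shows that functions far from linear fail with noticeable probability.

import random

def blr_pass_rate(f, n, rounds=10000):
    # Estimate the probability that f passes one round of the BLR test
    passes = 0
    for _ in range(rounds):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        z = [a ^ b for a, b in zip(x, y)]  # x + y over Z/2Z
        passes += (f(x) ^ f(y)) == f(z)
    return passes / rounds

linear = lambda x: x[0] ^ x[2]         # a genuinely linear function
quad = lambda x: (x[0] & x[1]) ^ x[2]  # not linear
print(blr_pass_rate(linear, 4))  # 1.0
print(blr_pass_rate(quad, 4))    # about 0.62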

We started to describe Delsarte’s linear programming method, to be continued next week.