Next Week in Jerusalem: Special Day on Quantum PCP, Quantum Codes, Simplicial Complexes and Locally Testable Codes

Special Quantum PCP and/or Quantum Codes: Simplicial Complexes and Locally Testable Codes Day

24 Jul 2014 – 09:30 to 17:00

room B-220, 2nd floor, Rothberg B Building

On Thursday, the 24th of July, we will host an SC-LTC (simplicial complexes and classical and quantum locally testable codes) day at the Hebrew University, Rothberg building B, room 202 (second floor), in the Givat Ram campus. Please join us; we are hoping for a fruitful and enjoyable day, with lots of interactions. Coffee and refreshments will be provided throughout the day, as well as free “tickets” for lunch on campus.
There is no registration fee, but please email dorit.aharonov@gmail.com, preferably by next Tuesday, if there is a reasonable probability that you will attend, so that we have some estimate of the number of people for food planning.

Program: SC-LTC day – simplicial complexes and locally testable classical and quantum codes – Rothberg building B202
9:00 gathering: coffee and refreshments

9:30 Irit Dinur: Locally testable codes, a bird’s eye view

10:15 coffee break

10:45 Tali Kaufman, High dimensional expanders and property testing

11:30 15-minute break

11:45 Dorit Aharonov, quantum codes and local testability

12:30 lunch break

2:00 Alex Lubotzky: Ramanujan complexes

2:50 coffee break

3:15 Lior Eldar: Open questions about quantum locally testable codes and quantum entanglement

3:45 Guy Kindler: direct sum testing and relations to simplicial complexes (based on David, Dinur, Goldenberg, Kindler, and Shinkar, 2014)

4:15-5:00 free discussion, fruit and coffee

 

Influence, Threshold, and Noise

 


My dear friend Itai Benjamini told me that he won’t be able to make it to my Tuesday talk on influence, threshold, and noise, and asked if I already have the slides. So it occurred to me that perhaps I can practice the lecture on you, my readers, not just with the slides (here they are) but also with roughly what I plan to say, some additional information, and some pedagogical hesitations. Of course, remarks can be very helpful.

I can also briefly report that there are plenty of exciting things happening around that I would love to report on – hopefully later in my travel-free summer. One more thing: while chatting with Yuval Rabani and Daniel Spielman I realized that there are various exciting things happening in algorithms (and not reported so much in blogs). Much progress has been made on basic questions: TSP, bin packing, flows and bipartite matching, market equilibria, and k-servers, to mention a few, and also on new directions and methods. I am happy to announce that Yuval kindly agreed to write an algorithmic column here from time to time, and Daniel is considering contributing a guest post as well.

The second AMS-IMU meeting

Since the early 70s, I have been a devoted participant in the annual meetings of the Israeli Mathematical Union (IMU), and this year we will have the second joint meeting with the American Mathematical Society (AMS). Here is the program. There are many exciting lectures. Let me mention that Eran Nevo, this year’s Erdős Prize winner, will give a lecture about the g-conjecture. Congratulations, Eran! Among the 22 exciting special sessions there are a few related to combinatorics, and even one organized by me on Wednesday and Thursday.

Combinatorics
Contact person: Gil Kalai, gil.kalai@gmail.com
TAU, Dan David building, Room 103
Wed, 10:50-11:30 Van H. Vu (Yale University) Real roots of random polynomials (abstract)
Wed, 11:40-12:20 Oriol Serra (Universitat Politecnica de Catalunya, Barcelona) Arithmetic Removal Lemmas (abstract)
Wed, 12:30-13:10 Tali Kaufman (Bar-Ilan University) Bounded degree high dimensional expanders (abstract)
Wed, 16:00-16:40 Rom Pinchasi (Technion) On the union of arithmetic progressions (abstract)
Wed, 16:50-17:30 Isabella Novik (University of Washington, Seattle) Face numbers of balanced spheres, manifolds, and pseudomanifolds (abstract)
Wed, 17:40-18:20 Edward Scheinerman (Johns Hopkins University, Baltimore) On Vertex, Edge, and Vertex-Edge Random Graphs (abstract)
Thu, 9:20-10:00 Yael Tauman Kalai (MSR, New England) The Evolution of Proofs in Computer Science (abstract)
Thu, 10:10-10:50 Irit Dinur (Weizmann Institute) Lifting locally consistent solutions to global solutions (abstract)
Thu, 11:00-11:40 Benny Sudakov (ETH, Zurich) The minimum number of nonnegative edges in hypergraphs (abstract)

 

And now for my own lecture.

Influence, Threshold, and Noise:

Continue reading

Navier-Stokes Fluid Computers


Smart fluid

Terry Tao posted a very intriguing post on the Navier-Stokes equation, based on a recently uploaded paper Finite time blowup for an averaged three-dimensional Navier-Stokes equation.

The paper proves a remarkable negative answer to the regularity conjecture for a certain variant of the NS equations: more precisely, the main theorem demonstrates finite-time blowup for an averaged Navier-Stokes equation. (This already suffices to show that certain approaches to a positive answer for the real problem are not viable.) The introduction ends with the following words.

“This suggests an ambitious (but not obviously impossible) program (in both senses of the word) to achieve the same effect for the true Navier-Stokes equations, thus obtaining a negative answer to Conjecture 1.1 (the regularity conjecture for the 3D NS equation)… Somewhat analogously to how a quantum computer can be constructed from the laws of quantum mechanics [here Tao links to Benioff’s 1982 paper: “Quantum mechanical Hamiltonian models of Turing machines”], or a Turing machine can be constructed from cellular automata such as “Conway’s Game of Life”, one could hope to design logic gates entirely out of ideal fluid (perhaps by using suitably shaped vortex sheets to simulate the various types of physical materials one would use in a mechanical computer). If these gates were sufficiently “Turing complete”, and also “noise-tolerant”, one could then hope to combine enough of these gates together to “program” a von Neumann machine consisting of ideal fluid that, when it runs, behaves qualitatively like the blowup solution used to establish Theorem 1.4 [the paper’s main theorem]. Note that such replicators, as well as the related concept of a universal constructor, have been built within cellular automata such as the “Game of Life.”

Once enough logic gates of ideal fluid are constructed, it seems that the main difficulties in executing the above program are of a “software engineering” nature, and would be in principle achievable, even if the details could be extremely complicated in practice. The main mathematical difficulty in executing this “fluid computing” program would thus be to arrive at (and rigorously certify) a design for logical gates of inviscid fluid that has some good noise tolerance properties. In this regard, ideas from quantum computing (which faces a unitarity constraint somewhat analogous to the energy conservation constraint for ideal fluids, albeit with the key difference of having a linear evolution rather than a nonlinear one) may prove to be useful. (Emphasis mine.)

Interesting idea!

And what Tao does goes well beyond an idea: he essentially implements this program for a close relative of the NS equations! I am not sure if universal computing is established for these systems, but the proof of the finite-time blowup theorem certainly uses some computational-looking gadgets, and also, as Terry explains, some form of fault-tolerance.

Somewhat related ideas (unsupported by any results, of course) appeared in the seventh post, “Quantum repetition,” of my debate with Aram Harrow on quantum computing. (See, e.g., this remark, and this one, and this one.) The thread also contains interesting links, e.g. to Andy Yao’s paper “Classical physics and the Church-Turing Thesis.” In addition to the interesting question:

Does the NS equation in three dimensions support universal (classical) computation?

we can also ask: what about two dimensions?

Can NS equations in two dimensions be approximated at any scale by bounded-depth circuits?

One general question suggested there was the following: “It can be of interest (and perhaps harder compared to the quantum case) to try to describe classical evolutions that do not enable/hide fault tolerance and (long) computation.”

Another interesting comment by Arie Israel is: “I was surprised to learn that experimental fluid mechanics people had thought of this analogy before. Apparently the key name is ‘Fluidics’ and those ideas date back at least to the sixties.”


Update: Here is the first paragraph from a nice article by Erica Klarreich, entitled “A Fluid New Path in Grand Math Challenge,” on this development in Quanta Magazine:

In Dr. Seuss’s book “The Cat in the Hat Comes Back,” the Cat makes a stain he can’t clean up, so he calls upon the help of Little Cat A, a smaller, perfect replica of the Cat who has been hiding under the Cat’s hat. Little Cat A then calls forth Little Cat B, an even smaller replica hidden under Little Cat A’s hat. Each cat in turn lifts his hat to reveal a smaller cat who possesses all the energy and good cheer of the original Cat, just crammed into a tinier package. Finally, Little Cat Z, who is too small to see, unleashes a VOOM like a giant explosion of energy, and the stain disappears.

And here is a follow-up post on Tao’s blog (and a few more: II, III), and a post on Shtetl Optimized.

The flip side

Update (June 14): It is worth noting that while the purpose of Tao’s program is to show finite-time blowup of the 3D Navier-Stokes equations, these lines of ideas can (as is often the case) potentially be useful also toward a positive solution of the regularity conjecture. Specifically, one can try to show that the 3D Navier-Stokes equations do not support universal classical computation, and, even more specifically, do not support classical fault-tolerance and error correction. Also here some analogy with quantum computation can be useful: it is expected that for adiabatic processes computation requires a “spectral gap,” and that gapped evolutions with local Hamiltonians support only bounded-depth computation. Something analogous may apply to NS equations in bounded dimensions.

There are many caveats, of course: the quantum results are not proved for D>1, NS equations are non-linear, which weakens the analogy, and showing that the evolution does not support computation does not imply, as far as we know, regularity.

Three more remarks: 1) On the technical level, an important relevant technical tool for the results on gapped systems with local Hamiltonians is the Lieb-Robinson inequality. (See, e.g., this review paper.) 2) For classical evolutions a repetition mechanism (or the “majority function”) seems crucial for robust computation, and it will be interesting specifically to test if 3D Navier-Stokes supports it. 3) If computation is not possible beyond bounded depth, this fact may lead to additional conserved quantities for NS, beyond the classical ones. (One more, June 28): It looks to me that the crucial question is whether the NS equations support only bounded-depth computation or not; this distinction captures a place where circuit complexity gives a clear mathematical distinction.

NatiFest is Coming


The conference poster, as designed by Rotem Linial

A conference celebrating Nati Linial’s 60th birthday will take place in Jerusalem on December 16-18. Here is the conference’s web page. To celebrate the event, I will reblog my very early 2008 post “Nati’s influence,” which was also the title of my lecture at the workshop celebrating Nati’s 50th birthday.

Nati’s Influence

When do we say that one event causes another? Causality is a topic of great interest in statistics, physics, philosophy, law, economics, and many other fields. Now, if causality is not complicated enough, we can ask what influence one event has on another. Michael Ben-Or and Nati Linial wrote a paper in 1985 where they studied the notion of influence in the context of collective coin flipping. The title of the post refers also to Nati’s influence on my work, since he got me and Jeff Kahn interested in a conjecture from this paper.

Influence

The word “influence” (dating back, according to the Merriam-Webster dictionary, to the 14th century) is close to the word “fluid.” The original definition of influence is: “an ethereal fluid held to flow from the stars and to affect the actions of humans.” The modern meaning (according to Wiktionary) is: “The power to affect, control or manipulate something or someone.”

Ben-Or and Linial’s definition of influence

Collective coin flipping refers to a situation where n processors or agents wish to agree on a common random bit. Ben-Or and Linial considered very general protocols for reaching a single random bit, and also studied the simple case where the collective random bit is described by a Boolean function f(x_1,x_2,\dots,x_n) of n bits, one contributed by each agent. If all agents act appropriately, the collective bit will be ‘1’ with probability 1/2. The purpose of collective coin flipping is to create a random bit R which is as immune as possible against attempts by one or more agents to bias it towards ‘1’ or ‘0’. Continue reading
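To make the definition concrete, here is a tiny sketch (my own illustration, not from the Ben-Or–Linial paper) computing the influence of agent k, i.e. the probability that flipping the k-th bit changes the collective bit, for majority on three bits:

```python
from itertools import product

def influence(f, n, k):
    """I_k(f): probability that flipping coordinate k changes f, for a uniform input."""
    changed = 0
    for x in product([0, 1], repeat=n):
        y = list(x)
        y[k] ^= 1
        changed += f(x) != f(tuple(y))
    return changed / 2 ** n

maj3 = lambda x: int(x[0] + x[1] + x[2] >= 2)
print([influence(maj3, 3, k) for k in range(3)])  # [0.5, 0.5, 0.5]
```

So in majority each single agent has influence 1/2: fixing her bit to ‘1’ biases the collective bit from 1/2 up to 3/4.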

Analysis of Boolean Functions – Week 7

Lecture 11

The Cap Set problem

We presented Meshulam’s bound 3^n/n for the maximum number of elements in a subset A of (\mathbb{Z}/3\mathbb{Z})^n not containing a triple x,y,z of distinct elements whose sum is 0.

The theorem is analogous to Roth’s theorem for 3-term arithmetic progressions and, in fact, it is a sort of purified analog of Roth’s proof, as some difficulties over the integers are not present here. There are two ingredients in the proof: one can be referred to as the “Hardy-Littlewood circle method” and the other is the “density increasing” argument.

We first talked about the density-increasing method and showed how KKL’s theorem for the influence of sets follows from KKL’s theorem for the maximum individual influence. I mentioned what is known about influences of large sets and what is still open. (I will devote a separate post to this topic.)

Then we went over Meshulam’s proof in full detail. A good place to see a detailed sketch of the proof is this post on Gowers’s blog.

Let me copy Tim’s sketch over here:

Sketch of proof (from Gowers’s blog).

Next, here is a brief sketch of the Roth/Meshulam argument. I am giving it not so much for the benefit of people who have never seen it before, but because I shall need to refer to it. Recall that the Fourier transform of a function f:\mathbb{F}_3^n\to\mathbb{C} is defined by the formula

\hat{f}(r)=\mathbb{E}f(x)\omega^{r.x},

where \mathbb{E} is short for 3^{-n}\sum, \omega stands for \exp(2\pi i/3) and r.x is short for \sum_ir_ix_i. Now

\mathbb{E}_{x+y+z=0}f(x)f(y)f(z)=f*f*f(0).

(Here \mathbb{E} stands for 3^{-2n}\sum, since there are 3^{2n} solutions of x+y+z=0.) By the convolution identity and the inversion formula, this is equal to \sum_r\hat{f}(r)^3.

Now let f be the characteristic function of a subset A\subset\mathbb{F}_3^n of density \delta. Then \hat{f}(0)=\delta. Therefore, if A contains no solutions of x+y+z=0 (apart from degenerate ones — I’ll ignore that slight qualification for the purposes of this sketch as it makes the argument slightly less neat without affecting its substance) we may deduce that

\sum_{r\ne 0}|\hat{f}(r)|^3\geq\delta^3.

Now Parseval’s identity tells us that

\sum_r|\hat{f}(r)|^2=\mathbb{E}_x|f(x)|^2=\delta,

from which it follows that \max_{r\ne 0}|\hat{f}(r)|\geq\delta^2.

Recall that \hat{f}(r)=\mathbb{E}_xf(x)\omega^{r.x}. The function x\mapsto\omega^{r.x} is constant on each of the three hyperplanes r.x=b (here I interpret r.x as an element of \mathbb{F}_3). From this it is easy to show that there is a hyperplane H such that \mathbb{E}_{x\in H}f(x)\geq\delta+c\delta^2 for some absolute constant c. (If you can’t be bothered to do the calculation, the basic point to take away is that if \hat{f}(r)\geq\alpha then there is a hyperplane perpendicular to r on which A has density at least \delta+c\alpha, where c is an absolute constant. The converse holds too, though you recover the original bound for the Fourier coefficient only up to an absolute constant, so non-trivial Fourier coefficients and density increases on hyperplanes are essentially the same thing in this context.)

Thus, if A contains no arithmetic progression of length 3, there is a hyperplane inside which the density of A is at least \delta+c\delta^2. If we iterate this argument 1/c\delta times, then we can double the (relative) density of A. If we iterate it another 1/2c\delta times, we can double it again, and so on. The number of iterations is at most 2/c\delta, so by that time there must be an arithmetic progression of length 3. This tells us that we need lose only 2/c\delta dimensions, so for the argument to work we need n\geq 2/c\delta, or equivalently \delta\geq C/n.
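As a quick numerical sanity check of the identity \mathbb{E}_{x+y+z=0}f(x)f(y)f(z)=\sum_r\hat{f}(r)^3 (with the normalizations of the sketch above), here is a small sketch of mine over \mathbb{F}_3^2; the random 4-element set A is an arbitrary choice:

```python
import itertools, random
import numpy as np

n, q = 2, 3
omega = np.exp(2j * np.pi / q)
points = list(itertools.product(range(q), repeat=n))
A = set(random.sample(points, 4))                  # an arbitrary subset of F_3^2
f = {x: 1.0 if x in A else 0.0 for x in points}

def fhat(r):
    # \hat f(r) = 3^{-n} \sum_x f(x) \omega^{r.x}
    return sum(f[x] * omega ** sum(ri * xi for ri, xi in zip(r, x))
               for x in points) / q ** n

# E_{x+y+z=0} f(x)f(y)f(z), averaging over the 3^{2n} solutions of x+y+z=0
lhs = sum(f[x] * f[y] * f[tuple((-a - b) % q for a, b in zip(x, y))]
          for x in points for y in points) / q ** (2 * n)
rhs = sum(fhat(r) ** 3 for r in points)            # \sum_r \hat f(r)^3
print(abs(lhs - rhs) < 1e-12)                      # True
```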

 

Lecture 12

Error-Correcting Codes

We discussed error-correcting codes. A binary code C is simply a subset of the discrete n-dimensional cube. This is a familiar object, but in coding theory we ask different questions about it. A code is linear if it forms a linear subspace of (\mathbb{Z}/2\mathbb{Z})^n. The minimal distance of a code is the minimum Hamming distance between two distinct elements, and in the case of linear codes it is simply the minimum weight of a non-zero element of the code. We mentioned codes over larger alphabets, spherical codes, and even codes in more general metric spaces. Error-correcting codes are among the most glorious applications of mathematics, and their theory is related to many topics in pure mathematics and theoretical computer science.

1) An extremal problem for codes: What is the maximum size of a binary code of length n with minimal distance d? We mentioned the volume (or Hamming) upper bound and the Gilbert-Varshamov lower bound. We concentrated on the case of codes of positive rate.

2) Examples of codes: We mentioned the Hamming code and the Hadamard code and considered some of their basic properties. Then we mentioned the long code, which is very important in the study of hardness of computation.
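As a small illustration (my own sketch, not from the lecture), one can build the [7,4] Hamming code from its parity-check matrix and confirm that it has 16 codewords and minimum distance 3:

```python
import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary expansion of j+1.
H = np.array([[(j >> i) & 1 for j in range(1, 8)] for i in range(3)])

# The code is the kernel of H over Z/2Z; enumerate it and take the minimum nonzero weight.
codewords = [c for c in itertools.product([0, 1], repeat=7)
             if not (H.dot(c) % 2).any()]
print(len(codewords))                              # 16 = 2^4 codewords
print(min(sum(c) for c in codewords if any(c)))    # minimum distance 3
```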

3) Linearity testing. Linearity testing is closely related to the Hadamard code. We described the Blum-Luby-Rubinfeld linearity test and analyzed it. This is very similar to the Fourier-theoretic formula and argument we saw last time for the cap-set problem.
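Here is a minimal simulation sketch of the three-query BLR test; the two test functions (one exactly linear, one perturbed at a single point) are hypothetical examples of mine:

```python
import random

def blr_accept_rate(f, n, trials=100000):
    """Fraction of trials in which the BLR check f(x) + f(y) = f(x+y) passes."""
    passed = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [random.randint(0, 1) for _ in range(n)]
        z = [a ^ b for a, b in zip(x, y)]
        passed += (f(x) ^ f(y)) == f(z)
    return passed / trials

n = 5
linear = lambda x: x[0] ^ x[2]                     # a genuinely linear function
perturbed = lambda x: x[0] ^ x[2] ^ (sum(x) == 0)  # flipped on the all-zeros input only
print(blr_accept_rate(linear, n))                  # 1.0
print(blr_accept_rate(perturbed, n))               # visibly below 1
```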

We started to describe Delsarte’s linear programming method, to be continued next week.

Analysis of Boolean Functions – Weeks 5 and 6

Lecture 7

First passage percolation

1) Models of percolation.

We talked about percolation, introduced by Broadbent and Hammersley in 1957. The basic model is a model of random subgraphs of a grid in n-dimensional space. (Other graphs were considered later as well.) Here, a grid is a graph whose vertices have integer coordinates and where two vertices are adjacent if their Euclidean distance is one. Every edge of the grid-graph is taken (or is “open” in the percolation jargon) with the same probability p, independently. We mentioned some basic questions: Is there an infinite component? How many infinite components are there? What is the probability that the origin belongs to such an infinite component, as a function of p?

I mentioned two results: The first is Kesten’s celebrated result that the critical probability for planar percolation is 1/2. The other, by Burton and Keane, is that in very general situations almost surely there is a unique infinite component or none at all. This was a good point to mention a famous conjecture, the dying percolation conjecture (especially in dimension 3), which asserts that at the critical probability there is no infinite component.

We will come back to this basic model of percolation later in the course, but for now we moved on to a related, more recent model.

2) First passage percolation

We talked about first passage percolation, introduced by Hammersley and Welsh in 1965. Again we consider the infinite grid graph, and this time we let the length of every edge be 1 with probability 1/2 and 2 with probability 1/2 (independently). These weights describe a random metric on this infinite graph that we wish to understand. We consider two vertices (0,0) and (v,0) (in high dimension the second entry can stand for a (d-1)-dimensional vector, but we can restrict our attention to d=2) and we let D(x) be the distance between these two vertices. We explained how D is an integer-valued function on a discrete cube with Lipschitz constant 1. The question we want to address is: What is the variance of D?

Why do we study the variance, when we do not know exactly the expectation, you may ask? (I remember Larry Shepp asking this when I talked about it at Bell Labs in the early 90s.) One answer is that we know that the expectation of D is linear, while for the variance we do not know how it behaves. Second, we expect that determining the expectation precisely will depend on the model, while the way the variance grows, and perhaps D’s limiting distribution, will be universal (say, for dimension 2). And third, we do not give up on the expectation either.

Here is what we showed:

1) From the inequality var(D)=\sum_{S\ne \emptyset}\hat D^2(S)\le\sum \hat D^2(S)|S| we derived Kesten’s bound var(D)=O(v).

2) We considered the value s such that \mu(D>s)=t, and showed by the basic inequality above that the variance of D conditioned on D>s is also bounded by v. This corresponds to an exponential tail estimate proved by Kesten.

3) Using hypercontractivity we showed that the variance of D conditioned on D>s is actually bounded above by v/\log(1/t), which corresponds to Talagrand’s sub-Gaussian tail estimate.

4) Almost finally, based on a certain very plausible lemma, we used hypercontractivity to show that most Fourier coefficients of D are above the \log v level, improving the variance upper bound to O(v/\log v).

5) Since the plausible lemma is still open (see this MO question), we showed how we can “shortcut” the lemma and prove the upper bound without it.

The major open question

It is an open question to give an upper bound of v^{1-\epsilon}, or even v^{2/3}, which is the expected answer in dimension two. Michel Ledoux wisely proposes to prove it just for directed percolation in the plane (where all edges are directed up and right) from (0,0) to (v,v), where the edge lengths are Gaussian or Bernoulli.
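For Ledoux’s directed variant, here is a rough Monte Carlo sketch of mine (with Bernoulli edge weights in {1,2}, as in the model above) that samples the passage time by dynamic programming over up/right paths and estimates its variance; it gives numerical hints only, of course, not bounds:

```python
import random

def passage_time(v):
    """Minimum weight of an up/right lattice path from (0,0) to (v,v), i.i.d. weights in {1,2}."""
    INF = float("inf")
    dp = [[INF] * (v + 1) for _ in range(v + 1)]
    dp[0][0] = 0
    for i in range(v + 1):
        for j in range(v + 1):
            # each directed edge is relaxed exactly once, so one weight is drawn per edge
            if i > 0:
                dp[i][j] = min(dp[i][j], dp[i - 1][j] + random.choice((1, 2)))
            if j > 0:
                dp[i][j] = min(dp[i][j], dp[i][j - 1] + random.choice((1, 2)))
    return dp[v][v]

for v in (8, 16, 32):
    samples = [passage_time(v) for _ in range(2000)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    print(v, round(var, 3))
```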

Lecture 8

Three Further Applications of Discrete Fourier Analysis (without hypercontractivity)

The next three topics will use Fourier analysis but not hypercontractivity. We started by talking about them.

1) The cap-set problem, some perspective and a little more extremal combinatorics

We talked about Roth’s theorem, the density Hales-Jewett theorem, and the Erdős-Rado delta-system theorem and conjecture. We mentioned linearity testing.

2) Upper bounds for error-correcting codes

This was a good place to mention (and easily prove) a fundamental property used in both these cases: the Fourier transform of the convolution of two functions f and g is the product of the Fourier transforms of f and of g.
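Here is a quick numerical check of this convolution property on the discrete cube, a sketch of mine using one common normalization, \hat h(S)=2^{-n}\sum_x h(x)W_S(x) and (f*g)(x)=2^{-n}\sum_y f(y)g(x+y) (the identity holds whenever the two normalizations are matched):

```python
import itertools, random

n = 4
cube = list(itertools.product([0, 1], repeat=n))
f = {x: random.random() for x in cube}
g = {x: random.random() for x in cube}

xor = lambda x, y: tuple(a ^ b for a, b in zip(x, y))      # group operation on (Z/2Z)^n
walsh = lambda S, x: (-1) ** sum(x[i] for i in S)          # the Walsh character W_S

def fourier(h, S):
    return sum(h[x] * walsh(S, x) for x in cube) / 2 ** n

def convolve(f, g):
    return {x: sum(f[y] * g[xor(x, y)] for y in cube) / 2 ** n for x in cube}

S = (0, 2)
lhs = fourier(convolve(f, g), S)
rhs = fourier(f, S) * fourier(g, S)
print(abs(lhs - rhs) < 1e-12)                              # True
```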

3) Social choice and Arrow’s theorem

The Fourier-theoretic proof of Arrow’s theorem uses only Parseval’s formula, so we are going to start with that.

Fourier-theoretic proof of Arrow’s theorem and related results.

We talked a little about Condorcet (we will later give a more detailed introduction to social choice). We mentioned Condorcet’s paradox, Condorcet’s Jury Theorem, and the notion of a Condorcet winner.

Next we formulated Arrow’s theorem. Lecture 9 was devoted to a Fourier-theoretic proof of Arrow’s theorem (in the balanced case). You can find it discussed in this blog post by Noam Nisan. Lecture 10 mentioned a few further applications of the Fourier method related to Arrow’s theorem, as well as a simple combinatorial proof of Arrow’s theorem in full generality. For the Fourier proof of Arrow’s theorem we showed that a Boolean function with all its non-zero Fourier coefficients on levels 0 and 1 is constant, a dictatorship, or an anti-dictatorship. This time we formulated the FKN theorem and showed how it implies a stability version of Arrow’s theorem in the neutral case.
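As a brute-force check of the level-0/1 statement (a sketch of mine, for n=3): enumerate all 256 Boolean functions on three variables and keep those with no Fourier weight above level 1; exactly the eight expected functions survive.

```python
import itertools

n = 3
cube = list(itertools.product([0, 1], repeat=n))
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
walsh = lambda S, x: (-1) ** sum(x[i] for i in S)
fhat = lambda f, S: sum(f[x] * walsh(S, x) for x in cube) / 2 ** n

low_degree = []
for values in itertools.product([0, 1], repeat=len(cube)):
    f = dict(zip(cube, values))
    if all(abs(fhat(f, S)) < 1e-9 for S in subsets if len(S) > 1):
        low_degree.append(values)
# two constants, three dictatorships, three anti-dictatorships
print(len(low_degree))  # 8
```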

Real Analysis Introductory Mini-courses at Simons Institute

The Real Analysis ‘Boot Camp’ included three excellent mini-courses.

Inapproximability of Constraint Satisfaction Problems (5 lectures)
Johan Håstad (KTH Royal Institute of Technology)

(Lecture I, Lecture II, Lecture III, Lecture IV, Lecture V)


Unlike more traditional ‘boot camps,’ Johan rewarded answers and questions with chocolates (these are unavailable to the video audience).

Starting from the PCP theorem (which we will take as given), we show how to design and analyze efficient PCPs for NP problems. We describe how an efficient PCP using a small amount of randomness can be turned into an inapproximability result for a maximum constraint satisfaction problem where each constraint corresponds to the acceptance criterion of the PCP. We then discuss how to design efficient PCPs with perfect completeness in some interesting cases, like proving the hardness of approximating satisfiable instances of 3-Sat.

We go on to discuss gadget constructions and how to obtain optimal reductions between approximation problems. We present Chan’s result on how to take products of PCPs to get hardness for very sparse CSPs, and give some preliminary new results using these predicates as a basis for gadget reductions.

Finally we discuss approximation in a different measure, and in particular the following problem: Given a (2k+1)-CNF formula which admits an assignment that satisfies k literals in each clause, is it possible to efficiently find a standard satisfying assignment?

Analytic Methods for Supervised Learning​ (4 lectures)
Adam Klivans (University of Texas, Austin)

(Lecture I, Lecture II, Lecture III, Lecture IV); an additional related lecture by Adam on moment matching polynomials.

In this mini-course we will show how to use tools from analysis and probability (e.g., contraction, surface area and limit theorems) to develop efficient algorithms for supervised learning problems with respect to well-studied probability distributions (e.g., Gaussians). One area of focus will be understanding the minimal assumptions needed for convex relaxations of certain learning problems (thought to be hard in the worst-case) to become tractable.

Introduction to Analysis on the Discrete Cube (4 lectures)
Krzysztof Oleszkiewicz (University of Warsaw)

(Lecture I, Lecture II, Lecture III, Lecture IV) Here are the slides for the lectures, which contain material for 1-2 additional lectures.

The basic notions and ideas of analysis on the discrete cube will be discussed, in an elementary and mostly self-contained exposition. These include the Walsh-Fourier expansion, random walk and its connection to the heat semigroup, hypercontractivity and related functional inequalities, influences, the invariance principle and its application to the Majority is Stablest problem. The mini-course will also contain some other applications and examples, as well as several open questions.

Analysis of Boolean Functions – week 4

Lecture 6

Last week we discussed two applications of the Fourier-Walsh plus hypercontractivity method and in this lecture we will discuss one additional application:

The lecture was based on a 5-page paper by Ehud Friedgut and Jeff Kahn: On the number of copies of one hypergraph in another, Israel Journal of Mathematics 105 (1998), 251-256.

In this application our method has nice but not optimal consequences; another method, applying Shearer’s inequality, gives the optimal result.

1) The question: Given a hypergraph H with k vertices, what is the maximum number of labeled copies of H inside a hypergraph G with \ell edges?

There are cases where the answer is known with great precision. The Kruskal-Katona theorem gives the answer when H is the hypergraph whose edges are all the r-subsets of a vertex set V of size k.

In our study we will fix H and care only about the asymptotic behavior as a function of \ell. We have a simple upper bound of \ell^k, and the question is to identify the correct exponent.

2) Stable sets and fractional stable sets. A stable set S in a hypergraph H (also called an independent set) is a set of vertices such that every edge contains at most one vertex from S. The stable number \alpha(H) of a hypergraph H is the maximum size of a stable set.

Now comes an idea which is very important in graph theory: considering the linear programming relaxation of combinatorial parameters. A fractional stable set is an assignment of nonnegative weights to the vertices, so that the sum of the weights in every edge is at most one. So this is a kind of “fuzzy set”: the membership of a vertex in the set is described by a number in the interval [0,1] rather than by an element of {0,1}. The size of a fractional stable set is the sum of the weights of all vertices. The fractional stable number \alpha^*(H) is the maximum size of a fractional stable set.

3) Cover and fractional cover numbers

We next described covering and fractional covering (of vertices by edges) in hypergraphs, the covering number \rho(H) of a hypergraph H, and the fractional covering number \rho^*(H). Linear programming duality gives that the fractional stable number is equal to the fractional covering number.
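Here is a small sketch verifying \alpha^*(H)=\rho^*(H) on a toy hypergraph by solving both linear programs; the three-edge hypergraph and the use of scipy are my own illustrative choices:

```python
import numpy as np
from scipy.optimize import linprog

edges = [{0, 1, 2}, {1, 3}, {2, 3}]    # a toy hypergraph on vertices {0,1,2,3}
n = 4
M = np.array([[1 if v in e else 0 for v in range(n)] for e in edges])

# Fractional stable number: maximize sum(w) subject to M w <= 1, w >= 0.
stable = linprog(c=-np.ones(n), A_ub=M, b_ub=np.ones(len(edges)), bounds=(0, None))

# Fractional cover number: minimize sum(y) subject to M^T y >= 1, y >= 0.
cover = linprog(c=np.ones(len(edges)), A_ub=-M.T, b_ub=-np.ones(n), bounds=(0, None))

print(-stable.fun, cover.fun)          # both 2.0, as LP duality promises
```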

4) The answer:

Friedgut-Kahn theorem: The number of copies of H in a hypergraph G with \ell edges is O(\ell^{\rho^*(H)}), and this bound is sharp.

The case of graphs was proved (in a different language) by Noga Alon in his M.Sc. thesis, which was Noga’s first publication: On the number of subgraphs of prescribed type of graphs with a given number of edges, Israel J. Math. 38 (1981), 116-130. Part of the challenge was to find the right extension of Alon’s theorem.

5) The lower bound.

Next we explained the nice and simple construction giving the lower bound, which is based on the weights realizing the fractional stable number of H.

6) Bonami’s inequality in a dual form

Our next task was to state a consequence of Bonami’s hypercontractive inequality, which is a direct extension of the Khinchine inequality. Then we showed a weaker upper bound than the actual theorem, based on Bonami’s inequality.

It is an interesting open question to apply harmonic analysis to the general case. (I believe it is tractable.)

7) Traces and Shearer’s lemma

Next we defined the trace of a hypergraph on a subset W of the vertex set V and stated Shearer’s lemma.

8) More about traces and a little more extremal combinatorics

Not having enough time to complete the proof of the Friedgut-Kahn theorem using Shearer’s lemma, we proved the fundamental Sauer-Shelah inequality (see this post) and stated Frankl’s conjecture (see this post, and this one (sec. 3d)).

Lecture 7

We started with the proof of the Friedgut-Kahn bound using Shearer’s lemma. Then we explained the simple connection between influences and traces, and mentioned the connection of Shearer’s lemma (and the Loomis-Whitney theorem) to edge-isoperimetry.

Our next application: first passage percolation. We gave a short introduction to models of percolation and started to discuss our fourth application of the Fourier+hypercontractivity method: an upper bound for the variance of first passage percolation. Here the method gives the best known result, but unlike KKL’s theorem the result is not sharp. We are a third of the way into the proof, so I may write about it next time. The discussion of first passage percolation is based on the paper First Passage Percolation Has Sublinear Distance Variance by Benjamini, Kalai and Schramm.

Analysis of Boolean Functions – Week 3

Lecture 4

In the third week we moved directly to the course’s “punchline”: the use of the Fourier-Walsh expansion of Boolean functions and the use of hypercontractivity.

Before that, we started with a very nice discrete isoperimetric question on a graph which is very much related to the graph of the discrete cube.

Consider the graph G whose vertices are the 0-1 vectors of length n with precisely r ‘1’s, and whose edges connect vertices at Hamming distance two. (Two is the minimal Hamming distance between distinct vertices of this kind.) Given a set A of m vertices, how small can E(A, \bar A) be? (We already talked about intersecting families of sets of constant size – the Erdős-Ko-Rado theorem – and, in general, it is a nice challenge to extend some of the ideas/methods of the course to the constant-weight situation.)

And now for the main part of the lecture.

1) Basic harmonic analysis on the discrete cube. We considered the vector space of real functions on the discrete cube \Omega_n and defined an inner product structure. We also defined the p-th norm for 1\le p\le \infty. Next we defined the Fourier-Walsh functions and showed that they form an orthonormal basis. This leads to the Fourier-Walsh expansion of an arbitrary real function f on the discrete cube, f=\sum_{S\subset[n]}\hat f(S)W_S, and we could easily verify Parseval’s formula.

2) Influence and Fourier. Let f be a real function on \Omega_n with Fourier-Walsh expansion f=\sum \hat f(S)W_S. We showed that I_k(f)=\sum_{S:k \in S}\hat f^2(S). It follows that I(f)=\sum_S\hat f^2(S)|S|. The Fourier-theoretic proof of I(f) ≥ 4t(1-t), where t=μ(f), now follows easily.
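Here is a small numerical check of these two formulas, a sketch of mine for majority on three variables, taking f to be ±1-valued (a convention under which the formulas hold as stated):

```python
import itertools

n = 3
cube = list(itertools.product([0, 1], repeat=n))
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
walsh = lambda S, x: (-1) ** sum(x[i] for i in S)

f = {x: 1 if sum(x) >= 2 else -1 for x in cube}   # +-1-valued majority on three variables
fhat = {S: sum(f[x] * walsh(S, x) for x in cube) / 2 ** n for S in subsets}

def influence(k):
    flip = lambda x: tuple(b ^ (i == k) for i, b in enumerate(x))
    return sum(f[x] != f[flip(x)] for x in cube) / 2 ** n

for k in range(n):
    spectral = sum(fhat[S] ** 2 for S in subsets if k in S)
    print(influence(k), spectral)                 # I_k(f) = sum over S containing k of \hat f(S)^2
print(sum(influence(k) for k in range(n)),
      sum(fhat[S] ** 2 * len(S) for S in subsets))  # I(f) = sum_S \hat f(S)^2 |S|
```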

3) Khinchine, hypercontractivity, and the discrete isoperimetric inequality.

Next we discussed what it would take to prove the better estimate I(f) ≥ K t log(1/t). We stated the Khinchine inequality and explained why it is relevant: for Boolean functions the p-th power of the p-norm does not depend on p (it always equals t). Therefore, if t is small, the p-th norms themselves must be well apart! After spending a few moments on the history of the inequality (as told to me by Ron Blei), we discussed what kind of extension we need, and stated the Bonami-Gross-Beckner inequality. We used Bonami’s inequality to prove the inequality I(f) ≥ K t log(1/t) and briefly talked about what more it gives.

Lecture 5

1) Review and examples. We reviewed what we did in the previous lecture and considered a few examples: dictatorship; the AND function (an opportunity to mention the uncertainty principle); and MAJORITY on three variables. We also mentioned another connection between influences and Fourier-Walsh coefficients: for a monotone (non-decreasing) Boolean function f, I_k(f) = -2\hat f(\{k\}).

2) KKL’s theorem

KKL’s theorem: There is an absolute constant K so that for every Boolean function f, with t=μ(f), there exists k, 1 ≤ k ≤ n such that

I_k(f) \ge K t (1-t) \log n/n.

To prove of KKL’s theorem: we repeat, to a large extent, the steps from Lecture 4 (of course, the proof of KKL’s theorem was where this line of argument came from.) We showed that if all individual influences are below 1/sqrt n than I(f) \ge K t(1-t) \log n.

We mentioned one corollary: for a Boolean function invariant under a transitive group of permutations, all individual influences are equal, and therefore I(f) \ge K t (1-t)\log n.

3) Further problems

In the last part of the lecture we mentioned seven problems regarding influences of variables and KKL’s theorem (and I added two here):

1) What can be said about balanced Boolean functions with small total influence?

2) What can be said about Boolean functions for which I(f) ≤ K t log (1/t), for some constant K, where t=μ(f)?

3) What can be said about the connection between the symmetry group and the minimum total influence?

4) What can be said about Boolean functions (with 1/3 ≤ μ(f) ≤ 2/3, say) for which \max I_k(f) \le K \log n/n?

5) What more can be said about the vector of influences (I_1(f),I_2(f),\dots, I_n(f))?

6)* What is the sharp constant in KKL’s theorem?

7)* What about edge expansions of (small) sets of vertices in general graphs?

8) Under what conditions is I(f) \ge n^\beta for some β > 0?

9) What about the influence of larger sets? In particular, what is the smallest t (as a function of n) such that if \mu(f)=t there is a set S of variables, |S| ≤ 0.3n, with I_S(f) \ge 0.9?

(This post is a short version; I will add details later on.)

Analysis of Boolean functions – week 2

Post on week 1; home page of the course Analysis of Boolean Functions.

Lecture II:

We discussed two important examples that were introduced by Ben-Or and Linial: recursive majority and tribes.

Recursive majority (RM): F_m is a Boolean function with 3^m variables, and F_{m+1}(x,y,z) = F_1(F_m(x),F_m(y),F_m(z)). For the base case we use the majority function F_1(x,y,z)=MAJ(x,y,z).

Tribes: Divide your n variables into s pairwise disjoint sets (“tribes”) of cardinality t. f=1 if for some tribe all variables equal one, and thus f=0 if in every tribe there is a variable with value ‘0’.

We note that this is not an odd function, i.e., it is not symmetric with respect to switching ‘0’ and ‘1’. To have \mu=1/2 we need to set t=\log_2n - \log_2\log_2n+c. We computed the influence of every variable to be C \log n/n. The tribes function is a depth-two formula of linear size, and we briefly discussed what Boolean formulas and Boolean circuits are. (These notions can be found in many places, and also in this post.)
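Here is a small exhaustive sketch of mine checking the tribes computations for s=4 tribes of size t=3; flipping a variable matters exactly when the rest of its tribe is all ones and no other tribe is all ones, which gives the closed forms in the comments (and influence of order \log n/n once t is about \log_2 n - \log_2\log_2 n):

```python
from itertools import product

def tribes(x, s, t):
    """OR of s disjoint ANDs ("tribes") of size t; x is a 0/1 tuple of length s*t."""
    return int(any(all(x[i * t:(i + 1) * t]) for i in range(s)))

s, t = 4, 3
n = s * t
ones = flips = 0
for x in product([0, 1], repeat=n):
    ones += tribes(x, s, t)
    y = (1 - x[0],) + x[1:]                          # flip the first variable
    flips += tribes(x, s, t) != tribes(y, s, t)

print(ones / 2 ** n, 1 - (1 - 2 ** -t) ** s)         # mu(f) = 1 - (1 - 2^{-t})^s
print(flips / 2 ** n,
      2 ** -(t - 1) * (1 - 2 ** -t) ** (s - 1))      # I_1(f) = 2^{-(t-1)}(1 - 2^{-t})^{s-1}
```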

I stated several conjectures and questions that Ben-Or and Linial raised in their 1985 paper:

Conjecture 1: For every balanced Boolean function with n variables there is a variable k whose influence is \Omega (\log n/n).

Conjecture 2: For every balanced Boolean function with n variables there is a set S of n/log n variables whose influence I_S(f) is 1-o(1).

Question 3: To what extent can the bound in Conjecture 2 be improved if the function f is odd? (Namely, f(1-x_1,1-x_2,\dots, 1-x_n)=1-f(x_1,x_2,\dots, x_n).)

Our next theme was discrete isoperimetric results. I noted the connection between total influence and edge expansion and proved the basic isoperimetric inequality: if μ(f)=t then I(f) ≥ 4t(1-t). The proof uses the canonical paths argument.

Lecture III:

We proved, using “compression”, a sharp bound on I(f) as a function of t=μ(f). We made an analogy between compression and Steiner symmetrization – a classic method for proving the classical isoperimetric theorem. We discussed similar results on the vertex boundary and on the Talagrand-Margulis boundary (to be elaborated later in the course).

Then we proved the Harris-Kleitman inequality and showed how to deduce the fact that an intersecting family of subsets of [n], with the property that the family of complements is also intersecting, has at most 2^{n-2} sets.

The next topic is spectral graph theory. We proved the Hoffman bound for the largest size of an independent set in a graph G.

I mentioned graph Laplacians and the spectral bound for expansion (Alon-Milman, Tanner).
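To illustrate the Hoffman bound in its regular-graph form, \alpha(G) \le n(-\lambda_{\min})/(d-\lambda_{\min}), here is a small sketch of mine for the Petersen graph, where the bound is tight:

```python
import numpy as np

# Petersen graph: outer 5-cycle, spokes, inner pentagram; 3-regular on 10 vertices.
n = 10
A = np.zeros((n, n), dtype=int)
for i in range(5):
    for u, v in [(i, (i + 1) % 5), (i, i + 5), (i + 5, 5 + (i + 2) % 5)]:
        A[u, v] = A[v, u] = 1

eigs = np.linalg.eigvalsh(A)                  # sorted in ascending order
d, lam_min = eigs[-1], eigs[0]                # top eigenvalue 3 (= degree), bottom -2
print(n * (-lam_min) / (d - lam_min))         # Hoffman bound 4.0 = alpha(Petersen)
```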

The proofs mentioned above are so lovely that I will add them on this page, but sometime later.

Next week I will introduce harmonic analysis on the discrete cube and give a Fourier-theoretic explanation for the additional log(1/t) factor in the edge isoperimetric inequality.

Important announcement: The real analysis boot camp at the Simons Institute for the Theory of Computing is part of the program in Real Analysis and Computer Science. It is taking place next week, on September 9-13, and has three lecture series. All lecture series are related to the topic of the course, and especially: