New Isoperimetric Results for Testing Monotonicity

[Photo: Muli, Dor and Subhash, Jerusalem, May 21, 2015.]

[Photos: Michel Talagrand and Gregory Margulis.]

Property testing

In this post I will tell you about a new paper by Subhash Khot, Dor Minzer and Muli Safra entitled: On Monotonicity Testing and Boolean Isoperimetric type Theorems. This remarkable paper gives a definitive answer to one of the main open problems on property testing, proves some wonderful new isoperimetric inequalities, and sheds some light on the still mysterious property of monotonicity.

Property testing is an exciting area which lies between mathematics and the theory of computing. (It is closely related to PCPs and to some modern aspects of error-correcting codes.) Here is a post about it on Terry Tao’s blog.

Suppose that you have an unknown graph G and you want to distinguish between two possibilities

1) The graph G contains a triangle.

2) We can remove 1% of the edges of G so that no triangle remains.

It turns out that by looking at a bounded number of edges drawn at random we can either demonstrate that G contains a triangle or demonstrate that with high probability the second possibility holds! Moreover, this is a deep mathematical result.
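To make the sampling idea concrete, here is a minimal Python sketch of such a one-sided tester. (The adjacency-set representation, the vertex count n, and the number of samples are illustrative choices of mine; the sample size that the theory actually guarantees is a constant depending only on the 1% parameter, though an enormous one.)

    import random

    def triangle_tester(adj, n, samples=10000):
        # adj: dict mapping each vertex in range(n) to its set of neighbours.
        # Sample random vertex triples; a triangle, once found, is a certificate
        # that G contains one.  If none is found, G is declared close to
        # triangle-free, which is correct with high probability when G is far.
        for _ in range(samples):
            u, v, w = random.sample(range(n), 3)
            if v in adj[u] and w in adj[u] and w in adj[v]:
                return "contains a triangle"
        return "probably close to triangle-free"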

Property testing resembles election polls a little (and, more generally, statistical hypothesis testing): we can estimate the outcome of an election with good accuracy and high probability by probing the votes of a bounded number of voters.

Testing monotonicity

Given a Boolean function f, what is the smallest number of queries needed in order to be able to say that either

1) The function f is not monotone, or

2) With high probability f is at most ε-far from monotone, namely, it can be made monotone by changing its value on at most \epsilon 2^n points.

We will denote by ε(f) the normalized Hamming distance of a Boolean function f to the set of monotone functions.

A brief incomplete history of monotone testing

1) Oded Goldreich, Shafi Goldwasser, Eric Lehman, Dana Ron, and Alex Samorodnitsky. Testing monotonicity. Combinatorica, 20(3):301–337, 2000.

This paper considered the following tester: choose x and y at random such that y > x, and test whether f(y) < f(x).

Using the basic edge-isoperimetric inequality it was proved that O(n) queries suffice.
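Here is a small Python sketch of a pair tester of this kind. (The way comparable pairs are sampled below is just one natural illustrative choice, not the exact distribution analyzed in the paper.)

    import random

    def pair_tester(f, n, queries):
        # f: a 0/1-valued function on n-bit tuples.  Repeatedly sample a
        # comparable pair x <= y and reject on any violation f(x) > f(y).
        for _ in range(queries):
            x = tuple(random.randint(0, 1) for _ in range(n))
            # raise a random subset of the 0-coordinates of x to get y >= x
            y = tuple(b or random.randint(0, 1) for b in x)
            if f(x) > f(y):
                return "not monotone"  # a concrete violating pair was found
        return "probably close to monotone"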

It took 15 years, a new tester, and a new isoperimetric inequality to break this record and show that n^{7/8} polylog(n) queries suffice.

2)  Deeparnab Chakrabarty and C. Seshadhri. A o(n) monotonicity tester for boolean functions over the hypercube. In Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages 411–418, 2013.

An improvement to n^{5/6} polylog(n) was achieved in the following paper:

3)  Xi Chen, Rocco Servedio, and Li-Yang Tan. New algorithms and lower bounds for monotonicity testing. In Annual Symposium on Foundations of Computer Science, FOCS ’14, 2014.

The paper by Khot, Minzer and Safra reduces this to the optimal (apart from powers of \log n) \sqrt{n} polylog(n). They use yet another new tester and a new isoperimetric inequality.

I did not talk about the adaptive version of the problem, about lower bounds, or about the dependence on \epsilon. Khot, Minzer and Safra also give (up to powers of \log n) the optimal dependence on \epsilon.

Isoperimetric results: Margulis, Talagrand, and beyond

Margulis’s Theorem:  Let f be a Boolean function on \Omega_n=\{0,1\}^n. Let \mu be the uniform probability measure on \Omega_n.  For x \in \Omega_n define

I(x)=|\{y \in \Omega_n: d(x,y)=1, f(y) \ne f(x)\}| when f(x)=1, and I(x)=0 when f(x)=0.

Now, the edge boundary (or total influence) of f is defined by

I(f)=\mathbb E I(x); it is also denoted by B_e(f).

The vertex boundary of f is defined by  \Gamma (f)=\mu(\{x \in \Omega_n: I(x) \ne 0\})
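As a sanity check on these definitions, here is a brute-force Python computation of I(f) and \Gamma(f) for small n (purely illustrative; f is given as a Python callable on 0/1-tuples):

    from itertools import product

    def boundary_quantities(f, n):
        points = list(product((0, 1), repeat=n))
        def I(x):
            # I(x): number of neighbours on which f differs, counted only
            # at points with f(x) = 1, as in the definition above.
            if f(x) == 0:
                return 0
            nbrs = (x[:k] + (1 - x[k],) + x[k + 1:] for k in range(n))
            return sum(1 for y in nbrs if f(y) != f(x))
        I_f = sum(I(x) for x in points) / 2 ** n                # edge boundary E I(x)
        Gamma_f = sum(1 for x in points if I(x) != 0) / 2 ** n  # vertex boundary
        return I_f, Gamma_f

    # The dictatorship f(x) = x_1 gives I(f) = 1/2 and Gamma(f) = 1/2.
    print(boundary_quantities(lambda x: x[0], 6))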

The classic edge-isoperimetric result asserts that

(1)  I(f) \ge 2 var (f),

Here var(f) is the variance of f; for a Boolean f, var(f)=\mu(f^{-1}(1))\cdot\mu(f^{-1}(0)).

Margulis’ theorem asserts that for some absolute constant C:

(2) I(f)\cdot \Gamma(f)\ge C (var (f))^2.

In fact Margulis proved the result in greater generality for Bernoulli probability measures defined by \mu_p(x)=p^k(1-p)^{n-k} where k=x_1+x_2+\dots +x_n.
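Two quick examples may help calibrate (1) and (2). For the dictatorship f(x)=x_1 we have var(f)=1/4, I(f)=1/2 and \Gamma(f)=1/2, so (1) is tight and I(f)\cdot\Gamma(f)=1/4. For the majority function on n (odd) variables, I(x) is nonzero only on the points with exactly (n+1)/2 ones, a set of measure \Theta(1/\sqrt n), and on these points I(x)=(n+1)/2; hence I(f)=\Theta(\sqrt n) while \Gamma(f)=\Theta(1/\sqrt n). So the edge bound (1) is off by a factor of \Theta(\sqrt n) for majority, but the product I(f)\cdot\Gamma(f)=\Theta(1) shows that (2) is tight for it up to constants.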

Talagrand’s theorem

Talagrand’s theorem:

(3) \mathbb E(\sqrt{I(x)})\ge C'var(f).

Again, Talagrand’s theorem extends to the Bernoulli case. Talagrand’s theorem implies Margulis’ theorem by a simple application of the Cauchy-Schwarz inequality, spelled out below.
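Let me spell out the Cauchy-Schwarz step. Writing \sqrt{I(x)}=\sqrt{I(x)}\cdot 1_{\{I(x)\ne 0\}}, Cauchy-Schwarz gives

\mathbb E(\sqrt{I(x)}) \le (\mathbb E I(x))^{1/2}\cdot(\mu(\{x: I(x)\ne 0\}))^{1/2}=\sqrt{I(f)\cdot\Gamma(f)},

so (3) yields \sqrt{I(f)\cdot\Gamma(f)}\ge C' var(f), which is (2) with C=(C')^2. Note also that for the majority function \mathbb E(\sqrt{I(x)})=\Theta(1/\sqrt n)\cdot\sqrt{(n+1)/2}=\Theta(1), so (3) is tight for majority up to constants.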

Let me digress and mention two other important results in this spirit. Bobkov’s theorem gives a sharp result when we consider the expectation of \sqrt{h(x)} rather than of \sqrt{I(x)}, where h(x) is defined just like I(x) but also for x with f(x)=0.

Talagrand’s Influence-boundary theorem and conjecture

Let I_k(x)=f(x) (f(x)-f(y)), where y is obtained from x by flipping the kth coordinate, and let I_k(f) =\mathbb E I_k(x) be the influence of the kth variable on f. (Sometimes the influence is defined as twice this number.)

Let II(f)=\sum_{k=1}^n (I_k(f))^2. If f is monotone then II(f) is at most var(f), as the short Fourier calculation below shows.
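Here is the short calculation behind this claim (with the convention above for I_k, and with \hat f(S) denoting the Fourier-Walsh coefficients of the 0/1-valued f; the exact constants depend on the normalization). For a monotone f, I_k(f)=\Pr[f(x)=1, f(y)=0]=|\hat f(\{k\})| (with y as above), so

II(f)=\sum_{k=1}^n \hat f(\{k\})^2 \le \sum_{S\ne\emptyset}\hat f(S)^2 = var(f).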

Theorem (Talagrand 1997):

(4) \mathbb E(\sqrt{I(x)}) \ge C' var(f) (\log(e/var(f)))^{1/2-\alpha} (\log(e/II(f)))^{\alpha}.

Talagrand conjectured that \alpha can be taken to be 1/2. Proving it would be great!

Here is a page with links to Talagrand’s preprints (in dvi form). Talagrand’s homepage includes prize questions that he asked over the years.

Some new inequalities crucial for testing monotonicity

Following the first section of Khot, Minzer, and Safra’s paper, I will now briefly describe the isoperimetric inequalities related to monotonicity.

Recall that ε(f) denotes the normalized Hamming distance of a Boolean function f to the set of monotone functions.

Define I^-(x)=0 when f(x)=0, and when f(x)=1

I^-(x) = |\{y \in \Omega_n: d(x,y)=1, f(y)=0, y>x\}|.

Thus, I^-(x) measures local violations of monotonicity. If f is monotone then I^-(x)=0 for every x. Define I^-(f) = \mathbb E I^-(x). This quantity can be referred to as “total negative influence.”
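For concreteness, here is the analogous brute-force computation of I^-(f), again purely illustrative, with f given as a Python callable on 0/1-tuples:

    from itertools import product

    def negative_influence(f, n):
        points = list(product((0, 1), repeat=n))
        def I_minus(x):
            # Count coordinates that can be raised from 0 to 1 so that the value
            # of f drops from 1 to 0 -- a local violation of monotonicity at x.
            if f(x) == 0:
                return 0
            ups = (x[:k] + (1,) + x[k + 1:] for k in range(n) if x[k] == 0)
            return sum(1 for y in ups if f(y) == 0)
        return sum(I_minus(x) for x in points) / 2 ** n

    # Any monotone f gives 0; the anti-dictatorship f(x) = 1 - x_1 gives 1/2.
    print(negative_influence(lambda x: 1 - x[0], 5))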

Goldreich, Goldwasser, Lehman, Ron, Samorodnitsky

The first monotonicity testing result relied on an analog of the basic edge-isoperimetric inequality:

(5) I^-(f) \ge \Omega(\epsilon(f)).

Chakrabarty and Seshadhri

We can define \Gamma^-(f) analogously, as the vertex boundary counting only violations of monotonicity: \Gamma^-(f)=\mu(\{x \in \Omega_n: I^-(x) \ne 0\}).

(6) I^-(f)\cdot\Gamma^-(f)\ge\Omega(\epsilon^2(f))

Khot, Minzer and Safra

(7) \mathbb E\sqrt{I^-(x)}\ge\tilde\Omega(\epsilon(f)).

Here the tilde means “up to polylogarithmic factors in n.” (It will be very interesting to eliminate this little tilde.)

I will end my description here. Chakrabarty and Seshadhri needed a further refinement of their inequality for their theorem, and Khot, Minzer, and Safra needed two further refinements and robust versions of their basic inequality in order to reach their goal. The proofs of the isoperimetric results, and the way they are applied to monotonicity testing, are also very interesting.


One Response to New Isoperimetric Results for Testing Monotonicity

  1. Gil Kalai says:

    One interesting question that I forgot to mention is whether there is a Fourier interpretation of Talagrand’s result. It is known that \mathbb E (h(x)) = 4\sum\hat f^2(S)|S| and that \mathbb E(h^2(x))=4\sum\hat f^2(S)|S|^2. (The constant depends on the precise definitions/normalization and may be a little off.) So is there a Fourier-theoretic interpretation for the expectation of the square root of I(x) or h(x)? And is there a Fourier interpretation of Talagrand’s or Margulis’ results? (In percolation language we ask for a connection between moments of the number of pivotals and moments of |S| for the Fourier distribution.)
