A Discussion and a Debate

Heavier-than-air flight of the 21st century?

The very first post on this blog entitled “Combinatorics, Mathematics, Academics, Polemics, …” asked the question “Are mathematical debates possible?” We also had posts devoted to debates and to controversies.

A few days ago, the first post in a discussion between Aram Harrow, a brilliant computer scientist and quantum information researcher (and a decorated debater), and myself on quantum error correction was launched on Dick Lipton and Ken Regan’s big-league blog, Gödel’s Lost Letter and P=NP.

The central question we would like to discuss is:

Are universal quantum computers based on quantum error correction possible?

In preparation for the public posts, Ken, Aram, Dick, and I are having a very fruitful and interesting email discussion on this and related matters, and the post itself has already led to very interesting comments. Ken is doing marvels in editing what we write.

Dick opened the post by talking about perpetual motion machines, which is ingenious because it challenges both sides of the discussion. Perpetual motion turned out to be impossible: will quantum computers enjoy the same fate? On the other hand (and closer to the issue at hand), an argument against quantum mechanics based on the impossibility of perpetual motion, made by none other than Einstein, turned out to be false; are skeptical ideas about quantum computers just as bogus? (The answer could be yes to both questions.) Some people claimed that heavier-than-air flight might be a better analogy. Sure, perhaps it is better.

But, of course, analogies with many human endeavors can be made, and for these endeavors, some went one way, and some went the other way, and for some we don’t know.

Although this event is declared a debate, I would like to think of it as a discussion. On the time scale of such a discussion, what we can hope for is to better understand each other’s positions, and, no less important, to better understand our own positions.  (Maybe I will comment here about some meta aspects of this developing discussion/debate.)

A real debate

A real emerging debate is whether we (scientists) should boycott Elsevier. I tend to be against such an action, and especially against including refereeing papers for journals published by Elsevier as part of the boycott. I liked, generally speaking, Gowers’s critical post on Elsevier, but the winds of war and the associated rhetoric are not to my liking. The universities are quite powerful, and they deal, negotiate and struggle with scientific publishers, and other similar bodies, on a regular basis. I tend to think that the community of scientists should not be part of such struggles and that such involvement will harm the community and science. This is a real debate! But it looks almost over. Many scientists joined the boycott and not many opposing opinions were voiced. It looks like we will have a little war and see some action. Exciting, as ever.

Fractional Sylvester-Gallai

Avi Wigderson was in town and gave a beautiful talk about an extension of the Sylvester-Gallai theorem. Here is a link to the paper: Rank bounds for design matrices with applications to combinatorial geometry and locally correctable codes by Boaz Barak, Zeev Dvir, Avi Wigderson, and Amir Yehudayoff.

Sylvester-Gallai

The Sylvester-Gallai Theorem:  Let X be a finite set of n points in a Euclidean space such that for every two distinct points x,y \in X the line through x and y contains a third point z \in X. Then all points in X are contained in a line.

I heard about this result when I took Benjy Weiss’s mathematics course for high-school students in 1970/1. The Sylvester-Gallai theorem was the last question, marked with (*), in the first week’s homework. In one of the next meetings Benjy listened carefully to our ideas on how to prove it and then explained to us why our attempts to prove it were doomed to fail: what we tried to do relied only on the very basic incidence axioms of Euclidean geometry, but the Sylvester-Gallai theorem does not hold for finite projective planes. (Sylvester conjectured the result in 1893. The first proof was given by Melchior in 1940 and Gallai proved it in 1945.)
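
To make the hypothesis concrete, here is a small brute-force sketch (my own, in Python) that checks whether a finite planar point set satisfies the Sylvester-Gallai condition, namely that every line through two of the points contains a third:

```python
from itertools import combinations

def collinear(p, q, r, tol=1e-9):
    # The triangle pqr is degenerate (zero area) iff the three points are collinear.
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])) < tol

def sylvester_gallai_condition(points):
    """True iff every line through two points of the set contains a third point of the set."""
    for p, q in combinations(points, 2):
        if not any(collinear(p, q, r) for r in points if r != p and r != q):
            return False
    return True

# Collinear points satisfy the condition; a generic non-collinear configuration does not,
# as the theorem predicts.
print(sylvester_gallai_condition([(0, 0), (1, 0), (2, 0), (3, 0)]))   # True
print(sylvester_gallai_condition([(0, 0), (3, 0), (0, 3), (1, 1)]))   # False
```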

My MO question

Before describing the new results let me mention my third-ever MathOverflow question, which was about potential extensions of the Sylvester-Gallai theorem. The question was roughly this:

Suppose that V is an r-dimensional variety embedded into n-dimensional space so that whenever the intersection of a j-dimensional subspace with V is full dimensional, this intersection is “complicated”. Then n cannot be too large.

I will not reproduce the full question here but only a memorable remark made by Greg Kuperberg:

If you claimed that Gil is short for Gilvester (which is a real first name although rare), then you could say that any of your results is the “Gilvester Kalai theorem”. – Greg Kuperberg Nov 24 2009 at 5:13

The result by Barak, Dvir, Wigderson and Yehudayoff

Theorem:  Let X be a finite set of n points in a Euclidean space such that for every point x \in X the number of points y \in X, y \ne x, such that the line through x and y contains another point of X is at least \delta (n-1). Then

\dim (Aff(X))\le 13/\delta^2

Some remarks:

1) The proof: The first ingredient of the proof is a translation of the theorem into a question about ranks of matrices with a certain combinatorial structure. The next thing to observe is that when the nonzero entries of the matrix are all 1’s the claim is simple. The surprising second ingredient of the proof is the use of scaling in order to “tame” the entries of the matrix.

2)  The context – locally correctable codes:  A q-query locally correctable (q,\delta)-code over a field F is a subspace C of F^n so that, given any element \tilde y that disagrees with some y \in C in at most \delta n positions and an index i, 1 \le i \le n, we can recover y_i with probability 3/4 by reading at most q coordinates of \tilde y.  The theorem stated above implies that, for two queries, over the real numbers (and also over the complex numbers), such codes do not exist when n is large (see the sketch following these remarks). Another context where the result is of interest is the hot area of sum-product theorems and related questions in the geometry of incidences.

3) Some open problems: Is there a more detailed structure theorem for configurations of points satisfying the condition of the theorem? Can the result be improved to \dim (Aff(X))=O(1/\delta )? Can a similar result be proved for locally correctable codes with more than two queries? This also translates into an interesting Sylvester-Gallai type question, but it will require, Avi said, new ideas.
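
In contrast with the situation over the real and complex numbers described above, over the field F_2 two-query locally correctable codes do exist; the classic example is the Hadamard code. Here is a minimal sketch of its two-query local correction (my own helper names and illustrative parameters, not taken from the paper):

```python
import random

def hadamard_encode(x):
    """Encode x in F_2^k as the list of inner products <x, a> over all a in F_2^k."""
    k = len(x)
    return [sum(x[j] & (a >> j) & 1 for j in range(k)) % 2 for a in range(2 ** k)]

def locally_correct(word, a, k, trials=15):
    """Recover coordinate a of a (possibly corrupted) Hadamard codeword using 2 queries per trial.
    Since <x,a> = <x,r> + <x, a xor r>, two random positions suffice; take a majority vote."""
    votes = []
    for _ in range(trials):
        r = random.randrange(2 ** k)
        votes.append((word[r] + word[a ^ r]) % 2)
    return max(set(votes), key=votes.count)

k = 8
x = [random.randint(0, 1) for _ in range(k)]
code = hadamard_encode(x)
corrupted = code[:]
for i in random.sample(range(len(code)), len(code) // 20):  # flip ~5% of the positions
    corrupted[i] ^= 1
a = random.randrange(2 ** k)
print(code[a], locally_correct(corrupted, a, k))  # the two values agree with high probability
```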

Ryan O’Donnell: Analysis of Boolean Functions

Ryan O’Donnell has begun writing a book about Fourier analysis of Boolean functions and he is serializing it on a blog entitled Analysis of Boolean Functions.  New sections appear on Mondays, Wednesdays, and Fridays.

Besides covering the basic theory, Ryan intends to describe applications in theoretical computer science and other areas of mathematics, including combinatorics, probability, social choice, and geometry.
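
For readers meeting the subject for the first time, here is a tiny illustrative sketch (mine, in Python, not from Ryan’s book) of the basic object studied: the Fourier expansion of a Boolean function f:\{-1,1\}^n \to \{-1,1\}, with coefficients \hat f(S) = {\bf E}_x[f(x)\prod_{i \in S}x_i].

```python
from itertools import product, combinations
from math import prod

def fourier_coefficients(f, n):
    """Fourier coefficients hat f(S) = E_x[ f(x) * prod_{i in S} x_i ], by direct averaging."""
    points = list(product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            coeffs[S] = sum(f(x) * prod(x[i] for i in S) for x in points) / len(points)
    return coeffs

def majority3(x):
    return 1 if sum(x) > 0 else -1

for S, c in fourier_coefficients(majority3, 3).items():
    if abs(c) > 1e-12:
        print(S, c)
# Majority on 3 bits has coefficient 1/2 on each singleton and -1/2 on the full set:
# Maj3(x) = (x1 + x2 + x3)/2 - x1*x2*x3/2.
```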

Besides being a great place to learn this interesting material, actively participating in Ryan’s blog can make you a hero! Don’t miss this opportunity.

Each chapter of Ryan’s book ends with a “highlight” illustrating the use of Boolean analysis in problems where you might not necessarily expect it. In a post over Computational Complexity Ryan described some of these highlights in order to give a flavor of the contents:

  • Testing linearity (the Blum-Luby-Rubinfeld Theorem)
  • Arrow’s Theorem from Social Choice (and Kalai’s “approximate” version)
  • The Goldreich-Levin Algorithm from cryptography
  • Constant-depth circuits (Linial-Mansour-Nisan’s work)
  • Noise sensitivity of threshold functions (Peres’s Theorem)
  • Pseudorandomness for F_2-polynomials (Viola’s Theorem)
  • NP-hardness of approximately solving linear systems (Hastad’s Theorem)
  • Randomized query complexity of monotone graph properties
  • The (almost-)Polynomial Freiman-Ruzsa Theorem (i.e., Sanders’s Theorem)
  • The Kahn-Kalai-Linial Theorem on influences
  • The Gaussian Isoperimetric Inequality (Bobkov’s proof)
  • Sharp threshold phenomena (Friedgut and Bourgain’s theorems)
  • Majority Is Stablest Theorem
  • Unique Games-hardness from SDP gaps (work of Raghavendra and others)

Cap Sets, Sunflowers, and Matrix Multiplication

This post follows a recent paper On sunflowers and matrix multiplication by Noga Alon, Amir Shpilka, and Christopher Umans (ASU11), which relies on an earlier paper Group-theoretic algorithms for matrix multiplication by Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Christopher Umans (CKSU05), and refers also to a paper by Don Coppersmith and Shmuel Winograd (CW90).

Three famous problems

The Erdos-Rado sunflower conjecture

The Erdos-Rado sunflower (Delta system) theorem and conjecture were already mentioned in this post on extremal set theory.

A sunflower (a.k.a. Delta-system) of size r is a family of sets A_1, A_2, \dots, A_r such that every element that belongs to more than one of the sets belongs to all of them. A basic and simple result of Erdos and Rado asserts that

Erdos-Rado sunflower theorem: There is a function f(k,r) so that every family \cal F of k-sets with more than f(k,r) members contains a sunflower of size r.

One of the most famous open problems in extremal combinatorics is:

The Erdos-Rado conjecture: Prove that f(k,r) \le c_r^k.

Here, c_r is a constant depending on r. This is most interesting already for r=3.
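
For small families the sunflower condition can be checked directly. Here is a minimal brute-force sketch (my own, not from the papers discussed) for sunflowers of size r=3:

```python
from itertools import combinations

def is_sunflower(sets):
    """A family is a sunflower iff every element lying in more than one set lies in all of them,
    i.e. all pairwise intersections equal the common core."""
    core = frozenset.intersection(*sets)
    return all((a & b) == core for a, b in combinations(sets, 2))

def contains_sunflower(family, r=3):
    return any(is_sunflower(triple) for triple in combinations(family, r))

family = [frozenset(s) for s in [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}]]
print(contains_sunflower(family))  # True: {1,2,3}, {1,4,5}, {1,6,7} form a sunflower with core {1}
```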

Three term arithmetic progressions

The cap set problem (three-term arithmetic progressions in (Z/3Z)^n):

The cap set problem was also discussed here quite extensively. (See, e.g., this post.)

Let \Gamma=\{0,1,2\}^n. The cap set problem asks for the maximum number of elements in a subset of \Gamma which contains no arithmetic progression of size three or, alternatively, no three distinct vectors that sum up to 0 (modulo 3). (Such a set is called a cap set.) If A is a cap set of maximum size we can ask how the function h(n)=3^n/|A| behaves. Roy Meshulam proved, using Roth’s argument, that h(n) \ge n. Edel found an example of a cap set of size 2.2^n. So h(n) \le (3/2.2)^n.  The gap is exponential.

The strong cap set conjecture: h(n) \ge (1+\epsilon)^n for some \epsilon >0.
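
For very small n the cap set problem can be solved by brute force, which makes the function h(n) concrete. A minimal backtracking sketch (mine; exhaustive, so only practical for n \le 3):

```python
from itertools import product
from collections import Counter

def max_cap_set(n):
    """Largest subset of (Z/3Z)^n with no three distinct vectors summing to 0 (mod 3).
    Plain backtracking; the maximum sizes for n = 1, 2, 3 are 2, 4, 9."""
    points = list(product(range(3), repeat=n))
    best, chosen = [], []
    forbidden = Counter()   # points that would complete a 3-term progression with two chosen points

    def extend(start):
        nonlocal best
        if len(chosen) > len(best):
            best = chosen[:]
        for i in range(start, len(points)):
            p = points[i]
            if forbidden[p]:
                continue
            new = [tuple((-a - b) % 3 for a, b in zip(x, p)) for x in chosen]
            chosen.append(p)
            forbidden.update(new)
            extend(i + 1)
            forbidden.subtract(new)
            chosen.pop()

    extend(0)
    return best

for n in (1, 2, 3):        # n = 3 already takes a little while in pure Python
    A = max_cap_set(n)
    print(n, len(A), 3 ** n / len(A))   # the last column is h(n) = 3^n / |A|
```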

Of course, the cap set problem is closely related to

Erdos-Turan problem (for r=3): What is the largest size r_3(n) of a subset of {1,2,…,n} without a 3-term arithmetic progression?

Matrix multiplication

Let ω be the smallest real number so that there is an algorithm for multiplying  two n \times n matrices which requires O(n^\omega ) arithmetic operations.

The ω=2 conjecture: ω=2.

Very recent breakthroughs by Andrew Stothers and, subsequently and independently, by Virginia Vassilevska Williams improved the Coppersmith-Winograd bound of ω < 2.376 to 2.374 and 2.373, respectively. (See the discussions over Lipton’s blog (1,2), Shtetl-Optimized, and Computational Complexity.)
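
For context, the first improvement over the trivial bound ω ≤ 3 was Strassen’s 1969 algorithm, which multiplies 2×2 block matrices with 7 multiplications instead of 8 and gives ω ≤ \log_2 7 ≈ 2.81. Here is a minimal (unoptimized) sketch for matrices whose size is a power of two:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's algorithm for n x n matrices with n a power of two.
    Seven recursive multiplications per level give O(n^{log2 7}) arithmetic operations."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])

A, B = np.random.rand(128, 128), np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```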

It turns out that these three conjectures are related. (The connection between matrix multiplication and the Erdos-Turan problem is fairly old, but I am not sure what even a drastic improvement of Behrend’s lower bound would say about \omega.)

Three combinatorial conjectures that imply ω=2.

Remarkably, an affirmative answer to the ω=2 conjecture would follow from each one of three combinatorial conjectures. One conjecture goes back to CW90 and two were described in CKSU05. I will not present the precise formulations, in order to encourage the readers to look at the original papers. (Maybe I will add the formulations later.)

The no disjoint equivoluminous subsets conjecture (CW90).

The Strong UPS conjecture (CKSU05).

Theorem: Conjecture CW90 implies the strong UPS conjecture.

CKSU’s two-family conjecture (CKSU05).

Relations between these problems

Here are some results taken from ASU11 about the relations between these combinatorial questions. The first result goes back to Erdos and Szemeredi.

The weak sunflower conjecture: A family \cal F of subsets of {1,2,…,n}  with no sunflower of size 3 can have at most (2-\epsilon)^n sets.

The following results are not difficult.

Theorem: The strong sunflower conjecture implies the weak sunflower conjecture.

Theorem: The strong cap set conjecture also implies the weak sunflower conjecture.

Theorem: The weak sunflower conjecture implies that the CW90 conjecture is false.

It follows that the CW90 conjecture is in conflict both with the Erdos-Rado sunflower conjecture and with the strong cap set conjecture.

Theorem: The strong cap set conjecture implies that the strong UPS conjecture is false.

While two-family theorems are quite popular in extremal combinatorics (see this post and this one), CKSU’s two-family conjecture is still rather isolated from other combinatorics.

What to believe?

This is a nice topic for discussion.

Projections to the TSP Polytope

Michael Ben Or told me about the following great paper Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds by Samuel Fiorini, Serge Massar, Sebastian Pokutta, Hans Raj Tiwary and Ronald de Wolf. The paper solves an old conjecture of Yannakakis about projections of polytopes.

From the abstract: “We solve a 20-year old problem posed by M. Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the maximum cut problem and the stable set problem. These results follow from a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.”

There are many interesting aspects to this story. The starting point was a series of papers in the 80s trying to prove that P=NP by solving TSP using linear programming. The idea was to present the TSP polytope as a projection of a higher-dimensional polytope described by polynomially many linear inequalities, and to solve the LP problem on that larger polytope.  Yannakakis proved that such attempts are doomed to fail when the larger LP problem keeps the symmetry of the original TSP polytope.

Yannakakis asked if the symmetry condition can be removed and this is what the new paper shows. This is a very interesting result also from the point of view of convex polytope theory.

Another exciting aspect of the paper is the use of methods from quantum communication complexity.

Update: See this post over GLL for a discussion and a description of a follow-up paper.

 

The AC0 Prime Number Conjecture

Möbius randomness and computational complexity

Last spring Peter Sarnak gave a thought-provoking lecture in Jerusalem. (Here are the very interesting slides of a similar lecture at I.A.S.)

Here is a variation of the type of questions Peter has raised.

The AC_0 Prime Number Conjecture: The correlation between the Möbius function and any function f from the natural numbers to {-1,1} that can be described by a bounded-depth, polynomial-size circuit tends to zero!
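
To make the statement concrete, here is a toy numerical experiment (entirely my own; the circuit below is just a small illustrative depth-2 AND/OR formula in the binary digits of m, and the cutoff X is arbitrary) estimating the normalized correlation \frac{1}{X}\sum_{m \le X}\mu(m) f(m):

```python
def mobius_sieve(X):
    """Möbius function mu(1..X) via a linear sieve."""
    mu = [0] * (X + 1)
    mu[1] = 1
    primes, is_comp = [], [False] * (X + 1)
    for i in range(2, X + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > X:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

def f(m):
    """A toy bounded-depth function of a few fixed binary digits of m, with values in {-1,+1}:
    an OR of two ANDs of three bits each."""
    b = lambda j: (m >> j) & 1
    clause1 = b(0) and b(3) and b(5)
    clause2 = b(1) and b(2) and b(7)
    return 1 if (clause1 or clause2) else -1

X = 10 ** 6
mu = mobius_sieve(X)
print(sum(mu[m] * f(m) for m in range(1, X + 1)) / X)   # expected to be small, consistent with the conjecture
```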

Updates: (March) The conjecture made it to major-league blogs: a very interesting blog post describing the conjecture appeared on Dick Lipton’s blog and led to some interesting discussion. I posted two related MathOverflow problems: the first is about the discrete Fourier coefficients of the Mobius function and the second is about the Walsh-Fourier coefficients of the Mobius function. Ben Green has posted an answer to my MathOverflow question indicating an affirmative solution of the AC^0 Prime Number Conjecture. The proof is based on combining the Linial-Mansour-Nisan theorem with results and techniques of Glyn Harman and Imre Kátai (from the paper Primes with preassigned digits II, Acta Arith. 133 (2008), 171–184).

Ben has written a short paper giving the solution here.  (An earlier note by Ben assumed GRH; the full paper contains the extra work needed to prove the result unconditionally.) This is a very nice result. Ben is also optimistic that showing that all Walsh-Fourier functions are asymptotically orthogonal to Mobius (which is a more general result) is within reach by combining the above result with results by Mauduit-Rivat.

Of course, the next logical step is the AC0[2] Prime Number Conjecture.  This includes as a very special case the question of whether all Walsh-Fourier functions are asymptotically orthogonal to the Mobius function. The “Walsh-Fourier” functions are high-degree monomials over the reals, but they can be considered as linear functions over Z/2Z; for that, replace the values {-1,+1} by {0,1} both in the domain and the range of our Boolean functions. What about low-degree polynomials instead of linear polynomials? If we can extend the results to polynomials over Z/2Z of degree at most polylog(n), this will imply, by a result of Razborov, Mobius randomness for AC0[2] circuits. (This is interesting also under GRH.)

(April) Jean Bourgain proved (private communication) that for every Walsh function W_S we have \sum_{m=1}^{X}\mu(m) W_S(m) \le X \cdot e^{-(\log X)^{1/10}}. In other words, \hat \mu(S) \le e^{-(\log X)^{1/10}}.

Jean also showed that under GRH \sum_{m=1}^X\mu(m)W_S(m) \le X^{1-c/(\log\log X)^2}. In other words, \hat \mu(S) \le X^{-c/(\log \log X)^2}. This result suffices to show, under GRH, the “monotone prime number conjecture.”

While the questions about polylog-degree polynomials over Z/2Z and about AC0[2] circuits are still open, Jean Bourgain was able to prove Mobius randomness for AC0[2] circuits of certain sublinear size.

Jean also noted that showing that the Mobius function itself is non-approximable by an AC0[2] circuit (namely, that one cannot reach a correlation of, say, 0.99) can easily be derived from the Razborov-Smolensky theorem, since an easy computation shows that \mu(3x)^2 has correlation >0.8 with the 0 (mod 3) function. So we have a simple argument showing that (for n large) an AC0[2] function (or even a polylog-degree polynomial) cannot have correlation larger than 0.99 with the Mobius function, but showing that it cannot have correlation larger than 0.01 with the Mobius function seems at present very difficult. (More update: as it turned out, this last observation can be found already in Allender-Saks-Shparlinski’s paper.)
 

End of updates.

Let me state the conjecture more precisely.

IPAM remote blogging: The Many Facets of Linear Programming

The many facets of Linear Programming

Here is an extremely nice paper by Michael Todd from 2001. It gives useful background for many lectures and it can serve as a good base point to examine last decade’s progress.

Background post for today’s morning session talks: Is Backgammon in P?

And here is a link to Ye’s paper giving a strongly polynomial algorithm for Markov decision processes with constant discounting. Update: The slides for Ye’s lecture are now posted and they are very detailed and clear.

Günter Ziegler: 1000$ from Beverly Hills for a Math Problem. (IPAM remote blogging.)

Scanned letter by Zadeh. (c) Günter M. Ziegler

left-to-right: David Avis, Norman Zadeh, Oliver Friedmann, and Russ Caflisch (IPAM director). Photo courtesy of Eddie Kim.

Update: The slides for Friedmann’s talk are now available. The conference schedule page now contains the slides for most presentations.

This post is authored by Günter Ziegler with some help from David Avis. A German (slightly expanded) version can be found on Günter’s blog.

Oliver Friedmann, a Computer Science graduate student at Munich University, will defend his thesis next month. It contains a proof that “Zadeh’s rule” for linear optimization is “exponential”, that is, it may take an awfully long time on relatively small problems.

 

Why is this remarkable?

“Linear Optimization” problems are extremely important computational tasks that arise in all kinds of larger planning processes. Such tasks are usually solved on a computer using a method known as the “simplex algorithm”, invented by George Dantzig in 1947. The simplex method needs a prescription for making local choices in its search, known as the “pivot rule”. It is known about most such pivot rules that they can be extremely slow and inefficient on particular optimization problems. One would like (and needs) a rule that is “provably fast”.
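
For readers who have never run an LP solver, here is a minimal, made-up example (using SciPy’s linprog routine with the HiGHS backend; the tiny problem itself is purely illustrative):

```python
from scipy.optimize import linprog

# maximize 3x + 2y  subject to  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
# linprog minimizes, so we negate the objective.
c = [-3, -2]
A_ub = [[1, 1], [1, 3]]
b_ub = [4, 6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)   # optimum at the vertex x = 4, y = 0, with value 12
```

The simplex method walks from vertex to vertex of the feasible polyhedron (here a quadrilateral), and the pivot rule decides which neighboring vertex to move to at each step.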

One candidate for this was the “Zadeh rule” — until today. When Zadeh was a postdoc in the Department of Operations Research at Stanford he published a technical report on worst-case examples for the simplex method.

IPAM Remote Blogging: Santos-Weibel 25-Vertices Prismatoid and Prismatoids with large Width

Here is a web page by Christophe Weibel on the improved counterexample.

The IPAM webpage now contains slides of some of the lectures. Here are Santos’s slides. The last section contains some recent results on the width of 5-prismatoids. A prismatoid is a polytope with two facets containing all the vertices. The width of a prismatoid is the number of steps needed to go between these two facets, where in each step we move from a facet to an adjacent one. Santos’s counterexample is based on finding a 5-dimensional prismatoid with width larger than 5. It is observed that the width of a 5-prismatoid with n vertices cannot exceed 3n/2, and it is shown (by rather involved constructions) that there are examples where the width is as large as \sqrt n.

Remote Blogging: Efficiency of the Simplex Method: Quo vadis Hirsch conjecture?

Here are some links and posts related to some of the talks in IPAM’s workshop “Efficiency of the Simplex Method: Quo vadis Hirsch conjecture?” I will be happy to add links to pdf’s of the presentations and to relevant papers. Descriptions and remarks on individual lectures are very welcome. In particular you are most welcome to post here the posters/abstract/papers from the poster session. (Because of some technical matters I will have to miss the workshop. I hope to be able somehow to follow it from far away.)

Following is a description of some of the lectures and some useful links for a few lectures:

A counterexample to the Hirsch conjecture: Posts related to Paco Santos’s lecture (Tuesday morning) describing a counterexample to the Hirsch conjecture are here and here. The second post contains a link to Paco’s paper. Fred Holt’s second lecture will describe some new consequences of Paco’s construction.

Acyclic USO: Acyclic unique sink orientations (acyclic USO) of cubes and more general polytopes are mentioned in this post about abstract linear objective functions as well as this one about telling simple polytopes from their graphs. Acyclic USO are discussed in David Avis’s Wednesday morning talk and a few others.

Stochastic games: Stochastic games relevant to Yinyu Ye’s and Oliver Friedmann’s Thursday morning talks are discussed in this post. Some background (and links to the papers) for the new subexponential lower bounds for randomized pivot rules that will be described in Friedmann’s lecture can be found in this post. 

Polynomial Hirsch conjecture: Ed Kim’s (Thursday afternoon) and Nicolai Hähnle’s (Friday morning) talks are related to polymath3. David Bremner will discuss (Tuesday afternoon) the combinatorics and geometry of path complexes. Jonathan Kelner will propose (Friday morning) a geometric/probabilistic method based on smoothed analysis to attack the polynomial Hirsch conjecture.

Bad behavior of the simplex algorithm: Examples of the bad behavior of simplex-type algorithms (mainly in three dimensions) will be described in Günter Ziegler’s talk (Tuesday afternoon). Here is the link to Günter’s slides, which are rather detailed. Bernd Gärtner (Wednesday morning) will demonstrate how Goldfarb’s cubes can be used to refute a conjecture regarding an algorithm for machine learning.

Interior point methods: Since the mid 80s, interior point methods for linear programming have been as important theoretically and practically as simplex-type algorithms. (I will add a link to a good wide-audience description of interior point methods HERE.) Jim Renegar’s Thursday afternoon lecture will describe some new advances on central swaths (a generalization of the central path, a central notion for interior point methods). Santosh Vempala’s closing lecture will propose a hybrid vertex-following interior-type algorithm.

Continuous analogs: There are several interesting continuous analogs of combinatorial notions and questions related to the simplex algorithm. Yuriy Zinchenko will discuss (Wednesday afternoon) continuous analogs of the Hirsch conjecture.

Walking on arrangements: Consider the entire arrangement of hyperplanes described by the inequalities of an LP problem. The simplex algorithm can be described as a walk on vertices of this arrangement. Those are very special vertices – the vertices of the feasible polyhedron. The dual simplex algorithm can also be described as a walk on vertices of the arrangement. This time the relevant vertices (I think) are dual-feasible, namely those vertices of the arrangement which optimize the objective function with respect to a subset of the inequalities. What about LP algorithms based on more general types of walks? Komei Fukuda’s Thursday afternoon talk will discuss this issue.

A new polynomial LP algorithm: Sergei Chubanov (Thursday afternoon) will propose a strongly polynomial relaxation-type algorithm which either finds a solution of a linear system or decides that the system has no 0,1-solutions. If the system is feasible and the bounds on the variables are tight, the algorithm always finds a solution. Sergei will go on to show that the algorithm can be used as the basis for the construction of a polynomial algorithm for linear programming. Here is a link to the paper. Tamás Terlaky (Wednesday afternoon) will review several algorithms for linear programming, including elimination and pivot algorithms, interior point methods and the perceptron algorithm.

Complexity of Delaunay triangulations: Nina Amenta’s Friday talk will describe what we recently learned about the complexity of Delaunay triangulations as a function of the distribution of their vertices, and will raise the question “how much of this can be applied to polytopes in general?”

Among the additional presentations: Christophe Weibel presented an improvement to Paco Santos’s counterexample. Jesús De Loera presented a paper, with Bernd Sturmfels and Cynthia Vinzant, on the central curve in linear programming.