Open Collaborative Mathematics over the Internet – Three Examples

After much hesitation, I decided to share with you the videos of my lecture: Open collaborative mathematics over the internet – three examples, which I gave last January in Doron Zeilberger’s seminar on experimental mathematics at Rutgers. Parts of the 47-minute talk are mathematical, while other parts are about mathematics on the Internet: blogs, the polymath projects, MathOverflow, etc.

Here is the video of part I (30 minutes) and here is the video of  part II (17 minutes).

I tried to pay some homage to Doron’s own lecture style, but when I saw the video, I could not ignore some aspects of my own style – complete indifference between a plus and a minus, between x^2 and \sqrt x, between multiplying by something and dividing by the same thing, between subscripts and superscripts, between what are “rows” and what are “columns”, etc., and, in addition, randomly ending English words with an ‘s’ regardless of what English grammar dictates. Apologies. This was a rare occasion of giving a talk about “meta” matters of doing mathematics.

My first (and main) example was the Erdős discrepancy problem, where I mentioned some experimental aspects and heuristic arguments. The second example was Möbius randomness, and I continued with some comments on MathOverflow, some of the “goodies” one can earn participating in MathOverflow, and some comments on a debate regarding polymath projects organized by the I.A.S. a few years ago. The third example was about mathematically oriented skepticism. This time not about my debate with Aram Harrow and quantum computing skepticism (which I briefly mentioned), but about my “Angry Birds” skepticism. The lecture was a mixture of a blackboard talk and a presentation of various Internet sites.

Summary and links

Here are the links to the Internet pages I presented with an outline of the lecture:

Video I (Erdős discrepancy problem)

00:00-00:43 Doron’s introduction; 00:43-4:00 On the screen: the Erdős discrepancy problem: showing this post (EDP22) from Gowers’s blog. I talked about the chaotic nature of mathematics on the Internet. Then I explained what polymath projects are, mentioned polymath1 and density Hales-Jewett, other polymath projects, and polymath5.

4:00-12:00 Polymath5, discrepancy of hypergraphs, and the Erdős discrepancy problem. [6:06 "There is nothing more satisfying in a lecture than seeing attempts by the speaker to move an unmovable blackboard"]; some basic observations, random signs, Möbius functions, the log n example.

12:00-16:00 Plan for the next fifteen minutes: a) Greedy methods, b) Heuristic approaches, c) Hereditary discrepancy

17:00 – 18:40 Alex Nikolov and Kunal Talwar’s work on hereditary discrepancy. On the screen: Talwar’s post “discrepancy to privacy and back” on Windows on Theory

18:40-23:00 Greedy approaches. On the screen: my question on MathOverflow – on a certain greedy algorithm for the Erdős discrepancy problem; the MO answer by ‘rlo’; a follow-up question

23:00 – 27:00 What is MathOverflow. On the screen: my old MO page. (Here is the new one.) What “MO reputation” is, which “badges” you can earn and for what; earning “hats” on a more advanced site: on the screen: hat champions from TCS-Stackexchange, winter 2012. (This webpage is no longer available.)

27:00 – 30:00 My heuristic approach for EDP

Video II (Möbius randomness, and angry birds)

On the screen: My MO question on Möbius randomness of the Walsh functions

00:00-02:00 Diversion: The IAS debate on polymath projects between Gowers and Sarnak, and comments  by me and by Noga Alon.

02:00-05:40 Möbius randomness and computational complexity. What Möbius randomness (MR) is; Peter Sarnak’s Jerusalem talk on MR, his belief regarding the hardness of factoring, and my opinion; the AC^0 prime number conjectures and Walsh functions; my MO question; Ben Green’s MO answer to one problem and Bourgain’s result answering a second question.

05:40-7:50 A remaining question on MR of Rudin-Shapiro sequences. On the screen – my MO question on Möbius randomness of the Rudin-Shapiro sequence; what a bounty is on MathOverflow; diversion: the power that comes with high reputation on MO; who won my bounty.

7:50-09:30 My debate with Aram Harrow on my quantum computers skepticism. On the screen: the debate’s first post and then the post “Can you hear the shape of a quantum computer?”

09:30-14:30 Example 3: Another mathematically related skepticism – Angry Birds. On the screen: this page from Arqade: most voted questions (a few are very amusing); introduction to the game “Angry Birds”; my skeptical theory that “Angry Birds” is becoming easier with new versions, and the statistical argument for it. I mentioned the Arqade question “Is Angry Birds deterministic?” that got (then) 143 upvotes, and was a role model for me.


14:30-17:00 My Arqade question, my hopes, and how it was accepted by the computer-games community;  On the screen: my Arqade question:

Continue reading

Posted in Mathematics over the Internet | 1 Comment

Poznań: Random Structures and Algorithms 2013


Michal Karonski (left) who built Poland’s probabilistic combinatorics group at Poznań, and a sculpture honoring the Polish mathematicians who first broke the Enigma machine (right, with David Conlon, picture taken by Jacob Fox).

I am now visiting Poznań for the 16th Conference on Random Structures and Algorithms. This biennial series of conferences started 30 years ago (as a satellite conference to the 1983 ICM, which took place in Warsaw), and this time there was also a special celebration of Béla Bollobás’s 70th birthday. I was looking forward to this first visit to Poland, which is, of course, a moving experience for me. Before Poznań I spent a few days in Gdańsk visiting Robert Alicki.

Today (Wednesday)  at the Poznań conference I gave a lecture on threshold phenomena and here are the slides. In the afternoon we had the traditional random run with a record number of runners.

Let me briefly tell you about a few of the other lectures:

Update (Thursday): A very good day, and among others a great talk by Jacob Fox on a relative Szemerédi theorem (click for the slides from a similar talk given in Budapest), where he presented joint work with David Conlon and Yufei Zhao giving a very general and strong form of the Szemerédi theorem for quasi-random sparse sets, which, among other applications, leads to a much simpler proof of the Green–Tao theorem.

Mathias Schacht

Mathias Schacht gave a wonderful talk on extremal results in random graphs (click for the slides), describing a large recent body of highly successful research on the topic.

Here are two crucial slides, and going through the whole presentation can give a very good overall picture.

[Two slides from Mathias Schacht's presentation.]

Vera Sós

Vera Sós gave an inspiring talk about the random nature of graphs which are extremal to the Ramsey property and connections with graph limits. Vera presented the following very interesting conjecture on graph limits.

We say that a sequence of graphs (G_n) has a limit if for every k and every graph H with k vertices, the proportion in G_n of induced H-subgraphs among all k-vertex induced subgraphs tends to a limit. Let us also say that (G_n) has a V-limit if for every k and every e, the proportion in G_n of induced subgraphs with k vertices and e edges among all k-vertex induced subgraphs tends to a limit.
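In symbols (a minimal formalization; the notation p and q is mine, not from the talk), with N=|V(G_n)|:

```latex
p(H;G_n) \;=\; \frac{\#\{\,S \subseteq V(G_n) \,:\, |S|=k,\ G_n[S]\cong H\,\}}{\binom{N}{k}},
\qquad
q(k,e;G_n) \;=\; \sum_{H:\ v(H)=k,\ e(H)=e} p(H;G_n).
```

Having a limit means that p(H;G_n) converges for every fixed H, and having a V-limit means that q(k,e;G_n) converges for every fixed k and e; since each q is a finite sum of p's, having a limit implies having a V-limit, and the question is about the converse.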

Sós’ question: Is having a V-limit equivalent to having a limit?

This is open even in the case of quasirandomness, namely, when the limit is given by the Erdős–Rényi model G(n,p). (Update: in this case having a V-limit is equivalent to having a limit, as several participants of the conference observed.)

Both a positive and a negative answer to this fundamental question would lead to many further (different) open problems.

Joel Spencer

Joel Spencer gave a great (blackboard) talk about algorithmic aspects of the probabilistic method, and how existence theorems proved via the probabilistic method now often require complicated randomized algorithms to make them constructive. Joel mentioned his famous six standard deviations theorem. In this case, Joel conjectured thirty years ago that there is no efficient algorithm to find the coloring promised by his theorem. Joel was delighted to see his conjecture refuted, first by Nikhil Bansal (who found an algorithm whose analysis depends on the theorem) and then by Shachar Lovett and Raghu Meka (who found a new algorithm giving a new proof). In fact, Joel said, having his conjecture disproved is even more delightful than having it proved.

Based on this experience Joel and I are now proposing another conjecture:

Kalai-Spencer (pre)conjecture: Every existence statement proved by the probabilistic method can be complemented by an efficient (possibly randomized) algorithm.

By “complemented by an efficient algorithm” we mean that there is an efficient (polynomial-time) randomized algorithm to create the promised object with high probability. We refer to it as a preconjecture since the term “the probabilistic method” is not entirely well-defined. But it may be possible to put this conjecture on formal grounds, and to discuss it informally even before that.
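To illustrate the kind of statement we have in mind with a toy example (mine, not from Joel's talk): the probabilistic method shows that every graph with m edges has a cut containing at least m/2 edges, and here the existence proof already yields an efficient randomized algorithm, since a uniformly random bipartition cuts m/2 edges in expectation and the best of a few independent trials achieves this with high probability. A minimal sketch:

```python
import random

def large_cut(edges, n, trials=100):
    """A probabilistic-method existence proof turned into a randomized algorithm:
    a uniformly random 2-coloring cuts each edge with probability 1/2, so the
    expected cut size is len(edges)/2; taking the best of a few independent
    trials returns a cut of at least that size with high probability."""
    best_side, best_cut = None, -1
    for _ in range(trials):
        side = [random.randint(0, 1) for _ in range(n)]
        cut = sum(1 for (u, v) in edges if side[u] != side[v])
        if cut > best_cut:
            best_side, best_cut = side, cut
    return best_side, best_cut

# Example: a random graph on 12 vertices
n = 12
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.4]
side, cut = large_cut(edges, n)
print(f"{cut} of {len(edges)} edges are cut (at least half, with high probability)")
```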

Posted in Combinatorics, Conferences, Open problems, Philosophy, Probability | Leave a comment

BosonSampling and (BKS) Noise Sensitivity

Following are some preliminary observations connecting BosonSampling, an interesting  computational task that quantum computers can perform (that we discussed in this post), and noise-sensitivity in the sense of Benjamini, Schramm, and myself (that we discussed here and here.)

BosonSampling and computational-complexity hierarchy-collapse

Suppose that you start with n bosons, each of which can occupy one of m locations. The i-th boson is in a superposition and occupies the j-th location with complex weight a_{ij}. The bosons are indistinguishable, which makes the weight of a given occupation pattern proportional to the permanent of a certain n by n submatrix of the n by m matrix of weights.
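Spelled out for orientation (standard formulas; for simplicity I write the collision-free case, where no location is occupied twice): if A=(a_{ij}) is the n \times m matrix of weights and A_S is the n \times n submatrix whose columns are the occupied locations S, then

```latex
\mathrm{Per}(A_S) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\qquad
\Pr\big[\text{occupation pattern } S\big] \;\propto\; \big|\mathrm{Per}(A_S)\big|^{2}.
```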

BosonSampling is a task that a quantum computer can perform. As a matter of fact, it only requires a “boson machine,” which represents only a fragment of quantum computation. A boson machine is a quantum computer which only manipulates indistinguishable bosons, with gates described by phaseshifters and beamsplitters.

BosonSampling and boson machines were studied in a recent paper, The Computational Complexity of Linear Optics, by Scott Aaronson and Alex Arkhipov (AA). They proved (Theorem 1 in the paper) that if (exact) BosonSampling can be performed by a classical computer, then this implies a collapse of the computational-complexity polynomial hierarchy (PH, for short). This result adds to a similar result achieved independently by Michael J. Bremner, Richard Jozsa, and Dan J. Shepherd (BJS) in the paper entitled “Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy,” and to older results by Barbara Terhal and David DiVincenzo (TD) in the paper Adaptive quantum computation, constant depth quantum circuits and Arthur-Merlin games, Quant. Inf. Comp. 4, 134-145 (2004).

Since universal quantum computers can achieve BosonSampling (and the other related computational tasks considered by TD and BJS), this is a very strong indication of a computational-complexity advantage of quantum computers, which arguably brings us, with quantum computers, to the “cozy neighborhood” of NP-hardness.

Noisy quantum computers with quantum fault-tolerance are also capable of exact BosonSampling and this strong computational-complexity quantum-superiority applies to them as well.

Realistic BosonSampling and Gaussian Permanent Estimation (GPE)

Aaronson and Arkhipov considered the following question, which they referred to as Gaussian Permanent Estimation.

Problem (Problem 2 from AA’s paper) (|GPE|_{\pm}^2): Given as input a matrix X \sim {\cal N}(0,1)_{\bf C}^{n \times n} of i.i.d. Gaussians, together with error bounds \epsilon, \delta > 0, estimate |\mathrm{Per}(X)|^2 to within additive error \pm \epsilon \cdot n!, with probability at least 1-\delta over X, in poly(n,1/\epsilon,1/\delta) time.

They conjectured that such Gaussian Permanent Estimation is computationally hard, and showed (Theorem 3) that this would imply that sampling w.r.t. states achievable by boson machines cannot even be approximated by classical computing (unless PH collapses). They regarded questions about approximation as more realistic in the context of decoherence, where we cannot expect exact sampling.

Scott Aaronson also expressed guarded optimism that even without quantum fault-tolerance BosonSampling can be demonstrated by boson machines for 20-30 bosons, leading to strong experimental evidence for computational advantage of quantum computers (or, if you wish, against the efficient Church-Turing thesis).

Is it so?

More realistic BosonSampling and Noisy Gaussian Permanent Estimation (NGPE)

Let us consider the following variation that we refer to as Noisy Gaussian Permanent Estimation:

Problem 2′ (|NGPE|_{\pm}^2): Given as input a matrix M \sim {\cal N}(0,1)_{\bf C}^{n \times n} of i.i.d. Gaussians and a parameter t>0, let NPER(M) be the expected value of the permanent of \sqrt{1-t^2}\,M+E, where E \sim {\cal N}(0,t)_{\bf C}^{n \times n}. Given the input matrix M together with error bounds \epsilon, \delta > 0, estimate NPER(M) to within additive error \pm \epsilon \cdot n!, with probability at least 1-\delta over M, in poly(n,1/\epsilon,1/\delta) time.
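For concreteness, here is a minimal Monte Carlo sketch of NPER(M) for tiny n (my own illustration, not from AA's paper; the permanent is computed directly from its defining sum over permutations, which is exponential in n, and I read {\cal N}(0,t) as i.i.d. complex Gaussians of standard deviation t, which is one possible convention):

```python
import itertools
import numpy as np

def permanent(a):
    """Permanent as a sum over 'generalized diagonals' (permutations);
    fine for the tiny n used here, exponential in general."""
    n = a.shape[0]
    return sum(np.prod([a[i, s[i]] for i in range(n)])
               for s in itertools.permutations(range(n)))

def complex_gaussian(n, sigma, rng):
    # i.i.d. complex Gaussians with standard deviation sigma
    return (rng.normal(0, sigma / np.sqrt(2), (n, n))
            + 1j * rng.normal(0, sigma / np.sqrt(2), (n, n)))

def nper(m, t, samples=2000, rng=None):
    """Monte Carlo estimate of NPER(M): average the permanent of
    sqrt(1-t^2)*M + E over freshly drawn noise matrices E."""
    rng = rng or np.random.default_rng(0)
    n = m.shape[0]
    vals = [permanent(np.sqrt(1 - t ** 2) * m + complex_gaussian(n, t, rng))
            for _ in range(samples)]
    return np.mean(vals)

rng = np.random.default_rng(1)
M = complex_gaussian(4, 1.0, rng)
print(permanent(M), nper(M, t=0.3, rng=rng))
```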

Problem 2′ seems more relevant for noisy boson machines (without fault-tolerance). The noisy state of the computer is based on every single boson being slightly mixed, and the permanent computation will average over these individual mixtures. We can consider the relevant value of t to be a small constant. Can we expect Problem 2′ to be hard?

The answer for Problem 2′ is surprising. I expect that even when t is very, very tiny, namely t=n^{-\beta} for \beta <1, the value NPER(M) (essentially) does not depend at all on M!

Noise Sensitivity a la Benjamini, Kalai and Schramm

Noise sensitivity in the sense described here for Boolean functions was studied in a paper by Benjamini, Schramm, and me in 1999. (A related notion was studied by Tsirelson and Vershik.) Lectures on noise sensitivity and percolation is a beautiful new monograph by Christophe Garban and Jeff Steif which contains a description of noise sensitivity. The setting extends easily to the Gaussian case; see this paper by Kindler and O’Donnell for the Gaussian case. In 2007, Ofer Zeitouni and I studied the noise sensitivity, in the Gaussian model, of the maximal eigenvalue of random Gaussian matrices (but did not write it up).


Noise sensitivity depends on the degree of the support of the Fourier expansion. For determinants or permanents of an n by n matrix, the basic formulas as sums of generalized diagonals describe the Fourier expansion, so the Fourier coefficients are supported on degree-n monomials. This implies that the determinant and the permanent are very noise sensitive.
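A back-of-the-envelope way to quantify this for the permanent (a standard computation with the Gaussian noise operator; only the fact that the permanent is a multilinear polynomial of degree n in the entries is used): writing M_\rho = \rho M + \sqrt{1-\rho^2}\,E with E an independent Gaussian matrix (so \rho = \sqrt{1-t^2} in the parametrization of Problem 2′),

```latex
\mathbb{E}\Big[\mathrm{Per}(M)\,\overline{\mathrm{Per}(M_\rho)}\Big] \;=\; \rho^{\,n}\;\mathbb{E}\Big[\big|\mathrm{Per}(M)\big|^{2}\Big] \;=\; \rho^{\,n}\, n!,
```

so the correlation between the permanent and its noisy version decays exponentially in n for any fixed noise level.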

Noisy Gaussian Permanent Estimation is easy

Noisy Gaussian Permanent Estimation is easy, even for a very small amount of noise, because the outcome (essentially) does not depend on the input. It is an interesting question what the hardness of NGPE is when the noise is below the level of noise sensitivity.

Update (March 2014): Exploring the connection between BosonSampling and BKS noise sensitivity shows that the picture drawn here is incorrect. Indeed, the square of the permanent is not noise stable even when the noise is fairly small, and this shows that the noisy distribution does not approximate the noiseless distribution. However, the noisy distribution does depend on the input!

 

AA’s protocol and experimental BosonSampling

Scott and Alex proposed a simple experiment described as follows : “An important motivation for our results is that they immediately suggest a linear-optics experiment, which would use simple optical elements (beamsplitters and phaseshifters) to induce a Haar-random m \times m unitary transformation U on an input state of n photons, and would then check that the probabilities of various final states of the photons correspond to the permanents of n \times n submatrices, as predicted by quantum mechanics.”

Recently, four groups carried out interesting BosonSampling experiments with 3 bosons (thus for permanents of 3 by 3 matrices.) (See this post on Scott’s blog.)

BKS noise sensitivity gives simple predictions on how things will behave as a function of the number of bosons, and this can be tested even in experiments with a very small number of bosons. When you increase the number of bosons, the distribution will quickly become uniform over the various final states. The correlation between the probabilities and the values corresponding to permanents will rapidly go to zero.

Some follow-up questions

Here are a few interesting questions that deserve further study.

1) Does Problem 2′ capture the general behavior of noisy boson machines? In what generality does noise sensitivity apply to functions described by BosonSampling distributions?

(There are several versions of photon-based quantum computers, including an important model by Knill, Laflamme, and Milburn that supports universal quantum computation. The relevance of BKS noise sensitivity should be explored carefully for the various versions.)

2) Is the connection with noise sensitivity relevant to the possibility of having boson machines with fault tolerance?

3) What is the Gaussian-quantum analog for BKS’s theorem asserting that noise sensitivity is the law unless  we have substantial correlation with the majority function?

4) What can be said about noise-sensitivity of measurements for other quantum codes?

A few more remarks:

More regarding noisy boson machines and quantum fault tolerance

Noisy boson machines and BosonSampling are related to various other issues regarding quantum fault-tolerance. See my added recent remarks (about the issue of synchronization, and possible modeling using smoothed Lindblad evolutions) to my old post on AA’s work.

Noise sensitivity and the special role of the majority function


The main result of Itai, Oded, and me was that a Boolean function which is not noise sensitive must have a substantial correlation with the majority function. Noise sensitivity, and the special role of majority for it, gave me some motivation to look at quantum fault-tolerance in 2005 (see also this post), and this is briefly discussed in my first paper on the subject, but until now I have not found an actual connection between quantum fault-tolerance and BKS noise-sensitivity.

Censorship

It is an interesting question which bosonic states are realistic, and it came up in some of my papers and in the discussion with Aram Harrow and Steve Flammia (and their paper on my “Conjecture C”).

A sort of conclusion

BosonSampling was offered as a way to prove that quantum mechanics allows a computational advantage without using this advantage for actual algorithmic purposes. If you wish, AA’s protocol is offered as a sort of zero-knowledge proof that the efficient Church-Turing thesis is false. It is a beautiful idea that has attracted interest and good subsequent work from theoreticians and experimentalists. If the ideas described here are correct, BosonSampling and boson machines may give a clear understanding, based on BKS noise sensitivity, of why quantum mechanics does not offer computational superiority (at least not without the magic of quantum fault-tolerance).

Avi’s joke and common sense

Here is a quote from AA referring to a joke by Avi Wigderson: “Besides bosons, the other basic particles in the universe are fermions; these include matter particles such as quarks and electrons. Remarkably, the amplitudes for n-fermion processes are given not by permanents but by determinants of n×n matrices. Despite the similarity of their definitions, it is well-known that the permanent and determinant differ dramatically in their computational properties; the former is #P-complete while the latter is in P. In a lecture in 2000, Wigderson called attention to this striking connection between the boson and fermion dichotomy of physics and the permanent-determinant dichotomy of computer science. He joked that, between bosons and fermions, ‘the bosons got the harder job.’ One could view this paper as a formalization of Wigderson’s joke.”

While sharing the admiration for Avi in general and for Avi’s jokes in particular, if we do want to take Avi’s joke seriously (as we always should), then the common-sense approach would be first to try to understand why it is that nature treats bosons and fermions quite equally, and the dramatic computational distinction is not manifested at all. The answer is that a crucial ingredient of a computational model is the modeling of noise/errors, and that noise sensitivity makes bosons and fermions quite similar physically and computationally.

Eigenvalues, determinants, and Yuval Filmus

It is an interesting question (that I asked on MathOverflow) to understand the Fourier expansion of the set of eigenvalues, the maximum eigenvalue, and related functions. At a later point, last May, I was curious about the Fourier expansion of the determinant, and for the Boolean case I noticed remarkable properties of its Fourier expansion. So I decided to ask Yuval Filmus about it:

Dear Yuval

I am curious about the following. Let D be the function defined on \{-1,1\}^{n^2}
which associates to every ±1 matrix its determinant.
What can be said about the Fourier transform of D? It looks to me that easy arguments show that the Fourier transform is supported only on subsets of the entries
such that in every row and column there is an odd number of entries. Likely there are even further restrictions that I would be interested to explore.
Do you know anything about it?
best Gil

Yuval’s answer came a couple of hours later like a cold shower:

Hi Gil,

The determinant is a sum of products of generalized diagonals.
Each generalized diagonal is just a Fourier character, and they are all different.

In other words, the usual formula for the determinant *is* its Fourier transform.
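In formulas (just restating Yuval's point): over \{-1,1\}^{n^2},

```latex
D(x) \;=\; \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} x_{i,\sigma(i)},
```

and each generalized diagonal \prod_i x_{i,\sigma(i)} is precisely the Fourier-Walsh character \chi_S with S=\{(i,\sigma(i)) : i \in [n]\}; these n! sets are distinct, so the formula above is already the Fourier expansion (in particular every support set meets each row and each column in exactly one entry, which is stronger than the "odd number" guess).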

This reminded me of a lovely story of how I introduced Moni Naor to himself that I should tell sometime.

What else can a quantum computer sample?

The ability of quantum computers to sample (exactly) random complex Gaussian matrices according to the value of their permanents is truly amazing! If you are not too impressed by efficient factoring but still do not believe that QC can reach the neighborhood of NP-hard problems you may find this possibility too good to be true.

I am curious whether sharp-P reductions give us further results of this nature. For example, can a QC sample random 3-SAT formulas (by the uniform distribution, or by a certain other distribution coming from sharp-P reductions) according to the number of their satisfying assignments?

Can QC sample integer polytopes by their volume or by the number of integer points in them? Graphs by the number of 4-colorings?

Posted in Computer Science and Optimization, Physics, Probability | 4 Comments

Lawler-Kozdron-Richards-Stroock’s combined Proof for the Matrix-Tree theorem and Wilson’s Theorem


David Wilson and a cover of Shlomo’s recent book “Curvature in mathematics and physics”

A few weeks ago, in David Kazhdan’s basic notions seminar, Shlomo Sternberg gave a lovely presentation, “Kirchhoff and Wilson via Kozdron and Stroock.” The lecture is based on the work presented in the very recent paper by Michael J. Kozdron, Larissa M. Richards, and Daniel W. Stroock: Determinants, their applications to Markov processes, and a random walk proof of Kirchhoff’s matrix tree theorem. Preprint, 2013. Available online at arXiv:1306.2059.

Here is the abstract:

Kirchhoff’s formula for the number of spanning trees in a connected graph  is over 150 years old. For example, it says that if c_2, \dots, c_n are the nonzero  eigenvalues of the Laplacian, then the number k of spanning trees is k= (1/n)c_2\cdots c_n. There are many proofs.  An algorithm due to Wilson via loop erased random walks produces such a tree, and Wilson’s theorem is that all spanning trees are produced by his algorithm with equal probability. Hence,  after the fact, we know that Wilson’s algorithm produces any given tree with probability 1/k.  A proof due to Lawler, using the Green’s function, shows directly that Wilson’s algorithm has the probability 1/k  of producing any given spanning tree, thus simultaneously proving Wilson’s theorem and Kirchhoff’s formula. Lawler’s proof has been considerably simplified by Kozdron and Stroock. I plan to explain their proof. The lecture will be completely self-contained, using only Cramer’s rule from linear algebra.

(Here are also lecture notes of the lecture by Ron Rosenthal.)

Here is some background.

The matrix-tree theorem

The matrix tree theorem asserts that the number of rooted spanning trees of a connected graph G  is the product of the non-zero eigenvalues of L(G), the Laplacian of G.

Suppose that G has n vertices. The Laplacian of G is the matrix whose (i,i)-entry is the degree of the i-th vertex, and whose (i,j)-entry, for i \ne j, is 0 if the i-th vertex is not adjacent to the j-th vertex, and -1 if they are adjacent. So L(G)=D-A(G), where A(G) is the adjacency matrix of G and D is a diagonal matrix whose entries are the degrees of the vertices. An equivalent formulation of the matrix-tree theorem is that the number of spanning trees is the determinant of the matrix obtained from the Laplacian by deleting the j-th row and j-th column.
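A quick numerical sanity check (my own, using numpy) on the complete graph K_4, which has 4^{4-2}=16 spanning trees by Cayley's formula:

```python
import numpy as np

# Laplacian of the complete graph K_4: L = D - A
n = 4
A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_4
L = np.diag(A.sum(axis=1)) - A           # Laplacian

# Matrix-tree theorem, eigenvalue form:
# number of spanning trees = (1/n) * product of the nonzero eigenvalues
eigs = np.linalg.eigvalsh(L)             # ascending order, eigs[0] is the zero eigenvalue
trees_from_eigs = np.prod(eigs[1:]) / n

# Cofactor form: delete one row and the corresponding column, take the determinant
trees_from_cofactor = np.linalg.det(L[1:, 1:])

print(trees_from_eigs, trees_from_cofactor)   # both equal 16 (up to rounding)
```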

We considered a high dimensional generalization of the matrix tree theorem in these posts (I, II, III, IV).

How to generate a random spanning tree for a graph G?

Using the matrix-tree theorem

Method A: Start with an edge e \in G, use the matrix-tree theorem to compute the probability p_e that e belongs to a random spanning tree of G, and take e with probability p_e. If e is taken, consider the contraction G/e, and if e is not taken, consider the deletion G \backslash e, and continue.

This is an efficient method to generate a random spanning tree according to the uniform probability distribution. You can extend it by assigning each edge a weight and choosing a tree with probability proportional to the product of its weights.

Random weights and greedy

Method B: Assign each edge a random real number between 0 and 1 and choose the spanning tree which minimizes the sum of weights, via the greedy algorithm.

This is a wonderful method but it leads to a different probability distribution on random spanning trees which is very interesting!

The Aldous-Broder random walk method

Method C: The Aldous-Broder theorem. Start a simple random walk from a vertex of the graph until reaching all vertices, and take each edge that did not form a cycle with earlier edges. (Or, in other words, take every edge that reduced the number of connected components of the graph on the whole vertex set and the visited edges.)

Amazingly, this leads to a random uniform spanning tree. The next method is also very amazing and important for many applications.
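Here is a minimal sketch of Method C (my own implementation, for a graph given as an adjacency dictionary); it keeps, for each vertex other than the starting one, the edge through which the walk first reached it:

```python
import random

def aldous_broder(adj, start):
    """Aldous-Broder: simple random walk; for each vertex (other than the start),
    keep the edge by which it is first visited. Returns a uniform spanning tree."""
    visited = {start}
    tree = []
    v = start
    while len(visited) < len(adj):
        w = random.choice(adj[v])        # one step of the simple random walk
        if w not in visited:             # first visit: this edge cannot close a cycle
            visited.add(w)
            tree.append((v, w))
        v = w
    return tree

# Example: the 4-cycle 0-1-2-3 with the diagonal 0-2
adj = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
print(aldous_broder(adj, 0))
```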

David Wilson’s algorithm

Method D: Wilson’s algorithm. Fix a vertex as a root. (Later the root will be a whole set of vertices, and a tree on them.) Start from an arbitrary vertex u not in the root and take a simple random walk until you reach the root. Next, erase all the loops of the path created by the random walk, so you are left with a simple path from u to the root. Add this path to the root and continue!
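And a minimal sketch of Method D (again my own implementation); the loop erasure is done by remembering only the last exit the walk took from each vertex:

```python
import random

def wilson(adj, root):
    """Wilson's algorithm: for each vertex not yet in the tree, run a random walk
    until it hits the tree, erase the loops (by keeping only the last exit taken
    from each vertex), and add the resulting simple path to the tree.
    Returns a uniform spanning tree as a list of edges."""
    in_tree = {root}
    parent = {}
    for u in adj:
        if u in in_tree:
            continue
        # random walk from u; next_step[v] records the last exit from v (= loop erasure)
        next_step = {}
        v = u
        while v not in in_tree:
            next_step[v] = random.choice(adj[v])
            v = next_step[v]
        # retrace the loop-erased path from u and attach it to the tree
        v = u
        while v not in in_tree:
            parent[v] = next_step[v]
            in_tree.add(v)
            v = next_step[v]
    return [(v, parent[v]) for v in parent]

adj = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
print(wilson(adj, root=0))
```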

Here is a link to Wilson’s paper! Here is a nice presentation by Chatterji  and Gulwani.

Posted in Combinatorics, Computer Science and Optimization, Probability | 4 Comments

Auction-based Tic Tac Toe: Solution


Reshef, Moshe and Sam


The question:

(based on discussions with Reshef Meir, Moshe Tennenholtz, and Sam Payne)

Tic Tac Toe has been played since ancient times. In the common version, where the two players X and O take turns marking the empty squares of a 3 by 3 board, each player has a strategy that can guarantee a draw. Now suppose that the X player has a budget of Y dollars, the O player has a budget of 100 dollars, and before each round the two players bid on who makes the next move. The highest bidder makes the move, and his budget is reduced by his bid.

What would you expect the minimal value of Y to be so that the X player has a winning strategy?

We asked this question in our Test Your Intuition series (#17) in this post. Here is a nice recent new version of tic-tac-toe.

The poll:

[Results of the readers' poll.]

The answer:

101.84

Continue reading

Posted in Games, Test your intuition | 6 Comments

Some old and new problems in combinatorics and geometry


Paul Erdős in Jerusalem, 1933  1993

I just came back from a great Erdős Centennial conference in wonderful Budapest. I gave a lecture on old and new problems (mainly) in combinatorics and geometry (here are the slides), where I presented twenty problems. Here they are:

Around Borsuk’s Problem

Let f(d) be the smallest integer so that every set of diameter one in R^d can be covered by f(d) sets of smaller diameter. Borsuk conjectured that f(d) \le d+1.

It is known (Kahn and Kalai, 1993) that f(d) \ge 1.2^{\sqrt d}, and also that (Schramm, 1989) f(d) \le (\sqrt{3/2}+o(1))^d.

Problem 1: Is f(d) exponential in d?

Problem 2: What is the smallest dimension for which Borsuk’s conjecture is false?

Volume of sets of constant width in high dimensions

Problem 3: Let us denote the volume of the n-ball of radius 1/2 by V_n.

Question (Oded Schramm): Is there some \epsilon >0 so that for every n>1 there exists a set K_n of constant width 1 in dimension n whose volume satisfies VOL(K_n) \le (1-\epsilon)^n V_n?

Around Tverberg’s theorem

Tverberg’s Theorem states the following: Let x_1,x_2,\dots, x_m be points in R^d with m \ge (r-1)(d+1)+1. Then there is a partition S_1,S_2,\dots, S_r of \{1,2,\dots,m\} such that \cap_{j=1}^r conv(x_i: i \in S_j) \ne \emptyset.
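For orientation (a standard special case, not one of the twenty problems): the case r=2 is Radon's theorem,

```latex
m \ge d+2 \;\Longrightarrow\; \exists\ S_1 \,\dot\cup\, S_2 = \{1,\dots,m\} \ \text{ with } \ \mathrm{conv}(x_i : i \in S_1)\,\cap\,\mathrm{conv}(x_i : i \in S_2) \ne \emptyset.
```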

Problem 4: Let t(d,r,k) be the smallest integer such that given m points x_1,x_2,\dots, x_m in R^d, m \ge t(d,r,k), there exists a partition S_1,S_2,\dots, S_r of \{1,2,\dots,m\} such that every k among the convex hulls conv(x_i: i \in S_j), j=1,2,\dots,r, have a point in common.

Reay’s “relaxed Tverberg conjecture” asserts that whenever k >1 (and k \le r), t(d,r,k)= (d+1)(r-1)+1.

Problem 5: For a set A, denote by T_r(A) those points in R^d which belong to the convex hull of each of r pairwise disjoint subsets of A. We call these points Tverberg points of order r.

Conjecture: For every A \subset R^d , \sum_{r=1}^{|A|} {\rm dim} T_r(A) \ge 0.

Note that \dim \emptyset = -1.

Problem 6: How many points T(d;s,t) in R^d guarantee that they can be divided into two parts so that every union of s convex sets containing the first part has a non-empty intersection with every union of t convex sets containing the second part?

A question about directed graphs

Problem 7: Let G be a directed graph with n vertices and 2n-2 edges. When can you divide your set of edges into two trees T_1 and T_2 (so far we disregard the orientation of the edges), so that when you reverse the directions of all edges in T_2 you get a strongly connected digraph?

Erdős-Ko-Rado theorem meets Catalan

Problem 8 

Conjecture: Let \cal C be a collection of triangulations of an n-gon so that every two triangulations in \cal C share a diagonal.  Then |{\cal C}| is at most the number of triangulations of an (n-1)-gon.

F ≤ 4E

Problem 9: Let K be a two-dimensional simplicial complex and suppose that K can be embedded in R^4. Denote by E the number of edges of K and by F the number of 2-faces of K.

Conjecture: F ≤ 4E.

A weaker version, which is also widely open and very interesting, is: F ≤ C·E for some absolute constant C.

Polynomial Hirsch

Problem 10:  The diameter of graphs of d-polytopes with n facets is bounded above by a polynomial in d and n.

Analysis – Fixed points

Problem 11: Let K be a convex body in R^d. (Say, a ball; say, a cube…) For which classes \cal C of functions does every f \in {\cal C} which takes K into itself admit a fixed point in K?

Number theory – infinitely many primes in sparse sets

Problem 12: Find a (not extremely artificial) set A of integers so that for every n, |A\cap [n]| \le n^{0.499}, where you can prove that A contains infinitely many primes.

Möbius randomness for sparse sets

Problem 13: Find a (not extremely artificial) set A of integers so that for every n, |A\cap [n]| \le n^{0.499} where you can prove that

\sum \{\mu(k): k \le n, k \in A\} = o(|A \cap [n]|).

Computation – noisy game of life

Problem 14: Does a noisy version of Conway’s game of life support universal computation?

Ramsey for polytopes

Problem 15: 

Conjecture: For a fixed k, every d-polytope of sufficiently high dimension contains a k-face which is either a simplex or a (combinatorial) cube.

Expectation thresholds and thresholds

Problem 16: Consider a random graph G in G(n,p) and the graph property: G contains a copy of a specific graph H. (Note: H depends on n; a motivating example: H is a Hamiltonian cycle.) Let q be the minimal value for which the expected number of copies of H' in G(n,q) is at least 1/2 for every subgraph H' of H. Let p be the value for which the probability that G in G(n,p) contains a copy of H is 1/2.

Conjecture (Kahn–Kalai 2006): p/q = O(\log n).
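To make the motivating example concrete (a standard back-of-the-envelope calculation, not part of the slide): for H a Hamiltonian cycle, the expected number of copies of H at edge probability q is

```latex
\frac{n!}{2n}\, q^{\,n} \;\approx\; \left(\frac{nq}{e}\right)^{n},
```

so the expectation threshold is q = \Theta(1/n) (the subgraphs H' of H, unions of paths, impose no stronger requirement), while the true threshold for containing a Hamiltonian cycle is p = \Theta(\log n/n); the ratio p/q is therefore \Theta(\log n), matching the conjectured bound.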

Traces

Problem 17: Let X be a family of subsets of [n]=\{1,2,\dots,n\}.
How large does X need to be so that the restriction (trace) of X to some set B \subset [n] with |B|=(1/2+\delta)n has at least 3/4 \cdot 2^{|B|} elements?

Graph-codes

Problem 18: Let P be a property of graphs. Let \cal G be a collection of graphs with n vertices so that the symmetric difference of any two graphs in \cal G has property P. How large can \cal G be?

Conditions for colorability

Problem 19: A conjecture by Roy Meshulam and me:

There is a constant C such that every graph G with no induced cycles of order divisible by 3 is colorable by C colors.

Problem 20:

Another conjecture by Roy Meshulam and me: For every b>0 there
is a constant C=C(b) with the following property:

Let G be a graph such that for all its induced subgraphs H, the number of independent sets of odd size minus the number of independent sets of even size is between -b and b.

Then G is colorable by C(b) colors.

Remarks:

The title of the lecture is borrowed from several papers and talks by Erdős. Continue reading

Posted in Combinatorics, Geometry, Open problems | 4 Comments

The Kadison-Singer Conjecture has been Proved by Adam Marcus, Dan Spielman, and Nikhil Srivastava

…while we keep discussing why mathematics is possible…

The news

Adam Marcus, Dan Spielman, and Nikhil Srivastava posted a paper entitled “Interlacing Families II: Mixed Characteristic Polynomials and the Kadison-Singer Problem,” where they prove the 1959 Kadison-Singer conjecture.

(We discussed part I of “interlacing families” in this post about new Ramanujan graphs.  Looks like a nice series.)

The Kadison-Singer Conjecture

The Kadison-Singer conjecture refers to a positive answer to a question posed by Kadison and Singer: they asked “whether or not each pure state of \cal B is the extension of some pure state of some maximal abelian algebra” (where \cal B is the collection of bounded linear transformations on a Hilbert space). I heard about this question in a different formulation known as the “Bourgain-Tzafriri conjecture” (I will state it below), and the paper addresses a related well-known discrepancy formulation by Weaver. (See also Weaver’s comment on the appropriate “quantum” formulation of the conjecture.)

Updates: A very nice post on the relation of the Kadison-Singer conjecture to the foundations of quantum mechanics is in this post on Bryan Roberts’ blog Soul Physics. Here is a very nice post on the mathematics of the conjecture, with ten interesting comments on the proof, by Orr Shalit, and another nice post on Yemon Choi’s blog, and how could I miss the very nice post on James Lee’s blog. Nov 4, 2013: A new post with essentially the whole proof appeared on Terry Tao’s blog, Real stable polynomials and the Kadison-Singer problem.

Update: A very nice blog post on the new result was written by Nikhil Srivastava on “Windows on Theory.” It emphasizes the discrepancy-theoretic nature of the new result, and explains the application to partitioning graphs into expanders.

The Bourgain-Tzafriri theorem and conjecture

Let me tell again (see this post about Lior, Michael, and Aryeh where I told it first) about a theorem of Bourgain and Tzafriri related to the Kadison-Singer conjecture.

Jean Bourgain and Lior Tzafriri considered the following scenario: Let a > 0 be a real number. Let A be an n \times n matrix with norm 1 and with zeroes on the diagonal. An s by s principal submatrix M is “good” if its norm is less than a.

Consider the following hypergraph F:

The vertices correspond to the indices [n]=\{1,2,\dots,n\}. A set S \subset [n] belongs to F if the S \times S submatrix of A is good.

Bourgain and Tzafriri showed that for every a > 0 there is C(a) > 0 so that for every matrix A we can find S \in F so that |S| \ge C(a)n.

Moreover, they showed that for every nonnegative weights w_1,w_2,\dots, w_n there is S \in F so that the sum of the weights in S is at least C(a) times the total weight. In other words (by LP duality), the vertices of the hypergraph can be fractionally covered by 1/C(a) edges.
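Spelled out (my own rewriting of the duality step): the fractional cover number \tau^*(F) equals, by LP duality, the value of the dual weighting problem, so the Bourgain-Tzafriri theorem gives

```latex
\tau^*(F) \;=\; \max\Big\{ \sum_{i=1}^{n} w_i \;:\; w \ge 0,\ \ \sum_{i \in S} w_i \le 1 \ \text{ for all } S \in F \Big\} \;\le\; \frac{1}{C(a)},
```

since for any feasible w there is an S \in F with \sum_{i \in S} w_i \ge C(a) \sum_i w_i, which forces \sum_i w_i \le 1/C(a).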

The “big question” is whether there is a real number C'(a) > 0 so that for every matrix A, [n] can be covered by C'(a) good sets, or, in other words, whether the vertices of F can be covered by C'(a) edges. This question is known to be equivalent to an old conjecture of Kadison and Singer (it is also known as the “paving conjecture”). In view of what was already proved by Bourgain and Tzafriri, what is needed is to show that the covering number is bounded from above by a function of the fractional covering number. So, if you wish, the Kadison-Singer conjecture had become a statement about a bounded integrality gap. Before proving the full result, Marcus, Spielman and Srivastava gave a new proof of the Bourgain-Tzafriri theorem.

Additional references:

Kadison-Singer meets Bourgain-Tzafriri, by Peter G. Casazza and Roman Vershynin; The Kadison-Singer Problem in Mathematics and Engineering: A Detailed Account (pdf); and many other recent publications by Pete Casazza.

Posted in Analysis, Computer Science and Optimization, Physics, Updates | 29 Comments

Why is Mathematics Possible: Tim Gowers’s Take on the Matter


In a previous post I mentioned the question of why mathematics is possible. Among the interesting comments to the post, here is a comment by Tim Gowers:

“Maybe the following would be a way of rephrasing your question. We know that undecidability results don’t show that mathematics is impossible, since we are interested in a tiny fraction of mathematical statements, and in practice only in a tiny fraction of possible proofs (roughly speaking, the comprehensible ones). But why is it that these two classes match up so well? Why is it that nice mathematical statements so often have proofs that are of the kind that we are able to discover?

Continue reading

Posted in Open discussion, Philosophy, What is Mathematics | 23 Comments

Polymath8: Bounded Gaps Between Primes

Yitang Zhang’s very recent shocking paper demonstrated that bounded gaps between primes occur infinitely often, with the explicit upper bound of 70,000,000 given for this gap. Polymath8 was launched for the dual purpose of learning Zhang’s proof and improving the upper bound for the gaps. Here are links to three posts (I, II, III) on Terry Tao’s blog, a post on the Secret Blogging Seminar, and a post on the polymath blog. And here is the table of the world records so far.

Updates: Record for June 16 – 60,744

Posted in Mathematics over the Internet, Number theory, Updates | Leave a comment

Joram’s Memorial Conference


Joram Lindenstrauss 1936-2012

This week our local Institute of Advanced Study holds a memorial conference for Joram Lindenstrauss. Joram was an immensely powerful mathematician, in terms of originality and conceptual vision, in terms of technical power, in terms of courage to confront difficult problems, in terms of clarity and elegance, and in terms of influence and leadership. Joram was a dear teacher and a dear colleague and I greatly miss him.


One nice anecdote that I heard at the conference was about the ceremony where Joram received the Israel Prize. When he shook the hand of the Israeli president, Itzhak Navon, Navon told him: “If you have a little time, please drop by to tell me sometime what Banach spaces are.” Next, Joram shook the hand of prime minister Menachem Begin, who had overheard the comment and told Joram: “If you have a little time, please do not drop by to tell me sometime what Banach spaces are.”

Continue reading

Posted in Conferences, Obituary | 1 Comment