Michal Karonski (left) who built Poland’s probabilistic combinatorics group at Poznań, and a sculpture honoring the Polish mathematicians who first broke the Enigma machine (right, with David Conlon, picture taken by Jacob Fox).
I am now visiting Poznań for the 16th Conference on Random Structures and Algorithms. This biennial series of conferences started 30 years ago (as a satellite conference to the 1983 ICM, which took place in Warsaw), and this time there was also a special celebration of Béla Bollobás's 70th birthday. I was looking forward to this first visit to Poland, which is, of course, a moving experience for me. Before Poznań I spent a few days in Gdańsk visiting Robert Alicki. Today (Wednesday) at the Poznań conference I gave a lecture on threshold phenomena, and here are the slides. In the afternoon we had the traditional random run, with a record number of runners. Let me briefly tell you about a few of the other lectures.

Update (Thursday): A very good day, and among others a great talk by Jacob Fox on the relative Szemerédi theorem (click for the slides from a similar talk given in Budapest), where he presented joint work with David Conlon and Yufei Zhao giving a very general and strong form of Szemerédi's theorem for quasi-random sparse sets, which, among other applications, leads to a much simpler proof of the Green–Tao theorem.
Mathias Schacht gave a wonderful talk on extremal results in random graphs (click for the slides), describing a large recent body of highly successful research on the topic. Here are two crucial slides, and going through the whole presentation can give a very good overall picture.
Vera Sós gave an inspiring talk about the random nature of graphs that are extremal for the Ramsey property, and connections with graph limits. Vera presented the following very interesting conjecture on graph limits. We say that a sequence of graphs has a limit if for every k and every graph H with k vertices, the proportion of induced copies of H among all k-vertex induced subgraphs tends to a limit. Let us also say that the sequence has a V-limit if for every k and every e, the proportion of induced subgraphs with k vertices and e edges among all k-vertex induced subgraphs tends to a limit. Sós's question: Is having a V-limit equivalent to having a limit? This is open even in the case of quasirandomness, namely, when the limit is given by the Erdős–Rényi model G(n,p). (Update: in this case a V-limit is equivalent to a limit, as several participants of the conference observed.) Both a positive and a negative answer to this fundamental question would lead to many further (different) open problems.
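In symbols (my own notation, for a sequence of graphs $(G_n)$ whose number of vertices tends to infinity), the two notions can be written as follows:

```latex
% Proportion of k-subsets of V(G_n) that induce a copy of H:
\[
  p(H; G_n) \;=\;
  \frac{\#\{\, S \subseteq V(G_n) : |S| = k,\ G_n[S] \cong H \,\}}
       {\binom{|V(G_n)|}{k}} .
\]
% (G_n) has a LIMIT if p(H; G_n) converges as n -> infinity,
% for every k and every graph H on k vertices.
%
% (G_n) has a V-LIMIT if, for every k and e, the coarser quantity
\[
  q(k, e; G_n) \;=\; \sum_{\substack{H :\ |V(H)| = k,\ |E(H)| = e}} p(H; G_n)
\]
% converges: only the numbers of vertices and edges of the induced
% subgraph are recorded, not its isomorphism type.
```

Having a limit trivially implies having a V-limit (sum the limits over the finitely many H with k vertices and e edges); Sós's question is about the converse.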
Joel Spencer gave a great (blackboard) talk about algorithmic aspects of the probabilistic method, and how existence theorems obtained via the probabilistic method now often come with complicated randomized algorithms. Joel mentioned his famous six standard deviations theorem. In this case, Joel conjectured thirty years ago that there is no efficient algorithm to find the coloring promised by his theorem. Joel was delighted to see his conjecture refuted, first by Nikhil Bansal (who found an algorithm whose analysis depends on the theorem) and later by Shachar Lovett and Raghu Meka (who found a new algorithm giving a new proof). In fact, Joel said, having his conjecture disproved is even more delightful than having it proved. Based on this experience Joel and I are now proposing another conjecture:

Kalai–Spencer (pre)conjecture: Every existence statement proved by the probabilistic method can be complemented by an efficient (possibly randomized) algorithm.

By "complemented by an efficient algorithm" we mean that there is an efficient (polynomial-time) randomized algorithm to create the promised object with high probability. We refer to it as a preconjecture since the term "the probabilistic method" is not entirely well-defined. But it may be possible to put this conjecture on formal grounds, and to discuss it informally even before that.
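To illustrate the gap Joel's theorem addresses, here is a toy sketch (this is only the naive random-coloring baseline, not Bansal's or Lovett–Meka's algorithm; the instance and parameters are made up): for n sets over n points, a uniformly random ±1 coloring has discrepancy O(√(n log n)) with high probability, while Spencer's theorem promises that some coloring has discrepancy at most 6√n — better for large n, and for decades nobody knew how to find such a coloring efficiently.

```python
import math
import random

def discrepancy(sets, coloring):
    """disc(chi) = max over sets S of |sum_{i in S} chi(i)|."""
    return max(abs(sum(coloring[i] for i in s)) for s in sets)

random.seed(0)
n = 1000  # toy instance: n random sets over n points
points = range(n)
sets = [{i for i in points if random.random() < 0.5} for _ in range(n)]

# Baseline: a uniformly random +-1 coloring. A Chernoff bound plus a union
# bound over the n sets gives discrepancy O(sqrt(n log n)) w.h.p.; Spencer's
# theorem guarantees the existence of a coloring with discrepancy <= 6*sqrt(n).
chi = [random.choice([-1, 1]) for _ in range(n)]
print("random coloring discrepancy:", discrepancy(sets, chi))
print("Spencer's existential bound:", round(6 * math.sqrt(n)))
```

The point of the Bansal and Lovett–Meka breakthroughs is that a coloring meeting the O(√n) guarantee can actually be constructed in polynomial time, not merely shown to exist.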
In a previous post I mentioned the question of why is mathematics possible. Among the interesting comments to the post, here is a comment by Tim Gowers:
“Maybe the following would be a way of rephrasing your question. We know that undecidability results don’t show that mathematics is impossible, since we are interested in a tiny fraction of mathematical statements, and in practice only in a tiny fraction of possible proofs (roughly speaking, the comprehensible ones). But why is it that these two classes match up so well? Why is it that nice mathematical statements so often have proofs that are of the kind that we are able to discover?
Spectacular advances in number theory
In recent weeks we heard about two spectacular results in number theory. As announced in Nature, Yitang Zhang proved that there are infinitely many pairs of consecutive primes which are at most 70 million apart! This is a sensational achievement. Pushing 70 million down to 2 would settle the ancient conjecture on twin primes, but this is already an extremely amazing breakthrough. An earlier breakthrough came in 2005 when Daniel Goldston, János Pintz, and Cem Yıldırım proved that the gaps between consecutive primes are infinitely often smaller than any fixed fraction of their average size, log p.
Update: A description of Zhang's work and a link to the paper can be found on Emmanuel Kowalski's blog. Further update: A description of Zhang's work and related questions and results can now be found on Terry Tao's blog. Terry Tao also proposed a new polymath project aimed at reading Zhang's paper and attempting to improve the bounds.
Harald Helfgott proved that every odd integer greater than 5 is the sum of three primes. Here the story starts with Vinogradov, who proved it for sufficiently large integers; pushing down what "sufficiently large" means, and pushing up the computerized methods needed to take care of the "small" integers, required much work and ingenuity.
Why is Mathematics possible?
The recent news, and a little exchange of views I had with Boaz Barak, bring us back to the question: "Why is mathematics possible?" This is an old question that David Kazhdan discussed in a lovely 1999 essay, "Reflection on the development of mathematics in the twentieth century." The point (from a modern view) is this: we know that mathematical statements can, in general, be undecidable. We also know that a proof of a short mathematical statement can be extremely long. And we also know that even if a mathematical statement admits a short proof, finding such a proof can be computationally intractable. Given all that, what are the reasons that mathematics is at all possible?
It is popular to offer "human creativity" as an answer. The problem with incorrect (or, at least, incomplete) answers is that they often represent missed opportunities for better ones. I think that for the question "why is mathematics possible" there are opportunities (even using computational-complexity thinking) to offer better answers.
Please offer your answers.
The following paradox was raised by Rann Smorodinsky:
Rann Smorodinsky’s Privacy Paradox
Suppose that you are in the following one-shot scenario. You want to buy a sandwich, and the options are a roast beef sandwich or an avocado sandwich. Choosing the sandwich of your preference (say, the avocado sandwich) adds one to your utility, but having your private preference become known to the seller reduces your utility by one. The prior people have on your preference is fifty-fifty.
If you simply choose the avocado sandwich your utility is zero (the +1 for getting the sandwich you prefer is canceled by the −1 for revealing your preference), hence you can improve on this by picking each type of sandwich at random with probability 1/2. In this case your private preference remains unknown, and you gain 1/2 in expectation for having the sandwich you prefer with probability 1/2.
But given this strategy, your choice reveals nothing about your preference, so you can still improve on it by choosing the avocado sandwich.
Today (April 27, 2012) it is precisely 222 years, 8 months, and 1 day since the adoption of the Declaration of the Rights of Man, which makes it a perfect occasion to celebrate this remarkable human creation.
Here is a beautiful lecture by Jonathan Israel about the history of basic human rights:
The History of Basic Human Rights: The Declaration of the Rights of Man, 1789
A small discussion in a meeting about two decades ago.
Lior: Some people in the department think that they are wiser than they really are.
John: I am really wiser than I think I am.
John's statement is paradoxical (and funny). It looks similar to famous paradoxical self-referential statements, but it has a twist.
See also this question on Philosophy Stack Exchange.
I came across a videotaped lecture by Itamar Pitowsky given at PITP some years ago on the question of probability in physics that we discussed in two earlier posts on randomness in nature (I, II). There are links below to the presentation slides, and to a video of the lecture.
A little over a week ago on Thursday, Itamar, Oron Shagrir, and I sat at our little CS cafeteria and discussed this very same issue. What does probability mean? Does it just represent human uncertainty? Is it just an emerging mathematical concept which is convenient for modeling? Do matters change when we move from classical to quantum mechanics? When we move to quantum physics the notion of probability itself changes for sure, but is there a change in the interpretation of what probability is? A few people passed by and listened, and it felt like this was a direct continuation of conversations we had while we (Itamar and I; Oron is much younger) were students in the early 70s. This was our last meeting and Itamar’s deep voice and good smile are still with me.
In spite of his illness of many years Itamar looked in good shape. A day later, on Friday, he met with a graduate student working on connections between philosophy and computer science. Yet another exciting new frontier. Last Wednesday Itamar passed away from sudden complications related to his illness.
Itamar was a great guy; he was great in science and great in the humanities, and he had an immense human wisdom and a modest, level-headed way of expressing it. I will greatly miss him.
Here is a link to a condolence page for Itamar Pitowsky.
Probability in physics: where does it come from?
Dept. of Philosophy, The Hebrew University of Jerusalem
The application of probability theory to physics began in the 19th century with Maxwell's and Boltzmann's explanation of the properties of gases in terms of the motion of their constituent molecules. Now, the term probability is not part of the (classical) theory of particle motion; so what does it mean, and where does it come from? Boltzmann sought to reduce the meaning of probability in physics to that of relative frequency. Thus, e.g., we never find a container of gas in normal circumstances (equilibrium) with all of its molecules on the right-hand side. Now, suppose we could prove this from the principles of mechanics: that a dynamical system with a huge number of particles almost never gets into a state with all its particles on one side. Then, to say that such an event has vanishing probability would simply mean (and not only imply) that it is very rare. I shall explain Boltzmann's program and assumptions in some detail, and why, in spite of its intuitive appeal, it ultimately fails. We shall also discuss why quantum mechanics, with its "built-in" concept of probability, does not help much, and review some alternatives, as time permits.
For more information about Itamar Pitowsky, visit his web site. See his presentation slides.
Additional resources for this talk: video.
(Here is the original link to the PITP lecture.) My post entitled Amazing possibilities, about various fundamental limitations stated by many great minds that turned out to be wrong, was largely based on examples provided by Itamar.