# Poznań: Random Structures and Algorithms 2013

Michał Karoński (left), who built Poland’s probabilistic combinatorics group at Poznań, and a sculpture honoring the Polish mathematicians who first broke the Enigma machine (right, with David Conlon; picture taken by Jacob Fox).

I am now visiting Poznań for the 16th Conference on Random Structures and Algorithms. This biennial series of conferences started 30 years ago (as a satellite conference to the 1983 ICM, which took place in Warsaw), and this time there was also a special celebration of Béla Bollobás’s 70th birthday. I was looking forward to this first visit to Poland, which is, of course, a moving experience for me. Before Poznań I spent a few days in Gdańsk visiting Robert Alicki.

Today (Wednesday) at the Poznań conference I gave a lecture on threshold phenomena, and here are the slides. In the afternoon we had the traditional random run, with a record number of runners.

Let me briefly tell you about a few of the other lectures:

Update (Thursday): A very good day, including a great talk by Jacob Fox on the relative Szemerédi theorem (click for the slides from a similar talk given in Budapest), where he presented joint work with David Conlon and Yufei Zhao giving a very general and strong form of Szemerédi’s theorem for quasi-random sparse sets, which, among other applications, leads to a much simpler proof of the Green–Tao theorem.

### Mathias Schacht

Mathias Schacht gave a wonderful talk on extremal results in random graphs (click for the slides), describing a large recent body of highly successful research on the topic.

Here are two crucial slides, and going through the whole presentation can give a very good overall picture.

### Vera Sós

Vera Sós gave an inspiring talk about the random nature of graphs which are extremal to the Ramsey property and connections with graph limits. Vera presented the following very interesting conjecture on graph limits.

We say that a sequence of graphs $(G_n)$ has a limit if for every $k$ and every graph $H$ with $k$ vertices, the proportion in $G_n$ of induced copies of $H$ among all $k$-vertex induced subgraphs tends to a limit. Let us also say that $(G_n)$ has a V-limit if for every $k$ and every $e$, the proportion in $G_n$ of induced subgraphs with $k$ vertices and $e$ edges among all $k$-vertex induced subgraphs tends to a limit.

Sós’ question: Is having a V-limit equivalent to having a limit?

This is open even in the case of quasirandomness, namely, when the limit is given by the Erdős–Rényi model $G(n,p)$. (Update: in this case having a V-limit is equivalent to having a limit, as several participants of the conference observed.)
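For small graphs, both kinds of statistics in Sós’s question can be computed by brute force. Here is a minimal sketch (the function names and graph encoding are mine, not from the talk; feasible only for small $k$ and $n$, since it enumerates all $k$-subsets):

```python
from itertools import combinations, permutations

def induced_edges(adj, S):
    """Number of edges in the subgraph induced on the vertex set S."""
    return sum(1 for u, v in combinations(S, 2) if v in adj[u])

def v_statistics(adj, k):
    """Map e -> fraction of k-subsets inducing exactly e edges (the V-limit data)."""
    subsets = list(combinations(range(len(adj)), k))
    counts = {}
    for S in subsets:
        e = induced_edges(adj, S)
        counts[e] = counts.get(e, 0) + 1
    return {e: c / len(subsets) for e, c in counts.items()}

def induced_density(adj, H_adj):
    """Fraction of k-subsets whose induced subgraph is isomorphic to H
    (brute force over all vertex bijections; fine only for small k)."""
    k = len(H_adj)
    subsets = list(combinations(range(len(adj)), k))
    hits = 0
    for S in subsets:
        S = list(S)
        for perm in permutations(range(k)):
            # map H-vertex i to G-vertex S[perm[i]] and compare adjacency
            if all((S[perm[j]] in adj[S[perm[i]]]) == (j in H_adj[i])
                   for i in range(k) for j in range(i + 1, k)):
                hits += 1
                break
    return hits / len(subsets)

# Example: the 4-cycle C4, as a dict of neighbor sets
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(v_statistics(adj, 2))
```

The V-statistics are coarser: they record only the number of edges of each induced subgraph, while the limit data records its isomorphism type, which is exactly why the implication from V-limit to limit is nontrivial.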

Both a positive and a negative answer to this fundamental question would lead to many further (different) open problems.

### Joel Spencer

Joel Spencer gave a great (blackboard) talk about algorithmic aspects of the probabilistic method, and how existence theorems proved via the probabilistic method now often come with complicated randomized algorithms. Joel mentioned his famous six standard deviations theorem. In this case, Joel conjectured thirty years ago that there is no efficient algorithm to find the coloring promised by his theorem. Joel was delighted to see his conjecture refuted, first by Nikhil Bansal (who found an algorithm whose analysis depends on the theorem) and later by Shachar Lovett and Raghu Meka (who found a new algorithm giving a new proof). In fact, Joel said, having his conjecture disproved is even more delightful than having it proved.

Based on this experience Joel and I are now proposing another conjecture:

Kalai-Spencer (pre)conjecture: Every existence statement proved by the probabilistic method can be complemented by an efficient (possibly randomized) algorithm.

By “complemented by an efficient algorithm” we mean that there is an efficient (polynomial-time) randomized algorithm to create the promised object with high probability. We refer to it as a preconjecture since the term “the probabilistic method” is not entirely well defined. But it may be possible to put this conjecture on formal grounds, and to discuss it informally even before that.
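To illustrate the baseline that the algorithmic results improve on: for $n$ sets over $n$ points, a uniformly random $\pm 1$ coloring has discrepancy $O(\sqrt{n \log n})$ with high probability (Chernoff bound plus a union bound over the sets), while Spencer’s theorem guarantees a coloring of discrepancy at most $6\sqrt{n}$, which the Bansal and Lovett–Meka algorithms find efficiently. A minimal sketch of the naive random baseline (all names are mine; this is not any of the sophisticated algorithms):

```python
import math
import random

def random_coloring_discrepancy(sets, n, trials=20):
    """Color the n points +/-1 uniformly at random several times and
    return the smallest maximum set-imbalance seen."""
    best = None
    for _ in range(trials):
        chi = [random.choice((-1, 1)) for _ in range(n)]
        disc = max(abs(sum(chi[i] for i in S)) for S in sets)
        best = disc if best is None else min(best, disc)
    return best

random.seed(0)
n = 200
# a random set system: n sets over n points, each point in each set w.p. 1/2
sets = [[i for i in range(n) if random.random() < 0.5] for _ in range(n)]
bound = math.sqrt(2 * n * math.log(2 * n))  # Chernoff + union-bound guarantee
print(random_coloring_discrepancy(sets, n), "vs bound", round(bound, 1))
```

This naive sampler is itself a tiny instance of the conjecture’s theme: the probabilistic existence argument here is directly constructive, whereas for the $6\sqrt{n}$ bound that was precisely the hard part.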

# Why is Mathematics Possible: Tim Gowers’s Take on the Matter

In a previous post I mentioned the question of why mathematics is possible. Among the interesting comments to the post, here is one by Tim Gowers:

“Maybe the following would be a way of rephrasing your question. We know that undecidability results don’t show that mathematics is impossible, since we are interested in a tiny fraction of mathematical statements, and in practice only in a tiny fraction of possible proofs (roughly speaking, the comprehensible ones). But why is it that these two classes match up so well? Why is it that nice mathematical statements so often have proofs that are of the kind that we are able to discover?”

# Why is mathematics possible?

## Spectacular advances in number theory

Last week we heard about two spectacular results in number theory. As announced in Nature, Yitang Zhang proved that there are infinitely many pairs of consecutive primes $(p_n, p_{n+1})$ that are at most 70 million apart! This is a sensational achievement. Pushing 70 million down to 2 would settle the ancient twin prime conjecture, but this is already an amazing breakthrough. An earlier breakthrough came in 2005, when Daniel Goldston, János Pintz, and Cem Yıldırım proved that the gap between consecutive primes, $p_{n+1}-p_n$, is infinitely often smaller than $\sqrt{\log p_n}\,(\log \log p_n)^2$.
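As a toy empirical companion to these results (a standard sieve of Eratosthenes; no connection to the actual proofs), one can sieve the primes up to a modest bound and count how often small gaps occur:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # cross out all multiples of p starting at p*p
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_up_to(10_000)
gaps = [(q - p, p) for p, q in zip(ps, ps[1:])]
twin_count = sum(1 for g, _ in gaps if g == 2)
print("twin prime pairs below 10,000:", twin_count)
```

Such counts are what the heuristics behind the twin prime conjecture predict to grow without bound; Zhang’s theorem is the first proof that *some* bounded gap occurs infinitely often.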

Update: A description of Zhang’s work and a link to the paper can be found on Emmanuel Kowalski’s blog. Further update: A description of Zhang’s work and related questions and results can now be found on Terry Tao’s blog. Terry Tao also proposed a new polymath project aimed at reading Zhang’s paper and attempting to improve the bounds.

Harald Helfgott proved that every odd integer greater than 5 is the sum of three primes. Here the story starts with Vinogradov, who proved it for sufficiently large integers; pushing down what “sufficiently large” means, and pushing up the computerized methods needed to take care of the “small” integers, required much work and ingenuity.
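The computerized side of such results amounts to verifying the statement directly for the “small” cases. A naive sketch of such a check (trial division; the actual verifications use far more sophisticated methods and reach enormously larger bounds):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def three_prime_rep(n):
    """Return primes p <= q <= r with p + q + r = n, or None."""
    for p in range(2, n - 3):
        if not is_prime(p):
            continue
        for q in range(p, n - p - 1):
            r = n - p - q
            if r >= q and is_prime(q) and is_prime(r):
                return (p, q, r)
    return None

# every odd integer in a small range has such a representation
assert all(three_prime_rep(n) is not None for n in range(7, 1001, 2))
print(three_prime_rep(101))  # -> (2, 2, 97)
```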

## Why is Mathematics possible?

The recent news, and a little exchange of views I had with Boaz Barak, bring us back to the question: “Why is mathematics possible?” This is an old question that David Kazhdan outlined in a lovely 1999 essay “Reflection on the development of mathematics in the twentieth century.” The point (from modern view) is this: We know that mathematical statements can, in general, be undecidable.  We also know that a proof for a short mathematical statement can be extremely long. And we also know that even if a mathematical statement admits a short proof, finding such a proof can be computationally intractable. Given all that, what are the reasons that mathematics is at all possible?

It is popular to answer with an appeal to “human creativity.” The problem with incorrect (or, at least, incomplete) answers is that they often represent missed opportunities for better ones. I think that for the question “why is mathematics possible?” there are opportunities (even using computational-complexity thinking) to offer better answers.

# The Privacy Paradox of Rann Smorodinsky

The following paradox was raised by Rann Smorodinsky:

Suppose that you face the following one-time scenario. You want to buy a sandwich, and the options are a roast beef sandwich or an avocado sandwich. Getting the sandwich you prefer (say, the avocado sandwich) adds one to your utility, but having your private preference become known to the seller reduces your utility by one. The seller’s prior on your preference is fifty-fifty.

If you simply choose the avocado sandwich, your choice reveals your preference, so your utility is $1-1=0$. You can improve on this by picking each type of sandwich at random with probability 1/2. In this case your private preference remains unknown, and you gain 1/2 in expectation, for having the sandwich you prefer with probability 1/2.

But given this strategy, you can still improve on it by choosing the avocado sandwich: the seller expects you to flip a coin, so under that belief your choice reveals nothing, yet you get your preferred sandwich for sure.
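The three utility computations can be laid out directly (a toy encoding of the paradox; the function name and the way privacy cost is modeled are mine, not Smorodinsky’s):

```python
def utility(p_preferred, privacy_leaked):
    """Toy utility: +1 with probability p_preferred for getting the
    preferred sandwich, -1 if the seller can infer the preference."""
    return p_preferred - privacy_leaked

# Pure strategy: always pick what you prefer -> the seller infers it.
pure = utility(1.0, 1)
# Mixed strategy: flip a fair coin -> the choice reveals nothing.
mixed = utility(0.5, 0)
# Deviation: the seller still believes you flip a coin, so nothing is
# revealed under that belief, yet you always get the avocado sandwich.
deviate = utility(1.0, 0)
print(pure, mixed, deviate)  # -> 0.0 0.5 1.0
```

The paradox lives in the last line: the deviation strictly dominates the mixed strategy given the seller’s belief, but if the seller anticipates the deviation we are back to the pure case.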

# The 1789 Declaration of the Rights of Man

Today (April 27, 2012) it is precisely 222 years, 8 months, and 1 day since the completion of the Declaration of the Rights of Man, which makes it a perfect occasion to celebrate this remarkable human creation.

Here is a beautiful lecture by Jonathan Israel about the history of basic human rights:

The History of Basic Human Rights: The Declaration of the Rights of Man, 1789

Lior: Some people in the department think that they are wiser than what they really are

John: I am really wiser than what I think I am.

John’s statement is paradoxical (and funny). It looks similar to famous paradoxical self-referential statements, but it has a twist.

# Itamar Pitowsky: Probability in Physics, Where does it Come From?

I came across a videotaped lecture by Itamar Pitowsky, given at PITP some years ago, on the question of probability in physics that we discussed in two earlier posts on randomness in nature (I, II). There are links below to the presentation slides and to a video of the lecture.

A little over a week ago on Thursday, Itamar,  Oron Shagrir, and I sat at our little CS cafeteria and discussed this very same issue.  What does probability mean? Does it just represent human uncertainty? Is it just an emerging mathematical concept which is convenient for modeling? Do matters change when we move from classical to quantum mechanics? When we move to quantum physics the notion of probability itself changes for sure, but is there a change in the interpretation of what probability is?  A few people passed by and listened, and it felt like this was a direct continuation of conversations we had while we (Itamar and I; Oron is much younger) were students in the early 70s. This was our last meeting and Itamar’s deep voice and good smile are still with me.

In spite of his illness of many years Itamar looked in good shape. A day later, on Friday, he met with a graduate student working on connections between philosophy and computer science.  Yet another exciting new frontier. Last Wednesday Itamar passed away from sudden complications related to his illness.

Itamar was a great guy; he was great in science and great in the humanities, and he had an immense human wisdom and a modest, level-headed way of expressing it. I will greatly miss him.

Here is a link to a Condolence page for Itamar Pitowsky

Probability in physics: where does it come from?

## Itamar Pitowsky

### Dept. of Philosophy, The Hebrew University of Jerusalem

The application of probability theory to physics began in the 19th century with Maxwell’s and Boltzmann’s explanation of the properties of gases in terms of the motion of their constituent molecules. Now the term probability is not part of the (classical) theory of particle motion; so what does it mean, and where does it come from? Boltzmann sought to reduce the meaning of probability in physics to that of relative frequency. Thus, e.g., we never find a container of gas in normal circumstances (equilibrium) with all of its molecules on the right-hand side. Now, suppose we could prove this from the principles of mechanics: that a dynamical system with a huge number of particles almost never gets into a state with all its particles on one side. Then, to say that such an event has a vanishing probability would simply mean (and not only imply) that it is very rare. I shall explain Boltzmann’s program and assumptions in some detail, and why, in spite of its intuitive appeal, it ultimately fails. We shall also discuss why quantum mechanics, with its “built-in” concept of probability, does not help much, and review some alternatives, as time permits.

Additional resources for this talk: video.

(Here is the original link to the PITP lecture.) My post entitled Amazing possibilities, about various fundamental limitations stated by many great minds that turned out to be wrong, was largely based on examples provided by Itamar.

# Randomness in Nature II

In a previous post we presented a MO question by Liza about randomness:

What is the explanation of the apparent randomness of high-level phenomena in nature?

1. Is it accepted that these phenomena are not really random, meaning that given enough information one could predict them? If so, isn’t that the case for all random phenomena?

2. If there is true randomness and the outcome cannot be predicted, what is the origin of that randomness? (Is it a result of the randomness in the micro world – quantum phenomena, etc.?)

Before I give the floor to the commentators, I would like to mention a conference on this topic that took place in Jerusalem a year ago. The title was “The Probable and the Improbable: The Meaning and Role of Probability in Physics,” and the conference was held in honor of Itamar Pitowsky. Let me also mention that the Wikipedia article on randomness is a good resource.

Here are some of the answers offered to Liza’s question.

One way to think about what it means to say that a physical process is “random” is to say that there is no algorithm that predicts its behavior precisely and runs significantly faster than the process itself. Morally, I think this should be true of many “high-level” phenomena.

# Some Philosophy of Science

The Bayesian approach to the philosophy of science was developed in the first half of the twentieth century. Karl Popper and Thomas Kuhn are twentieth-century philosophers of science who later proposed alternative approaches.

It will be convenient to start with the Bayesian approach since we already talked about probability and Thomas Bayes in this post. The Bayesian approach (mainly associated with Ramsey and Savage) can be regarded as a verification-based philosophy of science; it is based on different scientists gradually updating, according to new empirical evidence, their (different) prior (subjective) probabilities of scientific explanations and theories, until the cumulative evidence is strong enough to reach a common conclusion.

One difficulty with the Bayesian approach is that in cases of disagreement, there are also disagreements on the interpretation of the evidence.

The Bayesian view does not give a way to test a scientific theory but rather a way to update our beliefs in the theory given new evidence. In practice, scientific theories primarily explain existing observations. For example, the main motivation for Newtonian mechanics, and the main support for its validity, was the explanation of Kepler’s laws. Kepler’s laws concerning the elliptic orbits of planets around the sun were discovered seventy years before they were explained by Newtonian mechanics.

Karl Popper (left) and Thomas Kuhn (right)

Popper is famous for basing philosophy of science on the notion of falsification. According to Popper, the mark of a theory as scientific is falsifiability: the possibility to empirically refute the theory – in principle. This is in contrast with other approaches that can be viewed as basing philosophy of science on confirmation or verification. Famously, two principal examples of non-scientific theories according to Popper are the Marxist theory of capital and Freudian psychoanalysis.

If the Bayesian approach, like approaches based on verification, suggests that the optimal way for a scientific theory to proceed is by making safe conjectures which may lead to small incremental progress, Popper’s approach suggests making bold and risky conjectures. One concern about the practical implications of the Popperian approach is that bold conjectures and theories that pass the falsifiability test are of little value if they are absurd or simply false to begin with.

Critics assert that neither Popper’s theory nor earlier approaches based on verification give a proper description of how science is practiced. Also, they have limited normative value regarding how science ought to be practiced. It is especially difficult to use the insights from philosophy of science for scientific theories under development.

Thomas Kuhn is famous for his notions of paradigm shifts and scientific revolutions. According to Kuhn, science is normally carried out inside a certain paradigm that is shared by a community of scientists, and it is furthermore characterized by “paradigm shifts,” which occur when the current paradigm is no longer capable of explaining the new evidence.  Kuhn referred to the process of switching from the common paradigm to a new one as a “scientific revolution.” An important example of a scientific revolution analyzed by Kuhn is the shift from Newtonian mechanics to Einstein’s theory of relativity.

# Noise

What is the correct picture of our world? Are noise and errors part of the essence of matters, and the beautiful perfect patterns we see around us, as well as the notions of information and computation, are just derived concepts in a noisy world? Or do noise and errors just express our imperfect perception of otherwise perfect laws of nature? Talking about an inherently noisy reality may well reflect a better understanding across various scales and areas.