## Spectacular advances in number theory

Last week we heard about two spectacular results in number theory.  As announced in Nature, Yitang Zhang proved that there are infinitely many pairs of consecutive primes $(p_n, p_{n+1})$ which are at most 70 million apart! This is a sensational achievement. Reducing 70 million to 2 would settle the ancient twin prime conjecture, but this is already an amazing breakthrough.  An earlier breakthrough came in 2005 when Daniel Goldston, János Pintz, and Cem Yıldırım proved that the gap between consecutive primes $p_{n+1}-p_n$ is infinitely often smaller than $\sqrt{\log p_n}\,(\log\log p_n)^2$.

Update: A description of Zhang’s work and a link to the paper can be found on Emmanuel Kowalski’s blog. Further update: A description of Zhang’s work and of related questions and results can now be found on Terry Tao’s blog. Terry Tao also proposed a new polymath project aimed at reading Zhang’s paper and attempting to improve the bounds.

Harald Helfgott proved that every odd integer greater than 5 is the sum of three primes.  Here the story starts with Vinogradov, who proved it for sufficiently large integers; pushing down what “sufficiently large” means, and pushing up the computerized methods needed to take care of “small” integers, required much work and ingenuity.

## Why is Mathematics possible?

The recent news, and a little exchange of views I had with Boaz Barak, bring us back to the question: “Why is mathematics possible?” This is an old question that David Kazhdan outlined in a lovely 1999 essay “Reflection on the development of mathematics in the twentieth century.” The point (from modern view) is this: We know that mathematical statements can, in general, be undecidable.  We also know that a proof for a short mathematical statement can be extremely long. And we also know that even if a mathematical statement admits a short proof, finding such a proof can be computationally intractable. Given all that, what are the reasons that mathematics is at all possible?

A popular answer appeals to “human creativity.” The problem with incorrect (or, at least, incomplete) answers is that they often represent missed opportunities for better ones. I think that for the question “why is mathematics possible?” there are opportunities, even using computational-complexity thinking, to offer better answers.

## Enlightenment at a red traffic light

### Wolf Prize laureate Prof. George Daniel Mostow made his greatest scientific breakthrough while driving.

Haaretz tells the story of how Dan Mostow reached his breakthrough known as Mostow’s rigidity theorem.

Congratulations, Dan!

## French-Israeli Meeting and Günterfest

More updates: If you are in Paris on Wednesday or Thursday this week, there will be a lovely French-Israeli inter-academy meeting on mathematics.  The program is very interesting, and I will give a talk quite similar to my recent MIT talk on quantum computers.

Over the weekend we will celebrate Günter Ziegler’s 50th birthday in Berlin. Günter started very, very young, so we had to wait a long time for this.

Happy birthday, Günter!


## Test Your Intuition (21): Auctions

You run a single-item sealed-bid auction where you sell an old camera. There are three bidders, and the value of the camera for each of them is described by a certain (known) random variable: with probability 0.9 the value is 100+x, where x is taken uniformly at random from the interval [-1,1]; with probability 0.1 the value is 300+x, where x is as before. The 300 value represents the case that the item has a special nostalgic value for the buyer.

The values of the camera to the three bidders are thus i.i.d. random variables. (The reason for adding the small random x is to avoid ties, and you can replace the interval [-1,1] with [-ε, ε] for a small ε, if you wish.) If you don’t want to worry about ties, you can simply think of the value as being 100 with probability 0.9 and 300 with probability 0.1.
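For concreteness, here is a minimal sketch of sampling this value distribution (the helper name `draw_value` is mine, not from the post):

```python
import random

def draw_value(rng: random.Random) -> float:
    """One bidder's value: 100 + x w.p. 0.9, 300 + x w.p. 0.1,
    with x uniform on [-1, 1] as the tie-breaking perturbation."""
    x = rng.uniform(-1.0, 1.0)
    base = 100 if rng.random() < 0.9 else 300
    return base + x

rng = random.Random(0)
values = [draw_value(rng) for _ in range(3)]  # three i.i.d. bidder values
```

Averaging many draws should land near the mean value 0.9·100 + 0.1·300 = 120, which is also where the first poll question below sets the bar.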

### The basic question

The basic question for you, the seller, is:

What is the highest expected revenue you, the seller, can guarantee, and what is your optimal allocation policy?

I’d like to see the answer to this question. But let me challenge your intuition with two simpler questions.

1) Can the seller guarantee an expected revenue of 120 or more?

2) What (roughly) is the optimal allocation policy?

a) Highest bidder wins.

b) Highest bidder wins if his bid passes some reserve price.

c) The item is allocated at random to the bidders with probabilities depending on their bids.

### Background: Myerson’s paper and his revenue equivalence theorem

The most relevant paper to this post is the well-known paper “Optimal auction design” by Roger Myerson. Myerson considered the case of a single-item sealed-bid auction where the bidders’ values for the item are independent, identically distributed random variables.

Note that I did not specify the price that the winning bidder is going to pay for the camera. The reason is that according to a famous theorem by Myerson (referred to as the revenue equivalence theorem), when the bidders are strategic, the expected revenues for the seller are determined by the allocation rule and are independent of the pricing policy! (Well, you need to assume a reasonable pricing policy…)

For example, if the item is allocated to the highest bidder, then the expected price will be the second-highest price. If the price is determined by the second-highest bid (Vickrey’s auction), then no bidder has an incentive to give a bid different from his value. But even if the item is allocated to the highest bidder for the highest price, the expected revenues will still be the same! When you analyze an auction mechanism like ours, you can assume that the payments are set in a way that gives the bidders no incentive to bid anything other than the camera’s true value to them. This is sometimes referred to as the revelation principle.

Myerson considered a mechanism which sometimes leads to higher revenues than allocating the item to the highest bidder: the seller sets a reserve price, and the item is allocated to the highest bidder if his bid passes this reserve price, and is not allocated at all otherwise. In the paper Myerson also considered more complicated allocation rules, important in the analysis, where the item is allocated to the bidders with probabilities based on their bids.
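The two simple mechanisms just described can be compared by simulation. The sketch below assumes truthful bidding (justified by revenue equivalence) and uses my own helper names; it only estimates revenue for these two rules and does not settle the poll question:

```python
import random

def draw_value(rng):
    # value model from the post: 100 or 300 plus a tiny uniform tie-breaker
    x = rng.uniform(-1.0, 1.0)
    return (100 if rng.random() < 0.9 else 300) + x

def revenue_second_price(bids, reserve=0.0):
    """Highest bidder wins if he clears the reserve and pays
    max(second-highest bid, reserve); no sale means zero revenue."""
    ordered = sorted(bids)
    top, second = ordered[-1], ordered[-2]
    if top < reserve:
        return 0.0
    return max(second, reserve)

def expected_revenue(reserve, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = [draw_value(rng) for _ in range(3)]
        total += revenue_second_price(bids, reserve)
    return total / trials
```

With no reserve, the estimate should sit (up to the ±1 noise) near the expected second-highest value: at least two of the three bidders are “high” with probability 3·0.9·0.01 + 0.001 = 0.028, giving roughly 0.972·100 + 0.028·300 ≈ 105.6.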

### Polls

This time we have two questions and two polls:

Once again this is a game-theory question. I hope it will lead to an interesting discussion, like the earlier one on tic-tac-toe.

### A little more background: Auctions and Vickrey’s second-price auction

(From Wikipedia) An auction is a process of buying and selling goods or services by offering them up for bid, taking bids, and then selling the item to the highest bidder. In economic theory, an auction may refer to any mechanism or set of trading rules for exchange.

In our case, we consider an auction of a single item (the camera) and each bidder is giving a sealed bid.

(Again from Wikipedia) A Vickrey auction is a type of sealed-bid auction, where bidders submit written bids without knowing the bids of the other people in the auction, and in which the highest bidder wins, but the price paid is the second-highest bid. The auction was first described academically by Columbia University professor William Vickrey in 1961, though it had been used by stamp collectors since 1893. This type of auction is strategically similar to an English auction, and gives bidders an incentive to bid their true value.

## Oz’ Balls Problem: The Solution

A commentator named Oz proposed the following question: you have a box with n red balls and n blue balls. Each time you take out a ball at random; if the ball is red, you put it back in the box and take out a blue ball, and if the ball is blue, you put it back in the box and take out a red ball.

You keep doing this until only balls of one color are left. How many balls will be left (as a function of n)?

Peter Shor wrote in a comment: “I’m fairly sure that there is not enough bias to get $cn$, but it intuitively seems far too much bias to still be $c \sqrt{n}$. I want to say $n^c$. At a wild guess, it’s either $c = \frac{2}{3}$ or $c = \frac{3}{4}$, since those are the simplest exponents between $\frac{1}{2}$ and $1$.”  The comment was followed by a heuristic argument of Kevin Kostelo and computer experiments by Lior Silberman that supported the answer $n^{3/4}$.
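The experiments are easy to reproduce. Here is a minimal sketch of Oz’s process (function name mine); since each draw removes one ball of the opposite color, only the counts matter, not the individual balls:

```python
import random

def oz_process(n, rng):
    """Run Oz's process: draw a ball at random (and return it), then
    remove one ball of the opposite color; stop when one color is gone."""
    red = blue = n
    while red > 0 and blue > 0:
        if rng.random() < red / (red + blue):  # drew a red ball
            blue -= 1
        else:                                  # drew a blue ball
            red -= 1
    return red + blue  # balls remaining, all of one color

rng = random.Random(0)
remaining = [oz_process(10_000, rng) for _ in range(50)]
```

For n = 10,000 the averages land well between $\sqrt{n} = 100$ and $n$, consistent with the conjectured $n^{3/4}$ scaling.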


## Answer: Lord Kelvin, The Age of the Earth, and the Age of the Sun

Yeshu Kolodny and Lord Kelvin

### The question

In 1862, the physicist William Thomson (who later became Lord Kelvin) of Glasgow published calculations that fixed the age of Earth at between 20 million and 400 million years. Later, in the 1890s, Kelvin calculated the age of Earth using thermal gradients, and arrived at an estimate of 100 million years, which he later reduced to 20 million years. (For many more interesting details see this Wikipedia article.)
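For context, Kelvin’s thermal-gradient estimate can be reconstructed from the standard cooling half-space model (a textbook sketch; the numbers below are ballpark assumptions, not Kelvin’s exact inputs). If the Earth starts as a uniform molten body at temperature $T_0$ and cools only by conduction with thermal diffusivity $\kappa$, the surface temperature gradient after time $t$ is

$$G = \frac{T_0}{\sqrt{\pi \kappa t}}, \qquad \text{so} \qquad t = \frac{T_0^2}{\pi \kappa G^2}.$$

Plugging in rough values $T_0 \approx 2000\,\mathrm{K}$, $\kappa \approx 10^{-6}\,\mathrm{m^2/s}$, and a measured gradient of about $1\,\mathrm{K}$ per $30\,\mathrm{m}$ gives $t$ on the order of $10^7$ years, in line with Kelvin’s range.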

The question was: what was the main reason for Lord Kelvin’s wrong estimate?

a) Radioactivity – Heat produced by radioactive decay; this was a process unknown to science for decades to come.

b) Convection – The transfer of heat not through radiation or heat-conduction but through the movement of hot parts to the surface; this is a process familiar in home cooking.

Here are the answer and some discussion, based mainly on what Yeshu Kolodny has told me.

## Test your Intuition/Knowledge: What was Lord Kelvin’s Main Mistake?

### The age of the earth

(Thanks to Yeshu Kolodny.) We now know that the age of the earth is 4.54 billion years (±1%).

From Wikipedia: In 1862, the physicist William Thomson (who later became Lord Kelvin) of Glasgow published calculations that fixed the age of Earth at between 20 million and 400 million years. He assumed that Earth had formed as a completely molten object, and determined the amount of time it would take for the near-surface to cool to its present temperature. His calculations did not account for heat produced via radioactive decay (a process then unknown to science) or convection inside the Earth, which allows more heat to escape from the interior to warm rocks near the surface.

What was the main reason for Lord Kelvin’s wrong estimate?

a) Radioactivity – Heat produced by radioactive decay; this was a process unknown to science for decades to come.

b) Convection – The transfer of heat not through radiation or heat-conduction but through the movement of hot parts to the surface; this is a process familiar in home cooking.

## Indian Crested Porcupine

For a few days we had an Indian Crested Porcupine (דורבן) in our garden. It ate all the flowers and dug an impressive array of tunnels.


## New Ramanujan Graphs!

Margulis’ paper

Ramanujan graphs were constructed independently by Margulis and by Lubotzky, Phillips and Sarnak (who also coined the name). The picture above shows Margulis’ paper, where the graphs are defined and their girth is studied. (I will come back to the question about girth at the end of the post.) In a subsequent paper, Margulis used the girth property to construct efficient error-correcting codes. (Later, Sipser and Spielman realized how to use the expansion property for this purpose.)

The purpose of this post is to briefly tell you about new Ramanujan graphs exhibited by Adam Marcus, Daniel Spielman, and Nikhil Srivastava. Here is the paper. This construction is remarkable for several reasons: First, it gives the first elementary proof of the existence of Ramanujan graphs, and it shows, for the first time, that there are k-regular Ramanujan graphs (with many vertices) when k is not of the form q+1 for a prime power q. Second, the construction uses a novel “greedy” method (with further promised fruits) based on identifying classes of polynomials with interlacing real roots, which does not (so far) lead to an algorithm, neither deterministic nor randomized. Third, the construction relies on Nati Linial’s idea of random graph liftings and verifies (a special case of) a beautiful conjecture of Yonatan Bilu and Linial.
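As a small sanity check of the definition (not of the new construction), one can verify the Ramanujan property numerically: a k-regular graph is Ramanujan if every eigenvalue of its adjacency matrix other than ±k has absolute value at most $2\sqrt{k-1}$. The sketch below (helper names mine) checks this for the standard 3-regular Petersen graph:

```python
import numpy as np

def petersen_adjacency():
    """Adjacency matrix of the Petersen graph: outer 5-cycle (0..4),
    inner pentagram (5..9), and spokes joining them."""
    A = np.zeros((10, 10), dtype=int)
    for i in range(5):
        for u, v in [(i, (i + 1) % 5),          # outer cycle edge
                     (i, i + 5),                # spoke
                     (i + 5, 5 + (i + 2) % 5)]: # pentagram edge
            A[u, v] = A[v, u] = 1
    return A

def is_ramanujan(A):
    """Check |lambda| <= 2*sqrt(k-1) for all eigenvalues other than +-k."""
    k = int(A.sum(axis=1)[0])
    eigenvalues = np.linalg.eigvalsh(A)
    nontrivial = [x for x in eigenvalues if abs(abs(x) - k) > 1e-8]
    return all(abs(x) <= 2 * np.sqrt(k - 1) + 1e-8 for x in nontrivial)
```

The Petersen graph has spectrum {3, 1, −2}, and both 1 and −2 are within $2\sqrt{2} \approx 2.83$, so the check passes.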

## Taking balls away: Oz’ Version

This post is based on a comment by Oz to our question about balls with two colors:

“There is an interesting (and more difficult) variation I once heard but can’t recall where:

You have a box with n red balls and n blue balls. You take out each time a ball at random as before. But, if the ball was red, you put it back in the box and take out a blue ball. If the ball was blue, you put it back in the box and take out a red ball.

You keep going as before until only balls of one color are left. How many balls will be left (as a function of n)?”

1) Roughly εn for some ε>0.

2) Roughly $\sqrt n$?

3) Roughly log n?

4) Roughly a constant?

5) Some other behavior


You have a box with n red balls and n blue balls. You take out balls one by one at random until only balls of one color are left. How many balls will be left (as a function of n)?

1) Roughly εn for some ε>0.

2) Roughly $\sqrt n$?

3) Roughly log n?

4) Roughly a constant?
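Before checking your intuition against the poll, you can check it against a quick simulation (function name mine): balls are removed in a uniformly random order, so the balls remaining at the end form the maximal one-color suffix of that order.

```python
import random

def remaining_after_random_removal(n, rng):
    """Remove balls in a uniformly random order until one color
    is exhausted; return how many balls of the other color remain."""
    box = ['R'] * n + ['B'] * n
    rng.shuffle(box)          # a uniformly random removal order
    red, blue = n, n
    for ball in box:
        if red == 0 or blue == 0:
            break
        if ball == 'R':
            red -= 1
        else:
            blue -= 1
    return red + blue
```

Averaging many runs for a moderate n gives a feel for which of the four answers above is right; I will not spoil it here.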

Here is the collective intuition regarding this problem: