Buffon’s Needle and the Perimeter of Planar Sets of Constant Width

Here is an answer to “Test your intuition (8)”. (Essentially the answer proposed by David Eppstein.)

(Illustration of Buffon’s needle, from Wolfram MathWorld.)

Buffon’s needle problem asks for the probability that a needle of length \ell will land on a line, when dropped on a floor with equally spaced parallel lines at distance one apart. The problem was posed by Georges-Louis Leclerc, Comte de Buffon (1707 – 1788).

There is a familiar computation showing that when \ell \le 1 the probability is 2\ell/\pi. A computation-free proof can be found in these pages on geometric probability written mostly by Molly McGinty. (They were pointed out to me by Sergiu Hart.)

Briefly the computation-free argument goes as follows:

1) Consider the expected number of crossings of a needle with the lines rather than the probability of crossing.

2) Allow polygonal (piecewise linear) needles as well.

3) Show, based on linearity of expectation, that for a polygonal needle the expected number of crossings is a linear function c\ell of the length \ell of the needle.

4) Consider now a circular needle of radius 1/2, whose length is \pi. With probability one it has exactly two crossings with the lines, so c\pi=2 and hence c=2/\pi.

This gives a proof for Buffon’s needle problem. But now consider any closed planar curve of constant width one. Again, with probability one it has exactly two crossings with the parallel lines; therefore its perimeter equals that of the circle, namely \pi. (This is Barbier’s theorem.)
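Here is a small Monte Carlo sketch of the linearity in step 3 (my own illustration; the function name and parameters are arbitrary). It drops a straight needle of various lengths on unit-spaced lines and compares the empirical expected number of crossings with 2\ell/\pi:

```python
import math
import random

def expected_crossings(length, trials=200_000):
    """Drop a straight needle of the given length on parallel lines at unit
    spacing and return the empirical average number of line crossings."""
    total = 0
    for _ in range(trials):
        y = random.random()                   # height of one endpoint (mod 1)
        theta = random.uniform(0.0, math.pi)  # angle with the lines
        y_end = y + length * math.sin(theta)  # height of the other endpoint
        total += math.floor(y_end)            # lines crossed = integers in (y, y_end); floor(y) == 0
    return total / trials

for ell in (0.3, 0.7, 1.0, 2.5):
    print(f"length {ell}: simulated {expected_crossings(ell):.4f}, "
          f"2*length/pi = {2 * ell / math.pi:.4f}")
```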

Update: The argument above Continue reading

Test Your Intuition (7)

Consider the following game: you have a box that contains one white ball and one black ball. You choose a ball at random and then return it to the box. If you chose a white ball then a white ball is added to the box, and if you chose a black ball then a black ball is added to the box. Played over time, what is the probability that more than 80 percent of the chosen balls are white?
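A quick way to probe your guess numerically is the following simulation sketch (added by me, interpreting the question as the fraction of white draws after a long run; the run lengths are arbitrary):

```python
import random

def white_fraction(draws=10_000):
    """One run of the urn process: start with one white and one black ball,
    repeatedly draw a ball uniformly and return it together with an extra
    ball of the same colour; return the fraction of drawn balls that were white."""
    white, black = 1, 1
    white_drawn = 0
    for _ in range(draws):
        if random.random() < white / (white + black):
            white += 1
            white_drawn += 1
        else:
            black += 1
    return white_drawn / draws

runs = 2_000
hits = sum(white_fraction() > 0.8 for _ in range(runs))
print(f"fraction of runs with more than 80% white draws: {hits / runs:.3f}")
```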

Continue reading

Chess can be a Game of Luck


Can chess be a game of luck?

Let us consider the following two scenarios:

A) We have a chess tournament where each of forty chess players pays a 50 dollar entrance fee and the winner takes the prize, which is 80% of the total entrance fees.

B) We have a chess tournament where each of forty chess players pays a 20,000 dollar entrance fee and the winner takes the prize, which is 80% of the total entrance fees.

Before dealing with these two rather realistic scenarios let us consider the following more hypothetical situations.

C) Suppose that chess players have a quality measure that allows us to determine the probability that any one player will beat the other. Two players play and bet. The strong player bets 10 dollars and the weak player bets according to the probability he will win. (So the expected gain of both players is zero.)

D) Suppose again that chess players have a quality measure that allows us to determine the probability that any one player will beat the other. Two players play and bet. The strong player bets 100,000 dollars and the weak player bets according to the probability he will win. (Again, the expected gain of both players is zero.)

When we analyze scenarios C and D the first question to ask is “What is the game?” In my opinion we need to consider the entire setting, so the “game” consists of both the chess itself and the betting around it. In cases C and D the betting aspects of the game are completely separated from the chess itself. We may suppose that the higher the stakes, the larger the luck component of the combined game. It is reasonable to assume that version C) is mainly a game of skill and version D) is mainly a game of luck.

Now what about the following scenarios: 

E) Two players play chess and bet 5 dollars.

Here the main ingredient is skill; the bet only adds a little spice to the game.

F) Two players play chess and bet 100,000 dollars. 

 Well, to the extent that such a game takes place at all, I would expect that the luck factor will be dominant. (Note that scenario F is not equivalent to the scenario where two players play, the winner gets 300,000 dollars and the loser gets 100,000 dollars.)

Let us go back to the original scenarios A) and B). Here too, I would consider the ingredients of luck and skill to be strongly dependent on the stakes. The setting of scenario A) can be quite compatible with a game of skill where the prizes give some extra incentives to participants (and rewards for the organizers), while in scenario B) it stands to reason that the luck/gambling factor will be dominant.

One critique of my opinion is: What about tennis tournaments where professional tennis players play for large amounts of prize money? Are professional tennis tournaments games of luck? There is one major difference between this example and examples A and B above. In tennis tournaments there are very large prizes, but the expected gain for a player is positive; all (or at least most) players can make a living by participating. This entirely changes the incentives. This is also the case for various high-level professional chess tournaments.

For mathematicians there are a few things that sound strange in this analysis. The luck ingredient is not invariant under multiplying the stakes by a constant, and it is not invariant under giving to (or taking from) the participants a fixed sum of money before the game starts. However, these aspects are crucial when we try to analyze the incentives and motives of players and, in my opinion, it is a mistake to ignore them.

So my answer is: yes, chess can be a game of luck.

Now, what about poker? Continue reading

Answer To Test Your Intuition (4)

Let G be a graph and u and v two vertices.

(1) Let H be a random graph where every edge of G is chosen with probability ½. Let p be the probability that there is a path between u and v in H.

(2) Let T be a random orientation of the graph G. Namely, every edge {x,y} is directed from x to y with probability ½ and otherwise it is directed from y to x. Let q be the probability that there is a directed path from u to v in T.

What is the relation between p and q?

Answer: p=q

This is an old theorem of Colin McDiarmid: Continue reading
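As a sanity check on the theorem, here is a brute-force computation of p and q on a small graph (my own illustration; the test graph and the function name are arbitrary):

```python
from itertools import product

def connection_prob(edges, u, v, directed):
    """Exact probability that v can be reached from u when each edge is kept
    with probability 1/2 (undirected case) or given a uniformly random
    orientation (directed case), independently over the edges."""
    count = 0
    for choice in product((0, 1), repeat=len(edges)):
        if directed:
            arcs = [(x, y) if c else (y, x) for (x, y), c in zip(edges, choice)]
        else:
            kept = [e for e, c in zip(edges, choice) if c]
            arcs = kept + [(y, x) for (x, y) in kept]   # undirected: both directions usable
        reached, stack = {u}, [u]
        while stack:                                    # simple graph search from u
            x = stack.pop()
            for a, b in arcs:
                if a == x and b not in reached:
                    reached.add(b)
                    stack.append(b)
        count += v in reached
    return count / 2 ** len(edges)

edges = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]        # a small test graph
p = connection_prob(edges, 0, 3, directed=False)
q = connection_prob(edges, 0, 3, directed=True)
print(f"p = {p}, q = {q}")                              # the theorem says these coincide
```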

Test Your Intuition (4)

Let G be a graph and u and v two vertices.

(1) Let H be a random graph where every edge of G is chosen with probability ½. Let p be the probability that there is a path between u and v in H.

(2) Let T be a random orientation of the graph G. Namely, every edge {x,y} is directed from x to y with probability ½ and otherwise it is directed from y to x. Let q be the probability that there is a directed path from u to v in T.

What is the relation between p and q?

Continue reading

Some Philosophy of Science

The Bayesian approach to the philosophy of science was developed in the first half of the twentieth century. Karl Popper and Thomas Kuhn are twentieth-century philosophers of science who later proposed alternative approaches.

It will be convenient to start with the Bayesian approach since we already talked about probability and Thomas Bayes in this post. The Bayesian approach (mainly associated with Ramsey and Savage) can be regarded as a verification-based philosophy of science; it is based on different scientists gradually updating, according to new empirical evidence, their (different) prior (subjective) probabilities of scientific explanations and theories, until the cumulative evidence is strong enough to reach a common conclusion.
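To make the updating mechanism concrete, here is a toy numerical sketch (my own illustration; the coin-flipping “theory” and the priors are invented for the example): two agents who start from very different subjective priors converge once enough evidence accumulates.

```python
import random
from math import prod

def update(prior_h, data):
    """Posterior probability of hypothesis H after observing coin flips
    (True = heads), where H says P(heads) = 0.7 and the alternative says
    the coin is fair."""
    like_h = prod(0.7 if x else 0.3 for x in data)
    like_alt = prod(0.5 for _ in data)
    return prior_h * like_h / (prior_h * like_h + (1 - prior_h) * like_alt)

random.seed(1)
evidence = [random.random() < 0.7 for _ in range(100)]   # data generated under H

# two "scientists" with very different subjective priors on H
for name, prior in (("optimist", 0.9), ("skeptic", 0.1)):
    print(f"{name}: prior {prior:.2f} -> posterior {update(prior, evidence):.3f}")
```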

One difficulty with the Bayesian approach is that in cases of disagreement, there are also disagreements on the interpretation of the evidence.

The Bayesian view does not give a way to test a scientific theory but rather a way to update our beliefs in the theory given new evidence. In practice, scientific theories primarily explain existing observations. For example, the main motivation of Newtonian mechanics and the main support for its validity was the explanation of Kepler’s laws. Kepler’s laws concerning the elliptic orbits of planets around the sun were discovered seventy years before they were explained by Newtonian mechanics.

  

 

(Photos: Karl Popper and Thomas Kuhn.)

 Popper is famous for basing philosophy of science on the notion of falsification. According to Popper, the mark of a theory as scientific is falsifiability: the possibility to empirically refute the theory – in principle. This is in contrast with other approaches that can be viewed as basing philosophy of science on confirmation or verification. Famously, two principal examples of non-scientific theories according to Popper are the Marxist theory of capital and Freudian psychoanalysis.  

If the Bayesian approach, like other approaches based on verification, suggests that the optimal way for science to proceed is by making safe conjectures which may lead to small incremental progress, Popper’s approach suggests making bold and risky conjectures. One concern about the practical implications of the Popperian approach is that bold conjectures and theories that pass the falsifiability test are of little value if they are absurd or simply false to begin with.

Critics assert that neither Popper’s theory nor earlier approaches based on verification give a proper description of how science is practiced. Also, they have limited normative value regarding how science ought to be practiced. It is especially difficult to use the insights from philosophy of science for scientific theories under development.

Thomas Kuhn is famous for his notions of paradigm shifts and scientific revolutions. According to Kuhn, science is normally carried out inside a certain paradigm that is shared by a community of scientists, and its history is furthermore characterized by “paradigm shifts,” which occur when the current paradigm is no longer capable of explaining the new evidence. Kuhn referred to the process of switching from the common paradigm to a new one as a “scientific revolution.” An important example of a scientific revolution analyzed by Kuhn is the shift from Newtonian mechanics to Einstein’s theory of relativity. Continue reading

A Problem on Planar Percolation

Conjecture (Gady Kozma): Prove that the critical probability for planar percolation on a Cayley graph of the group \mathbb{Z}^2 is always an algebraic number.

Gady mentioned this conjecture in his talk here about percolation on infinite Cayley graphs. (Update, April 30: Today Gady mentioned to me an even bolder question: to show that for every group \Gamma, either all critical probabilities of its Cayley graphs are algebraic, or none is! I recall a similarly bold conjecture regarding the property of being an expander, which turned out to be false but was fruitful nevertheless.)
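For readers who have not experimented with planar percolation, here is a small simulation sketch (my own illustration, not part of the post or the conjecture) showing the sharp transition of the left-to-right crossing probability near the critical probability 1/2 of bond percolation on the square lattice (an algebraic number, as the conjecture predicts):

```python
import random

def crosses(n, p):
    """One sample of bond percolation on an n-by-n block of the square lattice:
    keep each edge with probability p and report whether an open path connects
    the left column to the right column."""
    parent = list(range(n * n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(n):
        for j in range(n):
            v = i * n + j
            if j + 1 < n and random.random() < p:   # edge to the right neighbour
                union(v, v + 1)
            if i + 1 < n and random.random() < p:   # edge to the neighbour below
                union(v, v + n)
    left_roots = {find(i * n) for i in range(n)}
    return any(find(i * n + n - 1) in left_roots for i in range(n))

n, trials = 40, 400
for p in (0.40, 0.45, 0.50, 0.55, 0.60):
    freq = sum(crosses(n, p) for _ in range(trials)) / trials
    print(f"p = {p:.2f}: crossing frequency ≈ {freq:.2f}")
```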

Many problems Continue reading

A Conference and a School on Oded Schramm’s Mathematics

There will be a conference this summer in Redmond and a school in December in Jerusalem devoted to Oded Schramm’s mathematics.  

Oded Schramm Memorial Conference

Probability and Geometry

Redmond WA, August 30-31, 2009

 


 

To be held at Microsoft Research.

Here are links to the conference page and conference poster

Speakers include

Omer Angel (University of British Columbia)
Itai Benjamini (Weizmann Institute)
Mario Bonk (University of Michigan)
Michael Freedman (Microsoft Station Q)
Christophe Garban (École Normale Supérieure)
Olle Haggstrom (Chalmers University of Technology)
Zheng-Xu He (Beijing CAS Institute of Mathematics)
Gregory F. Lawler (University of Chicago)
Russell Lyons (Indiana University)
Assaf Naor (New York University)
Yuval Peres (Microsoft Research)
Gabor Pete (University of Toronto)
Steffen Rohde (University of Washington)
Scott Sheffield (Massachusetts Institute of Technology)
Stanislav Smirnov (Université de Genève)
Wendelin Werner (Université Paris-Sud XI)
David B. Wilson (Microsoft Research)

 

Midrasha Mathematicae – Discrete Probability and Geometry

The mathematics of Oded Schramm

Jerusalem, 15-23 December 2009

 


To be held at the Hebrew University of Jerusalem, the Institute for Advanced Study, Feldman Building, Givat Ram Campus. Continue reading

Noise Sensitivity Lecture and Tales

 

A lecture about Noise sensitivity

Several of my recent research projects are related to noise, and noise was also a topic of a recent somewhat philosophical post. My oldest and perhaps most respectable noise-related project was the work with Itai Benjamini and Oded Schramm on noise sensitivity of Boolean functions. Recently I gave lectures at several places about noise sensitivity and noise stability of Boolean functions, starting with the definition of these terms and their Fourier description, then the result of Itai, Oded and myself that the crossing event of planar critical percolation is noise-sensitive, and ending with two recent breakthrough results. One is the work by Garban, Pete, and Schramm that described the scaling limit of the spectral distribution for the crossing event in critical percolation. The second is the “majority is stablest” theorem of Mossel, O’Donnell, and Oleszkiewicz and the connections to hardness of approximation.

A fun way to explain noise sensitivity (especially after the 2000 US election) is in terms of the probability that small mistakes in counting the votes in an election will change the outcome. Here we assume that there are two candidates, that every voter votes with probability 1/2 for each candidate (independently), and that when we count every ballot there is a small probability t that the voter’s intention is reversed.
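Here is a small simulation sketch of that setup (my own illustration; the electorate sizes and the error rate t = 0.01 are arbitrary), estimating how often the miscounting reverses a simple majority vote:

```python
import random

def flip_probability(n, t, trials=20_000):
    """Estimate the probability that re-counting the votes, with each ballot
    misread independently with probability t, reverses the majority outcome
    when each of n voters votes uniformly at random between two candidates."""
    flips = 0
    for _ in range(trials):
        votes = [random.random() < 0.5 for _ in range(n)]      # True = candidate A
        noisy = [v != (random.random() < t) for v in votes]    # each ballot misread with prob t
        flips += (sum(votes) * 2 > n) != (sum(noisy) * 2 > n)  # n odd, so no ties
    return flips / trials

for n in (101, 1001, 10001):
    print(f"n = {n}: P(outcome reversed) ≈ {flip_probability(n, t=0.01):.3f}")
```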

Noise sensitivity is one of the reasons to reject the HEX voting rule, but perhaps not the main one :). (Noise stability/sensitivity and the related notion of influence play some role in the recent polymath1 project.)

Here is the PowerPoint presentation. I hope to blog at a later time about the analysis of Boolean functions and about noise sensitivity and noise stability.

Meanwhile let me just give a few general comments and tales: 

Some noise sensitivity tales:

1. We started working on noise sensitivity in 1995 and at that time Itai was expecting a son, so a friend lent him a pager. When we wrote the paper Alon was already 4 years old.


2. The paper by Boris Tsirelson and Anatoly Vershik (which also describes work spanning many years) contains a similar notion. Their motivation was entirely different. One difficulty in translating the two formulations is that “Boolean function” (or rather “a sequence of Boolean functions + some uniformity condition”) in our work is translated to “noise” in Tsirelson’s terminology. And “noise sensitive” is translated to “black” or “non-Fock” or “non-classical” in their work.

3. After the 2000 elections Itai Benjamini and Elchanan Mossel wrote a popular piece about the relevance of this work for elections. Mathematician and writer Barry Cipra wrote a lovely  small article about it. (Looking at it now I find it very impressive how Barry put so much information in a vivid way in such a short article.)

Here is one paragraph from Cipra’s article:

“Three researchers—Itai Benjamini and Oded Schramm, both of Microsoft Research, and Gil Kalai of the Hebrew University in Jerusalem—have turned the question around: How close does a race have to be in order for errors in counting to have a non-negligible chance of reversing the outcome? Their analysis indicates that a simple, nationwide popular vote would be more stable against mistakes than the beleaguered Electoral College system. Indeed, they find, straightforward majority vote is more stable than any other voting method.” Continue reading

Test Your Intuition (1)

Question:  Suppose that we sequentially place n balls into n boxes by putting each ball into a randomly chosen box. It is well known that when we are done, the fullest box has with high probability (1+o(1))\ln n/\ln \ln n balls in it. Suppose instead that for each ball we choose two boxes at random and place the ball into the one which is less full at the time of placement. What will be the maximum occupancy?

Test your intuition before reading the rest of the entry.
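If, after testing your intuition, you would like to experiment, here is a tiny simulation sketch (added by me; the number of boxes is arbitrary) comparing one random choice per ball with two:

```python
import math
import random

def max_load(n, choices):
    """Place n balls into n boxes; each ball picks `choices` boxes at random
    and goes into the least full of them. Return the load of the fullest box."""
    boxes = [0] * n
    for _ in range(n):
        picks = [random.randrange(n) for _ in range(choices)]
        best = min(picks, key=lambda b: boxes[b])
        boxes[best] += 1
    return max(boxes)

n = 100_000
print("one choice :", max_load(n, 1),
      "(ln n / ln ln n ≈", round(math.log(n) / math.log(math.log(n)), 1), ")")
print("two choices:", max_load(n, 2))
```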

Continue reading