Test Your Intuition (11): Is it Rational to Insure a Toaster

Here is a question from last year’s exam in the course “Basic Ideas of Mathematics”:
 
You buy a toaster for 200 NIS ($50) and you are offered one year of insurance for 24 NIS ($6).
 
a) Is it worth it if the probability that damage covered by the insurance will occur during the first year is 10%? (We assume that without insurance, such damage makes the toaster a “total loss”.)
 
b) Is it worth it if the probability that the toaster will be damaged is unknown?
 
As an additional test of your imagination, can you think of reasons why buying the toaster insurance would be rational?
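Before thinking about part (b) and the imagination question, here is the bare risk-neutral comparison for part (a) — a minimal sketch in Python (purely illustrative; it deliberately settles only the expected-value comparison):

```python
price, premium, p_damage = 200, 24, 0.10  # toaster price (NIS), premium (NIS), damage probability

expected_loss = p_damage * price          # 0.10 * 200 = 20 NIS
print(expected_loss, premium)             # 20.0 vs. 24: the premium exceeds the expected loss
```

The expected loss (20 NIS) is below the premium (24 NIS), so in pure expectation the insurance is not worth it; the interesting parts of the question begin where this calculation ends.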

Itamar Pitowsky: Probability in Physics, Where does it Come From?

I came across a videotaped lecture by Itamar Pitowsky, given at PITP some years ago, on the question of probability in physics that we discussed in two earlier posts on randomness in nature (I, II). There are links below to the presentation slides and to a video of the lecture.

A little over a week ago on Thursday, Itamar, Oron Shagrir, and I sat at our little CS cafeteria and discussed this very same issue. What does probability mean? Does it just represent human uncertainty? Is it just an emerging mathematical concept which is convenient for modeling? Do matters change when we move from classical to quantum mechanics? When we move to quantum physics the notion of probability itself changes for sure, but is there a change in the interpretation of what probability is? A few people passed by and listened, and it felt like this was a direct continuation of conversations we had while we (Itamar and I; Oron is much younger) were students in the early 70s. This was our last meeting and Itamar’s deep voice and good smile are still with me.

In spite of his illness of many years, Itamar looked to be in good shape. A day later, on Friday, he met with a graduate student working on connections between philosophy and computer science. Yet another exciting new frontier. Last Wednesday, Itamar passed away from sudden complications related to his illness.

Itamar was a great guy; he was great in science and great in the humanities, and he had immense human wisdom and a modest, level-headed way of expressing it. I will greatly miss him.

Here is a link to a Condolence page for Itamar Pitowsky

Probability in physics: where does it come from?
Itamar Pitowsky

Dept. of Philosophy, The Hebrew University of Jerusalem

The application of probability theory to physics began in the 19th century with Maxwell’s and Boltzmann’s explanation of the properties of gases in terms of the motion of their constituent molecules. Now the term probability is not a part of the (classical) theory of particle motion; so what does it mean, and where does it come from? Boltzmann thought to reduce the meaning of probability in physics to that of relative frequency. Thus, e.g., we never find a container of gas in normal circumstances (equilibrium) with all of its molecules on the right-hand side. Now, suppose we could prove this from the principles of mechanics: that a dynamical system with a huge number of particles almost never gets into a state with all its particles on one side. Then, to say that such an event has a vanishing probability would simply mean (and not only imply) that it is very rare. I shall explain Boltzmann’s program and assumptions in some detail, and why, in spite of its intuitive appeal, it ultimately fails. We shall also discuss why quantum mechanics, with its “built in” concept of probability, does not help much, and review some alternatives, as time permits.
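To get a sense of the scale of “very rare” here: under the naive combinatorial model in which each molecule sits on either side independently with probability 1/2 (an assumption of this illustration, not of Boltzmann’s program, whose whole point is to derive such statements from mechanics rather than from counting), the probability that all molecules are on the right is 2^{-n}. A one-line Python check of the order of magnitude:

```python
from math import log10

n = 6.02e23  # roughly Avogadro's number of molecules (an illustrative choice)
# The probability that all n fair, independent "left/right" choices come up
# right is 2**(-n); we report its base-10 logarithm, since the number itself
# underflows any float.
print(-n * log10(2))  # about -1.8e23, i.e. the probability is ~10**(-1.8e23)
```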

For more information about Itamar Pitowsky, visit his web site. See his presentation slides.

Additional resources for this talk: video.

 

(Here is the original link to the PITP lecture.) My post entitled Amazing possibilities, about various fundamental limitations stated by many great minds that turned out to be wrong, was largely based on examples provided by Itamar.

Noise Stability and Threshold Circuits

The purpose of this post is to describe old conjectures (or guesses; see this post) by Itai Benjamini, Oded Schramm, and myself (taken from this paper) on the noise stability of threshold functions. I will start by formulating the conjectures, and a little later I will explain the notions I am using.

The conjectures

Conjecture 1: Let f be a monotone Boolean function described by a monotone threshold circuit of size M and depth D. Then f is stable to (1/t)-noise, where t=(\log M)^{100D}.

Conjecture 2: Let f be a monotone Boolean function described by a threshold circuit of size M and depth D. Then f is stable to (1/t)-noise, where t=(\log M)^{100D}.

The constant 100 in the exponent is, of course, negotiable. In fact, replacing 100D with any function of D would suffice for the applications we have in mind. The best we can hope for is that the conjectures are true when t behaves like t=(\log M)^{D-1}.
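The precise definitions come later in the post; for orientation, one common way to quantify noise stability is the probability that f(x) = f(y), where y is obtained from x by flipping each coordinate independently with probability ε. Here is a minimal Monte Carlo sketch of that agreement probability in Python; the choice of majority as the test function and all parameters are illustrative, not taken from the paper:

```python
import random

def majority(bits):
    # A basic monotone threshold function: 1 iff more than half the bits are 1.
    return int(sum(bits) > len(bits) / 2)

def noise_stability(f, n, eps, trials=100_000):
    """Monte Carlo estimate of Pr[f(x) = f(y)], where x is uniform on
    {0,1}^n and y is x with each bit flipped independently with prob eps."""
    agree = 0
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        y = [b ^ (random.random() < eps) for b in x]
        agree += f(x) == f(y)
    return agree / trials

# Majority on 101 bits agrees with its 1%-noisy version most of the time.
print(noise_stability(majority, n=101, eps=0.01))
```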

Conjecture 1 is plausible but looks difficult. The stronger Conjecture 2, while tempting, is quite reckless. Note that the conjectures differ “only” in the location of the word “monotone”. Continue reading

Randomness in Nature II

In a previous post we presented a MO question by Liza about randomness:

 What is the explanation of the apparent randomness of high-level phenomena in nature?

1. Is it accepted that these phenomena are not really random, meaning that given enough information one could predict it? If so isn’t that the case for all random phenomena?

2. If there is true randomness and the outcome cannot be predicted – what is the origin of that randomness? (is it a result of the randomness in the micro world – quantum phenomena etc…)

Before I give the floor to the commentators, I would like to mention a conference on this topic that took place in Jerusalem a year ago. The title was “The Probable and the Improbable: The Meaning and Role of Probability in Physics”, and the conference was in honor of Itamar Pitowsky. Let me also mention that the Wikipedia article on randomness is a good resource.

Here are some of the answers offered here to Liza’s question.

Qiaochu Yuan

One way to think about what it means to say that a physical process is “random” is to say that there is no algorithm which predicts its behavior precisely and runs significantly faster than the process itself. Morally I think this should be true of many “high-level” phenomena. Continue reading

Randomness in Nature

Here is an excellent question asked by Liza on “MathOverflow”.

What is the explanation of the apparent randomness of high-level phenomena in nature? For example the distribution of females vs. males in a population (I am referring to randomness in terms of the unpredictability and not in the sense of it necessarily having to be evenly distributed).

1. Is it accepted that these phenomena are not really random, meaning that given enough information one could predict it? If so isn’t that the case for all random phenomena?

2. If there is true randomness and the outcome cannot be predicted – what is the origin of that randomness? (is it a result of the randomness in the micro world – quantum phenomena etc…)

Where can I find resources about the subject?

Some answers and links can be found below the question in MO. (The question was closed after a few hours.) More answers and further discussion are welcome here.

Related posts: noise, Thomas Bayes and probability, four derandomization problems, some philosophy of science

And here is a related post by Peter Cameron on the question “what is probability”.

Midrasha News

Our Midrasha is going very, very well. There are many great talks, mostly very clear and helpful, in various directions which interlace very nicely. There are some moving new mathematical breakthroughs, a few of them fresh from the oven. Tomorrow is the last day.

Update: I will try to describe some highlights and links to the presentations, related papers, and pictures in a later post.

Four Derandomization Problems

Polymath4 is devoted to a question about derandomization: to find a deterministic polynomial-time algorithm for finding a k-digit prime. So I (belatedly) devote this post to derandomization and, in particular, to the following four problems.

1) Find a deterministic algorithm for primality

2) Find a combinatorial characterization (and a deterministic algorithm) for generically 3-rigid graphs

3) Find an explicit construction for Ramsey graphs

4) Find a deterministic method for polling

(Remark: I was very slow in writing this post, and I had to give up elaborating the details on some very nice topics in order to finish it.)

Primality
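The post is truncated here, but for problem 1 a deterministic polynomial-time algorithm is in fact known: the AKS test of Agrawal, Kayal, and Saxena (2002). The sketch below is not AKS; it is the simpler Miller-Rabin test made deterministic for bounded inputs by a fixed witness set known to suffice for all n below roughly 3.3 * 10^24 (a published bound; the code is a minimal illustration, not from the post):

```python
def is_prime(n):
    """Miller-Rabin made deterministic for n < ~3.3 * 10**24 by using the
    first twelve primes as witnesses (a known-sufficient fixed set)."""
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in witnesses:
        if n % p == 0:
            return n == p          # the small primes themselves are prime
    d, s = n - 1, 0
    while d % 2 == 0:              # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False           # a certifies that n is composite
    return True

print([k for k in range(50) if is_prime(k)])  # 2, 3, 5, 7, 11, ...
```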

Continue reading

Midrasha Mathematicae: The Mathematics of Oded Schramm

Update: The midrasha is taking place now. After three and a half school-days we have a break until Sunday. Clicking on the poster above will lead you to the webpage of the event and to a link to an online broadcast of the lectures.

Our winter school on Probability and Geometry is approaching. It will take place at the Hebrew University of Jerusalem on December 15-23. The deadline for application is November 25, 2009; by all means, apply! It will be a very nice event. We got additional support from the Clay Mathematics Institute and from Microsoft Research. Financial support (local expenses and, in some cases, partial support for travel) may be available.

The poster gives the updated list of confirmed speakers. (There may be one or two additional speakers who have not yet confirmed.)

Here is the link to the site of the school. I will update this page regarding the program and schedule. Update: preliminary program.

Continue reading

Test Your Intuition (10): How Does “Random Noise” Look

This is a somewhat unusual post in the “test your intuition” corner, as the problem is not entirely formal.

How does random noise in the digital world typically look?

Suppose you have a memory of n bits, or a memory based on a larger r-letter alphabet, and suppose that “random noise” hits the memory in such a way that the probability of each bit being affected is t.

What will be the typical behavior of such a random digital noise? Part of the question is to define “random noise” in the best way possible, and then answer it for this definition.

In particular, will the “random noise” typically behave in a way which is close to independent on the different bits? Or will it be highly correlated? Or perhaps the answer depends on the size of the alphabet and the value of t?

The source of this question is an easy fact about quantum memory which asserts that if you consider a random noise operation acting on a quantum memory with n qubits, and if the probability that each qubit is damaged is a tiny real number t, then typically the noise has the following form: with large probability nothing happens, and with tiny probability (a constant times t) a large fraction of the qubits are harmed.
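For contrast with the quantum behavior just described, here is a sketch (Python, with illustrative parameters of my choosing) of the naive classical model in which each bit is flipped independently with probability t: the number of affected bits then concentrates sharply around tn, so one essentially never sees the “nothing, or else a large fraction” dichotomy.

```python
import random

def iid_noise_weights(n, t, trials=10_000):
    """Sample the number of bits flipped out of n when each bit is
    flipped independently with probability t (naive classical noise)."""
    return [sum(random.random() < t for _ in range(n)) for _ in range(trials)]

weights = iid_noise_weights(n=1000, t=0.01)
print(min(weights), sum(weights) / len(weights), max(weights))
# The counts concentrate near t*n = 10; the "nothing or a large fraction"
# dichotomy of the quantum case essentially never appears.
```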

I made one try at the digital (binary) question, but I am not at all sure that it is the “correct” definition of what “random noise” is.

(Maybe I should also try to ask the problem on “MathOverflow”. See also here, here, and here for what MathOverflow is.)

Update: over at “MathOverflow”, Greg Kuperberg made an appealing argument that, for the correct notion of random noise, the behavior in the classical case is similar to that of the quantum case.

Test Your Intuition: #9 (answer to #9), #8 (answer), #7, #6, #5, #4 (answer), #3 (answer), #2, #1.

Answer to Test Your Intuition (9)

Two experimental results of 10/100 and 15/100 are not equivalent to one experiment with an outcome of 3/200.

(Here is a link to the original post.)

One way to see it is to think about 100 experiments. The outcomes under the null hypothesis will be 100 numbers (more or less) uniformly distributed in [0,1]. So the product is extremely tiny.

What we have to compute is the probability that the product of two random numbers uniformly distributed in [0,1] is smaller than or equal to 0.015. This probability is much larger than 0.015.

Here is a useful formula (I thank Brendan McKay for reminding me): if we have n independent values uniformly distributed in [0,1], then the probability that their product is less than X is

X \sum_{i=0}^{n-1} (-1)^i (\log X)^i / i!.

In this case 0.015 * (1 - log(0.015)) ≈ 0.078.
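A quick sanity check of this number (Python; the closed form for n = 2 is the sum above truncated at i = 1):

```python
import math, random

x = 0.015
# Closed form for n = 2: Pr[U1 * U2 <= x] = x * (1 - log x).
print(x * (1 - math.log(x)))   # ~0.078

# Monte Carlo check of the same probability.
trials = 1_000_000
hits = sum(random.random() * random.random() <= x for _ in range(trials))
print(hits / trials)           # also ~0.078
```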

So the outcomes of the two experiments do not show significant support for the theory.

The theory of hypothesis testing in statistics is quite fascinating, and of course, it became a principal tool in science and led to major scientific revolutions. One interesting aspect is the similarity between the notion of statistical proof, which is important all over science, and the notion of interactive proof in computer science. Unlike mathematical proofs, statistical proofs are based on following certain protocols and do not stand alone: if you cannot guarantee that the protocol was followed, the proof has little value.