How can we assign probabilities in cases of uncertainty? What is the nature of probabilities to begin with? And what is the rational mechanism for making a choice under uncertainty?

Thomas Bayes lived in the eighteenth century. Bayes’ famous formula shows how to update probabilities given some new evidence. Here is an example of an application of Bayes’ rule:

Suppose that ninety percent of pedestrians cross a certain crosswalk when the light is green, and ten percent cross it when the light is red. Suppose also that the probability of being hit by a car is 0.1% for a pedestrian who crosses on a green light, but the probability of being hit by a car is 2% for a pedestrian who crosses on a red light. A pedestrian is hit by a car at this particular crossing and brought to the hospital. How likely is it that he crossed on a red light?

Well, to start with (or *a priori*), only ten percent of the people who cross the crosswalk cross it on a red light, but being told that this person was hit by a car makes the probability that he crossed illegally higher. But by how much? Bayes’ rule allows us to compute this (*a posteriori*) probability. I will not describe the mathematical formula, but I will tell you the outcome: the probability that this person crossed on a red light is roughly 2/3 (precisely 20/29, about 0.69).
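For readers who want to see the arithmetic, here is a short sketch in Python using the numbers from the example:

```python
# Bayes' rule for the crosswalk example in the text.
p_red = 0.10            # prior probability of crossing on red
p_green = 0.90          # prior probability of crossing on green
p_hit_given_red = 0.02  # chance of being hit when crossing on red
p_hit_given_green = 0.001  # chance of being hit when crossing on green

# Total probability of being hit, summed over both ways of crossing.
p_hit = p_hit_given_red * p_red + p_hit_given_green * p_green

# Bayes' rule: P(red | hit) = P(hit | red) * P(red) / P(hit)
p_red_given_hit = p_hit_given_red * p_red / p_hit

print(p_red_given_hit)  # ≈ 0.69, i.e. roughly 2/3
```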

The Bayesian approach can be described as follows. We start by assigning probabilities to certain events of interest and, as more evidence is gathered, we update these probabilities. This approach is applied to mundane decision-making and also to the evaluation of scientific claims and theories in philosophy of science.
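The updating step can be repeated as each new piece of evidence arrives. As a minimal sketch (with made-up numbers: two hypothetical hypotheses about a coin and an assumed sequence of flips):

```python
# Sequential Bayesian updating: two competing hypotheses about a coin,
# with probabilities updated after every flip. All numbers are assumptions
# chosen for illustration, not taken from the post.
likelihood = {"fair": 0.5, "biased": 0.8}  # P(heads | hypothesis)

def update(probs, heads):
    """Return posterior probabilities after observing one flip."""
    posterior = {}
    for hyp, prior in probs.items():
        p_obs = likelihood[hyp] if heads else 1 - likelihood[hyp]
        posterior[hyp] = prior * p_obs
    total = sum(posterior.values())          # normalize so probabilities sum to 1
    return {hyp: p / total for hyp, p in posterior.items()}

probs = {"fair": 0.5, "biased": 0.5}         # a priori: indifferent between the two
for flip in [True, True, True, False, True]:  # True = heads (assumed data)
    probs = update(probs, flip)

print(probs)  # a run of mostly heads shifts the weight toward "biased"
```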

Bayes’ rule tells us how to update probabilities but we are left with the question of how to assign probabilities in cases of uncertainty to begin with. What is the probability of success in a medical operation? What is the chance of your team winning the next baseball game? How likely is it that war will break out in the Middle East in the next decade? How risky are your stock-market investments?

One very early approach to probabilities, the principle of indifference (a.k.a. the principle of insufficient reason), asserts that given a certain number of mutually exclusive events, with no reason to favor any one over the others, we should assign them equal probabilities. The formulation of this principle goes back to Jakob Bernoulli and Pierre-Simon Laplace. This principle is an important very early appearance of the notion of **symmetry**. Of course, there are many cases where the principle of indifference fails miserably. Various other approaches to “subjective probabilities” and to the foundation of probability theory were developed in the twentieth century.

Decisions under uncertainty depend not only on the probabilities but also on the “stakes.” Crossing a crosswalk on a red light will get you to your destination more quickly ninety-eight percent of the time, and two percent of the time you will be hit by a car. To make a rational decision between crossing on a red light or not, you have to take into account how good it is for you to get to your destination earlier and how bad it is for you to get hit by a car. A theory of decisions under uncertainty, based on the notion of utility, was developed by John von Neumann and Oskar Morgenstern, the founders of “game theory.” In this theory, to each possible outcome we assign a numerical quantity called a “utility.” Rational decisions are based on combining the probabilities for various outcomes and the utility gained from each of these outcomes. The theory of von Neumann and Morgenstern has been the subject of intense debate in recent decades.
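The red-light decision can be written out as a small expected-utility computation. The probabilities below come from the text, but the utility values are pure assumptions chosen for illustration:

```python
# Expected-utility comparison for the crossing decision.
# The 98% / 2% split is from the post; the utilities are assumed numbers.
u_arrive_early = 1.0    # assumed utility of getting there a bit sooner
u_hit_by_car = -1000.0  # assumed (very negative) utility of being hit
u_wait = 0.0            # baseline: wait for the green light

# Expected utility of crossing on red: weight each outcome by its probability.
eu_cross_on_red = 0.98 * u_arrive_early + 0.02 * u_hit_by_car
eu_wait = u_wait

# The rational choice, in the von Neumann-Morgenstern sense, maximizes
# expected utility.
best = "cross" if eu_cross_on_red > eu_wait else "wait"
print(eu_cross_on_red, best)
```

With these particular stakes the small time saving is swamped by the 2% chance of a very bad outcome, so waiting wins; different utility assignments can flip the answer.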

Perhaps the major difficulty with the Bayesian point of view, whether relating to decisions under uncertainty or to the Bayesian philosophy of science, is that quite often, no one has a clue how to assign probabilities in cases of uncertainty.

Probabilistic thinking remarkably extends our understanding of reality. At the same time, various phenomena, perhaps those about which we are most curious, remain beyond our understanding even with the language and tools of probability at our disposal. Indeed, introducing the language of probability allowed us to ask many new questions that we cannot answer even using the tools of probability.

“Perhaps the major difficulty with the Bayesian point of view, whether relating to decisions under uncertainty or to the Bayesian philosophy of science, is that quite often, no one has a clue how to assign probabilities in cases of uncertainty.”

In the Bayesian language, you’re saying that many people don’t know how to pick (a priori) priors. But if you *ever* assign a probability, the Bayesian update rule can tell you what your initial prior was, so priors are only unclear in situations where you can’t assign probabilities at all. As such, perhaps this is an unfair argument against the Bayesian view of probability?

As one of my colleagues says, when you want to measure the distance to the moon, you don’t pick up a yardstick. You always know *something* before you do your experiment.

Dear John and a. Rex, thanks for the comments. The purpose of the post was to give a popular elementary introduction to decision under uncertainty. I agree that the difficulty is in cases where you can’t assign probabilities at all. (This is not an argument against Bayesian thought, which I actually like.) For more on choices under uncertainty and related foundational questions about uncertainty and probability, look at Itzhak (Tzahi) Gilboa’s very readable and thought-provoking lecture notes: (graduate level) “Decision under Uncertainty” http://www.tau.ac.il/~igilboa/pdf/Gilboa_Lecture_Notes_Decision_under_Uncertainty.pdf and (more elementary) “Rational Choice.”
