Tversky, Kahneman, and Gili Bar-Hillel (Wikipedia). Photo taken by Maya Bar-Hillel at Stanford, summer 1979.

*The following post was kindly contributed by Ehud Friedgut.*

During the past week I’ve been reading, and greatly enjoying, Daniel Kahneman’s brilliant book “Thinking, Fast and Slow”.

One of the most intriguing passages in the book is the description of an experiment designed by Kahneman and Tversky that exemplifies a judgmental flaw exhibited by many people, one that supposedly indicates irrational, or at least inconsistent, behavior. I will describe their experiment shortly.

I still remember the first time I heard of this experiment; it was related to me over lunch in Princeton by Noga Alon. Returning to the problem 15 years later, I once again made the “inconsistent” choice made by the vast majority of the subjects of the study, just as I had on my first exposure. In this post I wish to argue that, in fact, there is nothing wrong with this choice.

Before relating their experiment, let me suggest one of my own. Imagine, if you will, that you suffer from gangrene in one of your toes. The doctor informs you that there is a 20% chance that it is “type A” gangrene, in which case you can expect spontaneous healing. There is a 75% chance that it is type B, in which case you will have to amputate the toe, and a 5% chance that it is type C. In the last case there is a shot you can be given that will save your toe, but it will cost you $2000.

What would you do? I would probably not take the shot. My guiding principle here is that I hate feeling stupid, and there’s a pretty good chance that if I take the shot I’ll walk around for the rest of my life not only minus one toe and $2000, but also feeling foolish for having made a desperate shot in the dark.

Now, say I declined the shot, and I return after a week; the doctor sees that the condition has worsened and that he will have to amputate the toe. He asks me whether I wish (say, at no cost) to have the amputated toe sent for a biopsy, to see whether it was type B or type C. Here my gut reaction, and I’m sure yours too, is a resounding no. But even after thinking it over more carefully, I still think I would prefer not to know. The question is which is better:

Option 1) I have a 75/80 probability of having a clean conscience, and a 5/80 chance of knowing clearly for the rest of my life that I’m lacking a toe because I’m what’s known in Yiddish as an uber-chuchem (smart aleck).

Option 2) Blissful ignorance: for the rest of my life I enjoy the benefit of the doubt, knowing that there’s only a 5/80 chance that the missing toe was caused by my stinginess.
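For what it’s worth, the 75/80 and 5/80 figures above are just the result of conditioning on amputation; a minimal sketch in Python, using only the numbers given in the gangrene story:

```python
# Gangrene story: P(type A) = 0.20 (heals on its own),
# P(type B) = 0.75 (amputation, nothing could have helped),
# P(type C) = 0.05 (amputation, the $2000 shot would have saved the toe).
p_a, p_b, p_c = 0.20, 0.75, 0.05

# The biopsy question only arises conditional on amputation (type B or C).
p_amputation = p_b + p_c                  # 0.80
p_clean_conscience = p_b / p_amputation   # 75/80: the loss was unavoidable
p_my_fault = p_c / p_amputation           # 5/80: declining the shot cost the toe

print(f"P(amputation) = {p_amputation:.2f}")
print(f"P(unavoidable | amputation) = {p_clean_conscience:.4f}")  # 0.9375
print(f"P(my fault | amputation)    = {p_my_fault:.4f}")          # 0.0625
```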

I prefer option 2. I’m guessing that most people would also choose this option. I’m also guessing that Kahneman and Tversky would not label this as an irrational or even an unreasonable choice. I’m almost positive they wouldn’t claim that the two options are equivalent.

Now, back to the KT experiment. You are invited to participate in a two-stage game. In the first stage, 75% of the participants are eliminated at random. At the second stage, if you make it, you have two choices: a 100% chance of winning $30, or an 80% chance of winning $45. But you have to decide before stage one takes place.

What would you choose?

I’ll tell you what I, and the majority of the subjects of the study, did. We chose the $30. Here’s my reasoning: $30 is pretty nice; I can go for a nice lunch. Sure, $45 would upgrade it, but I would feel really bad if I ended up with nothing because I was greedy. Let’s stick to the sure thing.

Now a different experiment: you have to choose between a 20% chance of gaining $45 or a 25% chance of gaining $30.

What do you choose?

Once again, I chose what the majority chose: I would now opt for the $45. My reasoning? 20% sounds pretty close to 25% to me, and the small difference is worthwhile for a 50% gain in the prize.

O.k., I’m sure you all see the paradox. The two games are identical: in both you choose between a 20% chance of $45 and a 25% chance of $30. My reference to “a sure thing” represented a miscomprehension, common to most subjects, who ignored the first stage of the first game. Right?

No, wrong. I think the two games are really different, just as the two options related to the gangrene biopsy were different.

It is perfectly reasonable that when imagining the first game you assume that you are told whether or not you proceed to the second stage, and only if you proceed are you then told, if you chose the 80% option, whether you were lucky.

In contrast, in the second game, it is reasonable to assume that no matter what your choice was, you are just told whether you won or not.

Of course, both games can be generated by the same random process, with the same outcome (choose a random integer between 1 and 100, and observe whether it lies in [1,75], [76,95], or [96,100]), but that doesn’t mean that when you choose the $45 option and lose you always go home with the same feeling. In game 1, if you chose the risky route, you have a 75% probability of losing and knowing that your loss has nothing to do with your choice, and a 5% chance of kicking yourself for being greedy. In game 2 you have an 80% chance of losing, but you enjoy the benefit of the doubt, knowing that there’s only a 5/80 chance that the loss is your fault.
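As a sanity check on this, here is a small simulation (my own sketch, not part of the original experiment) of the shared random process, using the [1,75] / [76,95] / [96,100] split described above:

```python
import random

random.seed(0)
n = 1_000_000
eliminated = lucky = unlucky = 0

for _ in range(n):
    x = random.randint(1, 100)  # the shared random process
    if x <= 75:
        eliminated += 1  # game 1: knocked out at stage one (not your fault)
    elif x <= 95:
        lucky += 1       # the risky ($45) choice pays off
    else:
        unlucky += 1     # the final 80% gamble fails (your fault, in game 1)

# Both games have the same outcome distribution...
print(f"P(win $45) ~ {lucky / n:.3f}")                   # ~0.200
print(f"P(lose)    ~ {(eliminated + unlucky) / n:.3f}")  # ~0.800

# ...but in game 2 you only ever learn that you lost, so:
print(f"P(my fault | lose) ~ {unlucky / (eliminated + unlucky):.4f}")  # ~0.0625
```

The outcome distribution is identical in the two games; what differs is that in game 1 a loser learns which of the two losing events occurred, while in game 2 the two kinds of loss are indistinguishable — which is exactly the difference being argued for here.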

Of course, the way I imagine the design of the games is my own responsibility; it’s not given explicitly by the original wording, but it certainly is implicit there.

I maintain that there is nothing irrational about trying to avoid feeling regret for your choices, and that I would really stick to the “paradoxical” combination of choices even in real life, after fully analyzing the probability space in question.

For those of you reading this blog who don’t know me, I’m a professor of mathematics, and much of my research has to do with discrete probability. That doesn’t mean that I’m not a fool, but at least it gives me the benefit of the doubt, right?

========================================================

O.k., now, here’s part two of my post – after finishing the book.