Tversky, Kahneman, and Gili Bar-Hillel (Wikipedia). Taken by Maya Bar-Hillel at Stanford, summer 1979.

* *

*The following post was kindly contributed by Ehud Friedgut.*

During the past week I’ve been reading, and greatly enjoying, Daniel Kahneman’s brilliant book “Thinking, Fast and Slow”.

One of the most intriguing passages in the book is the description of an experiment, designed by Kahneman and Tversky, that exemplifies a judgmental flaw exhibited by many people, one which supposedly indicates irrational or inconsistent behavior. I will describe their experiment shortly.

I still remember the first time I heard of this experiment: it was related to me over lunch in Princeton by Noga Alon. Returning to the problem 15 years later, I made, just as I did on my initial exposure, the “inconsistent” choice made by the vast majority of the subjects of the study. In this post I wish to argue that, in fact, there is nothing wrong with this choice.

Before relating their experiment, let me suggest one of my own. Imagine, if you will, that you suffer from gangrene in one of your toes. The doctor informs you that there is a 20% chance that it is “type A” gangrene, in which case you can expect spontaneous healing. There is a 75% chance that it is type B, in which case you will have to amputate it, and a 5% chance that it is type C. In the last case there is a shot you can be given that will save your toe, but it will cost you 2000$.

What would you do? I would probably not take the shot. My guiding principle here is that I hate feeling stupid, and that there’s a pretty good chance that if I take the shot I’ll walk around for the rest of my life, not only minus one toe and 2000$, but also feeling foolish for making a desperate shot in the dark.

Now, say I declined the shot, and I return after a week, and the doctor sees that the condition has worsened and that he will have to amputate the toe. He asks me whether I wish (say, at no cost) to have the amputated toe sent for a biopsy, to see whether it was type B or type C. Here my gut reaction, and I’m sure yours too, is a resounding no. But even after thinking it over more carefully, I still think I would prefer not to know. The question is which is better:

Option 1) I have a 75/80 probability of having a clean conscience, and a 5/80 chance of knowing clearly for the rest of my life that I’m lacking a toe because I’m what’s known in Yiddish as an uber-chuchem (smart aleck).

Option 2) Blissful ignorance: for the rest of my life I enjoy the benefit of the doubt, knowing that there’s only a 5/80 chance that the missing toe was caused by my stinginess.
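(In case the fractions look mysterious: they come from conditioning on the toe actually being amputated, an event of probability 75% + 5% = 80%. Given amputation, the chance that the gangrene was type B is 0.75/0.80 = 75/80, and the chance that it was type C, i.e. that the shot would have saved the toe, is 0.05/0.80 = 5/80.)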

I prefer option 2. I’m guessing that most people would also choose this option. I’m also guessing that Kahneman and Tversky would not label this an irrational, or even an unreasonable, choice. I’m almost positive they wouldn’t claim that the two options are equivalent.

Now, back to the KT experiment. You are offered the chance to participate in a two-stage game. In the first stage 75% of the participants are eliminated at random. In the second stage, if you make it that far, you have two choices: a 100% chance of winning 30$ or an 80% chance of winning 45$. But you have to decide before stage one takes place.

What would you choose?

I’ll tell you what I, and the majority of the subjects of the study, chose: the 30$. Here’s my reasoning: 30$ is pretty nice, I can go for a nice lunch; 45$ would upgrade it, sure, but I would feel really bad if I ended up with nothing because I was greedy. Let’s stick to the sure thing.

Now a different experiment: you have to choose between a 20% chance of gaining 45$ and a 25% chance of gaining 30$.

What do you choose?

Once again, I chose what the majority chose: I would now opt for the 45$. My reasoning? 20% sounds pretty close to 25% to me, and the small difference is worth it for a 50% increase in the prize.

O.k., I’m sure you all see the paradox. The two games are identical: in both you choose between a 20% chance of 45$ and a 25% chance of 30$. My reference to “a sure thing” represented a miscomprehension, common to most subjects, who ignored the first stage of the first game. Right?

No, wrong. I think the two games are really different, just as the two options related to the gangrene biopsy were different.

It is perfectly reasonable, when imagining the first game, to assume that you are told whether or not you proceed to the second stage, and that only if you proceed are you then told, if you chose the 80% option, whether you were lucky.

In contrast, in the second game it is reasonable to assume that, whatever your choice, you are simply told whether or not you won.

Of course, both games can be generated by the same random process, with the same outcome (choose a random integer between 1 and 100, and observe whether it falls in [1,75], [76,95] or [96,100]), but that doesn’t mean that when you choose the 45$ option and lose you always go home with the same feeling. In game 1, if you chose the risky route, you have a 75% probability of losing while knowing that your loss has nothing to do with your choice, and a 5% chance of kicking yourself for being greedy. In game 2 you have an 80% chance of losing, but you enjoy the benefit of the doubt, knowing that there’s only a 5/80 chance that the loss is your fault.
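To make that shared random process concrete, here is a small simulation sketch in Python (my own illustration, not part of the original argument; the mapping of intervals to outcomes simply follows the description above). It confirms that the two framings give identical win probabilities, about 25% for the 30$ option and about 20% for the 45$ option, and that the “kicking yourself” event of game 1 is exactly the 5% interval [96,100].

import random

def draw():
    """One run of the shared random process: a uniform integer in 1..100."""
    return random.randint(1, 100)

def outcomes(x):
    """Map the integer x to the outcome of each choice, as described above:
    [1,75]   -> eliminated in stage one (game 1) / a plain loss (game 2),
    [76,95]  -> both options win,
    [96,100] -> the sure 30$ option wins, the 80%-of-45$ option loses."""
    sure_wins = x >= 76            # happens 25% of the time
    risky_wins = 76 <= x <= 95     # happens 20% of the time
    return sure_wins, risky_wins

trials = 1_000_000
sure = risky = regret = 0
for _ in range(trials):
    x = draw()
    s, r = outcomes(x)
    sure += s
    risky += r
    regret += (x >= 96)            # passed stage one, chose the risky option, still lost

print(f"30$ option wins: {sure / trials:.3f}  (should be about 0.25)")
print(f"45$ option wins: {risky / trials:.3f}  (should be about 0.20)")
print(f"'kicking yourself' event: {regret / trials:.3f}  (should be about 0.05)")

The probabilities are the same either way; what differs between the two games is only which of these events you get to observe along the way.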

Of course, my imagination regarding the design of the games is my own responsibility; it’s not given explicitly by the original wording, but it is certainly implicit there.

I maintain that there is nothing irrational about trying to avoid feeling regret for your choices, and that I would really stick to the “paradoxical” combination of choices even in real life, after fully analyzing the probability space in question.

For those of you reading this blog who don’t know me: I’m a professor of mathematics, and much of my research has to do with discrete probability. That doesn’t mean that I’m not a fool, but at least it gives me the benefit of the doubt, right?

========================================================

O.k., now, here’s part two of my post, written after finishing the book.

I didn’t encounter the notion of blissful ignorance in the book, but, of course, Kahneman is well aware of the notion of trying to avoid regret. However, he finds it, how shall we say? Regrettable.

When addressing a fictitious archetypal character, “Sam”, who is risk averse and therefore makes choices that are suboptimal from the point of view of expected gain, Kahneman offers Sam the following words of wisdom:

“I sympathize with your aversion to losing any gamble, but it is costing you a lot of money. Consider the following question:

Are you on your deathbed? Is this the last offer of a small favorable gamble that you will ever consider?”

He then goes on to urge Sam to view life as a long series of repeated games (or variants on a single game) rather than a collection of isolated decisions. Clearly, if Sam wholeheartedly adopts this advice he will both have a good chance of becoming wealthier and experience much less regret. (My gangrene example doesn’t fall into this category, because amputating a toe is slightly too dramatic to be classified as “a small gamble”, and also because there’s a clear limit on the number of times you can repeat that experiment.)
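Just to put numbers on Kahneman’s point (a sketch of my own, reusing the 30$/45$ figures from the game above rather than any example from the book): over many independent repetitions of the second-stage choice, always taking the 80% chance of 45$, worth 0.8 × 45 = 36$ per round in expectation, beats always taking the sure 30$ in essentially every simulated lifetime.

import random

def total(rounds, risky, seed=0):
    """Total winnings over `rounds` independent plays of the second-stage choice:
    risky=True  -> an 80% chance of 45$ each round,
    risky=False -> a sure 30$ each round."""
    rng = random.Random(seed)
    if not risky:
        return 30 * rounds
    return sum(45 for _ in range(rounds) if rng.random() < 0.8)

rounds = 1000
sure_total = total(rounds, risky=False)
risky_totals = [total(rounds, risky=True, seed=s) for s in range(200)]

ahead = sum(t > sure_total for t in risky_totals) / len(risky_totals)
print(f"sure total:          {sure_total}$")                              # 30000$
print(f"average risky total: {sum(risky_totals) // len(risky_totals)}$")  # about 36000$
print(f"risky beats sure in  {ahead:.0%} of the simulated lifetimes")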

Can I adopt this attitude when faced with a gamble similar to the original KT test described above?

I’m afraid I might have a hard time doing so. It seems my ego might get in the way: you see, I pride myself so much on being rational and good at analyzing risks that every such test is a memorable event to me, and therefore I find it hard to view it as just one more choice among the thousands that I will have to make. Do you see the paradox? The “skilled decision maker” makes a bad decision because he prides himself on his skill.

Perhaps this is why Amos Oz once expressed the sentiment that he would like to see a prime minister who is like an old and experienced grocer (as opposed, I add in my mind, to a brilliant analytical genius with a degree in operations research from Stanford). I think old, experienced grocers might be better at keeping their egos from getting in the way of decision making.
