Rationality, Economics and Games

1. The “Center for Rationality”

“Founded in 1991, the Hebrew University’s Center for the Study of Rationality [at first it was simply called the “Center for Rationality”] is a unique venture in which faculty, students, and guests join forces to explore the rational basis of decision-making. Coming from a broad sweep of departments — mathematics, economics, psychology, biology, education, computer science, philosophy, business, statistics, and law — its members apply game-theoretic tools to examine the processes by which individuals seeking the path of maximum benefit respond to real-world situations where individuals with different goals interact.”

Game theory has always been strong at the Hebrew University of Jerusalem, and a nice aspect of it is the combination of mathematics and debating. As an undergraduate I was quite interested in game theory, along with combinatorics and convexity, and my first published paper was on game theory, written with Michael Maschler and Guillermo Owen. Later I moved in other directions, but more recently, in part because of my membership in the Center over the last ten years and in part because of my collaboration with the economists Ariel Rubinstein (who was my classmate in my undergraduate years) and Rani Spiegler, I have been trying to do research and write papers in theoretical economics. Not having the basic instincts of an economist, and lacking some of the basic background, makes this especially difficult.

Let me also mention that there are very interesting connections between computer science and economics, and a very large research community is emerging around them.

2. Many many controversies

Among the many issues discussed and debated at the Center, in its seminars (the regular ones are the “Game Theory Seminar” on Sundays and the “Rationality on Friday” seminars on… Fridays), roundtables, the annual retreat, Sunday sandwich gatherings, and ample debates over e-mail, were:

The controversy over expected utility theory (we will come back to it below; small update: May 21);

The role of psychology in economics;

The relevance of “neuroeconomics”;

Economics and the law and, in particular, judicial activism;

Privacy and surveillance;

Labor unions in general and the university professors’ labor union in particular;

The controversies over governance of Israeli universities, differential salaries for professors, and higher tuition for students;

Various issues related to the Center itself, like the fierce struggle with the university administration to get more offices in the late 90s, and the role, advantages, and disadvantages of this and similar research centers in university life;

Issues regarding the Israeli-Arab conflict, and war and peace in general.

 

3. Expected utility and rationality 

Let’s first briefly talk about expected utility theory and one controversy arising from it, to earn the right to move later to a few anecdotes.

Expected utility is a beautiful mathematical theory dealing with choices under uncertainty. Suppose there are n alternatives and we want to understand the preferences of a rational agent between lotteries involving these alternatives. A lottery is a scenario of the following form: you get alternative A with probability 1/3 and alternative B with probability 2/3. The preference relation is required to be “rational,” or in other words, to form an order relation. Lotteries of lotteries are also considered. Under a few natural axioms regarding the decision-maker’s behavior, you reach the conclusion that you can associate to every alternative X a utility r(X), a real number, such that the preference relation between lotteries is derived from the order relation between their expected utilities.
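In symbols (this is the standard von Neumann-Morgenstern representation, written with the notation r of the paragraph above; the probabilities $q_i$ are just my notation for the second lottery): if a lottery $L$ gives alternative $X_i$ with probability $p_i$ and a lottery $L'$ gives $X_i$ with probability $q_i$, then

$$L \succeq L' \quad\Longleftrightarrow\quad \sum_i p_i\, r(X_i) \;\ge\; \sum_i q_i\, r(X_i).$$

For instance, the lottery giving A with probability 1/3 and B with probability 2/3 is weakly preferred to getting C for sure exactly when $\frac{1}{3}r(A)+\frac{2}{3}r(B) \ge r(C)$.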

The debate on expected utility theory is old, and there are well-known experiments showing that individual choices deviate systematically from the predictions based on the expected utility model. (There is also evidence that individual behavior deviates from “rationality,” which asserts that the preferences are transitive, and that preferences may depend on factors that are not represented at all in this model.) Even if you use the expected utility model, identifying an individual’s utility function is very difficult. On top of this, identifying the probabilities involved in cases of uncertainty is a major issue in and of itself.

Quite recently, Matthew Rabin pointed out that expected utility theory leads to very counterintuitive conclusions. “Suppose that from any initial wealth level an expected-utility maximizer turns down gambles where she loses $100 or gains $110, each with 50% probability, then she will turn down 50-50 bets of losing $1000 or gaining any sum.” Similarly, suppose that from any lifetime wealth level of less than $350,000 an expected-utility maximizer turns down gambles where she loses $100 or gains $105, each with 50% probability; then from an initial wealth level of $340,000 she will turn down 50-50 bets of losing $4000 or gaining $635,670. Rabin’s surprising discovery (related also to a 1963 paper by Samuelson) drew a lot of attention, and Rabin’s own interpretation was that this is strong evidence against expected utility theory.
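To see the mechanism at work, here is a minimal numerical sketch for the special case of CARA (exponential) utility, where attitudes toward a given bet do not depend on wealth. This is only an illustration of the calibration logic under that extra assumption, not Rabin’s general argument, and the code and function names are mine.

import math

# CARA utility u(x) = -exp(-a*x): risk attitudes are wealth-independent, so
# "turns down the bet at every wealth level" reduces to a single inequality.

def rejects_bet(a, loss, gain):
    """True if a CARA agent with coefficient a rejects a 50-50 bet of
    losing `loss` or gaining `gain` (the wealth level cancels out)."""
    return 0.5 * math.exp(a * loss) + 0.5 * math.exp(-a * gain) > 1.0

# Smallest risk-aversion coefficient that still rejects the lose-$100 /
# gain-$110 bet, found by bisection.
lo, hi = 1e-9, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if rejects_bet(mid, 100, 110):
        hi = mid
    else:
        lo = mid

print(f"minimal CARA coefficient: {hi:.6f}")   # roughly 0.0009
# For that coefficient the loss side alone decides the matter:
# 0.5*exp(1000*a) is already bigger than 1, so a 50-50 bet of losing $1000
# is rejected no matter how large the gain is.
print(0.5 * math.exp(1000 * hi))               # roughly 1.24 > 1
print(rejects_bet(hi, 1000, 10**9))            # True

Any exponential utility cautious enough to refuse the $100/$110 bet at every wealth level is so curved that the possible $1000 loss outweighs any gain whatsoever, which is the flavor of Rabin’s calibration.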

Ignacio Palacios-Huerta and Roberto Serrano’s response is that people with large initial wealth levels will accept the small gambles in Rabin’s example. Their response can be regarded as claiming that the correct interpretation of Rabin’s observation is that expected utility theory is overly inclusive rather than incorrect. The absurd conclusion refers to irrelevant regions of the theory, they claim, and they support their claim with empirical data. The tension between a theory or a model being incorrect or too narrow and it being too wide occurs in many scientific controversies, and these two possibilities are often alternative interpretations of the same piece of evidence.

Zvi Safra and Uzi Segal pointed out that Rabin’s difficulty arises also for more general notions of utility. (I regard Safra and Segal’s results as weakening the interpretation of Rabin’s result as an argument for rejecting expected utility theory.) Ariel Rubinstein also refers to Rabin’s findings in the section “The Dilemma of Absurd Conclusions” in a (rather provocative) paper where he describes various dilemmas facing the economic theorist. This controversy had a role in triggering the recent papers of Aumann and Serrano and of Foster and Hart on riskiness.

This is a very nice controversy to watch. I am weakly leaning toward Serrano’s side of the debate (and in general toward the more “classical” economic theory), but I have to mention that I have been exposed more to this side. More precisely, I view Rabin’s observation as describing an important small effect: a deviation from expected utility theory on (one or a few) small bets. This small effect needs to be corrected (or avoided) when trying to apply expected utility theory in practice (which, as I said, is extremely difficult anyway), but I see no reason to believe that it nullifies expected utility theory and its prominent role in economics.

“Small???” you may ask. Turning down a 50-50 bet of losing $4000 or gaining $635,670 is a small matter? No; but note that this is a mathematical conclusion drawn from behavior toward a few small bets that deviates from expected utility theory. Once you see the behavior of the decision-maker as expressed by a (reasonable) utility function plus a rather small error term covering her behavior on one or a few small bets, the absurd conclusion no longer applies.

Here is an example of what I mean: suppose you claim that a function f of your dollar income is linear, while in reality it is only linear in the income rounded up to the next dollar. If you try to compute f(2000) from the linear function through f(23.75) and f(24.12) you will make a huge mistake, and yet the linear approximation is good.
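To make the analogy concrete, here is a small sketch; the specific choice f(x) = 2*ceil(x) and the sample points are just my illustration.

import math

# f is linear in the income rounded up to the next dollar, so it stays within
# $2 of the truly linear function 2*x everywhere; yet extrapolating the line
# through two nearby sample points gives a wildly wrong value far away.

def f(x):
    return 2 * math.ceil(x)

x1, x2 = 23.75, 24.12
slope = (f(x2) - f(x1)) / (x2 - x1)         # about 5.4 instead of the "true" 2
extrapolated = f(x1) + slope * (2000 - x1)  # about 10730
print(extrapolated, f(2000))                # 10730.4... versus 4000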

Update (May 21): Uzi Segal wrote me:  “I’m not too sure about the ‘correct’ interpretation of our results. certainly it is true that whatever the problem is, it is not EU. but whether the conclusion is ‘there is no real problem’ or ‘the problem is with having a global preference relation’ — I don’t know. I’m afraid it is more likely the latter than the former.” Safra and Segal’s paper will be published in “Econometrica” and the final version can be found on the journal’s site. Piero La Mura wrote me that he “recently proposed a tractable generalization of expected utility (inspired by quantum information theory) which seems to avoid all the main anomalies, including Rabin’s.”

 

4. A few anecdotes

Game theory B.C. In the little workshop celebrating the Center’s inauguration, Aumann gave a talk entitled “Game Theory in Jerusalem B.C.” The title of the lecture was quite puzzling: “rationality in Jerusalem” already seemed a little contradictory, but “rationality in Jerusalem B.C.”? What could it mean? Aumann was interested in the early appearance of game theory in the Talmud, but we could not recall any examples from life in ancient Jerusalem, and Aumann’s only response to our questions about it was: “Come to the lecture.” Well, as it turned out in the lecture, by B.C. he meant “Before Center,” so he actually talked about game theory in Jerusalem up to that time.

Maschler and the game theory exam. Game theory was the first course I took, as a high-school-on-strike student, in the 1970-1971 academic year. (School teachers were on strike that year for several months.) Michael Maschler was a very impressive teacher and the lecture hall was filled with students. He also always came to class in a jacket and tie, which in Israel is quite unusual. The following year I was already a university student and I took “Advanced Game Theory,” where Maschler suggested an open problem about certain dynamical processes leading to the bargaining set solution in cooperative games. (These dynamical processes were introduced by Stearns and Billera.) I thought I could solve one direction of one problem and wrote Maschler about it, and indeed my solution worked (but in the other direction, not the one I thought); Maschler and Owen proved the other direction and we wrote a paper together. In 1974 I had to finish my degree and I had no grade for “Game Theory,” so I took the exam. To my surprise, that year our theorem was part of the material and there was a question on it. Rather than answering the question I referred to my paper and, while in this mood, I continued to give sloppy answers to the other questions just to indicate that I knew the answers. The TA and Maschler were not very impressed by my exam and the grade I got was 19/100. (Later, Maschler and Bezalel Peleg gave me a make-up exam, and this time I answered the questions appropriately and got a good grade.) This was a good lesson.

A ride from Tel Aviv with Aumann. After getting my undergraduate degree I went to the army, and part of my service was in the Tel Aviv area. From time to time I went to the Tel Aviv University game theory seminar that Aumann (who was an HU professor but also taught at TAU) ran. Among the regular participants were Abraham Neyman (Merale), who was known as a legendary mathematics problem-solver in my undergraduate years, Yair Tauman, Sergiu Hart, and Dubi Samet. One day Aumann gave me a lift from Tel Aviv to Jerusalem. He asked me to take out a piece of paper and to write down a matrix. It was a 2-by-2 payoff matrix of a non-zero-sum game, so each entry was a vector of payoffs for the two players. Then he asked me to do some calculations, made his point, suggested another matrix, asked for more calculations, made another point, and so on. Quite often he took his eyes off the road, pointed to my piece of paper, and said something like: here you wrote (2,3), but it should be (4,3); or here you made the wrong calculation, the plus sign should be a minus. This divided attention between the road and my piece of paper was scary enough, but on top of this, Aumann used the following strategy for driving: any time he saw an opportunity to gain an advantage by changing lanes he did so, and these changes were quite frequent and often performed while he was looking at my piece of paper and pointing out some required modification of what was written there. While Aumann was in control of both the driving and the mathematics, this was certainly one of the scariest car rides I ever took.

  

 

Maschler (1978), Peleg (1980), and Aumann (1977) (Oberwolfach pictures collection)

(May, 15: By mistake an unedited draft version was uploaded first. Fixed.) 


7 Responses to Rationality, Economics and Games

  1. Pingback: Rationality, economics and games

  2. I do not see that Rabin is against expected utility as a normative model.
    Rabin merely points out that humans make inconsistent decisions.
    This is consistent with a variety of behavioral economics findings, and it does not say that it is advisable to make inconsistent choices.

    In the end, the controversy is about the amount, namely, to what extent humans are consistent.

    I believe that economic decisions can be roughly split into three categories: outside arbitrage, big consequences, and small consequences.

    1) Outside arbitrage, where a few rational individuals can make money out of others’ inconsistencies. Example: put/call parity in options.
    Here the market is almost always rational, because it is enough to have very few rational guys to rationalize it.

    2) Big consequences refers to cases where irrational decisions are going to be exposed no matter how careless people are. Example: the very idea of saving for a pension.
    While individuals often neglect saving, there is public awareness of the need, and society makes an effort to rationalize behavior.
    Other examples relate to financial decisions that many people know they have to consult about. The big consequences ultimately insert rationality into the system, via various routes.

    3) Small things. Here I see no doubt that human nature is quite capricious. There are so many examples of this.

    I believe that rejecting Rabin’s 100/200 .5/.5 bet is strictly irrational. But this is a small decision. The direct consequences of this irrationality are not big enough to make society aware of them.

    I never saw this distinction written down, but it looks obvious. Has anyone seen these definitions stated somewhere?

  3. Definition of small/big

    When I talk about big decisions, I am not talking about the effect of the decision itself. I am sure people make irrational choices about huge decisions.

    My category of “big” refers to mistakes that are repeated many times and whose effect is disastrous. This may lead to correction from various places. Not taxing car entrance to city centers can be called irrational, and at least one city reacted, while others tried. It can be seen that rationality emerges when the effects are severe enough, though even then rationality does not rule.

  4. Pingback: Lior, Aryeh, and Michael « Combinatorics and more

  5. Pingback: Alarming Developments In Tel Aviv University « Combinatorics and more

  6. Nets Katz says:

    Hi Gil,

    I was browsing around here looking for tidbits of gossip about an open problem I’m
    obsessed with when I came across this page on expected utility theory. I think it
    is a very important issue but feel your treatment leaves out an important aspect
    which is not currently fashionable. Namely, you say little about the axiomatic basis
    for expected utility.

    To some modern economists, this seems nonsensical. Don’t we just assume agents act
    to maximize their expected utility? This gives us a model for how agents behave
    and then we check experimentally whether it fits the data. Formally, this is a correct
    description of all uses of mathematics in the sciences. But in the old days, before we
    were so sophisticated and spoke in terms of models, we expected more from our axioms. Kepler may have had a model for planetary motion, but Newton’s work
    was important because it explained the motion from more basic and more intuitively
    obvious axioms.

    A typical economist explaining the subject to his introductory graduate macroeconomics class at Indiana will say, for instance, that bounded concave utility
    functions are important in order to avoid the St. Petersburg paradox – which assigns
    infinite value to lotteries with infinite expectation. When asked about the more
    commonplace fact that people pay for lottery tickets more than their expectation value, the economist replies, “those people are just stupid.”

    Sitting in such a class this fall, I found myself objecting on much more axiomatic grounds. I was not well prepared for the class, having beforehand read nothing more
    modern than Hicks in the economics literature. Hicks really impressed me when he
    explained that nothing mattered about a utility function except for its level sets. The
    “reality” is that agents prefer some combinations of goods over others and the utility
    function is just a wrapper we put over that, ordering the indifference curves. This was
    my internal picture of a utility function. I found myself deeply offended by the notion
    of expected utility. “How can I add values of a utility function?” I asked. “It doesn’t
    transform correctly.”

    The professor was momentarily baffled but replied to me shortly after consulting with
    colleagues. “The answer,” he said, “is in Von Neumann and Morgenstern, in Appendix A.” Something like that is true. [Though one should read Chapter 3, as well.]
    Von Neumann and Morgenstern introduced expected utility theory, knew that it needed an axiomatic basis, and introduced one. They were
    contemporary with Hicks and anticipated the Hicksian objection. Sure we might
    think that the only part of utility functions we can observe is the order of their values
    but we might be mistaken. Savages might think that temperature has no units but
    only measures colder or hotter, but modern man has thermometers and knows
    the units of temperature. We, Von Neumann and Morgenstern, shall play the same
    role as the inventors of the thermometer, thereby discovering the units of utility.

    The key observation of Von Neumann and Morgenstern is that in addition to agents
    having preferences over all collections of goods, they have preferences over all
    probability distributions on collections of goods. (This is an axiom but an appealing
    and natural one.) However, they must add one more axiom in order to obtain
    units for utility and in order to arrive at expected utility theory:

    Von Neumann and Morgenstern axiom: There is no complementarity among goods
    that do not occur simultaneously.

    From this axiom, Von Neumann and Morgenstern conclude that expected utility
    holds and a fortiori that utility functions have units: they are determined up to
    composition with an affine map.

    Example: Steak and potatoes may be complementary goods. You might strictly prefer a meal consisting of 50 cents of steak and 50 cents of potatoes over a dollar of steak
    or over a dollar of potatoes. But you would not strictly prefer a 50% chance of
    $1 of steak and a 50% chance of $1 of potatoes over both $1 of steak and $1 of
    potatoes.

    This explains why when a customer enters a restaurant and the waiter asks the customer what he wants, the customer never replies, “surprise me.” The Von Neumann and Morgenstern axiom rules out the value of surprise.

    I like the Von Neumann-Morgenstern treatment because it appeals to our intuition.
    We wouldn’t dream, they say, of handing you expected utility out of thin air as a model. Rather we base it on an axiom which appeals to intuition. Indeed I don’t dispute that it appeals to intuition. But on careful thought, it should lose the appeal.

    Let’s go back to the stupid people who buy lottery tickets above expectation value.
    We could explain that their stupidity is why they do this. Or we could ask them why
    they do it. They will answer along the following lines. When I buy a lottery ticket,
    there arises a very, very, very small chance that I will become rich. The existence
    of this small chance adds color to my life.

    Translation: There is complementarity between the future event of being rich and of
    staying poor. The stupid person is trying to tell you that his utility function over
    random outcomes is actually concave near zero in the probability of becoming rich.

    Who should I believe? Von Neumann and Morgenstern or my lying stupid person?

    There is a field called Rational Expectations Macroeconomics which relies heavily
    on expected utility. The field is full of models. Of course, none of them really match
    the data. However the real reason the practitioners would be very reluctant to
    abandon expected utility is that without it, their models would be much harder to
    solve recursively.

    Best,

    Nets

    • Gil Kalai says:

      Dear Nets, thanks for your comments. Modeling people’s choices under uncertainty is a fascinating area and, while expected utility theory is at the center of this area, the theory itself is under much criticism. I did not mention the behavioral economics approaches and in particular Kahneman and Tversky’s work. Expected utility theory accommodates risk-loving behavior and gambling. But I agree with you that expected utility theory does not accommodate “surprise-loving behavior”; this is a very nice point, I will ask around about it. Best regards, Gil
