The Privacy Paradox of Rann Smorodinsky


The following paradox was raised by Rann Smorodinsky:

Rann Smorodinsky’s Privacy Paradox

Suppose that you have the following one-time scenario. You want to buy a sandwich, and the options are a roast beef sandwich or an avocado sandwich. Choosing the sandwich of your preference (say, the avocado sandwich) adds one to your utility, but having your private preference known to the seller reduces your utility by one. The seller's prior on your preference is fifty-fifty.

If you choose the avocado sandwich, your utility is zero: you gain one from having your preferred sandwich but lose one because your preference becomes known. Hence you can improve on this by picking each type of sandwich at random with probability 1/2. In this case your private preference remains unknown, and you gain 1/2 in expectation for having the sandwich you prefer with probability 1/2.

But given this strategy, you can still improve on it by choosing the avocado sandwich: the seller, believing you randomize, learns nothing from your order, so you get your preferred sandwich for sure and keep your privacy, for a utility of one.
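
These three payoffs can be checked mechanically. Here is a minimal sketch; the binary penalty rule (you lose one exactly when the strategy the seller attributes to you pins down your type) is one way, of several, to formalize "having your private preference known":

```python
# Buyer's expected utility: +1 for the preferred sandwich, -1 if the
# seller can deduce the preference. The penalty is driven by the
# strategy the seller *attributes* to the buyer, which is what creates
# the paradox when the two come apart.

def expected_utility(p, attributed_p):
    """p: actual prob. of ordering the preferred sandwich.
    attributed_p: the mixing prob. the seller believes is being used."""
    gain = p                                   # preferred sandwich w.p. p
    penalty = 1.0 if attributed_p in (0.0, 1.0) else 0.0
    return gain - penalty

print(expected_utility(1.0, 1.0))  # truthful and known to be truthful: 0.0
print(expected_utility(0.5, 0.5))  # fair coin, known to be a coin:     0.5
print(expected_utility(1.0, 0.5))  # order avocado while the seller
                                   # still believes the coin:           1.0
```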


17 Responses to The Privacy Paradox of Rann Smorodinsky

  1. Paco Santos says:

    My two cents: if you really do not want people to know that you are vegetarian, choose the roast beef sandwich.

    • Gil Kalai says:

      I wonder if the following strategy works: you come to the seller and tell him, "I was brought up to believe that a beef sandwich goes with red wine and an avocado sandwich goes with beer. And I much prefer beer!"

  2. Thorsten Domnez says:

    Isn’t it usual to assume that, in equilibrium, your opponent knows your strategy? In that case, randomizing always seems better than deterministically selecting the sandwich corresponding to your type.

    • Gil Kalai says:

      Dear Thorsten, in equilibrium you do not gain from deviating and, in particular, you do not gain from making by yourself the choices that are supposed to be random.

      • Moran Koren says:

        Hi Gil,
        But equilibrium is a concept over strategies, so if S1 is the random strategy and S2 is "choose avocado", your choice is between S1 and S2. "S2 given S1" is not an option, because your gain depends on your willingness to get the short end of the stick half the time.

        When comparing strategies in an extensive-form game (a decision tree, like here), your strategy relates to all the terminal nodes at once, like a game plan you have to comply with in equilibrium.
        As you can see, ex ante U(S1) > U(S2), so indeed, no paradox here.

  3. Andy D says:

    The answer, of course, is programmable matter + FHE

    http://en.wikipedia.org/wiki/Programmable_matter
    http://en.wikipedia.org/wiki/Homomorphic_encryption

    You send the seller a fully homomorphic encryption of your sandwich request (avocado). The seller uses this to produce a carbon-based encrypted sandwich. At this point it resembles Soylent Green covered in pseudorandom splotches. Then, using your private key, it rearranges itself at the molecular level into delicious, edible form.

  4. Andy D says:

    But maybe you insist on information-theoretic security; I’ll reluctantly consider this case as well.

    It seems like there is a modeling gap: what does it mean for the seller to ‘know your preference’?
    At least two possibilities:

    Model 1: The seller is trying to guess your preference, and if his guess is correct you lose 1 utility. More generally, each player i in a game has type t_i, and player 1 is trying to guess an x such that some predicate R(x, t_1, …, t_k) is satisfied; if he is correct he receives some payoff and the others are penalized, say. This is a game where the payoff to P1 depends not only on P1’s own type (as usual) but on the others’ types as well.

    Model 2: For any mixed strategy by you as a function of your type, the seller receives a payoff from you determined by the amount of (Shannon) information conveyed about your preference (under a 50-50 prior).
    This is not a game in the ordinary sense, since the buyer’s utility under a “mixed strategy” is *not the same* as the expected utility under the associated distribution over pure strategies. Properly regarded, there’s an infinite set of pure strategies, and it’s not immediately clear whether equilibria exist.
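
    To make Model 2 concrete, here is a sketch with the information payoff measured in bits (the mutual-information form is my reading of "amount of Shannon information conveyed"):

    ```python
    # Buyer gains 1 for the preferred sandwich and pays the seller
    # I(type; order), the mutual information in bits, under a 50-50 prior.

    from math import log2

    def h(p):
        """Binary entropy in bits."""
        return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

    def buyer_utility(x, y):
        """Type A orders avocado with prob. x, type B with prob. y
        (A prefers avocado, B prefers beef)."""
        preferred = 0.5 * x + 0.5 * (1 - y)               # P(preferred sandwich)
        leak = h((x + y) / 2) - 0.5 * h(x) - 0.5 * h(y)   # I(type; order)
        return preferred - leak

    print(buyer_utility(0.5, 0.5))  # 0.5: the coin leaks nothing
    print(buyer_utility(1.0, 0.0))  # 0.0: truth-telling leaks one full bit
    print(buyer_utility(0.9, 0.1))  # about 0.37: not a linear blend of
                                    # the pure strategies it mixes
    ```

    The last line illustrates the point above: the utility of a mixed strategy is not the expectation of the utilities of the pure strategies it mixes.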

    • Andy D says:

      But I guess the Seller doesn’t really take an action in model 2, so it really reduces to a single-player optimization problem and we can find a best or nearly-best action.

      Rann’s puzzle raises a good question I have also wondered about (and would be grateful for references on), namely:
      - In most cases it’s clear that privacy has a nonzero but finite value, and can be traded off against other desired goods. Yet a lot of contemporary research takes as given that we *need* to achieve something like epsilon-differential privacy for some fixed epsilon, and then sees what other things can be achieved subject to that.
      On the one hand, I am sympathetic to this approach because we as modelers may not have a clear idea of the relative value of privacy vs. other goods, so we’d hope to give a whole continuum of mechanisms/solutions for different settings of this relative value.
      On the other hand, differential privacy is a *global* condition on a protocol, whereas players care about their *individual* costs and benefits. We might look for solutions where some players have their privacy compromised, but these players are compensated.

      This is slightly abstract, but the principle is simple and can be appreciated without thinking about the tricky notion of privacy. Suppose we are considering mechanisms to divide a cake with strawberries on top among K people. Say that a mechanism is “delta-fair for volume” if each person is guaranteed to receive a delta/K fraction of the cake’s volume. Say a mechanism is “gamma-fair for strawberries” if each person is guaranteed to get a gamma/K fraction of the cake’s strawberries. We as modelers don’t know how good strawberries are vs. volume, so we might try to design (delta, gamma)-fair mechanisms that maximize gamma as a function of delta. On the other hand, we might get better social welfare from a mechanism that can be very unfair to a player with respect to either volume or strawberries, yet always compensates any such player with the other resource.
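
      A toy check of these two definitions, with hypothetical helper names of my own:

      ```python
      # Given an allocation, compute the best (delta, gamma) for which it is
      # delta-fair for volume and gamma-fair for strawberries, i.e. every
      # player gets at least delta/K of the volume and gamma/K of the
      # strawberries. Purely a restatement of the definitions above.

      def fairness_levels(allocation):
          """allocation: one (volume_share, strawberry_share) pair per player;
          each coordinate sums to 1 over the players."""
          K = len(allocation)
          delta = K * min(v for v, s in allocation)
          gamma = K * min(s for v, s in allocation)
          return delta, gamma

      print(fairness_levels([(0.5, 0.5), (0.5, 0.5)]))  # (1.0, 1.0): equal split
      print(fairness_levels([(0.7, 0.1), (0.3, 0.9)]))  # (0.6, 0.2): player 1 is
                                                        # short on strawberries but
                                                        # compensated with volume
      ```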

      I’m no expert on any of {privacy, game theory, mechanism design}, so all this should be taken as a query rather than a critique.

  5. Daniel Lehmann says:

    Choosing the avocado sandwich has utility one, not zero, as claimed by Rann, since my choice does not reveal anything about my private preference: the fact that I choose avocado does not mean that I prefer avocado.

    • Rann S says:

      Choosing avocado was Gil’s shorthand for being truthful and asking for your favorite alternative (as opposed to a type-independent strategy that always chooses avocado).

  6. BB says:

    The problem here is that the disutility from the seller knowing my type is discontinuous. If his posterior belief that I am type A (for avocado) is arbitrarily close to 1, I get no disutility, but if it is exactly 1, I get disutility 1. Therefore, you are maximising over an open set, and hence no equilibrium exists.

    The question you are asking can be thought of as a Bayesian version of the “Psychological Games” of Geanakoplos, Pearce, and Stacchetti, where they allow other agents’ beliefs to enter my utility function. The agent is either of type A (avocado) or type B (beef). Each type chooses a strategy; thus, A chooses an avocado sandwich with probability x and B with probability y. This gives the seller some posterior belief for each observed choice of sandwich. In the version of the problem posed here, the disutility of the posterior is discontinuous (0 everywhere except at the endpoints, roughly speaking), which renders the utility functions discontinuous.

    Here is another version of the problem: the disutility of each type of the buyer is precisely the posterior belief that the seller has about his type. In this version of the game, utilities are continuous, and it is easy to see that it is an equilibrium for each type to buy his most preferred sandwich. No more paradox.
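
    A quick numeric check of this version (my formalization of "disutility equals the posterior", with beliefs computed by Bayes from the strategy pair):

    ```python
    # BB's continuous variant: each type's disutility is the seller's
    # posterior on that type given the observed order; the gain is 1 for
    # the preferred sandwich. Prior on types is 50-50.

    def type_utilities(x, y):
        """x, y: prob. that type A / type B orders avocado.
        Returns (u_avocado, u_beef) for each type."""
        p_av = 0.5 * x + 0.5 * y   # total prob. of an avocado order
        # (0.5 below is an arbitrary off-path belief, unused in the examples)
        post_A_av = 0.5 * x / p_av if p_av > 0 else 0.5                # P(A | avocado)
        post_A_beef = 0.5 * (1 - x) / (1 - p_av) if p_av < 1 else 0.5  # P(A | beef)
        u_A = (1 - post_A_av, 0 - post_A_beef)              # A prefers avocado
        u_B = (0 - (1 - post_A_av), 1 - (1 - post_A_beef))  # B prefers beef
        return u_A, u_B

    print(type_utilities(0.5, 0.5))  # ((0.5, -0.5), (-0.5, 0.5)):
                                     # each type strictly prefers the truth
    print(type_utilities(1.0, 0.0))  # ((0.0, 0.0), (0.0, 0.0)):
                                     # at truthful play no deviation gains
    ```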

  7. Bo Waggoner says:

    This paradox is also noted in game theory and is closely connected to commitment in games (e.g., [1]).

    We can get the same paradox by removing privacy but adding a malicious seller who gets to choose which sandwich to give you based on your request. You gain 1 if you get the sandwich matching your preference and 0 otherwise. If we restrict the seller to a pure strategy, no Nash equilibrium exists: you can make the seller indifferent by always randomizing what you report, and then the seller may as well give you what you ask for; but then you would prefer to switch to reporting truthfully all the time, then the seller would switch to giving you the opposite of your report, then you would switch to lying, and so on.

    It’s interesting that the paradox disappears if you can *commit* to your randomness. If you toss a coin so that it lands on the counter, then the seller can actually believe your claim that you’re picking randomly, and gives you the sandwich corresponding to the flip. (Works in the privacy setting too.)

    Without commitment, the only equilibrium is for you to report randomly and the seller to hand you a sandwich randomly. In this case, your expected value is the same with or without commitment, but I think there are cases in which being able to commit to a (randomized) strategy confers a big advantage.

    [1] Bagwell. Commitment and Observability in Games. Games and Economic Behavior, 1995.
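
    The cycling described above is exactly matching pennies; here is a minimal best-response trace (the payoff table is my encoding of the description):

    ```python
    # Buyer reports truthfully (T) or lies (L); the malicious seller
    # grants the request (G) or hands over the opposite (O). The buyer
    # gets 1 when the sandwich matches the preference; the malicious
    # seller's payoff is the negative of the buyer's.

    BUYER_PAYOFF = {("T", "G"): 1, ("T", "O"): 0,
                    ("L", "G"): 0, ("L", "O"): 1}

    def best_response_cycle(buyer, seller, steps=8):
        """Alternate pure best responses and watch the profile cycle."""
        for i in range(steps):
            print(buyer, seller, "-> buyer gets", BUYER_PAYOFF[(buyer, seller)])
            if i % 2 == 0:   # seller deviates to hurt the buyer the most
                seller = min("GO", key=lambda s: BUYER_PAYOFF[(buyer, s)])
            else:            # buyer best-responds to the seller's new action
                buyer = max("TL", key=lambda b: BUYER_PAYOFF[(b, seller)])

    best_response_cycle("T", "G")   # T,G -> T,O -> L,O -> L,G -> T,G -> ...
    ```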

  8. Gil Kalai says:

    Comments from Decision-Theory Forum:

    **************************

    I don’t see any paradox, since the mixed strategy solution depends on the seller knowing that you are using a mixed strategy.

    Ken Binmore

    **************************

    This situation can (should?) be modeled as a psychological game, after Geanakoplos, Pearce and Stacchetti 1989. In such a game, players’ payoffs directly depend on their (and their opponents’) hierarchy of beliefs. In Rann’s example, the buyer’s payoff can be defined as the value of the product he ends up buying minus 2*|p-0.5|, where p is the seller’s posterior on avocado. Equilibrium in such a game is defined more or less as usual (in general, since payoffs can depend on higher-order beliefs, the equilibrium concept has to deal with those, too). When the buyer orders avocado for sure, this is an equilibrium. It does not maximize the buyer’s ex ante expected payoffs, but that is not unusual in this class of games, because payoffs need not be linear in beliefs.
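
    A numeric check of the equilibrium claim, reading "orders avocado for sure" as the buyer being truthful (per Rann's clarification in an earlier comment), with the seller's posterior computed from the truthful strategy:

    ```python
    # Payoff = value of the sandwich received minus 2*|p - 0.5|, where p
    # is the seller's posterior on the avocado type. Under truthful play,
    # an avocado order gives p = 1 and a beef order gives p = 0.

    def payoff(own_type, order):
        p = 1.0 if order == "avocado" else 0.0
        preferred = {"A": "avocado", "B": "beef"}[own_type]
        value = 1.0 if order == preferred else 0.0
        return value - 2 * abs(p - 0.5)

    for t in "AB":
        print(t, {o: payoff(t, o) for o in ("avocado", "beef")})
    # Type A: avocado gives 0, deviating to beef gives -1.
    # Type B: beef gives 0, deviating to avocado gives -1.
    # So truthful play is an equilibrium, yet its ex ante payoff 0 is
    # below the 1/2 that the (non-equilibrium) coin flip would give.
    ```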

    Here is another example, which Kfir Eliaz and I used to call “the dress game”. Bob’s girlfriend bought a dress. He observes whether it is nice. His strategy is to tell her whether it is nice, given his observation. His objective is to maximize a convex increasing function of his girlfriend’s posterior that the dress is nice. The only equilibria are babbling equilibria, in which Bob’s message is totally uninformative, because otherwise he always has an incentive to deviate from his equilibrium (potentially mixed) strategy and say the dress is nice. By the convexity assumption, the babbling equilibria do not maximize Bob’s ex ante expected payoff. If he could commit, he would want to be fully informative.

    Ran Spiegler
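
    The role of convexity can be checked in a couple of lines, with f(p) = p^2 standing in for the convex increasing function (my choice):

    ```python
    # Dress game, ex ante: with a 50-50 prior and convex f, full
    # revelation beats babbling, by Jensen's inequality.

    f = lambda p: p ** 2
    prior = 0.5
    babbling = f(prior)                                # posterior = prior: 0.25
    full_info = prior * f(1.0) + (1 - prior) * f(0.0)  # E[f(posterior)]:  0.50
    print(babbling, full_info)                         # 0.25 < 0.5
    ```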

    ********************

    Just saw a new paper by Ely, Frankel and Kamenica at a conference that is related to this kind of game. They do assume ex ante commitment (the paper is suspense.pdf).

    For a “paradox” without commitment, there’s also the Hangman’s paradox by John Geanakoplos.
    Best, Attila Ambrus

    *******************

    Indeed, psychological games come to mind. But one could still argue that there is room for a paradox: back in ’88 David (Schmeidler) and I had a paper titled “Information-Dependent Games”, in which one’s payoff depended on beliefs, and where we used the surprise-test paradox to generate a trivial example where no equilibrium existed.

    The main difference between GPS and our paper was that GPS required linearity in probabilities. In some sense, they allowed the players to use more strategies than the others’ beliefs could depend on. I think that if the payoff could freely (not necessarily linearly) depend on mixed strategies, the non-existence could be established in their model as well.

    Best, Tzachi Gilboa

    *******************

    A similar argument is used to rule out undesired equilibria in (informed principal) games in which a privately informed player proposes a bargaining contract.

    A principal and an agent each own half of the good. They have independent private values. For simplicity, assume that the principal is equally likely to value the good at either 0 or 1 and that the agent’s valuation is uniform on [0,1]. The principal chooses a bargaining contract. If the agent agrees, they play according to the contract. Otherwise, they keep the shares.

    To rule out undesirable equilibria, one considers the following deviation. The principal asks the agent to name a price of either 0 or 1 and then decides whether to sell her share or buy the agent’s share at this price. (He might need to pay epsilon to the agent to induce him to accept this mechanism.)

    If the agent chooses price 0, type 1 of the principal will find it profitable to deviate to this mechanism and the agent will sell at a loss. If the agent chooses price 1, type 0 will deviate to this mechanism and the agent will buy at a loss. The agent breaks even if and only if he mixes 50/50 between both prices. All other mixed strategies will generate a loss.

    However, in any incentive compatible and individually rational mechanism, the agent’s expected payoff is strictly positive.

    Tymofiy Mylovanov

    **********************

    My comments on the game focus on the hypothesized probability distribution from a prospect theory perspective. One of the empirical regularities of prospect theory is that people hate symmetric 50-50 bets. Based on the information given, the introduction of a gain utility (+1) for avocado (“A”) and a loss utility (-1) for roast beef (“RB”, phonetically “Arby”) implies a reference point of 0 against which gain and loss are measured.

    If so, then let p be the probability associated with selecting A, and q that associated with selecting RB. The “mixed prospect” is now (A, p; 0, 1-p-q; RB, q). One of the empirical regularities of probability weighting functions is that they have a fixed point probability of approximately 1/3 that separates loss and gain domains based on ranking. In that case an admissible criterion is 1-p-q = 1/3, so that p+q = 2/3. Since the “rank dependent utility” implied by the problem is A > 0 > RB, we have the weighting scheme w(p), w(1/3), w(q). The introduction of a prior probability p = 1/2 then implies q = 1/6. In that case w(1/6) > 1/6, due to overweighting in the loss-probability domain; w(1/2) < 1/2, due to underweighting in the gain-probability domain; and w(1/3) = 1/3, due to the fixed point.

    Roughly, the decision weights for gain (g) and loss (l), respectively, are pi_g = w^+(1/3 + 1/2) - w^+(1/3) and pi_l = w^-(1/6 + 1/3) - w^-(1/6). For a value function V, the nonexpected value of the choices available to our subject is V(.) = V^+(+1)*pi_g + V^-(-1)*pi_l. Evidently, this nonexpected utility is greater than zero even if the seller knows the subject will select avocado. For admissible “strategies” that “dominate” 1/2 we can impose the constraint V(.) > 1/2, so that any combination of value function and decision weights supporting that inequality would be “strategy proof” regardless of whether the seller knows the subject selects avocado.

    g charles-cadogan
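
    Plugging numbers into the decision weights above (the Tversky-Kahneman 1992 weighting function, its parameters, and the unit value function V(+1) = 1, V(-1) = -1 are all my illustrative choices; the comment does not pin them down):

    ```python
    # Decision weights from the comment, evaluated with an assumed
    # weighting function. Checks the over/underweighting claims and
    # computes V = (+1)*pi_g + (-1)*pi_l.

    def w(p, gamma):
        """Tversky-Kahneman (1992) probability weighting function."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    pi_g = w(1/3 + 1/2, 0.61) - w(1/3, 0.61)   # gain weight, about 0.30
    pi_l = w(1/6 + 1/3, 0.69) - w(1/6, 0.69)   # loss weight, about 0.22

    print(w(1/6, 0.69) > 1/6)   # True: small loss probability is overweighted
    print(w(1/2, 0.61) < 1/2)   # True: gain probability is underweighted
    print(pi_g - pi_l)          # about 0.08 > 0 with this value function
    ```

    With a loss-aversion coefficient multiplying the loss term (e.g. the conventional 2.25), the sign of V can flip, so the positivity conclusion is parameter-dependent.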

  9. Gil Kalai says:

    Another thing that looks relevant is the randomized response method in surveys. From the Wikipedia article: “Randomized response is a research method used in structured survey interview. It was first proposed by S. L. Warner in 1965 and later modified by B. G. Greenberg in 1969. It allows respondents to respond to sensitive issues (such as criminal behavior or sexuality) while maintaining confidentiality. Chance decides, unknown to the interviewer, whether the question is to be answered truthfully, or “yes”, regardless of the truth.”
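
    A quick simulation of the coin-flip version described in the quote (the 1/2 design probability and the sample size below are arbitrary choices):

    ```python
    # Randomized response: each respondent privately flips a fair coin;
    # heads means answer truthfully, tails means answer "yes" regardless.
    # The interviewer sees only the answers but can invert the mixture:
    # P(yes) = 0.5 * true_rate + 0.5.

    import random

    def estimate_rate(true_rate, n=100_000):
        yes = 0
        for _ in range(n):
            has_trait = random.random() < true_rate
            if random.random() < 0.5:    # coin, unseen by the interviewer
                yes += has_trait         # truthful answer
            else:
                yes += 1                 # forced "yes"
        return 2 * (yes / n - 0.5)       # invert P(yes) = 0.5*rate + 0.5

    print(estimate_rate(0.2))   # close to 0.2, yet any single "yes" is deniable
    ```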
