From outcome-based to language-based preferences

Valerio Capraro∗, Joseph Y. Halpern†, Matjaž Perc‡

We review the literature on models that try to explain human behavior in social interactions described by normal-form games with monetary payoffs. We start by covering social and moral preferences. We then focus on the growing body of research showing that people react to the language in which actions are described, especially when it activates moral concerns. We conclude by arguing that behavioral economics is in the midst of a paradigm shift towards language-based preferences, which will require an exploration of new models and experimental setups.
JEL: C70, C91, D01, D03, D63
Keywords: utility maximization, bounded rationality, social preferences, moral preferences, language-based models

1. Introduction

We review the literature on models of human behavior in social interactions that can be described by normal-form games with monetary payoffs. This is certainly a limited set of interactions, since many interactions are neither one-shot nor simultaneous, nor do they have a social element, nor do they involve just monetary payoffs. Although small, this set of interactions is of great interest from both the practical and the theoretical perspective. For example, it includes games such as the prisoner's dilemma and the dictator game, which capture the essence of some of the most fundamental aspects of our social life, such as cooperation and altruism. Economists have long recognized that people do not always behave so as to maximize their monetary payoffs in these games. Finding a good model that explains behavior has driven much of the research agenda over the years.

∗ Capraro: Department of Economics, Middlesex University, The Burroughs, London NW4 4BT, U.K., [email protected].
† Halpern: Computer Science Department, Cornell University, Ithaca, NY 14850, USA, [email protected]. Work supported in part by NSF grants IIS-178108 and IIS-1703846 and MURI grant W911NF-19-1-0217.
‡ Perc: Faculty of Natural Sciences and Mathematics, University of Maribor, Koroška cesta 160, 2000 Maribor, Slovenia & Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan & Complexity Science Hub Vienna, Josefstädterstraße 39, 1080 Vienna, Austria, [email protected].

Earlier work proposed that we have social preferences; that is, our utility function depends not just on our payoffs, but also on the payoffs of others. For example, we might prefer to minimize inequity or to maximize the sum of the monetary payoffs, even at a cost to ourselves. However, a utility function based on social preferences is still outcome-based; that is, it is still a function of the players' payoffs. We review a growing body of experimental literature showing that outcome-based utility functions cannot adequately describe the whole range of human behavior in social interactions.

The problem is not with the notion of maximizing expected utility. Even if we consider decision rules other than maximizing the expected utility, such as minimizing regret or maximin, we cannot explain many experimental results as long as the utility is outcome-based. Nor does it help to take bounded rationality into account.

The experimental evidence suggests that people have moral preferences, that is, preferences for doing what they view as the "right thing". These preferences cannot be expressed solely in terms of monetary payoffs. We review the literature, and discuss attempts to construct a utility function that captures people's moral preferences. We then consider more broadly the issue of language. The key takeaway message is that what matters is not just the monetary payoffs associated with actions, but also how these actions are described. A prominent example of this is when the words being used to describe the strategies activate moral concerns.

The review is structured as follows. In Section 2, we review the main experimental regularities that were responsible for the paradigm shift from monetary maximization models to outcome-based social preferences and then to non-outcome-based moral preferences. Motivated by these empirical regularities, in the next sections, we review the approaches that have been taken to capture human behavior in one-shot interactions and, for each of them, we describe their strengths and weaknesses. We start with social preferences (Section 3) and moral preferences (Section 4). We also discuss experimental results showing that the words used to describe the available actions impact behavior. In Section 5, we review work on games where the utility function depends explicitly on the language used to describe the available actions. We conclude in Section 6 with some discussion of potential directions for future research. Thinking in terms of moral preferences and, more generally, language-based preferences suggests potential connections to and synergies with other disciplines. Work on moral philosophy and moral psychology could help us understand different types and dimensions of moral preferences, and work in computational linguistics on sentiment analysis (Pang, Lee and Vaithyanathan, 2002) could help explain how we obtain utilities on language.

2. Experimental regularities

The goal of this section is to review a series of experimental regularities that were observed in normal-form games played among anonymous players. Although we occasionally mention the literature on other types of games, the main focus of this review is on one-shot, simultaneous-move, anonymous games.

We start by covering standard experiments in which some people have been shown not to act so as to maximize their monetary payoff. Then we move to experiments in which people have been shown not to act according to any outcome-based preference. Finally, we describe experiments suggesting that people's preferences take into account the words used to describe the available actions.

The fact that some people do not act so as to maximize their monetary payoff was first shown using the dictator game. In this game,1 the dictator is given a certain sum of money and has to decide how much, if any, to give to the recipient, who starts with nothing. The recipient has no choice and receives only the amount that the dictator decides to give. Since dictators have no monetary incentives to give, a payoff-maximizing dictator would keep the whole amount. However, experiments have repeatedly shown that people violate this prediction (Kahneman, Knetsch and Thaler, 1986; Forsythe et al., 1994). Moreover, the distribution of giving tends to be bimodal, with peaks at the zero offer and at the equal share (Engel, 2011).

1 Following the standard approach, we abuse terminology slightly and call this a game, although it does not specify the utility functions, but only the outcome associated with each strategy profile.

We can summarize the first experimental regularity as follows:

Experimental Regularity 1. A significant number of dictators give some money in the dictator game. Moreover, the distribution of donations tends to be bimodal, with peaks at zero and at half the total.

Another classical game in which people often violate the payoff-maximization assumption is the ultimatum game. In its original formulation, the ultimatum game is not a normal-form game, but an extensive-form game, where players move sequentially: a proposer is given a sum of money and has to decide how much, if any, to offer to the responder. The responder can either accept or reject the offer. If the offer is accepted, the proposer and the responder are paid according to the accepted offer; if the offer is rejected, neither the proposer nor the responder receives any payment. A payoff-maximizing responder clearly would accept any amount greater than 0; knowing this, a payoff-maximizing proposer would offer the smallest positive amount available in the choice set. Behavioral experiments showed that people dramatically violate the payoff-maximizing assumption: responders typically reject low offers and proposers often offer an equal split (Güth, Schmittberger and Schwarze, 1982; Camerer, 2003). Rejecting low offers is impossible to reconcile with a theory of payoff maximization. Making a non-zero offer is consistent with payoff maximization, if a proposer believes that the responder will reject too low an offer. However, several researchers have noticed that offers are typically larger than the amount that proposers believe would result in acceptance (Henrich et al., 2001; Lin and Sunder, 2002). This led Camerer (2003, p. 56) to conclude that "some of [proposer's] generosity in ultimatum games is altruistic rather than strategic". These observations have been replicated in the normal-form, simultaneous-move variant of the ultimatum game, which is the focus of this article. In this variant, the proposer and the responder simultaneously choose their offer and minimum acceptable offer, respectively, and then are paid only if the proposer's offer is greater than or equal to the responder's minimum acceptable offer.2

Experimental Regularity 2. In the ultimatum game, a substantial proportion of responders reject non-zero offers and a significant number of proposers offer an equal split.

2 Some authors have suggested that the standard sequential-move ultimatum game elicits slightly lower rejection rates than its simultaneous-move variant (Schotter, Weigelt and Wilson, 1994; Blount, 1995), but this does not affect the claim that proposers offer more than necessary from a purely monetary point of view.

The fact that some people do not act so as to maximize their monetary payoff was also observed in the context of (one-shot, anonymous) social dilemmas. Social dilemmas are situations in which there is a conflict between the individual interest and the interest of the group (Hardin, 1968; Ostrom, 1990; Olson, 2009). The most-studied social dilemmas are the prisoner's dilemma and the public-goods game.3

In the prisoner's dilemma, two players can either cooperate (C) or defect (D). If both players cooperate, they receive the reward for cooperation, R; if they both defect, they receive the punishment payoff, P; if one player defects and the other cooperates, the defector receives the temptation payoff, T, whereas the cooperator receives the sucker's payoff, S. Payoffs are assumed to satisfy the inequalities T > R > P > S. These inequalities guarantee that the only Nash equilibrium is mutual defection, since defecting strictly dominates cooperating. However, mutual defection gives players a payoff smaller than mutual cooperation.
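
To make these inequalities concrete, here is a minimal sketch in Python; the specific payoff values (T = 5, R = 3, P = 1, S = 0) are our own illustration, not taken from any particular experiment. It checks that defection strictly dominates cooperation, yet mutual defection is worse for both players than mutual cooperation:

```python
# Illustrative prisoner's dilemma payoffs satisfying T > R > P > S
# (these particular numbers are our own example, not from the text).
T, R, P, S = 5, 3, 1, 0

# Row player's payoff as a function of (own action, opponent's action).
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

# Defection strictly dominates cooperation against either action...
assert payoff[("D", "C")] > payoff[("C", "C")]  # T > R
assert payoff[("D", "D")] > payoff[("C", "D")]  # P > S

# ...yet mutual defection is worse for both than mutual cooperation.
assert payoff[("C", "C")] > payoff[("D", "D")]  # R > P
print("Defection dominates, but mutual defection is Pareto-dominated.")
```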

In the public-goods game, each of n players is given an endowment e > 0 and has to decide how much, if any, to contribute to a public pool. Let c_i ∈ [0, e] be player i's contribution. Player i's monetary payoff is e − c_i + α ∑_{j=1}^n c_j, where α ∈ (1/n, 1) is the marginal return for cooperation, that is, the proportion of the public good that is redistributed to each player. Since α ∈ (1/n, 1), players maximize their individual monetary payoff by contributing 0, but if they do that, they receive less than the amount they would receive if they all contribute their whole endowment (e < αne). Although cooperation is not individually optimal in the prisoner's dilemma or the public-goods game, many people cooperate in behavioral experiments using these protocols (Rapoport and Chammah, 1965; Ledyard, 1995).
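
The same logic can be checked numerically. The sketch below uses illustrative parameters of our own choosing (n = 4, e = 10, α = 0.4, so 1/n < α < 1) and verifies that free riding is individually optimal while universal contribution is collectively better:

```python
def pgg_payoff(i, contributions, endowment, alpha):
    """Monetary payoff of player i in the public-goods game:
    e - c_i + alpha * sum_j c_j."""
    return endowment - contributions[i] + alpha * sum(contributions)


# Illustrative parameters (our own example): n = 4 players, e = 10, alpha = 0.4.
e, alpha, n = 10.0, 0.4, 4

everyone_free_rides = [0.0] * n
everyone_contributes = [e] * n

# Contributing everything while the others contribute nothing pays less than
# contributing nothing yourself, so free riding is individually optimal...
assert pgg_payoff(0, [e, 0.0, 0.0, 0.0], e, alpha) < pgg_payoff(0, everyone_free_rides, e, alpha)

# ...but full contribution by everyone beats universal free riding (alpha*n*e > e).
assert pgg_payoff(0, everyone_contributes, e, alpha) > pgg_payoff(0, everyone_free_rides, e, alpha)
```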

3 Other well-studied social dilemmas are the Bertrand competition (Bertrand, 1883) and the traveler's dilemma (Basu, 1994). In this review, we focus on the prisoner's dilemma and the public-goods game; the other social dilemmas do not give rise to conceptually different results, at least within our domain of interest.

Experimental Regularity 3. A significant number of people cooperate in the one-shot prisoner's dilemma and the one-shot public-goods game.

In Section 3, we show that these regularities can be explained well by outcome-based preferences, that is, preferences that are a function of the monetary payoffs. But we now discuss a set of empirical findings that cannot be explained by outcome-based preferences. We start with truth-telling in tasks in which people can increase their monetary payoff by misreporting private information.

Economists have considered several ways of measuring the extent to which people lie, the most popular ones being the sender-receiver game (Gneezy, 2005) and the die-under-cup task (Fischbacher and Föllmi-Heusi, 2013). In its original version, the sender-receiver game is not a normal-form game, but an extensive-form game. There are two possible monetary distributions, called Option A and Option B; only the sender is informed about the payoffs corresponding to these distributions. The sender can then tell the receiver either "Option A will earn you more money than Option B" or "Option B will earn you more money than Option A". Finally, the receiver decides which of the two options to implement.4

Clearly, only one of the messages that the sender can send is truthful. Gneezy (2005) showed that many senders tell the truth, even when the truthful message does not maximize the sender's monetary payoff. Gneezy also showed that this honest behavior is not driven by purely monetary preferences over monetary outcomes, since people behaved differently when asked to choose between the same monetary options when there was no lying involved (i.e., when they played a dictator game that was monetarily equivalent to the sender-receiver game), suggesting that people find lying intrinsically costly.

4 The original variant of the sender-receiver game therefore requires the players to choose their actions sequentially. Similar results can be obtained with the normal-form, simultaneous-move variant in which the receiver decides whether to believe the sender's message at the same time that the sender sends it, or even when the receiver makes no active choice (Gneezy, Rockenbach and Serra-Garcia, 2013; Biziou-van Pol et al., 2015).

In the die-under-cup task, participants roll a die under a cup (i.e., privately) and are asked to report the outcome. Participants receive a payoff that depends on the reported outcome, not on the actual outcome. The actual outcome is typically not known to the experimenter, but, by comparing the distribution of the reported outcomes to the uniform distribution, the experimenter can tell approximately what fraction of people lied. Several studies showed that people tend to be honest, even if this goes against their monetary interest (Fischbacher and Föllmi-Heusi, 2013; Abeler, Nosenzo and Raymond, 2019; Gerlach, Teodorescu and Hertwig, 2019).
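
As a sketch of the kind of inference described above, the snippet below backs out a lying rate from the share of participants reporting the highest-paying outcome. It assumes, purely for illustration, that every misreporter claims the highest-paying outcome; this is a simplification of our own, not the estimator used in the cited studies:

```python
def estimated_liar_fraction(top_share: float) -> float:
    """Back out the fraction of participants who misreport, assuming a fair
    six-sided die and that every liar reports the highest-paying outcome.

    Under these assumptions, the observed share of top reports is
        top_share = p + (1 - p) * 1/6,
    so p = (top_share - 1/6) / (5/6). This is only an illustrative calculation.
    """
    p = (top_share - 1 / 6) / (5 / 6)
    return max(0.0, min(1.0, p))  # clamp to [0, 1] to absorb sampling noise


if __name__ == "__main__":
    # Example: 35% of participants report the highest-paying outcome.
    print(f"Estimated fraction of liars: {estimated_liar_fraction(0.35):.2%}")
```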

Experimental Regularity 4. A significant number of people tell the truth in the sender-receiver game and in the die-rolling task, even if it lowers their monetary payoff.

Another line of empirical work that is hard to reconcile with preferences over monetary payoffs is observed in variants of the dictator game. For example, List (2007) explored people's behavior in two modified dictator games. In the control condition, the dictator was given $10 and the receiver was given $5; the dictator could then give any amount between $0 and $5 to the recipient. In line with studies using the standard dictator game, List observed that about 70% of the dictators give a non-zero amount, with a peak at $2.50. In the experimental condition, List added a "take" option: dictators were allowed to take $1 from the recipient.5 In this case, List found that the peak at giving $2.50 disappears and that the distribution of choices becomes unimodal, with a peak at giving $0. However, only 20% of the participants chose the take option. This suggests that, for some participants, giving a positive amount dominates giving $0 in the baseline, but giving $0 dominates giving the same positive amount in the treatment. This is clearly inconsistent with outcome-based preferences. Bardsley (2008) and Cappelen et al. (2013) obtained a similar result.

Experimental Regularity 5. A significant number of people prefer giving over keeping in the standard dictator game without a take option, but prefer keeping over giving in the dictator game with a take option.

In a similar vein, Lazear, Malmendier and Weber (2012) showed that some dictators give part of their endowment when they are constrained to play a dictator game, but, given the option of receiving the maximum amount of money they could get by playing the dictator game without actually playing it, they choose to avoid the interaction. Indeed, Dana, Cain and Dawes (2006) found that some dictators would even pay $1 in order to avoid the interaction. Clearly, these results are inconsistent with outcome-based preferences.

Experimental Regularity 6. A significant number of people prefer giving over keeping in the standard dictator game without an exit option, but prefer keeping over giving in the dictator game with an exit option.

5 List also considered a treatment with multiple take options, up to $5. The results are similar to those with a single take option of $1, so we focus here only on the latter case.

How a game is framed is also well known to affect people's behavior. For example, contributions in the public-goods game depend on whether the game is presented in terms of positive externalities or negative ones (Andreoni, 1995), rates of cooperation in the prisoner's dilemma depend on whether the game is called "the community game" or "the Wall Street game" (Ross and Ward, 1996), and using terms such as "partner" or "opponent" can affect participants' behavior in the trust game (Burnham, McCabe and Smith, 2000). Burnham, McCabe and Smith (2000) suggested that the key issue (at least, in these situations) is what players perceive as the norms.

Following this suggestion, there was work exploring the effect of changing the norms on people's behavior. One line explored dictators' behavior in variants of the dictator game in which the initial endowment is not given to the dictator, but is instead given to the recipient, or equally shared between the dictator and the recipient, and the dictator can take some of the recipient's endowment; this is called the "take" frame. Some experiments showed that people tend to be more altruistic in the dictator game in the "take" frame than in the standard dictator game (Swope et al., 2008; Krupka and Weber, 2013); moreover, this effect is driven by the perception of what the socially appropriate action is (Krupka and Weber, 2013). However, this result was not replicated in other experiments (Dreber et al., 2013; Eckel and Grossman, 1996; Halvorsen, 2015; Hauge et al., 2016). A related stream of papers pointed out that including morally loaded words in the instructions of the dictator game can impact dictators' giving (Brañas-Garza, 2007; Capraro and Vanzo, 2019; Capraro et al., 2019; Chang, Chen and Krupka, 2019), and that the behavioral change can be explained by a change in the perception of what dictators think the morally right action is (Capraro and Vanzo, 2019). Although there is debate about whether the "take" frame can impact people's behavior in the dictator game, there is general agreement that the wording of the instructions can impact dictators' behavior by activating moral concerns.

The fact that the wording of the instructions can impact behavior has also been observed in other games. For example, Eriksson et al. (2017) found that the language in which the rejection option is presented significantly impacts rejection rates in the ultimatum game. Specifically, receivers are more likely to decline an offer when this option is labelled "rejecting the proposer's offer" than when it is labelled "reducing the proposer's payoff". Moreover, in line with the discussion above regarding the dictator game, Eriksson et al. found this effect to be driven by the perception of what the morally right action is.

A similar result was obtained in the trade-off game (Capraro and Rand, 2018; Tappin and Capraro, 2018), where a decision-maker has to unilaterally decide between two allocations of money that affect the decision-maker and two other participants. One allocation equalizes the payoffs of the three participants; the other allocation maximizes the sum of the payoffs, but is unequal. Capraro and Rand (2018) and Tappin and Capraro (2018) found that minor changes in the language in which the actions are presented significantly impact decision-makers' behavior. For example, naming the efficient choice "more generous" and the equitable choice "less generous" makes subjects more likely to choose the efficient choice, while naming the efficient choice "less fair" and the equitable choice "more fair" makes subjects more likely to choose the equitable choice.

Morally loaded words have also been shown to affect behavior in the prisoner's dilemma, where participants have been observed to cooperate at different rates depending on whether the strategies are named 'I cooperate/I cheat' or 'A/B' (Mieth, Buchner and Bell, 2021). Furthermore, in both the one-shot and iterated prisoner's dilemma, moral suasion, that is, providing participants with cues that make the morality of an action salient, increases cooperation (Capraro et al., 2019; Dal Bó and Dal Bó, 2014). This suggests that cooperative behavior is partly driven by a desire to do what is morally right.

Experimental Regularity 7. Behavior in several experimental games, including the dictator game, the prisoner's dilemma, the ultimatum game, and the trade-off game, depends on the instructions used to introduce the games, especially when they activate moral concerns.

3. Social preferences

In order to explain the seven regularities listed in Section 2, one has to go beyond preferences for maximizing monetary payoffs. The first generation of utility functions that do this appeared in the 1970s. These social preferences share the underlying assumption that the utility of an individual depends not only on the individual's monetary payoff, but also on the monetary payoff of the other players involved in the interaction.6 In this sense, these social preferences are all particular instances of outcome-based preferences, that is, utility functions that depend only on (1) the individuals involved in the interaction and (2) the monetary payoffs associated with each strategy profile. Social preferences typically explain Experimental Regularities 1–3 well. However, they have difficulties with Experimental Regularities 4–7. In this section, we review this line of work. For more comprehensive reviews, we refer the readers to Camerer (2003) and Dhami (2016). Part of this ground was also covered by Sobel (2005).

6 There has also been work on explaining human behavior in terms of bounded rationality. The idea behind this approach is that computing a best response may be computationally difficult, so players do so only to the best of their ability. Although useful in many domains, these models do not explain deviations from the payoff-maximizing strategy in situations in which computing this strategy is obvious, such as in the dictator game (Experimental Regularity 1). While some people may give in the dictator game because they incorrectly computed the payoff-maximizing strategy, did not read the instructions, or played randomly, this does not begin to explain the overall behavior of dictators. In a recent analysis of over 3,500 dictators, all of whom had correctly answered a comprehension question regarding which strategy maximizes their monetary payoff, Brañas-Garza, Capraro and Rascón-Ramírez (2018) found an average donation of 30.8%. Similar observations also apply to the other experimental regularities listed in Section 2.

Economists have long recognized the need to include other-regarding preferences in the utility function. Earlier work focused on economies with one private good and one public good. In these economies, there are n players; player i is endowed with wealth w_i, which she can allocate to the private good or the public good. This is a quite general class of games (e.g., the dictator game, prisoner's dilemma, and the public-goods game can all be expressed in this form), although it does not cover several other games of interest for this review (e.g., the ultimatum and trade-off games). Let x_i and g_i be i's allocation to the private good and contribution to the public good, respectively. Economists first assumed that i's utility u_i depended only and monotonically on x_i and G = ∑_j g_j. According to this model, the government forcing an increase of contributions to public goods (e.g., by increasing taxes) will result in a decrease of private contributions, dollar-for-dollar. Specifically, if the government takes one dollar from a particular contributor and puts it in the public good, while keeping everything else fixed (say, by changing the tax structure appropriately), then that contributor can restore the equilibrium by reducing his contribution by one dollar (Warr, 1982; Roberts, 1984; Bernheim, 1986; Andreoni, 1988). The prediction that this would happen was violated in empirical studies (Abrams and Schitz, 1978; Abrams and Schmitz, 1984). Motivated by these limitations, Andreoni (1990) introduced a theory of warm-glow giving, where the utility function captures the intuition that individuals receive positive utility from the very act of giving to the public good. Formally, this corresponds to considering, instead of a utility function of the form u_i = u_i(x_i, G), one of the form u_i = u_i(x_i, G, g_i). Note that the latter utility function is still outcome-based, because all of its arguments are functions of monetary outcomes. Warm-glow theory has been applied successfully to several domains. However, when it comes to explaining the experimental regularities listed in Section 2, it has two significant limitations. The first is its domain of applicability: the ultimatum and trade-off games cannot be expressed in terms of economies with one private good and one public good in any obvious way. The second is more fundamental: as we show at the end of this section, it cannot explain Experimental Regularities 4-7 because it is outcome-based.

More recently, economists have started defining the utility function directly on the monetary payoffs of the players involved in the interaction. These utility functions, by construction, can be applied to any economic interaction. The simplest such utility function is just a linear combination of the individual's payoff and the payoffs of the other players (Ledyard, 1995). Formally, let (x_1, . . . , x_n) be a monetary allocation among n players. The utility of player i given this allocation is

$$u_i(x_1, \ldots, x_n) = x_i + \alpha_i \sum_{j \neq i} x_j,$$

where α_i is an individual parameter representing i's level of altruism. Preferring to maximize payoff is the special case where α_i = 0. Players with α_i > 0 care positively about the payoff of other players. Consequently, this utility function is consistent with altruistic behavior in the dictator game and with cooperative behavior in social dilemmas. Players with α_i < 0 are spiteful. These are players who prefer to maximize the differences between their own monetary payoff and the monetary payoff of other players. Thus, this type of utility function is also consistent with the existence of people who reject positive offers in the ultimatum game.

However, it was soon observed that this type of utility function is not consistent with the quantitative details of ultimatum-game experiments. Indeed, from the rate of rejections observed in an experiment, one can easily compute the distribution of the spitefulness parameter. Since proposers are drawn from the same population, one can then use this distribution to compute what offer should be made by proposers. The offers that the proposers should make, according to the α calculated, are substantially larger than those observed in the experiment (Levine, 1998).
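
To see how rejection rates identify the distribution of α under the linear utility above, here is a worked sketch of the calculation for two players; the pie size M and the offer x are our own notation for this illustration:

```latex
% Responder's choice under u_i(x_1, x_2) = x_i + \alpha_i x_j (two players):
% accepting an offer x out of a pie M gives x + \alpha (M - x); rejecting gives 0.
\begin{align*}
  \text{reject} \;\Longleftrightarrow\; x + \alpha\,(M - x) < 0
  \;\Longleftrightarrow\; \alpha < -\frac{x}{M - x}.
\end{align*}
% Example: rejecting an offer of 20\% of the pie (x = 0.2M) requires
% \alpha < -0.25, so observed rejection rates pin down how much mass the
% population distribution of \alpha places below -x/(M - x).
```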

Starting from this observation, Levine (1998) proposed a generalization of Ledyard's utility function that assumed that agents have information (or beliefs) about the level of altruism of the other players and base their own level of altruism on that of the other players. This allows us to formalize the intuition that players may be more altruistic towards altruistic players than towards selfish or spiteful players. Specifically, Levine proposed the utility function

$$u_i(x_1, \ldots, x_n) = x_i + \sum_{j \neq i} \frac{\alpha_i + \lambda \alpha_j}{1 + \lambda}\, x_j,$$

where λ ∈ [0, 1] is a parameter representing how sensitive players are to the level of altruism of the other players. If λ = 0, then we obtain Ledyard's model, where i's level of altruism towards j does not depend on j's level of altruism towards i; if λ > 0, players tend to be more altruistic towards altruistic players than towards selfish and spiteful players. Levine showed that this model fits the empirical data in several settings quite well, including the ultimatum game and prisoner's dilemma.
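
A minimal sketch of this utility (which reduces to the simple linear-altruism form when λ = 0); the parameter values and payoffs in the example are our own, chosen purely for illustration:

```python
def levine_utility(i, payoffs, altruism, lam):
    """Levine (1998)-style utility:
    x_i + sum_{j != i} ((a_i + lam * a_j) / (1 + lam)) * x_j.
    With lam = 0 this is the simple linear-altruism (Ledyard-style) utility."""
    return payoffs[i] + sum(
        (altruism[i] + lam * altruism[j]) / (1.0 + lam) * payoffs[j]
        for j in range(len(payoffs))
        if j != i
    )


# Illustrative numbers (ours): player 0 is mildly altruistic, player 1 spiteful.
payoffs = [6.0, 4.0]
altruism = [0.3, -0.2]

print(levine_utility(0, payoffs, altruism, lam=0.0))  # Ledyard special case
print(levine_utility(0, payoffs, altruism, lam=0.5))  # altruism tempered by 1's spite
```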

One year later, Fehr and Schmidt (1999) introduced a utility function based on a somewhat different idea. Instead of caring directly about the absolute payoffs of the other players, Fehr and Schmidt assumed that (some) people care about minimizing payoff differences. Following this intuition, they introduced a family of utility functions that can capture inequity aversion:

$$u_i(x_1, \ldots, x_n) = x_i - \frac{\alpha_i}{n-1} \sum_{j \neq i} \max(x_j - x_i, 0) - \frac{\beta_i}{n-1} \sum_{j \neq i} \max(x_i - x_j, 0).$$

Note that the first sum is greater than zero if and only if some player j receives more than player i (x_j > x_i). Therefore, α_i can be interpreted as a parameter representing the extent to which player i is averse to having player j's payoff higher than his own. Similarly, β_i can be interpreted as a parameter representing the extent to which player i dislikes advantageous inequities. Fehr and Schmidt also assumed that β_i ≤ α_i and β_i ∈ [0, 1). The first assumption means that players dislike having a payoff higher than that of some other player at most as much as they dislike having a payoff lower than that of some other player. To understand the assumption that β_i < 1, suppose that player i has a payoff larger than the payoff of all the other players. Then i's utility function reduces to

$$u_i(x_1, \ldots, x_n) = (1 - \beta_i)\, x_i + \frac{\beta_i}{n-1} \sum_{j \neq i} x_j.$$

If β_i ≥ 1, then the component of the utility function corresponding to the monetary payoff of player i is non-positive, so player i maximizes utility by giving away all his money, an assumption that seems implausible (Fehr and Schmidt, 1999). Finally, the assumption β_i ≥ 0 simply means that there are no players who prefer to be better off than other players.7 This way of capturing inequity aversion has been shown to fit empirical data well in a number of contexts, including the standard ultimatum game, variants with multiple proposers and with multiple responders, and the public-goods game.

7 Fehr and Schmidt (1999) make this assumption for simplicity, although they acknowledge that they believe that there are subjects with β_i < 0.
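
The following sketch shows how this utility is computed; the parameter values and payoffs are our own illustration. It also checks that a sufficiently envious responder prefers rejecting an unequal ultimatum-game split:

```python
def fehr_schmidt_utility(i, payoffs, alpha, beta):
    """Fehr-Schmidt (1999) inequity-averse utility for player i:
    x_i - alpha/(n-1) * sum_j max(x_j - x_i, 0)
        - beta/(n-1)  * sum_j max(x_i - x_j, 0)."""
    n = len(payoffs)
    xi = payoffs[i]
    envy = sum(max(xj - xi, 0.0) for j, xj in enumerate(payoffs) if j != i)
    guilt = sum(max(xi - xj, 0.0) for j, xj in enumerate(payoffs) if j != i)
    return xi - alpha / (n - 1) * envy - beta / (n - 1) * guilt


# Illustrative ultimatum-game check (parameters are ours): a responder with
# alpha = 2 prefers rejecting (0, 0) to accepting an (8, 2) split.
accept = fehr_schmidt_utility(1, [8.0, 2.0], alpha=2.0, beta=0.5)
reject = fehr_schmidt_utility(1, [0.0, 0.0], alpha=2.0, beta=0.5)
assert reject > accept  # 0 > 2 - 2*6 = -10
```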

A different type of utility function capturing inequity aversion was introduced by Bolton and Ockenfels (2000). Like Fehr and Schmidt (1999), Bolton and Ockenfels assume that players' utility function takes into account inequities among players. To define this function, they assume that the monetary payoffs of all players are non-negative. Then they define

$$\sigma_i = \begin{cases} x_i / c & \text{if } c > 0, \\ 1/n & \text{if } c = 0, \end{cases}$$

where c = ∑_{j=1}^n x_j, so σ_i represents i's relative share of the total monetary payoff. Bolton and Ockenfels further assume that i's utility function (which they refer to as i's motivational function) depends only on i's monetary payoff x_i and on his relative share σ_i, and satisfies four assumptions. We refer to Bolton and Ockenfels (2000) for the formal details; here we focus on an intuitive description of two of these assumptions, the ones that characterize inequity aversion (the other two assumptions are made for mathematical convenience). One assumption is that, keeping the relative payoff σ_i constant, i's utility is increasing in x_i. Thus, for two choices that give the same relative share, player i's decision is consistent with payoff maximization. The second assumption is that, holding x_i constant, i's utility is strictly concave in σ_i, with a maximum at the point at which player i's monetary payoff is equal to the average payoff. Thus, keeping monetary payoff constant, players prefer an equal distribution of monetary payoffs. This utility function was shown to fit empirical data quite well in a number of games, including the dictator game, ultimatum game, and prisoner's dilemma. (We defer a comparison of Bolton and Ockenfels' approach with that of Fehr and Schmidt.)
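
Bolton and Ockenfels do not commit to a particular functional form, so the sketch below uses one simple form of our own choosing that satisfies the two assumptions just described (increasing in x_i for fixed σ_i, strictly concave in σ_i with a maximum at the equal share 1/n). It is only an illustration, not their specification:

```python
def relative_share(i, payoffs):
    """sigma_i = x_i / c if the total c > 0, and 1/n otherwise."""
    c = sum(payoffs)
    return payoffs[i] / c if c > 0 else 1.0 / len(payoffs)


def illustrative_motivational_function(i, payoffs, b=5.0):
    """One possible motivational function v_i(x_i, sigma_i) = x_i - b*(sigma_i - 1/n)^2.
    This is NOT Bolton and Ockenfels' specification, only an example satisfying
    the two assumptions described in the text (b > 0 is an arbitrary weight)."""
    n = len(payoffs)
    sigma = relative_share(i, payoffs)
    return payoffs[i] - b * (sigma - 1.0 / n) ** 2


# Holding x_1 fixed at 5, utility peaks when the split is equal (sigma = 1/2).
print(illustrative_motivational_function(0, [5.0, 5.0]))   # equal split
print(illustrative_motivational_function(0, [5.0, 15.0]))  # player 0 behind
```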

Shortly after the explosion of inequity-aversion models, several economists observed that some decision-makers appear to act in a way that increases inequity, if this increase results in an increase in the total payoff of the participants (Charness and Grosskopf, 2001; Kritikos and Bolle, 2001; Andreoni and Miller, 2002; Charness and Rabin, 2002). This observation is hard to reconcile with inequity-aversion models, and suggests that people not only prefer to minimize inequity, but also prefer to maximize social welfare.

To estimate these preferences, Andreoni and Miller (2002) conducted an experiment, found the utility function in a particular class of utility functions that best fit the experimental results, and showed that this utility function also fits data from other experiments well. In more detail, they conducted an experiment in which participants made decisions in a series of modified dictator games where the cost of giving is in the set {0.25, 0.5, 1, 2, 3}. For example, when the cost of giving is 0.25, sending one token to the recipient results in the recipient receiving four tokens. Andreoni and Miller found that 22.7% of the dictators were perfectly selfish (so their behavior could be rationalized by the utility function u(x_1, x_2) = x_1), 14.2% of dictators split the monetary payoff equally with the recipient (so their behavior could be rationalized by the Rawlsian utility function u(x_1, x_2) = min(x_1, x_2)),8 and 6.2% of the dictators gave to the recipient only when the price of giving was smaller than 1 (and thus their behavior could be rationalized by the utilitarian utility function u(x_1, x_2) = (1/2)x_1 + (1/2)x_2). To rationalize the behavior of the remaining 57% of the dictators, Andreoni and Miller fit their data to a utility function of the form

$$u_1(x_1, x_2) = (\alpha x_1^{\rho} + (1 - \alpha) x_2^{\rho})^{1/\rho}.$$

Here α represents the extent to which the dictator (player 1) cares about his own monetary payoff more than that of the recipient (player 2), while ρ represents the convexity of preferences. Andreoni and Miller found that subjects can be divided into three classes that they called weakly selfish (α = 0.758, ρ = 0.621; note that if α = 1, we get self-regarding preferences), weakly Rawlsian (α = 0.654, ρ = −0.350; note that if 0 < α < 1 and x_1, x_2 > 0, then as ρ → −∞, we converge to Rawlsian preferences), and weakly utilitarian (α = 0.576, ρ = 0.669; note that if α = .5 and ρ = 1, we get utilitarian preferences). Moreover, they showed that the model also fits experimental results on the standard dictator game, the public-goods game, and prisoner's dilemma well. Finally, this model can also explain the results mentioned above showing that some decision-makers act so as to increase inequity, if the increase leads to an increase in social welfare.

8 Named after John Rawls, a philosopher, who argued, roughly speaking, that in a just society, the social system should be designed so as to maximize the payoff of those worst off.
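
A minimal sketch of the CES utility above, evaluated at the three estimated parameter classes reported by Andreoni and Miller; the payoff pair (6, 4) is our own illustrative input:

```python
def ces_utility(x1, x2, alpha, rho):
    """Andreoni-Miller CES utility u_1(x_1, x_2) = (alpha*x1^rho + (1-alpha)*x2^rho)^(1/rho).
    Assumes rho != 0 and x1, x2 > 0 (the limiting Rawlsian case rho -> -inf is omitted)."""
    return (alpha * x1**rho + (1 - alpha) * x2**rho) ** (1 / rho)


# Estimated parameter classes from Andreoni and Miller (2002).
classes = {
    "weakly selfish": (0.758, 0.621),
    "weakly Rawlsian": (0.654, -0.350),
    "weakly utilitarian": (0.576, 0.669),
}
for name, (alpha, rho) in classes.items():
    print(f"{name:>18}: u(6, 4) = {ces_utility(6.0, 4.0, alpha, rho):.3f}")
```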

Charness and Rabin (2002) considered a more general family of utility functions that includes the ones mentioned earlier as special cases. Like Andreoni and Miller, they conducted an experiment to estimate the parameters of the utility function that best fit the data. For simplicity, we describe Charness and Rabin's utility function in the case of two players; we refer to their paper for the general formulation. They considered a utility function for player 2 of the form

$$u_2(x_1, x_2) = (\rho r + \sigma s)\, x_1 + (1 - \rho r - \sigma s)\, x_2,$$

where: (1) r = 1 if x_2 > x_1, and r = 0 otherwise; (2) s = 1 if x_2 < x_1, and s = 0 otherwise.9 Intuitively, ρ represents how important it is to agent 2 that he gets a higher payoff than agent 1, while σ represents how important it is to agent 2 that agent 1 gets a higher payoff than he does. Charness and Rabin did not make any a priori assumptions regarding ρ and σ. But they showed that by setting ρ and σ appropriately, one can recover the earlier models:

• Assume that σ ≤ ρ ≤ 0. In this case, player 2's utility is increasing in x_2 − x_1. So, by definition, this case corresponds to competitive preferences.

• Assume that σ < 0 < 1/2 < ρ < 1. If x_1 < x_2, then player 2's utility is ρx_1 + (1 − ρ)x_2, and thus depends positively on both x_1 and x_2, because 0 < ρ < 1. Moreover, since ρ > 1/2, player 2 prefers increasing player 1's payoff to his own; that is, player 2 prefers to decrease inequity (since x_2 > x_1). If x_2 < x_1, then player 2's utility is σx_1 + (1 − σ)x_2, and thus depends negatively on x_1 and positively on x_2, so again player 2 prefers to decrease inequity.

• If 0 < σ ≤ ρ ≤ 1, then player 2's utility depends positively on both x_1 and x_2. Charness and Rabin define these as social-welfare preferences. (Note that in the case σ = ρ = 1/2, one obtains utilitarian preferences u_2(x_1, x_2) = (x_1 + x_2)/2; more generally, Charness and Rabin apply the term "social-welfare preferences" to those preferences where the individual payoffs are both weighted positively.)

9 The general form of the utility function has a third component that takes into account reciprocity in sequential games. Since in this review we focus on normal-form games, we ignore this component here.
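
A sketch of the two-player utility above (with the reciprocity term omitted, as in the text); the payoff pair and the specific parameter values below are our own illustrations of the three cases just listed:

```python
def charness_rabin_utility(x1, x2, rho, sigma):
    """Two-player Charness-Rabin (2002) utility for player 2:
    u_2 = (rho*r + sigma*s)*x1 + (1 - rho*r - sigma*s)*x2,
    with r = 1 if x2 > x1 and s = 1 if x2 < x1 (reciprocity term omitted)."""
    r = 1 if x2 > x1 else 0
    s = 1 if x2 < x1 else 0
    weight_on_other = rho * r + sigma * s
    return weight_on_other * x1 + (1 - weight_on_other) * x2


# Illustrative parameter settings (payoffs are our own example values).
print(charness_rabin_utility(10, 2, rho=0.5, sigma=0.5))    # social-welfare preferences
print(charness_rabin_utility(10, 2, rho=0.6, sigma=-0.4))   # inequity-averse case
print(charness_rabin_utility(10, 2, rho=-0.1, sigma=-0.3))  # competitive preferences
```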

To test which of these cases better fits the experimental data, Charness and Rabin conducted a series of dictator game experiments. They found that the standard assumption of narrow self-interest explains only 68% of the data, assuming competitive preferences (i.e., σ ≤ ρ ≤ 0) explains even less (60%), and that assuming a preference for inequity aversion (i.e., σ < 0 < 1/2 < ρ < 1) is consistent with 75% of the data. The best results were obtained by assuming a preference for maximizing social welfare (i.e., 0 < σ ≤ ρ ≤ 1), which explains 97% of the data.

Engelmann and Strobel (2004) also showed that assuming a preference for maximizing social welfare leads to better predictions than assuming a preference for inequity aversion. They considered a set of decision problems designed to compare the relative ability of several classes of utility functions to make predictions. They considered utility functions corresponding to payoff maximization, social-welfare maximization, Rawlsian maximin preferences, the utility function proposed by Fehr and Schmidt, and the utility functions proposed by Bolton and Ockenfels. They found that the best fit of the data was provided by a combination of social-welfare concerns, maximin preferences, and selfishness. Moreover, Fehr and Schmidt's inequity-aversion model outperformed that of Bolton and Ockenfels. However, this increase in performance was entirely driven by the fact that, in many cases, the predictions of Fehr and Schmidt reduced to maximin preferences.

Team-reasoning models (Gilbert, 1987; Bacharach, 1999; Sugden, 2000) and equilibrium notions such as cooperative equilibrium (Halpern and Rong, 2010; Capraro, 2013) also take social-welfare maximization seriously. The underlying idea of these approaches is that individuals do not always act so as to maximize their individual monetary payoff, but may also take into account social welfare. For example, in the prisoner's dilemma, social welfare is maximized by mutual cooperation, so team-reasoning models predict that people cooperate. However, since these models typically assume that the utility of a player is a function of the sum of the payoffs of all players, they cannot explain behaviors in zero-sum games, such as the dictator game. Another limitation of these approaches is their inability to explain the behavior of people who choose to minimize both their individual payoffs and social welfare, such as responders who reject offers in the ultimatum game.

Cappelen et al. (2007) introduced a model in which participants strive to balance their individual "fairness ideal" with their self-interest. They consider three fairness ideals. Strict egalitarianism contends that people are not responsible for their effort and talent; according to this view, the fairest distribution of resources is the equal distribution. Libertarianism argues that people are responsible for their effort and talent; according to this view, resources should be shared so that each person's share is in proportion to what s/he produces. Liberal egalitarianism is based on the belief that people are responsible for their effort, but not for their talent; according to this view, resources should be distributed so as to minimize differences due to talent, but not those due to effort. To formalize people's tendency to implement their fairness ideal, Cappelen et al. considered utility functions of the form

$$u_i(x_i, a, q) = \gamma_i x_i - \frac{\beta_i}{2}\,(x_i - p_i(a, q))^2,$$

where a = (a_1, . . . , a_n) is the vector of talents of the players, q = (q_1, . . . , q_n) is the vector of efforts made by the players, x_i is the monetary payoff of player i, and p_i(a, q) is the monetary payoff that player i believes to be his fair share. Finally, γ_i and β_i are individual parameters, representing the extent to which player i cares about his monetary payoff and his fairness ideal. In order to estimate the distribution of the fairness ideals, Cappelen et al. conducted an experiment using a variant of the dictator game that includes a production phase. The quantity produced depends on factors within the participants' control (effort) and factors beyond participants' control (talent). The experiment showed that 39.7% of the subjects can be viewed as strict egalitarians, 43.4% as liberal egalitarians, and 16.8% as libertarians. Although this approach is useful in cases where the initial endowments are earned by the players through a task involving effort and/or talent, when the endowments are received as windfall gains, as in most laboratory experiments, it reduces to inequity aversion, and so it suffers from the same limitations that other utility functions based on inequity aversion do.

The frameworks discussed thus far are particularly suitable for studying situations in which there is only one decision-maker or in which the choices of different decision-makers are made simultaneously. In many cases, however, there is more than one decision maker, and they make their choices sequentially. For non-simultaneous-move games, scholars have long recognized that intentions play an important role. A particular class of intention-based models (reciprocity models) is based on the idea that people typically reciprocate (perceived) good actions with good actions and (perceived) bad actions with bad actions. Economists have considered several models of reciprocity (Rabin, 1993; Levine, 1998; Charness and Rabin, 2002; Dufwenberg and Kirchsteiger, 2004; Sobel, 2005; Falk and Fischbacher, 2006). Intention-based models have been shown to explain deviations from outcome-based predictions in a number of situations where beliefs about the other players' intentions may play a role (Falk, Fehr and Fischbacher, 2003; McCabe, Rigdon and Smith, 2003; Fehr and Schmidt, 2006; Falk, Fehr and Fischbacher, 2008; Dhaene and Bouckaert, 2010). Although we acknowledge the existence and the importance of these models, as we mentioned in the introduction, in this review we focus on normal-form games. Some of these games (e.g., dictator game, die-under-cup paradigm, trade-off game) have only one decision maker, and therefore beliefs about others' intentions play no role. In these contexts, beliefs can still play a role, for example beliefs about others' beliefs, since even in single-agent games, there may be others watching (or the decision maker can play as if there are). This leads us to psychological games, which will be mentioned in Section 5.

Outcome-based social preferences explain many experimental results well. In particular, looking at the experimental regularities listed in Section 2, they easily explain the first three regularities, at least qualitatively. However, they cannot explain any of the remaining regularities. Indeed, the main limitation of outcome-based preferences is that they depend only on the monetary payoffs.

As we suggested in Section 2, one way of explaining these regularities is to assume that people have moral preferences. Crucially, these moral preferences cannot be defined solely in terms of the economic consequences of the available actions. We review such moral preferences in Section 4.10

Before moving to moral preferences, it is worth noting that another limitation of outcome-based social preferences is that they predict correlations between different behaviors that are not consistent with experimental data. For example, Chapman et al. (2018) reported on a large experiment showing that eight standard measures of prosociality can actually be clustered in three principal components, corresponding to altruistic behavior, punishment, and inequity aversion, which are virtually unrelated. Chapman et al. observed that this is not consistent with any outcome-based social preferences. However, we will see that this result is consistent with moral preferences.

4. Moral preferences

In the previous sections, we showed that outcome-based preferences and, more generally, outcome-based decision rules, are inconsistent with Experimental Regularities 4-7. In this section, we review the empirical literature suggesting that all seven experimental regularities discussed in Section 2 can be explained by assuming that people have preferences for following a norm,11 and discuss attempts to formalize this using an appropriate utility function. We use the term moral preferences as an umbrella term to denote this type of utility function.

10 Of course, we can consider outcome-based preferences in the context of decision rules other than expected-utility maximization, such as maximin expected utility (Wald, 1950; Gärdenfors and Sahlin, 1982; Gilboa and Schmeidler, 1989) and minimax regret (Niehans, 1948; Savage, 1951). For example, with maximin, an agent chooses the act that maximizes her worst-case (expected) utility. With minimax regret, an agent chooses the act that minimizes her worst-case regret (the gap between the payoff for an act a in state s and the payoff for the best act in state s). Although useful in many contexts, none of these approaches can explain Experimental Regularities 4-7, since they are all outcome-based.

Experimental Regularities 1-4. The fact that donations in the standard dictator game, offers and rejections in the ultimatum game, cooperation in social dilemmas, and honesty in lying tasks can be explained by moral preferences was independently shown by many authors.

Krupka and Weber (2013) asked dictators to report, for each available choice, how "socially appropriate" they think other dictators think that choice is; dictators were incentivized to guess the modal answer of other dictators. They found that an equal split was rated as the most socially appropriate choice. They also found that framing effects in the dictator game when passing from the "give" frame to the "take" frame can be explained by a change in the perception of what the most appropriate action is.

11 We wrote "a norm", and not "the norm", because there are different types of norms. For the aim of this review, it is important to distinguish between personal beliefs about what is right and wrong, beliefs about what others approve or disapprove of, and beliefs about what others actually do. We will get to this distinction in more detail later.

Kimbrough and Vostroknutov (2016) introduced a "rule-following" task to measure people's norm-sensitivity, specifically, how important it was to them to follow rules of behavior. In this task, each participant has to control a stick figure walking across the screen from left to right. Along its walk, the figure encounters five traffic lights, each of which turns red when the figure approaches it. Each participant has to decide how long to wait at the traffic light (which turns green after five seconds), knowing that s/he will lose a certain amount of money for each second spent on the waiting. The total amount of time spent waiting is taken as an individual measure of norm-sensitivity. Kimbrough and Vostroknutov found that this parameter predicts giving in the dictator game and the public-goods game, and correlates with rejection thresholds in the ultimatum game. Bicchieri and Chavez (2010) showed that ultimatum-game responders reject the same offer at different rates, depending on the other available offers; in particular, responders tend to accept offers that they consider to be fairer, compared to the other available offers.

In a similar vein, Capraro and Rand (2018) and Capraro and Vanzo (2019) found that giving in the dictator game depends on what people perceive to be the morally right thing to do. Indirect evidence that moral preferences drive giving in the dictator game was also provided by research showing that including moral reminders in the instructions of the dictator game increases giving (Brañas-Garza, 2007; Capraro et al., 2019). In addition, as mentioned in Section 2, Eriksson et al. (2017) showed that framing effects among responders in the ultimatum game can be explained by a change in the perception of what is the morally right thing to do, while Dal Bó and Dal Bó (2014) found that moral reminders increase cooperation in the iterated prisoner's dilemma.

We remark that Capraro (2018) found that, in the ultimatum game, 92% of the proposers and 72% of responders declare that offering half is the morally right thing to do, while Capraro and Rand (2018) found that 81% of the subjects declare that cooperating is the morally right thing to do in the one-shot prisoner's dilemma.

Finally, the fact that honest behavior in economic games in which participants can lie for their benefit is partly driven by moral preferences was suggested by several authors (Gneezy, 2005; Erat and Gneezy, 2012; Fischbacher and Föllmi-Heusi, 2013; Abeler, Nosenzo and Raymond, 2019). Empirical evidence was provided by Cappelen, Sørensen and Tungodden (2013), who found that telling the truth in the sender-receiver game in the Pareto white-lie condition correlates positively with giving in the dictator game, suggesting that 'aversion to lying not only is positively associated with pro-social preferences, but for many a stronger moral motive than the concern for the welfare of others'. Biziou-van Pol et al. (2015) replicated the correlation between honesty in the Pareto white-lie condition and giving in the dictator game and additionally showed a similar correlation with cooperation in the prisoner's dilemma; the authors suggested that cooperating, giving, and truth-telling might be driven by a common motivation to do the right thing. Finally, Bott et al. (2019) found that moral reminders decrease tax evasion in a field experiment with Norwegian tax-payers.

Experimental Regularities 5-6. List (2007), Bardsley (2008), and Cappelen et al. (2013) showed that people tend to be more altruistic in the standard dictator game than in the dictator game with a take option (Experimental Regularity 5). The fact that this behavioral change might reflect moral preferences was suggested by Krupka and Weber (2013). They found that sharing nothing in the standard dictator game is considered to be far less socially appropriate than sharing nothing in the dictator game with a take option. They also showed that social appropriateness can explain why some dictator-game givers prefer to avoid the interaction altogether, given the chance to do so (Experimental Regularity 6): dictators rate exiting the game to be far less socially inappropriate than keeping the money in the standard dictator game.

Experimental Regularity 7. In Sec-tion 2, we reviewed the literature show-ing that behavior in several games, in-cluding the dictator game, the prisoner’sdilemma, the ultimatum game, and thetrade-off game, depends on the languageused to present the instructions, espe-cially when it activates moral concerns.

To summarize, all seven regularitiescan be qualitatively explained by assum-ing that people have moral preferences.In what follows, we review the models ofmoral preferences that have been intro-duced thus far in the literature. The ideathat morality has to be incorporated ineconomic models has been around sincethe foundational work of Adam Smithand Francis Y. Edgeworth (Smith, 2010;Edgeworth, 1881); see (Sen, 1977; Bin-more, 1994; Tabellini, 2008; Bicchieri,2005; Enke, 2019) for more recent ac-counts. However, work on utility func-tions that take moral preferences into ac-count is relatively recent. In this review,we focus on utility functions that can beapplied to all or to most12 of the eco-nomic interactions that are described interms of normal-form games with mone-tary payoffs.13

12Economists have also introduced a number ofdomain-specific models; for example, models to ex-plain cooperation in the prisoner’s dilemma (Bolleand Ockenfels, 1990), honesty in lying tasks (Abeler,Nosenzo and Raymond, 2019; Gneezy, Kajackaiteand Sobel, 2018), fairness in principal-agent mod-els (Ellingsen and Johannesson, 2008), and honestyin principal-agent models (Alger and Renault, 2006,2007). Although useful in their contexts, these mod-els cannot be readily extended to other types of in-teraction.

13Economists have also studied models of morality in other games (see, e.g., Benabou and Tirole (2011)). Recently, economists have also sought to explain political behavior in terms of moral preferences (Bonomi, Gennaioli and Tabellini, 2021; Enke, Polborn and Wu, 2022). Although these models cannot be readily applied to the economic games that are the focus of this review, they show that the idea that moral preferences can help explain people's behavior is gaining traction across different areas of research.

We proceed chronologically. We startby discussing the work of Akerlof andKranton (2000). Their motivation wasto study “gender discrimination in theworkplace, the economics of poverty andsocial exclusion, and the household divi-sion of labor”. To do so, they proposeda utility function that takes into accounta person’s identity. The identity is as-sumed to carry information about howa person should behave, which, in turn,is assumed to depend on the social cate-gories to which a person belongs. In thissetting, Akerlof and Kranton considereda utility function of the form

$$u_i = u_i(a_i, a_{-i}, I_i),$$

where $I_i$ represents $i$'s identity. They showed that their model can qualitatively explain group differences such as the ones that motivated their work. This model is certainly consistent with Experimental Regularities 1-7. Indeed, it suffices to assume that the identity takes into account a tendency to follow the norms. This model is conceptually similar to a previous model proposed by Stigler and Becker (1977), which is based on the idea that preferences should not be defined over marketed goods, but over general commodities that people transform into consumption goods. Although Stigler and Becker (1977) do not aim to explain the experimental regularities that are the focus of this review, their model is consistent with them: for example, people may cooperate in order to maintain good relations, or may act altruistically to experience the warm glow of giving, or may act morally to adhere to their self-image. We refer to Sobel (2005) for a more detailed discussion and for the mathematical equivalence between this model and that of Akerlof and Kranton.

A more specific model, but one based on a similar idea, was introduced by Brekke, Kverndokk and Nyborg (2003). Their initial aim was to explain field experiments showing that paying people to provide a public good can "crowd out" intrinsic motivations. For example, paying donors reduces blood donation (Titmuss, 2018), people's willingness to accept a nuclear-waste repository in their neighborhood (Frey and Oberholzer-Gee, 1997), and volunteering (Gneezy and Rustichini, 2000). To explain these findings, Brekke, Kverndokk, and Nyborg considered economic interactions in which each player $i$ ($i = 1, \ldots, n$) has to put in some effort $e_i$, measured in units of time, to generate a public good $g_i$. At most $T$ units of time are assumed to be available, so that $i$ has to decide how much time $e_i$ to contribute to the public good and how much time $l_i$ to use for leisure: $l_i = T - e_i$. The total quantity of public good is $G = G_p + \sum_{i=1}^{n} g_i$, where $G_p$ is the public provision of the public good. The monetary payoff that player $i$ receives from putting in effort $e_i$ is denoted $x_i$. The key assumption of the model is that player $i$ has a morally ideal effort, denoted $e_i^*$. Brekke, Kverndokk, and Nyborg postulated that player $i$ maximizes a utility function of the form

$$u_i = u_i(x_i, l_i) + v_i(G) + f_i(e_i, e_i^*),$$

where $u_i$ and $v_i$ are increasing and concave, while the function $f_i(e_i, e_i^*)$ is assumed to attain its maximum at $e_i = e_i^*$; we can think of $f_i$ as taking into account the distance between $i$'s actual effort $e_i$ and the ideal effort $e_i^*$ in an inversely related way. As an explicit example, Brekke, Kverndokk, and Nyborg considered the function $f_i(e_i, e_i^*) = -a(e_i - e_i^*)^2$, with $a > 0$. Therefore, ceteris paribus, players aim to maximize their monetary payoff, their leisure time, and the public good, while aiming at minimizing the distance from their moral ideal. Brekke, Kverndokk, and Nyborg supposed that, before deciding their action, players consider their morally ideal effort. They assumed that all players share a utilitarian moral philosophy, so that the morally ideal effort is found by maximizing $W = \sum_i u_i$ with respect to $e_i$. Under these assumptions, they showed that their model is consistent with the crowding-out effect. Specifically, they showed that when a fee is introduced for people who do not contribute to the public good, if this fee is at least equal to the cost of buying $g_i$ units of public good in the market and if this fee is smaller than the utility corresponding to the gain of leisure time due to not contributing, then the moral ideal $e_i^*$ is equal to 0; in other words, the fee becomes a moral justification for not contributing. This intuitively happens because individuals leave the responsibility of ensuring the public good to the organization: the public good is provided by the organization, which buys it using the fees; this is convenient for the individuals, as they gain in leisure time. If we replace time with money, then this utility function can capture the empirical regularities observed in the dictator game and social dilemmas. However, the utility function cannot easily be applied to settings that do not have the form of a public-goods game, such as the ultimatum and trade-off games.
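To see how the moral term interacts with the material incentives, here is a minimal sketch in Python. The functional forms (linear leisure and public-good terms, the quadratic moral-deviation term from the text) and every parameter value (T, b, G_p, e_others, e_star) are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch (not the authors' code) of the Brekke-Kverndokk-Nyborg utility
# u_i = u_i(x_i, l_i) + v_i(G) + f_i(e_i, e_i*), with the monetary payoff held fixed,
# leisure and the public good entering linearly, and f_i = -a * (e - e_star)**2.

import numpy as np

T = 10.0          # time endowment
b = 0.6           # assumed marginal value of the public good
G_p = 5.0         # assumed public provision of the public good
e_others = 12.0   # assumed total effort of the other players
e_star = 6.0      # assumed morally ideal effort


def utility(e, a):
    leisure = T - e
    public_good = b * (G_p + e_others + e)
    moral = -a * (e - e_star) ** 2
    return leisure + public_good + moral


grid = np.linspace(0.0, T, 1001)
for a in [0.0, 0.05, 0.2, 1.0]:
    best = grid[np.argmax([utility(e, a) for e in grid])]
    print(f"moral weight a={a:>4}: chosen effort = {best:.2f} (ideal effort = {e_star})")
```

With these numbers, a player who puts no weight on the moral term contributes nothing, while players with a larger weight a choose an effort closer to the moral ideal.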

A more general utility function was introduced by Benabou and Tirole (2006). It tries to take into account altruism and is motivated by a theory of social signalling, according to which people's actions are associated with reputational costs and benefits (Nowak and Sigmund, 1998; Smith and Bird, 2000; Gintis, Smith and Bowles, 2001). Players choose a participation level for some prosocial activity from an action set $A \subseteq \mathbb{R}$. Choosing $a$ has a cost $c(a)$ and gives a monetary payoff $ya$; $y$ can be positive, negative, or zero. Players are assumed to be characterized by a type $v = (v_a, v_y) \in \mathbb{R}^2$, where $v_a$ is, roughly speaking, the impact of the prosocial factors associated with participation level $a$ on the agent's utility, while $v_y$ is, roughly speaking, the impact of money on the agent's utility. Benabou and Tirole mentioned that $v_a$, the utility of choosing $a$, is determined by at least two factors: the material payoff of the other player and the enjoyment derived from the act of giving. Thus, their approach can capture Andreoni's (1990) notion of warm-glow giving.

Benabou and Tirole then defined the direct benefit of action $a$ to be

$$D(a) = (v_a + v_y y)a - c(a),$$

and the reputational benefit to be

$$R(a) = x\,[\gamma_a E(v_a \mid a, y) - \gamma_y E(v_y \mid a, y)],$$

where $E(v \mid a, y)$ represents the observers' posterior expectation of the player's type $v$, given that the player chose $a$ when the monetary incentive to choose $a$ is $y$. The parameters $\gamma_a$ and $\gamma_y$ are assumed to be non-negative. To understand this assumption, note that, by definition, players with high $v_a$ are prosocial; players with high $v_y$ are greedy. Therefore, the hypothesis that $\gamma_a$ and $\gamma_y$ are non-negative formalizes the idea that people like to be perceived as prosocial ($\gamma_a \geq 0$) and not greedy ($\gamma_y \geq 0$). The parameter $x > 0$ represents the visibility of an action, that is, the probability that the action is visible to other players. Benabou and Tirole then defined the utility function

$$u(a) = D(a) + R(a).$$

They studied this utility function in anumber of contexts in which actions canbe observed and decision options can bedefined in terms of contribution. Whileuseful in its domains of applicability,this utility function cannot be appliedto games where choices cannot be de-scribed in terms of participation levels,such as trade-off games in which play-ers have the role of distributing money,without themselves being affected by thedistribution levels.
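The decomposition u(a) = D(a) + R(a) can be illustrated mechanically with a small Python sketch. The cost function, the player's type, the visibility parameter, and the observers' posterior expectations below are all made-up numbers used only to show how the two terms combine; they are not taken from Benabou and Tirole.

```python
# Minimal, illustrative evaluation of the Benabou-Tirole decomposition u(a) = D(a) + R(a).
# The posterior expectations E[v_a | a, y] and E[v_y | a, y] are supplied directly as
# assumed numbers rather than derived from an equilibrium.

v_a, v_y = 1.2, 1.0        # assumed type: taste for prosociality / taste for money
gamma_a, gamma_y = 0.5, 0.5
x = 1.0                    # action fully visible
y = -0.2                   # assumed (negative) monetary incentive per unit of participation

cost = lambda a: 0.5 * a ** 2

# Assumed observer inferences: higher participation signals a more prosocial,
# less money-driven type.
posterior = {0.0: (0.4, 1.4), 0.5: (0.9, 1.1), 1.0: (1.3, 0.8)}  # a -> (E[v_a|a], E[v_y|a])

for a, (Eva, Evy) in posterior.items():
    D = (v_a + v_y * y) * a - cost(a)
    R = x * (gamma_a * Eva - gamma_y * Evy)
    print(f"a={a}: D={D:+.3f}  R={R:+.3f}  u={D + R:+.3f}")
```

In this toy example the reputational term rewards higher participation levels because observers are assumed to infer a more prosocial, less money-driven type from them.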

A utility function that captures moral motivations and can be applied to all games was introduced by Levitt and List (2007). They argued that the utility of player $i$ when s/he chooses action $a$ depends on two factors. The first factor is the utility corresponding to the monetary payoff associated with action $a$. It is assumed to be increasing in the monetary value of $a$, denoted $x_i$. The second factor is the moral cost or benefit $m_i$ associated with $a$. Levitt and List focused on three factors that can affect the moral value of an action. The first is the negative externality that $a$ imposes on other people. They hypothesized that the externality is an increasing function of $x_i$: the more a player receives from choosing $a$, the less other participants receive. The second factor is the set $n$ of moral norms and rules that govern behavior in the society in which the decision-maker lives. For example, the very fact that an action is illegal may impose an additional cost for that behavior. The third factor is the extent to which actions are observed. For example, if an illegal or an immoral action is recorded, or performed in front of the experimenter, it is likely that the decision-maker pays a greater moral cost than if the same action is performed when no one is watching. The effect of scrutiny is denoted by $s$; greater scrutiny is assumed to increase the moral cost. Levitt and List added this component to take into account the fact that, when the behavior of participants cannot be monitored, people tend to be less prosocial than when it can be (Bandiera, Barankay and Rasul, 2005; List, 2006; Benz and Meier, 2008). They proposed that player $i$ maximizes the utility function

$$u_i(a, x_i, n, s) = m_i(a, x_i, n, s) + w_i(a, x_i).$$

This approach can explain Experimental Regularities 1, 3, 5, and 7. However, the situation described in Experimental Regularity 2 and some instances of Experimental Regularities 4 and 6 violate Levitt and List's assumptions. Specifically, the assumption that the negative externalities associated with player $i$'s action depend negatively on $i$'s monetary payoff does not hold in the ultimatum game, where rejecting a low offer (the choice that is typically viewed as the moral one (Bicchieri and Chavez, 2010)) decreases both players' monetary payoffs. This is also the case in the sender-receiver game in the Pareto white-lie condition, where telling the truth is viewed as the moral choice, yet minimizes the monetary payoffs of both players. But these are minor limitations; they can be addressed by considering a slightly more general utility function that depends not only on $x_i$, but also on $\sum_{j \neq i} x_j$ (and dropping the assumption that these two variables are inversely related).

A similar approach was used by Lopez-Perez (2008), although he focused on extensive-form games. He assumed the existence of a "norm" function $\Psi_i$ that associates with each information set $h$ for player $i$ in a given game an action $\Psi_i(h)$ that can be taken at $h$. Intuitively, $\Psi_i(h)$ is the moral choice at information set $h$. Lopez-Perez assumed that player $i$ receives a psychological disutility (in the form of a negative emotion, such as guilt or shame) whenever he violates the norm expressed by $\Psi_i$. He did not propose a specific functional expression for the utility function; rather, he studied a particular example of a norm, the E-norm. This norm is conceptually similar to the one used by Charness and Rabin (2002), described in Section 3. Lopez-Perez showed that a utility function that gives higher utility to strategies that perform more moral actions qualitatively fits the empirical data in several contexts. Although he did not explicitly show that the experimental regularities presented in Section 2 can be explained using his model, it is easy to show that this is the case. Indeed, it suffices to assume that the mapping $\Psi_i$ just associates with the game the morally right action for player $i$. (We can view a normal-form game as having a single information set, so we can view $\Psi_i$ as just applying to the whole game.) Note, however, that the norm $\Psi_i$ defined in this way is different from the E-norm considered by Lopez-Perez, which is outcome-based.

Andreoni and Bernheim (2009) fo-cused on the dictator game and intro-duced a utility function that combineselements from theories of social imagewith inequity aversion. The fact thatsome people care about how others seethem has been recognized by economistsfor at least two decades (Bernheim, 1994;Glazer and Konrad, 1996). Combiningthese ideas with inequity aversion, An-dreoni and Bernheim proposed that dic-tators maximize the utility function

$$u_i(x_i, m, t) = f(x_i, m) + t\,g\!\left(x_i - \tfrac{1}{2}\right),$$


where $x_i \in [0, 1]$ is the monetary payoff of the dictator (the endowment is normalized to 1), $m \geq 0$ is the social image of the dictator, $f$ represents the utility associated with $x_i$ (and is assumed to be increasing in both $x_i$ and $m$ and concave in $x_i$), $t \geq 0$ is a parameter representing the extent to which the dictator cares about minimizing inequity, and $g$ is a (twice continuously differentiable, strictly concave) function that attains its maximum at 0 and formalizes the intuition that the dictator gets a disutility from not implementing the equal distribution. Andreoni and Bernheim assumed that there is an audience $A$ that includes the recipient and possibly other people (e.g., the experimenter). The audience observes the donation $1 - x_i$ and then forms beliefs about the dictator's level of fairness $t$. They assumed that the dictator's social image is some function $B$ of $\Phi$, the cumulative distribution representing $A$'s beliefs about the dictator's level of fairness. For example, the function $B$ could be the mean of $t$ given $\Phi$. Andreoni and Bernheim showed that, with some minimal assumptions about $\Phi$, this utility function explains donations in the dictator game quite well. In particular, it is consistent with the prevalence of exactly equal splits and with the lack of donations slightly above or slightly below 50% of the endowment. They also considered a variant of the dictator game in which, with some probability $p$, the dictator's choice is not implemented and the recipient instead receives an amount $x_0$ close to 0. Intuitively, this should have the effect of creating a peak of voluntary donations at $x_0$, since dictators can excuse the outcome as being beyond their control, thus preserving their social image. Andreoni and Bernheim observed this behavior, and showed that it is indeed consistent with their utility function. Unfortunately, it is not clear how to extend this utility function beyond dictator-like games.

A conceptually similar approach wasconsidered by DellaVigna, List and Mal-mendier (2012). Instead of formalizinga concern for social image, their util-ity function takes into account the ef-fect of social pressure (which might affectpeople’s decisions through social image).This approach was motivated by a door-to-door campaign with three treatments:a flier treatment, in which householdswere informed one day in advance by aflier on their doorknob of the upcomingvisit of someone soliciting donations; aflier with an opt-out checkbox treatment,where the flier contained a box to bechecked in case the household did notwant to be disturbed; and a baseline,in which households were not informedabout the upcoming visit. Della Vigna,List, and Malmendier found that the flierdecreased the frequency of the door be-ing opened, compared to the baseline.Moreover, the flier with an opt-out checkbox also decreased giving, but the ef-fect was significant only for small dona-tions. To explain these findings, theyconsidered a two-stage game between aprospect (potential donor) and a solic-itor. In the first stage, the prospectmay receive a flier and, if so, he noticesthe flier with probability r ∈ (0, 1]. Inthe second stage, the solicitor visits thehome. The probability of the prospectopening the door is denoted by h: ifthe prospect did not notice the flier, his equal to the baseline probability h0;otherwise, the prospect can update thisprobability at a cost c(h), with c(h0) = 0,c′(h0) = 0, c′′ > 0. That is, not updatingthe probability of opening the door hasno cost; updating it has a cost that de-pends monotonically on the adjustment.A donor can donate either in person orthrough other channels (e.g., by email).


Let $g$ be the amount donated in person and $g_m$ the amount donated through other means. A donor's utility is taken to be

$$u(g, g_m) = f(w - g - g_m) + \alpha\, v(g + \delta g_m, g_{-i}) - \sigma (g_\sigma - g)\,\mathbf{1}_{g < g_\sigma},$$

where w represents the initial wealth ofthe donor; f(w − g − gm) represents theutility of private consumption; α is a pa-rameter representing the extent to whichthe donor cares about the payoff of thecharity, which can be negative;14 δ is theproportion of the donation made throughother channels that does not reach theintended recipient (e.g., the cost of anenvelope and a stamp); g−i is the (ex-pected) donation made by other donors;σ is a parameter representing the extentto which the donor cares about socialpressure; gσ is a trade-off donation: ifthe donor donates less than gσ when thesolicitor is present, then the donor paysa cost depending on gσ − g. Della Vi-gna, List, and Malmendier showed thatthis utility function captures their exper-imental results well. However, it is notclear how to extend it to qualitativelydifferent decision contexts.

14In fact, if the social pressure (third addend of the utility function) is high enough, someone can end up donating even though he dislikes the charity.

Kessler and Leider (2012) considered a utility function in which players receive a disutility when they deviate from a norm; this leads to a model similar to that of Brekke, Kverndokk and Nyborg (2003) that we described above. The main difference is that, instead of considering time, Kessler and Leider (2012) considered money. Players are assumed to have a norm $\bar{x}$ that represents the ideal monetary contribution. The utility of player $i$ is defined as

$$u_i(x_i, x_j, \bar{x}) = \pi_i(x_i, x_j) - \phi_i\, g(\bar{x} - x_i)\,\mathbf{1}_{x_i < \bar{x}},$$

where $\pi_i(x_i, x_j)$ is the monetary payoff of player $i$ when $i$ contributes $x_i$ and $j$ contributes $x_j$, $\phi_i$ is a parameter representing $i$'s norm-sensitivity, and $g$ represents the disutility that player $i$ gets from deviating from the norm: the disutility is 0 if $x_i \geq \bar{x}$ and increases with $\bar{x} - x_i$ if $x_i < \bar{x}$. Kessler and Leider applied their utility function to an experiment involving four games: an additive two-player public-goods game, a multiplicative public-goods game, a double-dictator game, and a Bertrand game. The set of contributions available depended on the game; they were chosen to ensure a mismatch between the individual monetary payoff-maximizing action and the socially beneficial action. Participants played ten rounds of these games, with random re-matching. Some of these rounds were preceded by a contracting phase in which participants could agree on which contribution to choose. Kessler and Leider observed that the presence of a contracting phase increased contributions. They argued that their utility function fits their data well; in particular, contracting increased contribution by increasing the norm.

In subsequent work, Kimbrough and Vostroknutov (2016) used this approach to explain prosocial behavior in the public-goods game, the dictator game, and the ultimatum game. The key innovation of this work is the estimation of the parameter $\phi_i$, which was done using the "rule-following task" discussed earlier in this section. They found that their measure of norm-sensitivity significantly correlates with cooperation in the public-goods game, with giving in the dictator game, and with rejection thresholds in the ultimatum game (but not with offers in the ultimatum game, although the results trend in the expected direction). In sum, this utility function is very useful in its domain of applicability. Moreover, although it might seem difficult to extend it to situations in which strategies (and norms) cannot be expressed in terms of contributions, it can easily be extended to games where the space of strategy profiles is a metric space $(X, d)$, where $d$ is the metric (i.e., a distance function between strategies). In this setting, we can replace $\bar{x} - x$ in the utility function with $d(x, \bar{x})$. It is easy to see that this utility function can explain all seven experimental regularities, assuming that $\bar{x}$ coincides with what people view as the moral choice.
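As an illustration of how the norm-sensitivity parameter shifts behavior in this family of models, here is a minimal Python sketch; the linear public-goods payoff, the norm (contribute the full endowment), the linear disutility g, and the values of φ are all assumptions made for the example.

```python
# Minimal sketch of the Kessler-Leider / Kimbrough-Vostroknutov style utility
# u_i = pi_i(x_i, x_j) - phi_i * g(xbar - x_i) * 1{x_i < xbar}
# in a linear two-player public-goods game. All numbers here are assumed.

ENDOWMENT = 10
MPCR = 0.7        # assumed marginal per-capita return of the public good
XBAR = 10         # assumed norm: contribute the full endowment
X_OTHER = 5       # assumed contribution of the other player


def material_payoff(x_i, x_j):
    return ENDOWMENT - x_i + MPCR * (x_i + x_j)


def utility(x_i, phi_i, g=lambda gap: gap):
    norm_gap = max(XBAR - x_i, 0)          # only shortfalls relative to the norm are penalized
    return material_payoff(x_i, X_OTHER) - phi_i * g(norm_gap)


for phi in [0.0, 0.2, 0.5]:
    best = max(range(ENDOWMENT + 1), key=lambda x: utility(x, phi))
    print(f"phi={phi}: best contribution = {best}")
```

For low norm-sensitivity the material incentive dominates and the player contributes nothing; once φ is large enough, the player contributes the full normative amount.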

Lazear, Malmendier and Weber (2012) considered a utility function motivated by their empirical finding that some sharers in the dictator game prefer to avoid the dictator-game interaction, given the possibility of doing so. They considered situations where players can choose one of two scenarios. In the scenario with a sharing option, the agent plays the standard dictator game in the role of the dictator: he receives an endowment $e$, which he has to split between himself ($x_1$) and the recipient ($x_2$). In the scenario without a sharing option, the player simply receives $x_1 = e$, while the recipient receives $x_2 = 0$. They proposed a utility that depends on the scenario: $u = u(D, x_1, x_2)$, where $D = 1$ represents the scenario with a sharing opportunity, while $D = 0$ represents the scenario without the sharing opportunity. The fact that the utility depends on the scenario makes it possible to classify people as one of three types. The first type, the "willing sharers", consists of individuals who prefer the sharing scenario and to share their endowment (at least, to some extent). Formally, this type of player is characterized by the conditions

$$\max_{x_1 \in [0,e]} u(1, x_1, e - x_1) > u(0, e, 0) \quad \text{and} \quad \arg\max_{x_1 \in [0,e]} u(1, x_1, e - x_1) < e.$$

The second type, the "non-sharers", never share. They are defined by the condition

$$\arg\max_{x_1 \in [0,e]} u(1, x_1, e - x_1) = e.$$

The third type, the "reluctant sharers", is perhaps the most interesting. These players are characterized by the remaining conditions:

$$\max_{x_1 \in [0,e]} u(1, x_1, e - x_1) < u(0, e, 0) \quad \text{and} \quad \arg\max_{x_1 \in [0,e]} u(1, x_1, e - x_1) < e.$$

The first condition says that these play-ers prefer to avoid the sharing opportu-nity when given the possibility of doingso. However, if they are forced to playthe dictator game, they share part oftheir endowment. Lazear, Malmendier,and Weber showed that there are a sig-nificant number of people of the thirdtype. Indeed, some subjects even preferto pay a cost to avoid the sharing oppor-tunity. While this gives a great deal ofinsight, it is hard to generalize this typeof utility function to other kinds of inter-action.
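The three types can be reproduced with a small sketch once we commit to some parametric form of u. The form used below (a warm-glow term w·sqrt(e − x_1) and a fixed psychological cost c of entering the sharing scenario) is purely hypothetical and is chosen only because it generates all three types; it is not the specification used by Lazear, Malmendier, and Weber.

```python
# Hypothetical utility: u(1, x1, e - x1) = x1 + w*sqrt(e - x1) - c,  u(0, e, 0) = e.
# w is a warm-glow weight and c a fixed cost of facing the sharing situation;
# both are illustrative assumptions used only to show how the three types arise.

import math

E = 10.0  # endowment


def classify(w, c, steps=1000):
    grid = [E * k / steps for k in range(steps + 1)]
    utilities = [(x1 + w * math.sqrt(E - x1) - c, x1) for x1 in grid]
    best_u, best_x1 = max(utilities)
    if best_x1 >= E - 1e-9:
        return "non-sharer"
    return "willing sharer" if best_u > E else "reluctant sharer"


for w, c in [(0.0, 0.0), (3.0, 1.0), (3.0, 4.0)]:
    print(f"w={w}, c={c}: {classify(w, c)}")
```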

Krupka and Weber (2013) introduced a model in which subjects are torn between following their self-interest and following the "injunctive norm", that is, what they believe other people would approve or disapprove of (Cialdini, Reno and Kallgren, 1990). Let $A = \{a_1, \ldots, a_k\}$ be the set of actions available. Krupka and Weber assumed the existence of a social norm function that associates to each action $a_j \in A$ a number $N(a_j) \in \mathbb{R}$ representing the degree of social appropriateness of $a_j$. $N$ is hypothesized to be independent of the individual; it represents the extent to which society views $a_j$ as socially appropriate. $N(a_j)$ can be negative, in which case $a_j$ is viewed as socially inappropriate. The utility of player $i$ is defined as

$$u_i(a_j) = v_i(\pi_i(a_j)) + \gamma_i N(a_j),$$

where $\pi_i(a_j)$ is the monetary payoff of player $i$ associated with action $a_j$, $v_i$ is the utility associated with monetary payoffs, and $\gamma_i$ is the extent to which $i$ cares about doing what is socially appropriate. As we mentioned earlier in this section, one of Krupka and Weber's contributions was to introduce a method of measuring social appropriateness. People were shown the available actions and, for each of them, asked to rate how socially appropriate they were. Participants were also incentivized to guess the mode of the choices made by the other participants. It is not difficult to show that Krupka and Weber's utility function is consistent with all the experimental regularities, at least if we assume that the most socially appropriate choice coincides with what people believe to be the morally right thing to do. This suggests that one possible limitation of Krupka and Weber's approach is that it takes into account only the injunctive norm. In general, there are situations where what people believe others would approve of (the injunctive norm) is different from the choice they believe to be morally right (the personal norm). For example, suppose that a vegan must decide whether to buy a steak for $5 or a vegan meal for $10. Knowing that the vast majority of the population eats meat, the vegan might believe that others would view the steak as the most socially appropriate choice. However, since the vegan thinks that eating meat is morally wrong, she would opt for buying the vegan meal. This suggests that there might be situations where social appropriateness might not be a good predictor of human behavior. We return to this issue in Section 6.
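Here is a minimal Python sketch of this utility in a $10 dictator game. The appropriateness scores follow the qualitative pattern just described (keeping everything is rated very inappropriate, the equal split very appropriate), but the specific numbers, the linear v_i, and the values of γ are illustrative assumptions.

```python
# Minimal sketch of u_i(a_j) = v_i(pi_i(a_j)) + gamma_i * N(a_j) in a $10 dictator game.
# N maps "keep this amount, give the rest" to an assumed social-appropriateness score.

N = {10: -1.0, 8: -0.5, 6: 0.0, 5: 1.0}   # keep-amount -> assumed appropriateness
v = lambda payoff: payoff                  # assumed linear utility of money


def best_action(gamma):
    return max(N, key=lambda keep: v(keep) + gamma * N[keep])


for gamma in [0.0, 2.0, 6.0]:
    keep = best_action(gamma)
    print(f"gamma={gamma}: keep {keep}, give {10 - keep}")
```

As γ grows, the predicted choice moves from keeping the whole endowment to the equal split.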

Alger and Weibull (2013) introduceda notion of homo moralis. They consid-ered a symmetric 2-player game wherethe players have a common strategy setA.15 Let πi be i’s payoff function; sincethe game is symmetric, we have thatπi(ai, aj) = πj(aj, ai). Players may differin how much they care about morality.Morality is defined by taking inspirationfrom Kant’s categorical imperative: oneshould make the choice that is universal-izable, that is, the action that would bebest if everyone took it (e.g., cooperat-ing in the prisoner’s dilemma).16 Morespecifically, they defined player i to be ahomo moralis if his utility function hasthe form

$$u_i(a_1, a_2) = (1-k)\,\pi_i(a_1, a_2) + k\,\pi_i(a_1, a_1),$$

where $k \in [0, 1]$ represents the degree of morality. If $k = 0$, then we recover the standard homo economicus; if $k = 1$, we have homo kantiensis, someone who makes the universalizable choice. Alger and Weibull proved that evolution results in the degree of morality $k$ being equal to the index of assortativity of the matching process. We refer to their paper for the exact definition of the index of assortativity. For our purposes, all that is relevant is that it is a non-negative number that takes into account the potential non-randomness of the matching process: it is zero with random matching, and greater than zero for non-random matching processes. Thus, in the particular case of random matching, Alger and Weibull's theorem shows that evolution results in homo moralis with degree of morality $k = 0$, namely, homo economicus. If the matching is not random, assortativity can favor the emergence of homo moralis with a degree of morality strictly greater than zero.

15Their results also apply to non-symmetric games where players do not know a priori which role they will play.

16See Laffont (1975) for a macroeconomic application of this principle. Roemer (2010) defined a notion of Kantian equilibrium in public-goods type games; these are strategy profiles in which no player would prefer all other players to change their contribution levels by the same multiplicative factor.

In subsequent work, Alger and Weibull (2016) showed that homo moralis preferences are evolutionarily stable, according to an appropriate definition of evolutionary stability for preferences. This approach is certainly useful for understanding the evolution of morality. However, if the payoff function $\pi$ is equal to the monetary payoff, then homo moralis preferences are outcome-based, and so cannot explain Experimental Regularities 4-7. On the other hand, if the payoff function is not equal to the monetary payoff (or otherwise outcome-based), then how should it be defined? Despite these limitations, it is important to note that there are some framing effects that can be explained by the model of Alger and Weibull (2013). If, for example, market interactions are formed through a matching process with less assortativity than business-partnership or co-authorship interactions, then a lower degree of morality should be expected in market interactions.
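To see what the degree of morality does in a concrete game, the following Python sketch applies the homo moralis utility to a prisoner's dilemma with the standard textbook payoffs (our assumption; Alger and Weibull work with abstract payoff functions).

```python
# Homo moralis utility u_i(a1, a2) = (1 - k) * pi(a1, a2) + k * pi(a1, a1),
# evaluated in a prisoner's dilemma with assumed payoffs
# pi(c, c) = 3, pi(c, d) = 0, pi(d, c) = 5, pi(d, d) = 1.

PI = {("c", "c"): 3, ("c", "d"): 0, ("d", "c"): 5, ("d", "d"): 1}


def homo_moralis_utility(a1, a2, k):
    return (1 - k) * PI[(a1, a2)] + k * PI[(a1, a1)]


for k in [0.0, 0.25, 0.4, 0.6, 1.0]:
    best = {a2: max("cd", key=lambda a1: homo_moralis_utility(a1, a2, k)) for a2 in "cd"}
    print(f"k={k}: best reply vs c -> {best['c']}, vs d -> {best['d']}")
```

With these payoffs, defection remains dominant for k below 1/3, while for k above 1/2 cooperation becomes the best reply to either action of the opponent.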

Kimbrough and Vostroknutov (2020a;2020b) proposed a utility function simi-lar to the one proposed by Krupka andWeber (2013) that was discussed earlier.Specifically, they proposed that peo-ple are torn between maximizing theirmaterial payoff and (not) doing whatthey think society would (dis)approve of.This corresponds to the utility function

$$u_i(a) = v_i(\pi_i(a)) + \phi_i\, \eta(a),$$

where $v_i(\pi_i(a))$ represents the utility from the monetary payoff $\pi_i(a)$ corresponding to action $a$, $\phi_i$ represents the extent to which player $i$ cares about (not) doing what he thinks society would (dis)approve of, and $\eta(a)$ is a measure of the extent to which society approves or disapproves of $a$. The key difference between this and Krupka and Weber's approach is in the definition of $\eta$, which corresponds to Krupka and Weber's $N$. While Krupka and Weber defined $N$ empirically, by asking experimental subjects to guess what they believe others would find socially (in)appropriate, Kimbrough and Vostroknutov defined $\eta$ in terms of the cumulative dissatisfaction that players experience when a certain strategy profile is realized, rather than other possible strategy profiles. Kimbrough and Vostroknutov's notion of "dissatisfaction" corresponds to what is more typically called regret: the difference between what a player could have gotten and what he actually got. Thus, the cumulative dissatisfaction is the sum of the regrets of the players. In their model, Kimbrough and Vostroknutov assumed that the normatively best outcome is the one that minimizes aggregate dissatisfaction. This implies that the model predicts that Pareto-dominant strategy profiles are always more socially appropriate than Pareto-dominated strategy profiles. Therefore, while this model explains well some types of moral behavior, such as cooperation, it fails to explain moral behaviors that are Pareto dominated, such as honesty when lying is Pareto optimal, and framing effects in the trade-off game that push people to choose the equal but Pareto-dominated allocation of money.
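The regret-based construction can be illustrated with a short sketch. We read "cumulative dissatisfaction" as the sum, over players, of the gap between the best payoff available to a player anywhere in the game and the payoff realized in the profile; this is a simplified reading made only for illustration, not Kimbrough and Vostroknutov's exact definition, and the prisoner's-dilemma payoffs are assumed.

```python
# Illustrative reading of the appropriateness measure: eta(profile) = - sum of regrets,
# where a player's regret is the gap between their best payoff anywhere in the game
# and the payoff realized in the profile.

PAYOFFS = {               # profile -> (payoff of player 1, payoff of player 2)
    ("c", "c"): (3, 3),
    ("c", "d"): (0, 5),
    ("d", "c"): (5, 0),
    ("d", "d"): (1, 1),
}

best = [max(p[i] for p in PAYOFFS.values()) for i in range(2)]  # best payoff per player


def eta(profile):
    realized = PAYOFFS[profile]
    return -sum(best[i] - realized[i] for i in range(2))


for profile in PAYOFFS:
    print(profile, "eta =", eta(profile))
# (c, c) gets the highest eta, i.e., is ranked most socially appropriate, and
# Pareto-dominated profiles such as (d, d) are ranked lower, as noted in the text.
```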

Capraro and Perc (2021) introduced a utility function which, instead of considering people's tendency to follow the injunctive norm, considers their tendency to follow their personal norms, that is, their internal standards about what is right or wrong in a given situation (Schwartz, 1977). Specifically, Capraro and Perc proposed the utility function

$$u_i(a) = v_i(\pi_i(a)) + \mu_i P_i(a),$$

where $v_i(\pi_i(a))$ represents the utility from the monetary payoff $\pi_i(a)$ corresponding to action $a$, $\mu_i$ represents the extent to which player $i$ cares about doing what he thinks to be morally right, and $P_i(a)$ represents the extent to which player $i$ thinks that $a$ is morally right. From a practical perspective, the main difference between this utility function and those proposed by Krupka and Weber (2013) and by Kimbrough and Vostroknutov (2016) is that personal norms are individual, that is, $P_i(a)$ depends specifically on player $i$, while the injunctive norm depends only on the society where an individual lives, and so is the same for all individuals in the same society. This utility function is also consistent with all seven regularities described in Section 2.
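The vegan example given above in the discussion of Krupka and Weber's model makes the difference between the two utility functions concrete. In the following Python sketch, the prices come from that example, while the norm scores and the weights γ and μ are illustrative assumptions.

```python
# Contrast between an injunctive-norm utility (Krupka-Weber style) and a
# personal-norm utility (Capraro-Perc style) for the vegan example in the text.
# All scores and weights are assumed for illustration.

options = {"steak": -5.0, "vegan meal": -10.0}   # monetary payoff = minus the price

N = {"steak": +1.0, "vegan meal": -0.5}          # assumed injunctive norm (society eats meat)
P = {"steak": -1.0, "vegan meal": +1.0}          # assumed personal norm of the vegan

gamma = mu = 6.0                                  # assumed norm weights

u_injunctive = {a: options[a] + gamma * N[a] for a in options}
u_personal = {a: options[a] + mu * P[a] for a in options}

print("injunctive-norm model chooses:", max(u_injunctive, key=u_injunctive.get))
print("personal-norm model chooses:  ", max(u_personal, key=u_personal.get))
```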

Finally, Basic and Verrina (2021) pro-posed a utility function that combinespersonal and injunctive norms:

$$u_i(a) = v_i(\pi_i(a)) + \gamma_i S(a) + \delta_i P_i(a),$$

where $\gamma_i$ and $\delta_i$ represent the extent to which player $i$ cares about following the injunctive and the personal norm, respectively. This utility function was suggested to the authors by experimental evidence showing that injunctive norms and personal norms are differentially associated with giving in the dictator game (in the standard version and in a variant with a tax) and with behavior in the ultimatum game (as well as in a third-party punishment game). This utility function, too, is consistent with all seven experimental regularities that we are considering in this review.

At the end of Section 3, we observedthat another limitation of social pref-erences is that they predict correla-tions between different prosocial behav-iors that are not observed in experimen-tal data. We conclude this section byobserving that the lack of these corre-lations is consistent with at least somemoral preferences. Indeed, several mod-els of moral preference (e.g., Akerlof andKranton (2000); Levitt and List (2007);Lopez-Perez (2008); Krupka and We-ber (2013); Kimbrough and Vostroknu-tov (2016); Capraro and Perc (2021);Basic and Verrina (2021)) do not as-sume that morality is unidimensional. Ifmorality is multidimensional and differ-ent people may weigh different dimen-sions differently, then it is possible that,for some people, the right thing to do isto act altruistically, while for others itis to punish antisocial behavior, and foryet others, it is to minimize inequity; thiswould rationalize the experimental find-ings of Chapman et al. (2018). We willcome back to the multidimensionality ofmorality in Section 6.

5. Language-based preferences

In this section, we go beyond moralpreferences and consider settings wherean agent’s utility depends, not just onwhat happens, but on how the agent feelsabout what happens, which is largelycaptured by the agent’s beliefs, and howwhat happens is described, which is cap-tured by the agent’s language.

Work on a formal model, called a psychological game, for taking an agent's beliefs into account in the utility function started with Geanakoplos, Pearce and Stacchetti (1989), and was later extended by Battigalli and Dufwenberg (2009) to allow for dynamic beliefs, among other things. The overview by Battigalli and Dufwenberg (2020) shows how the fact that the utility function in psychological games can depend on beliefs allows us to capture, for example, guilt feelings (whether Alice leaves a tip for a taxi driver, Bob, in a foreign country might depend on how guilty Alice would feel if she didn't leave a tip, which in turn depends on what Alice believes Bob is expecting in the way of a tip), reciprocity (how kind player i is to j depends on i's beliefs regarding whether j will reciprocate), other emotions (disappointment, frustration, anger, regret), image (your belief about how others perceive you), and expectations (how what you actually get compares to your expectation of what you will get; see also (Koszegi and Rabin, 2006)), among other things.

Having the agent’s utility function de-pend on language, how the world is de-scribed, provides a yet more general wayto express preferences. (It is more gen-eral since the language can include a de-scription of the agent’s beliefs; see be-low.) Experimental Regularity 7 de-scribes several cases in which people’s be-havior depends on the words used to de-scribe the available actions. Language-based preferences allow us to explain Ex-perimental Regularity 7, as well as all theother experimental regularities regardingsocial interactions listed in Section 2, ina straightforward way. From this pointof view, we can interpret language-basedmodels as a generalization of moral pref-erences, where the utility of a sentenceis assumed to carry the moral value ofthe action described by that sentence.Language-based models are strictly moregeneral than moral preferences. In thissection, we will show that they can ex-plain other well-known regularities (e.g.,ones not involving social interactions)that have been found in behavioral ex-periments, such as the Allais paradox.

A classic example is standard framing effects, where people's decisions depend on whether the alternatives are described in terms of gains or losses (Tversky and Kahneman, 1985). It is well known, for example, that presenting alternative medical treatments in terms of survival rates versus mortality rates can produce a marked difference in how those treatments are evaluated, even by experienced physicians (McNeil et al., 1982). A core insight of prospect theory (Kahneman and Tversky, 1979) is that subjective value depends not (only) on facts about the world, but on how those facts are viewed (as gains or losses, dominated or undominated options, etc.). And how they are viewed often depends on how they are described in language. For example, Thaler (1980) observed that the credit card lobby preferred that the difference between the price charged to cash and credit card customers be presented as a discount for paying cash rather than as a surcharge for paying by credit card. The two different descriptions amount to taking different reference points.17

Such language dependence is ubiqui-tous. We celebrate 10th and 100th an-niversaries specially, and make a big dealwhen the Dow Jones Industrial Averagecrosses a multiple of 1,000, all because wehappen to work in a base 10 number sys-tem. Prices often end in .99, since peo-ple seem to perceive differently the differ-ence between $19.99 and $20 and the dif-ference between $19.98 and $19.99. Werefer to (Strulov-Shlain, 2019) for somerecent work on and an overview of thiswell-researched topic.

17Tversky and Kahneman emphasized the distinction between gains and losses in prospect theory, but they clearly understood that other features of a description were also relevant. However, note that prospect theory applied to monetary outcomes results in yet another instance of outcome-based preferences, so it cannot explain Experimental Regularities 4–7.

One important side effect of the use of language, to which we return below, is that it emphasizes categories and clustering. For example, as Krueger and Clement (1994) showed, when people were asked to estimate the average high and low temperatures in Providence, Rhode Island, on various dates, while they were fairly accurate, their estimates were relatively constant for any given month, and then jumped when the month changed: the difference in estimates for two equally spaced days was significantly higher if the dates were in different months than if they were in the same month. This clustering arises in likelihood estimation as well. We often assess likelihoods using words like "probable", "unlikely", or "negligible", rather than numeric representations; and when numbers are used, we tend to round them (Manski and Molinari, 2010).

The importance of language to eco-nomics was already stressed by Rubin-stein in his book Economics and Lan-guage (Rubinstein, 2000). In Chapter 4,for example, he considers the impact ofhaving an agent’s preferences be defin-able in a simple propositional language.There have been various formal modelsthat take language into account. For ex-ample:

• Lipman (1999) considers "pieces of information" that an agent might receive, and takes the agent's state space to be characterized by maximal pieces of information. He also applies his approach to framing problems, among other things.

• Although Ahn and Ergin (2010) do not consider language explicitly, they do allow for the possibility that there may be different descriptions of a particular event, and use this possibility to capture framing. For them, a "description" is a partition of the state space.

• Blume, Easley and Halpern (2021) take an agent's objects of choice to be programs, where a program is either a primitive program or has the form if t then a else b, where a and b are themselves programs, and t is a test. A test is just a propositional formula, so language plays a significant role in the agent's preferences. Blume, Easley, and Halpern also show how framing effects can be captured in their approach.

• There are several approaches to decision-making that can be viewed as implicitly based on language. For example, a critical component of Gilboa and Schmeidler's case-based decision theory (Gilboa and Schmeidler, 2001) is the notion of a similarity function, which assesses how close a pair of problems are to each other. We can think of problems as descriptions of choice situations in some language. Jehiel's notion of analogy-based expectation equilibrium (Jehiel, 2005) assumes that there is some way of partitioning situations into bundles that, roughly speaking, are treated the same way when it comes to deciding how to move in a game. Again, we can think of these bundles as ones whose descriptions are similar. Finally, Mullainathan (2002) assumes that people use coarse categories (similar in spirit to Jehiel's analogy bundles: the categories partition the space of possibilities) to make predictions. While none of these approaches directly models the language used, many of the examples they use are language-based.

• While not directly part of the utility function, the role of vagueness and ambiguity in language, how it affects communication, and its economic implications have been studied and modeled (see, e.g., (Blume and Board, 2014; Halpern and Kets, 2015)).

We focus here on language-based games(Bjorndahl, Halpern and Pass, 2013),where the utility function directly de-pends on the language. As we shall see,language-based games provide a way offormalizing all the examples above. Thefollowing example, which deals with sur-prise, gives a sense of how language-based games work.

EXAMPLE 5.1: (Bjorndahl, Halpernand Pass, 2013) Alice and Bob have beendating for a while now, and Bob has de-cided that the time is right to pop thebig question. Though he is not one forfancy proposals, he does want it to bea surprise. In fact, if Alice expects theproposal, Bob would prefer to postponeit entirely until such time as it might bea surprise. Otherwise, if Alice is not ex-pecting it, Bob’s preference is to take theopportunity.

We can summarize this scenario by thepayoffs for Bob given in Table 1.

           p      ¬p
 BAp       0       1
¬BAp       1       0

Table 1—The surprise proposal.

In this table, we denote Bob’s two strate-gies, proposing and not proposing, by pand ¬p, respectively, and use BAp (re-spectively, ¬BAp) to denote that Aliceis expecting (respectively, not expecting)the proposal. (More precisely, BAp saysthat Alice believes that Bob will propose;we are capturing Alice’s expectations byher beliefs.) Thus, although Bob is theonly one who moves in this game, hisutility depends, not just on his moves,but on Alice’s expectations.

This choice of language already illus-trates one of the features of the language-based approach: coarseness. We usedquite a coarse language to describe Al-ice’s expectation: she either expects theproposal or she doesn’t. Since the expec-tation is modeled using belief, this ex-ample can be captured using a psycho-logical game as well. Of course, whetheror not Alice expects a proposal may bemore than a binary affair: she may, forexample, consider a proposal unlikely,somewhat likely, very likely, or certain.In a psychological game, Alice’s beliefswould be expressed by placing an arbi-trary probability α ∈ [0, 1] on p. Butthere is good reason to think that an ac-curate model of her expectations involvesonly a small number k of distinct “levels”of belief, rather than a continuum. Ta-ble 1, for simplicity, assumes that k = 2,though this is easily generalized to largervalues.

Once we fix a language (which is just a finite or infinite set of formulas), we can take a situation to be a maximal consistent set of formulas; that is, a complete description of the world in that language.18 In the example above, there are four situations: {p, BAp} (Bob proposes and Alice expects the proposal), {p, ¬BAp} (Bob proposes but Alice is not expecting it), {¬p, BAp} (Bob does not propose although Alice is expecting him to), and {¬p, ¬BAp} (Bob does not propose and Alice is not expecting a proposal). An agent's language describes all the features of the game that are relevant to the player. An agent's utility function associates a utility with each situation, as in Table 1 above. Standard game theory is the special case where, given a set Σ_i of strategies (moves) for each player i, the formulas have the form play_i(σ_i) for σ_i ∈ Σ_i. The situations are then strategy profiles.

18What counts as a maximal consistent set of formulas depends on the semantics of the language. We omit the (quite standard) formal details here; they can be found in (Bjorndahl, Halpern and Pass, 2013).

A normal-form psychological game canbe viewed as a special case of a language-based game where (a) the language talksonly about agent’s strategies and agents’possibly higher-order beliefs about thesestrategies (e.g., Alice’s beliefs aboutBob’s beliefs about Alice’s beliefs aboutthe proposal), and (b) those beliefs aredescribed using probabilities. For exam-ple, taking α to denote Alice’s probabil-ity of p, psychological game theory mighttake Bob’s utility function to be the fol-lowing:

$$u_B(x, \alpha) = \begin{cases} 1 - \alpha & \text{if } x = p\\ \alpha & \text{if } x = \neg p.\end{cases}$$

The function uB agrees with Table 1 atits extreme points if we identifyBAp withα = 1 and ¬BAp with α = 0. Otherwise,for the continuum of other values that αmay take between 0 and 1, uB yields aconvex combination of the correspondingextreme points. Thus, in a sense, uB is acontinuous approximation to a scenariothat is essentially discrete.
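The threshold structure of Bob's problem can be checked in a few lines of Python; the cutoff of 1/2 follows directly from the definition of u_B.

```python
# Bob's utility in the surprise-proposal example: u_B(p, alpha) = 1 - alpha,
# u_B(not p, alpha) = alpha, where alpha is the probability Alice assigns to a proposal.

def best_move(alpha):
    return "propose" if 1 - alpha > alpha else "postpone"

for alpha in [0.0, 0.3, 0.5, 0.9]:
    print(f"alpha={alpha}: Bob prefers to {best_move(alpha)}")
```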

The language implicitly used in psy-chological games is rich in one sense—itallows a continuum of possible beliefs—but is poor in the sense that it talks onlyabout belief. That said, as we mentionedabove, many human emotions can be ex-pressed naturally using beliefs, and thusstudied in the context of psychologicalgames. The following example illustrateshow.

EXAMPLE 5.2: (Bjorndahl, Halpernand Pass, 2013) Alice and Bob play aclassic prisoner’s dilemma game, withone twist: neither wishes to live up tolow expectations. Specifically, if Bob ex-pects the worst of Alice (i.e., expectsher to defect), then Alice, indignant at

Bob’s opinion of her, prefers to coop-erate. Likewise for Bob. On the otherhand, in the absence of such low expec-tations from their opponent, each will re-vert to their classical preferences.

The standard prisoner’s dilemma issummarized in Table 2:

          c         d
  c     (3,3)     (0,5)
  d     (5,0)     (1,1)

Table 2—The classical prisoner's dilemma.

Let uA, uB denote the two players’utility functions according to this table.Let the language consist of the formu-las of the form play i(σ), Bi(play i(σ)),and their negations, where i ∈ {A,B}and σ ∈ {c, d}. Given a situation S,let σS denote the unique strategy pro-file determined by S. We can now definea language-based game that captures theintuitions above by taking Alice’s utilityfunction u′A on situations S to be

$$u'_A(S) = \begin{cases} u_A(\sigma_S) - 6 & \text{if } play_A(d),\, B_B\, play_A(d) \in S\\ u_A(\sigma_S) & \text{otherwise,}\end{cases}$$

and similarly for $u'_B$. More generally, we could take Alice's utility to be $u_A(\sigma_S) - 6\theta$ if $play_A(d), B_B\, play_A(d) \in S$, where $\theta$ is a measure of the extent to which Alice's indignation affects her utility. And yet more generally, if the language lets us talk about the full range of probabilities, Alice's utility can depend on the probability she ascribes to $play_A(d)$. (Although we have described the last variant using language-based games, it can be directly expressed using psychological games.)
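A minimal Python sketch of this utility over the relevant situations; the payoffs come from Table 2, the penalty of 6 from the definition of u'_A above, and the Boolean flag is just a stand-in for whether the formula B_B play_A(d) belongs to the situation.

```python
# Alice's language-based utility in the twisted prisoner's dilemma:
# u'_A(S) = u_A(sigma_S) - 6 if "Alice defects and Bob is sure she defects" holds in S,
#           u_A(sigma_S) otherwise.

U_A = {("c", "c"): 3, ("c", "d"): 0, ("d", "c"): 5, ("d", "d"): 1}


def u_prime_A(alice_move, bob_move, bob_sure_alice_defects):
    penalty = 6 if alice_move == "d" and bob_sure_alice_defects else 0
    return U_A[(alice_move, bob_move)] - penalty


# When Alice thinks Bob is sure she will defect:
for move in ["c", "d"]:
    payoffs = [u_prime_A(move, bob, True) for bob in ["c", "d"]]
    print(f"Alice plays {move}: possible utilities {payoffs}")
# Cooperating guarantees at least 0, while defecting yields at most -1,
# matching the observation made later in this section.
```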

Using language lets us go beyond expressing the belief-dependence captured by psychological games. For one thing, the coarseness of the utility language lets us capture some well-known anomalies in the preferences of consumers. For example, we can formalize the explanation hinted at earlier for why prices often end in .99. Consider a language that consists of price ranges like "between $55 and $55.99" and "between $60 and $64.99". With such a language, the agent is forced to ascribe the same utility to $59.98 and $59.99, while there can be a significant difference between the utilities of $59.99 and $60. Intuitively, we think of the agent as using two languages: the (typically quite rich) language used to describe the world and the (perhaps much coarser) language over which utility is defined. Thus, while the agent perfectly well understands the difference between a price of $59.98 and $59.99, her utility function may be insensitive to that difference.

Using a coarse language effectively limits the set of describable outcomes, and thus makes it easier for a computationally bounded agent to determine her own utilities. These concerns suggest that there might be even more coarseness at higher ranges. For example, suppose that the language includes terms like "around $20,000" and "around $300". If we assume that both "around $20,000" and "around $300" describe intervals (centered at $20,000 and $300, respectively), it seems reasonable to assume that the interval described by "around $20,000" is larger than that described by "around $300". Moreover, it seems reasonable that $19,950 should be in the first interval, while $250 is not in the second. With this choice of language (and the further assumptions), we can capture consumers who might drive an extra 5 kilometers to save $50 on a $300 purchase but would not be willing to drive an extra 5 kilometers to save $50 on a $20,000 purchase (this point was already made by Thaler (1980)): a consumer gets the same utility from paying $20,000 or $19,950 (since in both cases they are paying "around $20,000"), but does not get the same utility from paying $250 rather than $300.
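A toy Python implementation of such a coarse price language; the interval widths (assumed to grow with the magnitude of the price, in the spirit of Weber's law discussed next) and the rounding rule are our assumptions.

```python
# Minimal sketch of a coarse "price language": prices are mapped to verbal categories
# whose assumed widths grow with the magnitude of the price. Utility is defined over
# the category, not the exact price, so a $50 saving matters near $300 but not near $20,000.

def category(price):
    width = 100 if price < 1000 else 1000   # assumed category widths
    return f"around ${round(price / width) * width:,}"

for a, b in [(250, 300), (19_950, 20_000)]:
    same = category(a) == category(b)
    print(f"${a:,} -> {category(a)},  ${b:,} -> {category(b)},  same category: {same}")
```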

This can be viewed as an application ofWeber’s law, which asserts that the mini-mum difference between two stimuli nec-essary for a subject to discriminate be-tween them is proportional to the mag-nitude of the stimuli; thus, larger stimulirequire larger differences between themto be perceived. Although traditionallyapplied to physical stimuli, Weber’s lawhas also been shown to be applicable inthe realm of numerical perception: largernumbers are subjectively harder to dis-criminate from one another (Moyer andLandauer, 1967; Restle, 1978).

As we observed earlier, we can understand the partitions that arise in Jehiel's notion as a coarsening of the language; this is even more explicit in Mullainathan's notion of categories. The observation of Manski and Molinari (2010) that people often represent likelihoods using words suggests that coarseness can arise in the representation of likelihood. To see the potential impact of this on decision-theoretic concerns, consider the following analysis of Allais' paradox (Allais, 1953).

EXAMPLE 5.3: Consider the two pairs of gambles described in Table 3. The first pair is a choice between (1a) $1 million for sure, versus (1b) a .89 chance of $1 million, a .1 chance of $5 million, and a .01 chance of nothing. The second is a choice between (2a) a .89 chance of nothing and a .11 chance of $1 million, versus (2b) a .9 chance of nothing and a .1 chance of $5 million. The "paradox" arises from the fact that most people choose (1a) over (1b), and most people choose (2b) over (2a) (Allais, 1953), but these preferences are not simultaneously compatible with expected-utility maximization.

Gamble 1a               Gamble 1b
  1    $1 million         .89  $1 million
                          .1   $5 million
                          .01  $0

Gamble 2a               Gamble 2b
  .89  $0                 .9   $0
  .11  $1 million         .1   $5 million

Table 3—The Allais paradox.

Suppose that we apply the observa-tions of Manski and Molinari (2010) tothis setting. Specifically, suppose thatprobability judgements such as “thereis a .11 chance of getting $1 million”are represented in a language with onlyfinitely many levels of likelihood. Inparticular, suppose that the languagehas only the descriptions “no chance”,“slight chance”, “unlikely”, and their re-spective opposites, “certain”, “near cer-tain”, and “likely”, interpreted as in Ta-ble 4.

Range        Description      Representative
1            certain          1
[.95, 1)     near certain     .975
[.85, .95)   likely           .9
(.05, .15]   unlikely         .1
(0, .05]     slight chance    .025
0            no chance        0

Table 4—Using coarse likelihood.

Once we represent likelihoods using words in a language rather than numbers, we have to decide how to determine (expected) utility. For definiteness, suppose that the utility of a gamble as described in this language is determined using the interval-midpoint representative given in the third column of Table 4. Thus, a "slight chance" is effectively treated as a .025 probability, a "likely" event as a .9 probability, and so on.

Revisiting the gambles associated withthe Allais paradox, suppose that we re-place the actual probability given in Ta-ble 3 by the word that represents it (i.e.,replace 1 by “certain”, .89 by “likely”,and so on)—this is how we assume thatan agent might represent what he hears.Then when doing an expected utility cal-culation, the word is replaced by theprobability representing that word, giv-ing us Table 5.

Gamble 1a               Gamble 1b
  1    $1 million         .9    $1 million
                          .1    $5 million
                          .025  $0

Gamble 2a               Gamble 2b
  .9   $0                 .9    $0
  .1   $1 million         .1    $5 million

Table 5—The Allais paradox, coarsely approximated.

Using these numbers, we can calculate the revised utility of (1b) to be .9 · u_A($1 million) + .1 · u_A($5 million) + .025 · u_A($0), and this quantity may well be less than u_A($1 million), depending on the utility function u_A. For example, if u_A($1 million) = 1, u_A($5 million) = 3, and u_A($0) = −10, then the utility of gamble (1b) evaluates to .95. In this case, Alice prefers (2b) to (2a) but also prefers (1a) to (1b). Thus, this choice of language rationalizes the observed preferences of many decision-makers. (Rubinstein (2000) offered a closely related analysis.)
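The whole calculation can be checked end to end with a short Python script; the utilities 1, 3, and −10 are the values assumed in the example, and the word-to-number mapping is the one in Table 4.

```python
# Expected utility of the Allais gambles, first with the exact probabilities of Table 3
# and then with the coarse representatives of Table 4 (i.e., Table 5).
# Utilities u($1M) = 1, u($5M) = 3, u($0) = -10 are the example's assumed values.

U = {0: -10, 1_000_000: 1, 5_000_000: 3}


def coarse(p):
    # Table 4's ranges; only the ranges needed for Table 3's probabilities are relevant.
    if p == 1: return 1
    if 0.95 <= p < 1: return 0.975
    if 0.85 <= p < 0.95: return 0.9
    if 0.05 < p <= 0.15: return 0.1
    if 0 < p <= 0.05: return 0.025
    return 0


GAMBLES = {
    "1a": [(1.0, 1_000_000)],
    "1b": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)],
    "2a": [(0.89, 0), (0.11, 1_000_000)],
    "2b": [(0.90, 0), (0.10, 5_000_000)],
}


def eu(gamble, transform=lambda p: p):
    return sum(transform(p) * U[prize] for p, prize in gamble)


for name, gamble in GAMBLES.items():
    print(f"{name}: exact EU = {eu(gamble):+.3f}   coarse EU = {eu(gamble, coarse):+.3f}")
# Exact expected utility ranks 1b above 1a, but the coarse evaluation ranks 1a above 1b
# while still ranking 2b above 2a, which is the typical Allais pattern.
```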

It is worth noting that this approach to evaluating gambles will lead to discontinuities; the utility of a gamble that gets, say, $1,000,000 with probability x and $5,000,000 with probability 1 − x does not converge to the utility of a gamble that gets $1,000,000 with probability 1 as x approaches 1. Indeed, we would get discontinuities at the boundaries of every range. We expect almost everyone to treat certainty specially, and so to have a special category for the range [1, 1]; what people take as the range for other descriptions will vary. Andreoni and Sprenger (2009) present an approach to the Allais paradox that is based on the discontinuity of the utility of gambles at 1, and present experimental evidence for such a discontinuity. We can view the language-based approach as providing a potential explanation for this discontinuity.

Going back to Example 5.2, note that cooperating is rational for Alice if she thinks that Bob is sure that she will defect, since cooperating in this case would yield a minimum utility of 0, whereas defecting would yield a utility of at most −1. On the other hand, if Alice thinks that Bob is not sure that she will defect, then since her utility in this case is determined classically, it is rational for her to defect, as usual. Bjorndahl, Halpern and Pass (2013) define a natural generalization of Nash equilibrium in language-based games and show that, in general, and in particular in this game, it does not exist, even if mixed strategies are allowed. The problem is the discontinuity in payoffs. Intuitively, a Nash equilibrium is a state of play where players are happy with their choice of strategies given correct beliefs about what their opponents will choose. But there is a fundamental tension between a state of play where everyone has correct beliefs and one where some player successfully surprises another.

Bjorndahl, Halpern and Pass (2013) also define a natural generalization of the solution concept of rationalizability (Bernheim, 1984; Pearce, 1984), and show that all language-based games where the language satisfies a natural constraint have rationalizable strategies. But the question of finding appropriate solution concepts for language-based games remains open. Moreover, the analysis of Bjorndahl, Halpern and Pass (2013) was carried out only for normal-form games. Geanakoplos, Pearce and Stacchetti (1989) and Battigalli and Dufwenberg (2009) consider extensive-form psychological games. Extending language-based games to the extensive-form setting will require dealing with issues like the impact of the language changing over time.

We conclude this section by observing that, if we interpret the choice of language as a framing of the game, language-based games can be seen as a special case of framing. There have already been attempts to provide general models of the effects of framing. For example, Tversky and Simonson (1993) considered situations in which an agent's choices may depend on the background set B (i.e., the set of all available choices) and the choice set S (i.e., the set of offered choices). Tversky and Simonson introduced a choice function VB(x, S) = v(x) + βfB(x) + θg(x, S) consisting of three components: v(x) is the context-free value of x, independent of B; fB(x) captures the effect of the background; and g(x, S) captures the effect of the choice set. Salant and Rubinstein (2008) and Ellingsen et al. (2012) assumed that there is a set of frames and that the utility function depends on the specific frame F chosen from this set.
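
To make the additive structure concrete, here is a minimal sketch; the component functions, the weights, and the toy options are all made up for illustration and are not part of Tversky and Simonson's model.

```python
# Illustrative sketch of the additive structure
# VB(x, S) = v(x) + beta * fB(x) + theta * g(x, S);
# all component functions and numbers below are invented for illustration.
def context_free_value(x):
    return x["quality"]

def background_effect(x, background):
    # e.g., x looks worse the more background options dominate it on quality
    return -0.1 * sum(y["quality"] > x["quality"] for y in background)

def choice_set_effect(x, choice_set):
    # e.g., a small bonus for being the cheapest offered option
    return 0.5 if all(x["price"] <= y["price"] for y in choice_set) else 0.0

def V(x, background, choice_set, beta=1.0, theta=1.0):
    return (context_free_value(x)
            + beta * background_effect(x, background)
            + theta * choice_set_effect(x, choice_set))

a, b = {"quality": 3, "price": 10}, {"quality": 5, "price": 20}
print(V(a, [a, b], [a, b]), V(b, [a, b], [a, b]))  # approx 3.4 and 5.0
```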

These models of framing effects can easily explain all seven regularities by choosing suitable frames and utility functions. For example, Ellingsen et al. applied their model to the prisoner's dilemma and were able to explain changes in the rate of cooperation depending on the name of the game ('community game' vs. 'stock market game'), under the assumption that the frame affects beliefs about the opponent's level of altruism. Although these models can be applied to explain framing effects specifically generated by language, they do not model the effect of language directly. As Experimental Regularity 7 shows, many framing effects are in fact ultimately due to language. Language-based games provide a way of capturing these language effects directly. Moreover, they allow us to ask questions that are not asked in the standard framing literature, such as why people's behavior changes when the price of gas goes from $3.99 to $4.00, but not when it goes from $3.98 to $3.99. (This would not typically be called a framing effect, but we can reinterpret it as one by assuming that there is a frame F in which "over $4" and "under $4" are different categories.)
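
One hypothetical way to make this reinterpretation concrete is to let the frame assign prices to coarse categories and to let the perceived cost depend on the category as well as the number; the category boundary and the penalty below are purely our assumptions.

```python
# Hypothetical frame: prices are perceived through coarse categories, so crossing
# a category boundary matters more than an equal-sized change within a category.
def category(price):
    return "over $4" if price >= 4.00 else "under $4"

def perceived_cost(price, penalty=0.5):
    # The penalty attached to the "over $4" category is an illustrative assumption.
    return price + (penalty if category(price) == "over $4" else 0.0)

print(perceived_cost(3.99) - perceived_cost(3.98))  # approx 0.01: no category change
print(perceived_cost(4.00) - perceived_cost(3.99))  # approx 0.51: the boundary adds a jump
```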

6. Future research and outlook

The key takeaway message of this article is that the monetary payoffs associated with actions are not sufficient to fully explain people's behavior. What matters are not just the monetary payoffs, but also the language used to present the available actions. It follows that economic models of behavior should also take language into account. We believe that the shift from outcome-based preferences to language-based preferences will have a profound impact on economics. We conclude our review with a discussion of some lines of research that we believe will play a prominent role in this shift. These lines of research are quite interdisciplinary, involving psychology, sociology, philosophy, and computer science.

In the previous sections, we have highlighted experimental results suggesting that, at least in some cases, people seem to have moral preferences. However, we were deliberately vague about where these moral preferences come from. Do they arise from personal beliefs about what is right and wrong, beliefs about what others approve or disapprove of, or beliefs about what others actually do?

Moral psychologists and moral philosophers have long argued that there are several types of norms, which sometimes conflict with one another. An important distinction is between personal norms and social norms (Schwartz, 1977). Personal norms refer to internal standards about what is considered to be right and wrong; they are not externally motivated by the approval or disapproval of others. It might happen that others either approve or disapprove of them, but this is not what drives the personal norms. Social norms, on the other hand, refer to "rules and standards that are understood by members of a group, and that guide and/or constrain behavior without the force of laws" (Cialdini and Trost, 1998). Two important types of social norms are injunctive norms and descriptive norms, defining, respectively, what people think others would approve or disapprove of and what people actually do (Cialdini, Reno and Kallgren, 1990). A unified theory of norms has been proposed more recently by Cristina Bicchieri (2005). According to her theory, there are three main classes of norms: personal normative beliefs, which are personal beliefs about what should happen in a given situation; empirical expectations, which are personal beliefs about what one expects others to do; and normative expectations, which are personal beliefs about what others think one should do.

Although the different types of norms often align, as we discussed earlier, they may conflict. When descriptive norms conflict with injunctive norms, people tend to follow the descriptive norm, as shown by a famous field experiment in which people are observed to litter more in a littered environment than in a clean environment (Cialdini, Reno and Kallgren, 1990). Similarly, when empirical and normative expectations are in conflict, people tend to follow the empirical expectations. One potential explanation for this is that people are rarely punished when everyone is engaging in the same behavior (Bicchieri and Xiao, 2009). Little is known about what happens when personal norms are in conflict with descriptive or injunctive norms, or when personal normative beliefs are in contrast with empirical and normative expectations. The example that we mentioned in Section 4 of a vegan who does not eat food containing animal-derived ingredients, while realizing that the injunctive norm is to eat such food, suggests that, at least in some cases, personal judgments about what is right and what is wrong represent the dominant motivation for behavior.

In the context of the games considered in this review, some experimental work points towards a significant role of personal norms, at least in one-shot and anonymous interactions. For example, Capraro and Rand (2018) created a laboratory setting in which the personal norm was pitted against the descriptive norm in the trade-off game, and found that participants tended to follow the personal norm. More recently, Catola et al. (2021) showed that personal norms predict cooperative behavior in the public-goods game better than social norms do. The role of personal norms was also highlighted by Basic and Verrina (2021). They found that both personal and social norms shape behavior in the dictator and ultimatum games, which led them to propose a utility function that takes into account both types of norms, as reviewed in Section 4. In any case, we believe that an important direction for future empirical research is an exploration of how people resolve norm conflicts. This may be a key step in allowing us to create a new generation of utility functions that take into account the relative effect of different types of norms.
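
As a purely illustrative sketch of what such a utility function might look like (a stylized form of our own, not one proposed in the papers just cited), an agent could trade off the monetary payoff against weighted deviations from the personal and the social norm:

```python
# Stylized sketch: utility = payoff minus weighted deviations from a personal
# and a social norm. The functional form and all numbers are our own illustration.
def utility(action, payoff, personal_norm, social_norm, w_personal=1.0, w_social=0.5):
    """Actions and norms are measured on a common scale (e.g., amount given)."""
    return (payoff(action)
            - w_personal * abs(action - personal_norm)
            - w_social * abs(action - social_norm))

# A dictator deciding how much of $10 to give, with a personal norm of giving 5
# and a (descriptive) social norm of giving 2.
payoff = lambda give: 10 - give
best = max(range(11), key=lambda g: utility(g, payoff, personal_norm=5, social_norm=2))
print(best)  # 2 with these weights; raising w_personal above about 1.5 moves it toward 5
```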

Another major direction for future work is the exploration of how the heterogeneity in individual personal norms affects economic choices. People differ in their judgments about what they consider right and wrong. For example, some people embrace utilitarian ethics, which dictates that the action selected should maximize total welfare and minimize total harm (Mill, 2016; Bentham, 1996). Others embrace deontological ethics, according to which the rightness or wrongness of an action is entirely determined by whether the action respects certain moral norms and duties, regardless of its consequences (Kant, 2002). It has been suggested that people's personal norms can be decomposed into fundamental dimensions, although there is some debate about the number and the characterization of these dimensions. According to moral foundations theory, there are six dimensions: care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, sanctity/degradation, and liberty/oppression (Haidt and Joseph, 2004; Graham et al., 2013; Iyer et al., 2012; Haidt, 2012); according to morality-as-cooperation theory, there are seven dimensions: helping kin, helping your group, reciprocating, being brave, deferring to superiors, dividing disputed resources, and respecting prior possession (Curry, 2016; Curry, Mullins and Whitehouse, 2019; Curry, Chesters and Van Lissa, 2019). Each individual assigns different weights to these dimensions. These weights have been shown to play a key role in determining a range of important characteristics, including political orientation (Graham, Haidt and Nosek, 2009). There has been very little work exploring the link between moral dimensions and prosocial behavior. Some preliminary evidence does suggest that different forms of prosocial behavior may be associated with different moral foundations, not necessarily correlated with one another. For example, while both dictator game giving and ultimatum game rejections appear to be associated with moral preferences, giving appears to be primarily driven by the fairness dimension of morality (Schier, Ockenfels and Hofmann, 2016), whereas ultimatum game rejections seem to be primarily driven by the ingroup dimension (Capraro and Rodriguez-Lara, 2021). It is worth noting that the fairness and the ingroup dimensions are not correlated with each other (Haidt and Joseph, 2004; Graham et al., 2013; Iyer et al., 2012; Haidt, 2012). Therefore, these preliminary results speak in favor of the multidimensionality of prosociality and may provide a rationalization for the result by Chapman et al. (2018) that the giving cluster of social preferences is not correlated with the punishment cluster. As noted, this line of research has just started.[19] Understanding the link between moral dimensions and prosocial behaviors is a necessary step for building models that can explain human behavior with greater precision.

[19] There is some work exploring the link between moral foundations and cooperation in the prisoner's dilemma (Clark et al., 2017). Moreover, a working paper by Bonneau (2021) explores the role of different moral foundations in explaining altruistic behavior in different forms of the dictator game, including the dictator game with a take option and the dictator game with an exit option.

Since our moral preferences are clearly affected by our social interactions, and it is well known that the structure of an individual's social network affects preferences and outcomes in general (Easley and Kleinberg, 2010), we believe that another important line of research is how moral preferences are shaped by social connections. For example, it is known that cooperation is strongly affected by the structure of the social network. Hubs in such networks can act as strong cooperative centers and exert a positive influence on the peripheral nodes (Santos, Pacheco and Lenaerts, 2006). The ability to break ties with unfair or exploitative partners and make new ones with partners of better reputation also favorably affects cooperation (Perc and Szolnoki, 2010). More recent research has shown that other forms of moral behavior, such as truth-telling and honesty, are also strongly affected by the properties of social networks (Capraro, Perc and Vilone, 2020). A case has also been made that further explorations along these lines are much needed (Capraro and Perc, 2018), for example, studies of how network structure affects different types of moral behavior, including equity, efficiency, and trustworthiness.

Another line of research involves language-based preferences. To the extent that people do have language-based preferences, it would be useful to be able to predict how people will behave in an economic decision problem described by a language. A relatively new area of research in computational linguistics may be relevant in this regard. Sentiment analysis (e.g., Pang, Lee and Vaithyanathan (2002); Pang and Lee (2004); Esuli and Sebastiani (2007)) aims to determine the attitude of a speaker or a writer to a topic from the information contained in a document. For example, we may want to determine the feelings of a reviewer about a movie from his review. Among other things, sentiment analysis attempts to associate with a description (in a given context) a polarity, that is, a number in the interval [−1, 1] expressing a positive, negative, or indifferent sentiment. One could perhaps use sentiment analysis to define a utility function by taking the polarity of the description of strategies into account. The idea is that people are reluctant to perform actions that evoke negative sentiments, like stealing, but are eager to perform actions that evoke positive sentiments; the utility function could take this into account (in addition to taking into account the monetary consequences of actions). Being able to associate utility with words would also allow us to measure the explanatory power of language-based models. Experimental Regularity 7 shows that language-based models explain behavior in some games better than models based solely on monetary outcomes. However, to the best of our knowledge, there has been no econometric study measuring the explanatory power of language-based models.
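
As a minimal sketch of the idea (the toy lexicon, the additive form, and the weight are our own assumptions, not a model from the literature), a polarity term can simply be added to the monetary payoff:

```python
# Sketch: utility = monetary payoff + lam * polarity(action description).
# The lexicon and the weight lam are illustrative assumptions.
polarity = {"steal": -0.8, "take": -0.3, "keep": 0.0, "give": 0.6, "donate": 0.8}

def utility(monetary_payoff, description, lam=5.0):
    return monetary_payoff + lam * polarity.get(description, 0.0)

# A player choosing between keeping $3 for herself or giving it away,
# under two different descriptions of the selfish option.
print(utility(3, "keep"), utility(0, "give"))   # 3.0 vs 3.0: indifferent
print(utility(3, "steal"), utility(0, "give"))  # -1.0 vs 3.0: the wording flips the choice
```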

We have only scratched the surface of potential research directions here. We believe that economics is taking brave steps into uncharted territory. New ideas will be needed, and bridges to other fields will need to be built. The prospects are most exciting!

REFERENCES

Abeler, Johannes, DanieleNosenzo, and Collin Raymond.2019. “Preferences for truth-telling.”Econometrica, 87(4): 1115–1153.

Abrams, Burton A, and Mark D Schmitz. 1978. "The "crowding-out" effect of governmental transfers on private charitable contributions." Public Choice, 33(1): 29–39.

Abrams, Burton A, and Mark DSchmitz. 1984. “The crowding-out ef-fect of governmental transfers on pri-vate charitable contributions: cross-section evidence.” National Tax Jour-nal, 37(4): 563.

Ahn, D., and H. Ergin. 2010.“Framing contingencies.” Economet-rica, 78(2): 655–695.

Akerlof, George A, and Rachel EKranton. 2000. “Economics and iden-tity.” The Quarterly Journal of Eco-nomics, 115(3): 715–753.

Alger, Ingela, and Jorgen WWeibull. 2013. “Homo moralis – pref-erence evolution under incomplete in-formation and assortative matching.”Econometrica, 81(6): 2269–2302.

Alger, Ingela, and Jorgen WWeibull. 2016. “Evolution and Kan-tian morality.” Games and EconomicBehavior, 98: 56–67.

Alger, Ingela, and Regis Renault.2006. “Screening ethics when honestagents care about fairness.” Interna-tional Economic Review, 47(1): 59–85.

Alger, Ingela, and Regis Renault.2007. “Screening ethics when honestagents keep their word.” EconomicTheory, 30(2): 291–311.

Allais, M. 1953. “Le comportement del’homme rationel devant le risque: cri-tique de l’Ecole Americaine.” Econo-metrica, 21: 503–546.

Andreoni, James. 1988. “Privatelyprovided public goods in a large econ-omy: The limits of altruism.” Journalof Public Economics, 35(1): 57–73.

Andreoni, James. 1990. “Impure al-truism and donations to public goods:A theory of warm-glow giving.” TheEconomic Journal, 100(401): 464–477.


Andreoni, James. 1995. “Cooperationin public-goods experiments: Kind-ness or confusion?” The AmericanEconomic Review, 891–904.

Andreoni, James, and B DouglasBernheim. 2009. “Social image andthe 50–50 norm: A theoretical andexperimental analysis of audience ef-fects.” Econometrica, 77(5): 1607–1636.

Andreoni, James, and John Miller.2002. “Giving according to GARP: Anexperimental test of the consistency ofpreferences for altruism.” Economet-rica, 70(2): 737–753.

Andreoni, J., and C. Sprenger.2009. “Certain and uncertain utility:The Allais paradox and five deci-sion theory phenomena.” unpublishedmanuscript.

Bacharach, Michael. 1999. “Interac-tive team reasoning: A contribution tothe theory of co-operation.” Researchin Economics, 53(2): 117–147.

Bandiera, Oriana, Iwan Barankay,and Imran Rasul. 2005. “Socialpreferences and the response to incen-tives: Evidence from personnel data.”The Quarterly Journal of Economics,120(3): 917–962.

Bardsley, Nicholas. 2008. “Dictatorgame giving: Altruism or artefact?”Experimental Economics, 11(2): 122–133.

Basic, Zvonimir, and Eugenio Ver-rina. 2021. “Personal norms—and notonly social norms—shape economicbehavior.” MPI Collective Goods Dis-cussion Paper.

Basu, K. 1994. “The traveler’sdilemma: paradoxes of rationality ingame theory.” American EconomicReview, 84(2): 391–395.

Battigalli, P., and M. Dufwenberg.2009. “Dynamic psychological games.”Journal of Economic Theory, 144: 1–35.

Battigalli, P., and M. Dufwenberg. 2020. "Belief-dependent motivations and psychological game theory." IGIER Working Paper n. 646. To appear, Journal of Economic Literature.

Benabou, Roland, and Jean Tirole.2006. “Incentives and prosocial be-havior.” American Economic Review,96(5): 1652–1678.

Benabou, Roland, and Jean Tirole.2011. “Identity, morals, and taboos:Beliefs as assets.” The Quarterly Jour-nal of Economics, 126(2): 805–855.

Bentham, Jeremy. 1996. The collectedworks of Jeremy Bentham: An intro-duction to the principles of morals andlegislation. Clarendon Press.

Benz, Matthias, and StephanMeier. 2008. “Do people behave in ex-periments as in the field? Evidencefrom donations.” Experimental Eco-nomics, 11(3): 268–281.

Bernheim, B. D. 1984. “Rationaliz-able strategic behavior.” Economet-rica, 52(4): 1007–1028.

Bernheim, B Douglas. 1986. “On thevoluntary and involuntary provisionof public goods.” The American Eco-nomic Review, 789–793.

Bernheim, B Douglas. 1994. “A the-ory of conformity.” Journal of PoliticalEconomy, 102(5): 841–877.

Bertrand, Joseph. 1883. "Book review of Theorie Mathematique de la Richesse Social and of Recherches sur les Principes Mathematiques de la Theorie des Richesses." Journal des Savants.

Bicchieri, Cristina. 2005. The gram-mar of society: The nature and dy-namics of social norms. CambridgeUniversity Press.

Bicchieri, Cristina, and AlexChavez. 2010. “Behaving as ex-pected: Public information andfairness norms.” Journal of Behav-ioral Decision Making, 23(2): 161–178.

Bicchieri, Cristina, and Erte Xiao.2009. “Do the right thing: but onlyif others do so.” Journal of BehavioralDecision Making, 22(2): 191–208.

Binmore, Ken. 1994. Game theory andthe Social Contract: Playing Fair.MIT Press.

Biziou-van Pol, Laura, Jana Hae-nen, Arianna Novaro, AndresOcchipinti Liberman, and Va-lerio Capraro. 2015. “Does tellingwhite lies signal pro-social prefer-ences?” Judgment and Decision Mak-ing, 10: 538–548.

Bjorndahl, Adam, Joseph YHalpern, and Rafael Pass. 2013.“Language-based games.” Proceedingsof the 14th Conference on TheoreticalAspects of Rationality and Knowledge,39–48.

Blount, Sally. 1995. “When social out-comes aren’t fair: The effect of causalattributions on preferences.” Organi-zational Behavior and Human Deci-sion Processes, 63(2): 131–144.

Blume, A., and O. Board. 2014.“Intentional vagueness.” Erkenntnis,79: 855–899.

Blume, L. E., D. Easley, and J. Y. Halpern. 2021. "Constructive decision theory." Journal of Economic Theory, 196. An earlier version, entitled "Redoing the Foundations of Decision Theory", appears in Principles of Knowledge Representation and Reasoning: Proc. Tenth International Conference (KR '06).

Bolle, Friedel, and Peter Ockenfels.1990. “Prisoners’ dilemma as a gamewith incomplete information.” Journalof Economic Psychology, 11(1): 69–84.

Bolton, Gary E, and Axel Ock-enfels. 2000. “ERC: A theory ofequity, reciprocity, and competi-tion.” The American Economic Re-view, 90(1): 166–193.

Bonneau, Maxime. 2021. “Can differ-ent moral foundations help explain dif-ferent forms of altruistic behavior?” Inpreparation.

Bonomi, Giampaolo, Nicola Gen-naioli, and Guido Tabellini. 2021.“Identity, beliefs, and political con-flict.” The Quarterly Journal of Eco-nomics, 136(4): 2371–2411.

Bott, Kristina Maria, Alexander WCappelen, Erik Sorensen, andBertil Tungodden. 2019. “You’vegot mail: A randomised field exper-iment on tax evasion.” ManagementScience.

Branas-Garza, Pablo. 2007. “Promot-ing helping behavior with framing indictator games.” Journal of EconomicPsychology, 28(4): 477–486.

Branas-Garza, Pablo, ValerioCapraro, and Ericka Rascon-Ramirez. 2018. “Gender differencesin altruism on Mechanical Turk:Expectations and actual behaviour.”Economics Letters, 170: 19–23.

Brekke, Kjell Arne, Snorre Kverndokk, and Karine Nyborg. 2003. "An economic model of moral motivation." Journal of Public Economics, 87(9-10): 1967–1983.

Burnham, Terence, Kevin McCabe,and Vernon L Smith. 2000. “Friend-or-foe intentionality priming in an ex-tensive form trust game.” Journal ofEconomic Behavior & Organization,43(1): 57–73.

Camerer, C. F. 2003. Behavioral GameTheory: Experiments in Strategic In-teraction. Princeton, N.J.:PrincetonUniversity Press.

Cappelen, Alexander W, As-tri Drange Hole, Erik ØSørensen, and Bertil Tungodden.2007. “The pluralism of fairnessideals: An experimental approach.”The American Economic Review,97(3): 818–827.

Cappelen, Alexander W, Erik ØSørensen, and Bertil Tungodden.2013. “When do we lie?” Journal ofEconomic Behavior & Organization,93: 258–265.

Cappelen, Alexander W, Ulrik HNielsen, Erik Ø Sørensen, BertilTungodden, and Jean-RobertTyran. 2013. “Give and take indictator games.” Economics Letters,118(2): 280–283.

Capraro, V. 2013. “A model of humancooperation in social dilemmas.” PLoSONE, 8(8): e72427.

Capraro, V. 2018. “Social Versus MoralPreferences in the Ultimatum Game:A Theoretical Model and an Experi-ment.” Available at SSRN 3155257.

Capraro, Valerio, and Andrea Vanzo. 2019. "The power of moral words: Loaded language generates framing effects in the extreme dictator game." Judgment and Decision Making, 14(3): 309–317.

Capraro, Valerio, and David GRand. 2018. “Do the right thing: Ex-perimental evidence that preferencesfor moral behavior, rather than equityor efficiency per se, drive human proso-ciality.” Judgment and Decision Mak-ing, 13(1): 99–111.

Capraro, Valerio, and IsmaelRodriguez-Lara. 2021. “MoralPreferences in Bargaining Games.”Available at SSRN 3933603.

Capraro, Valerio, and Matjaz Perc.2018. “Grand challenges in socialphysics: In pursuit of moral behavior.”Frontiers in Physics, 6: 107.

Capraro, Valerio, and MatjazPerc. 2021. “Mathematical founda-tions of moral preferences.” Jour-nal of the Royal Society Interface,18(175): 20200880.

Capraro, Valerio, GloriannaJagfeld, Rana Klein, MathijsMul, and Iris van de Pol. 2019.“Increasing altruistic and cooperativebehaviour with simple moral nudges.”Scientific Reports, 9(1): 11880.

Capraro, Valerio, Matjaz Perc, andDaniele Vilone. 2020. “Lying on net-works: The role of structure and topol-ogy in promoting honesty.” PhysicalReview E, 101: 032305.

Catola, Marco, SimoneD’Alessandro, Pietro Guarnieri,and Veronica Pizziol. 2021. “Per-sonal norms in the online publicgood game.” Economics Letters,207: 110024.

Chang, Daphne, Roy Chen, and Erin Krupka. 2019. "Rhetoric matters: A social norms explanation for the anomaly of framing." Games and Economic Behavior, 116: 158–178.

Chapman, Jonathan, Mark Dean,Pietro Ortoleva, Erik Snowberg,and Colin Camerer. 2018. “Econo-graphics.” National Bureau of Eco-nomic Research.

Charness, Gary, and BritGrosskopf. 2001. “Relative pay-offs and happiness: An experimentalstudy.” Journal of Economic Behavior& Organization, 45(3): 301–328.

Charness, Gary, and MatthewRabin. 2002. “Understanding so-cial preferences with simple tests.”The Quarterly Journal of Economics,117(3): 817–869.

Cialdini, Robert B., andMelanie R. Trost. 1998. “Socialinfluence: Social norms, conformityand compliance.” In The Handbook ofSocial Psychology. , ed. D. T. Gilbert,S. T. Fiske and G. Lindzey, 151–192.McGraw-Hill.

Cialdini, Robert B, Raymond RReno, and Carl A Kallgren. 1990.“A focus theory of normative conduct:Recycling the concept of norms to re-duce littering in public places.” Jour-nal of Personality and Social Psychol-ogy, 58(6): 1015–1026.

Clark, C Brendan, Jeffrey A Swails,Heidi M Pontinen, Shannon EBowerman, Kenneth A Kriz, andPeter S Hendricks. 2017. “A behav-ioral economic assessment of individ-ualizing versus binding moral founda-tions.” Personality and Individual Dif-ferences, 112: 49–54.

Curry, Oliver Scott. 2016. “Moralityas cooperation: A problem-centred ap-proach.” In The evolution of morality.27–51. Springer.

Curry, Oliver Scott, Daniel AustinMullins, and Harvey White-house. 2019. “Is it good to cooper-ate? Testing the theory of morality-as-cooperation in 60 societies.” CurrentAnthropology, 60(1): 47–69.

Curry, Oliver Scott,Matthew Jones Chesters, andCaspar J Van Lissa. 2019. “Map-ping morality with a compass:Testing the theory of ‘morality-as-cooperation’with a new ques-tionnaire.” Journal of Research inPersonality, 78: 106–124.

Dal Bo, Ernesto, and Pedro Dal Bo.2014. ““Do the right thing:” The ef-fects of moral suasion on cooperation.”Journal of Public Economics, 117: 28–38.

Dana, Jason, Daylian M Cain, andRobyn M Dawes. 2006. “What youdon’t know won’t hurt me: Costly (butquiet) exit in dictator games.” Organi-zational Behavior and human DecisionProcesses, 100(2): 193–201.

DellaVigna, Stefano, John A List,and Ulrike Malmendier. 2012.“Testing for altruism and social pres-sure in charitable giving.” The Quar-terly Journal of Economics, 127(1): 1–56.

Dhaene, Geert, and Jan Bouck-aert. 2010. “Sequential reciprocity intwo-player, two-stage games: An ex-perimental analysis.” Games and Eco-nomic Behavior, 70(2): 289–303.

Dhami, Sanjit. 2016. The foundationsof behavioral economic analysis. Ox-ford University Press.

Dreber, Anna, Tore Ellingsen, Magnus Johannesson, and David G Rand. 2013. "Do people care about social context? Framing effects in dictator games." Experimental Economics, 16(3): 349–371.

Dufwenberg, Martin, and GeorgKirchsteiger. 2004. “A theory of se-quential reciprocity.” Games and Eco-nomic Behavior, 47(2): 268–298.

Easley, David, and Jon Kleinberg.2010. Networks, crowds, and markets:Reasoning about a highly connectedworld. Cambridge University Press.

Eckel, Catherine C, and Philip JGrossman. 1996. “Altruism inanonymous dictator games.” Gamesand Economic Behavior, 16(2): 181–191.

Edgeworth, Francis Ysidro. 1881.Mathematical psychics: An essay onthe application of mathematics to themoral sciences. Vol. 10, Kegan Paul.

Ellingsen, Tore, and Magnus Jo-hannesson. 2008. “Pride and prej-udice: The human side of incentivetheory.” American Economic Review,98(3): 990–1008.

Ellingsen, Tore, Magnus Johannes-son, Johanna Mollerstrom, andSara Munkhammar. 2012. “Socialframing effects: Preferences or be-liefs?” Games and Economic Behav-ior, 76(1): 117–130.

Engel, Christoph. 2011. “Dictatorgames: A meta study.” ExperimentalEconomics, 14(4): 583–610.

Engelmann, Dirk, and MartinStrobel. 2004. “Inequality aversion,efficiency, and maximin preferencesin simple distribution experiments.”The American Economic Review,94(4): 857–869.

Enke, Benjamin. 2019. “Kinship, co-operation, and the evolution of moralsystems.” The Quarterly Journal ofEconomics, 134(2): 953–1019.

Enke, Benjamin, Mattias Polborn,and Alex Wu. 2022. “Morals asluxury goods and political polariza-tion.” National Bureau of EconomicResearch.

Erat, Sanjiv, and Uri Gneezy. 2012.“White lies.” Management Science,58(4): 723–733.

Eriksson, Kimmo, Pontus Strim-ling, Per A Andersson, and TorunLindholm. 2017. “Costly punishmentin the ultimatum game evokes moralconcern, in particular when framed aspayoff reduction.” Journal of Experi-mental Social Psychology, 69: 59–64.

Esuli, Andrea, and Fabrizio Sebas-tiani. 2007. “SentiWordNet: A high-coverage lexical resource for opinionmining.” Evaluation, 1–26.

Falk, Armin, and Urs Fischbacher.2006. “A theory of reciprocity.” Gamesand Economic Behavior, 54(2): 293–315.

Falk, Armin, Ernst Fehr, and UrsFischbacher. 2003. “On the natureof fair behavior.” Economic Inquiry,41(1): 20–26.

Falk, Armin, Ernst Fehr, andUrs Fischbacher. 2008. “Testingtheories of fairness—Intentions mat-ter.” Games and Economic Behavior,62(1): 287–303.

Fehr, Ernst, and Klaus M Schmidt.1999. “A theory of fairness, competi-tion, and cooperation.” The QuarterlyJournal of Economics, 114(3): 817–868.


Fehr, Ernst, and Klaus M Schmidt.2006. “The economics of fairness, reci-procity and altruism–experimental ev-idence and new theories.” Handbook ofthe economics of giving, altruism andreciprocity, 1: 615–691.

Fischbacher, Urs, and FranziskaFollmi-Heusi. 2013. “Lies in dis-guise? An experimental study oncheating.” Journal of the EuropeanEconomic Association, 11(3): 525–547.

Forsythe, Robert, Joel L Horowitz,Nathan E Savin, and MartinSefton. 1994. “Fairness in simplebargaining experiments.” Games andEconomic Behavior, 6(3): 347–369.

Frey, Bruno S, and FelixOberholzer-Gee. 1997. “Thecost of price incentives: An empiricalanalysis of motivation crowding-out.”The American Economic Review,87(4): 746–755.

Gardenfors, P., and N. Sahlin.1982. “Unreliable probabilities, risktaking, and decision making.” Syn-these, 53: 361–386.

Geanakoplos, J., D. Pearce, andE. Stacchetti. 1989. “Psychologi-cal games and sequential rational-ity.” Games and Economic Behavior,1(1): 60–80.

Gerlach, Philipp, Kinneret Teodor-escu, and Ralph Hertwig. 2019.“The truth about lies: A meta-analysis on dishonest behavior.” Psy-chological Bulletin, 145(1): 1–44.

Gilbert, Margaret. 1987. “Mod-elling collective belief.” Synthese,73(1): 185–204.

Gilboa, I., and D. Schmeidler. 1989.“Maxmin expected utility with a non-unique prior.” Journal of Mathemati-cal Economics, 18: 141–153.

Gilboa, I., and D. Schmeidler. 2001.A Theory of Case-Based Decisions.Cambridge, MA:MIT Press.

Gintis, Herbert, Eric Alden Smith,and Samuel Bowles. 2001. “Costlysignaling and cooperation.” Journal ofTheoretical Biology, 213(1): 103–119.

Glazer, Amihai, and Kai A Kon-rad. 1996. “A signaling explanationfor charity.” The American EconomicReview, 86(4): 1019–1028.

Gneezy, Uri. 2005. “Deception: Therole of consequences.” The AmericanEconomic Review, 95(1): 384–394.

Gneezy, Uri, Agne Kajackaite, andJoel Sobel. 2018. “Lying Aversionand the Size of the Lie.” The AmericanEconomic Review, 108(2): 419–53.

Gneezy, Uri, and Aldo Rustichini.2000. “Pay enough or don’t pay at all.”The Quarterly Journal of Economics,115(3): 791–810.

Gneezy, Uri, Bettina Rockenbach,and Marta Serra-Garcia. 2013.“Measuring lying aversion.” Journal ofEconomic Behavior & Organization,93: 293–300.

Graham, Jesse, Jonathan Haidt,and Brian A Nosek. 2009. “Lib-erals and conservatives rely on differ-ent sets of moral foundations.” Journalof Personality and Social Psychology,96(5): 1029–1046.

Graham, Jesse, Jonathan Haidt,Sena Koleva, Matt Motyl, RaviIyer, Sean P Wojcik, and Peter HDitto. 2013. “Moral foundations the-ory: The pragmatic validity of moralpluralism.” In Advances in experimen-tal social psychology. Vol. 47, 55–130.Elsevier.


Guth, Werner, Rolf Schmittberger,and Bernd Schwarze. 1982. “An ex-perimental analysis of ultimatum bar-gaining.” Journal of Economic Behav-ior & Organization, 3(4): 367–388.

Haidt, Jonathan. 2012. The righteousmind: Why good people are divided bypolitics and religion. Vintage.

Haidt, Jonathan, and Craig Joseph.2004. “Intuitive ethics: How in-nately prepared intuitions generateculturally variable virtues.” Daedalus,133(4): 55–66.

Halpern, J. Y., and N. Rong.2010. “Cooperative equilibrium.”1465–1466.

Halpern, J. Y., and W. Kets. 2015.“Ambiguous language and commonpriors.” Games and Economic Behav-ior, 90: 171–180.

Halvorsen, Trond U. 2015. “Are dic-tators loss averse?” Rationality andSociety, 27(4): 469–491.

Hardin, G. 1968. “The tragedy of thecommons.” Science, 162: 1243–1248.

Hauge, Karen Evelyn, Kjell ArneBrekke, Lars-Olof Johansson,Olof Johansson-Stenman, andHenrik Svedsater. 2016. “Keep-ing others in our mind or in ourheart? Distribution games under cog-nitive load.” Experimental Economics,19(3): 562–576.

Henrich, Joseph, Robert Boyd,Samuel Bowles, Colin Camerer,Ernst Fehr, Herbert Gintis, andRichard McElreath. 2001. “Insearch of homo economicus: Behav-ioral experiments in 15 small-scale so-cieties.” American Economic Review,91(2): 73–78.

Iyer, Ravi, Spassena Koleva,Jesse Graham, Peter Ditto, andJonathan Haidt. 2012. “Under-standing libertarian morality: Thepsychological dispositions of self-identified libertarians.” PloS ONE,7(8): e42366.

Jehiel, P. 2005. “Analogy-based expec-tation equilibrium.” Journal of Eco-nomic Theory, 123: 81–104.

Kahneman, D., and A. Tversky.1979. “Prospect theory, an analysisof decision under risk.” Econometrica,47(2): 263–292.

Kahneman, D., J.L. Knetsch, andR. H. Thaler. 1986. “Fairness andthe assumptions of economics.” Jour-nal of Business, 59(4): S285–300.

Kant, Immanuel. 2002. Groundworkfor the Metaphysics of Morals. YaleUniversity Press.

Kessler, Judd B, and Stephen Lei-der. 2012. “Norms and contracting.”Management Science, 58(1): 62–77.

Kimbrough, E, and AlexanderVostroknutov. 2020a. “InjunctiveNorms and Moral Rules.” mimeo,Chapman University and MaastrichtUniversity.

Kimbrough, Erik O, and Alexan-der Vostroknutov. 2016. “Normsmake preferences social.” Journal ofthe European Economic Association,14(3): 608–638.

Kimbrough, Erik O, and AlexanderVostroknutov. 2020b. “A Theory ofInjunctive Norms.” mimeo, ChapmanUniversity and Maastricht University.

Koszegi, B., and M. Rabin. 2006. “Amodel of reference-dependent prefer-ences.” The Quarterly Journal of Eco-nomics, 121(4): 1133–1165.


Kritikos, Alexander, and FriedelBolle. 2001. “Distributional concerns:equity-or efficiency-oriented?” Eco-nomics Letters, 73(3): 333–338.

Krueger, J., and R. W. Clement. 1994. "Memory-based judgments about multiple categories: a revision and extension of Tajfel's accentuation theory." Journal of Personality and Social Psychology, 67(1): 35–47.

Krupka, Erin L, and Roberto AWeber. 2013. “Identifying socialnorms using coordination games: Whydoes dictator game sharing vary?”Journal of the European Economic As-sociation, 11(3): 495–524.

Laffont, Jean-Jacques. 1975.“Macroeconomic constraints, eco-nomic efficiency and ethics: Anintroduction to Kantian economics.”Economica, 42(168): 430–437.

Lazear, Edward P, Ulrike Mal-mendier, and Roberto A Weber.2012. “Sorting in experiments with ap-plication to social preferences.” Amer-ican Economic Journal: Applied Eco-nomics, 4(1): 136–63.

Ledyard, John O. 1995. “Publicgoods: A survey of experimental re-search.” In Handbook of ExperimentalEconomics. , ed. Kagel J. and RothA. Princeton, NJ:Princeton UniversityPress.

Levine, David K. 1998. “Modelingaltruism and spitefulness in experi-ments.” Review of Economic Dynam-ics, 1(3): 593–622.

Levitt, Steven D, and John A List.2007. “What do laboratory experi-ments measuring social preferences re-veal about the real world?” Journalof Economic Perspectives, 21(2): 153–174.

Lin, Haijin, and Shyam Sunder.2002. “Using experimental data tomodel bargaining behavior in ultima-tum games.” In Experimental BusinessResearch. , ed. Rami Zwick and Am-non Rapoport, 373–397. Springer.

Lipman, B. L. 1999. “Decision the-ory without logical omniscience: To-ward an axiomatic framework forbounded rationality.” Review of Eco-nomic Studies, 66: 339–361.

List, John A. 2006. “The behavioral-ist meets the market: Measuring so-cial preferences and reputation effectsin actual transactions.” Journal of Po-litical Economy, 114(1): 1–37.

List, John A. 2007. “On the interpreta-tion of giving in dictator games.” Jour-nal of Political Economy, 115(3): 482–493.

Lopez-Perez, Raul. 2008. “Aversionto norm-breaking: A model.” Gamesand Economic Behavior, 64(1): 237–267.

Manski, C. F., and F. Moli-nari. 2010. “Rounding probabilis-tic expectations in surveys.” Journalof Business and Economic Statistics,28(4): 219–231.

McCabe, Kevin A, Mary L Rigdon,and Vernon L Smith. 2003. “Posi-tive reciprocity and intentions in trustgames.” Journal of Economic Behav-ior & Organization, 52(2): 267–275.

McNeil, B. J., S. J. Pauker, H. C.Sox Jr., and A. Tversky. 1982. “Onthe elicitation of preferences for alter-native therapies.” New England Jour-nal of Medicine, 306: 1259–1262.

Mieth, Laura, Axel Buchner, and Raoul Bell. 2021. "Moral labels increase cooperation and costly punishment in a Prisoner's Dilemma game with punishment option." Scientific Reports, 11(1): 1–13.

Mill, John Stuart. 2016. “Utilitarian-ism.” In Seven Masterpieces of Philos-ophy. , ed. Steven M. Cahn, 337–383.Routledge.

Moyer, R. S., and T. K. Lan-dauer. 1967. “Time required forjudgements of numerical inequality.”Nature, 215: 1519–1520.

Mullainathan, S. 2002. “Think-ing through categories.” Available atwww.haas.berkeley.edu/groups/finance/cat3.pdf.

Niehans, J. 1948. “Zur preisbildung beiungewissen erwartungen.” Schweiz-erische Zeitschrift fur Volkswirtschaftund Statistik, 84(5): 433–456.

Nowak, M. A., and K. Sigmund.1998. “Evolution of indirect reci-procity by image scoring.” Nature,393: 573–577.

Olson, Mancur. 2009. The logic of col-lective action. Vol. 124, Harvard Uni-versity Press.

Ostrom, Elinor. 1990. Governing thecommons: The evolution of institu-tions for collective action. CambridgeUniversity Press.

Pang, Bo, and Lillian Lee. 2004.“A sentimental education: Sentimentanalysis using subjectivity summariza-tion based on minimum cuts.” 271, As-sociation for Computational Linguis-tics.

Pang, Bo, Lillian Lee, and Shiv-akumar Vaithyanathan. 2002.“Thumbs up?: Sentiment classi-fication using machine learningtechniques.” 79–86, Association forComputational Linguistics.

Pearce, D. G. 1984. “Rationaliz-able strategic behavior and the prob-lem of perfection.” Econometrica,52(4): 1029–1050.

Perc, M., and A. Szolnoki. 2010.“Coevolutionary games – a mini re-view.” BioSystems, 99: 109–125.

Rabin, Matthew. 1993. “Incorporat-ing fairness into game theory and eco-nomics.” The American Economic Re-view, 1281–1302.

Rapoport, Anatol, and Albert MChammah. 1965. Prisoner’sdilemma: A study in conflict andcooperation. University of Michiganpress.

Restle, F. 1978. “Speed of adding andcomparing numbers.” Journal of Ex-perimental Psychology, 83: 274–278.

Roberts, Russell D. 1984. “A positivemodel of private charity and publictransfers.” Journal of Political Econ-omy, 92(1): 136–148.

Roemer, John E. 2010. “Kantianequilibrium.” Scandinavian Journal ofEconomics, 112(1): 1–24.

Ross, L., and A. Ward. 1996. “Naiverealism in everyday life: Implicationsfor social conflict and misunderstand-ing.” In Values and knowledge. TheJean Piaget symposium series, , ed.E. S. Reed, E. Turiel and T. Brown,103–135. Hillsdale, NJ, US:LawrenceErlbaum Associates, Inc.

Rubinstein, A. 2000. ModelingBounded Rationality. Cambridge,U.K.:Cambridge University Press.

Salant, Yuval, and Ariel Rubin-stein. 2008. “(A, f): choice withframes.” The Review of EconomicStudies, 75(4): 1287–1296.


Santos, F. C., J. M. Pacheco,and Tom Lenaerts. 2006. “Evolu-tionary dynamics of social dilemmasin structured heterogeneous popula-tions.” Proceedings of the NationalAcademy of Sciences of the USA,103: 3490–3494.

Savage, L. J. 1951. “The theoryof statistical decision.” Journal ofthe American Statistical Association,46: 55–67.

Schier, Uta K, Axel Ockenfels, andWilhelm Hofmann. 2016. “Moralvalues and increasing stakes in a dicta-tor game.” Journal of Economic Psy-chology, 56: 107–115.

Schotter, Andrew, Keith Weigelt,and Charles Wilson. 1994. “Alaboratory investigation of multiper-son rationality and presentation ef-fects.” Games and Economic behavior,6(3): 445–468.

Schwartz, Shalom H. 1977. “Nor-mative influences on altruism.” Ad-vances in experimental social psychol-ogy, 10: 221–279.

Sen, Amartya K. 1977. “Rationalfools: A critique of the behavioralfoundations of economic theory.” Phi-losophy & Public Affairs, 317–344.

Smith, Adam. 2010. The theory ofmoral sentiments. Penguin.

Smith, Eric Alden, and RebeccaL Bliege Bird. 2000. “Turtle huntingand tombstone opening: Public gen-erosity as costly signaling.” Evolutionand Human Behavior, 21(4): 245–261.

Sobel, Joel. 2005. “Interdependentpreferences and reciprocity.” Journalof Economic Literature, 43(2): 392–436.

Stigler, George J, and Gary SBecker. 1977. “De gustibus non estdisputandum.” The American Eco-nomic Review, 67(2): 76–90.

Strulov-Shlain, A. 2019. “More thana penny’s worth: Left-digit bias andfirm pricing.” Chicago Booth ResearchPaper No. 19-22.

Sugden, Robert. 2000. “Team pref-erences.” Economics & Philosophy,16(2): 175–204.

Swope, Kurtis, John Cadigan,Pamela Schmitt, and RobertShupp. 2008. “Social position anddistributive justice: Experimental ev-idence.” Southern Economic Journal,811–818.

Tabellini, Guido. 2008. “Institutionsand culture.” Journal of the EuropeanEconomic Association, 6(2-3): 255–294.

Tappin, Ben M, and ValerioCapraro. 2018. “Doing good vs.avoiding bad in prosocial choice: Arefined test and extension of themorality preference hypothesis.” Jour-nal of Experimental Social Psychology,79: 64–70.

Thaler, R. 1980. “Towards a positivetheory of consumer choice.” Journal ofEconomic Behavior and Organization,1: 39–60.

Titmuss, Richard. 2018. The gift rela-tionship (reissue): From human bloodto social policy. Policy Press.

Tversky, Amos, and Daniel Kah-neman. 1985. “The framing of deci-sions and the psychology of choice.”In Behavioral decision making. 25–41.Springer.


Tversky, Amos, and Itamar Si-monson. 1993. “Context-dependentpreferences.” Management Science,39(10): 1179–1189.

Wald, A. 1950. Statistical DecisionFunctions. New York:Wiley.

Warr, Peter G. 1982. “Pareto opti-mal redistribution and private char-ity.” Journal of Public Economics,19(1): 131–138.