A short introduction to game theory

(presentation to the Cortex Club, 11/3/2003)

Suggested readings:
• "A Survey of Applicable Game Theory" by Robert Gibbons (a non-technical introduction to the main ideas)
• Chapter 8 from William Poundstone's book "Prisoner's Dilemma" (very entertaining historical perspective)

Gibbons: “Game theory is rampant in economics. Having long ago invaded industrial organization, game-theoretic modeling is now commonplace in international, labor, macro and public finance, and it is gathering steam in development and economic history. Nor is economics alone: accounting, finance, law, marketing, political science and sociology are beginning similar experiences. Many modelers use game theory because it allows them to think like an economist when price theory does not apply.”

Selective history of noncooperative game theory

1928: Minimax solution for 2-person zero-sum games (von Neumann): the optimal solution is to minimize your opponent’s maximum expected gain (and vice versa)

1944: Theory of Games and Economic Behavior: von Neumann and Morgenstern propose a game-theoretic foundation for economics (“the problem of 2, 3, 4,… bodies”)

1950: Nash formulates an equilibrium concept for n-person non-zero-sum games of complete information and gives an existence proof based on Kakutani’s fixed-point theorem (later adapted by Arrow to prove the existence of competitive equilibrium)

1950’s: First golden age of game theory, early prisoner’s dilemma experiments, abortive attempts to apply game theory to cold war politics and various problems in the social sciences

Mid-1960’s: Game theory is pronounced dead in the social sciences

1965: Selten proposes the concept of subgame perfect equilibrium, a “refinement” of Nash equilibria for extensive-form games

1967: Harsanyi proposes the Bayesian Nash equilibrium solution concept for games of incomplete information

1970’s-1980’s: Game theory is reinvigorated by the pursuit of additional refinements of Nash equilibrium, by evolutionary analogies, and by applications of Bayesian equilibrium (e.g., to auction theory and mechanism design)

1990’s: Game theory swallows microeconomics; Nash, Harsanyi, and Selten share the 1994 Nobel prize; meanwhile, a new field of “behavioral game theory” emerges

“The method of von Neumann and Morgenstern has become the archetype of later applications of game theory. One takes an economic problem, formulates it as a game, finds the game-theoretic solution, then translates the solution back into economic terms. This is to be distinguished from the more usual methodology of economics and other social sciences, where the building of a formal model and a solution concept, and the application of the solution concept, are all rolled into one.”

—Robert Aumann: “Game Theory” in The New Palgrave

• Von Neumann and Morgenstern’s book deals separately with games involving different numbers of players, and introduces the solution concept of the “stable set.” Ultimately this approach did not prove fruitful, and most of the book is now considered unreadable

• The lasting contribution of the book is the proposition that game theory should be the foundation of microeconomics, as well as a system of preference axioms implying the existence of a cardinal (interval scale) utility function whose expected value the player seeks to maximize. Thus, units of utility are henceforth the “money” in which game payoffs are measured.

Nash’s key proposal:

• The outcome of a game with any number of players should be an equilibrium, even if the game is played only once among players who cannot communicate, coordinate, or cooperate with each other.

• There are no explicit disequilibrium dynamics or equilibrating forces. Rather, the equilibrium simply happens as the result of an infinite regress of reciprocal expectations of rationality.

"You can only understand the Nash equilibrium if you have met Nash. It's a game and it's played alone."

— Martin Shubik, quoted in A Beautiful Mind

A noncooperative game is an interactive (multi-player) decision problem in which:

1. All players are “Bayesian rational” utility maximizers
2. The Bayesian rationality of the players is “common knowledge”
3. The structure of the game that is being played (sequence of available moves, etc.) is common knowledge
4. The payoffs of all players (in units of personal utility) are common knowledge
5. If there is private information, the players share a “common prior distribution” over their possible information states
6. There are no binding multilateral contracts (players are “free” to move as they wish when their “turn” comes)

“It is helpful to think of players’ strategies as corresponding to various ‘buttons’ on a computer keyboard. The players are thought of as being in separate rooms, and being asked to choose a button without communicating with each other. Usually we assume that all players know the structure of the strategic form, and know their opponents know it, and know that their opponents know that they know, and so on ad infinitum.”

—Fudenberg & Tirole, Game Theory

How is a game “solved”?

• Many, if not most, games do not have a unique, obvious Bayesian rational solution

• A game therefore must be solved by applying some “solution concept”, which is an algorithm supported by desiderata and anecdotal evidence (e.g., “plausible” results in well-chosen examples)

• Popular solution concepts include Nash equilibrium, iterative deletion of dominated strategies, forward and backward induction, various “refinements” of Nash equilibrium, etc.

• Different solution concepts may yield different solutions to a game

Aumann:

“Given a game, what outcome may be expected? Most of game theory is, in one way or another, directed at this question. In the case of two-person zero-sum games, a clear answer is provided: the unique individually rational outcome [i.e., the minimax solution]. But in almost all other cases, there is no unique answer. There are different criteria, approaches, points of view, and they yield different answers. A solution concept is a function (or correspondence) that associates outcomes, or sets of outcomes, with games. .... for example, the strategic [Nash] equilibrium and its variants for strategic form games, and the core, the von Neumann-Morgenstern stable sets, the Shapley value and the nucleolus for coalitional games. Each represents a different point of view. What will ‘really’ happen? Which solution concept is ‘right’? None of them; they are indicators, not predictions.”

Nash equilibrium

• A solution in which each player chooses either a pure strategy or an independently randomized (“mixed”) strategy such that the strategies of different players are in equilibrium with each other.

• In other words, the strategy of each player is a “best response” to the strategies of the other players.

• Such an equilibrium always exists, because the best-response correspondence is an upper hemicontinuous, convex-valued correspondence that maps a compact convex set (the set of mixed-strategy profiles) into itself, and therefore it has a fixed point (by Kakutani’s theorem)
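In symbols (a standard textbook statement, in my own notation rather than anything taken from the slides):

```latex
% \sigma^* = (\sigma_1^*,\ldots,\sigma_n^*) is a Nash equilibrium if no player
% can gain by deviating unilaterally:
\[
  u_i(\sigma_i^*, \sigma_{-i}^*) \;\ge\; u_i(\sigma_i, \sigma_{-i}^*)
  \qquad \text{for every } \sigma_i \in \Sigma_i \text{ and every player } i.
\]
% Equivalently, \sigma_i^* \in BR_i(\sigma_{-i}^*) for every i, so an equilibrium
% is exactly a fixed point of the best-response correspondence
% BR(\sigma) = BR_1(\sigma_{-1}) \times \cdots \times BR_n(\sigma_{-n}),
% which is where Kakutani's fixed-point theorem enters the existence proof.
```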

“In the great majority of the applications of noncooperative game theory to economics, the mode of analysis is equilibrium analysis. And in many of these analyses, the analyst identifies a Nash equilibrium (and sometimes more than one) and proclaims it (them?) as ‘the solution.’ I wish to stress that this practice is sloppy at best and probably a good deal worse. ...it is clear that having the answer ‘Nash equilibrium’ is pretty thin gruel if what we are after is a way to solve games.”

—David Kreps, A Course in Microeconomic Theory

Troublesome questions:

• What is the meaning of Bayesian rationality in an interactive environment? Does subjective expected utility theory (Savage’s axioms and all that) apply to uncertainty about the behavior of other human agents?

• What is the meaning of common knowledge of Bayesian rationality (an infinite regress of reciprocal expectations of rationality), and how or why should it obtain in practice?

• How do the players achieve common knowledge of the rules of the game (how can I know your utilities, and how can you know that I know them, etc. ad infinitum)?

• Is it really helpful to think of the players seated at computer terminals in separate rooms, with no opportunity for communication despite common knowledge of the game? Does this scenario capture the essence of strategic reasoning and interactive decision making?

• Why should the solution of the game be an equilibrium if it is only played once? How is equilibrium behavior “learned”?

• If different solution concepts yield different solutions, how do the players agree on which one to use (assuming they are pushing buttons in separate rooms, etc.)?

• If a given solution concept admits multiple solutions (e.g., multiple Nash equilibria), how do the players agree on which one to use?

Classification of types of games:

Normal (static) form vs. extensive (dynamic) form

Perfect information (alternating move) vs. imperfect information (simultaneous move)

Complete information (deterministic knowledge of the rules) vs. incomplete information (probabilistic knowledge of the rules)

[Figures: a 2×2 game with row strategies Top/Bottom and column strategies Left/Right, and game-tree diagrams labeled “Information set” and “Nature”]
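As a purely illustrative sketch (payoffs and names are my own, not from the slides), the same small game can be stored either way:

```python
# Normal (strategic) form: a table from strategy profiles to payoff pairs.
normal_form = {
    ("Top", "Left"): (2, 2), ("Top", "Right"): (1, 1),
    ("Bottom", "Left"): (3, 1), ("Bottom", "Right"): (0, 0),
}

# Extensive (dynamic) form: a tree of decision nodes. As written, the column
# player observes the row player's move (perfect information); to model
# simultaneous moves instead, the two column-player nodes would be grouped
# into a single "information set", and a chance move would appear as an
# initial node belonging to "Nature".
extensive_form = ("decide", "Row", {
    "Top":    ("decide", "Column", {"Left": (2, 2), "Right": (1, 1)}),
    "Bottom": ("decide", "Column", {"Left": (3, 1), "Right": (0, 0)}),
})
```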

Taxonomy of solution concepts for games of complete information

[Figure: a chart of solution concepts arranged in the direction of increasing “refinement.” Labels in the chart include: iterated strict dominance; backward induction**; correlated rationalizability; rationalizability; subjective correlated equilibrium; arbitrage-free equilibrium (joint coherence); objective correlated equilibrium; efficient solutions to prisoner’s dilemma & centipede game; Nash equilibrium; Nash + 1 iteration of weak dominance; iterated weak dominance; subgame-perfect Nash equilibrium*; sequential equilibrium*; perfect Bayesian equilibrium*; perfect equilibrium (in the agent normal form*); proper equilibrium in the normal form*; and, at the refined extreme, the Holy Grail (unique rational solution).]

*Extensive-form games
**Extensive-form games of perfect information

Gibbons chooses to focus on (only) four solution concepts:

• Nash equilibrium (for static games of complete information)
• Subgame perfect Nash equilibrium (for dynamic games of complete information)
• Bayesian equilibrium (for static games of incomplete information)
• Perfect Bayesian equilibrium (for dynamic games of incomplete information)

“This outline may seem to suggest that game theory invokes a brand new equilibrium concept for each class of games, but one theme of this paper is that these equilibrium concepts are very closely linked. As we consider progressively richer games, we progressively strengthen the equilibrium concept to rule out implausible equilibria in the richer games that would survive if we applied equilibrium concepts suitable for simpler games. In each case, the stronger equilibrium concept differs from the weaker concept only for the richer games, not for the simpler games.”

(Note the euphemism “implausible.” The solutions that are implausible from the abstract viewpoint of a particular solution concept are not necessarily implausible in the concrete setting of a particular game. For example, “implausible” equilibria may yield better payoffs to all the players than “plausible” equilibria.)

The prisoner’s dilemma*

              Cooperate   Defect
Cooperate     3, 3        0, 4
Defect        4, 0        1, 1

Defection is a strictly dominant strategy for both players, so the only “rational” outcome is the payoff pair (1,1)

*By convention, the first number in each cell is the payoff to the row player, the second number is the payoff to the column player. In this case, the row player gets one more unit of utility by defecting regardless of column player’s move, and vice versa.

The first lesson of game theory: Individually rational strategies can be socially irrational; even the simplest equilibria can be self-defeating spirals of negative thinking and mutually destructive behavior (the “tragedy of the commons”)
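A small sketch of the dominance check (my own code and naming, not anything from the slides): store the game as a payoff table and look for a strategy that strictly beats every alternative against every opposing move.

```python
PD = {  # prisoner's dilemma payoffs: (row player, column player)
    ("Cooperate", "Cooperate"): (3, 3), ("Cooperate", "Defect"): (0, 4),
    ("Defect",    "Cooperate"): (4, 0), ("Defect",    "Defect"):  (1, 1),
}

def strictly_dominant(game, player):
    """Return a strictly dominant strategy for `player` (0 = row, 1 = column),
    or None if there is none."""
    rows = sorted({r for r, _ in game})
    cols = sorted({c for _, c in game})
    own, other = (rows, cols) if player == 0 else (cols, rows)

    def payoff(mine, theirs):
        profile = (mine, theirs) if player == 0 else (theirs, mine)
        return game[profile][player]

    for s in own:
        if all(all(payoff(s, t) > payoff(s2, t) for t in other)
               for s2 in own if s2 != s):
            return s
    return None

print(strictly_dominant(PD, 0), strictly_dominant(PD, 1))  # Defect Defect
```

Running the same check on the “no brainer” game on the next slide would return Cooperate for both players.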

The “no brainer” game

              Cooperate   Defect
Cooperate     4, 4        1, 3
Defect        3, 1        0, 0

• This is not a dilemma: the individually rational solution (mutual cooperation) is also socially rational

• From the standpoint of non-cooperative game theory, both games are no-brainers, and they essentially are the same game.

• In both games, each player has a strategy that yields one more unit of utility than the other strategy, regardless of what his opponent does. Whether the outcome is efficient or not is beside the point in a single-play game.

Repeated play:

• The repeated prisoner’s dilemma game has spawned a vast literature

• Repetition may create opportunities for learning and for cooperation (by punishing defection), particularly when play is repeated against the same opponent

• In a finitely repeated prisoner’s dilemma, mutual defection is still the unique “optimal” strategy

• In an infinitely repeated (or randomly terminated) prisoner’s dilemma, cooperation can become rational (“tit for tat” is a very good strategy; see the sketch below)

• The “folk theorem”: in an infinitely repeated game, virtually anything can happen in equilibrium

Experimental evidence:

• In the repeated prisoner’s dilemma game, where mutual defection is optimal, some players habitually cooperate, contrary to theory

• In the repeated no-brainer game, where mutual cooperation is both individually rational and socially efficient, some players defect anyway out of a desire to beat their opponent (at least when payoffs are in money…)
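A minimal simulation sketch of the randomly terminated case (my own illustration; the 0.95 continuation probability is an arbitrary assumption, not a number from the talk):

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
          ("D", "C"): (4, 0), ("D", "D"): (1, 1)}  # prisoner's dilemma

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy1, strategy2, continue_prob=0.95, rng=random.Random(0)):
    """Randomly terminated repeated game; returns the two total payoffs."""
    h1, h2, total1, total2 = [], [], 0, 0
    while True:
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        total1, total2 = total1 + p1, total2 + p2
        h1.append(m1)
        h2.append(m2)
        if rng.random() > continue_prob:
            return total1, total2

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation every round
print(play(tit_for_tat, always_defect))  # TFT is exploited once, then defects
```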

The “coarsest” solution concept: dominance solvability (iterative deletion of dominated strategies)

1. Strict dominance

          Left    Right
Top       2, 2    1, 1
Bottom    3, 1    0, 0

2. Weak dominance (is BL the only rational solution?)

          Left    Right
Top       4, 4    4, 4
Bottom    5, 1    0, 0

3. Iterated weak dominance (Kreps 12.1d—horrors!)

          Left      Center   Right
Top       10, 0     5, 1     4, -200
Bottom    10, 100   5, 0     0, -100
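A sketch of the deletion procedure, restricted for simplicity to strict dominance by pure strategies (my own code, not an algorithm given in the talk):

```python
def iterated_strict_dominance(game):
    """`game` maps (row_strategy, col_strategy) -> (row_payoff, col_payoff).
    Repeatedly delete strictly dominated pure strategies; return survivors."""
    rows = sorted({r for r, _ in game})
    cols = sorted({c for _, c in game})

    def dominated(strats, opp_strats, payoff):
        for s in strats:
            for s2 in strats:
                if s2 != s and all(payoff(s2, t) > payoff(s, t) for t in opp_strats):
                    return s        # s is strictly dominated by s2
        return None

    changed = True
    while changed:
        changed = False
        d = dominated(rows, cols, lambda r, c: game[(r, c)][0])
        if d:
            rows.remove(d)
            changed = True
        d = dominated(cols, rows, lambda c, r: game[(r, c)][1])
        if d:
            cols.remove(d)
            changed = True
    return rows, cols

game1 = {("Top", "Left"): (2, 2), ("Top", "Right"): (1, 1),
         ("Bottom", "Left"): (3, 1), ("Bottom", "Right"): (0, 0)}
print(iterated_strict_dominance(game1))  # (['Bottom'], ['Left']), i.e., BL
```

Applied to game 1, this deletes Right and then Top, leaving BL with payoffs (3, 1). Weak dominance (games 2 and 3) is less well behaved: the surviving strategies can depend on the order in which weakly dominated strategies are deleted, which is presumably part of the “horrors” in the Kreps example.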

Backward induction:

[Figure: a game tree in which player 1 chooses T or B; if B, player 2 chooses L or R. Payoffs: T → (4, 4); B then L → (5, 1); B then R → (0, 0).]

• Game 2 on the previous page is the normal form of this game.
• Is BL (again) the only rational solution?

Backward induction carried to extremes: Rosenthal’s centipede game

[Figure: players 1 and 2 alternate moves; the stopping payoffs run (1, 1), (0, 3), (2, 2), …, (97, 100), (99, 99), (98, 101), and the game ends at (100, 100) if both always move across.]

You each start with 1 unit and have alternating moves. A move of “across” adds 2 to your opponent’s account and subtracts one unit from yours; a move of “down” ends the game. The backward-induction solution is for the first player to move “down” at the first stage, yielding payoffs of 1 to both players.
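A backward-induction sketch for a centipede game of this form (my own code; I assume 198 decision nodes, which is what reproduces the (98, 101) and (100, 100) payoffs shown in the figure):

```python
def centipede_payoffs(n_stages=198):
    """Payoffs if play stops at node k (k = 0 is the first decision node),
    plus the final entry: the payoffs if nobody ever moves down."""
    p = [1, 1]                    # each player starts with 1 unit
    stops = [tuple(p)]
    for k in range(n_stages):
        mover = k % 2             # players alternate; player 1 moves first
        p[mover] -= 1             # "across" costs the mover one unit...
        p[1 - mover] += 2         # ...and gives the opponent two
        stops.append(tuple(p))
    return stops

def backward_induction(stops):
    """Work backwards from the end, letting each mover compare stopping now
    with the value of continuing; return the node where play stops."""
    n = len(stops) - 1
    value, stop_node = stops[n], n
    for k in range(n - 1, -1, -1):
        mover = k % 2
        if stops[k][mover] >= value[mover]:
            value, stop_node = stops[k], k
    return stop_node, value

print(backward_induction(centipede_payoffs()))  # (0, (1, 1)): down at once
```

At every node the comparison is strict: moving across and having the opponent stop immediately leaves the mover exactly one unit worse off than stopping now, which is what unravels cooperation all the way back to the first node.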

Another interesting 2x2 game: “battle of the sexes”

          Boxing   Ballet
Boxing    3, 2     0, 0
Ballet    0, 0     2, 3

Solutions:

• 2 pure Nash equilibria: both go to ballet or to boxing

• 1 completely mixed Nash equilibrium: each goes to preferred event independently with probability 3/5 (see the derivation below)

• It might be argued that if the players find themselves in this game with no opportunities to communicate or coordinate, then they should play the mixed-strategy Nash equilibrium

• However, the mixed-strategy equilibrium is both inefficient and unstable

• This game is generically equivalent to “chicken” and “hawk-dove”

(If some communication is possible, why not decide on the basis of a coin flip or whether the day of the month is odd or even? That’s a correlated equilibrium, not Nash)
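The 3/5 comes from the usual indifference condition; a worked version (standard algebra, my own rendering rather than anything shown on the slide), taking the row player to be the one who prefers boxing:

```latex
% Let p be the probability that the row player goes to Boxing.
% The column player randomizes only if she is indifferent between her two moves:
\[
  \underbrace{2p + 0\,(1-p)}_{\text{her payoff from Boxing}}
  \;=\;
  \underbrace{0\,p + 3\,(1-p)}_{\text{her payoff from Ballet}}
  \quad\Longrightarrow\quad 2p = 3 - 3p
  \quad\Longrightarrow\quad p = \tfrac{3}{5}.
\]
% By the symmetric calculation the column player goes to Ballet with
% probability 3/5, so each player attends the preferred event with
% probability 3/5, as stated above.
```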

Refinements of Nash equilibrium

The basic idea: look for refinements (strengthenings) of the Nash equilibrium concept to reduce the number of possible equilibria, particularly in extensive-form games

• In normal-form games, some equilibria may involve weakly dominated strategies

• In extensive-form games, some equilibria may be supported by “implausible” threats of irrational behavior off the equilibrium path

• Is such behavior really “implausible”?

Subgame perfection (Selten 1965):

• Requires the solution in every proper subtree to be Nash—essentially a backward-induction application of Nash equilibrium

• Doesn’t always refine the set of equilibria (e.g., if the game does not have proper subtrees)

Trembling-hand perfection (Selten 1975)

[Figure: a three-player extensive-form game in which player 1 chooses T or B, player 2 chooses U or D, and player 3 chooses L or R, with terminal payoffs (3, 3, 0), (5, 5, 0), (2, 2, 2), (4, 4, 4), and (1, 1, 1).]

Here, TUR and BUL are both Nash, but only TUR is trembling-hand perfect—although it is payoff dominated by BUL!

Sequential/Perfect Bayesian Equilibrium

• A solution consists of a specification of strategies for each player at each node and also beliefs about prior history of play

[Figure: player 1 chooses T, M, or B. Choosing T ends the game with payoffs (2, 2); M and B lead to an information set at which player 2 chooses L or R, with payoffs (5, 1), (0, 0), (0, 0), and (1, 3) at the four remaining terminal nodes.]

• Here, TR and ML are both Nash. TR is sequentially rational—e.g., supported by 2’s beliefs that M and B are equally likely if T is not played (and also supported by a threat by 2 to play R in any case).

• But ML is supported by forward induction (and also by elimination of B as a dominated strategy, followed by backward induction)

“So, what is the bottom line for refinements of Nash equilibrium? The philosophy espoused here can be paraphrased as follows: The bottom line is that there is no bottom line. In refining Nash equilibrium, one is speculating about what is supposed to happen after there is evidence that the going theory is incorrect. Depending on why you (and the players involved) think one sees deviations from a priori likely play and what this portends about future play, one supports or diminishes the relevance of particular formal refinements. Since the appropriate story is apt to be specific to the context (and, especially, to depend on why one thinks there is a “solution” in the first place), it seems fruitless to try to choose among refinements in the abstract.”

—David Kreps

“There exists a widespread myth in game theory, that it is possible to achieve a miraculous prediction regarding the outcome of interaction among human beings using only data on the order of events, combined with a description of the players’ preferences over the feasible outcomes of the situation. For forty years, game theory has searched for the grand solution which would accomplish this task. The mystical and vague word ‘rationality’ is used to fuel our hopes of achieving this goal. I fail to see any possibility of this being accomplished. Overall, game theory accomplishes two tasks: It builds models based on intuition and uses deductive arguments based on mathematical knowledge. Deductive arguments cannot by themselves be used to discover truths about the world. Missing are data describing the process of reasoning adopted by the players when they analyze a game. Thus, if a game in the formal sense has any coherent interpretation, it has to be understood to include explicit data on the players’ reasoning processes. Alternatively, we should add more detail to the description of these reasoning procedures. We are attracted to game theory because it deals with the mind. Incorporating psychological elements which distinguish our minds from machines will make game theory even more exciting and certainly more meaningful.” [italics added]

—Ariel Rubinstein “Comments on the interpretation of game theory,” Econometrica 1991

“Behavioral” game theory

• Bayesian rationality is often not descriptive of real-world behavior under uncertainty, which has given rise to parallel “behavioral” theories of individual and corporate decision making, economics, and finance over the last 50 years

• Contrary to Bayesian theory, real human decision makers are prone to various cognitive biases and often follow simple heuristic rules that emphasize gains and losses relative to the current status quo

• Daniel Kahneman and Vernon Smith (and implicitly also the late Amos Tversky) received the economics Nobel Prize in 2002 for work in behavioral decision theory and behavioral economics

• Behavioral game theory is the most recent addition to the list of behavioral theories

“Behavioral game theory is about what players actually do. It expands analytical theory by adding emotion, mistakes, limited foresight, doubts about how smart others are, and learning to analytical game theory… Behavioral game theory is one branch of behavioral economics, an approach which uses psychological regularity to suggest ways to weaken rationality assumptions and extend theory… Even if game theory is not always accurate, descriptive failure is prescriptive opportunity. Just as evangelists preach because people routinely violate moral codes, the fact that players violate game theory provides a chance for giving helpful advice…”

—Camerer, Behavioral Game Theory, 2003

What’s the bottom line?

• Analytical game theory makes some good predictions in some kinds of games, but not in others

• The infinite regress of reciprocal expectations only extends about 2 or 3 steps ahead in practice

• Models of adaptive behavior (e.g., reinforcement and belief learning, experience-weighted attraction) work well in some repeated-play situations (a minimal reinforcement-learning sketch follows below)

• The devil is in the details (e.g., in auction design)

Camerer: “..[B]ehavioral game theory is not a scolding catalog of how poorly game theory describes choices. In fact, the results are uniformly mixed, in a way that encourages the view that better theory is close at hand… It appears to be easy to modify theories so self-interested people are human and infinite steps become finite, while preserving the central principle in game theory, namely that players think about what others are likely to do, and do so with some degree of thought.”
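For concreteness, here is a toy reinforcement-learning model in the spirit of the ones mentioned above (my own sketch, applied to the prisoner's dilemma payoffs from earlier; nothing in it is taken from the talk):

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 4),
          ("D", "C"): (4, 0), ("D", "D"): (1, 1)}
ACTIONS = ["C", "D"]

def simulate(rounds=500, rng=random.Random(1)):
    # Each player keeps a "propensity" for each action; actions are chosen with
    # probability proportional to propensity, and the chosen action's propensity
    # is reinforced by the payoff it just earned (a Roth-Erev-style rule).
    prop = [{a: 1.0 for a in ACTIONS}, {a: 1.0 for a in ACTIONS}]
    for _ in range(rounds):
        moves = tuple(rng.choices(ACTIONS, weights=[prop[i][a] for a in ACTIONS])[0]
                      for i in (0, 1))
        payoffs = PAYOFF[moves]
        for i in (0, 1):
            prop[i][moves[i]] += payoffs[i]
    return prop

print(simulate())  # propensities typically drift toward "D" for both players
```

Belief learning and experience-weighted attraction elaborate on the same idea by also crediting the actions a player did not choose with the payoffs they would have earned.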

Camerer’s “top ten open research questions:”

1. How do people value the payoffs of others?

2. How do people learn?

3. How do social preferences vary across people and environments?

4. What happens when people confront “new” games?

5. How exactly are people thinking in games?

6. What games do people think they are playing?

7. Can experiments sharpen the design of new institutions?

8. How do teams, groups, and firms play games?

9. How do people behave in very complex games?

10. How do socio-cognitive dimensions [e.g., shared background of subjects] influence behavior in games?

My own two cents:

• When considering applications of game theory outside of economics (or for that matter within economics), it is worth reflecting on whether the situation at hand is more usefully regarded as a “game” or as some other kind of dynamical many-body system.

• If the “equilibrating force” is not mutual expectations of rational behavior by intelligent agents, then what is it?

• The argument for equilibrium may well be stronger in a physical or biological system than in a game.

• In economic systems, the argument for equilibrium is stronger if disequilibrium conditions would create arbitrage (riskless profit) opportunities for “alert” agents.

• Quantitative models of behavior in games are more credible in situations where payoffs to agents are measured in units of exchange that can actually be counted (e.g., money rather than “utility”).

• Decision making with a capital D (as in “being a Good Decision Maker”) is mediated by language and by social relationships, which may be why simple, abstract mathematical models of decision trees and game trees often fail to describe complex real-world behavior.