THEORIES OF CHOICE UNDER IGNORANCE AND UNCERTAINTY

David Kelsey University of Birmingham and Australian National University

John Quiggin University of Maryland and Australian National University

Abstract. In this paper, Knight's distinction between risk and uncertainty, and its significance for economic analysis, are examined. The paper consists of a survey of some recent developments in the theory of choice under uncertainty and some applications of these theories to problems for which Bayesian Decision Theory has not proved entirely satisfactory. Two problems are examined in detail. The first concerns finance and insurance; the second concerns risk-taking behaviour, with special emphasis on lotteries.

Keywords. Risk, uncertainty, ambiguity, subjective expected utility, lotteries, ignorance.

1. Introduction

It was at one time common in the literature to draw a distinction between ‘risk’ and ‘uncertainty’, following Knight (1921), who maintained that estimates of relative frequency over repeated trials provide the only legitimate basis for the assignment of numerical probabilities. Knight (1921) referred to the case when this can be done as ‘risk’ and to other cases as ‘uncertainty’. He argued that only uncertainty is economically relevant.

‘An uncertainty which can by any method be reduced to an objective, quantitatively determinate probability can be reduced to complete certainty by grouping cases. The business world has evolved several organization devices for effectuating this consolidation, with the result that when the technique of business organization is fairly developed, measurable uncertainties do not introduce into business any uncertainty whatever.’ (Knight, 1921, p. 232)

During the 1960s and 1970s, the distinction was usually mentioned by writers on the economics of uncertainty, but was normally ignored in practice. Recent years have fortunately seen a revival of interest in this subject. These developments have included suggestions that general economic analysis should pay more attention to the issues associated with uncertainty (see, for instance, Lawson 1985) and formal developments in the theory of uncertainty itself, which will be the subject of this paper. There seems to be an increasing number of economists who take the view that there are important economic problems involving the kinds of uncertainty which cannot usefully be described by a single probability distribution. The Austrian and post-Keynesian schools have placed particular emphasis on these ideas. Knight's own arguments have been developed by a number of writers in the Austrian tradition, notably Hayek (1937), Lachmann (1977) and Kirzner (1979), who have sought to emphasise the role of human powers of exploration, discovery and decision-making, particularly in relation to entrepreneurial activity, and to attack the neoclassical reliance on optimisation and equilibrium.

Writers in the post-Keynesian tradition have used the concept of uncertainty very differently. Keynes (1921, 1936) accepted a concept of subjective probability, but sought to limit its scope. Keynes (1936) argued that in the absence of sufficient knowledge to form a reasonable subjective probability, the status quo will be given more weight than can be justified on objective grounds. While the mainstream neoclassical synthesis of Keynes' ideas, largely due to Hicks (1937), effectively discarded Keynes' concerns with uncertainty, post-Keynesian writers, such as Minsky (1975), Chick (1983) and especially Shackle (1952), have pursued them vigorously. These writers stress the idea, developed in Chapter 12 of the General Theory, that the uncertainty associated with long-run investment decisions is such that these decisions cannot be based on rational optimisation, but only on a set of expectations which are essentially conventional in nature. The rapid revision of these expectations is a major element of economic crises. Hence, full-employment equilibrium is inherently unstable in the absence of extensive government intervention.

The issue also arises in micro-economic contexts such as the field of risk analysis.1 Suppose that you are concerned about safety and ask an engineer about the chances of an accident at a nuclear power station. He replies that the chance of a serious accident is one in a million, but then adds that he is not sure of this. Can subjective expected utility theory make sense of this kind of statement?

It does not seem reasonable to assume that the engineer has a probability distribution over probability distributions, since if this were the case he could assign a single number to the probability using the usual methods for reducing compound lotteries to simple lotteries. It may be better to explain this in terms of theories (such as those discussed in section 2.1 below) which allow the decision-maker simultaneously to hold many subjective probability distributions.

The aim of this paper is to present a non-technical account of some recent developments in the theory of such kinds of uncertainty. Section 1.1 presents the basic distinction between risk and uncertainty and an outline of the Bayesian subjective expected utility model. Section 2 contains an outline of alternative models for choice under uncertainty. Section 3 contains examples of the application of these concepts.


1.1. Risk and uncertainty

Let us agree to call risk a situation in which there are well-defined, unique and generally accepted objective probabilities. It is now a common view that in the social sciences there are many situations in which people have to take decisions in circumstances involving uncertainty which cannot be classified as risk in the above sense. Throughout the remainder of this paper we shall use the term uncertainty to mean something distinct from risk.

Subjective expected utility maximisation (also known as Bayesian decision theory) is the most commonly used theory of decision-making under uncertainty. This theory implies that an individual has beliefs over uncertain events which have the mathematical form of probabilities. These beliefs are referred to as subjective probabilities. Moreover, the individual has a utility function and maximises the expected value of that function. In contrast to situations of risk, subjective probabilities are not necessarily based upon much or indeed any evidence, nor is there any reason to believe that, in general, different decision-makers will have the same subjective probabilities, even when they are faced with similar decision problems.

It does not seem plausible that people are always able to assign probabilities to relevant events when they are faced with a problem involving uncertainty. In particular one might expect people to have trouble assigning subjective probabilities in unfamiliar circumstances or when they have little evidence concerning the relevant variables. The following example is due to Savage (1954). If Bayesian decision theory is correct, then an individual would be able to assign an exact number, such as 0.42, to the proposition 'The next president of the United States will be a Democrat'. This conclusion seems implausible.

However, arguments based on the types of statements people might make do not necessarily imply that Bayesian decision theory is wrong. It is not an implication of that theory that an individual would be aware of his subjective probabilities or of his utility function. The theory only requires that individuals behave as if they maximise subjective expected utility. Nevertheless, such examples are worrying. In this paper, we shall present some alternative theories, which can be defended as models of rational choice under uncertainty. We shall argue that the alternative theories perform better when the decision-maker is in a state of ignorance or has little knowledge of the relevant probabilities.

Apart from such a priori considerations there is also some evidence which indicates that people do not base decisions on subjective probabilities in some situations involving uncertainty. The best-known example of this is the Ellsberg Paradox (see Ellsberg (1961)). This can be described as follows. There is an urn which contains 90 balls. One ball will be drawn from the urn at random. The urn contains 30 red balls and the remaining balls are known to be either blue or yellow, but the number of balls which have each of these two colours is unknown.

In the experiment subjects were offered the following four gambles:

a. £1,000 if a red ball is drawn,
b. £1,000 if a blue ball is drawn,
c. £1,000 if a red or yellow ball is drawn,
d. £1,000 if a blue or yellow ball is drawn.

When faced with this choice most subjects state that they prefer a to b and d to c. It is easy to check algebraically that there is no subjective probability which is capable of representing the stated choices as maximising the expected value of any utility function. Suppose, to the contrary, that the decision-maker does indeed have a subjective probability distribution. Then since she prefers a to b she must have a higher subjective probability for a red ball being drawn than for a blue ball being drawn. But the fact that she prefers d to c implies that she has a higher subjective probability for a blue ball being drawn than for a red ball. These two deductions are contradictory.3 Of course it is easy to come up with hypotheses which describe this behaviour. It seems that the subjects are choosing gambles where the probabilities are better known. But it is still the case that subjective expected utility theory cannot explain this pattern of choices.
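
The check can be done mechanically. The following sketch (ours, not from the paper) normalises u(£1,000) = 1 and u(0) = 0, so that the expected utility of each gamble is just the probability of its winning event, and searches a grid of candidate distributions:

```python
# Sketch (ours): confirm that no additive probability distribution over
# {red, blue, yellow} makes a preferred to b and d preferred to c.
# With u(1000) = 1 and u(0) = 0, EU of a gamble = probability of winning.

def rationalises(p_red, p_blue, p_yellow):
    eu_a = p_red                 # win on red
    eu_b = p_blue                # win on blue
    eu_c = p_red + p_yellow      # win on red or yellow
    eu_d = p_blue + p_yellow     # win on blue or yellow
    return eu_a > eu_b and eu_d > eu_c

found = any(
    rationalises(r / 90, b / 90, (90 - r - b) / 90)
    for r in range(91) for b in range(91 - r)
)
print(found)  # False: a over b needs p_red > p_blue, d over c the reverse
```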

2. Alternative models

SEU has two main implications. The first is termed ‘probabilistic sophistication’ by Machina and Schmeidler (1990). This says that a decision-maker’s beliefs may be represented as a unique and additive subjective probability distribution. The second is the assumption that these probabilities enter the decision-maker’s preferences in a linear fashion, so that preferences can be represented as the expected value of a utility function. In the present paper, attention will be focused on alternatives to probabilistic sophistication.

In order to proceed beyond the Bayesian approach, we need an alternative model. The alternatives to subjective expected utility theory can be put into three broad categories. First, there are models in which subjective probabilities do not have to be unique. In these models, the decision rules are generally compromises between maximin and expected utility maximisation. Typically, in these theories, if the decision-maker has a unique subjective probability then the decision rule simplifies to maximising expected utility. Hence they contain Bayesian theory as a special case. We shall describe the models with multiple probabilities in Section 2.1. Second, there are models in which the decision-maker does not use subjective probabilities at all. In the latter models typically a lexicographic decision rule based on maximin or maximax is used. In Section 2.2 we discuss models where there is a finite set of states and the decision-maker does not have a subjective probability distribution. Recent research has also found ways to describe uncertainty without using the states-consequences framework; these models are discussed in Section 2.3 of the present paper. Finally, there are models in which subjective probabilities exist and are unique, but do not satisfy the usual mathematical properties. These ideas, based on the notion of non-additive probabilities, are discussed in Section 2.4.

One alternative to subjective expected utility maximisation which can be applied in situations of uncertainty is maximin (which has also been used as the basis of an equilibrium concept in game theory). According to maximin, an action b should be preferred to another action c if and only if the worst possible consequence of b is better than the worst possible consequence of c. Another possibility is that actions could be ranked by looking only at their best outcomes. This procedure is known as maximax. Maximin has been thought to be intuitively more plausible than maximax and consequently has received more attention in the literature.

2.1. Convex sets of probabilities

It has often been suggested that when faced with uncertainty people have beliefs which take the form of ranges or intervals of probabilities for an event. For instance a person could believe that the probability of rain tomorrow was between 1/3 and 1/2. There are problems with this approach, since it is no longer automatically the case that the probabilities of a number of mutually exclusive and exhaustive events will sum to one. For example, suppose the set S of states of nature can be partitioned into two exhaustive and mutually exclusive events E1 and E2. Suppose that the decision-maker believes that the probability of each of these events is between 1/3 and 2/3. Then it is compatible with this that both of these events have probability 2/3. In this case the sum of the probabilities is greater than one. The notion that probabilities add up to one is sufficiently important that it seems unlikely that one would be willing to abandon it for intuitions about assigning ranges of probabilities to events.

There appears to be a consensus on how this problem should be resolved. A decision-maker's beliefs should be represented by a convex set of probabilities, B. Then the probabilities in a given distribution from B will sum to one, while for any given event A, the set of all probabilities assigned to A by members of B will form an interval; this is an implication of the convexity of B. We now have a potential answer to a problem mentioned in the introduction. In these models it is possible that an individual could have beliefs of the form 'The probability that the next president of the United States will be a Democrat is between 0.5 and 0.7'.
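
As a minimal illustration (ours, with invented numbers), beliefs of this form can be represented computationally as mixtures of a few extreme distributions; each member of B sums to one, while the probability of any single event traces out an interval:

```python
# Sketch (ours, invented numbers): a convex set B over three states,
# generated as mixtures of two extreme probability distributions.

extremes = [(0.5, 0.3, 0.2), (0.7, 0.1, 0.2)]

def prob_interval(event, steps=100):
    """Smallest and largest probability assigned to an event (a set of
    state indices) by members of B."""
    probs = []
    for k in range(steps + 1):
        t = k / steps
        p = [t * a + (1 - t) * b for a, b in zip(*extremes)]
        probs.append(sum(p[s] for s in event))
    return round(min(probs), 12), round(max(probs), 12)

print(prob_interval({0}))        # (0.5, 0.7), e.g. 'a Democrat wins'
print(prob_interval({0, 1, 2}))  # (1.0, 1.0): each member still sums to one
```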

A number of models of this form have been proposed; see for instance Bewley (1986), Gilboa and Schmeidler (1989) and Levi (1980), (1986).4 All these writers model beliefs as a convex set of probabilities. However, there are differences between the decision rules they propose. Let u be the decision-maker's utility function. This is a cardinal utility function whose existence is derived axiomatically. Let B denote the decision-maker's set of probabilities and let f and g be actions whose consequences are uncertain. The expected utility of an action is no longer unique, since the expectation may be taken with respect to any of the probabilities from the set B.

Gilboa and Schmeidler (1989) show that if the decision-maker satisfies their axioms, he will prefer f to g if and only if the minimal possible value for the expected utility of f exceeds that for g. This may be expressed mathematically as

$$f \; P \; g \iff \min_{p \in B} E_p[u(f)] > \min_{p \in B} E_p[u(g)] \qquad (1)$$

where P denotes strict preference, and $E_p[u(f)]$ is the expected utility of f with respect to the probability distribution p. This procedure represents a compromise between maximin and expected utility maximisation. If the set B is small this will be like expected utility maximisation. In the extreme where B consists of only a single probability distribution, Gilboa-Schmeidler preferences are the same as subjective expected utility maximisation. As the set B becomes larger, these preferences become more like maximin. The decision rule described by (1) is referred to as maxmin expected utility maximisation. Gardenfors and Sahlin (1982) have a theory which is similar, save that they allow the set of subjective probabilities to be non-convex.

In Bewley (1986) the decision-maker is modelled as having incomplete preferences but otherwise obeys the axioms of Anscombe and Aumann (1963). The action f will be preferred to the action g if and only if f yields higher expected utility with respect to all the probability distributions in B. Formally,

$$f \; P \; g \iff E_p[u(f)] > E_p[u(g)] \quad \text{for all } p \in B. \qquad (2)$$

This decision rule is not complete, since if f gives higher expected utility with respect to some probabilities in B while g gives higher expected utility with respect to others, then f and g will not be comparable. Bewley overcomes this disadvantage by assuming that among the actions there is a 'status quo' which has a privileged position and will be chosen unless there is another action which is definitely preferred to it.

Levi (1980) defines an admissible action to be one which maximises expected utility with respect to some probability distribution in B. If there is only one admissible action then it will be chosen. In the event that more than one action is admissible he would select that admissible action for which the minimum value of expected utility is greatest. The following example illustrates Levi's theory and how it differs from that of Gilboa and Schmeidler. Suppose there are two states s1, s2 with probabilities p and (1 - p) respectively. Consider three actions a = (1, 6), b = (3, 3) and c = (6, 1), where the first and second components are utilities in states 1 and 2 respectively. Suppose that the decision maker believes 1/3 < p < 2/3. This generates a convex set of probabilities over the states. Then the minimal expected utilities of a, b and c are 8/3, 3 and 8/3 respectively. Hence an individual who obeyed the Gilboa-Schmeidler axioms would choose b. However b does not maximise expected utility with respect to any single probability distribution. Hence it is not admissible, according to Levi's definition. Thus Levi's theory prescribes that b should be rejected and the decision-maker should be indifferent between a and c. Bewley's theory says that the decision maker will not be able to compare a, b and c.

We can also compare these theories using the Ellsberg Paradox. With the notation of the introduction, Bewley's theory says that the agent will not be able to compare a and b. Yet in experiments, subjects systematically say that they prefer a to b. By contrast, the maximin-based theories of Levi, Gardenfors and Sahlin, and Gilboa and Schmeidler correctly predict behaviour in the Ellsberg Paradox. In fairness it should be said that Bewley's theory is not intended to explain the Ellsberg Paradox, but is instead the basis of an interesting theory of intertemporal decision-making.

2.2. Complete ignorance

In the previous section we described some models in which the subjective probability was not unique. By contrast we shall now describe some theories with no probabilities at all. This approach to non-Bayesian decision theory has been one of the most heavily studied. The papers by Arrow and Hurwicz (1972), Barbera and Jackson (1988), Barrett and Pattanaik (1986), Cohen and Jaffray (1980), (1983), (1985), Maskin (1979) and Toulet (1986) all deal with this approach to the problem. The complete ignorance approach retains the states-consequences framework of Savage (1954) but starts with the assumption that a decision-maker has no beliefs about the relative likelihood of the states. An important distinction between these papers and the Savage framework is that they assume that the state space is finite. This means that subjective probabilities cannot be derived by repeatedly sub-dividing events, as in Savage (1954). The decision-maker is assumed to have a utility function over the consequences, which may be ordinal or cardinal.

A distinctive feature of these models is that the set of states is not fixed. Conditions are imposed which require the decision-maker to be consistent between different sets of states. The most common of these is the merger of states axiom, which we shall now explain.

        s1   s2   s3                  s1'  s2'
  a      5    8    8          a'       5    8
  b      6    7    7          b'       6    7

The tables above describe two possible decision problems. In the first, the set of states has three members, s1, s2 and s3, while the second is defined over the two-member set of states s1', s2'. The decision-maker has to pick either action a or b in the first problem and either action a' or b' in the second problem. The numbers indicate the utility which each action will yield in any given state. Consider the pair of actions a and b. Both of these yield the same utility in state s3 as in state s2. The logic of the complete ignorance models is that the decision-maker has such a lack of information that the fact that this pair of consequences can occur in state s3 as well as s2 does not make them appear more likely than if they only occurred in s2. Having the same consequences in two states merely duplicates information. Consequently the decision-maker should make the same choice as in a second problem in which this pair of consequences could only occur in one state, but which was otherwise similar. The choice between a' and b' fits this description. For these reasons the merger of states axiom requires that the decision-maker should pick a (resp. b) in the first problem if and only if he would pick a' (resp. b') in the second problem.

The usual motivation for the merger of states axiom is that there is a certain symmetry between the two decision problems involved and hence there should be a similar symmetry between the final decisions. Barrett and Pattanaik (1986) present a different motivation for this axiom. They suggest that the two tables above can be viewed as alternative ways of presenting the same decision problem. Clearly it is desirable for normative reasons that choice should not depend on how a decision is presented; hence we should expect a to be chosen if and only if a' is chosen.

The other characteristic axiom of these models is a symmetry axiom. This says that permuting the consequences between the states will not affect the decision. The symmetry axiom on its own is compatible with subjective expected utility maximisation. So is the merger of states axiom on its own. The two together however, rule out the possibility of the decision-maker holding a unique subjective probability distribution.

The other axioms which these models use are fairly standard. There is a dominance axiom which says that if action a yields a better consequence than b in every state then a should be preferred to b. Also all of the papers require that strict preferences should be transitive. However some of them allow violations of transitivity of indifference.

The papers conclude that rational choice must imply the use of decision procedures which rely primarily on the maximum and the minimum consequences of an action. The most familiar examples of such procedures are maximin, that is, always ranking actions by their least preferred consequence, and its lexicographic extension leximin. However these are not the only possibilities. For example, the conclusion of Arrow and Hurwicz (1972) is that actions are ranked by an arbitrary function of their maximum and minimum. It must be admitted that many practical problems do not satisfy the assumptions of the complete ignorance models. Most of the researchers in this area would agree that complete ignorance is a special case, and a rather extreme one. The point of studying these models is to develop our intuitions so that we may proceed to understand less extreme cases better. In Kelsey (1986), it is shown that the conclusions do not change greatly if the assumption of complete ignorance is relaxed somewhat and the merger of states axiom is not used.
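
To make the Arrow-Hurwicz conclusion concrete, here is a small sketch (ours); the alpha-criterion, with an invented pessimism weight alpha, is one familiar function of the maximum and minimum:

```python
# Sketch (ours): ranking actions by a function of best and worst outcomes
# only, as in Arrow-Hurwicz. alpha is an invented pessimism weight;
# alpha = 1 reduces to maximin and alpha = 0 to maximax.

def hurwicz_score(action, alpha=0.7):
    return alpha * min(action) + (1 - alpha) * max(action)

acts = {"a": (5, 8, 8), "b": (6, 7, 7)}   # the merger-of-states example
choice = max(acts, key=lambda name: hurwicz_score(acts[name]))
print(choice)  # 'b': 0.7*6 + 0.3*7 = 6.3 beats 0.7*5 + 0.3*8 = 5.9
```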

2.3. Ranking of sets

There is a criticism of Bayesian decision theory which was not mentioned in the introduction. This argument says that there are important economic problems in which the decision-maker is not likely to know the states-consequences structure. In other words, the decision-maker's information is much more limited than Bayesian theory assumes. There has recently emerged a theory of uncertainty which does not use the states-consequences framework. In these models the decision-maker chooses from a set of actions, and all he knows about each action is its set of possible consequences. Not only are there no known probabilities, but the agent does not even know the states-consequences structure.

This means that the agent cannot always tell when one action dominates another. Consider the following two actions defined over a three-member set of states:

        s1   s2   s3
  a      x    x    y
  b      x    y    y

Suppose the decision-maker prefers the consequence x to the consequence y. Then action a dominates action b. But in the ranking of sets framework the decision-maker will identify both these actions with the set {x, y}. Thus dominance information is lost in this framework.

In this model there is a set X of consequences. Actions are essentially identified with subsets of X. Usually the set of actions is identified with the set of all subsets of X. These theories represent a conceptual advance, since they provide the only formal model of uncertainty which is not based on a set of states of nature. The two key assumptions in this kind of theory are the monotonicity axiom and the independence axiom. There is no generally agreed form for either of these axioms; almost every paper in this area has presented its own variants on them. Informally, independence says that the ranking of two actions must remain unchanged when the same additional outcomes are added to each of them. Monotonicity says that if we take an action and add to it an additional consequence which is more attractive than any of its existing outcomes, then this must make the action more attractive. We shall now explain the model more formally. Let X be the set of outcomes, and suppose that the decision-maker has an ordering described by the binary relation > on X. We interpret this as representing the decision-maker's preferences over the consequences, in the absence of uncertainty. In this theory an action is identified with the set of consequences which it yields. Thus an action can be represented by a subset of X. We shall denote actions by upper case roman letters A, B, C etc. The decision-maker has weak preferences over actions given by the binary relation R. We shall use P and I to denote respectively the strict preference and indifference corresponding to R.

The following axioms include two formulations each of monotonicity and independence. The first two are monotonicity axioms.

Weak Dominance If x and y are outcomes with x > y, then {x} P {x, y} P {y}.

Positive Responsiveness If for all y ∈ A, x > y (resp. y > x), then A ∪ {x} P A (resp. A P A ∪ {x}).

Independence If A P B and A ∩ C = B ∩ C = ∅, then A ∪ C P B ∪ C.

Weak Independence If A P B and A ∩ C = B ∩ C = ∅, then A ∪ C R B ∪ C.

The weak dominance principle says that if there are two possible outcomes, then the better outcome for sure should be preferred to receiving either outcome by some unknown process, which is in turn preferred to receiving the worst outcome for sure. The weak dominance principle is implied by positive responsiveness, which says that if we take an action and add a new consequence which is better than all the existing consequences, this is a definite improvement. The independence axioms are compatible with the spirit of the sure-thing principle of Savage (1954). They say that if one action is preferred to another and the same additional outcomes are added to both actions then the preferences between the resulting actions should be unchanged.

The conclusions of these theories have been somewhat disappointing. Two impossibility theorems have been proved. Kannai and Peleg (1984) have shown that there is no transitive preference over actions which satisfies positive responsiveness and weak independence, while Barbera and Pattanaik (1984) show that there is no binary relation on actions satisfying both the weak dominance principle and independence. There are rules satisfying both the weak dominance principle and weak independence; Barbera, Barrett and Pattanaik (1984) show that these must rank actions by some function of their best and worst outcomes.

The bulk of research on this model has either discovered an impossibility result or has shown that the decision rule must be some variant of maximin or maximax. Thus Fishburn (1984) and Pattanaik and Peleg (1984) axiomatise variants of maximin and leximax while Heiner and Packard (1984) have characterised a lexicographic version of maximax. One exception can be found in Nitzan and Pattanaik (1984), where the decision rule is based on the median.
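
The flavour of these rules is easy to convey computationally. The sketch below (ours) codes consequences as numbers, higher preferred, and ranks sets by worst element, best element, or a median, the last in the spirit of Nitzan and Pattanaik (1984):

```python
# Sketch (ours): three ways of ranking actions identified with sets of
# consequences (coded as numbers, higher preferred).

def by_worst(A):                 # maximin over sets
    return min(A)

def by_best(A):                  # maximax over sets
    return max(A)

def by_median(A):                # a median-based rule (upper median)
    s = sorted(A)
    return s[len(s) // 2]

actions = [{1, 5, 9}, {4, 5, 6}, {2, 8}]
for rule in (by_worst, by_best, by_median):
    print(rule.__name__, max(actions, key=rule))
# by_worst picks {4, 5, 6}; by_best picks {1, 5, 9};
# by_median picks {2, 8} (the medians are 5, 5 and 8).
```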

2.4. Non-additive probabilities

The exclusive concentration on extreme outcomes inherent in the maximin and Arrow-Hurwicz approaches is disturbing. It can be argued that it would be better to permit the decision-maker to place a high weight on extreme outcomes and a lower but still positive weight on intermediate outcomes. This can be done using the notion of non-additive probabilities (Schmeidler 1989, Gilboa 1987). This approach is closely related to the Rank-Dependent Expected Utility (RDEU) model (Quiggin 1982, Allais 1986, Yaari 1987) which has been used by Segal (1987) to analyse the Ellsberg paradox.

The basic notion of integration with respect to a non-additive measure was developed by Choquet (1953-4). A non-additive (but monotonic) measure is a function μ on a σ-field of subsets of some space X such that for disjoint subsets X1, X2 we have

(a) $\mu(X_1 \cup X_2) \ge \mu(X_1)$

but not necessarily

(b) $\mu(X_1 \cup X_2) = \mu(X_1) + \mu(X_2)$.

The model of Gilboa (1987) and Schmeidler (1989) is simplest in the case where there is a finite set of possible states of the world s1, ..., sn. Assume without loss of generality that for a given prospect the states are numbered so that x1 ≤ ... ≤ xn. Then the utility of the prospect is given by

$$U = \sum_{i=1}^{n} u(x_i)\left[\mu(\{s_1, \ldots, s_i\}) - \mu(\{s_1, \ldots, s_{i-1}\})\right] \qquad (3)$$

where μ(∅) = 0. Except that the representation is expressed in terms of a measure rather than a transformation of probabilities, this representation is the same as that proposed for the RDEU model by Quiggin (1982). More general non-additive measures are discussed by Gilboa (1987). Wakker (1989) shows that, provided first-order stochastic dominance is preserved, all non-additive measures are consistent with RDEU. For the finite-state case, the maximin and Arrow-Hurwicz models are included as special cases.

Equation (3) may be interpreted by regarding the non-additive measure μ as a weighting function. It means that the weight on state i is the difference between the total weight of all the states up to and including i and the total weight of all the states worse than i. Whereas in expected utility theory the weight on state i is simply p(si), in the non-additive models the weight depends on the ranking of state i.
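
A short sketch (ours) makes the weighting scheme in (3) concrete; the capacity values are invented, and states are pre-sorted from worst to best as in the text:

```python
# Sketch (ours): the utility (3) of a prospect under a non-additive
# measure mu, encoded as a dict over frozensets of state indices.

def choquet_utility(utils, mu):
    """utils: state utilities sorted worst to best (x1 <= ... <= xn)."""
    total = 0.0
    for i in range(1, len(utils) + 1):
        up_to_i = frozenset(range(i))        # states s1, ..., si
        worse = frozenset(range(i - 1))      # states worse than si
        weight = mu[up_to_i] - mu[worse]     # decision weight on state si
        total += utils[i - 1] * weight
    return total

# A non-additive example (invented): the capacity overweights the lower
# tail, so the worse state gets decision weight 0.6 and the better 0.4;
# with an additive mu (0.5 each) this would collapse to expected utility.
mu = {frozenset(): 0.0, frozenset({0}): 0.6, frozenset({0, 1}): 1.0}
print(choquet_utility([1.0, 6.0], mu))  # 0.6*1 + 0.4*6 = 3.0
```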

The relationship between non-additive probabilities and the RDEU model may be clarified further using the notion of comonotonicity. The axiomatic approach taken by Gilboa and Schmeidler weakens the Sure Thing Principle by introducing the notion of comonotonicity. Let an act be represented as a mapping y from some state space Ω to a space of outcomes X. Then two acts y1, y2 are comonotonic if and only if $y_1(\omega_1) \ge y_1(\omega_2) \iff y_2(\omega_1) \ge y_2(\omega_2)$ for all $\omega_1, \omega_2 \in \Omega$. That is, for a set of comonotonic acts, the states of the world may be unambiguously ranked from worst to best. Note that if all acts are comonotonic, it is possible to regard (3) as an additive representation in which the second term is simply the subjective probability of state i.

The approach taken by Schmeidler and Gilboa is to replace the usual Savage sure-thing principle with an axiom which applies only to comonotonic acts. This axiom is of the form:

Suppose that y1 and y2 are comonotonic, that y1 = y2 on an event E with y1 I y2, and that y1* and y2* are such that

$$y_1^* = y_2^* \ \text{on}\ E, \qquad y_1^* = y_1,\ y_2^* = y_2 \ \text{elsewhere.} \qquad (4)$$

Then y1* I y2*. In conjunction with an additional, more technical axiom of comonotonic consistency, this yields the representation (3).

In the case of RDEU it is assumed that the probabilities are known and hence that we may define equiprobable states. A permutation of the outcomes of equiprobable states does not affect the valuation of any prospect. Hence it is possible to choose a 'canonical' permutation such that all prospects are comonotonic with outcomes ordered from worst to best. If this is done, then (4) becomes the axiom of ordinal independence analysed by Green and Jullien (1988) in the RDEU context.

The notion of comonotonicity may be motivated by observing that when prospects are comonotonic a probability mixture between the two does not involve any 'hedging', that is, any opportunity to average good and bad outcomes between the two prospects. In the RDEU context, such a mixture is essentially an averaging of the cumulative distribution functions. When prospects are non-comonotonic, the possibility of hedging may provide an explanation for violations of the sure-thing principle.

In essence then, the Schmeidler-Gilboa analysis corresponds exactly to the RDEU model, but does not involve the use of known subjective probabilities. Thus, analysis undertaken using the RDEU model with known subjective probabilities may easily be translated into the case where subjective probabilities are not known. An example is Segal’s analysis of the Ellsberg paradox.

Segal observes that the rank-dependent model can be derived either by modifying the Independence Axiom or by retaining the Independence Axiom but abandoning the Reduction of Compound Lotteries Axiom, that is, the axiom that a two-stage lottery is evaluated by using standard probability calculations to derive the ultimate probability of each outcome. This axiom has not been the subject of controversy, and many authors have not even given it the status of an axiom, but have simply made it implicit in their notation. However, once notions such as non-additive probabilities are brought into consideration, it is natural to re-evaluate this axiom. Segal shows that, if decision-makers are risk-averse in a natural RDEU sense, they will always prefer unambiguous versions of a given lottery to ambiguous ones. This presents a potential resolution of the Ellsberg paradox.
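
For concreteness, this is all the reduction axiom asserts; the sketch below (ours) collapses a two-stage lottery by multiplying through the stage probabilities, which is exactly the step that turns an 'uncertain probability' back into a single number:

```python
# Sketch (ours): the Reduction of Compound Lotteries axiom as a computation.
from collections import defaultdict

def reduce_compound(two_stage):
    """two_stage: list of (prob, simple_lottery); a simple lottery is a
    list of (prob, outcome). Returns the reduced one-stage lottery."""
    reduced = defaultdict(float)
    for p1, lottery in two_stage:
        for p2, outcome in lottery:
            reduced[outcome] += p1 * p2
    return dict(reduced)

# Ambiguity about an urn, modelled as a 50/50 lottery over two possible
# compositions, reduces to a single 'known' lottery -- which is why
# dropping this axiom is what lets ambiguity matter:
ambiguous = [(0.5, [(0.6, "win"), (0.4, "lose")]),
             (0.5, [(0.4, "win"), (0.6, "lose")])]
print(reduce_compound(ambiguous))  # {'win': 0.5, 'lose': 0.5}
```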

2.5. Dutch book arguments

The strongest normative justification for SEU is based on ‘Dutch book’ arguments. A Dutch book is a sequence of bets with a non-positive outcome in every state of the world, and a negative outcome in at least one state. De Finetti (1974) showed that an individual who did not act on the basis of unique and additive subjective probabilities could be induced to accept a Dutch book. This is a normatively undesirable property of a decision rule. The individual has effectively handed over money to the person offering the Dutch book. Freedman and Purves (1969) developed a similar argument in a multi-period context. Their conclusion was that, unless a decision-maker had unique and additive subjective probabilities and revised them by Bayesian updating as new information is acquired, he or she could be the victim of a Dutch book.
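
A toy version of de Finetti's construction (ours, with invented numbers) illustrates the mechanism; note that it relies on the agent's willingness to take both bets at prices equal to his stated beliefs, the assumption questioned in the next paragraph:

```python
# Sketch (ours): a Dutch book against beliefs that sum to more than one.
# The agent prices a bet paying `stake` if event E occurs at belief * stake
# and is, by assumption, willing to buy at that price.

belief_A, belief_notA = 0.75, 0.5      # non-additive: 0.75 + 0.5 > 1
stake = 100.0

cost = belief_A * stake + belief_notA * stake   # agent pays 125 in total
for state in ("A occurs", "A does not occur"):
    payoff = stake                     # exactly one of the two bets pays off
    print(state, payoff - cost)        # -25.0 in every state: a Dutch book
```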

However, these arguments depend on the assumption that a decision maker will be prepared to bet either for or against any given event at given odds. The non-additive probability theory of Schmeidler and Gilboa (including polar cases such as the maximin and Arrow-Hurwicz models) does not possess this property. As has already been shown, preferences may be risk-averse, even in a neighbourhood of certainty, implying that for some bets individuals will refuse both sides of the bet. In the corresponding RDEU model under risk, Green (1987) shows that, when preferences are quasi-convex, Dutch book problems will not arise, provided the bookmaker does not know the probability distribution of the endowment.

Border and Segal (1990) investigate the possibility of constructing Dutch books within the rank-dependent model. Their model is based on the assumption that a bookmaker chooses odds strategically in order to maximise profit. It is shown that if the customers satisfy RDEU, it may be profitable for the bookmaker to set non-additive odds.

In summary, it does not seem that Dutch book arguments provide sufficient grounds for rejecting the models discussed here. On the other hand, the theories discussed in the present paper do not presently have generally accepted dynamic extensions. There is scope for further research aimed at applying these models to multi-period decision problems.

2.6. Related research

A related area of research is that on 'upper and lower probabilities' by Dempster (1966), (1967), (1968). This provided a foundation for a theory of belief functions due to Shafer (1976), (1982a), (1982b). Strictly speaking these papers are outside the scope of the present paper, since they do not deal with decision theory. They are concerned with representations of beliefs and have been used to construct alternatives to classical statistical inference (Dempster, 1966; Shafer, 1982a) and in a theory of the reliability of evidence presented before the courts (Shafer, 1982b). However these theories relate to the research discussed in this paper, since they can be seen as a special case of non-additive probabilities and are also compatible with representing beliefs as a convex set of probabilities. Jaffray (1989) has proposed a decision theory based on belief functions. Another related area is the theory of fuzzy sets and fuzzy measures due to Zadeh (1965). Wakker (1989) shows that fuzzy measures can be interpreted as non-additive probabilities.

3. Examples

Much of the appeal of expected utility theory is due to the extent to which it has been applied. There is a significant literature which shows that it is useful in both theoretical and practical problems involving uncertainty. The purpose of this section is to give some examples of circumstances in which non-Bayesian theories may be applied.

3.1. Applications in finance and insurance

In this section, we outline three papers which apply the RDEU model or non-additive probabilities to insurance and financial markets. As indicated above, these two models are sufficiently similar that results derived in one context are likely to apply in the other.

Kelsey and Milne (1990) show that the arbitrage pricing theorem (APT) of finance theory will remain true with either non-additive probabilities or RDEU preferences. The APT is a central result in financial theory; the Modigliani-Miller theorem on the irrelevance of the capital structure of the firm and the capital asset pricing model are both special cases. The Black-Scholes option pricing formula depends on a multi-period extension of the APT. This is not covered by Kelsey and Milne, but it seems that such an extension should be possible. Thus, Kelsey and Milne show that many results of finance originally proved using SEU will remain true if SEU is replaced with modern theories of decision-making under uncertainty.

A second class of results relates to the SEU property of risk-neutrality in a neighbourhood of certainty. An individual who obeys SEU and has a differentiable utility function will behave as if risk-neutral in relation to small risks if initial wealth is certain (or uncorrelated with the new risk). Hence such an individual, in the absence of transactions costs, would either buy or short sell some of any financial asset, depending on whether her subjective probabilities are favourable or unfavourable. This is true even if the individual is risk-averse. Risk aversion would affect the size of asset holdings, but not whether they are non-zero.
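
This second-order property is easy to see numerically; the following sketch (ours, with log utility and invented numbers) shows the risk premium of a small symmetric bet vanishing like the square of its size:

```python
# Sketch (ours): under SEU with differentiable utility, the risk premium
# of a bet of size t (win or lose t with probability 1/2) is O(t**2),
# so at favourable odds some small position is always worth taking.
import math

def risk_premium(w0, t):
    eu = 0.5 * math.log(w0 + t) + 0.5 * math.log(w0 - t)
    return w0 - math.exp(eu)           # amount paid to shed the risk

for t in (10.0, 1.0, 0.1):
    print(t, risk_premium(1000.0, t))  # shrinks ~100-fold as t falls 10-fold
```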

Dow and Werlang (1990) have shown that non-additive subjective probability models provide a richer and more plausible account of asset holding behaviour. In addition to non-additive probabilities, Dow and Werlang assume uncertainty aversion of the type illustrated by the Ellsberg problem. With these assumptions, it is shown that, for any financial asset there is an interval of prices such that an investor will not wish either to buy or to short sell the asset whenever the price lies within the interval.

Non-participation in financial markets could be explained in other ways, particularly by transactions costs. It is not easy in practice to separate the influence of these various effects on decision-making. It seems plausible, however, that even if transactions costs could be eliminated, most individuals would still have zero holdings of most assets.

Segal and Spivak (1990) adopt a related line of argument in relation to insurance decisions. It follows from the local risk-neutrality properties of SEU that it is never rational to purchase full insurance if the premium exceeds the actuarially fair rate. Segal and Spivak show that individuals with RDEU preferences will not be locally risk neutral, and under plausible conditions will be averse to arbitrarily small risks. Consequently such individuals may purchase full insurance at actuarially unfair terms. Evidence from actual insurance decisions seems to support this prediction.

In summary, recent research applying the models described in this survey to financial markets has shown that many of the core results of traditional asset pricing models are still valid. On the other hand, it is possible to explain certain observed phenomena which are inconsistent with SEU.

3.2. Lotteries

Gambling provided the inspiration for the development of probability theory. It is embarrassing, therefore, that Bayesian decision theory cannot provide an adequate analysis of gambling decisions. As its title indicates, Dubins and Savage's (1965) work, How to Gamble If You Must, represents an attempt to bring Bayesian concepts to bear while taking the irrational decision to participate as imposed ex ante. But this only increases the difficulties. Not only does it point up the fact that gambling is an example of the type of decision which ought to fall within the Bayesian framework, but the suggestions offered by Dubins and Savage are sharply at variance with actual gambling behaviour. In this section, we will explain why Bayesian decision theory cannot account for lottery ticket purchases. It is our opinion that similar problems arise with other kinds of gambling and risk-taking behaviour.

It is useful to focus on the case of lottery gambling because the nature of the gamble may be clearly specified and extraneous elements, such as the excitement of casino and racetrack gambling, are removed. The only plausible reason for participation in such a pure gamble, and the reason actually advanced by participants, is the chance of winning a large amount of money (Walker 1984).

Two possible explanations of lottery gambling (and risk-taking behaviour generally) can be attempted within the Bayesian decision theory framework. The first and most popular is to assume that the utility function is convex in wealth over at least some range, that is, that the marginal utility of money is increasing. Friedman and Savage (1948) attempted to explain the coexistence of gambling and insurance using the concept of an S-shaped utility function. The intuitive justification for increasing marginal utility of wealth is that people may be risk-seeking with respect to prospects which may take them into a higher social class. Thus, the utility function may be regarded as concave for low income levels and convex for some higher incomes. However, as has been pointed out by Machina (1983b), the Friedman-Savage model implies radical changes in behaviour as wealth changes. In particular, anyone whose wealth lies in the convex segment of the curve would be willing to bet large amounts on the toss of a fair coin. As will be shown below, even for people whose wealth lies in the concave segment, the Friedman-Savage model predicts 'pathological' gambling behaviour.

The alternative, more relevant to the current paper, is to assume that gamblers misperceive the odds they face. At first blush, this suggestion seems appealing. Very few participants could give an accurate estimate of the probability of purchasing a winning ticket in a lottery, and personal experience is not likely to be of much assistance. Moreover the advertising for lotteries, which naturally focuses on the reactions of winners, is obviously designed to make the winning chance seem more real, and presumably less improbable. Alternatively, the misperception may be individual in the sense that people regard the lottery as actuarially unfair, but believe themselves to be lucky. Nevertheless, the application of Bayesian decision theory leads to startling results. In particular, whereas most people hold only one or a few tickets at any given time, Bayesian decision theory predicts that risk-neutral individuals who believe the lottery to be favourable should be willing to buy all the tickets. The introduction of concave utility does not repair the difficulties. If individuals are willing to buy any tickets, they should continue buying tickets until they have reduced their wealth (in the absence of winnings) sufficiently to raise the marginal disutility of further purchases to be equal to the expected utility of the winning prize. Having reached this level, any unanticipated changes in wealth will have no effects on any consumption decisions apart from changing the number of lottery tickets purchased.

This can be seen by the following argument. Let the individual's wealth before any lottery tickets are purchased be denoted w0. (It may be assumed for convenience that w0 is a known constant. Provided the range of possible outcomes is small, the introduction of some uncertainty into the initial position will not affect the argument.) Now let p be the perceived probability of any given ticket winning the prize, x the lottery prize, δ the cost of a ticket and k the number of tickets purchased.

Then the decision problem is

$$\max_k \; kp\,U(w_0 - k\delta + x) + (1 - kp)\,U(w_0 - k\delta) \qquad (5)$$

The equilibrium solution must set the expected marginal utility of wealth equal to $p\,\Delta U/\delta$, where ΔU is the utility difference between the winning and losing states. Assume that the probability of winning is small enough that the expected marginal utility of wealth may be approximated by marginal utility in the case where the prize is not won.6 Then the solution to (5) will yield a fixed value of w0 - kδ, the wealth remaining after purchases of lottery tickets. Thus, a Bayesian decision maker who believes that the lottery is actuarially favourable will set lottery purchases so as to maintain w0 - kδ at the level maximising (5), and will adjust to unanticipated changes in wealth solely by changing the number of tickets purchased.
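
A numerical sketch (ours) of problem (5) with invented parameters — square-root utility, a unit ticket price, a large prize and a perceived probability generous enough to make the lottery look favourable — illustrates the point:

```python
# Sketch (ours, invented parameters): solve (5) by grid search over k.

def U(w):
    return 2 * w ** 0.5                    # concave utility

x, delta, p = 1_000_000.0, 1.0, 2e-4       # prize, ticket price, perceived prob.

def optimal_tickets(w0):
    def V(k):                              # the objective in (5)
        r = w0 - k * delta                 # wealth left after buying tickets
        return k * p * U(r + x) + (1 - k * p) * U(r)
    return max(range(int(w0 / delta)), key=V)

for w0 in (1000.0, 2000.0):
    k = optimal_tickets(w0)
    print(w0, k, w0 - k * delta)
# Residual wealth w0 - k*delta stays small and roughly constant, so an
# unanticipated rise in wealth goes almost entirely into extra tickets.
```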

The alternative models presented here provide a far more satisfactory model of lottery ticket purchases. First, consider the Arrow-Hurwicz models in which the objective is to maximise some function of the minimum and maximum possible outcomes. Provided the weight on the maximum outcome is not trivially small, the optimal strategy will be to buy a single lottery ticket. This model is far more satisfactory than the Bayesian model, but it still fails to account for observed behaviour. First, many people buy more than one ticket at a time, though not nearly as many as the Bayesian model would suggest. Second, it suggests that the most preferred lottery would have a single large prize whereas most real-world lotteries have multiple prizes.

These observations may be accounted for using a non-additive probability model. Quiggin (1990) presents a model of lottery ticket demand using the rank- dependent model which accounts for the existence of multiple prizes and permits multiple ticket purchases. While Quiggin’s formulation assumes that the probability of a winning ticket is known, it is apparent from the discussion above that an equivalent analysis could be undertaken using the non-additive probability model of Schmeidler and Gilboa.

Much of the analysis presented here for the pure lottery case could be carried over to the analysis of portfolio choice and production theory, which have so far been treated only under the assumption of risk aversion or risk neutrality. This raises the possibility of a rigorous treatment of risk-taking behaviour which may account for some of the observed empirical violations of expected utility theory.

4. Concluding Comments

It is usual to distinguish between normative and descriptive theories of decision-making. Descriptive theories are proposed in an attempt to model how people actually make decisions. Normative theories prescribe how decisions ought to be made. Since the theories in the present paper are mainly motivated by empirical failures of SEU, it is relatively easy to motivate their use as descriptive theories. We should like to argue that they also have some plausibility as normative theories.

An individual whose behaviour is described by one of the theories presented above expresses a preference for situations in which the relevant probabilities are more precisely defined. Given that an individual has such preferences, is it irrational to act in accordance with them? One normative justification for SEU is that it has been derived from an appealing set of axioms. The models presented here satisfy the same axioms with the exception of Savage’s sure thing principle. In the case of non-additive probability models, the sure-thing principle is weakened by requiring only that it apply to comonotonic acts. Thus, the argument that the SEU model is more rational than non-additive probabilities turns on the claim that the sure-thing principle should apply to non-comonotonic as well as to comonotonic acts. The other main normative justification for SEU is the Dutch Book argument, which appears in a variety of guises. We have argued in Section 2.5 that the Dutch Book arguments that have been advanced so far do not provide sufficient grounds for rejecting the models considered here.

We have surveyed several new approaches to the theory of choice under uncertainty. In all of these approaches the maximum and minimum value of an action receive special weight. We have also described two examples which appear to fit into this pattern. It is clear that there are circumstances in which applying maximin will lead to unpalatable conclusions. However these can be overcome if we consider some compromise between maximin and expected utility theory such as non-additive probabilities or maximin expected utility with a convex set of probabilities.

There are a number of important directions for future research. The paper has largely been concerned with the analysis of a single decision problem. As Hey (1985) has emphasised, an important aspect of uncertainty is learning as a series of decisions is made over time. There does not appear to be a natural way to model learning in either the ranking of sets or the complete ignorance models. In contrast it is easy to model learning within theories which allow non-unique subjective probabilities. One set B of subjective probabilities could be replaced by a new set B* where every individual probability distribution has been revised by Bayesian updating. The other theories seem to be less able to cope with learning, because of problems of dynamic inconsistency, although a number of solutions to this problem are being considered (see, for example, Karni and Safra, 1988).

A related area for future research is the development of dynamic extensions of the static models presented above. In addition to taking account of learning over time, it is necessary to incorporate the existence of multi-period objective functions. The passage of time must be analysed both in terms of sequential consumption levels and sequential revelation of information. These two processes may proceed on very different time scales.

Perhaps the most important challenge is to link the theoretical developments described above with the fundamental economic problems of uncertainty referred to in the introduction. The development of rigorous theories of choice under uncertainty has important implications for both micro and macro theory.

Acknowledgements

Research supported in part by the ESRC post-doctoral fellowship scheme. This paper draws in part from discussions with Prasanta Pattanaik. We would like to thank Paul Anand, Nick Baigent, Salvador Barbera, Lilli Basile, Itzhak Gilboa, Frank Hahn, Geoff Harcourt, Mark Machina, Frank Milne, Robin Pope, Peter Read and two anonymous referees for helpful comments.

Notes

1. Note that the term ‘risk’ here takes its normal English meaning, ‘the possibility of an adverse event’, as opposed to the technical meaning, ‘a situation in which choice must be made on the basis of known probabilities’.

2. A number of generalisations of the expected utility model (Machina 1982, Chew 1983, Quiggin 1982) modify the assumption that the expected value of a fixed utility function is maximised. For example, in Machina's model the individual maximises a non-linear preference functional which may be approximated by a linear expected utility functional in a neighbourhood of any given distribution. These generalisations are confined to problems of risk and will not be examined here. However some formal similarities with non-additive probability models are discussed in section 2.4.

3. The Ellsberg paradox is not compatible with subjective expected utility maximisation. Further, if an individual exhibits the preferences expressed in the Ellsberg paradox, then subjective probabilities cannot be assigned to that individual in the manner described by Savage (1954).
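The incompatibility can be checked mechanically for the standard three-colour version of the paradox (30 red balls and 60 black or yellow balls in unknown proportion, with a fixed prize for naming a drawn colour). The brute-force search below over candidate subjective probabilities is only an illustration of the argument, not a proof technique used in the paper.

```python
# The standard three-colour Ellsberg urn: typical subjects prefer betting on
# red to betting on black, yet prefer betting on black-or-yellow to betting
# on red-or-yellow. A grid search confirms that no additive subjective
# probability rationalises both choices.

def rationalises(p_red, p_black, p_yellow):
    # For bets paying a fixed prize or nothing, SEU with any increasing
    # utility reduces to comparing the probabilities of winning.
    return p_red > p_black and (p_black + p_yellow) > (p_red + p_yellow)

found = False
n = 100
for r in range(n + 1):
    for b in range(n + 1 - r):
        y = n - r - b
        if rationalises(r / n, b / n, y / n):
            found = True
print(found)   # False: p_red > p_black contradicts p_black > p_red
```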

4. Dreze (1986) presents an axiomatic theory in which a decision maker has a convex set of subjective probabilities. The motivation for his result is different, being based on the possibility of moral hazard rather than uncertainty aversion.

5. Variations on this theme, expressed in terms of indivisibilities in expenditure, are developed by Flemming (1969), Kim (1973) and Ng (1965).

6. This may be seen as follows. The first-order condition on $k$ is

$$p\left[U(w_0 - k\delta + x) - U(w_0 - k\delta)\right] - \delta\left[kpU'(w_0 - k\delta + x) + (1 - kp)U'(w_0 - k\delta)\right] = 0$$

or

$$p\,\Delta U/\delta = kpU'(w_0 - k\delta + x) + (1 - kp)U'(w_0 - k\delta) = E[U'],$$

where $\Delta U = U(w_0 - k\delta + x) - U(w_0 - k\delta)$. Since $kp$ is normally small, the RHS may be approximated by $U'(w_0 - k\delta)$.
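As a rough numerical check of this approximation, with log utility and parameter values chosen purely for illustration:

```python
# Numerical illustration of note 6 (hypothetical parameters): when kp is
# small, E[U'] = kp*U'(w0 - k*delta + x) + (1 - kp)*U'(w0 - k*delta)
# is well approximated by U'(w0 - k*delta).

w0, delta, p, x, k = 100.0, 1.0, 1e-4, 5000.0, 10.0   # so kp = 0.001

Up = lambda w: 1.0 / w   # marginal utility of log utility, for illustration

win, lose = w0 - k * delta + x, w0 - k * delta
E_Up = k * p * Up(win) + (1 - k * p) * Up(lose)

print(E_Up)       # 0.011100...
print(Up(lose))   # 0.011111...: relative error about 0.1% when kp = 0.001
```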


References

Allais, M. (1986) The general theory of random choices in relation to the invariant cardinal utility function and the specific probability function, paper presented at the 3rd International Conference on the Foundations and Applications of Utility, Risk and Decision Theories, Aix-en-Provence, June 1979.
Anscombe, F. J. and Aumann, R. J. (1963) A definition of subjective probability, Annals of Mathematical Statistics 34, 199-205.
Arrow, K. J. and Hurwicz, L. (1972) Optimality criterion for decision-making under ignorance, in C. F. Carter and J. L. Ford (ed.), Uncertainty and Expectations in Economics.
Barbera, S. and Pattanaik, P. K. (1984) Extending an order on a set to the power set: some remarks on Kannai and Peleg's approach, Journal of Economic Theory 32, 185-191.
Barbera, S. and Jackson, M. (1988) Maximin, leximin and the protective criterion, Journal of Economic Theory 46, 34-44.
Barbera, S., Barrett, C. R. and Pattanaik, P. K. (1984) On some axioms for ranking sets of alternatives, Journal of Economic Theory 32, 301-308.
Barrett, C. R. and Pattanaik, P. K. (1986) Decision making under complete uncertainty: the ordinal framework, in S. Sen (ed.), Uncertainty in Economic Life.
Bewley, T. F. (1986) Knightian decision theory, Part I, Cowles Foundation Discussion Paper No. 807.
Border, K. and Segal, U. (1990) Dutch Book arguments and subjective probability, working paper.
Chew, S. (1983) A generalisation of the quasilinear mean with applications to the measurement of income inequality and decision theory resolving the Allais paradox, Econometrica 51(4), 1065-92.
Chick, V. (1983) Macroeconomics After Keynes: A Reconsideration of the General Theory, Philip Allan, Oxford.
Choquet, G. (1953-4) Theory of capacities, Annales de l'Institut Fourier 5, 131-295.
Cohen, M. and Jaffray, J. Y. (1980) Rational behaviour under complete ignorance, Econometrica 48, 1281-1299.
Cohen, M. and Jaffray, J. Y. (1983) Approximations of rational criteria under complete ignorance and the independence axiom, Theory and Decision 15, 121-150.
Cohen, M. and Jaffray, J. Y. (1985) Decision making in a case of mixed uncertainty: a normative model, Journal of Mathematical Psychology 29, 428-442.
de Finetti, B. (1974) Theory of Probability, Wiley, N.Y.
Dempster, A. P. (1966) New methods for reasoning towards posterior distributions based on sample data, Annals of Mathematical Statistics 37, 355-74.
Dempster, A. P. (1967) Upper and lower probabilities induced by a multivariate mapping, Annals of Mathematical Statistics 38, 325-339.
Dempster, A. P. (1968) Upper and lower probabilities generated by a random closed interval, Annals of Mathematical Statistics 39, 957-966.
Dow, J. and Werlang, S. R. C. (1990) Uncertainty aversion and the optimal choice of portfolio, Econometrica, forthcoming.
Dreze, J. (1986) Moral expectation with moral hazard, in W. Hildenbrand and A. Mas-Colell (ed.), Contributions to Mathematical Economics.
Dubins, L. E. and Savage, L. J. (1965) How To Gamble If You Must: Inequalities for Stochastic Processes, McGraw-Hill, N.Y.
Ellsberg, D. (1961) Risk, ambiguity and the Savage axioms, Quarterly Journal of Economics 75(4), 643-69, reprinted in Gardenfors and Sahlin (1988).
Fishburn, P. C. (1984) Comment on the Kannai-Peleg impossibility theorem for extending orders, Journal of Economic Theory 32, 176-179.
Flemming, J. (1969) The utility of wealth and the utility of windfalls, Review of Economic Studies 36(1), 55-66.
Freedman, D. A. and Purves, R. A. (1969) Bayes' method for bookies, Annals of Mathematical Statistics 40, 1177-86.
Friedman, M. and Savage, L. J. (1948) The utility analysis of choices involving risk, Journal of Political Economy 56(4), 279-304.
Gardenfors, P. and Sahlin, N. E. (1982) Unreliable probabilities, risk taking and decision making, Synthese 53, 361-386, reprinted in Gardenfors and Sahlin (1988).
Gardenfors, P. and Sahlin, N. E. (1988) Decision, Probability and Utility, Cambridge University Press, Cambridge.
Gilboa, I. (1987) Expected utility with purely subjective non-additive probabilities, Journal of Mathematical Economics 16(1), 65-88.
Gilboa, I. and Schmeidler, D. (1989) Maximin expected utility with a non-unique prior, Journal of Mathematical Economics 18, 141-53.
Green, J. (1987) Making book against oneself: the independence axiom and non-linear utility theory, Quarterly Journal of Economics 102, 785-96.
Green, J. and Jullien, B. (1988) Ordinal independence in non-linear utility theory, Journal of Risk and Uncertainty 1(4), 355-88.
Hammond, P. J. (1983) Ex-post optimality as a dynamically consistent objective for collective choice under uncertainty, in Pattanaik and Salles (ed.), Social Choice and Welfare, North Holland, Amsterdam.
Harsanyi, J. C. (1953) Cardinal utility in welfare economics and in the theory of risk taking, Journal of Political Economy 61, 434-435.
Hayek, F. (1937) Economics and knowledge, Economica NS 4(1), 33-54.
Hayek, F. (1945) The use of knowledge in society, American Economic Review 35(3), 519-30.
Hayek, F. (1960) The Constitution of Liberty, Routledge and Kegan Paul, London.
Heiner, R. and Packard, D. J. (1984) A uniqueness result for extending orders with application to collective choice as inconsistency resolution, Journal of Economic Theory 32, 180-184.
Hey, J. (1985) Whither uncertainty?, Economic Journal 95 (Conference papers), 130-39.
Hicks, J. R. (1937) Mr. Keynes and the 'classics': a suggested interpretation, Econometrica 5(2), 147-59.
Jaffray, J. Y. (1989) Linear utility theory for belief functions, Operations Research Letters 8, 107-112.
Kannai, Y. and Peleg, B. (1984) A note on the extension of an order on a set to the power set, Journal of Economic Theory 32, 172-175.
Karni, E. and Safra, Z. (1988) Ascending bid auctions with behaviorally consistent bidders, Working Paper 1-88, Foerder Institute for Economic Research, Tel-Aviv University, Ramat Aviv, Israel.
Kelsey, D. (1986) Choice under partial uncertainty, Cambridge Economic Theory Discussion Paper No. 93.
Kelsey, D. and Milne, F. (1990) The arbitrage pricing theorem with non-expected utility preferences, Dept. of Economics, Australian National University, Working Paper No. 217.
Keynes, J. M. (1920) A Treatise on Probability, MacMillan, London.
Keynes, J. M. (1936) The General Theory of Employment, Interest and Money, MacMillan, London.
Kim, Y. (1973) Choice in the lottery-insurance situation: augmented income approach, Quarterly Journal of Economics 87, 148-56.
Kirzner, I. (1979) Perception, Opportunity and Profit, Houghton Mifflin, New York.
Knight, F. (1921) Risk, Uncertainty and Profit, Houghton Mifflin, New York.
Lachmann, L. (1977) Capital, Expectations and the Market Process, Sheed, Andrews & McMeel, Kansas City.
Lawson, T. (1985) Uncertainty and economic analysis, Economic Journal 95, 909-927.
Levi, I. (1980) The Enterprise of Knowledge, MIT Press, Cambridge.
Levi, I. (1986) The paradoxes of Allais and Ellsberg, Economics and Philosophy 2, 23-53.
Machina, M. (1982) 'Expected Utility' analysis without the independence axiom, Econometrica 50(2), 277-323.
Machina, M. (1983a) Generalised expected utility analysis and the nature of observed violations of the independence axiom, in Stigum, B. and Wenstop, F. (ed.), Foundations of Utility and Risk Theory with Applications, Reidel, Dordrecht.
Machina, M. (1983b) The economic theory of individual behaviour toward risk: theory, evidence and new directions, Technical Report No. 433, Centre for Research on Organisational Efficiency, Stanford.
Machina, M. and Schmeidler, D. (1990) A more robust definition of subjective probability, UCSD Discussion Paper 90-29.
Maskin, E. (1979) Decision making under ignorance with implications for social choice, Theory and Decision 11, 319-37.
Minsky, H. (1975) John Maynard Keynes, MacMillan, London.
Ng, Y. K. (1965) Why do people buy lottery tickets?: choices involving risk and the indivisibility of expenditure, Journal of Political Economy 73, 530-35.
Nitzan, S. I. and Pattanaik, P. K. (1984) Median-based extension of an ordering over a set to the power set: an axiomatic characterisation, Journal of Economic Theory 34, 252-61.
Pattanaik, P. K. and Peleg, B. (1984) An axiomatic characterization of the lexicographic maximin extension of an ordering over a set to the power set, Social Choice and Welfare 1, 113-122.
Quiggin, J. (1982) A theory of anticipated utility, Journal of Economic Behavior and Organisation 3(4), 323-43.
Quiggin, J. (1987) On the nature of probability weighting, Journal of Economic Behavior and Organisation 8(4), 641-5.
Quiggin, J. (1991) On the optimal design of lotteries, Economica, forthcoming.
Savage, L. J. (1954) Foundations of Statistics, Wiley, N.Y.
Schmeidler, D. (1989) Subjective probability and expected utility without additivity, Econometrica 57, 571-87.
Segal, U. (1987) The Ellsberg paradox and risk aversion: an anticipated utility approach, International Economic Review 28(1), 175-202.
Segal, U. and Spivak, A. (1990) First-order versus second-order risk-aversion, Journal of Economic Theory 51(1), 111-25.
Shackle, G. (1952) Expectations in Economics, Cambridge University Press, Cambridge.
Shafer, G. (1976) A Mathematical Theory of Evidence, Princeton University Press, Princeton.
Shafer, G. (1982a) Belief functions and parametric models, Journal of the Royal Statistical Society, Series B 44, 322-52.
Shafer, G. (1982b) Lindley's paradox, Journal of the American Statistical Association 77, 325-34.
Toulet, C. (1986) Complete ignorance and independence axiom: optimism, pessimism, indecisiveness, Mathematical Social Sciences 11, 33-51.
Wakker, P. (1989) A behavioral foundation for fuzzy measures, Nijmegen Institute for Cognition Research and Information Technology, Nijmegen, The Netherlands.
Walker, M. (1984) Explanations for gambling, in Caldwell, G., Haig, B., Dickerson, M. and Sylvan, L. (ed.), Gambling in Australia, Croom Helm, Sydney, pp. 146-62.
Yaari, M. (1987) The dual theory of choice under risk, Econometrica 55(1), 95-115.
Zadeh, L. A. (1965) Fuzzy sets, Information and Control 8, 338-353.