2 Behavioural decision studies

Everyone complains of his memory, no one of his judgement. (François de La Rochefoucauld)

2.1 Introduction

Behavioural or empirical decision science is concerned with how people actually make decisions – i.e. descriptive studies of human behaviour. In section 1.5 we briefly outlined SEU theory, a normative approach specifying how people should decide in the face of uncertainty if they wish to be rational. More broadly, there are many other normative theories that seek to encapsulate the essence of rational decision making, each doing so in a different context. We indicate more of what we mean by this in the next chapter, but the key point here is that each normative theory is based upon a number of simple axioms or assumptions that are generally plausible and that, their proponents argue, characterise rational decision behaviour. In this chapter we explore research investigating the extent to which people actually make decisions in ways that are compatible with a particular normative theory of risky decision making, based on the maximisation of SEU. We show that this normative theory rarely predicts human choice and that we need to develop a different set of theories, referred to as descriptive theories, if we are to predict and explain the ways in which people actually make decisions. Further, in this chapter we consider the extent to which differences between normative and descriptive theories indicate important limitations in human decision making, and suggest how knowledge of these limitations can be used for developing procedures to improve our decision making.
Our objectives in this chapter are thus: to introduce empirical studies of decision-making behaviour; to demonstrate that unguided human decision making is not as flawless as we might hope; to show how an understanding of these flaws not only indicates a strong need for decision support, but also provides important insights about the nature of the support that is needed; and
theory than repeated decisions.1 A second approach has been to augment
the theory to increase its ability to predict actual choice behaviour – e.g.
adding an extra component to take account of the fact that people may
have a utility for gambling (Diecidue et al., 2004; Fishburn, 1980) or
anticipated regret (Loomes and Sugden, 1982). Adding these components
can indeed increase the extent to which the theory can predict choices
between gambles, but, since they are not specified by the underlying
axioms, we may question whether people ought to be influenced by them,
if they wish to be rational. Also, there seems to be little agreement about
what these added elements ought to be.
Simon (1960) argued that people have limited cognitive capacity and so
are unable to carry out all the mental operations that are required by the
SEU model – or, indeed, many other normative models. His view was that,
instead, people use simpler decision strategies that involve processing less
information, often in a much simpler way. One such strategy, satisficing,
involves establishing a minimum standard for each attribute2 of an action or
outcome and then choosing the first alternative that meets these standards.
For instance, according to many normative models, when purchasing a
house a buyer should develop an overall evaluation of all houses that are
available (and there are usually a lot of them!) in terms of how each per-
forms on all the attributes that he or she feels are important (e.g. cost,
closeness to work, ‘atmosphere’), making sure that each attribute is
weighted appropriately to take account of the importance he/she attaches to
it. Not only does this suggest a lot of cognitive processing, evaluating and
weighting attributes and then aggregating them into an overall evaluation of
each available house, but there is also a substantial load placed on the
memory, given that the buyer needs to remember all these overall evalu-
ations in order to choose the best. In contrast to this, in satisficing the buyer
considers each house in turn to see whether it meets the minimum standard
set for each attribute. If a house fails to meet a standard it is immediately
rejected (and any evaluations already undertaken can be forgotten). As soon
as a house meets all the standards it is chosen; though, of course, an
option yet to be considered might be better – the DM
would never know.
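The satisficing strategy described above can be sketched in code. This is a minimal illustration rather than anything from the text: the attribute names, scores and standards are invented, and the options are simply scanned in order.

```python
# A minimal sketch of Simon's satisficing strategy: choose the first
# option that meets a minimum standard on every attribute.
# The houses, attributes and thresholds are hypothetical illustrations.

def satisfice(options, standards):
    """Return the first option meeting every minimum standard, else None.

    options   -- list of dicts mapping attribute name -> score
    standards -- dict mapping attribute name -> minimum acceptable score
    """
    for option in options:
        # An option is rejected as soon as any attribute falls short;
        # any evaluations already made can then be "forgotten".
        if all(option[attr] >= minimum for attr, minimum in standards.items()):
            return option
    return None  # no option met all the standards

houses = [
    {"name": "house 1", "cost": 4, "closeness": 9, "atmosphere": 3},
    {"name": "house 2", "cost": 7, "closeness": 6, "atmosphere": 8},
    {"name": "house 3", "cost": 9, "closeness": 8, "atmosphere": 9},
]
standards = {"cost": 6, "closeness": 5, "atmosphere": 7}

chosen = satisfice(houses, standards)
print(chosen["name"])  # house 2 -- chosen even though house 3 scores higher overall
```

Note that house 3 is never inspected: the strategy stops at the first acceptable option, which is exactly why it is frugal with both processing and memory.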
Satisficing means that DMs choose the first option that is reasonable
rather than the best. Since many people have limited time and resources
1 It should be noted, however, that SEU theory explicitly seeks to include a valuation of the risk inherent in one-off decisions: see our discussion of risk attitude in section 8.4.
2 We discuss and define attributes in more detail later (see section 7.3). For the present, an informal understanding is sufficient.
to make decisions this may be better than trying to implement the
rational model, given the lower demands this strategy makes on mental
and other resources. Indeed, Simon suggested that there are often
regularities in the environment (e.g. redundancy between different pieces
of information) such that it is often not necessary to process all the
available information. In these situations satisficing may perform quite as
well as the rational model. He called this phenomenon bounded
rationality, since people are using bounded or rather simpler strategies yet
maintaining decision accuracy at a comparable level to that derived from
the rational model. Since Simon’s seminal work, researchers have iden-
tified many other strategies that people adopt when making decisions
(see, for example, Svenson, 1979) and have provided evidence showing
that simpler strategies, often referred to as fast-and-frugal – or, in the
vernacular, quick and dirty – heuristics, can indeed do quite as well as,
and sometimes better than, more complex ones (Gigerenzer et al., 1999).
We discuss some of these in section 3.8.
Taken together, these findings show that people rarely, if ever, make
decisions according to the SEU model, and that this is due in large part to
the cognitive demands of choosing in this way. This conclusion has pro-
vided one of the primary reasons for developing the kinds of decision aids
discussed in later chapters. Many of these aids structure the decision
process so that it follows the approach advocated by the rational model
and, at the same time, provide support mechanisms that address the
problems arising from limitations in human cognitive capacity.
2.3 The sure-thing axiom
He is no wise man that will quit a certainty for an uncertainty. (Samuel Johnson)
A second way of assessing whether the SEU model is descriptive of how people actually make decisions is to investigate whether they behave in
accordance with the axioms of the model. When asked, most people agree
that these axioms3 are acceptable and are principles that should be fol-
lowed when making decisions (MacCrimmon, 1968). Research has shown
that people often behave in ways that violate these axioms, however, even
when the implications of the axioms are explained to them in the context
of their choice (Slovic and Tversky, 1974). In this section we outline some
3 Strictly, we should not say the axioms underlying SEU theory, for there are many derivations of the SEU model from apparently different sets of axioms. All are fundamentally equivalent, however. French and Ríos Insua (2000: chap. 2) provide a survey of several derivations.
of the many studies that demonstrate these violations and consider the
implications these have for decision-making effectiveness.
A key implication4 of the SEU model is the sure-thing axiom. Stated
simply, this demands that, if there are some outcomes that will occur
regardless of which option is chosen, the nature of these common out-
comes should not affect choice behaviour. Baron (2001: 235) provides
an example: imagine choosing between two different lotteries with the
same likelihoods of winning but with different prizes if you actually win
(e.g. foreign holidays at different locations A and B). Should you lose,
then each lottery has the identical consolation prize (e.g. a discount on
the next holiday that you book). The sure-thing principle states that,
since the outcome from losing is the same for both options, the exact
nature of this outcome (e.g. whether it is a 5 per cent or a 10 per cent
discount) should not affect your choice between the lotteries. In other
words, if you prefer the lottery with a prize for foreign location A when
the outcome from losing is a 5 per cent discount then you should also
prefer the same lottery when the outcome from losing is a 10 per cent
discount.
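Under SEU, the reasoning behind the sure-thing principle can be checked numerically. The sketch below uses invented utility and probability values; the point is only that a consolation prize common to both lotteries adds the same amount to each expected utility and so cancels out of the comparison.

```python
# A hedged numerical sketch of why the sure-thing principle follows from
# SEU: a consolation prize common to both lotteries cancels out of the
# comparison. All utility numbers here are hypothetical.

p_win = 0.4                      # same winning probability for both lotteries
u_prize = {"A": 10.0, "B": 8.0}  # utilities of the holidays at locations A and B

def eu(prize_utility, consolation_utility):
    """Expected utility of a lottery sharing a common consolation prize."""
    return p_win * prize_utility + (1 - p_win) * consolation_utility

for u_consolation in (1.0, 2.0):   # e.g. a 5 per cent vs a 10 per cent discount
    diff = eu(u_prize["A"], u_consolation) - eu(u_prize["B"], u_consolation)
    print(f"consolation utility {u_consolation}: EU(A) - EU(B) = {diff:.2f}")
# The difference is always p_win * (u_A - u_B) = 0.4 * 2.0 = 0.80:
# the common outcome can never reverse the preference.
```

Whatever value the common consolation prize takes, it contributes the same term to both expected utilities, so an SEU maximiser's ranking of the lotteries cannot depend on it.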
Although this axiom seems highly plausible, there are a number of high-
profile violations that have had a significant impact on our understanding of
human decision making. In particular, both Allais (1953) and Ellsberg
(1961) asked people to choose between different pairs of gambles that
included common outcomes and showed that varying the pay-offs associ-
ated with these common outcomes affected choice behaviour. We describe
each of these in more detail in the next sections.
The Allais paradox
In 1952 Allais presented the problem given in table 2.1. Pause for a minute
and think about which option you would choose in each of the two
choices. A majority of individuals choose option A in the first choice and
option D in the second. They argue that option A makes them ‘rich’
beyond their wildest dreams, so why should they risk the small chance
(1 per cent) in option B of receiving nothing? In the second choice,
however, there is roughly the same high probability of their receiving
4 As indicated in footnote 3, there are many derivations of the SEU model, and some of the axioms of one derivation may be implied by those of another. In section 3.3 we give a very simple derivation, but one that does not explicitly use the sure-thing axiom discussed here. As problem 1 at the end of this chapter shows, however, the sure-thing axiom is a necessary implication of the SEU model. It is therefore implicit in our – and any other – development of SEU.
way outcomes are framed also changes. For example, consider the fol-
lowing problem.
Imagine you are part of a team planning how European governments should
respond to the next major nuclear accident. You are considering two possible
strategies to protect people. Strategy 1 is based on protecting people in their own
homes by sealing windows, providing instructions to remain indoors, etc.,
whereas strategy 2 is based on evacuating people so that they are a safe distance
from the plume of nuclear contamination. The risks associated with these two
strategies are rather different. Strategy 2 is riskier in the short term given the
threats associated with evacuating a large number of people – e.g. traffic acci-
dents, stress-related heart attacks and the possibility of being in the open during
the passage of the plume – but better in the long term, given that removing
people from the situation leads to less exposure to nuclear contamination,
thereby reducing long-term health threats from cancer and related illnesses.

When framing the value of the outcomes of each strategy, there are at least
two different reference points that could be adopted. The first reference
point is ‘what life was like before the accident’. From this reference point,
outcomes associated with both strategies are framed as losses; the accident
makes everything worse than before regardless of what strategy we adopt.
The second reference point is ‘what life would be like if no protective action
was taken’; from this reference point, outcomes associated with both
strategies are framed as gains, each offering some gain over doing nothing.

Does it matter which reference point is adopted? Research by McNeil
et al. (1982) in the medical domain suggests that it does. They asked people
to choose between different medical treatments with broadly similar risk
characteristics to those described in the nuclear accident problem above.
They predicted and then showed that, when outcomes were framed as
losses, short-term risks were particularly aversive as compared with a
situation in which those same outcomes were framed as gains. Thus, in the
nuclear accident problem described above, we would predict that evacu-
ation, associated with the short-term risks, would be more likely to be
chosen when the reference point adopted leads to outcomes framed as
gains rather than losses – i.e. the second reference point.
Most DMs, including those who actually make nuclear protection deci-
sions, are unaware that they use reference points, so they are likely to adopt
one of these at random without realising how it can bias their choice of
action. Similarly, it is argued that the change in wording in the two versions
of the Asian disease problem presented earlier also led to the adoption of
different reference points. Why should framing in terms of gains and losses
change attitudes to risk, however? In order to answer this question we need
to consider the second, or evaluation, phase of prospect theory.
Similar to SEU theory, evaluation of an alternative involves summing
the products of the values and probabilities associated with each possible
outcome. Specifically, prospect theory ranks alternatives according to
Σᵢ π(pᵢ) v(cᵢ)

where pᵢ is the ‘actual’ subjective probability of the ith consequence, π(p) is
a decision-weighting function that adjusts the probability, increasing the
influence of small probabilities and decreasing the influence of high ones,
and v(cᵢ) is the value of the ith consequence.
As indicated above, the values of outcomes are represented as gains and
losses rather than final states of wealth. Figure 2.1 presents a typical value
function describing how people value varying amounts of gain or loss. Put
simply, gains are evaluated positively, but each incremental gain has
less value as the total gain increases. The concave shape of the value
function in gains leads to people valuing a certain gain more than a
probable gain with equal or greater expected value – i.e. they exhibit risk
[Figure 2.1 The form of the value function in prospect theory (value plotted against gains and losses), representing risk aversion for gains and risk seeking for losses]
aversion.5 Similarly, losses are evaluated negatively, but each incremental
loss has less negative value as the total loss increases. The convex
shape of the value function in losses also means that people value a certain
loss more than a probable loss with equal or greater expected value, but
since this value is negative people prefer the risky option – i.e. they exhibit
risk-seeking behaviour. A second important feature of the value function is
that it is steeper in losses than in gains. This means that the impact of a loss
is greater than a comparable gain and leads to loss aversion – i.e. people are
overly sensitive to loss.
Prospect theory also predicts a cognitive distortion of probability, with
the suggestion that the impact of a probability, referred to as a decision
weight π(p), is different from its numerical value. Figure 2.2 outlines the
relationship between probabilities and their associated decision weights.
The figure shows that small probabilities are overweighted. Thus, out-
comes associated with small probabilities have a bigger impact on choice
than they should. In addition, medium to high probabilities are under-
weighted, so outcomes associated with these probabilities have less of an
impact on choice than they should. This pattern of weighting is found to
occur regardless of whether people are given the probabilities or they have
to estimate them for themselves.
A further feature of the probability-weighting function is that people are
very sensitive to changes around the ends of the scale – i.e. 0 (impossibility)
and 1 (certainty); in other words, changes in probability from 0.0 to 0.01 or
0.99 to 1.00 have a greater impact than changes from, say, 0.01 to 0.02 or
0.60 to 0.61. This effect ties in with the phenomenon of ambiguity aversion
discussed earlier: people value certainty over uncertainty.
These cognitive distortions associated with value and probability
combine together to predict a fourfold pattern of choice behaviour. Pre-
viously we identified two of these: risk aversion in gains and risk seeking in
losses. This pattern occurs for medium to large probabilities only, how-
ever, with the pattern reversing (i.e. risk seeking in gains and risk aversion
in losses) at small probabilities.
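The evaluation phase can be sketched by combining an S-shaped value function with an inverse-S weighting function. The functional forms and parameter values below (Tversky and Kahneman's 1992 estimates: alpha = 0.88, lambda = 2.25, gamma = 0.61) are one common parameterisation, used here purely for illustration; the chapter itself does not commit to specific equations.

```python
# An illustrative sketch of the prospect theory evaluation phase.
# Functional forms and parameters follow Tversky and Kahneman (1992);
# they are assumptions for illustration, not part of the chapter's text.

ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def value(x):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)       # losses loom larger than gains

def weight(p):
    """Inverse-S decision weight: overweights small p, underweights medium/high p."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

def prospect(outcomes):
    """Evaluate a prospect as the sum of pi(p_i) * v(c_i) over its outcomes."""
    return sum(weight(p) * value(c) for p, c in outcomes)

# Two corners of the fourfold pattern, comparing each risky prospect
# with its expected value for sure:
sure_gain  = prospect([(1.0, 50)])     # certain gain of 50
risky_gain = prospect([(0.5, 100)])    # 50% chance of 100
print(sure_gain > risky_gain)          # True: risk aversion in gains (medium p)

long_shot  = prospect([(0.001, 5000)]) # 0.1% chance of 5,000
sure_small = prospect([(1.0, 5)])      # its expected value for sure
print(long_shot > sure_small)          # True: risk seeking in gains at small p
```

With these parameters the same code also reproduces loss aversion (the value of −10 is larger in magnitude than the value of +10) and the reversal to risk seeking in losses at medium probabilities.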
Overall, prospect theory has been used to explain many of the incon-
sistencies and violations of SEU theory, some of which have been described
above, as well as making many new predictions about human decision
making that have subsequently been supported by empirical research (Fox
and See, 2003; Kahneman and Tversky, 2000).
5 SEU models can also represent the same assumptions of decreasing marginal worth and risk aversion: see section 8.4.
(B). Mr Smith has had one or more heart attacks and is over fifty-five years old.
The majority of people choose option B (see, for example, Tversky and
Kahneman, 1983). Choosing this is a mistake, however, called the con-
junction fallacy, since A can be subdivided into those having heart attacks
and over fifty-five years of age (i.e. option B) and those having heart
attacks and under fifty-five years old. Thus, option A must have either the
same probability (if nobody under fifty-five years old has had a heart
attack) or a greater one (if some of those under fifty-five have had a heart
attack as well).
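The subset argument can be made concrete with a toy enumeration; the patient records below are invented purely for illustration.

```python
# A tiny enumeration showing why option A ("heart attack") can never be
# less probable than option B ("heart attack AND over fifty-five"):
# every case in B is also a case in A. The records are invented.

patients = [
    {"heart_attack": True,  "age": 60},
    {"heart_attack": True,  "age": 48},   # counted in A but not in B
    {"heart_attack": False, "age": 70},
    {"heart_attack": True,  "age": 57},
]

n_A = sum(1 for p in patients if p["heart_attack"])
n_B = sum(1 for p in patients if p["heart_attack"] and p["age"] > 55)

print(n_A, n_B)   # 3 2 -- B is a subset of A, so n_A >= n_B in any population
```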
Choosing B is a mistake, and we call it a failure of coherence because it
violates a basic principle of probability theory: that if B is a subset of A then
it cannot be more probable than A. Researchers have shown that people
violate other principles of probability theory. Perhaps the most important
of these is neglecting base rates. Consider the following.
A breast-cancer-screening procedure can detect 80 per cent of women with
undiagnosed cancer of the breast and misclassifies only 5 per cent without
cancer. It is estimated that the rate of cancer sufferers in women who are
screened is thirty cases per 10,000. What is the probability that any particular
woman who has a positive test actually has cancer? Give a value between 0 and
100 per cent.
Many people think that the probability is around 70 to 75 per cent,
including a senior nurse one of us taught who was responsible for the
breast-screening service in a large town. The true probability is about 5 per
cent. The correct answer derives from a simple application of Bayes’
theorem, which prescribes how probability judgements should be updated
in the light of new data. This involves combining the information about
the reliability of the test with the initial probability of having cancer in
the first place, often referred to as the base rate.6 Research indicates that
experts often make errors similar to this when making judgements (see, for
example, Casscells et al., 1978, and Gigerenzer, 2002). These are just two
of many examples of failures in coherence that together show that people
often make inconsistent probability judgements (Kahneman et al., 1982;
Kahneman and Tversky, 2000).
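The calculation behind the 5 per cent figure is a direct application of Bayes' theorem, sketched below using only the numbers given in the screening problem above.

```python
# Bayes' theorem applied to the breast-screening example:
# base rate 30 in 10,000, sensitivity 80%, false positive rate 5%
# (all figures taken from the problem as stated in the text).

base_rate   = 30 / 10_000   # P(cancer)
sensitivity = 0.80          # P(positive | cancer)
false_pos   = 0.05          # P(positive | no cancer)

# P(cancer | positive) = P(positive | cancer) P(cancer) / P(positive)
p_positive = sensitivity * base_rate + false_pos * (1 - base_rate)
p_cancer_given_positive = sensitivity * base_rate / p_positive

print(f"{p_cancer_given_positive:.1%}")   # 4.6% -- about 5 per cent, not 70-75
```

Almost all positive tests come from the large cancer-free majority: the tiny base rate dominates the calculation, which is exactly the information people neglect.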
A failure of correspondence is illustrated in a study reported by Lich-
tenstein et al. (1978), which asked research participants the following
6 We provide an approximate application of Bayes’ theorem to this example in section 3.8.
predictions play a crucial role in decision analysis (see chapter 8). In sections
6 and 7 above we showed that predictions about the future are often based
on judgemental heuristics that are associated with errors and biases – e.g.
optimism, which leads to an overestimation of the likelihood of positive
outcomes and an underestimation of negative outcomes. Lovallo and
Kahneman (2003) have argued that one important limitation underpinning
these kinds of judgements is an overdependence on ‘inside’ rather than
‘outside’ thinking. Inside thinking focuses on the specific features and
characteristics of the problem/situation in hand and uses these to make
predictions about such aspects as its likelihood of success, profitability or
time to completion. In contrast, outside thinking focuses on the outcomes
of similar problems that have been completed in the past, considering where
the current problem sits in terms of the distribution of these previous cases,
and derives predictions from what might be expected given its position in
the distribution of previous outcomes. Under inside thinking, people focus
on the positive aspects of the situation, so they are overly optimistic. In
contrast to this, outside thinking is based on previous outcomes, so it is not
affected by this bias (or it is affected to a lesser degree).
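Outside thinking can be sketched as reading a prediction off the distribution of past outcomes rather than reasoning from the project's internal details. The reference-class data and the percentile used below are invented for illustration.

```python
# A sketch of an "outside" prediction: place the current project within
# the distribution of outcomes from a reference class of past projects.
# The development times and chosen percentile are hypothetical.

past_times = sorted([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0])  # years

def percentile(data, fraction):
    """Outcome at a given fraction of the way up the sorted reference class."""
    index = min(int(fraction * len(data)), len(data) - 1)
    return data[index]

# Judging the current drug to be unusually difficult places it near the
# top of the distribution, rather than at an optimistic "inside" estimate.
outside_estimate = percentile(past_times, 0.8)
print(outside_estimate)   # 7.5 -- towards the top of the 4-8 year range
```

The distribution itself, rather than the project's internal story, supplies the estimate, which is what protects the forecast from the optimism of inside thinking.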
To help facilitate outside thinking, Lovallo and Kahneman (2003)
advocate using a five-step procedure. We present these steps and illustrate
them in the context of predictions made by a pharmaceutical company about the profitability that might be expected from developing a new drug.
(1). Select a reference class: identify previous relevant/similar situations
to the one that is currently being evaluated – e.g. previous instances
in which similar drugs have been developed.
(2). Assess the distribution of outcomes: list the outcomes of these pre-
vious situations – e.g. how long the drugs took to develop, their actual
profitability and other relevant output measures.
(3). Make an intuitive prediction about how the current situation compares
with those in the reference class: use this to predict where in the dis-
tribution of past outcomes the current situation lies – e.g. in the past
similar drugs have taken between four and eight years to come to market;
this project is dealing with something quite difficult, so the estimate
should be towards the top end of the distribution, say around 7.5 years.
While the first three steps are sufficient to derive an outside prediction,
Lovallo and Kahneman advocate two further steps to improve this forecast.
(4). Assess the reliability of the prediction by deriving the likely correlation
between this prediction and the actual outcome (i.e. a value between