A decision is a choice made by an agent between competing beliefs about the world or between alternative courses of action to achieve the agent’s goals. The ability to reason flexibly about problems and to make good choices is fundamental to any kind of intelligence, and the ability to do these things well is highly valued in society. Consequently, there is a growing demand for decision support systems which can help human decision makers make important choices more effectively. This demand has stimulated research in many fields, including the behavioral and cognitive sciences (notably psychology, artificial intelligence, and management science) as well as various mathematical disciplines (such as computer science, statistics, and operations research).
Behavioral scientists have traditionally drawn a distinction between two kinds of theory that address problems in decision making. Prescriptive theories set out formal methods for reasoning and decision making, and criteria for ‘rational’ inference. Descriptive theories typically seek to explain how people make decisions and account for the errors they make that may result in violations of rational norms. Countless empirical studies of human judgement, in the laboratory and in real-world settings, have demonstrated that human decision making is subject to a variety of systematic errors and biases compared with prescriptive decision models (see Decision Research: Behavioral).
There are many reasons for this. Mistakes are clearly more likely when someone is tired or overloaded with information, or if a decision maker lacks the specialist knowledge required to make the best choice. Even under ideal conditions, however, with full information, people still make ‘unforced errors.’ One of the major reasons for this is that we do not seem to be very good at managing uncertainty and complexity.
In the 1970s and 1980s cognitive scientists came to the view that people are not only rather poor at reasoning under uncertainty, but that they also revise their beliefs by processes that bear little resemblance to formal mathematical calculation, and it is these processes that seem to give rise to the characteristic failures in decision making. Kahneman and Tversky developed a celebrated explanation in terms of heuristics: rather than properly calculating relative probabilities, people judge things to be highly likely when they come easily to mind, or when they are typical of a class. Such heuristic methods are often reasonable approximations, but they can also lead to systematic errors (see also Heuristics for Decision and Choice and Subjective Probability Judgments).
Psychological research on ‘deductive reasoning’ has looked at categorical rather than uncertain inferences, as in syllogistic reasoning, but with similar questions in mind: how do people carry out logical tasks, and how well do they do them by comparison with prescriptive logical models? (see also Problem Solving and Reasoning: Case-based). Systematic errors and failures to recognize and avoid logical fallacies have also been found. This is probably because people do not arrive at a conclusion by applying inference rules the way a logician might; rather, they appear to construct a concrete ‘mental model’ of the situation and manipulate it in order to determine whether a proposition is supported in the model (see also Mental Models, Psychology of).
In summary, many behavioral scientists now take the view that human cognition is fundamentally flawed (e.g., Sutherland 1992). Good reviews of different aspects of research on human reasoning and decision making can be found in Kahneman et al. (1982), Evans and Over (1996), Wright and Ayton (1994), Gigerenzer and Todd (1999) and Stanovich and West (2000).
2. Mathematical Methods and Decision Support Systems
If people demonstrate imperfect reasoning or decision making then it would presumably be desirable to support them with techniques that avoid errors and comply with rational rules. There is a vast amount of research on decision support systems that are designed to help people overcome their biases and limitations, and make decisions more knowledgeably and effectively. If we are to engineer computer systems to take decisions, it would seem clear that we should build those systems around theories that give us some appropriate guarantees of rationality. In a standard text on rational decision making, Lindley summarizes the ‘correct’ way to take decisions as follows:

… there is essentially only one way to reach a decision sensibly. First, the uncertainties present in the situation must be quantified in terms of values called probabilities. Second, the consequences of the courses of actions must be similarly described in terms of utilities. Third, that decision must be taken which is expected on the basis of the calculated probabilities to give the greatest utility. The force of ‘must’ used in three places is simply that any deviation from the precepts is liable to lead the decision maker to procedures which are demonstrably absurd. (Lindley 1985, p. vii)

This viewpoint leads naturally to Expected Utility Theory (EUT), which is well established and very well understood mathematically. If its assumptions are satisfied and the expected utilities of the alternative options are properly calculated, it can be argued that the procedure will reliably select the best decision. If people made more use of EUT in their work, it is said, this would result in more effective decision making. Doctors, for example, would make more accurate diagnoses, choose better treatments, and make better use of resources. Similar claims are made about the decision making of managers, politicians, and even juries in courts of law.
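The three-step procedure Lindley describes can be sketched in a few lines of code. The actions, probabilities, and utilities below are invented purely for illustration; a real application would have to estimate them from data or expert judgement.

```python
# Expected-utility sketch: quantify uncertainty as probabilities, outcomes as
# utilities, then choose the action with the greatest expected utility.
# All numbers are invented for illustration.

def expected_utility(action):
    """Sum of utility(outcome) weighted by probability(outcome | action)."""
    return sum(p * u for p, u in action["outcomes"])

actions = [
    # each outcome is a (probability, utility) pair
    {"name": "treat",          "outcomes": [(0.8, 90), (0.2, 40)]},
    {"name": "watch-and-wait", "outcomes": [(0.5, 100), (0.5, 20)]},
]

best = max(actions, key=expected_utility)
print(best["name"], expected_utility(best))  # treat 80.0
```

The calculation itself is trivial; as the rest of this section argues, the difficulty lies in supplying defensible probabilities and utilities in the first place.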
3. Limits to Theory
As living and moving beings, we are forced to act… [even when] our existing knowledge does not provide a sufficient basis for a calculated mathematical expectation. (John Maynard Keynes, quoted by Bernstein 1996).
Many people think that the practical value of mathematical methods of decision making, such as expected utility theory, and the ‘irrationality’ of human decision makers are both overstated. First, an expected-utility decision procedure requires that we know, or can estimate reasonably accurately, all the required probability and utility parameters. This is frequently difficult in real-world situations, since a decision may be urgently required even when precise quantitative data are not available. Even when it is possible to establish the necessary parameters, the cost of obtaining good estimates may outweigh the expected benefits.
Furthermore, in many situations a decision is needed before the decision options, or the relevant information sources, are fully known. The complete set of options may only emerge as the decision-making process evolves. Neither logic nor decision theory provides any guidance on this evolutionary process. Lindley acknowledges this difficulty:

The first task in any decision problem is to draw up a list of the possible actions that are available. Considerable attention should be paid to the compilation of this list [though] we can provide no scientific advice as to how this should be done.
In short, the potential value of mathematical decision theory is limited by the frequent lack of objective quantitative data on which to base the calculations, the limited range of functions that it can be used to support, and the problem that the underlying numerical representation of the decision is very different from the intuitive understanding of human decision makers.
There are also many who doubt that people are as ‘irrational’ as the prescriptive theories appear to suggest (see also Decision Research: Behavioral; Utility and Subjective Probability: Contemporary Theories). Skilled professionals may find it difficult to accept that they do not make decisions under uncertainty as well as they ‘should,’ and that under some circumstances their thinking can actually be profoundly flawed. Most of us have no difficulty accepting that our knowledge may be incomplete, that we are subject to tiredness and lapses of attention, and even that our abilities to recall and bear in mind all relevant factors are imperfect. But we are less willing to acknowledge an irremediable irrationality in our thought processes (‘people complain about their memories but never about their judgement’).
A more optimistic school of thought argues that many apparent biases and shortcomings are actually artefacts of the artificial situations that researchers create in order to study reasoning and judgement under controlled conditions. When we look at real-world decision making, human reasoning and judgement are in fact far more impressive than the research suggests. Herbert A. Simon observes that ‘humans, whose computational abilities are puny compared with those of modern super-computers or even PCs, are sometimes able to solve, with very little computation, problems that are very difficult even by computer standards: problems having ill-defined goals, poorly characterized and bounded problem spaces, or which lack a strong and regular mathematical structure’ (Simon 1995).

Shanteau (1987) has investigated ‘factors that lead to competence in experts, as opposed to the usual emphasis on incompetence,’ and identifies a number of important positive characteristics of expert decision makers. First, they know what is relevant to specific decisions, what to attend to in a busy environment, and when to make exceptions to general rules. Second, experts know a lot about what they know, and can make decisions about their own decision processes: they know which decisions to make and when, and which to skip, for example. They can adapt to changing task conditions, and are frequently able to find novel solutions to problems. Classical deduction and probabilistic reasoning do not capture these meta-cognitive skills.

Consider medical decision making as an example. Most of the research in this area has viewed clinical decision making primarily in terms of deciding what is wrong with a patient (determining the diagnosis) and what to do based on the diagnosis (selecting the best treatment) (see Problem Solving and Reasoning, Psychology of; Bounded and Costly Rationality).
In practice a doctor’s activities and responsibilities are extremely diverse. They include:
(a) recognizing the possibility of a significant clinical problem
(b) identifying information that is relevant to understanding the problem
(c) selecting appropriate investigations and other procedures
(d) deciding on the causes of clinical abnormalities and test results
(e) interpreting time-dependent data and trends (e.g., blood pressure)
(f) setting out clinical goals
(g) formulating treatment plans over time
(h) anticipating hazards
(i) creating contingency plans
(j) assessing the effectiveness of treatment
Each of these tasks involves different types of patient information and requires many different types of knowledge in order to interpret the information and act upon it appropriately. (For related issues in other areas see Bounded and Costly Rationality.) We must understand this complexity if we are to develop theories that are sufficiently sophisticated to capture the diversity, or to build decision support systems which can emulate and improve upon human capabilities.
It has also been strongly argued that people are actually well adapted for making decisions under adverse conditions: time pressure, lack of detailed information, gaps in knowledge, and so on. Gigerenzer, for instance, has suggested that human cognition is rational in the sense that it is optimized for speed at the cost of occasional, and usually inconsequential, errors (Gigerenzer and Todd 1999).
4. Effects of Tradeoffs on Effectiveness of Decision Making
… cognitive mechanisms capable of successful performance in a real world environment do not need to satisfy the classical norms of rational inference: the classical norms may be sufficient, but are not necessary, for a mind capable of sound reasoning. (Gigerenzer and Goldstein, ‘Reasoning the fast and frugal way,’ Psychological Review, 1996)

Tradeoffs simplify decision making, and in practice may entail only modest costs in the decision maker’s accuracy and effectiveness. This possibility has been studied quite extensively in the field of medical decision making. In the prediction of sudden infant death, for example, Carpenter et al. (1977) attempted to predict death from a simple linear combination of eight variables. They found that the weights could be varied across a broad range without decreasing predictive accuracy.
In diagnosing patients suffering from dyspepsia, Fox et al. (1980) found that giving all pieces of evidence equal weights produced the same accuracy as a more precise statistical method (and also much the same pattern of errors). Fox et al. (1985) developed a system for the interpretation of blood data in leukemia diagnosis, using the EMYCIN expert system software. EMYCIN provided facilities to attach numerical ‘certainty factors’ to inference rules. Initially the system was developed using the full range of available values (−1 to +1), though later these values were replaced with just two: if a rule made a purely categorical inference its certainty factor was set to 1.0, while if there was any uncertainty associated with the rule the certainty factor was set to 0.5. The effect was to increase diagnostic accuracy by 5 percent.
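The equal-weights findings above can be illustrated with a toy calculation: a unit-weight score frequently ranks cases in the same order as a score computed from more ‘precise’ weights. The evidence patterns and weights below are invented for illustration, not taken from any of the studies cited.

```python
# Toy comparison of 'precise' weighted scoring vs. equal (unit) weights.
# Each case records the presence (1) or absence (0) of four pieces of evidence.
cases = [
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
]
precise_weights = [0.9, 0.7, 0.4, 0.6]  # invented 'statistically fitted' weights

def score(evidence, weights):
    return sum(e * w for e, w in zip(evidence, weights))

# Rank the cases under each scheme, highest score first.
rank_precise = sorted(range(len(cases)),
                      key=lambda i: score(cases[i], precise_weights),
                      reverse=True)
rank_equal = sorted(range(len(cases)),
                    key=lambda i: sum(cases[i]),
                    reverse=True)
print(rank_precise == rank_equal)  # True for these invented data
```

Because the evidence items all point the same way, the precise weights change the scores but not the ordering of the cases, which is the pattern the dyspepsia and leukemia studies report.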
In a study of whether or not to admit patients with suspected heart attacks to hospital, O’Neil and Glowinski (1990) found no advantage of a precise decision procedure over simply ‘adding up the pros and cons.’ Pradhan et al. (1996) carried out a similar comparison in a diagnosis task and showed a slight increase in accuracy of diagnosis with precise statistical reasoning, but the effect was so small that it would have no practical clinical value.
In a recent study of genetic risk assessment for 50 families, the leading probabilistic risk assessment software was compared with a simple procedure made up of ‘if … then …’ rules (e.g., if the client has more than two first-degree relatives with breast cancer under the age of 50, then this is a risk factor). Despite using only a simple weighting scheme for each rule, the rule-based system produced exactly the same risk classification as the probabilistic system in every case (Emery et al. 2000).
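A minimal sketch of this kind of rule-based risk classification is shown below. The rules, field names, and thresholds are invented for illustration; they are not the actual criteria used by Emery et al. (2000).

```python
# Rule-based risk assessment sketch: each 'if ... then ...' rule either fires
# or not, and the classification depends only on which rules fired.
# Rules and thresholds are hypothetical.

def risk_factors(client):
    """Return the list of rules that fire for this client."""
    factors = []
    if client["first_degree_relatives_cancer_under_50"] > 2:
        factors.append("multiple early-onset first-degree relatives")
    if client["relative_with_bilateral_disease"]:
        factors.append("bilateral disease in a relative")
    return factors

def classify(client):
    return "elevated" if risk_factors(client) else "population"

client = {"first_degree_relatives_cancer_under_50": 3,
          "relative_with_bilateral_disease": False}
print(classify(client))  # elevated
```

The appeal of such a system is transparency: each classification can be justified by listing the rules that fired, which is closer to how a clinician would explain the judgement than a posterior probability would be.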
While the available evidence is not conclusive, a provisional hypothesis is that, at least for certain kinds of decision, such as clinical diagnosis and patient management, the strict use of quantitatively precise decision-making methods may not add much practical value to the design of decision support and artificial intelligence systems.
5. Nonclassical Methods for Decision Making
In artificial intelligence (AI) the desire to develop versatile automata has stimulated a great deal of research into new methods of decision making under uncertainty, ranging from sophisticated refinements of probabilistic methods, such as ‘Bayesian networks,’ to nonprobabilistic methods such as fuzzy logic and possibility theory. Good overviews of the different approaches and their applications are Krause and Clark (1993) and Hunter and Parsons (1998). (See also Artificial Intelligence: Uncertainty.) These approaches are similar to probability methods in that they treat uncertainty as a matter of degree. However, quantitative approaches in general have also been questioned in AI, because they require much data and do not capture varied human intuitions about the nature of ‘belief,’ ‘doubt,’ and natural justifications for decision making.
Consequently, interest has grown in the use of non-numerical methods for reasoning under uncertainty that seem to have some ‘common sense’ validity. Attempts have been made to develop qualitative approximations to quantitative methods, such as qualitative probability (Wellman 1990). In addition, new kinds of logic have been proposed, including:
(a) nonmonotonic logics, which express the everyday idea of changing one’s mind: a conclusion fully accepted at one point (as though its probability were 1.0) may later be withdrawn entirely (as though the probability had become zero);
(b) default logic, a form of nonmonotonic logic which formalizes the idea of assuming that something is true until there is reason to believe otherwise;
(c) defeasible reasoning, in which one line of reasoning can ‘rebut’ or ‘undermine’ another line of reasoning.
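The nonmonotonic behaviour described in (a) and (b) can be illustrated with the classic ‘birds fly by default’ example. The sketch below is a toy illustration of default reasoning, not an implementation of any formal default logic.

```python
# Default reasoning sketch: assume a conclusion by default, and withdraw it
# when contrary information arrives (nonmonotonic revision of belief).

def flies(bird, known_exceptions):
    """Default rule: a bird flies unless it is a known exception."""
    return bird not in known_exceptions

exceptions = set()
print(flies("tweety", exceptions))  # True: concluded by default

exceptions.add("tweety")            # new information: tweety is a penguin
print(flies("tweety", exceptions))  # False: the earlier conclusion is retracted
```

Adding a premise here removes a conclusion, which is exactly what classical logic forbids: in a monotonic system the set of conclusions can only grow as premises are added.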
Cognitive approaches, sometimes called ‘reason-based’ decision making, are gaining ground, including the idea of using informal ‘endorsements’ for alternative decision options and logical formalizations of everyday strategies of reasoning about competing beliefs and actions based on ‘argumentation.’ Models of argumentation are reviewed by Krause and Clark (1993) and the role of argumentation techniques in decision support systems is surveyed by Girle et al. (2001) (see also Problem Solving and Reasoning: Case-based and Linear Algebra for Neural Networks).
Some advocate an eclectic approach to the formalization of uncertainty that sanctions the use of different representations under different circumstances. Fox and Das (2000, Chap. 4) discuss the possibility that many techniques for reasoning under uncertainty, from quantitative and qualitative probability to default logic and the representation of uncertainty in natural language, capture different intuitions about ‘belief’ and decision making but can all be viewed as different technical specializations of the informal notion of argumentation.
Finally, as noted above, human decision makers often demonstrate ‘metacognitive’ capabilities, showing some ability to reason about the nature of the decision, the relevant information sources, the applicable forms of argument, and so forth. Metacognition may be at the heart of what we call ‘intelligence.’ Designers of decision support systems now have available a range of formalisms based on mathematical logic which can emulate such ‘meta-level reasoning’ (Fox and Das 2000), while conventional algorithmic decision procedures seem to be confined to reasoning about the input data, not the decision problem itself.
It appears from the foregoing that the claim that decision support systems should not be modeled on ‘irrational’ human cognitive processes is less compelling than it first appeared. This offers greater flexibility for the designers of decision support systems; they can adopt different decision theoretic frameworks for different applications. Indeed, they may also be able to apply various metacognitive strategies, as people appear to do, to select the most effective representations and reasoning methods in light of the demands and constraints of the current task.