ignorance and uncertainty

All about unknowns and uncertainties

Can We Make “Good” Decisions Under Ignorance?


There are well-understood formal frameworks for decision making under risk, i.e., where we know all the possible outcomes of our acts and also know the probabilities of those outcomes. There are even prescriptions for decision making when we don’t quite know the probabilities but still know what the outcomes could be. Under ignorance we not only don’t know the probabilities, we may not even know all of the possible outcomes. Shorn of their probabilities and a completely known outcome space, normative frameworks such as expected utility theory stand silent. Can there be such a thing as a good decision-making method under ignorance? What criteria or guidelines make sense for decision making when we know next to nothing?

At first glance, the notion of criteria for good decisions under ignorance may seem absurd. Here is a simplified list of criteria for “good” (i.e., rational) decisions under risk:

  1. Your decision should be based on your current assets.
  2. Your decision should be based on the possible consequences of all possible outcomes.
  3. You must be able to rank all of the consequences in order of preference and assign a probability to each possible outcome.
  4. Your choice should maximize your expected utility or, roughly speaking, the likelihood of those outcomes that yield highly preferred consequences.

In non-trivial decisions, this prescription requires a vast amount of knowledge, computation, and time. In many situations at least one of these requirements isn’t met, and often none of them are.
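
To make criteria 2-4 concrete, here is a minimal sketch in Python of expected-utility maximization. The acts, outcome probabilities, and utilities are all invented for illustration; the point is only how much must already be known before the rule can be applied at all.

    # Minimal sketch of a decision under risk: every act's outcomes and
    # their probabilities are known, and the act with the highest
    # expected utility is chosen. All numbers are invented.

    acts = {
        # act: [(probability of outcome, utility of its consequence), ...]
        "carry umbrella": [(0.3, 60), (0.7, 80)],   # rain, no rain
        "leave umbrella": [(0.3, 0), (0.7, 100)],
    }

    def expected_utility(outcomes):
        """Sum of probability-weighted utilities over all outcomes."""
        return sum(p * u for p, u in outcomes)

    for act, outcomes in acts.items():
        print(f"{act}: EU = {expected_utility(outcomes):.1f}")

    print("Choose:", max(acts, key=lambda a: expected_utility(acts[a])))

Under ignorance, neither the probabilities nor the complete list of outcomes is available, which is precisely where this rule falls silent.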

This problem has been recognized for a long time, although it has been framed in rather different ways. In the 1950s at least two spokespeople emerged for decision making under ignorance. The economist Herbert Simon proposed “bounded rationality” as an alternative to expected utility theory, in recognition of the fact that decision makers have limited time, information-gathering resources, and cognitive capacity. He coined the term “satisficing” to describe criteria for decisions that may fall short of optimality but are “good enough” and also humanly feasible to achieve. Simon also championed the use of “heuristics,” rules of thumb for reasoning and decision making that, again, are not optimal but work well most of the time. These ideas have been elaborated since by many others, including Gerd Gigerenzer’s “fast and frugal” heuristics and Gary Klein’s “naturalistic” decision making. These days bounded rationality has many proponents.
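
As a rough illustration of satisficing (my sketch, not Simon’s own formalism), the rule below takes options as they come and accepts the first one that meets an aspiration level, rather than evaluating everything to find the optimum. The options and the threshold are invented.

    # Rough sketch of satisficing: accept the first option that is
    # "good enough" rather than searching for the optimum.
    # Option values and the aspiration level are invented.

    def satisfice(options, aspiration_level):
        for name, value in options:
            if value >= aspiration_level:  # good enough: stop searching
                return name
        # Nothing met the aspiration level; fall back to the best seen.
        return max(options, key=lambda nv: nv[1])[0]

    apartments = [("flat A", 55), ("flat B", 72), ("flat C", 90)]
    print(satisfice(apartments, aspiration_level=70))  # "flat B", not the optimal "flat C"

The search cost saved grows with the number of options, which is the point: the rule remains feasible where full optimization is not.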

Around the same time that Simon was writing about bounded rationality, political economist Charles Lindblom emerged as an early proponent of various kinds of “incrementalism,” which he engagingly called “muddling through.” Whereas Simon and his descendants focused on the individual decision maker, Lindblom wrote about decision making mainly in institutional settings. One issue that the bounded rationality people tended to neglect was highlighted by Lindblom, namely the problem of not knowing what our preferences are when the issues are complex:

“Except roughly and vaguely, I know of no way to describe–or even to understand–what my relative evaluations are for, say, freedom and security, speed and accuracy in governmental decisions, or low taxes and better schools than to describe my preferences among specific policy choices that might be made between the alternatives in each of the pairs… one simultaneously chooses a policy to attain certain objectives and chooses the objectives themselves.” (Lindblom 1959, pg. 82).

Lindblom’s characterization of “muddling through” also is striking for its rejection of means-ends analysis. For him, the means and ends are entwined in the policy options under consideration. “Decision-making is ordinarily formalized as a means-ends relationship: means are conceived to be evaluated and chosen in the light of ends finally selected independently of and prior to the choice of means… Typically, …such a means-ends relationship is absent from the branch method, where means and ends are simultaneously chosen.” (Lindblom 1959, pg. 83).

In the absence of a means-end analysis, how can the decision or policy maker know what is a good decision or policy? Lindblom’s answer is consensus among policy makers: “Agreement on objectives failing, there is no standard of ‘correctness.’… the test is agreement on policy itself, which remains possible even when agreement on values is not.” (Lindblom 1959, pg. 83)

Lindblom’s prescription is to restrict decisional alternatives to small or incremental deviations from the status quo. He claims that “A wise policy-maker consequently expects that his policies will achieve only part of what he hopes and at the same time will produce unanticipated consequences he would have preferred to avoid. If he proceeds through a succession of incremental changes, he avoids serious lasting mistakes in several ways.” First, a sequence of small steps will have given the policy maker grounds for predicting the consequences of an additional similar step. Second, he claims that small steps are more easily corrected or reversed than large ones. Third, stakeholders are more likely to agree on small changes than on radical ones.

How, then, does Lindblom think his approach will not descend into groupthink or extreme confirmation bias? Through diversity and pluralism among the stakeholders involved in decision making:

“… agencies will want among their own personnel two types of diversification: administrators whose thinking is organized by reference to policy chains other than those familiar to most members of the organization and, even more commonly, administrators whose professional or personal values or interests create diversity of view (perhaps coming from different specialties, social classes, geographical areas) so that, even within a single agency, decision-making can be fragmented and parts of the agency can serve as watchdogs for other parts.”

Naturally, Lindblom’s prescriptions and claims were widely debated. There is much to criticize, and he didn’t present much hard evidence that his prescriptions would work. Nevertheless, he ventured beyond the bounded rationality camp in four important respects. First, he brought into focus the prospect that we may not have knowable preferences. Second, he realized that means and ends may not be separable and may be reshaped in the very process of making a decision. Third, he mooted the criterion of choosing incremental and corrigible changes over large and irreversible ones. Fourth, he observed that many decisions are embedded in institutional or social contexts that may be harnessed to enhance decision making. All four of these advances suggest implications for decision making under ignorance.

Roger Kasperson contributed a chapter on “coping with deep uncertainty” to Gabriele Bammer’s and my 2008 book. By “deep uncertainty” he means “situations in which, typically, the phenomena… are characterized by high levels of ignorance and are still only poorly understood scientifically, and where modelling and subjective judgments must substitute extensively for estimates based upon experience with actual events and outcomes, or ethical rules must be formulated to substitute for risk-based decisions.” (Kasperson 2008, pg. 338) His list of options open to decision makers confronted with deep uncertainty includes the following:

  1. Delay to gather more information and conduct more studies in the hope of reducing uncertainty across a spectrum of risk;
  2. Interrelate risk and uncertainty to target critical uncertainties for priority further analysis, and compare technology and development options to determine whether clearly preferable options exist for proceeding;
  3. Enlarge the knowledge base for decisions through lateral thinking and broader perspective;
  4. Invoke the precautionary principle;
  5. Use an adaptive management approach; and
  6. Build a resilient society.

He doesn’t recommend these unconditionally, but instead writes thoughtfully about their respective strengths and weaknesses. Kasperson also observes that “The greater the uncertainty, the greater the need for social trust… The combination of deep uncertainty and high social distrust is often a recipe for conflict and stalemate.” (Kasperson 2008, pg. 345)

At the risk of leaping too far and too fast, I’ll conclude by presenting my list of criteria and recommendations for decisions under ignorance. I’ve incorporated material from the bounded rationality perspective, some of Lindblom’s suggestions, bits from Kasperson, my own earlier writings, and other sources not mentioned in this post. You’ll see that the first two major headings echo the first two in the expected utility framework, but beneath each of them I’ve slipped in some caveats and qualifications.

  1. Your decision should be based on your current assets.
    a. If possible, know which assets can be traded and which are non-negotiable.
    b. If some options are decisively better (worse) than others considering the range of risk that may exist, then choose them (get rid of them).
    c. Consider options themselves as assets. Try to retain them or create new ones.
    d. Regard your capacity to make decisions as an asset. Make sure you don’t become paralyzed by uncertainty.
  2. Your decision should be based on the possible consequences.
    a. Be aware of the possibility that means and ends may be inseparable and that your choice may reshape both means and ends.
    b. Beware unacceptable ends-justify-means arguments.
    c. Avoid irreversible or incorrigible alternatives if possible.
    d. Seek alternatives that are “robust” regardless of outcome (see the sketch after this list).
    e. Where appropriate, invoke the precautionary principle.
    f. Seek alternatives whose consequences are observable.
    g. Plan to allocate resources for monitoring consequences and (if appropriate) gathering more information.
  3. Don’t assume that getting rid of ignorance and uncertainty is always a good idea.
    a. See 1.c. and 2.c. above. Options and corrigibility require uncertainty; freedom of choice is positively badged uncertainty.
    b. Interventions that don’t alter people’s uncertainty orientations will be frustrated by people’s attempts to re-establish the level of uncertainty they are comfortable with.
    c. Ignorance and uncertainty underpin particular kinds of social capital. Eliminate ignorance and uncertainty and you also eliminate that social capital, so make sure you aren’t throwing any babies out with the bathwater.
    d. Other people are not always interested in reducing ignorance and uncertainty. They need uncertainty to have freedom to make their own decisions. They may want ignorance to avoid culpability.
  4. Where possible, build and utilize relationships based on trust instead of contracts.
    a. Contracts presume and require predictive knowledge, e.g., about who can pay whom how much and when. Trust relationships are more flexible and robust under uncertainty.
    b. Contracts lock the contractors in, incurring opportunity costs that trust relationships may avoid.
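
As a rough illustration of points 1.b and 2.d (my construction, not a method prescribed above), the sketch below first discards options that are decisively worse, i.e., dominated in every outcome by some rival, and then ranks the survivors by their worst case, one common “maximin” reading of robustness. The payoff table is invented, and under genuine ignorance even its list of outcomes may be incomplete.

    # Sketch of criteria 1.b and 2.d on an invented payoff table.
    # Columns are possible outcomes; under ignorance no probabilities
    # attach to them, and the outcome list itself may be incomplete.

    payoffs = {
        "option A": [4, 6, 5],
        "option B": [3, 5, 4],   # dominated by A: worse in every outcome
        "option C": [1, 9, 7],
    }

    def dominates(x, y):
        """x dominates y: at least as good everywhere, better somewhere."""
        return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

    # 1.b: discard dominated options.
    survivors = {
        name: row for name, row in payoffs.items()
        if not any(dominates(other, row) for other in payoffs.values())
    }

    # 2.d: rank the rest by worst case, a maximin reading of "robust".
    for name, row in sorted(survivors.items(), key=lambda nr: min(nr[1]), reverse=True):
        print(f"{name}: worst case = {min(row)}")

Notice that dominance screening needs no probabilities at all, which is why 1.b survives the move from risk to ignorance better than expected utility does.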

Managing and decision making under ignorance is, of course, far too large a topic for a single post, and I’ll be returning to it in the near future. Meanwhile, I’m hoping this piece at least persuades readers that the notion of criteria for good decisions under ignorance may not be absurd after all.

2 Responses


  1. Some quick comments drawing on my experience as an economic policy adviser to the UK, Australian and Queensland governments.

    Lindblom’s suggestion that we know a decision or policy is good when there is consensus among policy-makers is nonsense. We might consider “good” policy to be that which best serves the broad public interest, whether defined as Pareto-optimal or by other criteria. Unfortunately, many policy-makers are not driven by public benefit considerations, and the consensus will tend to settle where it provides most benefit to the policy-makers themselves, e.g. in career-enhancement, avoidance of criticism and conflict, etc. I can quote many instances where agreed policy was clearly against the public interest, where demonstrations to that effect were ignored, and where the outcomes were clearly wealth-destroying. One example was the Australian Magnesium Smelter project, which collapsed with the loss of about $A480m of mainly public funding. I had demonstrated that the project, which had been mooted for over 30 years without attracting commercial support, failed almost all criteria for viability, and directed CGE modelling which showed highly negative returns. The preferred client for the magnesium metal said, for similar reasons to mine, that it could never be viable in Australia. But promotions depended on getting the project up. No one was sanctioned when it collapsed.

    Re incrementalism, this is often an inferior choice. For example, the Australian economy was in dire straits in the early 1980s, largely from massively intrusive regulation and government direction and very high border protection. Incremental change was not a viable solution: the vested interests created by government intervention would fight each change all the way, and there would be little constituency for each specific change. Wholesale change was needed: the Hawke government engaged in this, floating the dollar, cutting quotas and tariffs, weakening IR regulation and pursuing a National Competition Policy. The benefits from these large-scale changes were significant and widespread. Where particular proposals met resistance, they could be seen as fair given the broad reform context and the gains arising from it: most of the “losers” in a specific case would share in broad gains and have wider opportunities than before. The reforms from the 1980s to early 2000s underpinned rapid growth in real incomes, opportunities and employment which would not have been possible with incrementalism.

    I don’t understand your points regarding getting rid of ignorance and uncertainty not always being a good idea. Of course, people are driven by many deep impulses, and may not be convinced by greater knowledge and reduced uncertainty. But surely they make it easier to gain support for “good” policy and to discredit options which are “bad” policy? Freedom of choice is not reduced by better information, but there is a better basis for choice.

    I endorse your remarks that there is virtue in relationships built on trust rather than contracts. In the CAGW issue (I came here from Climate Etc), the lack of trust is a major factor in people’s attitudes and choices. However, in my Queensland career, I was an outsider because I gave advice based on up-to-date professional knowledge, good data and good analysis; I was not trusted because this threatened the cosy, self-serving status quo. No one could fault my analysis or conclusions, but they preferred to ignore them and to conceal things from me. The in-group trusted each other, but this led to bad policy. Trust divorced from strong ethics and public-spiritedness may be of little value to society.

    Michael Cunningham, aka Faustino aka Genghis Cunn

    August 24, 2011 at 12:53 pm

    • Michael,
      My apologies for this tardy response to your well-articulated comment. I agree that there is much to criticize about Lindblom’s ideas (and indeed they attracted widespread criticism, as I mentioned in my post). My own view is that the value of his ideas is primarily in raising important issues that at the time were being ignored by decision scientists. Of course consensus is not a defensible criterion for what constitutes “good” policy, and a largely unsolved problem is how to ensure that policy-makers don’t end up agreeing to policy choices that serve their interests to the detriment of public interests. Likewise, there are plenty of examples of social or policy change where incrementalism won’t do (my favourite example is changing the side of the road on which people drive cars).
      As for arguments about whether getting rid of ignorance and uncertainty is always a good idea, I’ve given examples in several other posts on this blog. These fall roughly into two classes: (1) It may be too expensive or even impossible to get rid of them; and/or (2) Getting rid of them may sacrifice other desirable things such as the basis for trust relations, privacy, or the capacity for innovation.

      michaelsmithson

      August 30, 2011 at 9:19 am

