ignorance and uncertainty

All about unknowns and uncertainties

Communicating about Uncertainty in Climate Change, Part II


In my previous post I attempted to provide an overview of the IPCC 2007 report’s approach to communicating about uncertainties regarding climate change and its impacts. This time I want to focus on how the report dealt with probabilistic uncertainty, the kind of uncertainty the report treats most systematically. I mentioned in my previous post that Budescu et al.’s (2009) empirical investigation of how laypeople interpret verbal probability expressions (PEs, e.g., “very likely”) in the IPCC report revealed several problematic aspects, and a paper I have co-authored with Budescu’s team (Smithson et al., 2011) yielded additional insights.

The approach adopted by the IPCC is one that has been used in other contexts, namely identifying probability intervals with verbal PEs. Their guidelines are as follows:
Virtually certain >99%; extremely likely >95%; very likely >90%; likely >66%; more likely than not >50%; about as likely as not 33% to 66%; unlikely <33%; very unlikely <10%; extremely unlikely <5%; exceptionally unlikely <1%.

One unusual aspect of these guidelines is their overlapping intervals. For instance, “likely” takes the interval [.66,1] and thus contains the interval [.90,1] for “very likely,” and so on. The only interval that neither contains nor is contained in another is “about as likely as not.” Other interval-to-PE guidelines I am aware of use non-overlapping intervals. An early example is Sherman Kent’s attempt to standardize the meanings of verbal PEs in the American intelligence community.
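To make the overlap concrete, here is a minimal sketch in Python. The dictionary and function names are my own illustrative choices (nothing here comes from the IPCC or from Budescu et al.), and the interval endpoints are treated as inclusive for simplicity even though the guidelines use strict inequalities.

```python
# Illustrative encoding of the IPCC AR4 guideline intervals as
# (lower, upper) bounds on the probability of the event.
IPCC_PE_INTERVALS = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "more likely than not":   (0.50, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def matching_pes(p):
    """Return every PE whose guideline interval contains probability p."""
    return [pe for pe, (lo, hi) in IPCC_PE_INTERVALS.items() if lo <= p <= hi]

# Because the intervals overlap, a single probability satisfies several PEs:
print(matching_pes(0.95))
# ['extremely likely', 'very likely', 'likely', 'more likely than not']
```

The ambiguity is built into the scheme: a probability of .95 is simultaneously “extremely likely,” “very likely,” “likely,” and “more likely than not.”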

Attempts to translate verbal PEs into numbers have a long and checkered history. Since the earliest days of probability theory, the legal profession has steadfastly refused to quantify its burdens of proof (“balance of probabilities” or “reasonable doubt”) despite the fact that they seem to refer explicitly to probabilities, or at least degrees of belief. Weather forecasters have debated the pros and cons of verbal versus numerical PEs for decades, with mixed results. A National Weather Service report on a 1997 survey of Juneau, Alaska residents found that although the rank-ordering of the mean numerical probabilities residents assigned to verbal PEs agreed reasonably well with those assumed by the organization, the residents’ probabilities tended to be less extreme than the organization’s assignments. For instance, residents’ mean for “likely” was 62.5%, whereas the organization’s assignment for this PE was 80-100%.

And thus we see a problem that has long been noted regarding individual differences in the interpretation of PEs but is largely ignored when it comes to organizations. Since at least the 1960s, empirical studies have demonstrated that people vary widely in the numerical probabilities they associate with a verbal PE such as “likely.” It was this difficulty that doomed Sherman Kent’s attempt at standardization for intelligence analysts. Here we have the NWS associating “likely” with 80-100% whereas the IPCC assigns it 66-100%. A failure of organizations and agencies to agree on number-to-PE translations leaves the public with an impossible brief. I’m reminded of the introduction of the now widely-used cyclone (hurricane) category 1-5 scheme (higher numerals meaning more dangerous storms) at a time when zoning for cyclone danger where I was living also had a 1-5 numbering system that went in the opposite direction (higher numerals indicating safer zones).

Another interesting aspect is the frequency of the PEs in the report itself. There are a total of 63 PEs therein. “Likely” occurs 36 times (more than half), and “very likely” 17 times. The remaining 10 occurrences are “very unlikely” (5 times), “virtually certain” (twice), “more likely than not” (twice), and “extremely unlikely” (once). There is a clear bias towards fairly extreme positively-worded PEs, perhaps because much of the IPCC report’s content is oriented towards presenting what is known and largely agreed on about climate change by climate scientists. As we shall see, the bias towards positively-worded PEs (e.g., “likely” rather than “unlikely”) may have served the IPCC well, whether intentionally or not.

In Budescu et al.’s experiment, subjects were assigned to one of four conditions. Subjects in the control group were not given any guidelines for interpreting the PEs, as would be the case for readers unaware of the report’s guidelines. Subjects in a “translation” condition had access to the IPCC guidelines at any time during the experiment. Finally, subjects in two “verbal-numerical translation” conditions saw a range of numerical values next to each PE in each sentence. One verbal-numerical group was shown the IPCC intervals and the other was shown narrower intervals (with widths of 10% and 5%).

Subjects were asked to provide lower, upper and “best” estimates of the probabilities they associated with each PE. As might be expected, these figures were most likely to be consistent with the IPCC guidelines in the verbal-numerical translation conditions, less likely in the translation condition, and least likely in the control condition. They were also less likely to be IPCC-consistent the more extreme the PE was (e.g., less consistent for “very likely” than for “likely”). Consistency rates were generally low, and for the extreme PEs the deviations from the IPCC guidelines were regressive (i.e., subjects’ estimates were not extreme enough, thereby echoing the 1997 National Weather Service report findings).
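As a rough illustration of what “IPCC-consistent” means here, one plausible scoring rule (this is my own sketch, not Budescu et al.’s actual scoring procedure) counts an elicited triple as consistent when the lower, “best,” and upper values all fall inside the guideline interval for the PE in question:

```python
def is_consistent(interval, lower, best, upper):
    """Illustrative check: a subject's (lower, best, upper) triple counts as
    consistent when all three values lie inside the guideline interval."""
    lo, hi = interval
    return all(lo <= x <= hi for x in (lower, best, upper))

# "very likely" carries the guideline interval [0.90, 1.00]; a regressive
# interpretation such as (0.60, 0.75, 0.85) is scored inconsistent.
print(is_consistent((0.90, 1.00), 0.60, 0.75, 0.85))  # False
print(is_consistent((0.90, 1.00), 0.90, 0.95, 1.00))  # True
```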

One of the ironic claims by the Budescu group is that the IPCC 2007 report’s verbal probability expressions may convey excessive levels of imprecision and that some probabilities may be interpreted as less extreme than the report authors intended. As I remarked in my earlier post, intervals do not distinguish between consensual imprecision and sharp disagreement. In the IPCC framework, the statement “The probability of event X is between .1 and .9” could mean “All experts regard this probability as being anywhere between .1 and .9” or “Some experts regard the probability as .1 and others as .9.” Budescu et al. realize this, but they also have this to say:

“However, we suspect that the variability in the interpretation of the forecasts exceeds the level of disagreement among the authors in many cases. Consider, for example, the statement that ‘‘average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years’’ (IPCC, 2007, p. 8). It is hard to believe that the authors had in mind probabilities lower than 70%, yet this is how 25% of our subjects interpreted the term very likely!” (p. 8).

One thing I’d noticed about the Budescu article was that their graphs suggested the variability in subjects’ estimates for negatively-worded PEs (e.g., “unlikely”) seemed greater than for positively-worded PEs (e.g., “likely”). That is, subjects seemed to have less of a consensus about the meaning of the negatively-worded PEs. On reanalyzing their data, I focused on the six sentences that used the PE “very likely” or “very unlikely.” My statistical analyses of subjects’ lower, “best” and upper probability estimates revealed a less regressive mean and less dispersion for positive than for negative wording in all three estimates. Negative wording therefore resulted in more regressive estimates and less consensus regardless of experimental condition. You can see this in the box-plots below.

[Figure: box-plots of subjects’ lower, “best,” and upper probability estimates for positively-worded and (reverse-scored) negatively-worded PEs, by experimental condition.]

In this graph, the negative PEs’ estimates have been reverse-scored so that we can compare them directly with the positive PEs’ estimates. The “boxes” (the blue rectangles) contain the middle 50% of subjects’ estimates and these boxes are consistently longer for the negative PEs, regardless of experimental condition. The medians (i.e., the score below which 50% of the estimates fall) are the black dots, and these are fairly similar for positive and (reverse-scored) negative PEs. However, due to the negative PE boxes’ greater lengths, the mean estimates for the negative PEs end up being pulled further away from their positive PE counterparts.
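For readers who want to make this kind of comparison with their own data, here is a minimal sketch using pandas and matplotlib. The numbers are invented purely for illustration; they are not the Budescu et al. data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented "best" estimates for two PEs, NOT the Budescu et al. data.
df = pd.DataFrame({
    "pe":       ["very likely"] * 5 + ["very unlikely"] * 5,
    "estimate": [0.92, 0.85, 0.95, 0.70, 0.90, 0.05, 0.25, 0.10, 0.35, 0.08],
})

# Reverse-score the negatively-worded PE (p becomes 1 - p) so that it can be
# compared directly with its positively-worded counterpart.
df["rescored"] = df["estimate"].where(df["pe"] != "very unlikely",
                                      1 - df["estimate"])

df.boxplot(column="rescored", by="pe")
plt.suptitle("")  # drop pandas' automatic grouping title
plt.title("Best estimates by probability expression")
plt.ylabel("reverse-scored probability estimate")
plt.show()
```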

There’s another effect that we confirmed statistically but that is also clear from the box-plots. The difference between the lower and upper estimates is, on average, greater for the negatively-worded PEs. One implication of this finding is that the impact of negative wording is greatest on the lower estimates, and these are the subjects’ translations of the very thresholds specified in the IPCC guidelines.

If anything, these results suggest the picture is even worse than Budescu et al.’s assessment. They noted that 25% of the subjects interpreted “very likely” as having a “best” probability below 70%. The box-plots show that in three of the four experimental conditions at least 25% of the subjects provided a lower probability of less than 50% for “very likely.” If we turn to “very unlikely” the picture is worse still. In three of the four experimental conditions about 25% of the subjects returned an upper probability for “very unlikely” greater than 80%!

So, it seems that negatively-worded PEs are best avoided where possible. This recommendation sounds simple, but it could open a can of syntactical worms. Consider the statement “It is very unlikely that the MOC will undergo a large abrupt transition during the 21st century” (MOC = meridional overturning circulation). Would it be accurate to equate it with “It is very likely that the MOC will not undergo a large abrupt transition during the 21st century”? Perhaps not, despite the IPCC guidelines’ insistence otherwise. Moreover, turning the PE positive entails turning the event into a negative. In principle, we could have a mixture of negatively- and positively-worded PEs and events (“It is (un)likely that A will (not) occur”). It is unclear at this point whether negative PEs or negative events are the more confusing, but inspection of the Budescu et al. data suggested that double negatives were decidedly more confusing than any other combination.
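To spell out the combinatorics, the sketch below enumerates the four wording combinations and the interval each implies for P(A occurs), assuming the complementation rule the guidelines implicitly rely on (a bound on P(not-A) becomes a 1 - P bound on P(A)). The conversion rule is the assumption under discussion, not a claim that readers actually treat the wordings as equivalent.

```python
# Guideline intervals for the two PEs in the example (endpoints inclusive
# for simplicity).
PE = {"very likely": (0.90, 1.00), "very unlikely": (0.00, 0.10)}

def implied_interval(pe, event_negated):
    """Interval implied for P(A occurs), assuming complementation:
    bounds on P(not-A) convert to (1 - upper, 1 - lower) bounds on P(A)."""
    lo, hi = PE[pe]
    return (round(1 - hi, 2), round(1 - lo, 2)) if event_negated else (lo, hi)

for pe in ("very likely", "very unlikely"):
    for negated in (False, True):
        event = "A will not occur" if negated else "A will occur"
        print(f"It is {pe} that {event}:  P(A occurs) in {implied_interval(pe, negated)}")

# Under complementation, "very unlikely that A occurs" and "very likely that
# A does not occur" pin down the same interval for P(A occurs), even though
# subjects do not appear to treat the two wordings as equivalent.
```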

As I write this, David Budescu is spearheading a multi-national study of laypeople’s interpretations of the IPCC probability expressions (I’ll be coordinating the Australian component). We’ll be able to compare these interpretations across languages and cultures. More anon!

References

Budescu, D.V., Broomell, S. and Por, H.-H. (2009) Improving communication of uncertainty in the reports of the Intergovernmental Panel on Climate Change. Psychological Science, 20, 299–308.

Intergovernmental Panel on Climate Change (2007). Summary for policymakers: Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Retrieved May 2010 from http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf.

Smithson, M., Budescu, D.V., Broomell, S. and Por, H.-H. (2011) Never Say “Not”: Impact of Negative Wording in Probability Phrases on Imprecise Probability Judgments. Accepted for presentation at the Seventh International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 25-28 July 2011.


5 Responses


  1. Michael,

    I have just blogged on the UK approach at http://djmarsay.wordpress.com/2011/06/22/science-advice-and-the-management-of-risk/. On the surface, its concerns hardly overlap with yours. I think that we would all be much happier if we could express uncertainties about climate change as simple probabilities. But, taking your post at face value, I do have some more constructive points.

    Are the subjects reporting what they think the report authors intended, or what they make of the given estimates? If the latter then they may be simply discounting the reported uncertainties. This may be very sensible, even if you do not think that scientists often simplify or exaggerate. (My link above points to a discussion on this.)

    On a more technical issue, the report continues the common practice of confusing likelihood and probability. In this situation if I was given a likelihood description I would probably simply try to translate it to a number. But I suspect that the report is actually giving probability estimates, in which case I need to assess what the author’s priors were and substitute my own. Having to produce a single number is very limiting, but I would probably end up with a less extreme estimate.

    There is some evidence that people naturally think in terms of probability intervals. It might be interesting to ask people what range of probabilities they would consider consistent with the probability description. An alternative model would be that people think in terms of caveats. E.g. ‘If they say X and they are trustworthy then x else y.’ I know when I read reports I continually form judgments about the quality of the work, and interpret it accordingly.

    I guess that interpreting probability descriptions inevitably involves deeper uncertainties. The literature doesn’t seem very helpful on this. Or is this taking you off piste?

    djmarsay

    June 22, 2011 at 8:15 pm

    • Dave,
      Your comment raises several interesting points. To begin, I’ll try to clarify two matters. First, subjects were asked what probabilities they thought the authors of the report had in mind, not what they made of the estimates. Second, as I mentioned in the post they were asked to provide lower, “best,” and upper probabilities in answer to that question, so probability intervals were elicited from them. That’s why it was interesting that the positive versus negative wording had the largest impact on the upper probability estimates for the negatively-worded probability expressions.
      Of course the terms “likelihood” and “probability” have different technical meanings but in colloquial language they usually are synonyms. Most of the subjects treated their lower, “best” and upper estimates quite systematically. For instance, something I didn’t mention in my post was that subjects’ lower and upper probabilities usually obeyed the conjugacy relationship (i.e., lower P(A) = 1 – upper P(not-A)). Émile Borel, C.A.B. Smith, and Peter Walley would have been very pleased!
      One line of research I intend pursuing is investigating how people interpret the “confidence” and “agreement” expressions used in the IPCC reports. These also refer to uncertainties, and in the case of confidence, the IPCC guidelines associate verbal expressions with a 0-10 rating scale.
      Perhaps investigations along these lines will help get at some of the “deeper” uncertainties that are not captured well by probabilities. That said, I disagree somewhat with your statement that the literature (on interpretations of probability expressions?) isn’t helpful regarding other kinds of uncertainty. There is a sizable psychological literature regarding human interpretations of verbal and numerical probabilities, with interesting and applicable accounts of what words can do that numbers can’t and vice-versa. But of course I do agree with you that the uncertainties arising in complex domains such as our understandings of climate change cannot be reduced to probabilistic terms.

      michaelsmithson

      June 22, 2011 at 10:19 pm

  2. Michael, We seem to be in agreement on the big picture. I am somewhat concerned that following the crash of 2007/8 behavioural economics has come to the fore. Much of its literature is about ‘bias’, by which it means systematic differences between how subjects treat uncertainty and how the normative theory says they should. At the same time, at least in some parts, Keynes became popular. He has a theory of uncertainty according to which subjects should treat uncertainty differently from the classical normative theory. In cases where I’ve been able to get back to the raw experiments, it seems to me that human subjects tend to differ from the normative theory in a way that is consistent with Keynes. So I am unconvinced that people do naturally treat uncertainty as a number. And why should they? As you say, there is a large literature. Can you recommend anything that hasn’t been tainted by arguable assumptions about how uncertainty ‘should’ be treated, or at least makes any such assumptions explicit?

    By the way, I do think that behavioural economics has much to offer: but we do need to be clear about what we can properly draw from it.

    djmarsay

    June 23, 2011 at 11:32 pm

    • Dave, my apologies for this tardy response to your comment. The usual excuses for an academic (end of semester marking, admin pressures, etc.) apply. As I understand it, you’ve asked about literature bearing on the question of whether people think of uncertainty numerically or not. The literature I’m familiar with is primarily focused on verbal versus numerical representations of probabilities. This literature makes the following relevant points about verbal probability expressions (PEs):
      1. Recipients of PEs consistently interpret them as referring to less extreme probabilities than their communicators intend (e.g., Fillenbaum et al., 1991, Budescu et al. 2009).
      2. There is a “false consensus effect” arising from the use of PEs because of an implicit assumption that everyone interprets them similarly. The overwhelming evidence from numerous studies is that there is very large inter-personal variation in the interpretation of PEs (e.g., Johnson 1973, Reagan et al. 1989). Moreover, this variability holds even for experts acting within their domains of expertise (e.g., Kong et al. 1986).
      3. Most people prefer to communicate their views of uncertainty verbally but also prefer to receive such communications numerically, in good part because they feel the verbal mode is easier and the numerical mode is more precise (e.g., Brun & Teigen 1988, Wallsten et al. 1993).
      In sum, people are capable of thinking about probabilities both numerically and verbally, but the translations between the two modes tend to be idiosyncratic and people have asymmetric preferences for either mode depending on whether they’re communicating or receiving probabilistic information. Wallsten and Budescu (1995) is a good review article on this topic. Anyhow, I hope this summary and starting-list of references is useful.
      References
      Brun, W., & Teigen, K. H. (1988). Verbal probabilities: Ambiguous, context-dependent, or both? Organizational Behavior and Human Decision Processes, 41, 390–414.
      Budescu, D. V., Broomell, S. B., & Por, H. H. (2009). Improving communication of uncertainty in the reports of the Intergovernmental Panel on Climate Change. Psychological Science, 20, 299-308.
      Fillenbaum, S., Wallsten, T. S., Cohen, B. L., & Cox, J. A. (1991). Some effects of vocabulary and communication task on the understanding and use of vague probability expressions. American Journal of Psychology, 104, 35–60.
      Johnson, E. M. (1973). Numerical encoding of qualitative expressions of uncertainty (Technical paper No. 250). US Army Research Institute for the Behavioral & Social Sciences.
      Kong, A., Barnett, G. O., Mosteller, F., & Youtz, C. (1986). How medical professionals evaluate expressions of probability. The New England Journal of Medicine, 315, 740–744.
      Reagan, R., Mosteller, F., & Youtz, C. (1989). Quantitative meanings of verbal probability expressions. Journal of Applied Psychology, 74, 433–442.
      Wallsten, T. S., & Budescu, D. V. (1995). A review of human linguistic probability processing: General principles and empirical evidence. Knowledge Engineering Review, 10, 43-62.
      Wallsten, T. S., Budescu, D. V., Zwick, R., & Kemp, S. M. (1993). Preferences and reasons for communicating probabilistic information in numerical or verbal terms. Bulletin of the Psychonomic Society, 31, 135–138.

      michaelsmithson

      July 3, 2011 at 9:15 am

      • Thanks. Your (3) fits with my experience. I have previously supposed that those reporting uncertainty want to communicate it in full, whereas those being reported to only want the ‘rational’ part, to inform ‘rational’ decisions. I hope to follow up your references in a few weeks. Regards.

        djmarsay

        July 13, 2011 at 7:41 pm

