ignorance and uncertainty

All about unknowns and uncertainties


Writing about “Agnotology, Ignorance and Uncertainty”


From time to time I receive invitations to contribute to various “encyclopedias.” Recent examples include an entry on “confidence intervals” in the International Encyclopedia of Statistical Science (Springer, 2010) and an entry on “uncertainty” in the Encyclopedia of Human Behavior (Elsevier, 1994, 2012). The latter link goes to the first (1994) edition; the second edition is due out in 2012. I’ve duly updated and revised my 1994 entry for the 2012 edition.

Having been raised by a librarian (my mother worked in the Seattle Public Library for 23 years), I’m a believer in the value of good reference works. So, generally I’m willing to accept invitations to contribute to them. These days there is a niche market even for non-digital works of this kind, and of course the net has led to numerous hybrid versions.

Despite the fact that such invitations are regarded as markers of professional esteem, they don’t count for much in the university system where I work because they aren’t original research publications. Same goes for textbooks. Thus, for my younger academic colleagues, writing encyclopedia entries or, worse still, writing textbooks actually can harm their careers. They understandably avoid doing so, which leaves it to older academics like me.

Some of these encyclopedias have interesting moments on the world stage. The International Encyclopedia of Statistical Science has been said to have set a record for the number of countries involved (105, via the 619 contributing authors). Its editors were nominated for the 2011 Nobel Peace Prize, apparently the first time any statisticians had received this honor. Meanwhile, V.S. Ramachandran, editor of the Encyclopedia of Human Behavior, was selected by Time Magazine as one of the world’s most influential people of 2011.

However, I digress. The Sage Encyclopedia of Philosophy and the Social Sciences is an intriguing proposal for a reference work that bridges these two intellectual cultures. I regard this aim as laudable, and I’m fortunate insofar as the areas where I work have a tradition of dialogs linking philosophers and social scientists. So, I was delighted to be asked to provide an entry on “agnotology, ignorance and uncertainty”. There is, however, a bit of a catch.

The guidelines for contributors state that “Entries should be written at a level appropriate for students who do not have an extensive background either in philosophy or the social sciences and for academics from other disciplines… it is essential that a reader versed in philosophy only or mostly, or alternatively, in social sciences, should gain by reading entries that aim at expanding their knowledge of concepts and theories as these have developed in the complementary area.” All of this is supposed to be achieved for a treatment of “agnotology, ignorance and uncertainty” in just 1,000 words, with a short list of “further readings” at the end. All of my posts in this blog thus far exceed 1,000 words (gulp). Can I be sufficiently concise without butchering or omitting crucial content?

Here’s my first draft (word count: 1,018). See what you think.

AGNOTOLOGY, IGNORANCE AND UNCERTAINTY

“Agnotology” is the study of ignorance (from the Greek “agnosis”). “Ignorance,” “uncertainty,” and related terms refer variously to the absence of knowledge, doubt, and false belief. This topic has a long history in Western philosophy, rooted in the Socratic tradition. It has a considerably shorter and, until recently, sporadic treatment in the human sciences. This entry focuses on relatively recent developments within and exchanges between both domains.

A key starting-point is that anyone attributing ignorance cannot avoid making claims to know something about who is ignorant of what: A is ignorant from B’s viewpoint if A fails to agree with or show awareness of ideas which B defines as actually or potentially valid. A and B can be identical, so that A self-attributes ignorance. Numerous scholars thereby have noted the distinction between conscious ignorance (known unknowns, learned ignorance) and meta-ignorance (unknown unknowns, ignorance squared).

The topic has been beset with terminological difficulties, due to the scarcity and negative cast of terms referring to unknowns. Several scholars have constructed typologies of unknowns, in attempts to make explicit their most important properties. Smithson’s book, Ignorance and Uncertainty: Emerging Paradigms, pointed out the distinction between being ignorant of something and ignoring something, the latter being akin to treating something as irrelevant or taboo. Knorr-Cetina coined the term “negative knowledge” to describe knowledge about the limits of the knowable. Various authors have tried to distinguish reducible from irreducible unknowns.

Two fundamental concerns have been at the forefront of philosophical and social scientific approaches to unknowns. The first of these is judgment, learning and decision making in the absence of complete information. Prescriptive frameworks advise how this ought to be done, and descriptive frameworks describe how humans (or other species) do so. A dominant prescriptive framework since the second half of the 20th century is subjective expected utility theory (SEU), whose central tenet is that decisional outcomes are to be evaluated by their expected utility, i.e., the product of their probability and their utility (e.g., monetary value, although utility may be based on subjective appraisals). According to SEU, a rational decision maker chooses the option that maximizes her/his expected utility. Several descriptive theories in psychology and behavioral economics (e.g., Prospect Theory and Rank-Dependent Expected Utility Theory) have amended SEU to render it more descriptively accurate while retaining some of its “rational” properties.

The second concern is the nature and genesis of unknowns. While many scholars have treated unknowns as arising from limits to human experience and cognitive capacity, increasing attention has been paid recently to the thesis that unknowns are socially constructed, many of them intentionally so. Smithson’s 1989 book was among the earliest to take up this thesis; related work includes Robert Proctor’s 1995 Cancer Wars and Ulrich Beck’s 1992 Risk Society. Early in the 21st century the thesis became more mainstream. Indeed, the 2008 edited volume bearing “agnotology” in its title focuses on how culture, politics, and social dynamics shape what people do not know.

Philosophers and social scientists alike have debated whether there are different kinds of unknowns. This issue is important because if there is only one kind then only one prescriptive decisional framework is necessary, and it may also be the case that humans have evolved one dominant way of making decisions with unknowns. On the other hand, different kinds of unknowns may require distinct methods for dealing with them.

In philosophy and mathematics the dominant formal framework for dealing with unknowns has been one or another theory of probability. However, Max Black’s ground-breaking 1937 paper proposed that vagueness and ambiguity are distinguishable from each other, from probability, and also from what he called “generality.” The 1960s and ’70s saw a proliferation of mathematical and philosophical frameworks purporting to encompass non-probabilistic unknowns, such as fuzzy set theory, rough sets, fuzzy logic, belief functions, and imprecise probabilities. Debates continue to this day over whether any of these alternatives are necessary, whether all unknowns can be reduced to some form of probability, and whether there are rational accounts of how to deal with non-probabilistic unknowns. The chief contenders currently include generalized probability frameworks (including imprecise probabilities, credal sets, and belief functions), robust Bayesian techniques, and hybrid fuzzy logic techniques.

In the social sciences, during the early 1920s Keynes distinguished between evidentiary “strength” and “weight,” while Knight similarly separated “risk” (probabilities are known precisely) from “uncertainty” (probabilities are not known). Ellsberg’s classic 1961 experiments demonstrated that people’s choices can be influenced by how imprecisely probabilities are known (i.e., “ambiguity”), and his results have been replicated and extended by numerous studies. Smithson’s 1989 book proposed a taxonomy of unknowns, and his 1999 experiments showed that choices also are influenced by uncertainty arising from conflict (disagreeing evidence from equally credible sources); those results also have been replicated.

More recent empirical research on how humans process unknowns has utilized brain imaging methods. Several studies have suggested that Knightian uncertainty (ambiguity) and risk differentially activate the ventral systems that evaluate potential rewards (the so-called “reward center”) and the prefrontal and parietal regions, with the latter two becoming more active under ambiguity. Other kinds of unknowns have yet to be widely studied in this fashion but research on them is emerging. Nevertheless, the evidence thus far suggests that the human brain treats unknowns as if there are different kinds.

Finally, there are continuing debates regarding whether different kinds of unknowns should be incorporated in prescriptive decision making frameworks and, if so, how a rational agent should deal with them. There are several decisional frameworks incorporating ambiguity or imprecision, some of which date back to the mid-20th century, and recently at least one incorporating conflict as well. The most common recommendation for decision making under ambiguity amounts to a type of worst-case analysis. For instance, given a lower and upper estimate of the probability of event E, the usual advice is to use the lower probability for evaluating bets on E occurring but to use the upper probability for bets against E. However, the general issue of what constitutes rational behavior under non-probabilistic uncertainties such as ambiguity, fuzziness or conflict remains unresolved.

Further Readings

Bammer, G. and Smithson, M. (Eds.), (2008). Uncertainty and Risk: Multidisciplinary Perspectives. London: Earthscan.

Beck, U. (1999). World Risk Society. Oxford: Polity Press.

Black, M. (1937). Vagueness: An exercise in logical analysis. Philosophy of Science, 4, 427-455.

Gärdenfors, P. and Sahlin, N.-E. (Eds.), (1988). Decision, Probability, and Utility: Selected Readings. Cambridge, UK: Cambridge University Press.

Proctor, R. and Schiebinger, L. (Eds.), (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer-Verlag.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.

Are There Different Kinds of Uncertainties? Probability, Ambiguity and Conflict


The domain where I work is a crossroads connecting cognitive and social psychology, behavioral economics, management science, some aspects of applied mathematics, and analytical philosophy. Some of the traffic there carries interesting debates about the nature of uncertainty. Is it all one thing? For instance, can every version of uncertainty be reduced to some form of probability? Or are there different kinds? Was Keynes right to distinguish between “strength” and “weight” of evidence?

Why is this question important? Making decisions under uncertainty in a partially learnable world is one of the most important adaptive challenges facing humans and other species. If there are distinct (and consequential) kinds, then we may need distinct ways of coping with them. Each kind also may have its own special uses. Some kinds may be specific to an historical epoch or to a culture.

On the other hand, if there really is only one kind of uncertainty then a rational agent should have only one (consistent) method of dealing with it. Moreover, such an agent should not be distracted or influenced by any other “uncertainty” that is inconsequential.

A particular one-kind view dominated accounts of rationality in economics, psychology and related fields in the decades following World War II. The rational agent was a Bayesian probabilist: a decision-maker whose best option is whichever one yields the greatest expected return. Would you rather be given $1,000,000 right now for sure, or offered a fair coin-toss in which Heads nets you $2,400,000 and Tails leaves you nothing? The rational agent would go for the coin-toss because its expected return is 1/2 times $2,400,000, i.e., $1,200,000, which exceeds $1,000,000. A host of experimental evidence says that most of us would choose the $1M instead.
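In code, the expected-return criterion boils down to something like this (a toy sketch in Python; the only inputs are the sure thing and the coin-toss gamble just described):

    # Toy sketch of the expected-return criterion, using the two options above.
    options = {
        "sure thing": [(1.0, 1_000_000)],            # (probability, payoff) pairs
        "coin toss":  [(0.5, 2_400_000), (0.5, 0)],
    }

    def expected_return(outcomes):
        return sum(p * payoff for p, payoff in outcomes)

    for name, outcomes in options.items():
        print(name, expected_return(outcomes))        # 1,000,000 vs 1,200,000

    # The Bayesian-rational agent maximizes expected return, so it takes the coin toss.
    best = max(options, key=lambda name: expected_return(options[name]))
    print("expected-return choice:", best)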

One of the Bayesian rationalist’s earliest serious challenges came from a 1961 paper by Daniel Ellsberg (Yes, that Daniel Ellsberg). Ellsberg set up some ingenious experiments demonstrating that people are influenced by another kind of uncertainty besides probability: Ambiguity. Ambiguous probabilities are imprecise (e.g., a probability we can only say is somewhere between 1/3 and 2/3). Although a few precursors such as Frank Knight (1921) had explored some implications for decision makers when probabilities aren’t precisely known, Ellsberg brought this matter to the attention of behavioral economists and cognitive psychologists. His goal was to determine whether the distinction between ambiguous and precise probabilities has any “behavioral significance.”

Here’s an example of what he meant by that phrase. Suppose you have to choose between drawing one ball from one of two urns, A or B. Each urn contains 90 balls, and you win a $10 prize if the ball you draw is either Red or Yellow. In Urn A, 30 of the balls are Red and the remaining 60 are either Black or Yellow, but the number of Yellow and Black balls has been randomly selected by a computer program and we don’t know what it is. Urn B contains 30 Red, 30 Yellow, and 30 Black balls. If you prefer Urn B, you’re manifesting an aversion to not knowing precisely how many Yellow balls there are in Urn A, i.e., ambiguity aversion. In experiments of this kind, most people choose Urn B.

But now consider a choice between drawing a ball from one of two bags, each containing a thousand balls with a number on each ball, and a prize of $1000 if your ball has the number 500 on it. In Bag A, the balls were numbered consecutively from 1 to 1000, so we know there is exactly one ball with the number 500 on it. Bag B, however, contains a thousand balls with numbers randomly chosen from 1 to 1000. If you prefer Bag B, then you may have reasoned that there is a chance of more than one ball in there with the number 500. If so, then you are demonstrating ambiguity seeking. Indeed, most people choose Bag B.

Now, Bayesian rational agents would not have a preference in either of the scenarios just described. They would find the expected return in Urns A and B to be identical, and likewise with Bags A and B. So, our responses to these scenarios indicate that we are reacting to the ambiguity of probabilities as if ambiguity is a consequential kind of uncertainty distinct from probability.
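Here is a quick check of that indifference claim (a toy calculation; it assumes a uniform prior over the unknown number of Yellow balls in Urn A and independent, uniform numbering of the balls in Bag B):

    from fractions import Fraction

    # Urn scenario: win if the drawn ball is Red or Yellow.
    p_urn_B = Fraction(30 + 30, 90)                  # 30 Red + 30 Yellow out of 90 = 2/3

    # Urn A: 30 Red; the Yellow count is unknown, here assumed uniform on 0..60.
    expected_yellow = Fraction(sum(range(61)), 61)   # = 30
    p_urn_A = (30 + expected_yellow) / 90            # also 2/3

    # Bag scenario: win if the drawn ball is numbered 500.
    p_bag_A = Fraction(1, 1000)                      # exactly one ball numbered 500
    # Bag B: each ball's number is independently uniform on 1..1000, so by symmetry
    # the ball you draw shows 500 with probability 1/1000, even though the count of
    # 500-balls varies from bag to bag.
    p_bag_B = Fraction(1, 1000)

    print(p_urn_A == p_urn_B, p_bag_A == p_bag_B)    # True True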

Ellsberg’s most persuasive experiment (the one many writers describe in connection with “Ellsberg’s Paradox”) is as follows. Suppose we have an urn with 90 balls, of which 30 are Red and 60 are either Black or Yellow (but the proportions of each are unknown). If asked to choose between gambles A and B as shown in the upper part of the table below (i.e., betting on Red versus betting on Black), most people prefer A.

 

                 30 balls    <---- 60 balls ---->
    Gamble       Red         Black        Yellow
    A            $100        $0           $0
    B            $0          $100         $0

                 30 balls    <---- 60 balls ---->
    Gamble       Red         Black        Yellow
    C            $100        $0           $100
    D            $0          $100         $100

However, when asked to choose between gambles C and D, most people prefer D. People preferring A to B and D to C are violating a principle of rationality (often called the “Sure-Thing Principle”) because the (C,D) pair simply adds $100 for drawing a Yellow ball to the (A,B) pair. If rational agents prefer A to B, they should also prefer C to D. Ellsberg demonstrated that ambiguity influences decisions in a way that is incompatible with standard versions of rationality.
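A short loop makes the point concrete: whatever the unknown Black/Yellow composition, an expected-value calculation never ranks A above B while also ranking D above C (a toy check using the payoffs in the tables above):

    # For every possible composition of the 60 Black-or-Yellow balls, compare the
    # expected values of the four gambles in the tables above ($100 or $0 payoffs).
    for black in range(61):
        yellow = 60 - black
        ev_A = 100 * 30 / 90                 # bet on Red
        ev_B = 100 * black / 90              # bet on Black
        ev_C = 100 * (30 + yellow) / 90      # bet on Red or Yellow
        ev_D = 100 * (black + yellow) / 90   # bet on Black or Yellow (= 200/3 always)
        # A beats B exactly when C beats D, so "A over B but D over C" has no
        # expected-utility rationalization, whatever the unknown composition.
        assert (ev_A > ev_B) == (ev_C > ev_D)
    print("No composition ranks A over B and D over C.")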

But is it irrational to be influenced by ambiguity? It isn’t hard to find arguments for why a “reasonable” (if not rigorously rational) person would regard ambiguity as important. Greater ambiguity could imply greater variability in outcomes. Variability can have a downside. I tell students in my introductory statistics course to think about someone who can’t swim considering whether to wade across a river. All they know is that the average depth of the river is 2 feet. If the river’s depth doesn’t vary much then all is well, but great variability could be fatal. Likewise, financial advisors routinely ask clients to consider how comfortable they are with “volatile” versus “conservative” investments. As many an investor has found out the hard way, high average returns in the long run are no guarantee against ruin in the short term.

So, a prudent agent could regard ambiguity as a tell-tale marker of high variability and decide accordingly. It all comes down to whether the agent restricts their measure of good decisions to expected return (i.e., the mean) or admits a second criterion (variability) into the mix. Several models of risk behavior in humans and other animals incorporate variability. A well-known animal foraging model, the energy budget rule, predicts that an animal will prefer foraging options with smaller variance when the expected energy intake exceeds its requirements. However, it will prefer greater variance when the expected energy intake is below survival level because greater variability conveys a greater chance of obtaining the intake needed for survival.
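The logic of the energy budget rule is easy to illustrate with a toy calculation (the numbers are invented, and the normal approximation is mine, not a claim about any particular foraging study):

    from math import erf, sqrt

    def p_meets_requirement(mean, sd, requirement):
        """P(intake >= requirement), treating intake as roughly normal."""
        z = (requirement - mean) / sd
        return 1 - 0.5 * (1 + erf(z / sqrt(2)))

    mean_intake = 50.0            # both foraging options offer the same expected intake
    low_sd, high_sd = 2.0, 10.0   # but they differ in variability

    for requirement in (40.0, 60.0):   # a requirement below, then above, the mean
        p_low = p_meets_requirement(mean_intake, low_sd, requirement)
        p_high = p_meets_requirement(mean_intake, high_sd, requirement)
        better = "low-variance" if p_low > p_high else "high-variance"
        print(f"requirement {requirement}: the {better} option gives better survival odds")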

There are some fascinating counter-arguments against all this, and counter-counter-arguments too. I’ll revisit these in a future post.

I was intrigued by Ellsberg’s paper and the literature following on from it, and I’d also read William Empson’s classic 1930 book on seven types of ambiguity. It occurred to me that some forms of ambiguity are analogous to internal conflict. We speak of being “in two minds” when we’re undecided about something, and we often simulate arguments back and forth in our own heads. So perhaps ambiguity isn’t just a marker for variability; it could indicate conflict as well. I decided to run some experiments to see whether people respond to conflicting evidence as they do to ambiguous evidence, and also to pit conflict directly against ambiguity.

Imagine that you’re serving on a jury, and an important aspect of the prosecution’s case turns on the color of the vehicle seen leaving the scene of a nighttime crime. Here are two possible scenarios involving eyewitness testimony on this matter:

A) One eyewitness says the color was blue, and another says it was green.

B) Both eyewitnesses say the color was either blue or green.

Which testimony would you rather receive? In which do the eyewitnesses sound more credible? If you’re like most people, you’d choose (B) for both questions, despite the fact that from a purely informational standpoint there is no difference between them.

Evidence from this and other choice experiments in my 1999 paper suggests that in general, we are “conflict averse” in two senses:

  1. Conflicting messages from two equally believable sources are dispreferred in general to two informatively equivalent, ambiguous, but agreeing messages from the same sources; and
  2. Sharply conflicting sources are perceived as less credible than ambiguous agreeing sources.

The first effect goes further than simply deciding what we’d rather hear. It turns out that we will choose options for ourselves and others on the basis of conflict aversion. I found this in experiments asking people to make decisions about eyewitness testimony, risk messages, dieting programs, and hiring and firing employees. Nearly a decade later, Laure Cabantous (then at the Université Toulouse) confirmed my results in a 2007 paper demonstrating that insurers would charge a higher premium to insure against mishaps whose risk information was conflictive than against those whose risk information was merely ambiguous.

Many of us have an intuition that if there’s equal evidence for and against a proposition, then both kinds of evidence should be accessed by all knowledgeable, impartial experts in that domain. If laboratory A produces 10 studies supporting an hypothesis (H, say) and laboratory B produces 10 studies disconfirming H, something seems fishy. The collection of 20 studies will seem more trustworthy if laboratories A and B each produce 5 studies supporting H and 5 disconfirming it, even though there are still 10 studies for and 10 studies against H. We expect experts to agree with one another, even if their overall message is vague. Being confronted with sharply disagreeing experts sets off alarms for most of us because it suggests that the experts may not be impartial.

The finding that people attribute greater credibility to agreeing-ambiguous sources than to conflicting-precise sources poses strategic difficulties for the sources themselves. Communicators embroiled in a controversy face a decision equivalent to a Prisoner’s Dilemma when choosing a communication strategy, knowing that other communicators hold views contrary to their own. If they “soften up” by conceding a point or two to the opposition they might win credibility points from their audience, but their opponents could play them for suckers by not conceding anything in return. On the other hand, if everyone concedes nothing then they all could lose credibility.
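To see the dilemma’s structure, here is a toy payoff matrix (the credibility payoffs are invented purely for illustration):

    # Toy credibility payoffs (invented numbers) for two communicators who can
    # either "concede" a point or "hold" firm. Row player's payoff listed first.
    payoff = {
        ("concede", "concede"): (3, 3),   # both soften up: shared credibility gain
        ("concede", "hold"):    (0, 5),   # the conceder is played for a sucker
        ("hold",    "concede"): (5, 0),
        ("hold",    "hold"):    (1, 1),   # nobody concedes: both lose credibility
    }

    # Whatever the opponent does, holding firm pays each player more...
    for opp in ("concede", "hold"):
        assert payoff[("hold", opp)][0] > payoff[("concede", opp)][0]
    # ...yet mutual holding leaves both worse off than mutual concession:
    assert payoff[("hold", "hold")][0] < payoff[("concede", "concede")][0]
    print("Holding firm dominates, but mutual firmness is collectively worse.")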

None of this is taken into account by the one-kind view of uncertainty. A probability “somewhere between 0 and 1” is treated identically to a probability of “exactly 1/2,” and sharply disagreeing experts are regarded as the same as ambiguous agreeing experts. But you and I know differently.

Written by michaelsmithson

November 3, 2010 at 10:52 am
