ignorance and uncertainty

All about unknowns and uncertainties

Writing about “Agnotology, Ignorance and Uncertainty”

From time to time I receive invitations to contribute to various “encyclopedias.” Recent examples include an entry on “confidence intervals” in the International Encyclopedia of Statistical Science (Springer, 2010) and an entry on “uncertainty” in the Encyclopedia of Human Behavior (Elsevier, 1994, 2012). The latter link goes to the first (1994) edition; the second edition is due out in 2012. I’ve duly updated and revised my 1994 entry for the 2012 edition.

Having been raised by a librarian (my mother worked in the Seattle Public Library for 23 years), I’m a believer in the value of good reference works. So, generally I’m willing to accept invitations to contribute to them. These days there is a niche market even for non-digital works of this kind, and of course the net has led to numerous hybrid versions.

Although such invitations are regarded as markers of professional esteem, they don’t count for much in the university system where I work because they aren’t original research publications. The same goes for textbooks. Thus, for my younger academic colleagues, writing encyclopedia entries or, worse still, textbooks can actually harm their careers. They understandably avoid doing so, which leaves it to older academics like me.

Some of these encyclopedias have interesting moments on the world stage. The International Encyclopedia of Statistical Science has been said to have set a record for the number of countries involved (105, via the 619 contributing authors). Its editors were nominated for the 2011 Nobel Peace Prize, apparently the first time any statisticians had received this honor. Meanwhile, V.S. Ramachandran, editor of the Encyclopedia of Human Behavior, was selected by Time Magazine as one of the world’s most influential people of 2011.

However, I digress. The Sage Encyclopedia of Philosophy and the Social Sciences is an intriguing proposal for a reference work that bridges these two intellectual cultures. I regard this aim as laudable, and I’m fortunate insofar as the areas where I work have a tradition of dialogs linking philosophers and social scientists. So, I was delighted to be asked to provide an entry on “agnotology, ignorance and uncertainty”. There is, however, a bit of a catch.

The guidelines for contributors state that “Entries should be written at a level appropriate for students who do not have an extensive background either in philosophy or the social sciences and for academics from other disciplines… it is essential that a reader versed in philosophy only or mostly, or alternatively, in social sciences, should gain by reading entries that aim at expanding their knowledge of concepts and theories as these have developed in the complementary area.” All of this is supposed to be achieved for a treatment of “agnotology, ignorance and uncertainty” in just 1,000 words, with a short list of “further readings” at the end. All of my posts in this blog thus far exceed 1,000 words (gulp). Can I be sufficiently concise without butchering or omitting crucial content?

Here’s my first draft (word count: 1,018). See what you think.

AGNOTOLOGY, IGNORANCE AND UNCERTAINTY

“Agnotology” is the study of ignorance (from the Greek “agnosis”). “Ignorance,” “uncertainty,” and related terms refer variously to the absence of knowledge, doubt, and false belief. This topic has a long history in Western philosophy, rooted in the Socratic tradition. It has a considerably shorter and, until recently, sporadic treatment in the human sciences. This entry focuses on relatively recent developments within and exchanges between both domains.

A key starting-point is that anyone attributing ignorance cannot avoid making claims to know something about who is ignorant of what: A is ignorant from B’s viewpoint if A fails to agree with or show awareness of ideas which B defines as actually or potentially valid. A and B can be identical, so that A self-attributes ignorance. Numerous scholars have accordingly noted the distinction between conscious ignorance (known unknowns, learned ignorance) and meta-ignorance (unknown unknowns, ignorance squared).

The topic has been beset with terminological difficulties, due to the scarcity and negative cast of terms referring to unknowns. Several scholars have constructed typologies of unknowns, in attempts to make explicit their most important properties. Smithson’s book, Ignorance and Uncertainty: Emerging Paradigms, pointed out the distinction between being ignorant of something and ignoring something, the latter being akin to treating something as irrelevant or taboo. Knorr-Cetina coined the term “negative knowledge” to describe knowledge about the limits of the knowable. Various authors have tried to distinguish reducible from irreducible unknowns.

Two fundamental concerns have been at the forefront of philosophical and social scientific approaches to unknowns. The first of these is judgment, learning and decision making in the absence of complete information. Prescriptive frameworks advise how this ought to be done, and descriptive frameworks describe how humans (or other species) do so. A dominant prescriptive framework since the second half of the 20th century is subjective expected utility theory (SEU), whose central tenet is that decisional outcomes are to be evaluated by their expected utility, i.e., the product of their probability and their utility (e.g., monetary value, although utility may be based on subjective appraisals). According to SEU, a rational decision maker chooses the option that maximizes her/his expected utility. Several descriptive theories in psychology and behavioral economics (e.g., Prospect Theory and Rank-Dependent Expected Utility Theory) have amended SEU to render it more descriptively accurate while retaining some of its “rational” properties.
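To make the SEU recipe concrete, here is a minimal Python sketch of expected utility maximization; the options, probabilities and payoffs are invented purely for illustration, with utility identified with monetary value for simplicity.

# Minimal sketch of subjective expected utility (SEU) maximization.
# The options and numbers are illustrative, not drawn from any real study.
options = {
    "sure_thing": [(1.0, 50)],             # a certain payoff of 50
    "gamble":     [(0.5, 120), (0.5, 0)],  # 50/50 chance of 120 or nothing
}

def expected_utility(outcomes):
    # Sum of probability * utility over an option's possible outcomes.
    return sum(p * u for p, u in outcomes)

# On the SEU account, a rational agent picks the option with the highest expected utility.
best = max(options, key=lambda name: expected_utility(options[name]))
print({name: expected_utility(o) for name, o in options.items()})  # {'sure_thing': 50.0, 'gamble': 60.0}
print("SEU choice:", best)                                         # gamble

Substituting a genuinely subjective utility function (for instance, one that is concave in money) changes the numbers but not the maximization logic.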

The second concern is the nature and genesis of unknowns. While many scholars have treated unknowns as arising from limits to human experience and cognitive capacity, increasing attention has been paid recently to the thesis that unknowns are socially constructed, many of them intentionally so. Smithson’s 1989 book was among the earliest to take up this thesis; related work includes Robert Proctor’s 1995 Cancer Wars and Ulrich Beck’s 1992 Risk Society. Early in the 21st century the thesis has become more mainstream. Indeed, the 2008 edited volume bearing “agnotology” in its title focuses on how culture, politics, and social dynamics shape what people do not know.

Philosophers and social scientists alike have debated whether there are different kinds of unknowns. This issue is important because if there is only one kind then only one prescriptive decisional framework is necessary and it also may be the case that humans have evolved one dominant way of making decisions with unknowns. On the other hand, different kinds of unknowns may require distinct methods for dealing with them.

In philosophy and mathematics the dominant formal framework for dealing with unknowns has been one or another theory of probability. However, Max Black’s ground-breaking 1937 paper proposed that vagueness and ambiguity are distinguishable from each other, from probability, and also from what he called “generality.” The 1960s and 70s saw a proliferation of mathematical and philosophical frameworks purporting to encompass non-probabilistic unknowns, such as fuzzy set theory, rough sets, fuzzy logic, belief functions, and imprecise probabilities. Debates continue to this day over whether any of these alternatives are necessary, whether all unknowns can be reduced to some form of probability, and whether there are rational accounts of how to deal with non-probabilistic unknowns. The chief contenders currently include generalized probability frameworks (including imprecise probabilities, credal sets, belief functions), robust Bayesian techniques, and hybrid fuzzy logic techniques.

In the social sciences, during the early 1920s Keynes distinguished between evidentiary “strength” and “weight,” while Knight similarly separated “risk” (probabilities are known precisely) from “uncertainty” (probabilities are not known). Ellsberg’s classic 1961 experiments demonstrated that people’s choices can be influenced by how imprecisely probabilities are known (i.e., “ambiguity”), and his results have been replicated and extended by numerous studies. Smithson’s 1989 book proposed a taxonomy of unknowns and his 1999 experiments showed that choices also are influenced by uncertainty arising from conflict (disagreeing evidence from equally credible sources); those results also have been replicated.

More recent empirical research on how humans process unknowns has utilized brain imaging methods. Several studies have suggested that Knightian uncertainty (ambiguity) and risk differentially activate the ventral systems that evaluate potential rewards (the so-called “reward center”) and the prefrontal and parietal regions, with the latter two becoming more active under ambiguity. Other kinds of unknowns have yet to be widely studied in this fashion but research on them is emerging. Nevertheless, the evidence thus far suggests that the human brain treats unknowns as if there are different kinds.

Finally, there are continuing debates regarding whether different kinds of unknowns should be incorporated in prescriptive decision making frameworks and, if so, how a rational agent should deal with them. There are several decisional frameworks incorporating ambiguity or imprecision, some of which date back to the mid-20th century, and recently at least one incorporating conflict as well. The most common recommendation for decision making under ambiguity amounts to a type of worst-case analysis. For instance, given a lower and upper estimate of the probability of event E, the usual advice is to use the lower probability for evaluating bets on E occurring but to use the upper probability for bets against E. However, the general issue of what constitutes rational behavior under non-probabilistic uncertainties such as ambiguity, fuzziness or conflict remains unresolved.
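The worst-case recommendation in the paragraph above can be written out in a few lines. In this sketch (my own illustration, not part of the entry’s sources) the probability of event E is known only to lie in an interval; a bet on E is valued with the lower bound and a bet against E with the upper bound.

# Worst-case evaluation of bets when the probability of event E is only known
# to lie in an interval [p_lower, p_upper]. Stakes and payoffs are illustrative.
def value_of_bet_on_E(p_lower, stake, payoff):
    # A bet on E occurring is evaluated with the LOWER probability of E.
    return p_lower * payoff - (1 - p_lower) * stake

def value_of_bet_against_E(p_upper, stake, payoff):
    # A bet against E is evaluated with the UPPER probability of E.
    return (1 - p_upper) * payoff - p_upper * stake

p_lower, p_upper = 0.3, 0.6  # imprecise estimate of the probability of E
print(value_of_bet_on_E(p_lower, stake=10, payoff=10))       # -4.0
print(value_of_bet_against_E(p_upper, stake=10, payoff=10))  # -2.0

With an interval this wide both worst-case values are negative, so an ambiguity-averse agent may decline both bets, whereas under a single point probability these two even-money bets could never both have strictly negative expected value.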

Further Readings

Bammer, G. and Smithson, M. (Eds.) (2008). Uncertainty and Risk: Multidisciplinary Perspectives. London: Earthscan.

Beck, U. (1999). World Risk Society. Oxford: Polity Press.

Black, M. (1937). Vagueness: An exercise in logical analysis. Philosophy of Science, 4, 427-455.

Gärdenfors, P. and Sahlin, N.-E. (Eds.) (1988). Decision, Probability, and Utility: Selected Readings. Cambridge, UK: Cambridge University Press.

Proctor, R. and Schiebinger, L. (Eds.) (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer-Verlag.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.

Not Knowing When or Where You’re At

My stepfather Bob, at 86 years of age, just underwent major surgery to remove a considerable section of his colon. He’d been under heavy sedation for several days, breathing through a respirator, with drip-feeds for hydration and sustenance. When he was gradually reawakened, he found himself in a room he’d never seen before, with no sense of what day or time it was and no memory of events between arriving at the hospital and waking up in the ward. He had to figure out where and when he was.

These are what philosophers are fond of calling “self-locating beliefs.” They say we learn two quite different kinds of things about the world: things about what goes on in this world, and things about where and when we are in it.

Bob has always had an excellent sense of direction. He’s one of those people who can point due North when he’s in a basement. I, on the other hand, have such a poor sense of direction that I’ve been known to blunder into a broom-closet when attempting to exit from a friend’s house. So I find the literature on self-locating beliefs personally relevant, and fascinating for the problems they pose for reasoning about uncertainty.

Adam Elga’s classic paper published in 2000 made the so-called “Sleeping Beauty” self-locating belief problem famous: “Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking.”

You have just woken up. What probability should you assign to the proposition that the coin landed Heads? Some people answer 1/2 because the coin is fair, your prior probability that it lands Heads should be 1/2, and the fact that you have just awakened adds no new information.

Others say the answer is 1/3. There are 3 possible awakenings, of which only 1 arises from the coin landing Heads. Therefore, given that you have ended up with one of these possible awakenings, the probability that it’s a “Heads” awakening is 1/3. Elga himself opted for this answer. However, the debates continued long after the publication of his paper with many ingenious arguments for both answers (and even a few others). Defenders of one position or the other became known as “halfers” and “thirders.”
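One way to see where the thirder’s count comes from is to simulate many runs of the experiment and compare two tallies: the fraction of runs in which the coin lands Heads, and the fraction of awakenings that follow a Heads toss. The sketch below merely formalizes that counting argument; it does not settle the dispute, since halfers deny that awakenings are the right things to count.

import random

# Simulate the Sleeping Beauty protocol: Heads -> one awakening, Tails -> two.
runs = 100_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0
for _ in range(runs):
    heads = random.random() < 0.5  # fair coin toss
    n_awakenings = 1 if heads else 2
    total_awakenings += n_awakenings
    if heads:
        heads_runs += 1
        heads_awakenings += n_awakenings

print(heads_runs / runs)                    # about 1/2: fraction of runs with Heads (the halfer's count)
print(heads_awakenings / total_awakenings)  # about 1/3: fraction of awakenings after Heads (the thirder's count)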

But suppose we accept Elga’s answer: what of it? It raises a problem for widely accepted ideas about how and why our beliefs should change. Consider the probability we’d assign to the coin landing Heads before the researchers imposed their experiment on us: a fair coin has a probability of 1/2 of landing Heads. But on awakening, Elga says, we should now believe that probability is 1/3, even though we haven’t received any new information about the coin or anything else relevant to the outcome of the coin toss. In standard accounts of conditional probability, we should alter a probability only on grounds of having acquired some new information, and all the information about the experiment was given to us before we were put to sleep!

Here’s another example, the Shangri La Journey problem from Frank Arntzenius (2003):

“There are two paths to Shangri La, the path by the Mountains, and the path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if Heads you go by the Mountains, if Tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will forever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But, just as soon as you enter Shangri La, your memory of this Beauteous Journey will be erased and be replaced by a memory of the Journey by the Mountains.”

Arntzenius takes this case to provide a counterexample to the standard account of how conditional probability works. As in the Sleeping Beauty problem, consider what the probability we’d assign to the coin landing Heads should be at different times. Our probability before the journey should be 1/2, since the only relevant information we have is that the coin is fair. Now, suppose the coin actually lands Heads. Our probability of Heads after we set out and see that we are traveling by the mountains should be 1, because we now know the outcome of the coin toss. But once we pass through the gates of Shangri La, Arntzenius argues, our probability should revert to 1/2: “for you will know that you would have had the memories that you have either way, and hence you know that the only relevant information that you have is that the coin is fair.”

Well, this goes against the standard Bayesian account of conditional probabilities. Letting H = Heads and M = memory of the Journey by the Mountains, according to Bayes’ formula we should update our belief that the coin landed Heads by computing
P(H|M) = P(M|H)P(H)/P(M),
where P(H) is the probability of Heads after we know the outcome of the coin toss. According to our setup, P(H) = P(M|H) = P(M) = 1. Therefore, P(H|M) = 1. Arntzenius declares that because our probability of Heads should revert to 1/2, something is wrong with Bayesian conditionalization.
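To spell out the arithmetic, here is a small sketch of that update (my own illustration). Once the traveler on the Mountains path is certain of Heads, and Mountain memories are guaranteed either way, conditionalizing on those memories cannot move the probability off 1, let alone back to 1/2.

# Bayes' rule: P(H|M) = P(M|H) * P(H) / P(M)
def conditionalize(prior_H, likelihood_M_given_H, prob_M):
    return likelihood_M_given_H * prior_H / prob_M

prior_H = 1.0       # P(H): the traveler already knows the coin landed Heads
p_M_given_H = 1.0   # P(M|H): Mountain memories are guaranteed given Heads
p_M = 1.0           # P(M): Mountain memories are guaranteed either way (memory replacement)

print(conditionalize(prior_H, p_M_given_H, p_M))  # 1.0, not the 1/2 that Arntzenius argues for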

The difficulty arises because the Bayesian updating rule—conditionalization—requires certainties to be permanent: once you’re certain of something, you should always be certain of it. But when we consider self-locating beliefs, there seem to be cases where this requirement clearly is false. For example, one can reasonably change from being certain that it’s one time to being certain that it’s another.

This is the kind of process my stepfather went through as he reoriented himself to what his family and medical caretakers assured him was the here-and-now. It was rather jarring for him at first, but fortunately he isn’t a Bayesian. He could sneak peeks at newspapers and clocks to see whether those concurred with what he was being told, and like most of us he could be comfortable with the notion of shifting from being certain it was, say, March 20th to being certain that it was March 26th. But what if he were a Bayesian?

Bob: What’s going on?
Nearest & Dearest: You’re in Overlake Hospital and you’ve been sedated for 6 days. It’s Saturday the 26th of March.
Bob: I can see I’m in a hospital ward, but I’m certain the date is the 20th of March because my memory tells me that a moment ago that was the case.
N&D: But it really is the 26th; you’ve had major surgery and had to be sedated for 6 days. We can show you newspapers and so on if you don’t believe us.
Bob: My personal probability that it is the 20th was 1, last I recall, and it still is. No evidence you provide me can shift me from a probability of 1 because I’m using Bayes’ Rule to update my beliefs. You may as well try to convince me the sun won’t rise tomorrow.
N&D: Uhhh…

A Bayesian faces additional problems that do not and need not trouble my stepfather. One issue concerns identity. Bayesian conditionalization is only well-defined if the subject has a unique set of prior beliefs, i.e., a “unique predecessor.” How should we extend conditionalization in order to accommodate non-unique predecessors? For instance, suppose we’re willing to entertain both the 1/2 and the 1/3 answers to the Sleeping Beauty conundrum.

Meacham’s (2010) prescription for multiple predecessors is to represent them with an interval: “A natural proposal is to require our credences to lie in the span of the credences conditionalization prescribes to our predecessors.” But the pair of credences {1/3, 1/2} that the Sleeping Beauty puzzle leaves us with does not lend any plausibility to values in between them. For instance, it surely would be silly to average them and declare that the answer to this riddle is 5/12; neither the thirders nor the halfers would endorse this solution.

A while ago I (Smithson, 1999), and more recently Gajdos and Vergnaud (2011), argued that there is a crucial difference between two sharp but disagreeing predecessors {1/3, 1/2} and two vague but agreeing ones {[1/3, 1/2], [1/3, 1/2]}. Moreover, it is not irrational to prefer the second situation to the first, as I showed many people do. Cabantous et al. (2011) recently reported that insurers would charge a higher premium for insurance when expert risk estimates are precise but conflicting than when they agree but are imprecise.

In short, standard probability theories have difficulty not only with self-location belief updating but, more generally, with any situation that presents multiple equally plausible probability assessments. The traditional Bayesian can’t handle multiple selves but the rest of us can.
