ignorance and uncertainty

All about unknowns and uncertainties

Posts Tagged ‘Social sciences’

Writing about “Agnotology, Ignorance and Uncertainty”

with 5 comments

From time to time I receive invitations to contribute to various “encyclopedias.” Recent examples include an entry on “confidence intervals” in the International Encyclopedia of Statistical Science (Springer, 2010) and an entry on “uncertainty” in the Encyclopedia of Human Behavior (Elsevier, 1994, 2012). The latter link goes to the first (1994) edition; the second edition is due out in 2012. I’ve duly updated and revised my 1994 entry for the 2012 edition.

Having been raised by a librarian (my mother worked in the Seattle Public Library for 23 years), I’m a believer in the value of good reference works. So, generally I’m willing to accept invitations to contribute to them. These days there is a niche market even for non-digital works of this kind, and of course the net has led to numerous hybrid versions.

Despite the fact that such invitations are regarded as markers of professional esteem, they don’t count for much in the university system where I work because they aren’t original research publications. Same goes for textbooks. Thus, for my younger academic colleagues, writing encyclopedia entries or, worse still, writing textbooks actually can harm their careers. They understandably avoid doing so, which leaves it to older academics like me.

Some of these encyclopedias have interesting moments on the world stage. The International Encyclopedia of Statistical Science has been said to have set a record for the number of countries involved (105, via the 619 contributing authors). Its editors were nominated for the 2011 Nobel Peace Prize, apparently the first time any statisticians had received this honor. Meanwhile, V.S. Ramachandran, editor of the Encyclopedia of Human Behavior, was selected by Time Magazine as one of the world’s most influential people of 2011.

However, I digress. The Sage Encyclopedia of Philosophy and the Social Sciences is an intriguing proposal for a reference work that bridges these two intellectual cultures. I regard this aim as laudable, and I’m fortunate insofar as the areas where I work have a tradition of dialogs linking philosophers and social scientists. So, I was delighted to be asked to provide an entry on “agnotology, ignorance and uncertainty”. There is, however, a bit of a catch.

The guidelines for contributors state that “Entries should be written at a level appropriate for students who do not have an extensive background either in philosophy or the social sciences and for academics from other disciplines… it is essential that a reader versed in philosophy only or mostly, or alternatively, in social sciences, should gain by reading entries that aim at expanding their knowledge of concepts and theories as these have developed in the complementary area.” All of this is supposed to be achieved for a treatment of “agnotology, ignorance and uncertainty” in just 1,000 words, with a short list of “further readings” at the end. All of my posts in this blog thus far exceed 1,000 words (gulp). Can I be sufficiently concise without butchering or omitting crucial content?

Here’s my first draft (word count: 1,018). See what you think.

AGNOTOLOGY, IGNORANCE AND UNCERTAINTY

“Agnotology” is the study of ignorance (from the Greek “agnosis”). “Ignorance,” “uncertainty,” and related terms refer variously to the absence of knowledge, doubt, and false belief. This topic has a long history in Western philosophy, rooted in the Socratic tradition. It has a considerably shorter and, until recently, sporadic treatment in the human sciences. This entry focuses on relatively recent developments within and exchanges between both domains.

A key starting-point is that anyone attributing ignorance cannot avoid making claims to know something about who is ignorant of what: A is ignorant from B’s viewpoint if A fails to agree with or show awareness of ideas which B defines as actually or potentially valid. A and B can be identical, so that A self-attributes ignorance. Numerous scholars thereby have noted the distinction between conscious ignorance (known unknowns, learned ignorance) and meta-ignorance (unknown unknowns, ignorance squared).

The topic has been beset with terminological difficulties, due to the scarcity and negative cast of terms referring to unknowns. Several scholars have constructed typologies of unknowns, in attempts to make explicit their most important properties. Smithson’s book, Ignorance and Uncertainty: Emerging Paradigms, pointed out the distinction between being ignorant of something and ignoring something, the latter being akin to treating something as irrelevant or taboo. Knorr-Cetina coined the term “negative knowledge” to describe knowledge about the limits of the knowable. Various authors have tried to distinguish reducible from irreducible unknowns.

Two fundamental concerns have been at the forefront of philosophical and social scientific approaches to unknowns. The first of these is judgment, learning and decision making in the absence of complete information. Prescriptive frameworks advise how this ought to be done, and descriptive frameworks describe how humans (or other species) do so. A dominant prescriptive framework since the second half of the 20th century is subjective expected utility theory (SEU), whose central tenet is that decisional outcomes are to be evaluated by their expected utility, i.e., the product of their probability and their utility (e.g., monetary value, although utility may be based on subjective appraisals). According to SEU, a rational decision maker chooses the option that maximizes her/his expected utility. Several descriptive theories in psychology and behavioral economics (e.g., Prospect Theory and Rank-Dependent Expected Utility Theory) have amended SEU to render it more descriptively accurate while retaining some of its “rational” properties.
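(An illustrative aside, not part of the 1,000-word entry: here is a minimal sketch, with entirely made-up options and payoffs, of the SEU tenet just described. Each option is scored by the sum of its probability-weighted utilities, and the notionally rational agent picks the option with the highest score.)

```python
# Hypothetical illustration of subjective expected utility (SEU) maximization.
# Each option maps to a list of (probability, utility) pairs for its outcomes.
options = {
    "sure_thing": [(1.0, 50)],
    "risky_bet":  [(0.6, 100), (0.4, -20)],
}

def expected_utility(outcomes):
    # SEU of an option: sum of probability * utility over its possible outcomes.
    return sum(p * u for p, u in outcomes)

scores = {name: expected_utility(o) for name, o in options.items()}
best = max(scores, key=scores.get)
print(scores)  # {'sure_thing': 50.0, 'risky_bet': 52.0}
print(best)    # 'risky_bet' -- under these made-up numbers it maximizes SEU
```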

The second concern is the nature and genesis of unknowns. While many scholars have treated unknowns as arising from limits to human experience and cognitive capacity, increasing attention has been paid recently to the thesis that unknowns are socially constructed, many of them intentionally so. Smithson’s 1989 book was among the earliest to take up the thesis that unknowns are socially constructed. Related work includes Robert Proctor’s 1995 Cancer Wars and Ulrich Beck’s 1992 Risk Society. Early in the 21st century this thesis has become more mainstream. Indeed, the 2008 edited volume bearing “agnotology” in its title focuses on how culture, politics, and social dynamics shape what people do not know.

Philosophers and social scientists alike have debated whether there are different kinds of unknowns. This issue is important because if there is only one kind then only one prescriptive decisional framework is necessary and it also may be the case that humans have evolved one dominant way of making decisions with unknowns. On the other hand, different kinds of unknowns may require distinct methods for dealing with them.

In philosophy and mathematics the dominant formal framework for dealing with unknowns has been one or another theory of probability. However, Max Black’s ground-breaking 1937 paper proposed that vagueness and ambiguity are distinguishable from each other, from probability, and also from what he called “generality.” The 1960’s and 70’s saw a proliferation of mathematical and philosophical frameworks purporting to encompass non-probabilistic unknowns, such as fuzzy set theory, rough sets, fuzzy logic, belief functions, and imprecise probabilities. Debates continue to this day over whether any of these alternatives are necessary, whether all unknowns can be reduced to some form of probability, and whether there are rational accounts of how to deal with non-probabilistic unknowns. The chief contenders currently include generalized probability frameworks (including imprecise probabilities, credal sets, belief functions), robust Bayesian techniques, and hybrid fuzzy logic techniques.

In the social sciences, during the early 1920’s Keynes distinguished between evidentiary “strength” and “weight,” while Knight similarly separated “risk” (probabilities are known precisely) from “uncertainty” (probabilities are not known). Ellsberg’s classic 1961 experiments demonstrated that people’s choices can be influenced by how imprecisely probabilities are known (i.e., “ambiguity”), and his results have been replicated and extended by numerous studies. Smithson’s 1989 book proposed a taxonomy of unknowns and his 1999 experiments showed that choices also are influenced by uncertainty arising from conflict (disagreeing evidence from equally credible sources); those results also have been replicated.

More recent empirical research on how humans process unknowns has utilized brain imaging methods. Several studies have suggested that Knightian uncertainty (ambiguity) and risk differentially activate the ventral systems that evaluate potential rewards (the so-called “reward center”) and the prefrontal and parietal regions, with the latter two becoming more active under ambiguity. Other kinds of unknowns have yet to be widely studied in this fashion but research on them is emerging. Nevertheless, the evidence thus far suggests that the human brain treats unknowns as if there are different kinds.

Finally, there are continuing debates regarding whether different kinds of unknowns should be incorporated in prescriptive decision making frameworks and, if so, how a rational agent should deal with them. There are several decisional frameworks incorporating ambiguity or imprecision, some of which date back to the mid-20th century, and recently at least one incorporating conflict as well. The most common recommendation for decision making under ambiguity amounts to a type of worst-case analysis. For instance, given a lower and upper estimate of the probability of event E, the usual advice is to use the lower probability for evaluating bets on E occurring but to use the upper probability for bets against E. However, the general issue of what constitutes rational behavior under non-probabilistic uncertainties such as ambiguity, fuzziness or conflict remains unresolved.
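(Another aside outside the entry proper: a hedged sketch of that worst-case advice, with made-up numbers. A bet on E is valued at the lower probability of E, and a bet against E at the upper probability of E, which amounts to assuming the least favourable probability in the interval for whichever side of the bet you take.)

```python
# Hypothetical imprecise probability interval for event E, and a simple stake/prize bet.
lower_p, upper_p = 0.3, 0.6   # lower and upper estimates of P(E)
stake, prize = 10, 30         # pay 10 up front; receive 30 if the bet wins

# Worst-case (pessimistic) valuations:
value_bet_on_E = lower_p * prize - stake              # bet on E, judged at the lower probability
value_bet_against_E = (1 - upper_p) * prize - stake   # bet against E, judged at the upper probability

print(value_bet_on_E, value_bet_against_E)  # -1.0 2.0 -- only the bet against E looks acceptable here
```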

Further Readings

Bammer, G. and Smithson, M. (Eds.), (2008). Uncertainty and Risk: Multidisciplinary Perspectives. London: Earthscan.

Beck, Ulrich (1999). World Risk Society. Oxford: Polity Press.

Black, M. (1937). Vagueness: An exercise in logical analysis. Philosophy of Science, 4, 427-455.

Gardenfors, P. and Sahlin, N.-E. (Eds.), (1988). Decision, Probability, and Utility: Selected Readings. Cambridge, UK: Cambridge University Press.

Proctor, R. and Schiebinger, L. (Eds.), (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer Verlag.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman Hall.

Making the Wrong Decision for the Right Reasons

with 2 comments

There seems to be a widespread intuition that if we use a well-reasoned, evidence-based approach to making decisions under uncertainty then we’ll make the right decision most of the time. Sure, we’ll make some bad calls but the majority of the time we’ll get it right. Or will we?

Here’s an example from law enforcement. Suppose you’re the commanding officer in a local police jurisdiction, and you have to decide how to allocate resources to a missing person case. A worst-case scenario is that the missing person ends up a homicide. Although police are required to treat all missing persons cases seriously, most do not involve foul play, so it would be grossly inefficient to treat every missing person as a potential homicide. So, if the missing person isn’t found within 24 hours, you’ll undertake a risk analysis, considering issues such as whether the circumstances are suspicious or out of character, or whether there is evidence that a crime has been committed.

What would be your best approach to this risk analysis, and how likely would you be to come to the right decision? A landmark UK study examined 32,705 cases of missing persons in the UK between 2000 and 2002 and determined that 0.6 percent were found dead, although not necessarily as victims of homicide (Newiss, 2006). This is a very low percentage, and it turns out to be the source of a major headache for you as the commander responsible for deciding what resources to allocate to your case.

You have years of experience, wisdom handed down from seasoned investigators who came before you, and you’ve read the relevant literature. You know that where a missing person is found to have been a victim of foul play, risk factors include age and sex, involvement in prostitution, last being seen in a public place, and the absence of a history of suicide attempts or mental health problems.

So, you’re going to decide whether to allocate more resources to a missing persons investigation based on a set of diagnostic criteria, which I’ll denote by D. The criteria included in D are indicators that the missing person may have died. There are four commonly used measures of how good D is:

Sensitivity = P(D present|death)
Specificity = P(D absent|no death)
Positive predictive value = P(death|D present)
Negative predictive value = P(no death|D absent)

The expressions on the right hand side of these equations are conditional probabilities. For instance, P(D present|death) is the probability that D is present given that the person has died. Sensitivity and specificity measure the ability of D to detect the occurrence or absence (respectively) of deaths. The predictive values, on the other hand, tell us the probability of making a correct diagnosis (death versus no death) on the basis of D.

Now, suppose D has a sensitivity of .99 and specificity of .99 (far better than can be obtained from the otherwise worthwhile predictors identified by Newiss). The next table shows how well D would perform in distinguishing between cases ending in death and cases not involving death.

                   D present   D absent     Total   Error rate
Dead                     196          2       198        0.01   Sensitivity 0.99
Alive                    325      32182     32507        0.01   Specificity 0.99
Total                    521      32184     32705
Predictive value      0.3762   0.999938

Because sensitivity is .99, D misses only .01*198 ≈ 2 of the cases involving death and correctly detects the remaining 196. Likewise, because specificity is .99, D is incorrectly present in .01*32507 ≈ 325 cases that do not involve death. That is, 325 missing persons with D present will be found alive. But 325 is large compared with the number of correctly identified deaths (196), so the positive predictive value is poor: P(death|D present) = 196/(196 + 325) = .376. The rate of incorrect positive diagnoses therefore is 1 – .376 = .624. If you, as commander, decided to allocate more resources to cases where D is present, you could expect to be wrong about 62% of the time.
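In case you want to check the arithmetic, here is a minimal sketch (my own illustration, not any standard package) that reproduces the table from the base rate and the assumed sensitivity and specificity:

```python
def diagnostic_table(n_total, n_dead, sensitivity, specificity):
    """Return the four cell counts and the two predictive values."""
    n_alive = n_total - n_dead
    true_pos = round(sensitivity * n_dead)     # deaths with D present
    false_neg = n_dead - true_pos              # deaths missed by D
    true_neg = round(specificity * n_alive)    # survivors with D absent
    false_pos = n_alive - true_neg             # survivors incorrectly flagged by D
    ppv = true_pos / (true_pos + false_pos)    # P(death | D present)
    npv = true_neg / (true_neg + false_neg)    # P(no death | D absent)
    return true_pos, false_neg, false_pos, true_neg, ppv, npv

tp, fn, fp, tn, ppv, npv = diagnostic_table(32705, 198, 0.99, 0.99)
print(tp, fn, fp, tn)                # 196 2 325 32182
print(round(ppv, 4), round(npv, 6))  # 0.3762 0.999938
```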

Can these uncertainties be reduced? An obvious and frequently recommended remedy is further investigation into factors that may predict the likelihood of a missing person ending up dead and, conditional on death, being a homicide victim. These investigations could be combined with survival analysis of the kind employed by Newiss, to determine whether there is a relationship between the length of time a person has gone missing and the likelihood that the person ends up dead.

But how effective can we expect these remedies to be? Note that improving sensitivity would have only a negligible effect on positive predictive value. Getting positive predictive value to an even-money bet (.5) would require specificity of about .994, and pushing it to .9 would require specificity of about .9993. In other words, D would have to be almost unbelievably accurate before you could avoid devoting considerable resources to investigations where they were not warranted.
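Here is a hedged sketch of that calculation: rearranging the identity PPV = sens×prev / (sens×prev + (1 − spec)×(1 − prev)) to solve for the specificity needed to reach a target positive predictive value.

```python
def required_specificity(target_ppv, sensitivity, prevalence):
    # False-positive rate implied by the target PPV, then convert to specificity.
    false_pos_rate = (sensitivity * prevalence * (1 - target_ppv)
                      / (target_ppv * (1 - prevalence)))
    return 1 - false_pos_rate

prev = 198 / 32705   # base rate of death among missing-person cases
print(round(required_specificity(0.5, 0.99, prev), 4))  # about 0.994
print(round(required_specificity(0.9, 0.99, prev), 4))  # about 0.9993
```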

These are unachievable standards. Police will inevitably face a considerable error-rate in making resource allocation decisions regarding missing persons cases. Of course, this does not imply that improving predictions of homicide in missing persons cases is futile, but simply tells us not to expect such improvements to raise the probability of a correct decision to a desirable level.

Mind you, it isn’t all gloom and doom. If we consider the false negative problem (e.g., a Britt Lapthorne outcome) it may be possible to obtain a reasonably high negative predictive value without unrealistically accurate predictors. In our unrealistic scenario (with sensitivity and specificity both at .99), negative predictive value is .99994. Even if sensitivity and specificity both were .5 (i.e., coin-toss levels), negative predictive value would still be about 16,253/16,352 = .994. You, as commander, are very unlikely to end up with a Britt Lapthorne case which you stand accused of having failed to treat with due diligence. Instead, you are very likely to be chastised by higher-ups and perhaps the media for “wasting” money and resources on cases where the missing person turned up alive and well.

There is an analogous problem in preventative medical testing, where the disorder to be detected occurs at a low rate in the population. For example, pregnant women may wish to test for the possibility that their unborn baby has Downs Syndrome. According to an Australian government health assessment document released in 2002, when used as a single modality, the standard screening by measurement of nuchal translucency in the first trimester has a detection rate for Downs of approximately 73%-82% at a false positive rate of 5%-8%. Additional ultrasound cues can further increase detection rates for Downs to more than 95%.

The next table shows the most optimistic scenario according to those figures, i.e., sensitivity and specificity of 95%. At the time, about 12.8 per 10,000 births yielded a baby with Downs, so I’ve included that rate in the table. Downs Syndrome, thankfully, is rare. The result, as you can see, is a positive predictive value of just 2.38%. Given a test result that says the baby has Downs, the probability that it really does have Downs is about 2.4 chances in 100. If these procedures were widely used, there would be many needlessly upset pregnant women—about 97.6% of those whose combined tests came back positive.

                   Positive   Negative     Total   Error rate
Downs                   122          6       128        0.05   Sensitivity 0.95
Normal                 4994      94878     99872        0.05   Specificity 0.95
Total                  5116      94884    100000
Predictive value     0.0238     0.9999
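The same check can be done directly from the base rate, without building the table (again, just an illustrative sketch):

```python
# Check of the Downs screening arithmetic above using Bayes' rule.
prevalence = 12.8 / 10000     # about 12.8 Downs births per 10,000
sens, spec = 0.95, 0.95       # the optimistic scenario in the table above

ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
print(round(ppv, 4), round(npv, 4))   # about 0.0238 and 0.9999
```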

In July last year there was a furore over a study published in the Journal of the American Medical Association. The study found that of 2176 participants free of HIV infection who received a vaccine product, 908 tested positive even though they had been exposed to the vaccine, not (of course) the virus. That’s a false positive rate of about 41.7%. Now, suppose a successful vaccine is developed but it also has this reactivity problem. In any Western country where the rate of HIV infection is low, the combination of a large proportion of the population being vaccinated and tested could be a major disaster. This is not to say that an HIV vaccine would be a bad idea; the point is that it could play havoc with HIV detection.
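To see why, here is a purely illustrative calculation. All of the numbers except the 41.7% cross-reactivity figure are hypothetical; the point is only that widespread vaccination in a low-prevalence population would swamp the true positives with vaccine-induced ones.

```python
prevalence = 0.001     # hypothetical HIV prevalence in a Western country
vaccinated = 0.60      # hypothetical vaccination coverage
cross_react = 0.417    # share of uninfected vaccinees testing positive (from the study above)
test_sens = 0.99       # hypothetical sensitivity of the test among the truly infected

true_pos = prevalence * test_sens
false_pos = (1 - prevalence) * vaccinated * cross_react
ppv = true_pos / (true_pos + false_pos)
print(round(ppv, 4))   # about 0.004 -- fewer than 1 in 200 positives would be true infections
```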

The chief difference between the medical preventative testing quandary and the police commander’s problem is that the negative consequences of the wrong diagnosis fall on the patient instead of the decision maker. Yet this issue is seldom aired in public debates regarding medical testing. Perhaps understandably, the bulk of medical research effort in this domain goes into devising more accurate tests. But hang on: in the Downs test scenario, even with a sensitivity of 100% the specificity would have to be 99.87% to raise the positive predictive value to a mere 50%. For a positive predictive value of 90%? Specificity would have to be about 99.99%, a crazily impossible target. Realistically, the tests will never be accurate enough to avoid the problem posed by low positive predictive values for rare disorders.

What can a decision maker do? A final point to all this is that in settings where you’re doomed to a high decisional error-rate despite using the best available methods, it may be better to direct your energies toward handling the flak instead of persisting in a futile quest for unattainably accurate predictors or diagnostic cues. The chief difficulty may be educating your clientele, constituency, or bosses that it really is possible to be making the best possible decisions and still getting them wrong most of the time.

Written by michaelsmithson

May 8, 2011 at 2:55 pm

I Can’t Believe What I Teach

with 2 comments

For the past 34 years I’ve been compelled to teach a framework that I’ve long known is flawed. A better framework exists and has been available for some time. Moreover, I haven’t been forced to do this by any tyrannical regime or under threats of great harm to me if I teach this alternative instead. And it gets worse: I’m not the only one. Thousands of other university instructors have been doing the same all over the world.

I teach statistical methods in a psychology department. I’ve taught courses ranging from introductory undergraduate through graduate levels, and I’m in charge of that part of my department’s curriculum. So, what’s the problem—Why haven’t I abandoned the flawed framework for its superior alternative?

Without getting into technicalities, let’s call the flawed framework the “Neyman-Pearson” approach and the alternative the “Bayes” approach. My statistical background was formed as I completed an undergraduate degree in mathematics during 1968-72. My first courses in probability and statistics were Neyman-Pearson and I picked up the rudiments of Bayes toward the end of my degree. At the time I thought these were simply two valid alternative ways of understanding probability.

Several years later I was a newly-minted university lecturer teaching introductory statistics to fearful and sometimes reluctant students in the social sciences. The statistical methods used in social science research were Neyman-Pearson, so of course I taught Neyman-Pearson. Students, after all, need to learn to read the literature of their discipline.

Gradually, and through some of my research into uncertainty, I became aware of the severe problems besetting the Neyman-Pearson framework. I found that there was a lengthy history of devastating criticisms raised against Neyman-Pearson even within the social sciences, criticisms that had been ignored by practising researchers and gatekeepers to research publication.

However, while the Bayesian approach may have been conceptually superior, in the late ‘70’s through early ‘80’s it suffered from mathematical and computational impracticalities. It provided few usable methods for dealing with complex problems. Disciplines such as psychology were held in thrall to Neyman-Pearson by a combination of convention and the practical requirements of complex research designs. If I wanted to provide students or, for that matter, colleagues who came to me for advice, with effective statistical tools for serious research then usually Neyman-Pearson techniques were all I could offer.

But what to do about teaching? No university instructor takes a formal oath to teach the truth, the whole truth, and nothing but the truth; but for those of us who’ve been called to teach it feels as though we do. I was sailing perilously close to committing Moore’s Paradox in the classroom (“I assert Neyman-Pearson but I don’t believe it”).

I tried slipping in bits and pieces alerting students to problematic aspects of Neyman-Pearson and the existence of the Bayesian alternative. These efforts may have assuaged my conscience but they did not have much impact, with one important exception. The more intellectually proactive students did seem to catch on to the idea that theories of probability and statistics are just that—Theories, not god-given commandments.

Then Bayes got a shot in the arm. In the mid-80’s some powerful computational techniques were adapted and developed that enabled this framework to fight at the same weight as Neyman-Pearson and even better it. These techniques sail under the banner of Markov chain Monte Carlo methods, and by the mid-90’s software was available (free!) to implement them. The stage was set for the Bayesian revolution. I began to dream of writing a Bayesian introductory statistics textbook for psychology students that would set the discipline free and launch the next generation of researchers.

It didn’t happen that way. Psychology was still deeply mired in Neyman-Pearson and, in fact, in a particularly restrictive version of it. I’ll spare you the details other than saying that it focused, for instance, on whether the researcher could reject the claim that an experimental effect was nonexistent. I couldn’t interest my colleagues in learning Bayesian techniques, let alone undergraduate students.

By the late ‘90’s a critical mass of authoritative researchers convinced the American Psychological Association to form a task-force to reform statistical practice, but this reform really amounted to shifting from the restrictive Neyman-Pearson orientation to a more liberal one that embraced estimating how big an experimental effect is and setting a “confidence interval” around it.

It wasn’t the Bayesian revolution, but I leapt onto this initiative because both reforms were a long stride closer to the Bayesian framework and would still enable students to read the older Neyman-Pearson dominated research literature. So, I didn’t write a Bayesian textbook after all. My 2000 introductory textbook was, so far as I’m aware, one of the first to teach introductory statistics to psychology students from a confidence interval viewpoint. It was generally well received by fellow reformers, and I got a contract to write a kind of researcher’s confidence interval handbook that came out in 2003. The confidence interval reform in psychology was under way, and I’d booked a seat on the juggernaut.

Market-wise, my textbook flopped. I’m not singing the blues about this, nor do I claim sour grapes. For whatever reasons, my book just didn’t take the market by storm. Shortly after it came out, a colleague mentioned to me that he’d been at a UK conference with a symposium on statistics teaching where one of the speakers proclaimed my book the “best in the world” for explaining confidence intervals and statistical power. But when my colleague asked if the speaker was using it in the classroom he replied that he was writing his own. And so better-selling introductory textbooks continued to appear. A few of them referred to the statistical reforms supposedly happening in psychology but the majority did not. Most of them are the nth edition of a well-established book that has long been selling well to its set of long-serving instructors and their students.

My 2003 handbook fared rather better. I had put some software resources for computing confidence intervals on a webpage and these got a lot of use. These, and my handbook, got picked up by researchers and their graduate students. Several years on, the stuff my scripts did started to appear in mainstream commercial statistics packages. It seemed that this reform was occurring mainly at the advanced undergraduate, graduate and researcher levels. Introductory undergraduate statistical education in psychology remained (and still remains) largely untouched by it.

Meanwhile, what of the Bayesian movement? In this decade, graduate-level social science oriented Bayesian textbooks began to appear. I recently reviewed several of them and have just sent off an invited review of another. In my earlier review I concluded that the market still lacked an accessible graduate-level treatment oriented towards psychology, a gap that may have been filled by the book I’ve just finished reviewing.

Have I tried teaching Bayesian methods? Yes, but thus far only in graduate-level workshops, and on my own time (i.e., not as part of the official curriculum). I’ll be doing so again in the second half of this year, hoping to recruit some of my colleagues as well as graduate students. Next year I’ll probably introduce a module on Bayes for our 4th-year (Honours) students.

It’s early days, however, and we remain far from being able to revamp the entire curriculum. Bayesian techniques still rarely appear in the mainstream research literature in psychology, and so students still need to learn Neyman-Pearson to read that literature with a knowledgeably critical eye. A sea-change may be happening, but it’s going to take years (possibly even decades).

Will I try writing a Bayesian textbook? I already know from experience that writing a textbook is a lot of time and hard work, often for little reward. Moreover, in many universities (including mine) writing a textbook counts for nothing. It doesn’t bring research money, it usually doesn’t enhance the university’s (or the author’s) scholarly reputation, it isn’t one of the university’s “performance indicators,” and it seldom brings much income to the author. The typical university attitude towards textbooks is as if the stork brings them. Writing a textbook, therefore, has to be motivated mainly by a passion for teaching. So I’m thinking about it…

What are the Functions of Innumeracy?

leave a comment »

Recently a colleague asked me for my views on the social and psychological functions of innumeracy. He aptly summarized the heart of the matter:

“I have long-standing research interests in mathematics anxiety and adult numeracy (or, more specifically, innumeracy, including in particular what I term the ‘adult numeracy conundrum’ – that is, that despite decades of investment in programs to raise adult numeracy rates little, if any, measurable improvements have been achieved. This has led me to now consider the social functions performed by this form of ignorance, as its persistence suggests the presence of underlying mechanisms that provide a more valuable pay-off than that offered by well-meaning educators…)”

This is an interesting deviation from the typical educator’s attack on innumeracy. “Innumeracy” apparently was coined by cognitive scientist Douglas Hofstadter but it was popularized by mathematician John Allen Paulos in his 1989 book, Innumeracy: Mathematical Illiteracy and its Consequences. Paulos’ book was a (IMO, deserved) bestseller and has gone through a second edition. Most educators’ attacks on innumeracy do what Paulos did: Elaborate the costs and dysfunctions of innumeracy, and ask what we can blame for it and how it can be overcome.

Paulos’ list of the consequences of innumeracy include:

  1. Inaccurate media reporting and inability of the public to detect such inaccuracies
  2. Financial mismanagement (e.g., of debts), especially regarding the misunderstanding of compound interest
  3. Loss of money on gambling, in particular caused by gambler’s fallacy
  4. Belief in pseudoscience
  5. Distorted assessments of risks
  6. Limited job prospects

These are bad consequences indeed, but mainly for the innumerate. Consequences 2 through 6 also are windfalls for those who exploit the innumerate. Banks, retailers, pyramid selling fraudsters, and many others either legitimately or illicitly take advantage of consequence 2. Casinos, bookies, online gambling agencies, investment salespeople and the like milk the punters of their funds on the strength of consequences 3 and 5. Peddlers of various religions, magical and pseudo-scientific beliefs batten on consequence 4, and of course numerous employers can keep the wages and benefits low for those trapped by consequence 6.

Of course, the fact that all these interests are served doesn’t imply that innumeracy is created and maintained by a vast conspiracy of bankers, retailers, casino owners, and astrologers. They’re just being shrewd and opportunistic. Nevertheless, these benefits do indicate that we should not expect the beneficiaries to be in the vanguard of a campaign to improve, say, public understanding of compound interest or probability.

Now let’s turn to Paulos’ accounts of the “whodunit” part of innumeracy: What creates and maintains it? A chief culprit is, you guessed it, poor mathematical education. My aforementioned colleague and I would agree: For the most part, mathematics is badly taught, especially at primary and secondary school levels. Paulos, commendably, doesn’t beat up the teachers. Instead, he identifies bad curricula and a lack of mathematical education in teacher training as root causes.

On the other hand, he does blame “us,” that is, the innumerate and even the numerate. The innumerate are castigated for demanding personal relevance and an absence of anxiety in their educations. According to Paulos, personalizing the universe yields disinterest in (depersonalized?) mathematics and science generally, and an unhealthy gullibility for pseudosciences such as astrology and numerology. He seems to have skated onto thin ice here. He doesn’t present empirical evidence for his main claim, and there are plenty of examples throughout history of numerate or even mathematically sophisticated mystics (the Pythagoreans, for one).

Paulos also accuses a subset of the innumerate of laziness and lack of discipline, but the ignorance of the undisciplined would surely extend beyond innumeracy. If we want instances of apathy that actually sustain innumeracy, let’s focus on public institutions that could militate against it but don’t. There, we shall encounter social and political forces that help perpetuate innumeracy, not via any conspiracy or even direct benefits, but simply by self-reinforcing feedback loops.

As the Complete Review points out “… the media isn’t much interested in combating innumeracy (think of how many people got fired after all the networks prematurely declared first Gore then Bush the winner in Florida in the 2000 American presidential election – none…” Media moguls and their editors are interested in selling stories, and probably will become interested in getting the numbers right only when the paying public starts objecting to numerical errors in the media. An innumerate public is unlikely to object, so the media and the public stagnate in a suboptimal but mutually reinforcing equilibrium.

Likewise, politicians don’t want a numerate electorate any more than they want a politically sophisticated one, so elected office-holders also are unlikely to lead the charge to combat innumeracy. Michael Moore, a member of the Australian Capital Territory Legislative Assembly for four terms, observes that governments usually avoid clear, measurable goals for which they can be held accountable (pg. 178, in a chapter he contributed to Gabriele Bammer’s and my book on uncertainty). Political uses of numbers are mainly rhetorical or for purposes of control. Again, we have a mutually reinforcing equilibrium: A largely innumerate public elects office-holders who are happy for the public to remain innumerate because that’s partly what got them elected.

I’ve encountered similar feedback-loops in academia, beginning with my experiences as a math graduate doing a PhD in a sociology department. The ideological stances taken by some departments of cultural studies, anthropology, and sociology position education for numeracy as aligned with so-called “positivist” research methods, against which they are opposed. The upshot is that courses with statistical or other numeracy content are devalued and students are discouraged from taking them. A subset of the innumerate graduates forms a succeeding generation of innumerate academics, and on it goes.

Meanwhile, Paulos blames the rest of us for perpetuating romantic stereotypes in which math and science are spoilers of the sublime, and therefore to be abhorred by anyone with artistic or spiritual sensibilities. So, he is simultaneously stereotyping the innumerate and railing against us for indulging another stereotype (No disrespect to Paulos; I’ve been caught doing this kind of thing often enough).

Lee Dembart, then of the Los Angeles Times, observed that “Paulos is very good at explaining all of this, though sometimes with a hectoring, bitter tone, for which he apologizes at the very end.” Unfortunately, hectoring people, focusing attention on their faults, or telling them they need to work harder “for their own good” seldom persuades them. I’ve taught basic statistics to students in the human sciences for many years. Many of these students dread a course in stats. They’re in it only because it’s a required course; telling them it’s for their own good isn’t going to cut any ice with them, and blaming them for finding statistics difficult or off-putting is a sure-fire way of turning them off entirely.

Now that we all have to be here, I propose to them, let’s see how we can make the best of it. I teach them how to lie with or abuse statistics so that they can gain a bit more power to detect when someone is trying to pull the proverbial wool over their eyes. This also opens the way to considering ethical and moral aspects of statistics. Then I try to link the (ab)uses of stats with important issues and debates in psychology. I let them in on some of psychology’s statistical malpractices (and there are plenty), so they can start detecting these for themselves and maybe even become convinced that they could do better. I also try to convey the view that data analysis is not self-automating; it requires human judgment and interpretive work.

Does my approach work? Judging from student evaluations, a fair amount of the time, but by no means always. To be sure, I get kudos for putting on a reasonably accessible, well-organized course and my tutors get very positive evaluations from the students in their tutorials. Nevertheless, there are some who, after the best efforts by me and my tutors, still say they don’t get it and don’t like it. And many of these reluctant students are not poor students—Most have put in the work and some have obtained good marks. Part of their problem may well be cognitive style. There is a lot of evidence that it is difficult for the human mind to become intuitively comfortable with probability, so those who like intuitive understanding might find statistics and probability aversive.

It’s also possible that my examples and applications simply aren’t motivating enough for these students. Despite the pessimism I share with my colleague, I think there has been a detectable increase in basic statistical literacy both in the public and the media over the past 30 years. It is mainly due to unavoidably statistical aspects of issues that the public and media both deem important (e.g., medical advances or failures, political polls, environmental threats). Acquiring numeracy requires effort and that, in turn, takes motivation. Thank goodness I don’t have the job of persuading first-year undergraduates to voluntarily sign up for a basic statistics course.

Written by michaelsmithson

March 15, 2011 at 1:14 pm

Ignorance as a Public Problem

leave a comment »

It’s my last post for this year, and I’m going to mine Sheldon Ungar’s 2008 paper for more material. Is ignorance a public problem? If so, what kind is it, and are there any solutions to it? Ungar not only declares ignorance to be a social problem, but also claims it is “under-identified” and difficult to “sell” as a social problem.

The latter claim may seem a tad puzzling, given the column inches and tomes devoted to exposing how little most of us know about science, for example. Commentators such as Jesse Kluver and books such as Mooney and Kirshenbaum’s 2009 opus leave little doubt that scientific illiteracy is regarded with alarm in at least some reasonably well-informed quarters. Likewise, for more than two decades popularizers such as John Allen Paulos have been warning us about the dangers and costs of innumeracy through their best-selling books. In fact, some people think Paulos invented the term (he points out that he got it from the OED). And, of course, the notion that “those who cannot remember the past are condemned to repeat it” is Santayana’s famous aphorism, although the idea behind it did not originate with him.

These lacunae are the sort of thing that Ungar calls “functional knowledge deficits,” because they pose dangers or costs to those afflicted by them. But there’s another brand of ignorance-as-a-public-problem, namely one of the most successful exports from psychology and behavioral economics. These could be called “functional cognitive deficits,” but usually go under the names of cognitive “biases” or “illusions.” A fairly extensive (and reasonably accurate) list of these identifies more than 100 of them. Producing books about these has become a cottage industry during the past two decades (e.g., from Gilovich 1991 to Ariely 2008).

The cognitive bias problem is hard to sell for the ironic reason that one of the cognitive biases most of us suffer from is an inflated estimate of our own abilities and a conviction that we perceive reality more or less accurately and completely. This goes for me too, by the way. Moreover, we tend to be a bit testy when our deficiencies in thinking and decision making are pointed out to us. I’ve observed this in friends, colleagues and students. Most of us are relaxed and comfortable with being taken in by visual illusions, or with finding out (well, up to a point) that our memory is less than perfect. But our hackles become decidedly raised when tests of reasoning or judgment reveal us to be logical blunderers or deluded about probability.

Worse still, many of our cognitive biases or illusions turn out to be exceedingly difficult to get rid of. Unlike knowledge deficits, which can be overcome by absorbing the requisite information, some cognitive habits appear to be stubbornly hardwired. It appears that this kind of ignorance problem is more difficult to solve than the knowledge-deficit kind.

But even the knowledge-deficit version of ignorance lacks a straightforward solution, because there’s far too much important knowledge for us to absorb and retain. I’ve been in the education business for 33 years, so clearly I’m a fan of the notion that, ceteris paribus, more knowledge is a Good Thing. Nevertheless, I’m aware that we educators (and other would-be social influence merchants) face a common-pool social dilemma. In the 2008 book I co-edited with Gabriele Bammer I’ve called it the “persuasion-versus-information-glut dilemma.” All of us with an educational or persuasive interest will want to impose our messages on the public. I teach stats to psychology students, so of course I think that all university students should get an introduction to stats. A specialist in children’s literature once seriously suggested to me that a class in children’s literature should be required for all university students!

Too many messages in an unregulated forum, however, can drive the public to tune out altogether. The scarce resource threatened with depletion is not information or knowledge, but attention. Attention is effectively a zero-sum resource (I can’t pay full attention to two things simultaneously), whereas information is a multiplier resource (you can give me your information and still hang onto it). So, more and more and more education isn’t the solution to Ungar’s knowledge deficit problem.

If you need further persuasion, consider all of the stuff known by people in the past that we no longer know. In his 1840 essay on Lord Clive, Macaulay wrote: “Every schoolboy knows who imprisoned Montezuma, and who strangled Atahualpa.” Hands up, anyone? Or take a look at the curriculum for an Elizabethan schoolboy (I’m not being sexist here; only boys were permitted schooling in both periods I’ve just mentioned). Or what about good old “how-to” knowledge: Who among us knows the basics of such trades as coppersmith, milliner, or fletcher? One of my colleagues recently told me that his father was a farrier and then congratulated me for knowing what that was.

There’s a third kind of ignorance problem, one arising from hyper-specialization. Specialized knowledge doesn’t integrate itself. Without people to put it all together we end up with no synthesis, no “big picture.” I’m not referring just to “big” in the sense of a grand totalizing framework. This problem manifests itself even within specializations. John Von Neumann often is said to have been the last mathematician who possessed an overview of that discipline, and he passed away 53 years ago (here is an interesting discussion of this question). A more quotidian example is the recent post by Charlie Schulting on the perils of over-specialization in IT. Nor is this problem new, as witnessed by this 1957 article highlighting a Stanford University dean’s concern about this issue and his proposed remedy for it, or this 1922 note on overspecialization in public health care.

This version of the ignorance problem also lacks an easy solution, but in some respects it may be the most urgently in need of one. A moment’s consideration of the most important problems facing humankind should suffice to convince you of the need for specialists to be able to not only work with one another but also with non-specialist stakeholders. There are efforts on several fronts to address this problem, some of which go under names such as transdisciplinarity and integration and implementation sciences. More on these at another time.

It should be clear by now that there are multiple ignorance “problems,” none of which have straightforward solutions. In lieu of nice solutions, here are a few pitfalls and fallacies that we can avoid.

  1. We can avoid hubris. None of us knows very much, when all is said and done. There is also a vast amount of important stuff we can never know.
  2. We can become more aware of what we don’t know (within limits). We might even reform some aspects of our educational programs to help future generations in this endeavor.
  3. We can bear in mind that we have cognitive biases and mental short-cuts. Some of these are adaptive in certain settings (e.g., hunter-gathering) but not in others (e.g., the casino or stock market). Where these aren’t adaptive we can generate computational and other tools to help us.
  4. We are not cleverer than those who came before us. We’re not even always better-informed than they were. A pertinent observation from the conclusion of Cyril Kornbluth’s short story “The Mindworm” is that what many very clever people have not yet learned, some ordinary people have not yet quite forgotten.

Written by michaelsmithson

December 19, 2010 at 1:48 pm

A Knowledge Economy but an Ignorance Society?

leave a comment »

In an intriguing 2008 paper sociologist Sheldon Ungar asked why, in the age of “knowledge” or “information,” ignorance not only persists but seems to have increased and intensified. There’s a useful sociological posting on Ungar’s paper that this post is intended to supplement. Along with an information explosion, we also have had an ignorance explosion: Most of us are confronted to a far greater degree than our forebears with the sheer extent of what we don’t know and what we (individually and collectively) shall never know.

I forecast this development (among others) in my 1985 paper where I called my (then) fellow sociologists’ attention to the riches to be mined from studying how we construct the unknown, accuse others of having too much ignorance, claim ignorance for ourselves when we try to evade culpability, and so forth. I didn’t get many takers, but there’s nothing remarkable about that. Ideas seem to have a time of their own, and that paper and my 1989 book were a bit ahead of that time.

Instead, the master-concepts of the knowledge economy (Peter Drucker’s 1969 coinage) and information society (Fritz Machlup 1962) were all the rage in the ‘80’s. Citizens in such societies were to become better educated and more intelligent than their forebears or their less fortunate counterparts in other societies. The evidence for this claim, however, is mixed.

On the one hand, the average IQ has been increasing in a number of countries for some time, so the kind of intelligence IQ measures has improved. On the other, we routinely receive news of apparent declines in various intellective skills such as numeracy and literacy. On the one hand, thanks to the net, many laypeople can and do become knowledgeable about medical matters that concern them. On the other, there is ample documentation of high levels of public ignorance regarding heart disease and strokes, many of the effects of smoking or alcohol consumption, and basic medication instructions.

Likewise, again thanks to the net, people can and do become better-informed about current, especially local, events so that social media such as Twitter are redefining the nature of “news.” On the other, as Putnam (1999) grimly observed, the typical recent university graduate knows less about public affairs than did the average high school graduate in the 1940’s, “despite the proliferation of sources of information.” Indeed, according to Mark Liberman’s 2006 posting, the question of how ignorant Americans are has become a kind of national sport. Other countries have joined in (for instance, the Irish).

In addition to concerns about lack of knowledge, alarms frequently are raised regarding the proliferation and persistence of erroneous beliefs, often with a sub-text saying that surely in the age of information we would be rid of these. Scott Lilienfeld, assistant professor of psychology and consulting editor at the Skeptical Inquirer, sees the prevalence of pseudoscientific beliefs as by-products of two phenomena: the (mis)information explosion and the scientific illiteracy of the general population. From the Vancouver Sun on November 25th, an op-ed piece by Janice Kennedy had this to say:

“Mis- and disinformation, old fears and prejudices, breathtaking knowledge gaps – all share the same stage, all bathe in the same spotlight glow as thoughtful contributions and informed opinions. The Internet is the great democratizer. Everyone has a voice, and every voice can be heard. Including those that should stifle themselves… Add to these realities the presence of the radio and television talk show – hardly a new phenomenon, but one that has exploded in popularity, thanks to our Internet-led dumbing down – and you have the perfect complement. Shockingly ignorant things are said, repeated and, magnified a millionfold by the populist momentum of cyberspace and sensationalist talk shows, accorded a credibility once unthinkable.”

Now, I want to set Ungar’s paper alongside the attributions of ignorance that people make to those who disagree with them. If you set up a Google alert for the word “ignorance” then the most common result will be just this kind of attribution: X doesn’t see things correctly (i.e., my way) because X is ignorant. Behind many such attributions is a notion widely shared by social scientists and other intellectuals of yore that there is a common stock of knowledge that all healthy, normally-functioning members of society should know. We should all not only speak the same language but also know the laws of the land, the first verse of our national anthem, that 2 + 2 = 4, that we require oxygen to breathe, where babies come from, where we can get food, and so on and so forth.

The trouble with this notion is that the so-called information age has made it increasingly difficult for everyone to agree on what this common stock of knowledge should include while still being small enough for the typical human to absorb it all before reaching adulthood.

For instance, calculators may have made mental arithmetic unnecessary for the average citizen to “get by.” But what about the capacity to think mathematically? Being able to understand a graph, compound interest, probability and risk, or the difference between a two-fold increase in area versus in volume are not obviated by calculators. Which parts of mathematics should be part of compulsory education? This kind of question does not merely concern which bits of knowledge should be retained from what people used to know—The truly vexing problem is which bits of the vastly larger and rapidly increasing storehouse of current knowledge should we require everyone to know.

Ungar suggests some criteria for deciding what is important for people to know, and of course he is not the first to do so. For him, ignorance becomes a “functional deficit” when it prevents people from being able “to deal with important social, citizenship, and personal or practical issues.” Thus, sexually active people should know about safe sex and the risks involved if it isn’t practiced; sunbathers should know what a UV index is, automobile drivers should understand the relevant basic physics of motion and the workings of their vehicles, and smokers should know about the risks they take. These criteria are akin to the “don’t die of ignorance” public health and safety campaigns that began with the one on AIDS in the 1980’s.

But even these seemingly straightforward criteria soon run into difficulties. Suppose you’re considering purchasing a hybrid car such as the Prius. Is the impact of the Prius on the environment less than that of a highly fuel-efficient conventional automobile? Yes, if you consider the impact of running these two vehicles. No, if you consider the impacts from their manufacture. So how many miles (or kilometers) would you have to run each vehicle before the net impact of the conventional car exceeds that of the Prius? It turns out that the answer to that question depends on the kind of driving you’ll be doing. To figure all this out on your own is not a trivial undertaking. Even consulting experts who may have done it for you requires a reasonably high level of technological literacy, not to mention time. And yet, this laboriously informed purchasing decision is just what Ungar means by “an important social, citizenship, and personal or practical issue.” In fact, it ticks all four of those boxes.
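To make the reasoning concrete, here is a toy break-even calculation with entirely invented numbers; the only point is that the crossover distance depends on the per-kilometre figures, which in turn depend on the kind of driving you do.

```python
# All figures below are hypothetical, for illustration only.
hybrid_manufacture_t = 9.0        # tonnes of CO2 to manufacture the hybrid
conventional_manufacture_t = 6.0  # tonnes of CO2 to manufacture the conventional car
hybrid_per_km = 0.10              # kg CO2 per km for the hybrid (say, mostly city driving)
conventional_per_km = 0.16        # kg CO2 per km for the conventional car

extra_manufacture_kg = (hybrid_manufacture_t - conventional_manufacture_t) * 1000
saving_per_km = conventional_per_km - hybrid_per_km
break_even_km = extra_manufacture_kg / saving_per_km
print(round(break_even_km))  # 50000 km before the hybrid's total impact drops below the conventional car's
```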

Now imagine trying to be a well-informed citizen not only on the merits of the Prius, but the host of other issues awaiting your input such as climate change mitigation and adaptation, the situation in Afghanistan, responses to terrorism threats, the socio-economic consequences of globalization, and so on and on. Thus, in the end Ungar has to concede that “it is impossible to produce a full-blown, stable or consensual inventory of a stock of knowledge that well-informed members should hold.” The instability of such an inventory has been a fact of life for eons (e.g., the need to know Latin in order to read nearly anything of importance vanished long ago). Likewise, the impossibility of a “full blown” inventory is not new; that became evident well before the age of information. What is new is the extraordinary difficulty in achieving consensus on even small parts of this inventory.

It has become increasingly difficult to be a well-informed citizen on a variety of important issues, and these issues are therefore difficult to discuss in general public forums, let alone dinner-table conversation. Along with the disappearance of the informed citizen, we have witnessed the disappearance of the public intellectual. What have replaced both of these are the specialist and the celebrity. We turn to specialists to tell us what to believe; we turn to celebrities to tell us what to care about.

One of Ungar’s main points is that we haven’t ended up with a knowledge society, but only a knowledge economy. The key aspects of this economy are that knowledge and information are multiplier resources, whereas interest is bounded and attention is strictly zero-sum. Public ignorance of key issues and reliance on specialists is the norm, whereas pockets and snippets of widely shared public knowledge become the exception. Thus, we live not only in a risk society but, increasingly, in an ignorance society.

Written by michaelsmithson

December 14, 2010 at 12:45 pm

“Negative knowledge”: From Wicked Problems and Rude Surprises to Mathematics

leave a comment »

It is one thing to know that we don’t know, but what about knowing that we can never know something? Karin Knorr-Cetina (1999) first used the term negative knowledge to refer to knowledge about the limits of knowledge. This is a type of meta-knowledge, and is a special case of known unknowns. Philosophical interest in knowing what we don’t know dates back at least to Socrates—certainly long before Donald Rumsfeld’s prize-winning remark on the subject. Actually, Rumsfeld’s “unknown unknowns” were articulated in print much earlier by philosopher Ann Kerwin, whose 1993 paper appeared along with mine and others in a special issue of the journal Science Communication as an outcome of our symposium on “Ignorance in Science” at the AAAS meeting in Boston earlier that year. My 1989 coinage, meta-ignorance, is synonymous with unknown unknowns.

There are plenty of things we know that we cannot know (e.g., I cannot know my precise weight and height at the moment I write this), but why should negative knowledge be important? There are at least three reasons. First, negative knowledge tells us to put a brake on what would otherwise be a futile wild goose chase for certainty. Second, some of the things we cannot know we might consider important to know, and negative knowledge humbles us by highlighting our limitations. Third, negative knowledge about important matters may be contestable: we might disagree with others about it.

Let’s begin with the notion that negative knowledge instructs us to cease inquiry. On the face of it, this would seem a good thing: why waste effort and time on a question that you know cannot be answered? Peter Medawar (1967) famously coined the aphorism that science is the “art of the soluble.” A commonsensical inference follows that if a problem is not soluble then it isn’t a scientific problem and so should be banished from scientific inquiry. Nevertheless, quite apart from the logical flaw in this inference, over-subscribing to this kind of negative-knowledge characterization of science exacts a steep price.

First, there is what philosopher Jerome Ravetz (in the same journal issue and symposium as Ann Kerwin’s paper) called ignorance of ignorance. By this phrase Ravetz meant something slightly different from meta-ignorance or unknown unknowns. He observed that conventional scientific training systematically shields students from problems outside the soluble. As a result, they remain unacquainted with those problems, i.e., ignorant about scientific ignorance itself. The same charge could be laid against many other professions (e.g., engineering, law, medicine).

Second, by neglecting unsolvable problems scientists exclude themselves from any input into what people end up doing about those problems. Are there problem domains where negative knowledge defines the criteria for inclusion? Yes: wicked problems and rude surprises. The characteristics of wicked problems were identified in the classic 1973 paper by Rittel and Webber, and most of these referred to various kinds of negative knowledge. Thus, the very definition and scope of wicked problems are unresolvable; such problems have no definitive solutions; there are no ultimate tests of whether a solution works; every wicked problem is unique; and there are no opportunities to learn how to deal with them by trial-and-error. Claimants to the title of “wicked problem” include how to craft policy responses to climate change, how to combat terrorism, how to end poverty, and how to end war.

Rude surprises are not always wicked problems but nonetheless are, as Todd La Porte describes them in his 2005 paper, “unexpected, potentially overwhelming circumstances that are likely to deliver punishing blows to human life, to political or economic viability, and/or to environmental integrity” (pg. 2). Financial advisors and traders around the world no doubt saw the most recent global financial crisis as a rude surprise.

As Matthias Gross (2010) points out at the beginning of his absorbing book, “ignorance and surprise belong together.” So it should not be, well, surprising that in an uncertain world we are in for plenty of surprises. But why are we so unprepared for them? Two important reasons are confirmation bias and the Catch-All Underestimation Bias (CAUB). Confirmation bias is the tendency to be more interested in, and to pay more attention to, information that is likely to confirm what we already know or believe. As Raymond Nickerson’s 1998 review sadly informs us, this tendency operates unconsciously even when we’re not trying to defend a position or bolster our self-esteem. The CAUB is the tendency to underestimate the likelihood that something we’ve never seen before will occur. The authors of the classic 1978 study that first described the CAUB pointed out that it’s an inescapable “out of sight, out of mind” phenomenon: after all, how can you keep in sight something that has never occurred? And the final sting in the tail is that clever people and domain experts (e.g., scientists, professionals) suffer from both biases just as the rest of us do.

Now let’s move to the second major issue raised at the outset of this post: Not being able to know things we’d like to know. And let’s raise the stakes, from negative knowledge to negative meta-knowledge. Wouldn’t it be wonderful if we had a method of finding truths that was guaranteed not to steer us wrong? Possession of such a method would tame the wild seas of the unknown for us by providing the equivalent of an epistemic compass. Conversely, wouldn’t it be devastating if we found out that we never can obtain this method?

Early in the 20th century, mathematicians underwent the experience of expecting to find such a method and having their hopes dashed. They became among the first (and best) postmodernists. Their story has been told in numerous ways (even as a graphic novel), but for my money the best account is the late Morris Kline’s brilliant (1980) book, “Mathematics: The Loss of Certainty.” Here’s how Kline characterizes mathematicians’ views of their domain at the turn of the century:

“After many centuries of wandering through intellectual fog, by 1900 mathematicians had seemingly imparted to their subject the ideal structure… They had finally recognized the need for undefined terms; definitions were purged of vague or objectionable terms; the several branches were founded on rigorous axiomatic bases; and valid, rigorous, deductive proofs replaced intuitively or empirically based conclusions… mathematicians had cause to rejoice.” (pg. 197)

The tantalizing prospect before them was to establish the consistency and completeness of mathematical systems. Roughly speaking, consistency amounts to a guarantee of never running into paradoxes (well-formed mathematical propositions that nevertheless are provably both true and false) and completeness amounts to a guarantee of never running into undecidables (well-formed mathematical propositions whose truth or falsity cannot be proven). These guarantees would tame the unknown for mathematicians; a proper axiomatic foundation would ensure that any proposition derived from it would be provably true or false.
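
For readers who like symbols, the standard textbook formalization of those two guarantees for a theory T runs as follows (my gloss, not Kline’s wording):

```latex
% Consistency: T never proves both a proposition and its negation.
% Completeness: T decides every well-formed proposition one way or the other.
\begin{align*}
\text{Consistency:}\quad  & \neg\exists\,\varphi\; \bigl[(T \vdash \varphi) \wedge (T \vdash \neg\varphi)\bigr] \\
\text{Completeness:}\quad & \forall\,\varphi\; \bigl[(T \vdash \varphi) \vee (T \vdash \neg\varphi)\bigr]
\end{align*}
```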

The famous 1931 paper by Kurt Gödel denied this paradise forever. He showed that any mathematical theory adequate to deal with whole numbers, if consistent, must be incomplete. He also showed that the consistency of such a theory cannot be established by the logical principles in use by the several foundational schools of mathematics. So consistency would have to be determined by other methods and, if attained, its price would be incompleteness. But is there a way to ascertain which mathematical propositions are undecidable and which are provable? Alan Turing’s 1936 paper on “computable numbers” (which, along the way, invented Turing machines) showed that the answer is “no.” One consequence of these results is that, instead of a foundational consensus, there can be divergent schools of mathematics, each legitimate and each adopted as a matter of preference. Here we have severe and definitive negative knowledge in an area that, to most people, still epitomizes certitude.
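
Turing’s argument is often illustrated with a self-referential sketch along the following lines. This is a standard pedagogical rendering under the assumption (for contradiction) that a perfect halting decider exists; it is not drawn from Turing’s paper, and the function names are my own:

```python
# A sketch of Turing's diagonal argument. Suppose, for contradiction, that a
# perfect halting decider 'halts' existed; no such function can actually be written.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("Turing showed no such general decider can exist")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about running
    # 'program' on its own source.
    if halts(program, program):
        while True:        # predicted to halt, so loop forever
            pass
    return "halted"        # predicted to loop forever, so halt immediately

# Asking whether contrary(contrary) halts yields a contradiction either way,
# so a total, correct 'halts' cannot exist.
```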

“Loss of certainty” themes dominated high-level discourse in various intellectual and professional domains throughout the 20th century. Physics is perhaps the best-known example, but such themes, and fascinating debates around them, can be found in many other disciplines. To give one example, historians Ann Curthoys and John Docker’s 2006 book “Is History Fiction?” begins by identifying three common responses to the question in its title: relativists who answer in the affirmative, foundationalists who insist that history is well grounded in evidence after all, and a third (and, they claim, largest) puzzled group who ask, “well, is it?” To give just one more, I’m a mathematical modeler in a discipline where various offspring of the “is psychology a science?” question are seriously debated. In particular, I and others (e.g., here and here) regard the jury as still out on whether there are (m)any quantifiable psychological attributes. Some such attributes can be rank-ordered, perhaps, but quantified? Good question.

Are there limits to negative knowledge? In other words, is there such a thing as negative negative-knowledge? It turns out that there is, mainly in the Gödelian realm of self-referential statements. For example, we cannot believe that we currently hold a false belief; otherwise we’d be compelled to disbelieve it. There are also limits to the extent to which we can attribute erroneous belief formation to ourselves. Philosophers Andy Egan and Adam Elga laid these out in their delightfully titled 2005 paper, “I Can’t Believe I’m Stupid.” According to them, I can believe that in some domains my way of forming beliefs goes wrong all of the time (e.g., that my sense of direction is invariably wrong), but I cannot believe that every belief I ever form goes wrong, because that meta-belief would undermine itself.

Dealing with wicked problems and rude surprises requires input from multiple stakeholders, encompassing their perspectives, values, priorities, and (possibly non-scientific) ways of knowing. Likewise, there is no algorithm or sure-fire method for anticipating or forecasting rude surprises or Nassim Taleb’s “black swans.” These are exemplars of insoluble problems beyond the ken of science. But surely none of this implies that input from experts is useless or beside the point. So, are there ways of educating scientists, other experts, and professionals so that they will be less prone to Ravetz’s ignorance of ignorance? And what about the rest of us: are there ways we can combat confirmation bias and the CAUB? Are there effective methods for dealing with wicked problems or rude surprises? Ah, grounds for a future post!
