ignorance and uncertainty

All about unknowns and uncertainties


Writing about “Agnotology, Ignorance and Uncertainty”


From time to time I receive invitations to contribute to various “encyclopedias.” Recent examples include an entry on “confidence intervals” in the International Encyclopedia of Statistical Science (Springer, 2010) and an entry on “uncertainty” in the Encyclopedia of Human Behavior (Elsevier, 1994, 2012). The latter link goes to the first (1994) edition; the second edition is due out in 2012. I’ve duly updated and revised my 1994 entry for the 2012 edition.

Having been raised by a librarian (my mother worked in the Seattle Public Library for 23 years), I’m a believer in the value of good reference works. So, generally I’m willing to accept invitations to contribute to them. These days there is a niche market even for non-digital works of this kind, and of course the net has led to numerous hybrid versions.

Despite the fact that such invitations are regarded as markers of professional esteem, they don’t count for much in the university system where I work because they aren’t original research publications. Same goes for textbooks. Thus, for my younger academic colleagues, writing encyclopedia entries or, worse still, writing textbooks actually can harm their careers. They understandably avoid doing so, which leaves it to older academics like me.

Some of these encyclopedias have interesting moments on the world stage. The International Encyclopedia of Statistical Science has been said to have set a record for the number of countries involved (105, via the 619 contributing authors). Its editors were nominated for the 2011 Nobel Peace Prize, apparently the first time any statisticians had received this honor. Meanwhile, V.S. Ramachandran, editor of the Encyclopedia of Human Behavior, was selected by Time Magazine as one of the world’s most influential people of 2011.

However, I digress. The Sage Encyclopedia of Philosophy and the Social Sciences is an intriguing proposal for a reference work that bridges these two intellectual cultures. I regard this aim as laudable, and I’m fortunate insofar as the areas where I work have a tradition of dialogs linking philosophers and social scientists. So, I was delighted to be asked to provide an entry on “agnotology, ignorance and uncertainty”. There is, however, a bit of a catch.

The guidelines for contributors state that “Entries should be written at a level appropriate for students who do not have an extensive background either in philosophy or the social sciences and for academics from other disciplines… it is essential that a reader versed in philosophy only or mostly, or alternatively, in social sciences, should gain by reading entries that aim at expanding their knowledge of concepts and theories as these have developed in the complementary area.” All of this is supposed to be achieved for a treatment of “agnotology, ignorance and uncertainty” in just 1,000 words, with a short list of “further readings” at the end. All of my posts in this blog thus far exceed 1,000 words (gulp). Can I be sufficiently concise without butchering or omitting crucial content?

Here’s my first draft (word count: 1,018). See what you think.

AGNOTOLOGY, IGNORANCE AND UNCERTAINTY

“Agnotology” is the study of ignorance (from the Greek “agnosis”). “Ignorance,” “uncertainty,” and related terms refer variously to the absence of knowledge, doubt, and false belief. This topic has a long history in Western philosophy, rooted in the Socratic tradition. It has a considerably shorter and, until recently, sporadic treatment in the human sciences. This entry focuses on relatively recent developments within and exchanges between both domains.

A key starting-point is that anyone attributing ignorance cannot avoid making claims to know something about who is ignorant of what: A is ignorant from B’s viewpoint if A fails to agree with or show awareness of ideas which B defines as actually or potentially valid. A and B can be identical, so that A self-attributes ignorance. Numerous scholars thereby have noted the distinction between conscious ignorance (known unknowns, learned ignorance) and meta-ignorance (unknown unknowns, ignorance squared).

The topic has been beset with terminological difficulties, due to the scarcity and negative cast of terms referring to unknowns. Several scholars have constructed typologies of unknowns, in attempts to make explicit their most important properties. Smithson’s book, Ignorance and Uncertainty: Emerging Paradigms, pointed out the distinction between being ignorant of something and ignoring something, the latter being akin to treating something as irrelevant or taboo. Knorr-Cetina coined the term “negative knowledge” to describe knowledge about the limits of the knowable. Various authors have tried to distinguish reducible from irreducible unknowns.

Two fundamental concerns have been at the forefront of philosophical and social scientific approaches to unknowns. The first of these is judgment, learning and decision making in the absence of complete information. Prescriptive frameworks advise how this ought to be done, and descriptive frameworks describe how humans (or other species) do so. A dominant prescriptive framework since the second half of the 20th century is subjective expected utility theory (SEU), whose central tenet is that decisional outcomes are to be evaluated by their expected utility, i.e., the product of their probability and their utility (e.g., monetary value, although utility may be based on subjective appraisals). According to SEU, a rational decision maker chooses the option that maximizes her/his expected utility. Several descriptive theories in psychology and behavioral economics (e.g., Prospect Theory and Rank-Dependent Expected Utility Theory) have amended SEU to render it more descriptively accurate while retaining some of its “rational” properties.

The second concern is the nature and genesis of unknowns. While many scholars have treated unknowns as arising from limits to human experience and cognitive capacity, increasing attention has been paid recently to the thesis that unknowns are socially constructed, many of them intentionally so. Smithson’s 1989 book was among the earliest to take up the thesis that unknowns are socially constructed. Related work includes Robert Proctor’s 1995 Cancer Wars and Ulrich Beck’s 1992 Risk Society. Early in the 21st century this thesis has become more mainstream. Indeed, the 2008 edited volume bearing “agnotology” in its title focuses on how culture, politics, and social dynamics shape what people do not know.

Philosophers and social scientists alike have debated whether there are different kinds of unknowns. This issue is important because if there is only one kind then only one prescriptive decisional framework is necessary and it also may be the case that humans have evolved one dominant way of making decisions with unknowns. On the other hand, different kinds of unknowns may require distinct methods for dealing with them.

In philosophy and mathematics the dominant formal framework for dealing with unknowns has been one or another theory of probability. However, Max Black’s ground-breaking 1937 paper proposed that vagueness and ambiguity are distinguishable from each other, from probability, and also from what he called “generality.” The 1960’s and 70’s saw a proliferation of mathematical and philosophical frameworks purporting to encompass non-probabilistic unknowns, such as fuzzy set theory, rough sets, fuzzy logic, belief functions, and imprecise probabilities. Debates continue to this day over whether any of these alternatives are necessary, whether all unknowns can be reduced to some form of probability, and whether there are rational accounts of how to deal with non-probabilistic unknowns. The chief contenders currently include generalized probability frameworks (including imprecise probabilities, credal sets, belief functions), robust Bayesian techniques, and hybrid fuzzy logic techniques.

In the social sciences, during the early 1920’s Keynes distinguished between evidentiary “strength” and “weight,” while Knight similarly separated “risk” (probabilities are known precisely) from “uncertainty” (probabilities are not known). Ellsberg’s classic 1961 experiments demonstrated that people’s choices can be influenced by how imprecisely probabilities are known (i.e., “ambiguity”), and his results have been replicated and extended by numerous studies. Smithson’s 1989 book proposed a taxonomy of unknowns and his 1999 experiments showed that choices also are influenced by uncertainty arising from conflict (disagreeing evidence from equally credible sources); those results also have been replicated.

More recent empirical research on how humans process unknowns has utilized brain imaging methods. Several studies have suggested that Knightian uncertainty (ambiguity) and risk differentially activate the ventral systems that evaluate potential rewards (the so-called “reward center”) and the prefrontal and parietal regions, with the latter two becoming more active under ambiguity. Other kinds of unknowns have yet to be widely studied in this fashion but research on them is emerging. Nevertheless, the evidence thus far suggests that the human brain treats unknowns as if there are different kinds.

Finally, there are continuing debates regarding whether different kinds of unknowns should be incorporated in prescriptive decision making frameworks and, if so, how a rational agent should deal with them. There are several decisional frameworks incorporating ambiguity or imprecision, some of which date back to the mid-20th century, and recently at least one incorporating conflict as well. The most common recommendation for decision making under ambiguity amounts to a type of worst-case analysis. For instance, given a lower and upper estimate of the probability of event E, the usual advice is to use the lower probability for evaluating bets on E occurring but to use the upper probability for bets against E. However, the general issue of what constitutes rational behavior under non-probabilistic uncertainties such as ambiguity, fuzziness or conflict remains unresolved.

Further Readings

Bammer, G. and Smithson, M. (Eds.), (2008). Uncertainty and Risk: Multidisciplinary Perspectives. London: Earthscan.

Beck, U. (1999). World Risk Society. Oxford: Polity Press.

Black, M. (1937). Vagueness: An exercise in logical analysis. Philosophy of Science, 4, 427-455.

Gardenfors, P. and Sahlin, N.-E. (Eds.), (1988). Decision, Probability, and Utility: Selected Readings. Cambridge, UK: Cambridge University Press.

Proctor, R. and Schiebinger, L. (Eds.), (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer Verlag.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.

Making the Wrong Decision for the Right Reasons


There seems to be a widespread intuition that if we use a well-reasoned, evidence-based approach to making decisions under uncertainty then we’ll make the right decision most of the time. Sure, we’ll make some bad calls but the majority of the time we’ll get it right. Or will we?

Here’s an example from law enforcement. Suppose you’re the commanding officer in a local police jurisdiction, and you have to decide how to allocate resources to a missing person case. A worst-case scenario is that the missing person ends up a homicide. Although police are required to treat all missing persons cases seriously, as most do not involve foul play it would be grossly inefficient to treat all missing persons as potential homicides. So, if the missing person isn’t found within 24 hours, you’ll undertake a risk analysis, considering issues such as whether the circumstances are suspicious or out of character, or there is evidence of the commission of a crime.

What would be your best approach to this risk analysis, and how likely would you be to come to the right decision? A landmark UK study examined 32,705 cases of missing persons in the UK between 2000 and 2002, and determined that 0.6 percent were found dead, although not necessarily victims of homicide (Newiss, 2006). This is a very low percentage, and it turns out to be the source of a major headache for you as the commander responsible for deciding what resources to allocate to your case.

You have years of experience, wisdom handed down from seasoned investigators who came before you, and you’ve read the relevant literature. You know that where a missing person is found to have been a victim of foul play, risk factors include age and sex, involvement in prostitution, last being seen in a public place, and an absence of a history of suicide attempts or mental health problems.

So, you’re going to decide whether to allocate more resources to a missing persons case investigation based on some diagnostic criteria which I’ll denote by D. The criteria included in D are indicators that the missing person may have died. There are four commonly used criteria for evaluating how good D is:

  Sensitivity = P(D present|death)
  Specificity = P(D absent|no death)
  Positive predictive value = P(death|D present)
  Negative predictive value = P(no death|D absent)

The expressions on the right hand side of these equations are conditional probabilities. For instance, P(D present|death) is the probability that D is present given that the person has died. Sensitivity and specificity measure the ability of the model to detect the occurrence or absence (respectively) of deaths. The predictive values, on the other hand, tell us the probability of making a correct diagnosis (death versus no death) based on D.

Now, suppose D has a sensitivity of .99 and specificity of .99 (far better than can be obtained from the otherwise worthwhile predictors identified by Newiss). The next table shows how well D would perform in distinguishing between cases ending in death and cases not involving death.

          D present    D absent      Total
  Dead          196           2        198    Error-rate 0.01    Sensitivity 0.99
  Alive         325      32,182     32,507    Error-rate 0.01    Specificity 0.99
  Total         521      32,184     32,705

  Positive predictive value = 196/521 = 0.3762
  Negative predictive value = 32,182/32,184 = 0.999938

Because sensitivity is .99, D misses only .01*198 ≈ 2 cases involving deaths and correctly detects the remaining 196. Likewise, because specificity is .99, D is incorrectly present in .01*32507 ≈ 325 cases that do not involve death. That is, there are 325 missing persons with D present who will be found to be alive. But 325 is large compared to the number of correctly identified deaths (196), so positive predictive value is poor: P(death|D present) = 196/(196 + 325) = .376. The rate of incorrect positive diagnosis therefore is 1 – .376 = .624. If you, as commander, decided to allocate more resources to cases where D is present, you could expect to be wrong about 62% of the time.
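For readers who want to check these figures, here is a minimal sketch of the calculation in Python (the function and its name are my own illustration, not part of any standard package):

    # Positive and negative predictive value from sensitivity, specificity,
    # and the base rate of the condition (here, the proportion found dead).
    def predictive_values(sensitivity, specificity, base_rate):
        true_pos = sensitivity * base_rate                # P(D present and death)
        false_pos = (1 - specificity) * (1 - base_rate)   # P(D present and alive)
        true_neg = specificity * (1 - base_rate)          # P(D absent and alive)
        false_neg = (1 - sensitivity) * base_rate         # P(D absent and death)
        ppv = true_pos / (true_pos + false_pos)           # P(death | D present)
        npv = true_neg / (true_neg + false_neg)           # P(alive | D absent)
        return ppv, npv

    # Missing persons example: 198 deaths out of 32,705 cases.
    print(predictive_values(0.99, 0.99, 198 / 32705))
    # -> roughly (0.376, 0.99994), matching the table above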

Can these uncertainties be reduced? An obvious and frequently recommended remedy is further investigation into factors that may predict the likelihood of a missing person ending up dead and, conditional on death, being a homicide victim. These investigations could be combined with survival analysis of the kind employed by Newiss, to determine whether there is a relationship between the length of time a person has gone missing and the likelihood that the person ends up dead.

But how effective can we expect these remedies to be? Note that improving sensitivity would have only a negligible effect on positive predictive value. To get to the point where positive predictive value was an even-money bet (.5) would require specificity to be .994. To move positive predictive value to .9 would require specificity to be .9993. Thus the diagnostic criteria would have to be incredibly accurate before the commander could avoid devoting considerable resources to investigations that turn out not to warrant them.
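Those specificity figures come straight from rearranging the definition of positive predictive value; here is the same rearrangement as a short sketch (again my own illustration, in the same hypothetical style as above):

    # Specificity needed to reach a target positive predictive value, holding
    # sensitivity and the base rate fixed. Obtained by rearranging
    # PPV = sens*p / (sens*p + (1 - spec)*(1 - p)) for spec.
    def required_specificity(target_ppv, sensitivity, base_rate):
        false_pos_rate = sensitivity * base_rate * (1 / target_ppv - 1) / (1 - base_rate)
        return 1 - false_pos_rate

    base_rate = 198 / 32705
    print(required_specificity(0.5, 0.99, base_rate))  # about 0.994
    print(required_specificity(0.9, 0.99, base_rate))  # about 0.9993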

These are unachievable standards. Police will inevitably face a considerable error-rate in making resource allocation decisions regarding missing persons cases. Of course, this does not imply that improving predictions of homicide in missing persons cases is futile, but simply tells us not to expect such improvements to raise the probability of a correct decision to a desirable level.

Mind you, it isn’t all gloom and doom. If we consider the false negative problem (e.g., a Britt Lapthorne outcome) it may be possible to obtain a reasonably high negative predictive value without unrealistically accurate predictors. In our unrealistic scenario (with sensitivity and specificity both at .99), negative predictive value is .99994. If sensitivity and specificity both were .5 (i.e., coin-toss levels) then negative predictive value would be about 16,253/16,352 = .994. You, as commander, are very unlikely to end up with a Britt Lapthorne case which you stand accused of having failed to treat with due diligence. Instead, you are very likely to be chastised by higher-ups and perhaps the media for “wasting” money and resources on cases where the missing person turned up alive and well.

There is an analogous problem in preventative medical testing, where the disorder to be detected occurs at a low rate in the population. For example, pregnant women may wish to test for the possibility that their unborn baby has Downs Syndrome. According to an Australian government health assessment document released in 2002, when used as a single modality, the standard screening by measurement of nuchal translucency in the first trimester has a detection rate for Downs of approximately 73%-82% at a false positive rate of 5%-8%. Additional ultrasound cues can further increase detection rates for Down syndrome to more than 95%.

The next table shows the most optimistic scenario according to those figures, i.e., sensitivity and specificity of 95%. At the time, about 12.8 per 10,000 births yielded a baby with Downs, so I’ve included that rate in the table. Downs Syndrome, thankfully, is rare. The result, as you can see, is a positive predictive value of just 2.38%. Given a test result that says the baby has Downs, the probability that it really does have Downs is about 2.4 chances in 100. If these procedures were widely used, there would be many needlessly upset pregnant women—about 97.6% of those whose combined tests came back positive.

           Positive    Negative      Total
  Downs         122           6        128    Error-rate 0.05    Sensitivity 0.95
  Normal      4,994      94,878     99,872    Error-rate 0.05    Specificity 0.95
  Total       5,116      94,884    100,000

  Positive predictive value = 122/5,116 = 0.0238
  Negative predictive value = 94,878/94,884 = 0.9999
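Plugging these figures into the hypothetical predictive_values sketch from the missing persons example gives essentially the table’s result:

    # Down syndrome example: base rate of 128 per 100,000 births.
    print(predictive_values(0.95, 0.95, 128 / 100000))
    # -> roughly (0.024, 0.99993)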

In July last year there was a furore over a study published in the Journal of the American Medical Association. The study found that of 2176 participants free of HIV infection who received a vaccine product, 908 tested positive even though they had been exposed to the vaccine, not (of course) the virus. That’s a false positive rate of about 41.7%. Now, suppose a successful vaccine is developed but it also has this reactivity problem. In any Western country where the rate of HIV infection is low, the combination of a large proportion of the population being vaccinated and tested could be a major disaster. This is not to say that an HIV vaccine would be a bad idea; the point is that it could play havoc with HIV detection.

The chief difference between the medical preventative testing quandary and the police commander’s problem is that the negative consequences of the wrong diagnosis fall on the patient instead of the decision maker. Yet this issue is seldom aired in public debates regarding medical testing. Perhaps understandably, the bulk of medical research effort in this domain goes into devising more accurate tests. But hang on—In the Downs test scenario, even with a sensitivity rate of 100% the specificity would have to be 99.87% to raise the positive predictive value to a mere 50%. For a positive predictive value of 90%? Specificity would have to be about 99.99%, a crazily impossible target. Realistically, the tests will never be accurate enough to avoid the problem posed by low positive predictive values for rare disorders.
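The required_specificity sketch above makes the same point directly (illustrative numbers only):

    # Down syndrome screening with perfect sensitivity: specificity needed for
    # a given positive predictive value at a base rate of 128 per 100,000.
    print(required_specificity(0.5, 1.0, 128 / 100000))  # about 0.9987
    print(required_specificity(0.9, 1.0, 128 / 100000))  # about 0.99986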

What can a decision maker do? A final point to all this is that in settings where you’re doomed to a high decisional error-rate despite using the best available methods, it may be better to direct your energies toward handling the flak instead of persisting in a futile quest for unattainably accurate predictors or diagnostic cues. The chief difficulty may be educating your clientele, constituency, or bosses that it really is possible to be making the best possible decisions and still getting them wrong most of the time.

Written by michaelsmithson

May 8, 2011 at 2:55 pm

I Can’t Believe What I Teach


For the past 34 years I’ve been compelled to teach a framework that I’ve long known is flawed. A better framework exists and has been available for some time. Moreover, I haven’t been forced to do this by any tyrannical regime or under threats of great harm to me if I teach this alternative instead. And it gets worse: I’m not the only one. Thousands of other university instructors have been doing the same all over the world.

I teach statistical methods in a psychology department. I’ve taught courses ranging from introductory undergraduate through graduate levels, and I’m in charge of that part of my department’s curriculum. So, what’s the problem—Why haven’t I abandoned the flawed framework for its superior alternative?

Without getting into technicalities, let’s call the flawed framework the “Neyman-Pearson” approach and the alternative the “Bayes” approach. My statistical background was formed as I completed an undergraduate degree in mathematics during 1968-72. My first courses in probability and statistics were Neyman-Pearson and I picked up the rudiments of Bayes toward the end of my degree. At the time I thought these were simply two valid alternative ways of understanding probability.

Several years later I was a newly-minted university lecturer teaching introductory statistics to fearful and sometimes reluctant students in the social sciences. The statistical methods used in social science research were Neyman-Pearson, so of course I taught Neyman-Pearson. Students, after all, need to learn to read the literature of their discipline.

Gradually, and through some of my research into uncertainty, I became aware of the severe problems besetting the Neyman-Pearson framework. I found that there was a lengthy history of devastating criticisms raised against Neyman-Pearson even within the social sciences, criticisms that had been ignored by practising researchers and gatekeepers to research publication.

However, while the Bayesian approach may have been conceptually superior, in the late ‘70’s through early ‘80’s it suffered from mathematical and computational impracticalities. It provided few usable methods for dealing with complex problems. Disciplines such as psychology were held in thrall to Neyman-Pearson by a combination of convention and the practical requirements of complex research designs. If I wanted to provide students or, for that matter, colleagues who came to me for advice, with effective statistical tools for serious research then usually Neyman-Pearson techniques were all I could offer.

But what to do about teaching? No university instructor takes a formal oath to teach the truth, the whole truth, and nothing but the truth; but for those of us who’ve been called to teach it feels as though we do. I was sailing perilously close to committing Moore’s Paradox in the classroom (“I assert Neyman-Pearson but I don’t believe it”).

I tried slipping in bits and pieces alerting students to problematic aspects of Neyman-Pearson and the existence of the Bayesian alternative. These efforts may have assuaged my conscience but they did not have much impact, with one important exception. The more intellectually proactive students did seem to catch on to the idea that theories of probability and statistics are just that—Theories, not god-given commandments.

Then Bayes got a shot in the arm. In the mid-80’s some powerful computational techniques were adapted and developed that enabled this framework to fight at the same weight as Neyman-Pearson and even better it. These techniques sail under the banner of Markov chain Monte Carlo methods, and by the mid-90’s software was available (free!) to implement them. The stage was set for the Bayesian revolution. I began to dream of writing a Bayesian introductory statistics textbook for psychology students that would set the discipline free and launch the next generation of researchers.
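For readers who have never seen what these methods actually do, here is a toy sketch of the simplest member of the family, a Metropolis sampler for the mean of a small dataset. It is my own illustrative example, not code from any of the free packages alluded to above:

    import math
    import random

    # Toy data: five observations assumed to come from a normal distribution
    # with unknown mean mu and known standard deviation 1.
    data = [1.2, 0.7, 1.9, 1.4, 0.3]

    def log_posterior(mu):
        # Standard normal prior on mu plus the normal log-likelihood
        # (dropping constants, which cancel in the acceptance ratio).
        log_prior = -0.5 * mu ** 2
        log_likelihood = sum(-0.5 * (x - mu) ** 2 for x in data)
        return log_prior + log_likelihood

    def metropolis(n_samples, start=0.0, step=0.5):
        samples, current = [], start
        for _ in range(n_samples):
            proposal = current + random.gauss(0, step)
            log_ratio = log_posterior(proposal) - log_posterior(current)
            # Accept the proposal with probability min(1, posterior ratio).
            if random.random() < math.exp(min(0.0, log_ratio)):
                current = proposal
            samples.append(current)
        return samples

    draws = metropolis(20000)
    print(sum(draws) / len(draws))  # close to the exact posterior mean, about 0.92

The trick is that the posterior only needs to be known up to a constant; a toy example like this merely hints at what the real packages do with far more complex models.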

It didn’t happen that way. Psychology was still deeply mired in Neyman-Pearson and, in fact, in a particularly restrictive version of it. I’ll spare you the details other than saying that it focused, for instance, on whether the researcher could reject the claim that an experimental effect was nonexistent. I couldn’t interest my colleagues in learning Bayesian techniques, let alone undergraduate students.

By the late ‘90’s a critical mass of authoritative researchers convinced the American Psychological Association to form a task-force to reform statistical practice, but this reform really amounted to shifting from the restrictive Neyman-Pearson orientation to a more liberal one that embraced estimating how big an experimental effect is and setting a “confidence interval” around it.

It wasn’t the Bayesian revolution, but I leapt onto this initiative because both reforms were a long stride closer to the Bayesian framework and would still enable students to read the older Neyman-Pearson dominated research literature. So, I didn’t write a Bayesian textbook after all. My 2000 introductory textbook was, so far as I’m aware, one of the first to teach introductory statistics to psychology students from a confidence interval viewpoint. It was generally received well by fellow reformers, and I got a contract to write a kind of researcher’s confidence interval handbook that came out in 2003. The confidence interval reform in psychology was under way, and I’d booked a seat on the juggernaut.

Market-wise, my textbook flopped. I’m not singing the blues about this, nor do I claim sour grapes. For whatever reasons, my book just didn’t take the market by storm. Shortly after it came out, a colleague mentioned to me that he’d been at a UK conference with a symposium on statistics teaching where one of the speakers proclaimed my book the “best in the world” for explaining confidence intervals and statistical power. But when my colleague asked if the speaker was using it in the classroom he replied that he was writing his own. And so better-selling introductory textbooks continued to appear. A few of them referred to the statistical reforms supposedly happening in psychology but the majority did not. Most of them are the nth edition of a well-established book that has long been selling well to its set of long-serving instructors and their students.

My 2003 handbook fared rather better. I had put some software resources for computing confidence intervals on a webpage and these got a lot of use. These, and my handbook, got picked up by researchers and their graduate students. Several years on, the stuff my scripts did started to appear in mainstream commercial statistics packages. It seemed that this reform was occurring mainly at the advanced undergraduate, graduate and researcher levels. Introductory undergraduate statistical education in psychology remained (and still remains) largely untouched by it.

Meanwhile, what of the Bayesian movement? In this decade, graduate-level social science oriented Bayesian textbooks began to appear. I recently reviewed several of them and have just sent off an invited review of another. In my earlier review I concluded that the market still lacked an accessible graduate-level treatment oriented towards psychology, a gap that may have been filled by the book I’ve just finished reviewing.

Have I tried teaching Bayesian methods? Yes, but thus far only in graduate-level workshops, and on my own time (i.e., not as part of the official curriculum). I’ll be doing so again in the second half of this year, hoping to recruit some of my colleagues as well as graduate students. Next year I’ll probably introduce a module on Bayes for our 4th-year (Honours) students.

It’s early days, however, and we remain far from being able to revamp the entire curriculum. Bayesian techniques still rarely appear in the mainstream research literature in psychology, and so students still need to learn Neyman-Pearson to read that literature with a knowledgeably critical eye. A sea-change may be happening, but it’s going to take years (possibly even decades).

Will I try writing a Bayesian textbook? I already know from experience that writing a textbook is a lot of time and hard work, often for little reward. Moreover, in many universities (including mine) writing a textbook counts for nothing. It doesn’t bring research money, it usually doesn’t enhance the university’s (or the author’s) scholarly reputation, it isn’t one of the university’s “performance indicators,” and it seldom brings much income to the author. The typical university attitude towards textbooks is as if the stork brings them. Writing a textbook, therefore, has to be motivated mainly by a passion for teaching. So I’m thinking about it…

What are the Functions of Innumeracy?


Recently a colleague asked me for my views on the social and psychological functions of innumeracy. He aptly summarized the heart of the matter:

“I have long-standing research interests in mathematics anxiety and adult numeracy (or, more specifically, innumeracy, including in particular what I term the ‘adult numeracy conundrum’ – that is, that despite decades of investment in programs to raise adult numeracy rates little, if any, measurable improvements have been achieved. This has led me to now consider the social functions performed by this form of ignorance, as its persistence suggests the presence of underlying mechanisms that provide a more valuable pay-off than that offered by well-meaning educators…)”

This is an interesting deviation from the typical educator’s attack on innumeracy. “Innumeracy” apparently was coined by cognitive scientist Douglas Hofstadter but it was popularized by mathematician John Allen Paulos in his 1989 book, Innumeracy: Mathematical Illiteracy and its Consequences. Paulos’ book was a (IMO, deserved) bestseller and has gone through a second edition. Most educators’ attacks on innumeracy do what Paulos did: Elaborate the costs and dysfunctions of innumeracy, and ask what we can blame for it and how it can be overcome.

Paulos’ list of the consequences of innumeracy includes:

  1. Inaccurate media reporting and inability of the public to detect such inaccuracies
  2. Financial mismanagement (e.g., of debts), especially regarding the misunderstanding of compound interest (see the brief sketch after this list)
  3. Loss of money on gambling, in particular caused by gambler’s fallacy
  4. Belief in pseudoscience
  5. Distorted assessments of risks
  6. Limited job prospects
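To make item 2 concrete, here is a small illustration with invented numbers (the balance and interest rate are hypothetical, chosen only to show how far compound growth outruns the “simple interest” intuition):

    # Hypothetical example: a $2,000 credit card balance at a 20% annual rate,
    # left unpaid for 10 years and compounding monthly.
    balance, annual_rate, years = 2000.0, 0.20, 10
    monthly_rate = annual_rate / 12
    compounded = balance * (1 + monthly_rate) ** (12 * years)
    naive_guess = balance * (1 + annual_rate * years)  # "simple interest" intuition
    print(round(compounded))   # roughly 14,500
    print(round(naive_guess))  # 6,000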

These are bad consequences indeed, but mainly for the innumerate. Consequences 2 through 6 also are windfalls for those who exploit the innumerate. Banks, retailers, pyramid selling fraudsters, and many others either legitimately or illicitly take advantage of consequence 2. Casinos, bookies, online gambling agencies, investment salespeople and the like milk the punters of their funds on the strength of consequences 3 and 5. Peddlers of various religions, magical and pseudo-scientific beliefs batten on consequence 4, and of course numerous employers can keep the wages and benefits low for those trapped by consequence 6.

Of course, the fact that all these interests are served doesn’t imply that innumeracy is created and maintained by a vast conspiracy of bankers, retailers, casino owners, and astrologers. They’re just being shrewd and opportunistic. Nevertheless, these benefits do indicate that we should not expect the beneficiaries to be in the vanguard of a campaign to improve, say, public understanding of compound interest or probability.

Now let’s turn to Paulos’ accounts of the “whodunit” part of innumeracy: What creates and maintains it? A chief culprit is, you guessed it, poor mathematical education. My aforementioned colleague and I would agree: For the most part, mathematics is badly taught, especially at primary and secondary school levels. Paulos, commendably, doesn’t beat up the teachers. Instead, he identifies bad curricula and a lack of mathematical education in teacher training as root causes.

On the other hand, he does blame “us,” that is, the innumerate and even the numerate. The innumerate are castigated for demanding personal relevance and an absence of anxiety in their educations. According to Paulos, personalizing the universe yields disinterest in (depersonalized?) mathematics and science generally, and an unhealthy gullibility for pseudosciences such as astrology and numerology. He seems to have skated onto thin ice here. He doesn’t present empirical evidence for his main claim, and there are plenty of examples throughout history of numerate or even mathematically sophisticated mystics (the Pythagoreans, for one).

Paulos also accuses a subset of the innumerate of laziness and lack of discipline, but the ignorance of the undisciplined would surely extend beyond innumeracy. If we want instances of apathy that actually sustain innumeracy, let’s focus on public institutions that could militate against it but don’t. There, we shall encounter social and political forces that help perpetuate innumeracy, not via any conspiracy or even direct benefits, but simply by self-reinforcing feedback loops.

As the Complete Review points out “… the media isn’t much interested in combating innumeracy (think of how many people got fired after all the networks prematurely declared first Gore then Bush the winner in Florida in the 2000 American presidential election – none…” Media moguls and their editors are interested in selling stories, and probably will become interested in getting the numbers right only when the paying public starts objecting to numerical errors in the media. An innumerate public is unlikely to object, so the media and the public stagnate in a suboptimal but mutually reinforcing equilibrium.

Likewise, politicians don’t want a numerate electorate any more than they want a politically sophisticated one, so elected office-holders also are unlikely to lead the charge to combat innumeracy. Michael Moore, a member of the Australian Capital Territory Legislative Assembly for four terms, observes that governments usually avoid clear, measurable goals for which they can be held accountable (pg. 178, in a chapter he contributed to Gabriele Bammer’s and my book on uncertainty). Political uses of numbers are mainly rhetorical or for purposes of control. Again, we have a mutually reinforcing equilibrium: A largely innumerate public elects office-holders who are happy for the public to remain innumerate because that’s partly what got them elected.

I’ve encountered similar feedback-loops in academia, beginning with my experiences as a math graduate doing a PhD in a sociology department. The ideological stances taken by some departments of cultural studies, anthropology, and sociology position education for numeracy as aligned with so-called “positivist” research methods, against which they are opposed. The upshot is that courses with statistical or other numeracy content are devalued and students are discouraged from taking them. A subset of the innumerate graduates forms a succeeding generation of innumerate academics, and on it goes.

Meanwhile, Paulos blames the rest of us for perpetuating romantic stereotypes in which math and science are spoilers of the sublime, and therefore to be abhorred by anyone with artistic or spiritual sensibilities. So, he is simultaneously stereotyping the innumerate and railing against us for indulging another stereotype (No disrespect to Paulos; I’ve been caught doing this kind of thing often enough).

Lee Dembart, then of the Los Angeles Times, observed that “Paulos is very good at explaining all of this, though sometimes with a hectoring, bitter tone, for which he apologizes at the very end.” Unfortunately, hectoring people, focusing attention on their faults, or telling them they need to work harder “for their own good” seldom persuades them. I’ve taught basic statistics to students in the human sciences for many years. Many of these students dread a course in stats. They’re in it only because it’s a required course, telling them it’s for their own good isn’t going to cut any ice with them, and blaming them for finding statistics difficult or off-putting is a sure-fire way of turning them off entirely.

Now that we all have to be here, I propose to them, let’s see how we can make the best of it. I teach them how to lie with or abuse statistics so that they can gain a bit more power to detect when someone is trying to pull the proverbial wool over their eyes. This also opens the way to considering ethical and moral aspects of statistics. Then I try to link the (ab)uses of stats with important issues and debates in psychology. I let them in on some of psychology’s statistical malpractices (and there are plenty), so they can start detecting these for themselves and maybe even become convinced that they could do better. I also try to convey the view that data analysis is not self-automating; it requires human judgment and interpretive work.

Does my approach work? Judging from student evaluations, a fair amount of the time, but by no means always. To be sure, I get kudos for putting on a reasonably accessible, well-organized course and my tutors get very positive evaluations from the students in their tutorials. Nevertheless, there are some who, after the best efforts by me and my tutors, still say they don’t get it and don’t like it. And many of these reluctant students are not poor students—Most have put in the work and some have obtained good marks. Part of their problem may well be cognitive style. There is a lot of evidence that it is difficult for the human mind to become intuitively comfortable with probability, so those who like intuitive understanding might find statistics and probability aversive.

It’s also possible that my examples and applications simply aren’t motivating enough for these students. Despite the pessimism I share with my colleague, I think there has been a detectable increase in basic statistical literacy both in the public and the media over the past 30 years. It is mainly due to unavoidably statistical aspects of issues that the public and media both deem important (e.g., medical advances or failures, political polls, environmental threats). Acquiring numeracy requires effort and that, in turn, takes motivation. Thank goodness I don’t have the job of persuading first-year undergraduates to voluntarily sign up for a basic statistics course.

Written by michaelsmithson

March 15, 2011 at 1:14 pm

Ignorance as a Public Problem


It’s my last post for this year, and I’m going to mine Sheldon Ungar’s 2008 paper for more material. Is ignorance a public problem? If so, what kind is it, and are there any solutions to it? Ungar not only declares ignorance to be a social problem, but also claims it is “under-identified” and difficult to “sell” as a social problem.

The latter claim may seem a tad puzzling, given the column inches and tomes devoted to exposing how little most of us know about science, for example. Commentators such as Jesse Kluver and books such as Mooney and Kirshenbaum’s 2009 opus leave little doubt that scientific illiteracy is regarded with alarm in at least some reasonably well-informed quarters. Likewise, for more than two decades popularizers such as John Allen Paulos have been warning us about the dangers and costs of innumeracy through their best-selling books. In fact, some people think he invented the term (he points out that he got it from the OED). And, of course, the notion that “those who cannot remember the past are condemned to repeat it” is Santayana’s famous aphorism, although the idea behind it did not originate with him.

These lacunae are the sort of thing that Ungar calls “functional knowledge deficits,” because they pose dangers or costs to those afflicted by them. But there’s another brand of ignorance-as-a-public-problem, namely one of the most successful exports from psychology and behavioral economics. These could be called “functional cognitive deficits,” but usually go under the names of cognitive “biases” or “illusions.” A fairly extensive (and reasonably accurate) list of these identifies more than 100 of them. Producing books about these has become a cottage industry during the past two decades (e.g., from Gilovich 1991 to Ariely 2008).

The cognitive bias problem is hard to sell for the ironic reason that one of the cognitive biases most of us suffer from is an inflated estimate of our own abilities and a conviction that we perceive reality more or less accurately and completely. This goes for me too, by the way. Moreover, we tend to be a bit testy when our deficiencies in thinking and decision making are pointed out to us. I’ve observed this in friends, colleagues and students. Most of us are relaxed and comfortable with being taken in by visual illusions, or with finding out (well, up to a point) that our memory is less than perfect. But our hackles become decidedly raised when tests of reasoning or judgment reveal us to be logical blunderers or deluded about probability.

Worse still, many of our cognitive biases or illusions turn out to be exceedingly difficult to get rid of. Unlike knowledge deficits, which can be overcome by absorbing the requisite information, some cognitive habits appear to be stubbornly hardwired. It appears that this kind of ignorance problem is more difficult to solve than the knowledge-deficit kind.

But even the knowledge-deficit version of ignorance lacks a straightforward solution, because there’s far too much important knowledge for us to absorb and retain. I’ve been in the education business for 33 years, so clearly I’m a fan of the notion that, ceteris paribus, more knowledge is a Good Thing. Nevertheless, I’m aware that we educators (and other would-be social influence merchants) face a common-pool social dilemma. In the 2008 book I co-edited with Gabriele Bammer I’ve called it the “persuasion-versus-information-glut dilemma.” All of us with an educational or persuasive interest will want to impose our messages on the public. I teach stats to psychology students, so of course I think that all university students should get an introduction to stats. A specialist in children’s literature once seriously suggested to me that a class in children’s literature should be required for all university students!

Too many messages in an unregulated forum, however, can drive the public to tune out altogether. The scarce resource threatened with depletion is not information or knowledge, but attention. Attention is effectively a zero-sum resource (I can’t pay full attention to two things simultaneously), whereas information is a multiplier resource (you can give me your information and still hang onto it). So, more and more and more education isn’t the solution to Ungar’s knowledge deficit problem.

If you need further persuasion, consider all of the stuff known by people in the past that we no longer know. In 1840 Macaulay wrote, in his essay on Lord Clive: “Every schoolboy knows who imprisoned Montezuma, and who strangled Atahualpa.” Hands up, anyone? Or take a look at the curriculum for an Elizabethan schoolboy (I’m not being sexist here; only boys were permitted schooling in both periods I’ve just mentioned). Or what about good old “how-to” knowledge: Who among us knows the basics of such trades as coopersmith, milliner, or fletcher? One of my colleagues recently told me that his father was a farrier and then congratulated me for knowing what that was.

There’s a third kind of ignorance problem, one arising from hyper-specialization. Specialized knowledge doesn’t integrate itself. Without people to put it all together we end up with no synthesis, no “big picture.” I’m not referring just to “big” in the sense of a grand totalizing framework. This problem manifests itself even within specializations. John Von Neumann often is said to have been the last mathematician who possessed an overview of that discipline, and he passed away 53 years ago (here is an interesting discussion of this question). A more quotidian example is the recent post by Charlie Schulting on the perils of over-specialization in IT. Nor is this problem new, as witnessed by this 1957 article highlighting a Stanford University dean’s concern about this issue and his proposed remedy for it, or this 1922 note on overspecialization in public health care.

This version of the ignorance problem also lacks an easy solution, but in some respects it may be the most urgently in need of one. A moment’s consideration of the most important problems facing humankind should suffice to convince you of the need for specialists to be able to not only work with one another but also with non-specialist stakeholders. There are efforts on several fronts to address this problem, some of which go under names such as transdisciplinarity and integration and implementation sciences. More on these at another time.

It should be clear by now that there are multiple ignorance “problems,” none of which have straightforward solutions. In lieu of nice solutions, here are a few pitfalls and fallacies that we can avoid.

  1. We can avoid hubris. None of us knows very much, when all is said and done. There is also a vast amount of important stuff we can never know.
  2. We can become more aware of what we don’t know (within limits). We might even reform some aspects of our educational programs to help future generations in this endeavor.
  3. We can bear in mind that we have cognitive biases and mental short-cuts. Some of these are adaptive in certain settings (e.g., hunter-gathering) but not in others (e.g., the casino or stock market). Where these aren’t adaptive we can generate computational and other tools to help us.
  4. We are not cleverer than those who came before us. We’re not even always better-informed than they were. A pertinent observation in the conclusion of Cyril Kornbluth’s short story “The Mindworm” is that what many very clever people have not yet learned, some ordinary people have not yet quite forgotten.

Written by michaelsmithson

December 19, 2010 at 1:48 pm

A Knowledge Economy but an Ignorance Society?


In an intriguing 2008 paper sociologist Sheldon Ungar asked why, in the age of “knowledge” or “information,” ignorance not only persists but seems to have increased and intensified. There’s a useful sociological posting on Ungar’s paper that this post is intended to supplement. Along with an information explosion, we also have had an ignorance explosion: Most of us are confronted to a far greater degree than our forebears with the sheer extent of what we don’t know and what we (individually and collectively) shall never know.

I forecast this development (among others) in my 1985 paper where I called my (then) fellow sociologists’ attention to the riches to be mined from studying how we construct the unknown, accuse others of having too much ignorance, claim ignorance for ourselves when we try to evade culpability, and so forth. I didn’t get many takers, but there’s nothing remarkable about that. Ideas seem to have a time of their own, and that paper and my 1989 book were a bit ahead of that time.

Instead, the master-concepts of the knowledge economy (Peter Drucker’s 1969 coinage) and information society (Fritz Machlup 1962) were all the rage in the ‘80’s. Citizens in such societies were to become better educated and more intelligent than their forebears or their less fortunate counterparts in other societies. The evidence for this claim, however, is mixed.

On the one hand, the average IQ has been increasing in a number of countries for some time, so the kind of intelligence IQ measures has improved. On the other, we routinely receive news of apparent declines in various intellective skills such as numeracy and literacy. On the one hand, thanks to the net, many laypeople can and do become knowledgeable about medical matters that concern them. On the other, there is ample documentation of high levels of public ignorance regarding heart disease and strokes, many of the effects of smoking or alcohol consumption, and basic medication instructions.

Likewise, again thanks to the net, people can and do become better-informed about current, especially local, events so that social media such as Twitter are redefining the nature of “news.” On the other, as Putnam (1999) grimly observed, the typical recent university graduate knows less about public affairs than did the average high school graduate in the 1940’s, “despite the proliferation of sources of information.” Indeed, according to Mark Liberman’s 2006 posting, the question of how ignorant Americans are has become a kind of national sport. Other countries have joined in (for instance, the Irish).

In addition to concerns about lack of knowledge, alarms frequently are raised regarding the proliferation and persistence of erroneous beliefs, often with a sub-text saying that surely in the age of information we would be rid of these. Scott Lilienfeld, assistant professor of psychology and consulting editor at the Skeptical Inquirer, sees the prevalence of pseudoscientific beliefs as by-products of two phenomena: the (mis)information explosion and the scientific illiteracy of the general population. From the Vancouver Sun on November 25th, an op-ed piece by Janice Kennedy had this to say:

“Mis- and disinformation, old fears and prejudices, breathtaking knowledge gaps – all share the same stage, all bathe in the same spotlight glow as thoughtful contributions and informed opinions. The Internet is the great democratizer. Everyone has a voice, and every voice can be heard. Including those that should stifle themselves… Add to these realities the presence of the radio and television talk show – hardly a new phenomenon, but one that has exploded in popularity, thanks to our Internet-led dumbing down – and you have the perfect complement. Shockingly ignorant things are said, repeated and, magnified a millionfold by the populist momentum of cyberspace and sensationalist talk shows, accorded a credibility once unthinkable.”

Now, I want to set Ungar’s paper alongside the attributions of ignorance that people make to those who disagree with them. If you set up a Google alert for the word “ignorance” then the most common result will be just this kind of attribution: X doesn’t see things correctly (i.e., my way) because X is ignorant. Behind many such attributions is a notion widely shared by social scientists and other intellectuals of yore that there is a common stock of knowledge that all healthy, normally-functioning members of society should know. We should all not only speak the same language but also know the laws of the land, the first verse of our national anthem, that 2 + 2 = 4, that we require oxygen to breathe, where babies come from, where we can get food, and so on and so forth.

The trouble with this notion is that the so-called information age has made it increasingly difficult for everyone to agree on what this common stock of knowledge should include while still being small enough for the typical human to absorb it all before reaching adulthood.

For instance, calculators may have made mental arithmetic unnecessary for the average citizen to “get by.” But what about the capacity to think mathematically? Being able to understand a graph, compound interest, probability and risk, or the difference between a two-fold increase in area versus in volume is not obviated by calculators. Which parts of mathematics should be part of compulsory education? This kind of question does not merely concern which bits of knowledge should be retained from what people used to know—The truly vexing problem is which bits of the vastly larger and rapidly increasing storehouse of current knowledge we should require everyone to know.

Ungar suggests some criteria for deciding what is important for people to know, and of course he is not the first to do so. For him, ignorance becomes a “functional deficit” when it prevents people from being able “to deal with important social, citizenship, and personal or practical issues.” Thus, sexually active people should know about safe sex and the risks involved if it isn’t practiced; sunbathers should know what a UV index is; automobile drivers should understand the relevant basic physics of motion and the workings of their vehicles; and smokers should know about the risks they take. These criteria are akin to the “don’t die of ignorance” public health and safety campaigns that began with the one on AIDS in the 1980’s.

But even these seemingly straightforward criteria soon run into difficulties. Suppose you’re considering purchasing a hybrid car such as the Prius. Is the impact of the Prius on the environment less than that of a highly fuel-efficient conventional automobile? Yes, if you consider the impact of running these two vehicles. No, if you consider the impacts from their manufacture. So how many miles (or kilometers) would you have to run each vehicle before the net impact of the conventional car exceeds that of the Prius? It turns out that the answer to that question depends on the kind of driving you’ll be doing. To figure all this out on your own is not a trivial undertaking. Even consulting experts who may have done it for you requires a reasonably high level of technological literacy, not to mention time. And yet, this laboriously informed purchasing decision is just what Ungar means by “an important social, citizenship, and personal or practical issue.” In fact, it ticks all four of those boxes.
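The reasoning involved is a simple break-even calculation, sketched below with numbers that are entirely invented for illustration (the real figures depend on the vehicles, the fuel and electricity mix, and the kind of driving):

    # All numbers here are hypothetical, for illustration only.
    extra_manufacturing_kg = 2500.0   # assumed extra CO2 from making the hybrid
    saving_per_km_kg = 0.10           # assumed CO2 saved per km while driving it
    break_even_km = extra_manufacturing_kg / saving_per_km_kg
    print(break_even_km)              # 25,000 km under these made-up figures

And because the per-kilometre saving itself differs between city and highway use, the break-even distance shifts with how the car is driven, which is exactly the complication just mentioned.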

Now imagine trying to be a well-informed citizen not only on the merits of the Prius, but the host of other issues awaiting your input such as climate change mitigation and adaptation, the situation in Afghanistan, responses to terrorism threats, the socio-economic consequences of globalization, and so on and on. Thus, in the end Ungar has to concede that “it is impossible to produce a full-blown, stable or consensual inventory of a stock of knowledge that well-informed members should hold.” The instability of such an inventory has been a fact of life for eons (e.g., the need to know Latin in order to read nearly anything of importance vanished long ago). Likewise, the impossibility of a “full blown” inventory is not new; that became evident well before the age of information. What is new is the extraordinary difficulty in achieving consensus on even small parts of this inventory.

It has become increasingly difficult to be a well-informed citizen on a variety of important issues, and these issues are therefore difficult to discuss in general public forums, let alone dinner-table conversation. Along with the disappearance of the informed citizen, we have witnessed the disappearance of the public intellectual. What have replaced both of these are the specialist and the celebrity. We turn to specialists to tell us what to believe; we turn to celebrities to tell us what to care about.

One of Ungar’s main points is that we haven’t ended up with a knowledge society, but only a knowledge economy. The key aspects of this economy are that knowledge and information are multiplier resources, whereas interest is bounded and attention is strictly zero-sum. Public ignorance of key issues and reliance on specialists is the norm, whereas pockets and snippets of widely shared public knowledge become the exception. Thus, we live not only in a risk society but, increasingly, in an ignorance society.

Written by michaelsmithson

December 14, 2010 at 12:45 pm

“Negative knowledge”: From Wicked Problems and Rude Surprises to Mathematics


It is one thing to know that we don’t know, but what about knowing that we can never know something? Karin Knorr-Cetina (1999) first used the term negative knowledge to refer to knowledge about the limits of knowledge. This is a type of meta-knowledge, and is a special case of known unknowns. Philosophical interest in knowing what we don’t know dates back at least to Socrates—certainly long before Donald Rumsfeld’s prize-winning remark on the subject. Actually, Rumsfeld’s “unknown unknowns” were articulated in print much earlier by philosopher Ann Kerwin, whose 1993 paper appeared along with mine and others in a special issue of the journal Science Communication as an outcome of our symposium on “Ignorance in Science” at the AAAS meeting in Boston earlier that year. My 1989 coinage, meta-ignorance, is synonymous with unknown unknowns.

There are plenty of things we know that we cannot know (e.g., I cannot know my precise weight and height at the moment I write this), but why should negative knowledge be important? There are at least three reasons. First, negative knowledge tells us to put a brake on what would otherwise be a futile wild goose-chase for certainty. Second, some things we cannot know we might consider important to know, and negative knowledge humbles us by highlighting our limitations. Third, negative knowledge about important matters may be contestable. We might disagree with others about it.

Let's begin with the notion that negative knowledge instructs us to cease inquiry. On the face of it, this would seem a good thing: Why waste effort and time on a question that you know cannot be answered? Peter Medawar (1967) famously coined the aphorism that science is the "art of the soluble." A commonsensical inference follows: if a problem is not soluble then it isn't a scientific problem, and so it should be banished from scientific inquiry. Nevertheless, quite aside from the logical flaw in this inference, over-subscribing to this negative-knowledge characterization of science exacts a steep price.

First, there is what philosopher Jerome Ravetz (in the same journal and symposium as Ann Kerwin’s paper) called ignorance of ignorance. By this phrase Ravetz meant something slightly different from meta-ignorance or unknown unknowns. He observed that conventional scientific training systematically shields students from problems outside the soluble. As a result, they remain unacquainted with those problems, i.e., ignorant about scientific ignorance itself. The same charge could be laid on many professions (e.g., engineering, law, medicine).

Second, by neglecting unsolvable problems scientists exclude themselves from any input into what people end up doing about those problems. Are there problem domains where negative knowledge defines the criteria for inclusion? Yes: wicked problems and rude surprises. The characteristics of wicked problems were identified in the classic 1973 paper by Rittel and Webber, and most of these referred to various kinds of negative knowledge. Thus, the very definition and scope of wicked problems are unresolvable; such problems have no definitive solutions; there are no ultimate tests of whether a solution works; every wicked problem is unique; and there are no opportunities to learn how to deal with them by trial-and-error. Claimants to the title of “wicked problem” include how to craft policy responses to climate change, how to combat terrorism, how to end poverty, and how to end war.

Rude surprises are not always wicked problems but nonetheless are, as Todd La Porte describes them in his 2005 paper, “unexpected, potentially overwhelming circumstances that are likely to deliver punishing blows to human life, to political or economic viability, and/or to environmental integrity” (pg. 2). Financial advisors and traders around the world no doubt saw the most recent global financial crisis as a rude surprise.

As Matthias Gross (2010) points out at the beginning of his absorbing book, "ignorance and surprise belong together." So it should not be, well, surprising that in an uncertain world we are in for plenty of surprises. But why are we so unprepared for surprises? Two important reasons are confirmation bias and the Catch-All Underestimation Bias (CAUB). Confirmation bias is the tendency to be more interested in and pay more attention to information that is likely to confirm what we already know or believe. As Raymond Nickerson's 1998 review sadly informs us, this tendency operates unconsciously even when we're not trying to defend a position or bolster our self-esteem. The CAUB is a tendency to underestimate the likelihood that something we've never seen before will occur. The authors of the classic 1978 study first describing the CAUB pointed out that it's an inescapable "out of sight, out of mind" phenomenon—After all, how can you have something in sight that has never occurred? And the final sting in the tail is that clever people and domain experts (e.g., scientists, professionals) suffer from both biases just as the rest of us do.

Now let’s move to the second major issue raised at the outset of this post: Not being able to know things we’d like to know. And let’s raise the stakes, from negative knowledge to negative meta-knowledge. Wouldn’t it be wonderful if we had a method of finding truths that was guaranteed not to steer us wrong? Possession of such a method would tame the wild seas of the unknown for us by providing the equivalent of an epistemic compass. Conversely, wouldn’t it be devastating if we found out that we never can obtain this method?

Early in the 20th century, mathematicians underwent the experience of expecting to find such a method and having their hopes dashed. They became among the first (and best) postmodernists. Their story has been told in numerous ways (even as a graphic novel), but for my money the best account is the late Morris Kline’s brilliant (1980) book, “Mathematics: The Loss of Certainty.” Here’s how Kline characterizes mathematicians’ views of their domain at the turn of the century:

“After many centuries of wandering through intellectual fog, by 1900 mathematicians had seemingly imparted to their subject the ideal structure… They had finally recognized the need for undefined terms; definitions were purged of vague or objectionable terms; the several branches were founded on rigorous axiomatic bases; and valid, rigorous, deductive proofs replaced intuitively or empirically based conclusions… mathematicians had cause to rejoice.” (pg. 197)

The tantalizing prospect before them was to establish the consistency and completeness of mathematical systems. Roughly speaking, consistency amounts to a guarantee of never running into paradoxes (well-formed mathematical propositions that nevertheless are provably both true and false) and completeness amounts to a guarantee of never running into undecidables (well-formed mathematical propositions whose truth or falsity cannot be proven). These guarantees would tame the unknown for mathematicians; a proper axiomatic foundation would ensure that any proposition derived from it would be provably true or false.

The famous 1931 paper by Kurt Gödel denied this paradise forever. He showed that if any mathematical theory adequate to deal with whole numbers is consistent, it will be incomplete. He also showed that the consistency of such a theory could not be established by the logical principles in use by several foundational schools of mathematics. So consistency would have to be determined by other methods and, if attained, its price would be incompleteness. But is there at least a way to ascertain which mathematical propositions are undecidable and which are provable? Alan Turing's 1936 paper on "computable numbers" (which also invented Turing machines!) showed that the answer to this question is "no." One consequence of these results is that instead of a foundational consensus there can be divergent schools of mathematics, each legitimate and selected as a matter of preference. Here we have definitive and severe negative knowledge in an area that, to most people even today, epitomizes certitude.
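
Turing's result can be conveyed with the standard toy diagonal argument. The sketch below is not drawn from Kline's book or Turing's paper; it is simply the usual self-reference story rendered as Python, with a deliberately unimplementable halts() oracle standing in for the decision method that cannot exist.

```python
# Toy rendering of the diagonal argument behind the halting problem.
# Suppose we possessed a total, correct decider halts(program, argument).

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts.
    Turing's theorem says no such total, correct function can exist,
    so this stub only pretends."""
    raise NotImplementedError("no general halting decider exists")

def contrary(program):
    """Do the opposite of whatever the oracle predicts about running
    `program` on its own source."""
    if halts(program, program):
        while True:          # loop forever if the oracle says "halts"
            pass
    else:
        return "halted"      # halt promptly if the oracle says "loops"

# Feeding contrary to itself is the sting: if halts(contrary, contrary)
# returned True, contrary(contrary) would loop forever; if it returned
# False, contrary(contrary) would halt. Either answer is wrong, so no
# program can play the role of halts() for every case.
```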

"Loss of certainty" themes dominate high-level discourse in various intellectual and professional domains throughout the 20th century. Physics is perhaps the best-known example, but one can find such themes, and fascinating debates around them, in many other disciplines. To give one example, historians Ann Curthoys and John Docker's 2006 book "Is History Fiction?" begins by identifying three common responses to the question in its title: relativists who answer in the affirmative, foundationalists who insist that history is well-grounded in evidence after all, and a third puzzled group (the largest, they claim) who asks "well, is it?" To give just one more, I'm a mathematical modeler in a discipline where various offspring of the "is psychology a science?" question are seriously debated. In particular, I and others (e.g., here and here) regard the jury as still out on whether there are (m)any quantifiable psychological attributes. Some such attributes can be rank-ordered, perhaps, but quantified? Good question.

Are there limits to negative knowledge—In other words, is there such a thing as negative negative-knowledge? It turns out that there is, mainly in the Gödelian realm of self-referential statements. For example, we cannot believe that we currently hold a false belief; otherwise we’d be compelled to disbelieve it. There are also limits to the extent to which we can self-attribute erroneous belief formation. Philosophers Andy Egan and Adam Elga laid these out in their delightfully titled 2005 paper, “I Can’t Believe I’m Stupid.” According to them, I can believe that in some domains my way of forming beliefs goes wrong all of the time (e.g., I have a sense of direction that is invariably wrong), but I can’t believe that every time I form any belief it goes wrong without undermining that very meta-belief.

Dealing with wicked problems and rude surprises requires input from multiple stakeholders encompassing their perspectives, values, priorities, and (possibly non-scientific) ways of knowing. Likewise, there is no algorithm or sure-fire method to anticipate or forecast rude surprises or Nassim Taleb's "black swans." These are exemplars of insoluble problems beyond the ken of science. But surely none of this implies that input from experts is useless or beside the point. So, are there ways of educating scientists, other experts, and professionals so that they will be less prone to Ravetz's ignorance of ignorance? And what about the rest of us—Are there ways we can combat confirmation bias and the CAUB? Are there effective methods for dealing with wicked problems or rude surprises? Ah, grounds for a future post!

Things We Never Want to Know

leave a comment »

My last post concentrated on why many of us want to have at least some temporary unknowns in our lives. What about things we never need or want to know? The most mundane examples are things we believe we don’t need to know, either because we already know enough or because we regard them as irrelevant. For those of us like me who were born earlier than, say, 5 BC (that’s five years Before Computers), knowing how to do basic mental arithmetic was necessary. It isn’t anymore, at least as long as we have computers or calculators handy.

Seeking, learning, remembering and comprehending information are not cost-free processes. The human brain also has limits to how much it can absorb and understand. These factors are frequently ignored in prescriptive or rational decision frameworks, in which more information is assumed to be better and the ideal rational agent is assumed to possess and comprehend all available information all of the time. We humans usually cannot do this, but we can and do make choices about which information we ignore or don't bother learning. For an example, look no further than your own choices regarding email filters, search engines, web-feeds, and the like. These instruments reflect what you choose to ignore or filter out.

The problem is that information is a multiplier resource, whereas attention is for all practical purposes a zero-sum resource. You can give information to someone and simultaneously give it to others as well. But if you pay attention to someone you can’t pay attention to someone else at the same time. In 1997 David Shenk’s “Data Smog” appeared and 10 years later he published an update. He’s forthright and self-honest about the things he got wrong, but his primary thesis has stood the test of time: “While our grandparents were limited by access to information and speed of communication, we are restricted largely by our ability to wade through it all.” Peter Denning wrote about “info-glut” 28 years ago and a few years back he also presented an update. There, he discusses strategies for intelligently but not excessively filtering information.

Are there any real dangers aside from fatigue or a sense of being overwhelmed by info-glut? According to David Strayer and his co-researchers' publications at the University of Utah's applied cognition lab, there are: among them, an increased likelihood of automobile accidents due to distractions such as cell phones. Their studies suggest that the risks incurred by driving while using a cell phone (even hands-free) are on a par with those of driving while intoxicated.

Now, how about things we really don't want to know, as opposed to those we don't need to? A good friend has resolutely refused to see, or listen to anyone tell him about, the 2004 Peter Sellers biopic. I should hasten to add that this is the one and only facet of reality I've ever seen him shrink from. A long-time fan of the comedic actor, he wished his memories and appreciation of Sellers' oeuvre to remain unsullied by any "warts and all" revelations that the film might inflict on him.

And then there's this age-old quandary: A supernatural messenger materializes before you, bearing a sealed envelope containing the date and manner of your own demise. Would you open it and read it? This issue has been debated in health forums with various online surveys (e.g., here and here). Generally the majority of respondents say "no," and their reasons typically refer to the aversive emotional effects of this information, a desire to have some surprises in store for the future, and a feeling that this knowledge would compromise enjoyment of one's current circumstances. Of course, none of this has prevented the proliferation of "death clocks" (e.g., here, here, here and here). These purport to identify the date (if not the manner) of your death via a few simple risk factors. Their pseudo-resolution of the riddle "when will I die?" sometimes is celebrated with ironic declarations such as "today I died" or "Hey, I've been dead for years."

Are there things we choose to know that we would have been better off not knowing? All of us probably can think of specific instances, but are there categories of such things? Most of us prefer to find out the causes and reasons behind our experiences, good or bad. There’s a large psychological literature supporting the notion that making sense out of negative or traumatic events helps us to deal with and recover from them. But what about pleasant or uplifting events? Several years ago, Timothy Wilson and his colleagues (2005) reported experiments demonstrating what they called the “pleasure paradox,” whereby making sense out of pleasurable experiences reduces the pleasure obtained from them.

One of Wilson et al.’s experiments had confederates giving a dollar coin with a card to people in a library. In one condition the card contained arbitrary statements such as “The Smile Society” and “We like to promote Random Acts of Kindness.” In another condition the card had a question-answer explanation instead, so the arbitrary statement was preceded by a question such as “Who are we?” or “Why do we do this?” About 5 minutes after each participant received the card, an experimenter came along and asked them to complete a brief mood survey which included a self-rating of how positive or negative the participant’s mood was at that moment. The mood scale was also given to people in the library who had not received a card at all (these were the “controls”). It turned out that those who received the non-explanatory card were in a better mood than those with the explanatory card and the controls. Moreover, the explanatory card folks’ moods didn’t differ from the controls.

Why would a pleasurable experience be less pleasurable if it were explained? One possibility is that explaining the event makes it less surprising and thereby lessens the intensity of the emotion (whether positive or negative) that surprises induce. Wilson’s team went one step further in their studies: They asked another sample of people to forecast which kind of card would make recipients feel happier. A large majority of respondents predicted that the explanatory card would be the more pleasurable of the two. Our commonsense psychology leads us astray in this case—Resolving the uncertainty about a positive event does not make it more pleasurable. It does the opposite.

If resolving uncertainty about positive events isn’t always beneficial, what about events whose consequences could be positive or negative? Many uncertainties in everyday life have this characteristic, for example, awaiting the results of a test diagnosing whether you have a disease.

Genetic testing can raise the stakes about what we choose not to know to a very high level indeed. Perhaps the most agonizing choice of this kind faces descendants of Huntington's Disease sufferers. Huntington's Disease (HD) is a fatal neurodegenerative disorder with no known cure or prevention and very little in the way of palliative treatment. Symptoms begin with emotional disturbances and loss of higher intellectual functions, followed by uncontrollable movements and ultimately the inability to control movement at all.

The child of an HD parent has a 50% chance of inheriting the disease. If they do inherit HD, then each of their children enters this 50-50 lottery, whereas if they do not then their children are not at risk of HD. Now, here's what makes this lottery so debilitating: HD usually manifests itself only when the carrier is well into adulthood (in their 30's or 40's). Until the 1980's, all that a child of an HD parent could do was wait and see whether they passed their 40's with no symptoms. The impact of this uncertainty on young people trying to plan their lives would be hard to overstate.
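
As a minimal sketch of the arithmetic, assuming the simple one-affected-parent model just described (each child of a carrier faces an independent 50-50 draw), a grandchild's prior risk before any testing or symptoms works out to one in four:

```python
# Minimal sketch of the inheritance lottery, assuming the simple
# one-affected-parent model described above (illustrative only).

p_child_carries = 0.5          # child of an HD parent
p_grandchild_if_carrier = 0.5  # if that child carries HD, their children face the same lottery
p_grandchild_if_clear = 0.0    # if that child is unaffected, the lottery ends

p_grandchild = (p_child_carries * p_grandchild_if_carrier
                + (1 - p_child_carries) * p_grandchild_if_clear)
print(p_grandchild)  # 0.25
```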

Then a genetic marker test became available for diagnosing HD. The test was relatively inexpensive, it could be taken at any age, and early surveys among those at risk suggested that uptake rates among them would be high. Arguments for HD descendants to take this test would seem unassailable. But in most countries where testing is available, the uptake rate has been low—5%-20%. Why?

Self-report studies have elicited reasons such as being “comfortable” with the uncertainties, concerns about the irreversibility of knowing the outcome, and fears associated with the consequences of an unfavorable outcome. A 1992 study identified an additional clue. Those electing to take the test viewed both favorable and unfavorable outcomes as having less extreme consequences for them and their families than those refusing the test. That is, they rated the favorable outcomes less positively and the unfavorable outcomes less negatively. This finding agrees with the notion that, all else being equal, people are more willing to take risks when their outcomes are less variable (a notion that is substantiated by research).

Nonetheless, a much larger 2008 longitudinal prospective study of 1001 North Americans at risk of HD reported that reasons for not taking up the test still are poorly understood. It didn’t arrive at strong conclusions either. For example, this study included measures of depression and tolerance of ambiguity, and found no detectable differences on either of these between those who already had undergone genetic testing and those who had not.

In recent times, reasons for and against taking the test have become complicated by issues such as insurability. The aforementioned 2008 study reported that, for those taking the genetic test during the study, loss of insurability was their greatest concern, and more than 40% of them paid for other medical services to conceal their genetic inheritance from insurers and/or employers. As a recent (2010) article in The Lancet points out, there is little protection in either North America or the UK (or, I might add, in Australia) for HD carriers against discrimination by insurers. In the USA, the 2008 Genetic Information Nondiscrimination Act prohibits insurers from using genetic information to determine eligibility or premiums, or from compelling individuals to undergo genetic tests. However, it does not extend to life insurance, disability insurance, or long-term care insurance.

So there is a double set of issues here: One regarding each person’s decision to undergo genetic testing or not, and the second regarding who else should know about the results and what they can and cannot do with that knowledge. Genetic testing is a complex topic, one that will affect many more of us in the near future, and certainly worth more than one post.

Written by michaelsmithson

November 3, 2010 at 10:51 am

When Is It Folly to Be Wise?

with 3 comments

There are things we’d rather not know. Some of these are temporary; we’d like to know them eventually but not just now. Others, less common, are things we never want(ed) to know.

In this post I’ll focus on the temporary kind. Temporary ignorance has many uses, some of which are not immediately obvious. I’ve already mentioned a few of them in earlier posts. One of these is entertainment. Many forms of entertainment require temporary audience ignorance, including all forms of story-telling and jokes. No unknowns? No mysteries? No surprises? Then no entertainment.

Games are an example of entertainment where uncertainty has a key role even in games of skill. A game that is a foregone conclusion is not very entertaining. Games of skill are like tests but more fun. Why? Partly because games have more uncertainty built into them than tests do, and so they tease us with a mix of outcomes due to skill and sheer luck. More than 25 years ago, a clinical neuropsychologist working in a large hospital told me how he ended up exploiting this connection between games and tests. One of his chief duties was to assess the state and recovery of cognitive functions of patients in a head trauma unit—Often victims of automobile accidents or strokes. The well-established tests of memory, motor control and sustained attention had good psychometric properties but they were boring. Some patients refused to take them; others complied but only with a desultory effort.

Then inspiration struck: My colleague noticed that anyone who could manage it would head down the ward corridor to play Space Invaders. Here was a ready-made test of attention and motor control built into a game. Moreover, repeatedly playing the game actually would facilitate patients’ recovery, so unlike the standard cognitive tests this “test” had a therapeutic effect. He attached a computer to the back of the game, established benchmark measures such as how long players would last if they did nothing or moved the joystick randomly, and started recording individual patients’ results. The results were a clinician’s dream—Meaningful data tracking patients’ recovery and a therapeutic exercise.

Some psychologists who should know better (e.g., Gudykunst and Nishida 2001) have declared that the emotional accompaniment of uncertainty is anxiety. Really? What about thrill, excitement, anticipation, or hope? We can’t feel thrill, excitement, or anticipation without the unknowns that compel them. And as for hope, if there’s no uncertainty then there’s no hope. These positive emotions aren’t merely permitted under uncertainty, they require uncertainty. To my knowledge, no serious investigation has been made into the emotional concomitants of omniscience, but in fact, there is only one human emotional state I associate with omniscience (aside from smugness)—Boredom.

We don’t just think we’re curious or interested; we feel curious or interested. Curiosity and interest have an emotional cutting-edge. Intellectuals, artists and researchers have a love-hate emotional relationship with their own ignorance. On the one hand, they are in the business of vanquishing ignorance and resolving uncertainties. On the other, they need an endless supply of the unknowns, uncertainties, riddles, problems and even paradoxes that are the oxygen of the creative mind. One of the hallmarks of scientists’ reactions to Horgan’s (1996) book, “The End of Science,” was their distress at Horgan’s message that science might be running out of things to discover. Moreover, artists are not attracted to obvious ideas, nor scientists to easy problems. They want their unknowns to be knowable and problems to be solvable, but also interesting and challenging.

Recently an Honours student undertaking her first independent research project came to me for some statistical advice. She sounded frustrated and upset. Gradually it became apparent that hardly any of her experimental work had turned out as expected, and the standard techniques she’d been taught were not helping her to analyze her data and interpret her findings. I explained that she might have to learn about another technique that could help here. She asked me, “Is research always this difficult?” I replied with Piet Hein’s aphorism, “Problems worthy of attack prove their worth by fighting back.” Her eyes narrowed. “Well, now that you put it that way…” Immediately I knew that this student had the makings of a researcher.

A final indication of the truly ambivalent relationship creative folk have with their favorite unknowns is that they miss them once they’ve been dispatched. Andrew Wiles, the mathematician who proved Fermat’s Last Theorem, spoke openly of his sense of loss for the problem that had possessed him for more than 7 years.

And finally, let’s take one more step to reach a well-known but often forgotten observation: Freedom is positively labeled uncertainty about the future. There isn’t much more to it than that. No future uncertainties in your life? Everything about your future is fore-ordained? Then you have no choices and therefore no freedom. As with intellectuals and their unknowns, we want many of our future unknowns to be ultimately knowable but not foreordained. We crave at least some freedom of choice.

People are willing to make sacrifices for their freedom, and here I am not referring only to a choice between freedom and a dreadful confinement or tyrannical oppression. Instead, I have in mind tradeoffs between freedom and desirable, even optimal but locked-in outcomes. People will cling to their freedom to choose even if it means refusing excellent choices.

A 2004 paper by Jiwoong Shin and Daniel Ariely, described in Ariely's entertaining book "Predictably Irrational" (2008, pp. 145-153), reports experimental evidence for this claim. Shin and Ariely set up an online game with 3 clickable doors, each of which yielded a range of payoffs (e.g., between 1 and 10 cents). The object of the game was to make as much money as possible in 100 clicks. There was a twist: Every time one door was clicked, the others would shrink by a certain amount, and a door left unchosen for long enough would disappear altogether. Shin and Ariely found that even bright university (MIT) students would forgo top earnings in order to keep all the doors open. Shin and Ariely tried providing the participants with the exact monetary payoffs from each door (so they would know which door offered the most), and they even modified the game so that a disappeared door could be "reincarnated" with a single click. It made no difference; participants continued to refuse to close any doors. For them, the opportunity costs of closed doors loomed larger than the payoffs they could have had by sticking with the best door.
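
Here is a toy simulation in the spirit of that setup. The payoff ranges, door lifetimes, and strategies are all invented for illustration and are not Shin and Ariely's actual parameters; the sketch simply contrasts sticking with the best door against cycling to keep every door alive.

```python
import random

# Toy version of the shrinking-doors game (parameters invented).
# Doors pay a random amount per click; a door neglected for too
# many clicks disappears.

PAYOFFS = {"A": (1, 10), "B": (2, 7), "C": (0, 5)}  # cents per click (made up)
N_CLICKS, START_LIFE = 100, 12

def play(strategy):
    life = {door: START_LIFE for door in PAYOFFS}
    earnings = 0
    for t in range(N_CLICKS):
        open_doors = sorted(d for d, v in life.items() if v > 0)
        choice = strategy(open_doors, t)
        low, high = PAYOFFS[choice]
        earnings += random.randint(low, high)
        for door in open_doors:
            life[door] = START_LIFE if door == choice else life[door] - 1
    return earnings

def stick_with_best(open_doors, t):
    return "A" if "A" in open_doors else open_doors[0]  # door A pays best on average

def keep_all_open(open_doors, t):
    return open_doors[t % len(open_doors)]              # cycle so no door ever vanishes

runs = 2000
for name, strategy in [("stick with the best door", stick_with_best),
                       ("keep all doors open", keep_all_open)]:
    average = sum(play(strategy) for _ in range(runs)) / runs
    print(f"{name}: average earnings of about {average:.0f} cents")
```

Under these invented parameters the door-preserving strategy earns noticeably less than simply exploiting the best door, which is the opportunity cost the participants were willing to pay.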

So here we have one of the key causes of indecision, namely a strong desire to “keep our options open,” i.e., to maintain positively labeled uncertainty. If achieving certainty is framed in terms of closing off options, we strive to avoid it. If uncertainty is framed as keeping our options open we try to maintain it, even if that entails missing out on an excellent choice. This tendency is illustrated by a folk-wisdom stereotype in the wild and wonderful world of dating-and-mating. He and she are in love and their relationship has been thriving for more than a year. She’d like to make it permanent, but he’s still reluctant to commit. Why? Because someone “better” might come along…

What could drive us to keep our options open, refusing to commit even when we end up letting our best opportunities pass us by? Could it be the way we think about probabilities? Try this rather grim thought-experiment: First, choose an age beyond your current age (for me, say, 75). Then, think of the probability that you'll get cancer before you reach that age. Now, think of the probability that you'll get cancer of the stomach. Think of the probability you'll get lung cancer. The probability you'll get bone cancer. Or cancer of the brain. Or breast cancer (if you're a woman) or prostate cancer (if you're a man). Or skin cancer. Or pancreatic cancer… If you're like most people, unpacking "cancer" into a dozen or so varieties will make it seem more likely that you'll get it than considering "cancer" in one lump—It just seems more probable that you'd end up with at least one of those varieties. The more ways we can think of something happening, the more likely we think it is. Cognitive psychologists have found experimental evidence for this effect (for the curious, take a look at papers by Tversky and Koehler 1994 and Tversky and Rottenstreich 1997).
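
The pattern fits the formula at the heart of Tversky and Koehler's support theory: the judged probability of a focal description against an alternative is the support the focal description evokes divided by the total support evoked by both, and unpacking a description into explicit varieties tends to recruit more support than the packed description alone. The support values below are invented purely to show the mechanics.

```python
# Sketch of the unpacking effect in the spirit of support theory
# (Tversky & Koehler, 1994). All support values are invented.

def judged_probability(s_focal, s_alternative):
    # Support theory: judged P = s(focal) / (s(focal) + s(alternative))
    return s_focal / (s_focal + s_alternative)

s_no_cancer = 10.0  # support for "no cancer before 75" (made up)
s_packed = 3.0      # support evoked by the single word "cancer" (made up)

# Unpacking "cancer" into named varieties typically recruits more total
# support than the packed description alone (subadditivity):
s_unpacked = sum([0.8, 0.7, 0.6, 0.5, 0.5, 0.4, 0.4, 0.3])  # lung, skin, ... (made up)

print(round(judged_probability(s_packed, s_no_cancer), 2))    # 0.23
print(round(judged_probability(s_unpacked, s_no_cancer), 2))  # 0.3, i.e., it feels more likely
```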

An even more startling effect was uncovered in a paper by Kirkpatrick and Epstein (1992). They offered people a choice between drawing a ticket from a lottery of 10 tickets, 9 losing and 1 winning, and a lottery with 100 tickets, 90 losing and 10 winning. The participants confirmed that they knew the probability of winning either lottery was .1, so the format had no effect on their probability judgments. Nevertheless, when asked which they preferred, most chose the 100-ticket lottery. Why? Because that lottery gave them 10 ways of winning whereas the other gave them only 1.

The more options we keep open, the more “winning tickets” we think we hold and the greater the hope we might get lucky. When we’ve committed to one of those options we may have gained certitude, but luck and hope have vanished.

Written by michaelsmithson

November 3, 2010 at 10:50 am

Science for Good and Evil: Dual Use Dilemmas

leave a comment »

For fascinating examples of attempts to control curiosity, look no further than debates and policies regarding scientific research and technological development. There are long-running arguments about the extent to which scientists, engineers, and other creative workers can and should be regulated—and, if so, by whom. Popular images of scholars and scientists pursuing knowledge with horrific consequences for themselves and others range from the 16th-century legend of Faustus (reworked by numerous authors, e.g., Marlowe) to Bruce Banner.

The question of whether experiment X should be performed or technology Y developed is perennial. The most difficult versions of this question arise when pursuing the object of one’s curiosity violates others’ deeply held values or ethical principles, or induces great fear. These issues have a lengthy history. Present-day examples of scientific research and technological developments evoking this kind of conflict include stem cell research, human cloning, the Large Hadron Collider, and genetic modification of food crops.

Before it began operations, fears that the Large Hadron Collider (LHC) could generate a black hole that would swallow the Earth made international headlines, and debates over its safety have continued, including lawsuits intended to halt its operation. The nub of the problem of course is risk, and a peculiarly modern version of risk at that. The sociologist Ulrich Beck’s (1992) classic work crystallized a distinction between older and newer risks associated with experimentation and exploration. The older risks were localized and often restricted to the risk-takers themselves. The new risks, according to writers like Beck, are global and catastrophic. The concerns about the LHC fit Beck’s definition of the new risks.

When fears about proposed experiments or technological developments concern the potential misuse of potentially beneficial research or technology, debates of this kind are known as “dual use dilemmas.” There’s an active network of researchers on this topic. Recently I participated in a workshop at The Australian National University on this topic, from which a book should emerge next year.

Probably the most famous example is the controversy arising from the development of nuclear fission technology, which gave us the means for nuclear warfare on the one hand and numerous peacetime applications on the other. The fiercest debates these days on dual use dilemmas focus on biological experiments and nanotechnology. The Federation of American Scientists has provided a webpage of fascinating case studies of dual-use dilemmas involving biological research. The American National Research Council (NRC) 2004 report on "Biotechnological Research in an Age of Terrorism" is an influential source. Until recently, much of the commentary came from scientists, security experts or journalists. However, for a book-length treatment of the issue by ethicists, see Miller and Selgelid's (2008) interesting work.

The NRC report listed “experiments of concern” as those including any of the following capabilities:

  1. demonstrating how to render a vaccine ineffective;
  2. enhancing resistance to therapeutically useful antibiotics or antiviral agents;
  3. enhancing the virulence of a pathogen or rendering a non-pathogen virulent;
  4. increasing the transmissibility of a pathogen;
  5. altering the host range of a pathogen;
  6. enabling the evasion of diagnosis and/or detection by established methods; and
  7. enabling the weaponization of a biological agent or toxin.

There are three kinds of concern underpinning dual-use dilemmas. The first arises from foreseeable misuses that could ensue from an experiment or new technology. Most obvious are experiments or developments intended to create weapons in the first place (e.g., German scientists responsible for gas warfare in World War I or American scientists responsible for atomic warfare at the end of World War II). But not as obvious are the opportunities to exploit nonmilitary research or technology. An example of potential misuse of a rather quotidian technology would be terrorists or organized crime networks exploiting illegal botox manufacturing facilities to distill botulinum toxin (see the recent Scientific American article on this).

Research results published in 2005 announced the complete genetic sequencing of the 1918 influenza A (H1N1) virus (a.k.a. the “Spanish flu”) and also its resurrection using reverse genetic techniques. This is the virus that killed between 20 and 100 million people in 1918–1919. Prior to publication of the reverse-engineering paper, the US National Science Advisory Board for Biosecurity (NSABB) was asked to consider the consequences. The NSABB decided that the scientific benefits flowing from publication of this information about the Spanish flu outweighed the risk of misuse. Understandably, publication of this information aroused concerns that malign agents could use it to reconstruct H1N1. The same issues have been raised concerning the publication of the H5N1 influenza (“bird flu”) genome.

The second type of concern is foreseeable catastrophic accidents that could arise from unintended consequences of research or technological developments. The possibility that current stockpiles of smallpox could be accidentally let loose is the kind of event to be concerned about here. Such an event also is, for some people, an argument against research enterprises such as the reengineering of H1N1.

The third type of concern is in some ways more worrying: Unforeseeable potential misuses or accidents. After all, a lot of research yields unanticipated findings and/or opportunities for new technological developments. A 2001 paper on mousepox virus research at The Australian National University is an example of this kind of serendipity. The researchers were on the track of a genetically engineered sterility treatment that would combat mouse plagues in Australia. But this research project also led to the creation of a highly virulent strain of mousepox. The strain the researchers created killed both mice with resistance to mousepox and mice vaccinated against mousepox.

Moreover, the principles by which this new strain was engineered were readily generalizable, and raised the possibility of creating a highly virulent strain of smallpox resistant to available vaccines. Indeed, in 2003 a team of scientists at St Louis University published research in which they had extended the mousepox results to cowpox, a virus that can infect humans. The fear, of course, is that these technological possibilities could be exploited by terrorists. Recent advances in synthetic genomics have magnified this problem. It is now possible not only to enhance the virulence of extant viruses, but also create new viruses from scratch.

The moral and political questions raised by cases like these are not easy to resolve, for at least three reasons. First, the pros and cons often are unknown to at least some extent. Second, even the known pros and cons usually weigh heavily on both sides of the argument. There are very good reasons for going ahead with the research and very good reasons for prohibiting it.

The third reason applies only to some cases, and it makes those cases the toughest of all. "Dual use dilemmas" is a slightly misleading phrase, in a technical sense that relates to this third reason. Many cases are really just tradeoffs, where at least in principle rational negotiators could weigh the costs and benefits according to their lights and arrive at an optimal decision. But some cases genuinely are social dilemmas, in the following sense of the term: Choices dictated by rational, calculating self-interest nevertheless will lead to the destruction of the common good and, ultimately, everyone's own interests.

Social dilemmas aren't new. Garrett Hardin's "tragedy of the commons" is a famous and much debated example. A typical arms race is another obvious example. Researchers in countries A and B know that each country has the means to revive an extinct, virulent pathogen that could be exploited as a bioweapon. If the researchers in country A revive the pathogen and researchers in country B do not, country A temporarily enjoys a tactical advantage over country B. However, it also imposes a risk on both A and B of theft or accidental release of the pathogen. If country B responds by duplicating this feat, then B regains equal footing with A but the two countries have now multiplied the risk of accidental release or theft. Conversely, if A refrains from reviving the pathogen, then B could play A for a sucker by reviving it. It is in each country's self-interest to revive the pathogen in order to avoid being trumped by the other, but the result is the creation of dread risks that neither country wants to bear. You may have heard of the "Prisoner's Dilemma" or the "Chicken Game." These are types of social dilemmas, and some dual use dilemmas are structurally similar to them.
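
A minimal sketch of that payoff structure, with invented numbers, shows why "revive" dominates for each country even though mutual restraint leaves both better off:

```python
# Toy payoff matrix for the revive-the-pathogen arms race described above.
# Payoffs are invented; higher is better for the country concerned.
# Each entry: (payoff to A, payoff to B).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # neither revives: no dread risk
    ("revive",   "restrain"): (4, 1),  # A gains an edge, B is the "sucker"
    ("restrain", "revive"):   (1, 4),
    ("revive",   "revive"):   (2, 2),  # parity restored, but the risk is multiplied
}

def best_reply(player, opponent_move):
    # The move that maximizes this player's own payoff, given the other's move.
    moves = ["restrain", "revive"]
    if player == "A":
        return max(moves, key=lambda m: payoffs[(m, opponent_move)][0])
    return max(moves, key=lambda m: payoffs[(opponent_move, m)][1])

# Whatever the other side does, "revive" pays more for each country...
print(best_reply("A", "restrain"), best_reply("A", "revive"))  # revive revive
print(best_reply("B", "restrain"), best_reply("B", "revive"))  # revive revive
# ...yet mutual reviving (2, 2) leaves both worse off than mutual restraint (3, 3).
```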

Social dilemmas present a very difficult problem in the regulation of curiosity (and other kinds of choice behavior) because solutions cannot be worked out solely on the basis of appeals to rational self-interest. Once you know what to look for, you can find social dilemmas in all sorts of places. The research literature on how to deal with and resolve them includes contributions from economics, psychology, political science, anthropology, sociology and applied mathematics (I co-edited a book on social dilemmas back in 1999). This research has had applications ranging from environmental policy formation to marriage guidance counseling. But it shouldn’t surprise you too much to learn that most of the early work on social dilemmas stemmed from and was supported by the American military.

So to conclude, let’s try putting the proverbial shoe on the other foot. The dual-use dilemmas literature focuses almost exclusively on scientific research and technological developments that could be weaponized. But what about the reverse process—Military research and development with nonmilitary benefits? Or, for that matter, R & D from illicit or immoral sources that yield legitimate spinoffs and applications? These prospects appear to have been relatively neglected.

Nevertheless, it isn’t hard to find examples: The internet, for one. The net originated with the American Defense Advanced Research Projects Agency (DARPA), was rapidly taken up by university-based researchers via their defense-funded research grants, and morphed by the late 1980’s into the NSFnet. Once the net escaped its military confines, certain less than licit industries spearheaded its development.  As Peter Nowak portrays it in his entertaining and informative (if sometimes overstated) book Sex, Bombs and Burgers, the pornography industry was responsible for internet-defining innovations such as live video streaming, video-conferencing and key aspects of internet security provision. Before long, mainstream businesses were quietly adopting ways of using the net pioneered by the porn industry.

Of course, I’m not proposing that the National Science Foundation should start funding R&D in the porn industry. My point is just that this “other” dual use dilemma merits greater attention and study than it has received so far.

Written by michaelsmithson

November 3, 2010 at 10:49 am
