ignorance and uncertainty

All about unknowns and uncertainties


When Is It Folly to Be Wise?


There are things we’d rather not know. Some of these are temporary; we’d like to know them eventually but not just now. Others, less common, are things we never want(ed) to know.

In this post I’ll focus on the temporary kind. Temporary ignorance has many uses, some of which are not immediately obvious. I’ve already mentioned a few of them in earlier posts. One of these is entertainment. Many forms of entertainment require temporary audience ignorance, including all forms of story-telling and jokes. No unknowns? No mysteries? No surprises? Then no entertainment.

Games are an example of entertainment where uncertainty has a key role even in games of skill. A game that is a foregone conclusion is not very entertaining. Games of skill are like tests but more fun. Why? Partly because games have more uncertainty built into them than tests do, and so they tease us with a mix of outcomes due to skill and sheer luck. More than 25 years ago, a clinical neuropsychologist working in a large hospital told me how he ended up exploiting this connection between games and tests. One of his chief duties was to assess the state and recovery of cognitive functions of patients in a head trauma unit, often victims of automobile accidents or strokes. The well-established tests of memory, motor control and sustained attention had good psychometric properties but they were boring. Some patients refused to take them; others complied but only with a desultory effort.

Then inspiration struck: My colleague noticed that anyone who could manage it would head down the ward corridor to play Space Invaders. Here was a ready-made test of attention and motor control built into a game. Moreover, repeatedly playing the game actually would facilitate patients’ recovery, so unlike the standard cognitive tests this “test” had a therapeutic effect. He attached a computer to the back of the game, established benchmark measures such as how long players would last if they did nothing or moved the joystick randomly, and started recording individual patients’ results. The results were a clinician’s dream: meaningful data tracking patients’ recovery and a therapeutic exercise.

Some psychologists who should know better (e.g., Gudykunst and Nishida 2001) have declared that the emotional accompaniment of uncertainty is anxiety. Really? What about thrill, excitement, anticipation, or hope? We can’t feel thrill, excitement, or anticipation without the unknowns that compel them. And as for hope, if there’s no uncertainty then there’s no hope. These positive emotions aren’t merely permitted under uncertainty; they require it. To my knowledge, no serious investigation has been made into the emotional concomitants of omniscience, but in fact, there is only one human emotional state I associate with omniscience (aside from smugness): boredom.

We don’t just think we’re curious or interested; we feel curious or interested. Curiosity and interest have an emotional cutting-edge. Intellectuals, artists and researchers have a love-hate emotional relationship with their own ignorance. On the one hand, they are in the business of vanquishing ignorance and resolving uncertainties. On the other, they need an endless supply of the unknowns, uncertainties, riddles, problems and even paradoxes that are the oxygen of the creative mind. One of the hallmarks of scientists’ reactions to Horgan’s (1996) book, “The End of Science,” was their distress at Horgan’s message that science might be running out of things to discover. Moreover, artists are not attracted to obvious ideas, nor scientists to easy problems. They want their unknowns to be knowable and problems to be solvable, but also interesting and challenging.

Recently an Honours student undertaking her first independent research project came to me for some statistical advice. She sounded frustrated and upset. Gradually it became apparent that hardly any of her experimental work had turned out as expected, and the standard techniques she’d been taught were not helping her to analyze her data and interpret her findings. I explained that she might have to learn about another technique that could help here. She asked me, “Is research always this difficult?” I replied with Piet Hein’s aphorism, “Problems worthy of attack prove their worth by fighting back.” Her eyes narrowed. “Well, now that you put it that way…” Immediately I knew that this student had the makings of a researcher.

A final indication of the truly ambivalent relationship creative folk have with their favorite unknowns is that they miss them once they’ve been dispatched. Andrew Wiles, the mathematician who proved Fermat’s Last Theorem, spoke openly of his sense of loss for the problem that had possessed him for more than 7 years.

And finally, let’s take one more step to reach a well-known but often forgotten observation: Freedom is positively labeled uncertainty about the future. There isn’t much more to it than that. No future uncertainties in your life? Everything about your future is foreordained? Then you have no choices and therefore no freedom. As with intellectuals and their unknowns, we want many of our future unknowns to be ultimately knowable but not foreordained. We crave at least some freedom of choice.

People are willing to make sacrifices for their freedom, and here I am not referring only to a choice between freedom and a dreadful confinement or tyrannical oppression. Instead, I have in mind tradeoffs between freedom and desirable, even optimal but locked-in outcomes. People will cling to their freedom to choose even if it means refusing excellent choices.

A 2004 paper by Jiwoong Shin and Daniel Ariely, described in Ariely’s entertaining book “Predictably Irrational” (2008, pp. 145-153), reports experimental evidence for this claim. Shin and Ariely set up an online game with 3 clickable doors, each of which yielded a range of payoffs (e.g., between 1 and 10 cents). The object of the game was to make as much money as possible in 100 clicks. There was a twist: Every time one door was clicked, the others would shrink by a certain amount, and if left unchosen enough times a door would disappear altogether. Shin and Ariely found that even bright university (MIT) students would forgo top earnings in order to keep all the doors open. They tried providing the participants with the exact monetary payoffs from each door (so they would know which door offered the most), and they even modified the game so that a vanished door could be “reincarnated” with a single click. It made no difference; participants continued to refuse to close any doors. For them, the opportunity costs of closed doors loomed larger than the payoffs they could have had by sticking with the best door.
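For readers who like to see the mechanics, here is a minimal simulation sketch of a door game of this kind. It is my reconstruction from the description above, not Shin and Ariely’s code: the payoff ranges, the neglect threshold at which a door vanishes, and the two strategies compared are all illustrative assumptions.

```python
import random

NUM_CLICKS = 100       # total clicks available in one game
DISAPPEAR_AFTER = 12   # clicks of neglect before a door vanishes (assumed value)

class Door:
    def __init__(self, low, high):
        self.low, self.high = low, high   # payoff range in cents (assumed)
        self.neglect = 0                  # consecutive clicks spent on other doors
        self.open = True

    def payoff(self):
        return random.uniform(self.low, self.high)

def play(strategy):
    """Play one game; `strategy(doors, open_doors)` returns the index to click."""
    doors = [Door(1, 4), Door(2, 6), Door(3, 10)]   # three doors, assumed ranges
    earnings = 0.0
    for _ in range(NUM_CLICKS):
        open_doors = [i for i, d in enumerate(doors) if d.open]
        choice = strategy(doors, open_doors)
        earnings += doors[choice].payoff()
        for i, d in enumerate(doors):
            if i == choice:
                d.neglect = 0
            elif d.open:
                d.neglect += 1
                if d.neglect >= DISAPPEAR_AFTER:
                    d.open = False        # the door "disappears"
    return earnings

def stick_with_best(doors, open_doors):
    # Commit to the most lucrative door and let the others die off.
    return max(open_doors, key=lambda i: doors[i].high)

def keep_all_open(doors, open_doors):
    # Always rescue the door closest to disappearing.
    return max(open_doors, key=lambda i: doors[i].neglect)

random.seed(1)
for name, strat in [("stick with best door", stick_with_best),
                    ("keep all doors open", keep_all_open)]:
    avg = sum(play(strat) for _ in range(500)) / 500
    print(f"{name}: about {avg:.0f} cents per game")
```

Running it, the player who keeps every door alive earns noticeably less per game than the player who commits to the best door; that gap is the price the participants were paying for keeping their options open.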

So here we have one of the key causes of indecision, namely a strong desire to “keep our options open,” i.e., to maintain positively labeled uncertainty. If achieving certainty is framed in terms of closing off options, we strive to avoid it. If uncertainty is framed as keeping our options open we try to maintain it, even if that entails missing out on an excellent choice. This tendency is illustrated by a folk-wisdom stereotype in the wild and wonderful world of dating-and-mating. He and she are in love and their relationship has been thriving for more than a year. She’d like to make it permanent, but he’s still reluctant to commit. Why? Because someone “better” might come along…

What could drive us to keep our options open, refusing to commit even when we end up letting our best opportunities pass us by? Could it be the way we think about probabilities? Try this rather grim thought-experiment: First, choose an age beyond your current age (for me, say, 75). Then, think of the probability that you’ll get cancer before you reach that age. Now, think of the probability that you’ll get cancer of the stomach. Think of the probability you’ll get lung cancer. The probability you’ll get bone cancer. Or cancer of the brain. Or breast cancer (if you’re a woman) or prostate cancer (if you’re a man). Or skin cancer. Or pancreatic cancer… If you’re like most people, unpacking “cancer” into a dozen or so varieties will make it seem more likely that you’ll get it than considering “cancer” in one lump: it just seems more probable that you’d end up with at least one of those varieties. The more ways we can think of something happening, the more likely we think it is. Cognitive psychologists have found experimental evidence for this effect (for the curious, take a look at papers by Tversky and Koehler 1994 and Rottenstreich and Tversky 1997).
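For the technically inclined, support theory (the framework behind the Tversky and Koehler paper) captures the unpacking effect compactly. What follows is my schematic sketch of its central equation and the subadditivity property, not a quotation from those papers:

```latex
% Judged probability that hypothesis A holds rather than its alternative B
% is the relative "support" each description evokes:
\[
  P(A, B) \;=\; \frac{s(A)}{s(A) + s(B)}.
\]
% Unpacking A ("cancer") into explicit components A_1, \dots, A_n
% (stomach, lung, bone, ...) typically raises the total support:
\[
  s(A) \;\le\; s(A_1) + s(A_2) + \dots + s(A_n),
\]
% so the unpacked description is judged more probable than the packed one.
```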

An even more startling effect was uncovered in a paper by Kirkpatrick and Epstein (1992). They offered people a choice between drawing a ticket from a lottery of 10 tickets, 9 losing and 1 winning, and a lottery with 100 tickets, 90 losing and 10 winning. The participants confirmed that they knew the probability of winning either lottery was .1, so the effect was not driven by their probability judgments. Nevertheless, when asked which they preferred, most chose the 100-ticket lottery. Why? Because that lottery gave them 10 ways of winning whereas the other gave them only 1 way.

The more options we keep open, the more “winning tickets” we think we hold and the greater the hope we might get lucky. When we’ve committed to one of those options we may have gained certitude, but luck and hope have vanished.

Written by michaelsmithson

November 3, 2010 at 10:50 am

Science for Good and Evil: Dual Use Dilemmas


For fascinating examples of attempts to control curiosity, look no further than debates and policies regarding scientific research and technological development. There are long-running debates about the extent to which scientists, engineers, and other creative workers can and should be regulated, and if so, by whom. Popular images of scholars and scientists pursuing knowledge with horrific consequences for themselves and others range from the 16th-century legend of Faustus (reworked by numerous authors, e.g., Marlowe) to Bruce Banner.

The question of whether experiment X should be performed or technology Y developed is perennial. The most difficult versions of this question arise when pursuing the object of one’s curiosity violates others’ deeply held values or ethical principles, or induces great fear. These issues have a lengthy history. Present-day examples of scientific research and technological developments evoking this kind of conflict include stem cell research, human cloning, the Large Hadron Collider, and genetic modification of food crops.

Before it began operations, fears that the Large Hadron Collider (LHC) could generate a black hole that would swallow the Earth made international headlines, and debates over its safety have continued, including lawsuits intended to halt its operation. The nub of the problem of course is risk, and a peculiarly modern version of risk at that. The sociologist Ulrich Beck’s (1992) classic work crystallized a distinction between older and newer risks associated with experimentation and exploration. The older risks were localized and often restricted to the risk-takers themselves. The new risks, according to writers like Beck, are global and catastrophic. The concerns about the LHC fit Beck’s definition of the new risks.

When fears about proposed experiments or technological developments concern the potential misuse of otherwise beneficial research or technology, debates of this kind are known as “dual use dilemmas.” There’s an active network of researchers on this topic. Recently I participated in a workshop on it at The Australian National University, from which a book should emerge next year.

Probably the most famous example is the controversy arising from the development of nuclear fission technology, which gave us the means for nuclear warfare on the one hand and numerous peacetime applications on the other. The fiercest debates these days on dual use dilemmas focus on biological experiments and nanotechnology. The Federation of American Scientists maintains a web page of fascinating case studies in dual-use dilemmas involving biological research. The American National Research Council’s (NRC) 2004 report, “Biotechnology Research in an Age of Terrorism,” is an influential source. Until recently, much of the commentary came from scientists, security experts or journalists. However, for a book-length treatment of this issue by ethicists, see Miller and Selgelid’s (2008) interesting work.

The NRC report listed “experiments of concern” as those including any of the following capabilities:

  1. demonstrating how to render a vaccine ineffective;
  2. enhancing resistance to therapeutically useful antibiotics or antiviral agents;
  3. enhancing the virulence of a pathogen or rendering a non-pathogen virulent;
  4. increasing the transmissibility of a pathogen;
  5. altering the host range of a pathogen;
  6. enabling the evasion of diagnosis and/or detection by established methods; and
  7. enabling the weaponization of a biological agent or toxin.

There are three kinds of concern underpinning dual-use dilemmas. The first arises from foreseeable misuses that could ensue from an experiment or new technology. Most obvious are experiments or developments intended to create weapons in the first place (e.g., German scientists responsible for gas warfare in World War I or American scientists responsible for atomic warfare at the end of World War II). Less obvious are the opportunities to exploit nonmilitary research or technology. An example of potential misuse of a rather quotidian technology would be terrorists or organized crime networks exploiting illegal botox manufacturing facilities to obtain botulinum toxin (see the recent Scientific American article on this).

Research results published in 2005 announced the complete genetic sequencing of the 1918 influenza A (H1N1) virus (a.k.a. the “Spanish flu”) and also its resurrection using reverse genetic techniques. This is the virus that killed between 20 and 100 million people in 1918–1919. Prior to publication of the reverse-engineering paper, the US National Science Advisory Board for Biosecurity (NSABB) was asked to consider the consequences. The NSABB decided that the scientific benefits flowing from publication of this information about the Spanish flu outweighed the risk of misuse. Understandably, publication of this information aroused concerns that malign agents could use it to reconstruct H1N1. The same issues have been raised concerning the publication of the H5N1 influenza (“bird flu”) genome.

The second type of concern is foreseeable catastrophic accidents that could arise from unintended consequences of research or technological developments. The possibility that current stockpiles of smallpox could be accidentally let loose is the kind of event to be concerned about here. Such an event also is, for some people, an argument against research enterprises such as the reengineering of H1N1.

The third type of concern is in some ways more worrying: Unforeseeable potential misuses or accidents. After all, a lot of research yields unanticipated findings and/or opportunities for new technological developments. A 2001 paper on mousepox virus research at The Australian National University is an example of this kind of serendipity. The researchers were on the track of a genetically engineered sterility treatment that would combat mouse plagues in Australia. But this research project also led to the creation of a highly virulent strain of mousepox. The strain the researchers created killed both mice with resistance to mousepox and mice vaccinated against mousepox.

Moreover, the principles by which this new strain was engineered were readily generalizable, and raised the possibility of creating a highly virulent strain of smallpox resistant to available vaccines. Indeed, in 2003 a team of scientists at St Louis University published research in which they had extended the mousepox results to cowpox, a virus that can infect humans. The fear, of course, is that these technological possibilities could be exploited by terrorists. Recent advances in synthetic genomics have magnified this problem. It is now possible not only to enhance the virulence of extant viruses but also to create new viruses from scratch.

The moral and political questions raised by cases like these are not easy to resolve, for at least three reasons. First, the pros and cons often are unknown to at least some extent. Second, even the known pros and cons usually weigh heavily on both sides of the argument. There are very good reasons for going ahead with the research and very good reasons for prohibiting it.

The third reason applies only to some cases, and it makes those cases the toughest of all. “Dual use dilemmas” is a slightly misleading phrase, in a technical sense that relates to this third reason. Many cases are really just tradeoffs, where at least in principle rational negotiators could weigh the costs and benefits according to their lights and arrive at an optimal decision. But some cases genuinely are social dilemmas, in the following sense of the term: Choices dictated by rational, calculating self-interest nevertheless will lead to the destruction of the common good and, ultimately, everyone’s own interests.

Social dilemmas aren’t new. Garrett Hardin’s “tragedy of the commons” is a famous and much debated example. A typical arms race is another obvious example. Researchers in countries A and B know that each country has the means to revive an extinct, virulent pathogen that could be exploited as a bioweapon. If the researchers in country A revive the pathogen and researchers in country B do not, country A temporarily enjoys a tactical advantage over country B. However, it also imposes a risk on both A and B of theft or accidental release of the pathogen. If country B responds by duplicating this feat then B regains equal footing with A but now has multiplied the risk of accidental release or theft. Conversely, if A refrains from reviving the pathogen then B could play A for a sucker by reviving it. It is in each country’s self-interest to revive the pathogen in order to avoid being trumped by the other, but the result is the creation of dread risks that neither country wants to bear. You may have heard of the “Prisoner’s Dilemma” or the game of “Chicken.” These are types of social dilemmas, and some dual use dilemmas are structurally similar to them.
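To make that structure concrete, here is a small sketch of the pathogen-revival scenario laid out as a Prisoner’s Dilemma payoff table. The payoff numbers are illustrative assumptions chosen only to reproduce the ordering described above, not estimates of any real stakes.

```python
# Two moves per country; payoffs[(A's move, B's move)] = (A's payoff, B's payoff).
REVIVE, REFRAIN = "revive", "refrain"

payoffs = {
    (REFRAIN, REFRAIN): (3, 3),   # neither revives: no dread risk
    (REVIVE,  REFRAIN): (4, 1),   # A gains a temporary edge, B is the "sucker"
    (REFRAIN, REVIVE):  (1, 4),   # the mirror image
    (REVIVE,  REVIVE):  (2, 2),   # both revive: parity again, but doubled risk
}

def best_reply(opponent_move, player):
    """Self-interested best reply for `player` (0 = A, 1 = B) to the opponent's move."""
    def my_payoff(move):
        profile = (move, opponent_move) if player == 0 else (opponent_move, move)
        return payoffs[profile][player]
    return max((REVIVE, REFRAIN), key=my_payoff)

for b_move in (REFRAIN, REVIVE):
    print(f"If B plans to {b_move}, A's best reply is to {best_reply(b_move, 0)}")
# Both best replies are "revive", yet (revive, revive) pays (2, 2),
# worse for both countries than mutual restraint at (3, 3).
```

That ordering of payoffs, where defection dominates for each player but mutual defection is worse for everyone than mutual restraint, is exactly what makes a case a social dilemma rather than a simple tradeoff.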

Social dilemmas present a very difficult problem in the regulation of curiosity (and other kinds of choice behavior) because solutions cannot be worked out solely on the basis of appeals to rational self-interest. Once you know what to look for, you can find social dilemmas in all sorts of places. The research literature on how to deal with and resolve them includes contributions from economics, psychology, political science, anthropology, sociology and applied mathematics (I co-edited a book on social dilemmas back in 1999). This research has had applications ranging from environmental policy formation to marriage guidance counseling. But it shouldn’t surprise you too much to learn that most of the early work on social dilemmas stemmed from and was supported by the American military.

So to conclude, let’s try putting the proverbial shoe on the other foot. The dual-use dilemmas literature focuses almost exclusively on scientific research and technological developments that could be weaponized. But what about the reverse process: military research and development with nonmilitary benefits? Or, for that matter, R&D from illicit or immoral sources that yields legitimate spinoffs and applications? These prospects appear to have been relatively neglected.

Nevertheless, it isn’t hard to find examples: The internet, for one. The net originated with the American Defense Advanced Research Projects Agency (DARPA), was rapidly taken up by university-based researchers via their defense-funded research grants, and morphed by the late 1980s into the NSFnet. Once the net escaped its military confines, certain less than licit industries spearheaded its development. As Peter Nowak portrays it in his entertaining and informative (if sometimes overstated) book Sex, Bombs and Burgers, the pornography industry was responsible for internet-defining innovations such as live video streaming, video-conferencing and key aspects of internet security provision. Before long, mainstream businesses were quietly adopting ways of using the net pioneered by the porn industry.

Of course, I’m not proposing that the National Science Foundation should start funding R&D in the porn industry. My point is just that this “other” dual use dilemma merits greater attention and study than it has received so far.

Written by michaelsmithson

November 3, 2010 at 10:49 am

Secrecy and Lies: Widely Condemned and Widely Used


Probably the most obvious, direct way in which ignorance is “socially constructed” is when people impose it on one another. As is often the case with ignorance, the idea of imposing it on other people tends to bring up negative images of detrimental acts with sinister motives. This is quite understandable. The world contains innumerable examples of unethical secrecy, lies, and other outrages by which powerful agents keep the less powerful “in the dark.” Withholding information from others who have a right to know is high-handed at the very least, and lying to them is even worse.

My interest in this subject, however, begins with the observation that despite the fact that most of us dislike having information withheld from us and dislike being lied to even more, both secrecy and lying are very widespread practices. So there’s a strong asymmetry here: We dislike it being done to us but we’re quite willing to do it to others. Moreover, information withholders and liars usually believe they have sound moral justifications for their actions. I venture to say that nearly all of us have kept at least temporary secrets or lied for what we believed were good reasons (I certainly have). Given these observations, it shouldn’t be too surprising to find social norms advocating withholding or concealing information and even lying.

Let’s begin with a fairly uncontroversial and benign example of a social norm for temporarily withholding information in the service of a desirable event: Creating pleasant surprises. Receiving gifts, watching movies, and reading novels are activities that can be ruined if some miscreant gives away their hidden contents. A social norm has it that we don’t reveal the contents of a gift-wrapped birthday present to its intended recipient, or the ending of a movie we’ve seen to a friend who hasn’t. So here is an agreement between the knower and the ignoramus; most of us want our birthday presents to be surprises and we don’t want to know how a movie ends before we’ve seen it.

Information-withholding norms often are purpose-built. A fascinating example can be found in experimental research on humans and other animals, in the method called “blinding.” Research participants are “blinded” by not knowing which experimental condition they have been placed in (e.g., are they getting the new wonder drug or a placebo?). Experimenters are “blinded” when they don’t know which experimental condition each participant is assigned to. The idea of blinding the experimenter goes back to Claude Bernard, the great 19th-century French physiologist and medical scientist. A “double-blind” experiment is one that fulfills both of these conditions.
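As a concrete illustration of the bookkeeping involved, here is a minimal sketch of double-blind assignment. It is my own toy example of the idea, not a description of any particular study’s procedure; a real trial would also use balanced (e.g., blocked) randomization and formal unblinding rules.

```python
import random

def double_blind_assignment(participant_ids, conditions=("drug", "placebo"), seed=42):
    """Assign participants to conditions behind opaque codes.

    The returned `blinded` map is all the experimenter (and participant) sees;
    the `key` linking codes to conditions is held by a third party until analysis.
    """
    rng = random.Random(seed)
    codes = rng.sample(range(10**6), len(participant_ids))  # unique opaque labels
    key = {}       # code -> true condition (sealed)
    blinded = {}   # participant -> code (visible)
    for pid, code in zip(participant_ids, codes):
        label = f"{code:06d}"
        key[label] = rng.choice(conditions)   # a real trial would balance these
        blinded[pid] = label
    return blinded, key

blinded, key = double_blind_assignment(["P01", "P02", "P03", "P04"])
print(blinded)   # participant -> code only; no condition information leaks
# `key` is opened only after the data have been collected and locked.
```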

Norms and rules for enforcing selective ignorance pervade ordinary social life. Many occupational roles not only require specialized knowledge but also specific ignorance: restrictions on access to information specified by one’s role. A well-known case in point is the military concept of the “need to know,” whereby even personnel with appropriate security clearances must require information for the performance of their official duties in order to be granted access to it.

Organizational norms enforcing restricted access to information can have a downside, even when such restrictions are central to the organization’s purposes. The Sept. 11, 2001, attacks revealed difficulties due to the strongly compartmentalized information silos produced by the strict “need to know” culture of American intelligence agencies. The 9/11 Commission recommended a shift in the intelligence community from the “need to know” culture to a “responsibility to provide” approach, later implemented in the 2004 Intelligence Reform and Terrorism Prevention Act.

Confidentiality is another social norm for withholding information that is premised on a moral injunction. Being asked and agreeing to treat information confidentially brings a moral responsibility not to reveal it to others. At times, confidentiality can collide with other moral principles. A researcher or journalist interviewing heroin addicts about heroin usage will want to guarantee interviewees anonymity and confidentiality. After all, the interviewees are going to be admitting to illegal acts. But what if an interviewee reveals that they have murdered someone? Duty-of-care principles would compel the interviewer to report the crime to the authorities. Ethical principles regarding confidentiality also can come into conflict with the law. Returning to our researcher or journalist, what if the researcher’s data or the journalist’s tapes are subpoenaed by a court of law? The human research ethics committee on which I’ve served at my university forbade researchers to promise research participants absolute confidentiality; they could promise only “confidentiality as far as the law allows.”

Moral injunctions regarding selective ignorance abound in childrearing. Responsible parents have to deal with the question of what children should and should not know, or at least when. This issue is perennial and mundane, but it can be an ethical and moral minefield nevertheless. When should children find out about reproduction? When should they know about illicit drugs? For a more agonizing case, consider children of a parent who has a heritable disease: When should they take a genetic marker test to determine whether they have inherited it, and when should they know the result? A recent news story about a Ugandan draft policy recommending that HIV-positive children be informed of their illness at age 10 has understandably generated heated debate.

Now let us venture onto thinner ice: Social norms that promote lying. There is a large philosophical literature on lying, perhaps the most well-known sourcebook being Bok’s (1978) masterwork. Bok takes a rather severe position about lying and liars, concluding that lies seldom can be justified. Even a pragmatist who disregards ethical and moral arguments against lying still would have to admit that lying is risky: one’s reputation can suffer irreparable damage. On balance, evidence points to a widespread belief that omitting to disclose information is not as bad (or at least, not as risky) as lying. For instance, Burgoon, Callister, and Hunsaker’s (1994) investigation of equivocation or omission versus falsification in doctor-patient interactions found that about 85% of the participants admitted to omission but only 34% admitted to falsification. Likewise, Brown and Levinson’s (1987) pioneering anthropological work on politeness suggests that people intending to be polite to one another will resort to what they consider to be ambiguity or vagueness more than outright distortion or deception.

Nevertheless, lying is common enough to suggest that many of us are willing to take the risks. As social psychologist W. P. Robinson (1996: 207) puts it, “The more competitive the situation and the more serious the consequences of winning or losing, the more likely it is that deception will be normative or required.” Examples in the social order where deception is normative or required abound: Competitive games and political and military conflict are the most obvious, with business not far behind. And so liars can be romantic heroes. Lionized liars include spies, military commanders who outwit their foes, superheroes with secret identities, detectives who not only uncover deceit but deceive criminals, and even successful con artists.

However, competition is far from being the only justification for lying. Perhaps the most common norms encouraging deception are those guiding polite conversation, in particular, tact. Much tactfulness amounts to omission (avoiding saying impolite things), but it can readily extend to distortion as well. Tactful dissembling ranges from “softening” utterances that might offend their recipient to outright lies. To soften a phrase, we replace it with a less potent alternative (e.g., “not terribly good” instead of “really bad”). In one of my many failed attempts at phrase-softening, the colleague who had received my gentle critique remarked “I must remember from now on, Mike, that when you say something is ‘not quite true’ you actually mean it’s utter rubbish.”

Parents frequently have to deal with the question of whether to lie to their children. Should children be led to believe in Santa Claus, and if so, when and how should they find out he doesn’t exist? Even just permitting a child to believe in Santa Claus, the Easter Bunny or the Tooth Fairy requires tacit complicity with falsehoods. But plenty of responsible, well-intentioned parents who love their children go further by actively sustaining these illusions. In fact, parental lying is widespread and it goes far beyond Santa Claus and the Tooth Fairy. In the third episode of the recent Politically Incorrect Parenting Show, a TV series aired in Australia and New Zealand, Dr. Nigel Latta takes up parental lying with the aim of openly discussing its pros and cons. Interestingly, this practice has received hardly any attention from researchers studying childrearing practices. Here’s a recent news story about University of Toronto studies investigating how and why parents lie to their children. The most common reasons parents gave were to influence children’s behavior and emotional states.

And finally, information concealment and lying play roles in many kinds of humor. For instance, one version of “taking the Mickey” requires the jokester to lie initially and only eventually let the victim in on the joke. I’d just arrived in the department where I now work when I was approached by one of my new colleagues. Our conversation started off like this:

Colleague: I understand your name is ‘Michael.’
Michael: Yes, it is.
C: Well, my name also is ‘Michael.’ Both of us can’t be called ‘Michael,’ it will cause confusion.
M: What, really?
C: Yes… I was here first. You’ll simply have to be called something else.
M: I often go by ‘Mike,’ would that do?
C: Yes, perhaps. But you’ll have to insist on being called that, you know…

I didn’t realize my leg was being pulled until a couple of remarks further along. My colleague’s dry wit and deadpan delivery had me completely fooled. We became good friends, although I did tell him that he was such an effective liar that I’d have to keep an eye on him.

Written by michaelsmithson

November 3, 2010 at 10:48 am

Virtuous Ignorance


In his by now famous review of Sarah Palin’s “Going Rogue,” Jonathan Raban described her book as a “four-hundred-page paean to virtuous ignorance.” Raban was referring to the kind of homespun philosophy that is exemplified by expressions such as “I don’t know much about art, but I know what I like.” He characterizes it as a “commonsense conservatism” whereby morally upright laypeople are better able than experts to judge the merits of, say, American foreign policy, because too much expertise clouds judgment.  Some commentators opined that “virtuous ignorance” is another way of saying “anti-intellectual.” Be that as it may, I would like to focus on the concept of virtuous ignorance itself. I’ve mentioned this concept elsewhere (Smithson 2008), and I think it holds plenty of riches for anyone who cares to dig into it. Once we know what to look for, virtuous ignorance turns up in lots of places, and not just among right-wing politicians or, for that matter, laypeople.

I’ll begin by drawing a distinction between ignorance as a virtue and ignorance as a preferred state. An old friend of mine was involved in early selection tests administered to men applying to the Australian Air Force. He used to claim credit for having selected two airmen in particular: One who dropped a dud bomb on a bicyclist during a practice run and another who retracted the landing gear on a Mirage fighter-jet when it was on the ground. My friend also told me that one of the admission test questions asked applicants for the name of the artist who painted “Blue Boy.” If they correctly responded “Thomas Gainsborough,” that was sufficient to reject them. There was no place in the Australian Air Force for men who knew that Gainsborough painted “Blue Boy.”

Is this an example of virtuous ignorance? Not quite. While it displays a preference for Air Force men who do not know their Gainsborough, it stops short of claiming this as a virtue. In other words, the selectors may not have believed that knowing about Gainsborough diminished a man’s worth. Instead, they may have been merely pragmatic: Perhaps they believed that men with interests in and education about the arts would not fit in well to the Australian Air Force of the time.

Here’s an example closer to virtuous ignorance, and one that isn’t attributable to conservative politicians or anti-intellectuals. In fact, it involves highly educated people. Back in the ’70s I took a PhD in sociology. My then fellow sociology graduate students and I hadn’t read any psychology beyond a few selected works by Freud, but we all knew we hated psychology, with the possible exception of Freud. More than that, some of us also knew that psychology was full of reductionist, positivist, medical-model-following reactionaries who were blind instruments of the capitalist order.

Eventually, nevertheless, my curiosity was piqued. What was so dastardly about psychology? I decided to find out. When I revealed to my peers that I had started taking a class in psychology, my status immediately plummeted. For them, not studying psychology indicated superiority of intellect. True cognoscenti would know better than to waste their time reading up on such an obviously misguided discipline.

Years later, I ran across anecdotal evidence that this view was not confined to students. A psychologist colleague attended a seminar by a very prominent sociologist, and was inspired by the sociologist’s portrayal of a revolutionary future for the social sciences. However, he was puzzled that no mention was made of psychology or psychologists. Afterward, he approached the professor and asked him how psychology would fit into his vision of the future. The sociologist replied that he had once, long ago, taken a class in psychology but tried not to think about that unfortunate lapse in judgment. These days, I work in a psychology department. Clearly, I have gone over to the Dark Side.

I’m not picking on sociologists. Plenty of examples are available in other disciplines. Another psychologist friend of mine, very well educated and highly intelligent, once confessed to me that she was proud that she knew nothing about economics. In these examples, ignorance is made virtuous by converting it into a kind of status marker. Professing ignorance of the “right” things can be an indicator of high status. In many societies, class and/or caste distinctions have been partly based on this.  Not so long ago in English society, knowledge of a mere craft or trade (versus an art or discipline) was considered beneath nobility.

So the issue isn’t the decision or preference not to know X; it’s the moral judgments about those who do or don’t know X. This is the key to the fascinating realm of virtuous ignorance. At first, the very phrase may seem almost oxymoronic, or at the very least, something to make fun of. However, it isn’t too difficult to find cases of virtuous ignorance with serious moral purposes underpinning them. More on this shortly.

But first, can people think that ignorance is virtuous in other ways than conferring status on the ignoramus? In the first words to his book, “Strange Weather” (Ross 1991), on the Acknowledgements page, the self-appointed critic of science Andrew Ross declared “This book is dedicated to all the science teachers I never had. It could only have been written without them.” Taken literally, the second sentence seems unexceptional. I think we would all grant that had he even a bare-bones scientific education, Ross could not have written the book that he did. But some of those rebutting Ross’ work, such as biologist Paul Gross and mathematician Norman Levitt, interpreted his declaration as a hubristic “boast” (Gross and Levitt 1994: 91).

Why might this dedication have been a boast, and if so, what could Ross have been boasting about? A clear hint comes in the Introduction on p. 8: “As I lacked the training of a scientific intellectual and the accompanying faith, however vestigial or self-critical, in the certainties of the scientific method… My position, then, became that of a cultural critic…” There we have it: Ross is claiming not only that scientists are indoctrinated into a faith, but also that anyone undergoing scientific “training” becomes contaminated by this faith and, worse still, cannot be rid of it even through self-criticism. It’s like the stain of Original Sin. Therefore, contamination can be avoided only by not studying science.

It isn’t a big leap from the idea that knowledge might be a contaminant to the notion of taboos against knowledge. Where do knowledge taboos come from? What creates and sustains them? The anthropologist Mary Douglas wrote a classic book “Purity and Danger” (Douglas 1967), with an intriguing first approximation to a general theory of taboos. She claimed that there are two kinds of proscription in taboos: One against threats or dangers, and another against pollution or contamination.

So, let’s try this out on knowledge. First, some information could be declared off-limits because knowing it would pose a danger (to oneself or to others). Or perhaps the process of acquiring the information would incur dangers. What kinds of danger lurk here? In the first case, perhaps the most obvious example is knowledge that can be used against other people. In the second case, the dangers could range from physical (e.g., experimenting with explosives) to social (e.g., committing illegal acts in the pursuit of knowledge).

Now, consider the second kind of taboo, pollution or contamination. “Innocence,” when it refers to a kind of saintly naivety, is a familiar but interesting kind of virtuous ignorance. In Christian traditions it is associated with the tree of knowledge, forbidden fruit, and the events that got humanity kicked out of Eden. Those who have lost their innocence or known sin are “tainted.” Mary Douglas defined pollution as dirt in the wrong place. Translated into the realm of knowledge and ignorance, the crucial idea is that information can be in a “wrong place.” For instance, some kinds of knowledge may be considered appropriate for adults but not for children. One of the most intriguing aspects of innocence is the notion that its maintenance often requires the protection of innocents by more knowledgeable (and therefore unclean) guardians.

At first glance, proscriptions against knowledge raise familiar images of oppression and domination: book burning and bans against certain teachings. This is fair enough; examples abound of ignorance imposed by one group on another. Nevertheless, virtuous ignorance also can be found at the base of benign or even benevolent arrangements agreed to by society as a whole.

Privacy, for instance, amounts to a multilateral ignorance arrangement, whereby we agree that certain kinds of information about ourselves will not be available to others without our consent. Respecting others’ privacy is virtuous behavior and thus virtuous ignorance. People who violate privacy norms pose a threat to the person(s) about whom they have obtained private information. A right to privacy amounts to a right to at least some control over who knows what about you. Secrecy is unilateral; privacy is multilateral and therefore invokes social norms for good conduct. Virtuous people do not poke their noses into matters that are none of their business.

Social relations based on trust operate in a similar manner. Trust is not about concealing information, but trust relationships do require observance of an interesting kind of privacy. If one person is monitoring another or insisting that they fully account for their actions, the person under surveillance will conclude that the monitor does not trust them. Trust entails running the risk of being exploited but increases opportunities by rendering the truster more mobile and able to establish cooperative relations more quickly than someone who insists on surveillance and binding contractual relations. Trust, therefore, is both an example of a social relation that requires tolerance of undesired uncertainty (the risk of being exploited) in favor of desired uncertainty (freedom to seize opportunities for new relations) and, of course, virtuous behavior. Good friends don’t place one another under 24-7 surveillance.

And so, we have arrived at examples of virtuous ignorance that are socially mandated and underpin some important forms of social capital: Privacy, trust, and friendship. Well and good, one might say, but these are examples of self-imposed, voluntary ignorance. What about the virtues of imposing ignorance on others? I’ll take a tour through some of that territory next time.

Written by michaelsmithson

November 3, 2010 at 10:00 am