It’s coming up to a year since I began this blog. In my usual fashion, I set myself the unrealistic goal of writing a post every week. This is only the 37th, so I’ve fallen short by a considerable margin. On the other hand, most of those posts have been on the order of 1500 words long, for a total of about 55,500 words thus far. That seems a fair whack of the keyboard, and it’s been fun too.
In an earlier post I proposed that because of the ways in which knowledge economies work, we increasingly live in an “ignorance society.” In the same year that Sheldon Ungar’s paper on ignorance as a public problem appeared, another paper came out by Joanne Roberts and John Armitage with the intriguing title “The Ignorance Economy.” Their stated purpose was to critique the notion of a knowledge economy via an investigation of ignorance from an economic standpoint.
As Roberts and Armitage (and many others) have said, knowledge as a commodity has several distinctive features. Once knowledge is consumed, it does not disappear and indeed its consumption may result in the development of more knowledge. The consumption of knowledge is non-zero-sum and can be non-excludable. Knowledge is a multiplier resource in this sense. Finally, knowledge is not subject to diminishing returns.
Interestingly, Roberts and Armitage do not say anything substantial about ignorance as a commodity. We already have some characterizations handy from this blog and elsewhere. Like knowledge, ignorance can be non-zero-sum and non-excludable in the sense that my being ignorant about X doesn’t prevent you from also being ignorant about X, nor does greater ignorance on my part necessarily decrease your ignorance. Ignorance also does not seem susceptible to diminishing returns. And of course, new knowledge can generate ignorance, and an important aspect of an effective knowledge-based economy is its capacity for identifying and clarifying unknowns. Even in a booming knowledge economy, ignorance can be a growth industry in its own right.
There are obvious examples of economies that could, in some sense, be called “ignorance economies.” Education and training are ignorance economies in the sense that educators and trainers make their living via a continual supply of ignoramuses who are willing to pay for the privilege of shedding that status. Likewise, governmental and corporate organizations paying knowledge experts enable those experts to make a living out of selling their expertise to those who lack it. This is simply the “underbelly” of knowledge economies, as Roberts and Armitage point out.
But what about making ignorance pay? Roberts and Armitage observe that agents in knowledge economies set about this in several ways. First, there is the age-old strategy of intellectual property protection via copyright, patents, or outright secrecy. Hastening the obsolescence of knowledge and/or skills is another strategy. Entire trades, crafts and even professions have been de-skilled or rendered obsolete. And how about that increasingly rapid deluge of updates and “upgrades” imposed on us?
A widespread observation about the knowledge explosion is that it generates an ensuing ignorance explosion, both arising from and resulting in increasing specialization. The more specialized a knowledge economy is, the greater are certain opportunities to exploit ignorance for economic gains. These opportunities arise in at least three forms. First, there are potential coordination and management roles for anyone (or anything) able to pull a large unstructured corpus of data into a usable structure or, better still, a “big picture.” Second, making sense of data has become a major industry in its own right, giving rise to ironically specialized domains of expertise such as statistics and information technology.
Third, Roberts and Armitage point to the long-established trend for consumer products to require less knowledge for their effective use. So consumers are enticed to become more ignorant about how these products work, how to repair or maintain them, and how they are produced. You don’t have to be a Marxist to share a cynical but wry guffaw with Roberts and Armitage as they confess, along with the rest of us, to writing their article using a computer of whose workings they are happily ignorant. One must admit that this is an elegant, if nihilistic, solution to Sheldon Ungar’s problem that the so-called information age has made it difficult to agree on a human-sized common stock of knowledge that we all should share.
Oddly, Roberts and Armitage neglect two additional (also age-old) strategies for exploiting ignorance for commercial gain and/or political power. First, an agent can spread disinformation and, if successful, make money or power out of deception. Second, an agent can generate uncertainty in the minds of a target population, and leverage wealth and/or power out of that uncertainty. Both strategies have numerous exemplars throughout history, from legitimate commercial or governmental undertakings to terrorism and organized crime.
Roberts and Armitage also neglect the kinds of ignorance-based “social capital” that I have written about, both in this blog and elsewhere. Thus, for example, in many countries the creation and maintenance of privacy, secrecy and censorship engage economic agents of considerable size in both the private and public sectors. All three are, of course, ignorance arrangements. Likewise, trust-based relations have distinct economic advantages over relations based on assurance through contracts, and trust is partially an ignorance arrangement.
More prosaically, do people make their living by selling their ignorance? I once met a man who claimed he did so, primarily on a consulting basis. His sales-pitch boiled down to declaring “If you can make something clear to me, you can make it clear to anyone.” He was effectively making the role of a “beta-tester” pay off. Perhaps we may see the emergence of niche markets for specific kinds of ignoramuses.
But there already is, arguably, a sustainable market for generalist ignoramuses. Roberts and Armitage moralize about the neglect by national governments of “regional ignorance economies,” by which they mean subpopulations of workers lacking any qualifications whatsoever. Yet these are precisely the kinds of workers needed to perform jobs for which everyone else would be over-qualified and, knowledge economy or not, such jobs are likely to continue abounding for some time to come.
I’ve watched seven children on my Australian middle-class suburban cul-de-sac grow to adulthood over the past 14 years. Only one of them has gone to university. Why? Well, for example, one of them realized he could walk out of school after 10th grade, go to the mines, drive a big machine and immediately command a $90,000 annual salary. The others made similar choices, though none as high-paying as his, but still favorable in short-term comparisons to their age-mates heading off to uni to rack up debts of tens of thousands of dollars. The recipe for maintaining a ready supply of generalist ignoramuses is straightforward: Make education or training sufficiently unaffordable and difficult, and/or unqualified work sufficiently remunerative and easy. An anti-intellectual mainstream culture helps, too, by the way.
Hi, I’m back again after a few weeks’ travel (presenting papers at conferences). I’ve already posted material on this blog about the “ignorance explosion.” Numerous writings have taken up the theme that there is far too much relevant information for any of us to learn and process and the problem is worsening, despite the benefits of the internet and effective search-engines. We all have had to become more hyper-specialized and fragmented in our knowledge-bases than our forebears, and many of us find it very difficult as a result to agree with one another about the “essential” knowledge that every child should receive in their education and that every citizen should possess.
Well, here is a modest proposal for one such essential: We should all become expert about experts and expertise. That is, we should develop meta-expertise.
We can’t know everything, but knowing an expert when we see one, being able to tell the difference between an expert and an impostor, and knowing what it takes to become an expert can guide our search for assistance in all things about which we’re ignorant. A meta-expert should:
- Know the broad parameters of and requirements for attaining expertise;
- Be able to distinguish a genuine expert from a pretender or a charlatan;
- Know when expertise is, and when it is not, attainable in a given domain;
- Possess effective criteria for evaluating expertise, within reasonable limits; and
- Be aware of the limitations of specialized expertise.
Let’s start with that strongly democratic source of expertise, Wikipedia’s take on experts:
“In many domains there are objective measures of performance capable of distinguishing experts from novices: expert chess players will almost always win games against recreational chess players; expert medical specialists are more likely to diagnose a disease correctly; etc.”
That said, the Wikipedia entry also raises a potentially vexing point, namely that “expertise” may come down to merely a matter of consensus, often dictated by the self-same “experts.” Examples readily spring to mind in areas where objective measures are hard to come by, such as the arts. But consider also domains where objective measures may be obtainable but not assessable by laypeople. Higher mathematics is a good example. Only a tiny group of people on the planet were capable of assessing whether Andrew Wiles really had proven Fermat’s Last Theorem. The rest of us have to take their word for it.
A crude but useful dichotomy splits views about expertise into two camps: Constructivist and performative. The constructivist view emphasizes the influence of communities of practice in determining what expertise is and who is deemed to have it. The performative view portrays expertise as a matter of learning through deliberative practice. Both views have their points, and many domains of expertise have elements of both. Even domains where objective indicators of expertise are available can have constructivist underpinnings. A proficient modern-day undergraduate physics student would fail late 19th-century undergraduate physics exams; and experienced medical practitioners emigrating from one country to another may find their qualifications and experience unrecognized by their adopted country.
What are the requirements for attaining deep expertise? Two popular criteria are talent and deliberative practice. Regarding deliberative practice, a much-discussed rule of thumb is the “10,000 hour rule.” This rule was popularized in Malcolm Gladwell’s book Outliers and some authors misattribute it to him. It actually dates back to studies of chess masters in the 1970s (see Ericsson, K. A., R. Th. Krampe, and C. Tesch-Römer, 1993), and its generalizability to other domains still is debatable. Nevertheless, the 10K rule has some merit, and unfortunately it has been routinely ignored in many psychological studies comparing “experts” with novices, where the “experts” often are undergraduates who have been given a few hours’ practice on a relatively trivial task.
The 10K rule can be a useful guide but there’s an important caveat. It may be a necessary condition, but it is by no means a sufficient one, for guaranteeing deep expertise. At least three other conditions have to be met: the practice must be deliberative, it must be effective, and the domain must be one in which deep expertise is attainable. Despite this quite simple line of reasoning, plenty of published authors have committed the error of viewing the 10K rule as both necessary and sufficient. Gladwell didn’t make this mistake, but Jane McGonigal’s recent book on video and computer games devotes considerable space to the notion that because gamers are spending upwards of 10K hours playing games they must be attaining deep “expertise” of some kind. Perhaps some may be, provided they are playing games of sufficient depth. But many will not. (BTW, McGonigal’s book is worth a read despite her over-the-top optimism about how games can save the world—and take a look at her game-design collaborator Bogost’s somewhat dissenting review of her book).
Back to the caveats. First, no deliberation makes practice useless. Having spent approximately 8 hours every day sleeping for the past 61 years (178,120 hours) hasn’t made me an expert on sleep. Likewise, deliberative but ineffective practice methods deny us top-level expertise. Early studies of Morse Code experts demonstrated that mere deliberative practice did not guarantee best performance results; specific training regimes were required instead. Autodidacts with insight and aspirations to attain the highest performative levels in their domains eventually realise how important getting the “right” coaching or teaching is.
Finally, there is the problem of determining whether effective, deliberative practice yields deep expertise in any domain. The domain may simply not be “deep” enough. In games of strategy, tic-tac-toe is a clear example of insufficient depth, checkers is a less obvious but still clear example, whereas chess and go clearly have sufficient depth.
Tic-tac-toe aside, are there domains that possess depth where deep expertise nevertheless is unattainable? There are, at least, some domains that are deeply complex where “experts” perform no better than less trained individuals or simple algorithms. Psychotherapy is one such domain. There is a plethora of studies demonstrating that clinical psychologists’ predictions of patient outcomes are worse than simple linear regression models (cf. Dawes’ searing indictment in his 1994 book) and that sometimes experts’ decisions are no more accurate than beginners’ decisions and simple decision aids. Similar results have been reported for financial planners and political experts. In Philip Tetlock’s 2005 book on so-called “expert” predictions, he finds that many so-called experts perform no better than chance in predicting political events, financial trends, and so on.
What can explain the absence of deep expertise in these instances? Tetlock attributes experts’ poor performance to two factors, among others: Hyperspecialization and overconfidence. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.” Tetlock also claims that cognitive style counts: “Foxes” tend to outperform “hedgehogs.” These terms are taken from Isaiah Berlin’s popular essay: Foxes know a little about lots of things, whereas hedgehogs know one big thing.
Another contributing factor may be a lack of meta-cognitive insight on the part of the experts. A hallmark of expertise is ignoring (not ignorance). This proposition may sound less counter-intuitive if it’s rephrased to say that experts know what to ignore. In an earlier post I mentioned Mary Omodei and her colleagues’ chapter in a 2005 book on professionals’ decision making in connection with this claim. Their chapter opens with the observation of a widespread assumption that domain experts also know how to optimally allocate their cognitive resources when making judgments or decisions in their domain. Their research with expert fire-fighting commanders cast doubt on this assumption.
The key manipulations in the Omodei simulated fire-fighting experiments determined the extent to which commanders had unrestricted access to “complete” information about the fires, weather conditions, and other environmental matters. They found that commanders performed more poorly when information access was unrestricted than when they had to request information from subordinates. They also found that commanders performed more poorly when they believed all available information was reliable than when they believed that some of it was unreliable. The disquieting implication of these findings is that domain expertise doesn’t include meta-cognitive expertise.
Cognitive biases and styles aside, another contributing set of factors may be the characteristics of the complex, deep domains themselves that render deep expertise very difficult to attain. Here is a list of tests you can apply to such domains by way of evaluating their potential for the development of genuine expertise:
- Stationarity? Is the domain stable enough for generalizable methods to be derived? In chaotic systems long-range prediction is impossible because of initial-condition sensitivity. In human history, politics and culture, the underlying processes may not be stationary at all.
- Rarity? When it comes to prediction, rare phenomena simply are difficult to predict (see my post on making the wrong decisions most of the time for the right reasons).
- Observability? Can the outcomes of predictions or decisions be directly or immediately observed? For example, in psychology, direct observation of mental states is nearly impossible, and in climatology the consequences of human interventions will take a very long time to unfold.
- Objective or even impartial criteria? For instance, what is “good,” “beautiful,” or even “acceptable” in domains such as music, dance or the visual arts? Are such domains irreducibly subjective and culture-bound?
- Testability? Are there clear criteria for when an expert has succeeded or failed? Or is there too much “wiggle-room” to be able to tell?
Finally, here are a few tests that can be used to evaluate the “experts” in your life:
- Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?
- Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?
- Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high then your expert may suffer from overconfidence.
- Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?
- Hedgehog-Fox test: Tetlock found that Foxes were better-calibrated and more able to entertain self-disconfirming counterfactuals than hedgehogs, but allowed that hedgehogs can occasionally be “stunningly right” in a way that foxes cannot. Is your expert a fox or a hedgehog?
- Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?
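The overconfidence test in the list above can be made concrete with a small calibration check. Here is a minimal sketch, assuming you have recorded, for each prediction, the expert's advance estimate of being correct and the eventual outcome (the records shown are purely hypothetical):

```python
# Minimal calibration check for the overconfidence test described above.
# Each record pairs the expert's advance estimate of being correct
# with whether the prediction actually came true. (Illustrative data only.)

def overconfidence_gap(records):
    """Return (stated accuracy, actual accuracy, gap).

    records: list of (stated_prob_correct, was_correct) pairs,
    where stated_prob_correct is in [0, 1] and was_correct is a bool.
    A clearly positive gap suggests overconfidence.
    """
    stated = sum(p for p, _ in records) / len(records)
    actual = sum(1 for _, ok in records if ok) / len(records)
    return stated, actual, stated - actual

# Hypothetical expert: claims ~90% accuracy, achieves 60%.
records = [(0.9, True), (0.9, False), (0.9, True),
           (0.9, False), (0.9, True)]
stated, actual, gap = overconfidence_gap(records)
print(f"stated {stated:.2f}, actual {actual:.2f}, gap {gap:+.2f}")
# prints: stated 0.90, actual 0.60, gap +0.30
```

A well-calibrated fox, in Tetlock's terms, would show a gap near zero; the in-demand hedgehog tends toward a large positive one.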
The Intergovernmental Panel on Climate Change (IPCC) guidelines for their 2007 report stipulated how its contributors were to convey uncertainties regarding climate change scientific evidence, conclusions, and predictions. Budescu et al.’s (2009) empirical investigation of how laypeople interpret verbal probability expressions (e.g., “very likely”) in the IPCC report revealed several problematic aspects of those interpretations, and a paper I have co-authored with Budescu’s team (Smithson, et al., 2011) raises additional issues.
Recently the IPCC has amended their guidelines, partly in response to the Budescu paper. Granting a broad consensus among climate scientists that climate change is accelerating and that humans have been a causal factor therein, the issue of how best to represent and communicate uncertainties about climate change science nevertheless remains a live concern. I’ll focus on the issues around probability expressions in a subsequent post, but in this one I want to address the issue of communicating “uncertainty” in a broader sense.
Why does it matter? First, the public needs to know that climate change science actually has uncertainties. Otherwise, they could be misled into believing either that scientists have all the answers or that they suffer from unwarranted dogmatism. Likewise, policy makers, decision makers and planners need to know the magnitudes (where possible) and directions of these uncertainties. Thus, the IPCC is to be commended for bringing uncertainties to the fore in its 2007 report, and for attempting to establish standards for communicating them.
Second, the public needs to know what kinds of uncertainties are in the mix. This concern sits at the foundation of the first and second recommendations of the Budescu paper. Their first suggestion is to differentiate between the ambiguous or vague description of an event and the likelihood of its occurrence. The example the authors give is “It is very unlikely that the meridional overturning circulation will undergo a large abrupt transition during the 21st century” (emphasis added). The first italicized phrase expresses probabilistic uncertainty whereas the second embodies a vague description. People may have different interpretations of both phrases. They might disagree on what range of probabilities is referred to by “very likely” or on what is meant by a “large abrupt” change. Somewhat more worryingly, they might agree on how likely the “large abrupt” change is while failing to realize that they have different interpretations of that change in mind.
The crucial point here is that probability and vagueness are distinct kinds of uncertainty (see, e.g., Smithson, 1989). While the IPCC 2007 report is consistently explicit regarding probabilistic expressions, it only intermittently attends to matters of vagueness. For example, in the statement “It is likely that heat waves have become more frequent over most land areas” (IPCC 2007, pg. 30) the term “heat waves” remains undefined and the time-span is unspecified. In contrast, just below that statement is this one: “It is likely that the incidence of extreme high sea level3 has increased at a broad range of sites worldwide since 1975.” Footnote 3 then goes on to clarify “extreme high sea level” by the following: “Excluding tsunamis, which are not due to climate change. Extreme high sea level depends on average sea level and on regional weather systems. It is defined here as the highest 1% of hourly values of observed sea level at a station for a given reference period.”
The Budescu paper’s second recommendation is to specify the sources of uncertainty, such as whether these arise from disagreement among specialists, absence of data, or imprecise data. Distinguishing between uncertainty arising from disagreement and uncertainty arising from an imprecise but consensual assessment is especially important. In my experience, the former often is presented as if it is the latter. An interval for near-term ocean level increases of 0.2 to 0.8 metres might be the consensus among experts, but it could also represent two opposing camps, one estimating 0.2 metres and the other 0.8.
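The sea-level example can be sketched numerically: the very same reported interval can arise from consensual imprecision or from two sharp but opposing camps. A toy illustration (the numbers are hypothetical, not actual expert elicitations):

```python
# Two ways the reported interval [0.2, 0.8] metres can arise.
# (Toy numbers; not actual expert elicitations.)

def pooled_interval(expert_intervals):
    """Envelope of individual experts' (low, high) interval estimates."""
    lows = [lo for lo, _ in expert_intervals]
    highs = [hi for _, hi in expert_intervals]
    return min(lows), max(highs)

# Case 1: consensual imprecision -- every expert reports the same wide range.
consensus = [(0.2, 0.8), (0.2, 0.8), (0.2, 0.8)]

# Case 2: conflict -- two camps, each precise but opposed.
conflict = [(0.2, 0.2), (0.2, 0.2), (0.8, 0.8)]

print(pooled_interval(consensus))  # (0.2, 0.8)
print(pooled_interval(conflict))   # (0.2, 0.8) -- identical envelope
```

The pooled interval alone cannot distinguish the two cases; conveying the source of the uncertainty requires reporting something extra, such as the spread of the individual experts' estimates.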
The IPCC report guidelines for reporting uncertainty do raise the issue of agreement: “Where uncertainty is assessed qualitatively, it is characterised by providing a relative sense of the amount and quality of evidence (that is, information from theory, observations or models indicating whether a belief or proposition is true or valid) and the degree of agreement (that is, the level of concurrence in the literature on a particular finding).” (IPCC 2007, pg. 27) The report then states that levels of agreement will be denoted by “high,” “medium,” and so on, while the amount of evidence will be expressed as “much,” “medium,” and so on.
As it turns out, the phrase “high agreement and much evidence” occurs seven times in the report and “high agreement and medium evidence” occurs twice. No other agreement phrases are used. These occurrences are almost entirely in the sections devoted to climate change mitigation and adaptation, as opposed to assessments of previous and future climate change. Typical examples are:
“There is high agreement and much evidence that with current climate change mitigation policies and related sustainable development practices, global GHG emissions will continue to grow over the next few decades.” (IPCC 2007, pg. 44) and
“There is high agreement and much evidence that all stabilisation levels assessed can be achieved by deployment of a portfolio of technologies that are either currently available or expected to be commercialised in coming decades, assuming appropriate and effective incentives are in place for development, acquisition, deployment and diffusion of technologies and addressing related barriers.” (IPCC 2007, pg. 68)
The IPCC guidelines for other kinds of expert assessments do not explicitly refer to disagreement: “Where uncertainty is assessed more quantitatively using expert judgement of the correctness of underlying data, models or analyses, then the following scale of confidence levels is used to express the assessed chance of a finding being correct: very high confidence at least 9 out of 10; high confidence about 8 out of 10; medium confidence about 5 out of 10; low confidence about 2 out of 10; and very low confidence less than 1 out of 10.” (IPCC 2007, pg. 27) A typical statement of this kind is “By 2080, an increase of 5 to 8% of arid and semi-arid land in Africa is projected under a range of climate scenarios (high confidence).” (IPCC 2007, pg. 50)
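The quoted confidence scale amounts to a simple lookup from verbal label to nominal probability. A sketch that merely transcribes the numbers above (the function name is my own, not the IPCC's):

```python
# The IPCC (2007, pg. 27) confidence scale, transcribed as a lookup table
# mapping each verbal label to the assessed chance of a finding being correct.
CONFIDENCE_SCALE = {
    "very high confidence": 0.9,   # at least 9 out of 10
    "high confidence":      0.8,   # about 8 out of 10
    "medium confidence":    0.5,   # about 5 out of 10
    "low confidence":       0.2,   # about 2 out of 10
    "very low confidence":  0.1,   # less than 1 out of 10
}

def assessed_chance(label):
    """Nominal probability for a verbal confidence label (KeyError if unknown)."""
    return CONFIDENCE_SCALE[label.lower()]

print(assessed_chance("high confidence"))  # 0.8
```

Note how much vagueness the transcription hides: “at least 9 out of 10” and “less than 1 out of 10” are bounds, not point values, so single numbers like these are themselves an imprecise summary.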
That said, some parts of the IPCC report do convey disagreeing projections or estimates, where the disagreements are among models and/or scenarios, especially in the section on near-term predictions of climate change and its impacts. For instance, on pg. 47 of the 2007 report the graph below charts mid-century global warming relative to 1980-99. The six stabilization categories are those described in the Fourth Assessment Report (AR4).
Although this graph effectively represents both imprecision and disagreement (or conflict), it slightly underplays both by truncating the scale at the right-hand side. The next figure shows how the graph would appear if the full range of categories V and VI were included. Both the apparent imprecision of V and VI and the extent of disagreement between VI and categories I-III are substantially greater once we have the full picture.
There are understandable motives for concealing or disguising some kinds of uncertainty, especially those that could be used by opponents to bolster their own positions. Chief among these is uncertainty arising from conflict. In a series of experiments Smithson (1999) demonstrated that people regard precise but disagreeing risk messages as more troubling than informatively equivalent imprecise but agreeing messages. Moreover, they regard the message sources as less credible and less trustworthy in the first case than in the second. In short, conflict is a worse kind of uncertainty than ambiguity or vagueness. Smithson (1999) labeled this phenomenon “conflict aversion.” Cabantous (2007) confirmed and extended those results by demonstrating that insurers would charge a higher premium for insurance against mishaps whose risk information was conflictive than if the risk information was merely ambiguous.
Conflict aversion creates a genuine communications dilemma for disagreeing experts. On the one hand, public revelation of their disagreement can result in a loss of credibility or trust in experts on all sides of the dispute. Laypeople have an intuitive heuristic that if the evidence for any hypothesis is uncertain, then equally able experts should have considered the same evidence and agreed that the truth-status of that hypothesis is uncertain. When Peter Collignon, professor of microbiology at The Australian National University, cast doubt on the net benefit of the Australian Fluvax program in 2010, he attracted opprobrium from colleagues and health authorities on grounds that he was undermining public trust in vaccines and the medical expertise behind them. On the other hand, concealing disagreements runs the risk of future public disclosure and an even greater erosion of trust (lying experts are regarded as worse than disagreeing ones). The problem of how to communicate uncertainties arising from disagreement and vagueness simultaneously and distinguishably has yet to be solved.
Budescu, D.V., Broomell, S. and Por, H.-H. (2009) Improving the communication of uncertainty in the reports of the Intergovernmental Panel on Climate Change. Psychological Science, 20, 299–308.
Cabantous, L. (2007). Ambiguity aversion in the field of insurance: Insurers’ attitudes to imprecise and conflicting probability estimates. Theory and Decision, 62, 219–240.
Intergovernmental Panel on Climate Change (2007). Summary for policymakers: Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Retrieved May 2010 from http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf.
Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer Verlag.
Smithson, M. (1999). Conflict Aversion: Preference for Ambiguity vs. Conflict in Sources and Evidence. Organizational Behavior and Human Decision Processes, 79: 179-198.
Smithson, M., Budescu, D.V., Broomell, S. and Por, H.-H. (2011) Never Say “Not:” Impact of Negative Wording in Probability Phrases on Imprecise Probability Judgments. Accepted for presentation at the Seventh International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 25-28 July 2011.
“Don’t pay any attention to the critics—Don’t even ignore them.” ~ Samuel Goldwyn
When I first started exploring ignorance and related topics, it occurred to me that not-knowing has a passive and an active voice. To be ignorant of something is the passive voice—Ignorance is a state. Ignoring something is an action. I want to explore various aspects of ignoring in this and perhaps some subsequent posts.
To begin, ignoring attracts a moral charge that ignorance usually doesn’t. For instance, innocence can be construed as a special case of ignorance. Innocents don’t ignore corrupting information; they’re just unaware of its existence. Lots of communications to people who are ignoring something or someone are chastisements. Ignoring is akin to commission, whereas being ignorant is more like omission. Ignoring has an element of will or choice about it that being ignorant does not. So people are more likely to ascribe a moral status to an act of ignoring than to a state of ignorance.
For instance, reader response to a recent “Courier Mail” story on April 11 whose main point was “Three men have been rescued after they drove around Road Closed signs and into floodwaters in central Queensland” was uncharitable, to say the least. Comments and letters to the editor expressed desires for the men to be named, shamed, fined and otherwise punished for wasting taxpayers’ money and needlessly imperiling the rescuers.
Criminal negligence cases often make it clear that while the law may regard ignorance as scant excuse, ignoring is even worse. Ignoring imputes culpability straightaway. Halah Touryalai’s blog post on Forbes in March provides an example: “Irving Picard, the Trustee seeking to reclaim billions for Madoff’s victims, claims Merrill Lynch International was creating and selling products tied to Madoff feeder funds even though it was aware of possible fraud within Bernard L. Madoff Investment Securities.”
Despite the clear distinction between ignorance and ignoring, people can and do confuse the two. Andrew Rotherham’s May 12 blog at Time accuses American educators and policy-makers of ignoring the burgeoning crisis regarding educational outcomes for Hispanic schoolchildren. But it isn’t clear whether the educators are aware of this problem (and ignoring it) or not (and therefore ignorant about it). There are so many festering and looming crises to keep track of these days that various sectors of the public regularly get caned for “ignoring” crises when in all likelihood they are just ignorant of them.
In a more straightforward case, the Sydney Herald Sun's March 1 headline, "One-in-four girls ignoring cervical cancer vaccine," simply has got it wrong. The actual message in the article is not that the schoolgirls in question are ignoring the vaccine, but that they're ignorant of it and also of the cancer itself.
Communicators of all stripes take note: The distinction between ignoring and ignorance is important and worth preserving. Let us not tolerate, on our watch, a linguistically criminal slide into the elision of that distinction through misusage or mental laziness.
Because it is an act and therefore can be intentional, ignoring has uses as a social or psychological tactic that ignorance never can have. There is a plethora of self-help remedies out there which, when you scratch the surface, boil down to tactical or even strategic ignoring. I’ll mention just two examples of such injunctions: “Don’t sweat the small stuff” and “live in the present.”
The first admonishes us to discount the "small stuff" to some extent, presumably so we can pay attention to the "big stuff" (whatever that may be). This simple notion has spawned several self-help bestsellers. The second urges us to disregard the past and future and focus attention on the here-and-now instead. This advice has been reinvented many times; in my short lifetime I've seen it crop up all the way from the erstwhile hippie sensibilities embodied in slogans such as "be here now" to the present-day therapeutic cottage industry of "mindfulness."
Even prescriptions for rational decision-making contain injunctions to ignore certain things. Avoiding the "sunk cost fallacy" is one example. Money, time, or other irrecoverable resources that already have been spent in pursuing a goal should not be considered along with future potential costs in deciding whether to persist in pursuing the goal. There's a nice treatment of this on the Less Wrong site. The Mind Your Decisions blog also presents a few typical examples of the sunk cost fallacy in everyday life. The main point here is that a rational decisional framework prescribes ignoring sunk costs.
Once we shift attention from ignoring things to ignoring people, the landscape becomes even more interesting. Ignoring people, it would seem, occupies important places in commonsense psychology. The earliest parental advice I received regarding what to do about bullies was to ignore them. My parents meant well, and it turned out that this worked in a few instances. But some bullies required standing up to.
Ignoring people also gets some airplay as part of a strategy or at least a tactic. For instance, how should parents deal with disrespectful behavior from their children? Well, one parenting site says not to ignore such behavior. Another admonishes you to ignore it. Commonsense psychology can be self-contradicting. It's good old commonsense psychology that tells us "opposites attract" and yet "birds of a feather flock together," "look before you leap" but "(s)he who hesitates is lost," "many hands make light work" but "too many cooks spoil the broth," and so on.
Given that ignoring has a moral valence, what kinds of justifications are there for ignoring people? There are earnest discussion threads on such moral quandaries as ignoring people who are nice to you. In this thread, by the way, many of the contributors conclude that it’s OK to do so, especially if the nice person has traits that they can’t abide.
Some social norms or relationships entail ignoring behaviors or avoiding communication with certain people. One of the clearest examples of this is the kin-avoidance rules in some Australian Indigenous cultures. An instance is the ban on speaking with or even being in close proximity to one’s mother-in-law. The Central Land Council site describes the rule thus: “This relationship requires a social distance, such that they may not be able to be in the same room or car.”
Some religious communities such as the Amish have institutionalized shunning as a means of social control. As Wenger (1953) describes it, “The customary practice includes refusal to eat at the same table, even within the family, the refusal of ordinary social intercourse, the refusal to work together in farming operations, etc.” So, shunning entails ignoring. Wenger’s article also details some of the early religious debates over when and to what extent shunning should be imposed.
Ostracism has a powerful impact because it feels like rejection. Social psychologist Kipling Williams has studied the effects of ostracism for a number of years now, and without any apparent trace of irony remarks that it was “ignored by social scientists for 100 years.” Among his ingenious experiments is one demonstrating that people feel the pain of rejection when they’re ignored by a cartoon character on a computer screen. Williams goes as far as to characterize ostracism as an “invisible form of bullying.”
So, for an interesting contrast between the various moral and practical justifications you can find for ignoring others, try a search on the phrase “ignoring me.” There, you’ll find a world of agony. This is another example to add to my earlier post about lies and secrecy, where we seem to forget about the Golden Rule. We lie to others but hate being lied to. We also are willing to ignore others but hate being ignored in turn. Well, perhaps unless you’re being ignored by your mother-in-law.
A recent policy paper by Frank Bannister and Regina Connolly asks whether transparency is an unalloyed good in e-government. As the authors point out, the advent of Wikileaks has brought the issue of “e-transparency” into the domain of public debate. Broadly, e-transparency in government refers to access to the data, processes, decisions and actions of governments mediated by information communications technology (ICT).
Debates about the extent to which governments should (or can) be transparent have a lengthy history. The prospect of e-transparency adds considerations of affordability and potential breadth of citizen response and participation. Bannister and Connolly begin their discussion by setting aside the most common objections to transparency: Clear requirements for national security and commercial confidentiality in the service of protecting citizenry or other national interests. What other reasonable objections to transparency, let alone e-transparency, might there be?
Traditional arguments for transparency in government are predicated on three value assertions.
- The public has a right to know. Elected office-holders and appointed public or civil servants alike are accountable to their constituencies. Accountability is impossible without transparency; therefore good government requires transparency.
- Good government requires building trust between the governors and the governed, which can only arise if the governors are accountable to the governed.
- Effective citizen participation in a democracy is possible only if the citizenry is sufficiently educated and informed to make good decisions. Education and information both entail transparency.
Indeed, you can find affirmations of these assertions in the Obama administration’s White House Press Office statement on this issue.
Note that the first of these arguments is a claim to a right, whereas the second and third are claims about consequences. The distinction is important. A right is, by definition, non-negotiable and, in principle, inalienable. Arguments for good consequences, on the other hand, are utilitarian instead of deontological. Utilitarian arguments can be countered by “greater good” arguments and therefore are negotiable.
Japanese official pronouncements about the state of the recent Fukushima plant disaster therefore were expected to be more or less instantaneous and accurate. Even commentary from sources such as the Bulletin of Atomic Scientists averred that official reports should have been forthcoming sooner about the magnitude and scope of the disaster: "Denied such transparency, media outlets and the public may come to distrust official statements." The gist of this commentary was that transparency would pay off better than secrecy, and the primary payoff would be increased trust in the Japanese government.
However, there are counter-arguments to the belief that transparency is a necessary or even contributing factor in building trust in government. A recent study by Stephan Grimmelikhuijsen (2010) suggests that when the minutes of local council deliberations were made available online, citizens' evaluations of council competence declined in comparison to citizens who did not have access to that information. If transparency reveals incompetence then it may not increase trust after all. This finding is congruent with observations that a total accountability culture often also is a blame culture.
There’s another more subtle issue, namely that insistence on accountability and the surveillance levels required thereby are incompatible with trust relations. People who trust one another do not place each other under 24-7 surveillance, nor do they hold each other accountable for every action or decision. Trust may be built up via surveillance and accountability, but once it has been established the social norms around trust relations sit somewhat awkwardly alongside norms regarding transparency. The latter are more compatible with contractual relations than trust relations.
Traditional arguments against transparency (or at least, in favour of limiting transparency) also come in deontological and utilitarian flavors. The right of public servants and even politicians to personal privacy stands against the right of the public to know: One deontological principle versus another. ICT developments have provided new tools to monitor in increasing detail what workers do and how they do it, but as yet there seem to be few well thought-out guidelines for how far the public (or anyone else) should be permitted to go in monitoring government employees or office-holders.
Then there are the costs and risks of disclosure, which these days include exposure to litigation and the potential for data to be hacked. E-transparency is said to cost considerably less than traditional transparency and can deliver much greater volumes of data. Nonetheless, Bannister and Connolly caution that some cost increases can occur, firstly in the formalization, recording and editing of what previously were informal and unrecorded processes or events and secondly in the maintenance and updating of data-bases. The advent of radio and television shortened the expected time for news to reach the public and expanded the expected proportion of the public who would receive the news. ICT developments have boosted both of these expectations enormously.
Even if the lower cost argument is true, lower costs and increased capabilities also bring new problems and risks. Chief among these, according to Bannister and Connolly, are misinterpretation and misuse of data, and inadvertent enablement of misuses. On the one hand, ICT has provided the public with tools to process and analyse information that were unavailable to the radio and TV generations. On the other hand, data seldom speak for themselves, and what they have to say depends crucially on how they are selected and analyzed. Bannister and Connolly mentioned school league tables as a case in point. For a tongue-in-cheek example of the kind of simplistic analyses Bannister and Connolly fear, look no further than Crikey’s treatment of data on the newly-fledged Australian My School website.
Here’s another anti-transparency argument, not considered by Bannister and Connolly, grounded in a solid democratic tradition: The secret ballot. Secret ballots stifle vote-buying because the buyer cannot be sure of whom their target voted for. This argument has been extended (see, for instance, the relevant Freakonomics post) to defend secrecy regarding campaign contributions. Anonymous donations deter influence-peddling, so the argument runs, because candidates can’t be sure the supposed contributors actually contributed. It would not be difficult to generalize it further to include voting by office-holders on crucial bills, or certain kinds of decisions. There are obvious objections to this argument, but it also has some appeal. After all, there is plenty of vote-buying and influence-peddling purveyed by lobby groups outfitted and provisioned for just such undertakings.
Finally, there is a transparency bugbear known to any wise manager who has tried to implement systems to make their underlings accountable—Gaming the system. Critics of school league tables claim they motivate teachers to tailor curricula to the tests or even indulge in outright cheating (there are numerous instances of the latter, here and here for a couple of recent examples). Nor is this limited to underling-boss relations. You can find it in any competitive market. Last year Eliot Van Buskirk posted an intriguing piece on how marketers are gaming social media in terms of artificially inflated metrics such as number of friends or YouTube views.
In my 1989 book, I pointed out that information has come to resemble hard currency, and the “information society” is also an increasingly regulated, litigious society. This combination motivates those under surveillance, evaluation, or accountability regimes to distort or simply omit potentially discrediting information. Bannister and Connolly point to the emergence of a “non-recording” culture in public service: “Where public servants are concerned about the impact of data release, one solution is to not create or record the data in the first place.” To paraphrase the conclusion I came to in 1989, the new dilemma is that the control strategies designed to enhance transparency may actually increase intentional opacity.
I should close by mentioning that I favor transparency. My purpose in this post has been to point out some aspects of the arguments for and against it that need further thought, especially in this time of e-everything.
Any blog whose theme is ignorance and uncertainty should get around to discussing delusions sooner or later. I am to give a lecture on the topic to third-year neuropsych students this week, so a post about it naturally follows. Delusions are said to be a concomitant and indeed a product of other cognitive or psychological pathologies, and traditionally research on delusions was conducted in clinical psychology and psychiatry. Recently, though, some others have got in on the act: Neuroscientists and philosophers.
The connection with neuroscience probably is obvious. Some kinds of delusion, as we’ll see, beg for a neurological explanation. But why have philosophers taken an interest? To get ourselves in the appropriately philosophical mood let’s begin by asking, what is a delusion?
Here’s the Diagnostic and Statistical Manual definition (2000):
“A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary.”
But how does that differ from:
- A mere error in reasoning?
- Confirmation bias?
- Self-enhancement bias?
There’s a plethora of empirical research verifying that most of us, most of the time, are poor logicians and even worse statisticians. Likewise, there’s a substantial body of research documenting our tendency to pay more attention to and seek out information that confirms what we already believe, and to ignore or avoid contrary information. And then there’s the Lake Wobegon effect—The one where a large majority of us think we’re a better driver than average, less racially biased than average, more intelligent than average, and so on. But somehow none of these cognitive peccadilloes seem to be “delusions” on the same level as believing that you’re Napoleon or that Barack Obama is secretly in love with you.
Delusions are more than violations of reasoning (in fact, they may involve no pathology in reasoning at all). Nor are they merely cases of biased perception or wishful thinking. There seems to be more to a psychotic delusion than any of these characteristics; otherwise all of us are deluded most of the time and the concept loses its clinical cutting-edge.
One approach to defining them is to say that they entail a failure to comply with “procedural norms” for belief formation, particularly those involving the weighing and assessment of evidence. Procedural norms aren’t the same as epistemic norms (for instance, most of us are not Humean skeptics, nor do we update our beliefs using Bayes’ Theorem or think in terms of subjective expected utility calculations—But that doesn’t mean we’re deluded). So the appeal to procedural norms excuses “normal” reasoning errors, confirmation and self-enhancement biases. Instead, these are more like widely held social norms. The DSM definition has a decidedly social constructionist aspect to it. A belief is delusional if everyone else disbelieves it and everyone else believes the evidence against it is incontrovertible.
So, definitional difficulties remain (especially regarding religious beliefs or superstitions). In fact, there’s a website here making an attempt to “crowd-source” definitions of delusions. The nub of the problem is that it is hard to define a concept such as delusion without sliding from descriptions of what “normal” people believe and how they form beliefs into prescriptions for what people should believe or how they should form beliefs. Once we start down the prescriptive track, we encounter the awkward fact that we don’t have an uncontestable account of what people ought to believe or how they should arrive at their beliefs.
One element common to many definitions of delusion is the lack of insight on the part of the deluded. They’re meta-ignorant: They don’t know that they’re mistaken. But this notion poses some difficult problems for the potential victim of a delusion. In what senses can a person rationally believe they are (or have been) deluded? Straightaway we can knock out the following: “My current belief in X is false.” If I know believing X is wrong, then clearly I don’t believe X. Similarly, I can’t validly claim that all my current beliefs are false, or that the way I form beliefs always produces false beliefs.
Here are some defensible examples of self-insight regarding one’s own delusions:
- I believe I have held false beliefs in the past.
- I believe I may hold false beliefs in the future.
- I believe that some of my current beliefs may be false (but I don’t know which ones).
- I believe that the way I form any belief is unreliable (but I don’t know when it fails).
As you can see, self-insight regarding delusions is like self-insight into your own meta-ignorance (the stuff you don’t know you don’t know). You can spot it in your past and hypothesize it for your future, but you won’t be able to self-diagnose it in the here-and-now.
On the other hand, meta-ignorance and delusional thinking are easy to spot in others. For observers, it may seem obvious that someone is deluded generally in the sense that the way they form beliefs is unreliable. Usually generalized delusional thinking is a component in some type of psychosis or severe brain trauma.
But monothematic delusions are really difficult to explain. These are what they sound like, namely specific delusional beliefs that have a single theme. The explanatory problem arises because the monothematically deluded person may otherwise seem cognitively competent. They can function in the everyday world, they can reason, their memories are accurate, and they form beliefs we can agree with except on one topic.
Could some monothematic delusions have a different basis from others?
Some theorists have distinguished Telic (goal-directed) from Thetic (truth-directed) delusions. Telic delusions (functional in the sense that they satisfy a goal) might be explained by a motivational basis. A combination of motivation and affective consequences (e.g., believing Q is distressing, therefore better to believe not-Q) could be a basis for delusional belief. An example is the de Clerambault syndrome, the belief that someone of high social status is secretly in love with oneself.
Thetic delusions are somewhat more puzzling, but also quite interesting. Maher (1974, etc.) said long ago that delusions arise from normal responses to anomalous experiences. Take Capgras syndrome, the belief that one’s nearest & dearest have been replaced by lookalike impostors. A recent theory about Capgras begins with the idea that if face recognition depends on a specific cognitive module, then it is possible for that to be damaged without affecting other cognitive abilities. A two-route model of face recognition holds that there are two sub-modules:
- A ventral visuo-semantic pathway for visual encoding and overt recognition, and
- A dorsal visuo-affective pathway for covert autonomic recognition and affective response to familiar faces.
For prosopagnosia sufferers the ventral system has been damaged, whereas for Capgras sufferers the dorsal system has been damaged. So here seems to be the basis for the “anomalous” experience that gives rise to Capgras syndrome. But not everyone whose dorsal system is damaged ends up with Capgras syndrome. What else could be going on?
Maher’s claim amounts to a one-factor theory about thetic delusions. The unusual experience (e.g., no longer feeling emotions when you see your nearest and dearest) becomes explained by the delusion (e.g., they’ve been replaced by impostors). A two-factor theory claims that reasoning also has to be defective (e.g., a tendency to leap to conclusions) or some motivational bias has to operate. Capgras or Cotard syndrome (the latter is a belief that one is dead) sounds like a reasoning pathology is involved, whereas de Clerambault syndrome or reverse Othello syndrome (deluded belief in the fidelity of one’s spouse) sounds like it’s propelled by a motivational bias.
What is the nature of the “second factor” in the Capgras delusion?
- Capgras patients are aware that their belief seems bizarre to others, but they are not persuaded by counter-arguments or evidence to the contrary.
- Davies et al. (2001) propose that, specifically, Capgras patients have lost the ability to refrain from believing that things are the way they appear to be. However, Capgras patients are not susceptible to visual illusions.
- McLaughlin (2009) posits that Capgras patients are susceptible to affective illusions, in the sense that a feeling of unfamiliarity leads straight to a belief in that unfamiliarity. But even if true, this account still doesn’t explain the persistence of that belief in the face of massive counter-evidence.
What about the patients who have a disconnection between their face recognition modules and their autonomic nervous systems but do not have Capgras? Turns out that the site of their damage differs from that of Capgras sufferers. But little is known about the differences between them in terms of phenomenology (e.g., whether loved ones also feel unfamiliar to the non-Capgras patients).
Where does all this leave us? To begin with, we are reminded that a label (“delusion”) doesn’t bring with it a unitary phenomenon. There may be distinct types of delusions with quite distinct etiologies. The human sciences are especially vulnerable to this pitfall, because humans have fairly effective commonsensical theories about human beings—folk psychology and folk sociology—from which the human sciences borrow heavily. We’re far less likely to be (mis)guided by common sense when theorizing about things like mitochondria or mesons.
Second, there is a clear need for continued cross-disciplinary collaboration in studying delusions, particularly between cognitive and personality psychologists, neuroscientists, and philosophers of mind. “Delusion” and “self-deception” pose definitional and conceptual difficulties that rival anything in the psychological lexicon. The identification of specific neural structures implicated in particular delusions is crucial to understanding and treating them. The interaction between particular kinds of neurological trauma and other psychological traits or dispositions appears to be a key but is at present only poorly understood.
Last, but not least, this gives research on belief formation and reasoning a cutting edge, placing it at the clinical neuroscientific frontier. There may be something to the old commonsense notion that logic and madness are closely related. By the way, for an accessible and entertaining treatment of this theme in the history of mathematics, take a look at LogiComix.
Books such as Nassim Taleb’s Fooled by Randomness and the psychological literature on our mental foibles such as the gambler’s fallacy warn us to beware randomness. Well and good, but randomness actually is one of the most domesticated kinds of uncertainty. In fact, it is one form of uncertainty we can and do exploit.
One obvious way randomness can be exploited is in designing scientific experiments. To experimentally compare, say, two different fertilizers for use in growing broad beans, an ideal would be to somehow ensure that the bean seedlings exposed to one fertilizer were identical in all ways to the bean seedlings exposed to the other fertilizer. That isn’t possible in any practical sense. Instead, we can randomly assign each seedling to receive one or the other fertilizer. We won’t end up with two identical groups of seedlings, but the differences between those groups will have occurred by chance. If their subsequent growth-rates differ by more than we would reasonably expect by chance alone, then we can infer that one fertilizer is likely to have been more effective than the other.
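The random-assignment step is easy to sketch in code; the seedling labels and group sizes below are purely illustrative:

```python
import random

# Twenty hypothetical seedlings, split into two fertilizer groups by chance alone
seedlings = [f"seedling_{i}" for i in range(20)]
random.shuffle(seedlings)          # chance, not the experimenter, decides the groups
fertilizer_a = seedlings[:10]      # first half receives fertilizer A
fertilizer_b = seedlings[10:]      # second half receives fertilizer B
```

Any difference between the two groups at the outset is now attributable to chance, which is what licenses the inference that a growth-rate difference larger than chance would produce points to the fertilizers.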
Another commonplace exploitation of randomness is random sampling, which is used in all sorts of applications from quality-control engineering to marketing surveys. By randomly sampling a specific percentage of manufactured components coming off the production line, a quality-control analyst can decide whether a batch should be scrapped or not. By randomly sampling from a population of consumers, a marketing researcher can estimate the percentage of that population who prefer a particular brand of a consumer item, and also calculate how likely that estimate is to be within 1% of the true percentage at the time.
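The “how likely is the estimate to be within 1%” calculation can be sketched with the usual normal approximation; the function name, sample size, and true proportion below are my own illustrative assumptions:

```python
import math

def prob_within(n, p=0.5, margin=0.01):
    """Normal-approximation probability that a sample proportion from n
    random draws falls within `margin` of the true proportion p."""
    se = math.sqrt(p * (1 - p) / n)     # standard error of the sample proportion
    z = margin / se                     # margin expressed in standard errors
    return math.erf(z / math.sqrt(2))   # P(|Z| < z) for a standard normal Z

# A random sample of 10,000 consumers with a true share near 50%:
print(round(prob_within(10_000), 3))    # about 0.954
```

So a sample of ten thousand pins the estimate within one percentage point about 95% of the time, which is why pollsters quote sample sizes in the thousands rather than the millions.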
There is a less well-known use for randomness, one that in some respects is quite counter-intuitive. We can exploit randomness to improve our chances of making the right decision. The story begins with Tom Cover’s 1987 chapter which presents what Dov Samet and his co-authors recognized in their 2002 paper as a solution to a switching decision that has been at the root of various puzzles and paradoxes.
Probably the most famous of these is the “two envelope” problem. You’re a contestant in a game show, and the host offers you a choice between two envelopes, each containing a cheque of a specific value. The host explains that one of the cheques is for a greater amount than the other, and offers you the opportunity to toss a fair coin to select one envelope to open. After that, she says, you may choose either to retain the envelope you’ve selected or exchange it for the other. You toss the coin, open the selected envelope, and see the value of the cheque therein. Of course, you don’t know the value of the other cheque, so regardless of which way you choose, you have a probability of ½ of ending up with the larger cheque. There’s an appealing but fallacious argument that says you should switch, but we’re not going to go into that here.
Cover presents a remarkable decisional algorithm whereby you can make that probability exceed ½.
- Having chosen your envelope via the coin-toss, use a random number generator to provide you with a number anywhere between zero and some value you know to be greater than the largest cheque’s value.
- If this number is larger than the value of the cheque you’ve seen, exchange envelopes.
- If not, keep the envelope you’ve been given.
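The three steps above amount to a one-line rule; here is a minimal sketch (the bound of 100 follows the example in the text, and the function name is mine):

```python
import random

def cover_rule(seen_value, upper_bound=100.0):
    """Cover's randomized switching rule: draw Z uniformly on [0, upper_bound]
    and switch envelopes only if Z exceeds the value already seen."""
    z = random.uniform(0.0, upper_bound)
    return "switch" if z > seen_value else "keep"
```

The rule is more likely to switch away from a small seen value and to keep a large one, which is exactly the bias in favour of the larger cheque that the proof below quantifies.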
Here’s a “reasonable person’s proof” that this works (for more rigorous and general proofs, see Robert Snapp’s 2005 treatment or Samet et al., 2002). I’ll take the role of the game-show contestant and you can be the host. Suppose $X1 and $X2 are the amounts in the two envelopes. You have provided the envelopes and so you know that X1, say, is larger than X2. You’ve also told me that these amounts are less than $100 (the specific range doesn’t matter). You toss a fair coin, and if it lands Heads you give me the envelope containing X1 whereas if it lands Tails you give me the one containing X2. I open my envelope and see the amount there. Let’s call my amount Y. All I know at this point is that the probability that Y = X1 is ½ and so is the probability that Y = X2.
I now use a random number generator to produce a number between 0 and 100. Let’s call this number Z. Cover’s algorithm says I should switch envelopes if Z is larger than Y and I should retain my envelope if Z is less than or equal to Y. The claim is that my chance of ending up with the envelope containing X1 is greater than ½.
As the picture below illustrates, the probability that my randomly generated Z has landed at X2 or below is X2/100, and the probability that Z has landed at X1 or below is X1/100. Likewise, the probability that Z has exceeded X2 is 1 – X2/100, and the probability that Z has exceeded X1 is 1 – X1/100.
The proof now needs four steps to complete it:
- If Y = X1 then I’ll make the right decision if I decide to keep my envelope, i.e., if Y is less than or equal to X1, and my probability of doing so is X1/100.
- If Y = X2 then I’ll make the right decision if I decide to exchange my envelope, i.e., if Y is greater than X2, and my probability of doing so is 1 – X2/100.
- The probability that Y = X1 is ½ and the probability that Y = X2 also is ½. So my total probability of ending up with the envelope containing X1 is
½ of X1/100, which is X1/200, plus ½ of 1 – X2/100, which is ½ – X2/200.
That works out to ½ + X1/200 – X2/200.
- But X1 is larger than X2, so X1/200 – X2/200 must be larger than 0.
Therefore, ½ + X1/200 – X2/200 is larger than ½.
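A quick Monte Carlo check of this argument is easy to run; the cheque amounts below are arbitrary choices, and the proof predicts a success rate of ½ + 70/200 − 30/200 = 0.70 for them:

```python
import random

def win_rate(x1, x2, bound=100.0, trials=200_000):
    """Fraction of trials in which Cover's rule ends with the larger amount x1."""
    wins = 0
    for _ in range(trials):
        # A fair coin decides which envelope we open first
        seen, other = (x1, x2) if random.random() < 0.5 else (x2, x1)
        z = random.uniform(0.0, bound)        # the random threshold
        final = other if z > seen else seen   # switch iff Z exceeds the seen value
        wins += final == x1
    return wins / trials

# win_rate(70, 30) hovers around 0.70, comfortably above 1/2.
```

Passing a smaller `bound` (when you know the amounts cannot exceed it) pushes the rate higher still, which anticipates the range-shortening point made later in the post.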
Fine, you might say, but could this party trick ever help us in a real-world decision? Yes, it could. Suppose you’re the director of a medical clinic with a tight budget in a desperate race against time to mount a campaign against a disease outbreak in your region. You have two treatments available to you but the research literature doesn’t tell you which one is better than the other. You have time and resources to test only one of those treatments before deciding which one to adopt for your campaign.
Toss a fair coin, letting it decide which treatment you test. The resulting cure-rate from the chosen treatment will be some number, Y, between 0% and 100%. The structure of your decisional situation now is identical to the two-envelope setup described above. Use a random number generator to generate a number, Z, between 0 and 100. If Z is less than or equal to Y, use your chosen treatment for your campaign. If Z is greater than Y, use the other treatment instead. Your chance of having chosen the treatment that would have yielded the higher cure-rate under your test conditions will be larger than ½, and you’ll be able to defend your decision if you’re held accountable to any constituency or stakeholders.
In fact, there are ways you may be able to do even better than this in a real-world situation. One is by shortening the range, if you know that the cure-rate is not going to exceed some limit, say L, below 100%. If you draw Z from between 0 and L instead of between 0 and 100, your probability of a correct choice becomes ½ + X1/(2L) – X2/(2L), and X1/(2L) – X2/(2L) is greater than X1/200 – X2/200. The highest this probability can be, reached when L = X1, is 1 – X2/(2X1). Another way, as Snapp (2005) points out, is by knowing the probability distribution generating X1 and X2. Knowing that distribution boosts your probability of being correct to ¾.
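To get a feel for how much the tighter range helps, here is a small illustration (the values of X1, X2 and L are arbitrary; L must be at least X1 for the trick to remain valid):

```python
def cover_probability(x1, x2, upper=100):
    """P(correct) = 1/2 + (X1 - X2)/(2 * upper), with Z drawn from [0, upper]."""
    return 0.5 + (x1 - x2) / (2 * upper)

x1, x2 = 70, 40
print(cover_probability(x1, x2))              # default range [0, 100]: 0.65
print(cover_probability(x1, x2, upper=80))    # tighter range [0, 80]: 0.6875
# The best case, upper = X1, gives 1 - X2/(2*X1):
print(cover_probability(x1, x2, upper=x1), 1 - x2 / (2 * x1))
```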
However, before we rush off to use Cover’s algorithm for all kinds of decisions, let’s consider its limitations. Returning to the disease outbreak scenario, suppose you have good reasons to suspect that one treatment (Ta, say) is better than the other (Tb). You could just go with Ta and defend your decision by pointing out that, according to your evidence, the probability that Ta actually is better than Tb is greater than ½. Let’s denote this probability by P.
A reasonable question is whether you could do better than P by using Cover’s algorithm. Here’s my claim:
- If you test one of the treatments and use the Cover algorithm to decide whether to use it for your campaign or switch to the other, your probability of having chosen the treatment that would have given you the better test-result cure rate will be the Cover algorithm’s probability of a correct choice. This may or may not be greater than P (remember, P is greater than ½).
This time, let X1 denote the higher and X2 the lower cure rate you would have got, depending on whether the treatment you tested was the better or the worse one. Suppose the treatment you test is Ta, so Y is Ta’s observed cure rate.
- If the cure rate for Ta is X1 then you’ll make the right decision if you decide to use Ta, i.e., if Z is less than or equal to Y, and your probability of doing so is X1/100.
- If the cure rate for Ta is X2 then you’ll make the right decision if you decide to use Tb, i.e., if Z is greater than Y, and your probability of doing so is 1 – X2/100.
- We began by supposing the probability that the cure rate for Ta is X1 is P, which is greater than ½. The probability that the cure rate for Ta is X2 is 1 – P, which is less than ½. So your total probability of ending up with the treatment whose cure rate is X1 is
P*X1/100 + (1 – P)*(1 – X2/100).
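If you would rather check this expression by simulation than by algebra, here is a sketch (assuming, as in the derivation, that the treatment you test is Ta; the particular values of P, X1 and X2 are illustrative):

```python
import random

def simulate(p, x1, x2, trials=200_000, seed=2):
    """Estimate P(end up with the better treatment) when Ta is the tested
    treatment and Ta has cure rate X1 with probability p, X2 otherwise."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ta_is_better = rng.random() < p
        y = x1 if ta_is_better else x2        # Ta's observed cure rate
        z = rng.uniform(0, 100)
        use_ta = z <= y                       # Cover's rule: keep iff Z <= Y
        wins += use_ta == ta_is_better        # correct iff we use the better one
    return wins / trials

# Formula: P*X1/100 + (1 - P)*(1 - X2/100) = 0.6*0.7 + 0.4*0.6 = 0.66
print(simulate(0.6, 70, 40))                  # estimate close to 0.66
```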
The question we want to address is when this probability is greater than P, i.e.,
P*X1/100 + (1 – P)*(1 – X2/100) > P.
It turns out that a rearrangement of this inequality gives us a clue.
- First, we subtract P*X1/100 from both sides to get
(1 – P)*( 1 – X2/100) > P – P*X1/100.
- Now, we divide both sides of this inequality by 1 – P to get
(1 – X2/100) > P*(1 – X1/100)/(1 – P),
and then divide both sides by (1 – X1/100) to get
(1 – X2/100)/(1 – X1/100) > P/(1 – P).
We can now see that the values of X2 and X1 have to make the Cover algorithm’s odds, (1 – X2/100)/(1 – X1/100), larger than the prior odds P/(1 – P). If P = .6, say, then P/(1 – P) = .6/.4 = 1.5. Thus, for example, if X2 = 40% and X1 = 70% then (1 – X2/100)/(1 – X1/100) = .6/.3 = 2.0 and the Cover algorithm will improve your chances of making the right choice. However, if X2 = 40% and X1 = 60% then the ratio is .6/.4 = 1.5, so the algorithm offers no improvement on P, and if we increase X2 above 40% the algorithm will return a lower probability than P. So, if you already have strong evidence that one alternative is better than the other, don’t bother using the Cover algorithm.
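Those worked examples take only a few lines to reproduce (all values come from the text above):

```python
def cover_beats_prior(p, x1, x2):
    """True iff the Cover odds (1 - X2/100)/(1 - X1/100) exceed the
    prior odds P/(1 - P), i.e. iff the algorithm improves on P."""
    return (1 - x2 / 100) / (1 - x1 / 100) > p / (1 - p)

print(cover_beats_prior(0.6, 70, 40))   # odds 2.0 > 1.5, so True
print(cover_beats_prior(0.6, 60, 40))   # odds 1.5, not greater, so False
```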
Nevertheless, by exploiting randomness we’ve ended up with a decisional guide that can apply to real-world situations. If you are undecided about which of two alternatives is superior but have the resources to test only one of them, test one and use Cover’s algorithm to decide which to adopt. You’ll end up with a higher probability of making the right decision than you would get by tossing a coin.