ignorance and uncertainty

All about unknowns and uncertainties


Writing about “Agnotology, Ignorance and Uncertainty”


From time to time I receive invitations to contribute to various “encyclopedias.” Recent examples include an entry on “confidence intervals” in the International Encyclopedia of Statistical Science (Springer, 2010) and an entry on “uncertainty” in the Encyclopedia of Human Behavior (Elsevier, 1994, 2012). The latter link goes to the first (1994) edition; the second edition is due out in 2012. I’ve duly updated and revised my 1994 entry for the 2012 edition.

Having been raised by a librarian (my mother worked in the Seattle Public Library for 23 years), I’m a believer in the value of good reference works. So, generally I’m willing to accept invitations to contribute to them. These days there is a niche market even for non-digital works of this kind, and of course the net has led to numerous hybrid versions.

Despite the fact that such invitations are regarded as markers of professional esteem, they don’t count for much in the university system where I work because they aren’t original research publications. Same goes for textbooks. Thus, for my younger academic colleagues, writing encyclopedia entries or, worse still, writing textbooks actually can harm their careers. They understandably avoid doing so, which leaves it to older academics like me.

Some of these encyclopedias have interesting moments on the world stage. The International Encyclopedia of Statistical Science has been said to have set a record for the number of countries involved (105, via the 619 contributing authors). Its editors were nominated for the 2011 Nobel Peace Prize, apparently the first time any statisticians had received this honor. Meanwhile, V.S. Ramachandran, editor of the Encyclopedia of Human Behavior, was selected by Time Magazine as one of the world’s most influential people of 2011.

However, I digress. The Sage Encyclopedia of Philosophy and the Social Sciences is an intriguing proposal for a reference work that bridges these two intellectual cultures. I regard this aim as laudable, and I’m fortunate insofar as the areas where I work have a tradition of dialogs linking philosophers and social scientists. So, I was delighted to be asked to provide an entry on “agnotology, ignorance and uncertainty”. There is, however, a bit of a catch.

The guidelines for contributors state that “Entries should be written at a level appropriate for students who do not have an extensive background either in philosophy or the social sciences and for academics from other disciplines… it is essential that a reader versed in philosophy only or mostly, or alternatively, in social sciences, should gain by reading entries that aim at expanding their knowledge of concepts and theories as these have developed in the complementary area.” All of this is supposed to be achieved for a treatment of “agnotology, ignorance and uncertainty” in just 1,000 words, with a short list of “further readings” at the end. All of my posts in this blog thus far exceed 1,000 words (gulp). Can I be sufficiently concise without butchering or omitting crucial content?

Here’s my first draft (word count: 1,018). See what you think.

AGNOTOLOGY, IGNORANCE AND UNCERTAINTY

“Agnotology” is the study of ignorance (from the Greek “agnosis”). “Ignorance,” “uncertainty,” and related terms refer variously to the absence of knowledge, doubt, and false belief. This topic has a long history in Western philosophy, rooted in the Socratic tradition. It has a considerably shorter and, until recently, sporadic treatment in the human sciences. This entry focuses on relatively recent developments within and exchanges between both domains.

A key starting-point is that anyone attributing ignorance cannot avoid making claims to know something about who is ignorant of what: A is ignorant from B’s viewpoint if A fails to agree with or show awareness of ideas which B defines as actually or potentially valid. A and B can be identical, so that A self-attributes ignorance. Numerous scholars have accordingly noted the distinction between conscious ignorance (known unknowns, learned ignorance) and meta-ignorance (unknown unknowns, ignorance squared).

The topic has been beset with terminological difficulties, due to the scarcity and negative cast of terms referring to unknowns. Several scholars have constructed typologies of unknowns, in attempts to make explicit their most important properties. Smithson’s book, Ignorance and Uncertainty: Emerging Paradigms, pointed out the distinction between being ignorant of something and ignoring something, the latter being akin to treating something as irrelevant or taboo. Knorr-Cetina coined the term “negative knowledge” to describe knowledge about the limits of the knowable. Various authors have tried to distinguish reducible from irreducible unknowns.

Two fundamental concerns have been at the forefront of philosophical and social scientific approaches to unknowns. The first of these is judgment, learning and decision making in the absence of complete information. Prescriptive frameworks advise how this ought to be done, and descriptive frameworks describe how humans (or other species) do so. A dominant prescriptive framework since the second half of the 20th century is subjective expected utility theory (SEU), whose central tenet is that decisional outcomes are to be evaluated by their expected utility, i.e., the product of their probability and their utility (e.g., monetary value, although utility may be based on subjective appraisals). According to SEU, a rational decision maker chooses the option that maximizes her/his expected utility. Several descriptive theories in psychology and behavioral economics (e.g., Prospect Theory and Rank-Dependent Expected Utility Theory) have amended SEU to render it more descriptively accurate while retaining some of its “rational” properties.
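
The SEU calculation just described can be sketched in a few lines of code. The options, probabilities, and utilities below are invented purely for illustration:

```python
# Illustrative sketch of subjective expected utility (SEU) maximization.
# All options and numbers here are hypothetical.

def expected_utility(option):
    """Sum of probability * utility over an option's possible outcomes."""
    return sum(p * u for p, u in option["outcomes"])

options = [
    {"name": "safe bond",   "outcomes": [(1.0, 50)]},             # certain $50
    {"name": "risky stock", "outcomes": [(0.5, 120), (0.5, 0)]},  # 50/50 gamble
]

# The SEU-rational chooser picks the option with maximal expected utility.
best = max(options, key=expected_utility)
print(best["name"], expected_utility(best))  # → risky stock 60.0
```

Descriptive theories such as Prospect Theory amend this picture by, for instance, letting outcomes be valued relative to a reference point and letting probabilities be weighted nonlinearly.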

The second concern is the nature and genesis of unknowns. While many scholars have treated unknowns as arising from limits to human experience and cognitive capacity, increasing attention has been paid recently to the thesis that unknowns are socially constructed, many of them intentionally so. Smithson’s 1989 book was among the earliest to take up the thesis that unknowns are socially constructed. Related work includes Robert Proctor’s 1995 Cancer Wars and Ulrich Beck’s 1992 Risk Society. Early in the 21st century this thesis has become more mainstream. Indeed, the 2008 edited volume bearing “agnotology” in its title focuses on how culture, politics, and social dynamics shape what people do not know.

Philosophers and social scientists alike have debated whether there are different kinds of unknowns. This issue is important because if there is only one kind then only one prescriptive decisional framework is necessary and it also may be the case that humans have evolved one dominant way of making decisions with unknowns. On the other hand, different kinds of unknowns may require distinct methods for dealing with them.

In philosophy and mathematics the dominant formal framework for dealing with unknowns has been one or another theory of probability. However, Max Black’s ground-breaking 1937 paper proposed that vagueness and ambiguity are distinguishable from each other, from probability, and also from what he called “generality.” The 1960s and ’70s saw a proliferation of mathematical and philosophical frameworks purporting to encompass non-probabilistic unknowns, such as fuzzy set theory, rough sets, fuzzy logic, belief functions, and imprecise probabilities. Debates continue to this day over whether any of these alternatives are necessary, whether all unknowns can be reduced to some form of probability, and whether there are rational accounts of how to deal with non-probabilistic unknowns. The chief contenders currently include generalized probability frameworks (including imprecise probabilities, credal sets, belief functions), robust Bayesian techniques, and hybrid fuzzy logic techniques.

In the social sciences, during the early 1920’s Keynes distinguished between evidentiary “strength” and “weight,” while Knight similarly separated “risk” (probabilities are known precisely) from “uncertainty” (probabilities are not known). Ellsberg’s classic 1961 experiments demonstrated that people’s choices can be influenced by how imprecisely probabilities are known (i.e., “ambiguity”), and his results have been replicated and extended by numerous studies. Smithson’s 1989 book proposed a taxonomy of unknowns and his 1999 experiments showed that choices also are influenced by uncertainty arising from conflict (disagreeing evidence from equally credible sources); those results also have been replicated.

More recent empirical research on how humans process unknowns has utilized brain imaging methods. Several studies have suggested that Knightian uncertainty (ambiguity) and risk differentially activate the ventral systems that evaluate potential rewards (the so-called “reward center”) and the prefrontal and parietal regions, with the latter two becoming more active under ambiguity. Other kinds of unknowns have yet to be widely studied in this fashion but research on them is emerging. Nevertheless, the evidence thus far suggests that the human brain treats unknowns as if there are different kinds.

Finally, there are continuing debates regarding whether different kinds of unknowns should be incorporated in prescriptive decision making frameworks and, if so, how a rational agent should deal with them. There are several decisional frameworks incorporating ambiguity or imprecision, some of which date back to the mid-20th century, and recently at least one incorporating conflict as well. The most common recommendation for decision making under ambiguity amounts to a type of worst-case analysis. For instance, given a lower and upper estimate of the probability of event E, the usual advice is to use the lower probability for evaluating bets on E occurring but to use the upper probability for bets against E. However, the general issue of what constitutes rational behavior under non-probabilistic uncertainties such as ambiguity, fuzziness or conflict remains unresolved.
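
That worst-case advice can be made concrete with a toy calculation. The event, probability interval, and stakes below are invented; the rule amounts to evaluating every bet at the probability least favorable to it:

```python
# Sketch of the worst-case rule for decisions under ambiguity described above,
# using a hypothetical event E whose probability is known only to lie in an
# interval. A bet ON E is valued with the lower probability of E; a bet
# AGAINST E is valued with the upper probability of E (pessimism either way).

p_lo, p_hi = 0.3, 0.6    # lower and upper probability of event E
stake, payout = 10, 25   # pay `stake` to win `payout` if the bet succeeds

# Bet on E occurring: worst case is that E is as unlikely as possible.
value_on = p_lo * payout - stake

# Bet against E: worst case is that E is as likely as possible,
# so "not E" gets probability 1 - p_hi.
value_against = (1 - p_hi) * payout - stake

print(value_on, value_against)  # → -2.5 0.0
```

Under this rule the ambiguity-averse agent declines the bet on E (negative worst-case value) but is indifferent about betting against it.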

Further Readings

Bammer, G. and Smithson, M. (Eds.), (2008). Uncertainty and Risk: Multidisciplinary Perspectives. London: Earthscan.

Beck, U. (1999). World Risk Society. Oxford: Polity Press.

Black, M. (1937). Vagueness: An exercise in logical analysis. Philosophy of Science, 4, 427-455.

Gärdenfors, P. and Sahlin, N.-E. (Eds.), (1988). Decision, Probability, and Utility: Selected Readings. Cambridge, UK: Cambridge University Press.

Proctor, R. and Schiebinger, L. (Eds.), (2008). Agnotology: The Making and Unmaking of Ignorance. Stanford, CA: Stanford University Press.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer-Verlag.

Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman & Hall.

Ignoring Stuff and People


“Don’t pay any attention to the critics—Don’t even ignore them.” ~ Samuel Goldwyn

When I first started exploring ignorance and related topics, it occurred to me that not-knowing has a passive and an active voice. To be ignorant of something is the passive voice—Ignorance is a state. Ignoring something is an action. I want to explore various aspects of ignoring in this and perhaps some subsequent posts.

To begin, ignoring attracts a moral charge that ignorance usually doesn’t. For instance, innocence can be construed as a special case of ignorance. Innocents don’t ignore corrupting information; they’re just unaware of its existence. Many communications addressed to people who are ignoring something or someone are chastisements. Ignoring is akin to commission, whereas being ignorant is more like omission. Ignoring has an element of will or choice about it that being ignorant does not. So people are more likely to ascribe a moral status to an act of ignoring than to a state of ignorance.

For instance, reader response to an April 11 Courier Mail story, whose main point was that “Three men have been rescued after they drove around Road Closed signs and into floodwaters in central Queensland,” was uncharitable, to say the least. Comments and letters to the editor expressed desires for the men to be named, shamed, fined and otherwise punished for wasting taxpayers’ money and needlessly imperiling the rescuers.

Criminal negligence cases often make it clear that while the law may regard ignorance as scant excuse, ignoring is even worse. Ignoring imputes culpability straightaway. As Halah Touryalai reported in her Forbes blog in March: “Irving Picard, the Trustee seeking to reclaim billions for Madoff’s victims, claims Merrill Lynch International was creating and selling products tied to Madoff feeder funds even though it was aware of possible fraud within Bernard L. Madoff Investment Securities.”

Despite the clear distinction between ignorance and ignoring, people can and do confuse the two. Andrew Rotherham’s May 12 blog at Time accuses American educators and policy-makers of ignoring the burgeoning crisis regarding educational outcomes for Hispanic schoolchildren. But it isn’t clear whether the educators are aware of this problem (and ignoring it) or not (and therefore ignorant about it). There are so many festering and looming crises to keep track of these days that various sectors of the public regularly get caned for “ignoring” crises when in all likelihood they are just ignorant of them.

In a more straightforward case, the Sydney Herald Sun’s March 1 headline, “One-in-four girls ignoring cervical cancer vaccine,” simply has got it wrong. The actual message in the article is not that the schoolgirls in question are ignoring the vaccine, but that they’re ignorant of it and also of the cancer itself.

Communicators of all stripes take note: The distinction between ignoring and ignorance is important and worth preserving. Let us not tolerate, on our watch, a linguistically criminal slide into the elision of that distinction through misusage or mental laziness.

Because it is an act and therefore can be intentional, ignoring has uses as a social or psychological tactic that ignorance never can have. There is a plethora of self-help remedies out there which, when you scratch the surface, boil down to tactical or even strategic ignoring. I’ll mention just two examples of such injunctions: “Don’t sweat the small stuff” and “live in the present.”

The first admonishes us to discount the “small stuff” to some extent, presumably so we can pay attention to the “big stuff” (whatever that may be). This simple notion has spawned several self-help bestsellers. The second urges us to disregard the past and future and focus attention on the here-and-now instead. This advice has been reinvented many times; in my short lifetime I’ve seen it crop up all the way from the erstwhile hippie sensibilities embodied in slogans such as “be here now” to the present-day therapeutic cottage industry of “mindfulness.”

Even prescriptions for rational decision-making contain injunctions to ignore certain things. Avoiding the “sunk cost fallacy” is one example. Money, time, or other irrecoverable resources that already have been spent in pursuing a goal should not be considered along with future potential costs in deciding whether to persist in pursuing the goal. There’s a nice treatment of this on the Less Wrong site. The Mind Your Decisions blog also presents a few typical examples of the sunk cost fallacy in everyday life. The main point here is that a rational decisional framework prescribes ignoring sunk costs.
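
A toy calculation makes the prescription vivid. All of the figures below are invented; the point is only that the sunk amount never enters the decision:

```python
# Toy illustration of the sunk-cost prescription: whether to continue a
# project depends only on future costs and benefits. Figures are hypothetical.

spent_so_far = 8000       # sunk: irrecoverable, so rationally irrelevant
remaining_cost = 3000     # what it would still cost to finish
expected_benefit = 5000   # what finishing is expected to be worth

def should_continue(remaining_cost, expected_benefit):
    """Continue iff the project is worthwhile looking forward only."""
    return expected_benefit > remaining_cost

print(should_continue(remaining_cost, expected_benefit))  # → True
# Note that spent_so_far never appears in the decision, however large it is.
```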

Once we shift attention from ignoring things to ignoring people, the landscape becomes even more interesting. Ignoring people, it would seem, occupies important places in commonsense psychology. The earliest parental advice I received regarding what to do about bullies was to ignore them. My parents meant well, and it turned out that this worked in a few instances. But some bullies required standing up to.

For those of us who aren’t sure how to go about it, there are even instructions and guides on how to ignore people.

Ignoring people also gets some airplay as part of a strategy or at least a tactic. For instance, how should parents deal with disrespectful behavior from their children? Well, one parenting site says not to ignore such behavior. Another admonishes you to ignore it. Commonsense psychology can be self-contradicting. It’s good old commonsense psychology that tells us “opposites attract” and yet “birds of a feather flock together,” “look before you leap” but “(s)he who hesitates is lost,” “many hands make light work” but “too many cooks spoil the broth,” and so on.

Given that ignoring has a moral valence, what kinds of justifications are there for ignoring people? There are earnest discussion threads on such moral quandaries as ignoring people who are nice to you. In this thread, by the way, many of the contributors conclude that it’s OK to do so, especially if the nice person has traits that they can’t abide.

Some social norms or relationships entail ignoring behaviors or avoiding communication with certain people. One of the clearest examples of this is the kin-avoidance rules in some Australian Indigenous cultures. An instance is the ban on speaking with or even being in close proximity to one’s mother-in-law. The Central Land Council site describes the rule thus: “This relationship requires a social distance, such that they may not be able to be in the same room or car.”

Some religious communities such as the Amish have institutionalized shunning as a means of social control. As Wenger (1953) describes it, “The customary practice includes refusal to eat at the same table, even within the family, the refusal of ordinary social intercourse, the refusal to work together in farming operations, etc.” So, shunning entails ignoring. Wenger’s article also details some of the early religious debates over when and to what extent shunning should be imposed.

Ostracism has a powerful impact because it feels like rejection. Social psychologist Kipling Williams has studied the effects of ostracism for a number of years now, and without any apparent trace of irony remarks that it was “ignored by social scientists for 100 years.” Among his ingenious experiments is one demonstrating that people feel the pain of rejection when they’re ignored by a cartoon character on a computer screen. Williams goes as far as to characterize ostracism as an “invisible form of bullying.”

So, for an interesting contrast between the various moral and practical justifications you can find for ignoring others, try a search on the phrase “ignoring me.” There, you’ll find a world of agony. This is another example to add to my earlier post about lies and secrecy, where we seem to forget about the Golden Rule. We lie to others but hate being lied to. We also are willing to ignore others but hate being ignored in turn. Well, perhaps unless you’re being ignored by your mother-in-law.

Written by michaelsmithson

May 19, 2011 at 3:32 pm

I Can’t Believe What I Teach


For the past 34 years I’ve been compelled to teach a framework that I’ve long known is flawed. A better framework exists and has been available for some time. Moreover, I haven’t been forced to do this by any tyrannical regime or under threats of great harm to me if I teach this alternative instead. And it gets worse: I’m not the only one. Thousands of other university instructors have been doing the same all over the world.

I teach statistical methods in a psychology department. I’ve taught courses ranging from introductory undergraduate through graduate levels, and I’m in charge of that part of my department’s curriculum. So, what’s the problem—Why haven’t I abandoned the flawed framework for its superior alternative?

Without getting into technicalities, let’s call the flawed framework the “Neyman-Pearson” approach and the alternative the “Bayes” approach. My statistical background was formed as I completed an undergraduate degree in mathematics during 1968-72. My first courses in probability and statistics were Neyman-Pearson and I picked up the rudiments of Bayes toward the end of my degree. At the time I thought these were simply two valid alternative ways of understanding probability.

Several years later I was a newly-minted university lecturer teaching introductory statistics to fearful and sometimes reluctant students in the social sciences. The statistical methods used in social science research were Neyman-Pearson, so of course I taught Neyman-Pearson. Students, after all, need to learn to read the literature of their discipline.

Gradually, and through some of my research into uncertainty, I became aware of the severe problems besetting the Neyman-Pearson framework. I found that there was a lengthy history of devastating criticisms raised against Neyman-Pearson even within the social sciences, criticisms that had been ignored by practising researchers and gatekeepers to research publication.

However, while the Bayesian approach may have been conceptually superior, in the late ’70s through early ’80s it suffered from mathematical and computational impracticalities. It provided few usable methods for dealing with complex problems. Disciplines such as psychology were held in thrall to Neyman-Pearson by a combination of convention and the practical requirements of complex research designs. If I wanted to provide students or, for that matter, colleagues who came to me for advice, with effective statistical tools for serious research then usually Neyman-Pearson techniques were all I could offer.

But what to do about teaching? No university instructor takes a formal oath to teach the truth, the whole truth, and nothing but the truth; but for those of us who’ve been called to teach it feels as though we do. I was sailing perilously close to committing Moore’s Paradox in the classroom (“I assert Neyman-Pearson but I don’t believe it”).

I tried slipping in bits and pieces alerting students to problematic aspects of Neyman-Pearson and the existence of the Bayesian alternative. These efforts may have assuaged my conscience but they did not have much impact, with one important exception. The more intellectually proactive students did seem to catch on to the idea that theories of probability and statistics are just that—Theories, not god-given commandments.

Then Bayes got a shot in the arm. In the mid-’80s some powerful computational techniques were adapted and developed that enabled this framework to fight at the same weight as Neyman-Pearson and even better it. These techniques sail under the banner of Markov chain Monte Carlo methods, and by the mid-’90s software was available (free!) to implement them. The stage was set for the Bayesian revolution. I began to dream of writing a Bayesian introductory statistics textbook for psychology students that would set the discipline free and launch the next generation of researchers.
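
For readers curious what these techniques look like under the hood, here is a minimal sketch of the Metropolis algorithm, the simplest member of the MCMC family. To keep it self-contained, the target is a standard normal density rather than a real posterior; in Bayesian practice the target would be a posterior known only up to a normalizing constant:

```python
# Minimal Metropolis sampler (random-walk proposals). Illustrative only:
# the target is an unnormalized standard normal log-density.
import math
import random

random.seed(1)  # reproducible chain

def log_target(x):
    return -0.5 * x * x  # log of unnormalized N(0, 1) density

x, samples = 0.0, []
for _ in range(20000):
    proposal = x + random.gauss(0.0, 1.0)  # symmetric random-walk step
    # Accept with probability min(1, target(proposal) / target(x)).
    if math.log(random.random()) < log_target(proposal) - log_target(x):
        x = proposal                       # accept; otherwise keep current x
    samples.append(x)

burned = samples[5000:]                    # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))                      # close to the target mean of 0
```

The retained draws approximate the target distribution, so posterior means, intervals, and the like can be read off the sample directly, which is exactly why these methods let Bayes fight at Neyman-Pearson’s weight.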

It didn’t happen that way. Psychology was still deeply mired in Neyman-Pearson and, in fact, in a particularly restrictive version of it. I’ll spare you the details other than saying that it focused, for instance, on whether the researcher could reject the claim that an experimental effect was nonexistent. I couldn’t interest my colleagues in learning Bayesian techniques, let alone undergraduate students.

By the late ’90s a critical mass of authoritative researchers convinced the American Psychological Association to form a task-force to reform statistical practice, but this reform really amounted to shifting from the restrictive Neyman-Pearson orientation to a more liberal one that embraced estimating how big an experimental effect is and setting a “confidence interval” around it.
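
In code, that estimation-oriented style of reporting amounts to something like the following sketch: an effect size (here a raw mean difference) with a 95% confidence interval. The two samples are invented, and a large-sample normal approximation stands in for the usual t-based interval:

```python
# Sketch of estimation-style reporting: effect size plus 95% CI.
# Data are invented; a normal approximation is used for the critical value.
from statistics import NormalDist, mean, stdev

group_a = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
group_b = [10, 11, 9, 12, 10, 13, 11, 10, 12, 11]

diff = mean(group_a) - mean(group_b)  # the effect estimate
se = (stdev(group_a) ** 2 / len(group_a)
      + stdev(group_b) ** 2 / len(group_b)) ** 0.5
z = NormalDist().inv_cdf(0.975)       # ≈ 1.96 for a 95% interval

lower, upper = diff - z * se, diff + z * se
print(f"difference = {diff:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
# → difference = 2.60, 95% CI [1.37, 3.83]
```

The interval conveys both the size of the effect and the precision of the estimate, which is what the reform asked for instead of a bare reject/don’t-reject verdict.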

It wasn’t the Bayesian revolution, but I leapt onto this initiative because both reforms were a long stride closer to the Bayesian framework and would still enable students to read the older Neyman-Pearson dominated research literature. So, I didn’t write a Bayesian textbook after all. My 2000 introductory textbook was, so far as I’m aware, one of the first to teach introductory statistics to psychology students from a confidence interval viewpoint. It was generally received well by fellow reformers, and I got a contract to write a kind of researcher’s confidence interval handbook that came out in 2003. The confidence interval reform in psychology was under way, and I’d booked a seat on the juggernaut.

Market-wise, my textbook flopped. I’m not singing the blues about this, nor do I claim sour grapes. For whatever reasons, my book just didn’t take the market by storm. Shortly after it came out, a colleague mentioned to me that he’d been at a UK conference with a symposium on statistics teaching where one of the speakers proclaimed my book the “best in the world” for explaining confidence intervals and statistical power. But when my colleague asked if the speaker was using it in the classroom he replied that he was writing his own. And so better-selling introductory textbooks continued to appear. A few of them referred to the statistical reforms supposedly happening in psychology but the majority did not. Most of them are the nth edition of a well-established book that has long been selling well to its set of long-serving instructors and their students.

My 2003 handbook fared rather better. I had put some software resources for computing confidence intervals on a webpage and these got a lot of use. These, and my handbook, got picked up by researchers and their graduate students. Several years on, the stuff my scripts did started to appear in mainstream commercial statistics packages. It seemed that this reform was occurring mainly at the advanced undergraduate, graduate and researcher levels. Introductory undergraduate statistical education in psychology remained (and still remains) largely untouched by it.

Meanwhile, what of the Bayesian movement? In this decade, graduate-level social science oriented Bayesian textbooks began to appear. I recently reviewed several of them and have just sent off an invited review of another. In my earlier review I concluded that the market still lacked an accessible graduate-level treatment oriented towards psychology, a gap that may have been filled by the book I’ve just finished reviewing.

Have I tried teaching Bayesian methods? Yes, but thus far only in graduate-level workshops, and on my own time (i.e., not as part of the official curriculum). I’ll be doing so again in the second half of this year, hoping to recruit some of my colleagues as well as graduate students. Next year I’ll probably introduce a module on Bayes for our 4th-year (Honours) students.

It’s early days, however, and we remain far from being able to revamp the entire curriculum. Bayesian techniques still rarely appear in the mainstream research literature in psychology, and so students still need to learn Neyman-Pearson to read that literature with a knowledgably critical eye. A sea-change may be happening, but it’s going to take years (possibly even decades).

Will I try writing a Bayesian textbook? I already know from experience that writing a textbook is a lot of time and hard work, often for little reward. Moreover, in many universities (including mine) writing a textbook counts for nothing. It doesn’t bring research money, it usually doesn’t enhance the university’s (or the author’s) scholarly reputation, it isn’t one of the university’s “performance indicators,” and it seldom brings much income to the author. The typical university attitude towards textbooks is as if the stork brings them. Writing a textbook, therefore, has to be motivated mainly by a passion for teaching. So I’m thinking about it…

E-Transparency


A recent policy paper by Frank Bannister and Regina Connolly asks whether transparency is an unalloyed good in e-government. As the authors point out, the advent of Wikileaks has brought the issue of “e-transparency” into the domain of public debate. Broadly, e-transparency in government refers to access to the data, processes, decisions and actions of governments mediated by information communications technology (ICT).

Debates about the extent to which governments should (or can) be transparent have a lengthy history. The prospect of e-transparency adds considerations of affordability and potential breadth of citizen response and participation. Bannister and Connolly begin their discussion by setting aside the most common objections to transparency: Clear requirements for national security and commercial confidentiality in the service of protecting citizenry or other national interests. What other reasonable objections to transparency, let alone e-transparency, might there be?

Traditional arguments for transparency in government are predicated on three value assertions.

  1. The public has a right to know. Elected office-holders and appointed public or civil servants alike are accountable to their constituencies. Accountability is impossible without transparency; therefore good government requires transparency.
  2. Good government requires building trust between the governors and the governed, which can only arise if the governors are accountable to the governed.
  3. Effective citizen participation in a democracy is possible only if the citizenry is sufficiently educated and informed to make good decisions. Education and information both entail transparency.

Indeed, you can find affirmations of these assertions in the Obama administration’s White House Press Office statement on this issue.

Note that the first of these arguments is a claim to a right, whereas the second and third are claims about consequences. The distinction is important. A right is, by definition, non-negotiable and, in principle, inalienable. Arguments for good consequences, on the other hand, are utilitarian instead of deontological. Utilitarian arguments can be countered by “greater good” arguments and therefore are negotiable.

Japanese official pronouncements about the state of the recent Fukushima plant disaster therefore were expected to be more or less instantaneous and accurate. Even commentary from sources such as the Bulletin of the Atomic Scientists averred that official reports should have been forthcoming sooner about the magnitude and scope of the disaster: “Denied such transparency, media outlets and the public may come to distrust official statements.” The gist of this commentary was that transparency would pay off better than secrecy, and the primary payoff would be increased trust in the Japanese government.

However, there are counter-arguments to the belief that transparency is a necessary or even contributing factor in building trust in government. A recent study by Stephan Grimmelikhuijsen (2010) suggests that when the minutes of local council deliberations were made available online, citizens’ evaluations of council competence declined in comparison to citizens who did not have access to that information. If transparency reveals incompetency then it may not increase trust after all. This finding is congruent with observations that a total accountability culture often also is a blame culture.

There’s another more subtle issue, namely that insistence on accountability and the surveillance levels required thereby are incompatible with trust relations. People who trust one another do not place each other under 24-7 surveillance, nor do they hold them accountable for every action or decision. Trust may be built up via surveillance and accountability, but once it has been established then the social norms around trust relations sit somewhat awkwardly alongside norms regarding transparency. The latter are more compatible with contractual relations than trust relations.

Traditional arguments against transparency (or at least, in favour of limiting transparency) also come in deontological and utilitarian flavors. The right of public servants and even politicians to personal privacy stands against the right of the public to know: One deontological principle versus another. ICT developments have provided new tools to monitor in increasing detail what workers do and how they do it, but as yet there seem to be few well thought-out guidelines for how far the public (or anyone else) should be permitted to go in monitoring government employees or office-holders.

Then there are the costs and risks of disclosure, which these days include exposure to litigation and the potential for data to be hacked. E-transparency is said to cost considerably less than traditional transparency and can deliver much greater volumes of data. Nonetheless, Bannister and Connolly caution that some costs can increase: firstly in the formalization, recording and editing of what previously were informal and unrecorded processes or events, and secondly in the maintenance and updating of databases. The advent of radio and television shortened the expected time for news to reach the public and expanded the expected proportion of the public who would receive the news. ICT developments have boosted both of these expectations enormously.

Even if the lower cost argument is true, lower costs and increased capabilities also bring new problems and risks. Chief among these, according to Bannister and Connolly, are misinterpretation and misuse of data, and inadvertent enablement of misuses. On the one hand, ICT has provided the public with tools to process and analyse information that were unavailable to the radio and TV generations. On the other hand, data seldom speak for themselves, and what they have to say depends crucially on how they are selected and analyzed. Bannister and Connolly mentioned school league tables as a case in point. For a tongue-in-cheek example of the kind of simplistic analyses Bannister and Connolly fear, look no further than Crikey’s treatment of data on the newly-fledged Australian My School website.

Here's another anti-transparency argument, not considered by Bannister and Connolly, grounded in a solid democratic tradition: the secret ballot. Secret ballots stifle vote-buying because the buyer cannot be sure whom their target voted for. This argument has been extended (see, for instance, the relevant Freakonomics post) to defend secrecy regarding campaign contributions. Anonymous donations deter influence-peddling, so the argument runs, because candidates can't be sure the supposed contributors actually contributed. It would not be difficult to generalize it further to include voting by office-holders on crucial bills, or certain kinds of decisions. There are obvious objections to this argument, but it also has some appeal. After all, there is plenty of vote-buying and influence-peddling conducted by lobby groups outfitted and provisioned for just such undertakings.

Finally, there is a transparency bugbear known to any wise manager who has tried to implement systems to make their underlings accountable: gaming the system. Critics of school league tables claim they motivate teachers to tailor curricula to the tests or even indulge in outright cheating (there are numerous instances of the latter; see here and here for a couple of recent examples). Nor is this limited to underling-boss relations. You can find it in any competitive market. Last year Eliot Van Buskirk posted an intriguing piece on how marketers are gaming social media via artificially inflated metrics such as numbers of friends or YouTube views.

In my 1989 book, I pointed out that information has come to resemble hard currency, and the “information society” is also an increasingly regulated, litigious society. This combination motivates those under surveillance, evaluation, or accountability regimes to distort or simply omit potentially discrediting information. Bannister and Connolly point to the emergence of a “non-recording” culture in public service: “Where public servants are concerned about the impact of data release, one solution is to not create or record the data in the first place.” To paraphrase the conclusion I came to in 1989, the new dilemma is that the control strategies designed to enhance transparency may actually increase intentional opacity.

I should close by mentioning that I favor transparency. My purpose in this post has been to point out some aspects of the arguments for and against it that need further thought, especially in this time of e-everything.

Written by michaelsmithson

April 15, 2011 at 1:17 am

Delusions: What and Why

leave a comment »

Any blog whose theme is ignorance and uncertainty should get around to discussing delusions sooner or later. I am to give a lecture on the topic to third-year neuropsych students this week, so a post about it naturally follows. Delusions are said to be a concomitant, and indeed a product, of other cognitive or psychological pathologies, and traditionally research on delusions was conducted in clinical psychology and psychiatry. Recently, though, others have got in on the act: neuroscientists and philosophers.

The connection with neuroscience probably is obvious. Some kinds of delusion, as we’ll see, beg for a neurological explanation. But why have philosophers taken an interest? To get ourselves in the appropriately philosophical mood let’s begin by asking, what is a delusion?

Here’s the Diagnostic and Statistical Manual definition (2000):

“A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary.”

But how does that differ from:

  1. A mere error in reasoning?
  2. Confirmation bias?
  3. Self-enhancement bias?

There’s a plethora of empirical research verifying that most of us, most of the time, are poor logicians and even worse statisticians. Likewise, there’s a substantial body of research documenting our tendency to pay more attention to and seek out information that confirms what we already believe, and to ignore or avoid contrary information. And then there’s the Lake Wobegon effect—The one where a large majority of us think we’re a better driver than average, less racially biased than average, more intelligent than average, and so on. But somehow none of these cognitive peccadilloes seem to be “delusions” on the same level as believing that you’re Napoleon or that Barack Obama is secretly in love with you.

Delusions are more than violations of reasoning (in fact, they may involve no pathology in reasoning at all). Nor are they merely cases of biased perception or wishful thinking. There seems to be more to a psychotic delusion than any of these characteristics; otherwise all of us are deluded most of the time and the concept loses its clinical cutting edge.

One approach to defining them is to say that they entail a failure to comply with “procedural norms” for belief formation, particularly those involving the weighing and assessment of evidence. Procedural norms aren’t the same as epistemic norms (for instance, most of us are not Humean skeptics, nor do we update our beliefs using Bayes’ Theorem or think in terms of subjective expected utility calculations—But that doesn’t mean we’re deluded). So the appeal to procedural norms excuses “normal” reasoning errors, confirmation and self-enhancement biases. Instead, these are more like widely held social norms. The DSM definition has a decidedly social constructionist aspect to it. A belief is delusional if everyone else disbelieves it and everyone else believes the evidence against it is incontrovertible.

So, definitional difficulties remain (especially regarding religious beliefs or superstitions). In fact, there’s a website here making an attempt to “crowd-source” definitions of delusions. The nub of the problem is that it is hard to define a concept such as delusion without sliding from descriptions of what “normal” people believe and how they form beliefs into prescriptions for what people should believe or how they should form beliefs. Once we start down the prescriptive track, we encounter the awkward fact that we don’t have an uncontestable account of what people ought to believe or how they should arrive at their beliefs.

One element common to many definitions of delusion is the lack of insight on the part of the deluded. They’re meta-ignorant: They don’t know that they’re mistaken. But this notion poses some difficult problems for the potential victim of a delusion. In what senses can a person rationally believe they are (or have been) deluded? Straightaway we can knock out the following: “My current belief in X is false.” If I know believing X is wrong, then clearly I don’t believe X. Similarly, I can’t validly claim that all my current beliefs are false, or that the way I form beliefs always produces false beliefs.

Here are some defensible examples of self-insight regarding one's own delusions:

  1. I believe I have held false beliefs in the past.
  2. I believe I may hold false beliefs in the future.
  3. I believe that some of my current beliefs may be false (but I don’t know which ones).
  4. I believe that the way I form any belief is unreliable (but I don’t know when it fails).

As you can see, self-insight regarding delusions is like self-insight into your own meta-ignorance (the stuff you don’t know you don’t know).  You can spot it in your past and hypothesize it for your future, but you won’t be able to self-diagnose it in the here-and-now.

On the other hand, meta-ignorance and delusional thinking are easy to spot in others. For observers, it may seem obvious that someone is deluded generally in the sense that the way they form beliefs is unreliable. Usually generalized delusional thinking is a component in some type of psychosis or severe brain trauma.

But monothematic delusions are really difficult to explain. These are what they sound like, namely specific delusional beliefs with a single theme. The explanatory problem arises because the monothematically deluded person may otherwise seem cognitively competent. They can function in the everyday world, they can reason, their memories are accurate, and they form beliefs we can agree with, except on one topic.

Could some monothematic delusions have a different basis from others?

Some theorists have distinguished Telic (goal-directed) from Thetic (truth-directed) delusions. Telic delusions (functional in the sense that they satisfy a goal) might be explained by a motivational basis. A combination of motivation and affective consequences (e.g., believing Q is distressing, therefore better to believe not-Q) could be a basis for delusional belief. An example is the de Clerambault syndrome, the belief that someone of high social status is secretly in love with oneself.

Thetic delusions are somewhat more puzzling, but also quite interesting. Maher (1974, etc.) said long ago that delusions arise from normal responses to anomalous experiences. Take Capgras syndrome, the belief that one’s nearest & dearest have been replaced by lookalike impostors. A recent theory about Capgras begins with the idea that if face recognition depends on a specific cognitive module, then it is possible for that to be damaged without affecting other cognitive abilities. A two-route model of face recognition holds that there are two sub-modules:

  • A ventral visuo-semantic pathway for visual encoding and overt recognition, and
  • A dorsal visuo-affective pathway for covert autonomic recognition and affective response to familiar faces.

For prosopagnosia sufferers the ventral system has been damaged, whereas for Capgras sufferers the dorsal system has been damaged. So here seems to be the basis for the “anomalous” experience that gives rise to Capgras syndrome. But not everyone whose dorsal system is damaged ends up with Capgras syndrome. What else could be going on?

Maher’s claim amounts to a one-factor theory about thetic delusions. The unusual experience (e.g., no longer feeling emotions when you see your nearest and dearest) becomes explained by the delusion (e.g., they’ve been replaced by impostors). A two-factor theory claims that reasoning also has to be defective (e.g., a tendency to leap to conclusions) or some motivational bias has to operate. Capgras or Cotard syndrome (the latter is a belief that one is dead) sounds like a reasoning pathology is involved, whereas de Clerambault syndrome or reverse Othello syndrome (deluded belief in the fidelity of one’s spouse) sounds like it’s propelled by a motivational bias.

What is the nature of the “second factor” in the Capgras delusion?

  1. Capgras patients are aware that their belief seems bizarre to others, but they are not persuaded by counter-arguments or evidence to the contrary.
  2. Davies et al. (2001) propose that, specifically, Capgras patients have lost the ability to refrain from believing that things are the way they appear to be. However, Capgras patients are not susceptible to visual illusions.
  3. McLaughlin (2009) posits that Capgras patients are susceptible to affective illusions, in the sense that a feeling of unfamiliarity leads straight to a belief in that unfamiliarity. But even if true, this account still doesn’t explain the persistence of that belief in the face of massive counter-evidence.

What about the patients who have a disconnection between their face recognition modules and their autonomic nervous systems but do not have Capgras? Turns out that the site of their damage differs from that of Capgras sufferers. But little is known about the differences between them in terms of phenomenology (e.g., whether loved ones also feel unfamiliar to the non-Capgras patients).

Where does all this leave us? To begin with, we are reminded that a label ("delusion") doesn't bring with it a unitary phenomenon. There may be distinct types of delusions with quite distinct etiologies. The human sciences are especially vulnerable to this pitfall, because humans have fairly effective commonsensical theories about human beings—folk psychology and folk sociology—from which the human sciences borrow heavily. We're far less likely to be (mis)guided by common sense when theorizing about things like mitochondria or mesons.

Second, there is a clear need for continued cross-disciplinary collaboration in studying delusions, particularly between cognitive and personality psychologists, neuroscientists, and philosophers of mind. “Delusion” and “self-deception” pose definitional and conceptual difficulties that rival anything in the psychological lexicon.  The identification of specific neural structures implicated in particular delusions is crucial to understanding and treating them. The interaction between particular kinds of neurological trauma and other psychological traits or dispositions appears to be a key but is at present only poorly understood.

Last, but not least, this gives research on belief formation and reasoning a cutting edge, placing it at the clinical neuroscientific frontier. There may be something to the old commonsense notion that logic and madness are closely related. By the way, for an accessible and entertaining treatment of this theme in the history of mathematics,  take a look at LogiComix.

Not Knowing When or Where You’re At

with 3 comments

My stepfather, Bob, at 86 years of age, just underwent major surgery to remove a considerable section of his colon. He'd been under heavy sedation for several days, breathing through a respirator, with drip-feeds for hydration and sustenance. After that, he was gradually reawakened. He found himself in a room he'd never seen before, and he had no sense of what day or time it was. He had no memory of events between arriving at the hospital and waking up in the ward. He had to figure out where and when he was.

These are what philosophers are fond of calling “self-locating beliefs.” They say we learn two quite different kinds of things about the world—Things about what goes on in this world, and things about where and when we are in this world.

Bob has always had an excellent sense of direction. He’s one of those people who can point due North when he’s in a basement. I, on the other hand, have such a poor sense of direction that I’ve been known to blunder into a broom-closet when attempting to exit from a friend’s house. So I find the literature on self-locating beliefs personally relevant, and fascinating for the problems they pose for reasoning about uncertainty.

Adam Elga's classic paper published in 2000 made the so-called "Sleeping Beauty" self-locating belief problem famous: "Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking."

You have just woken up. What probability should you assign to the proposition that the coin landed Heads? Some people answer 1/2 because the coin is fair, your prior probability that it lands Heads should be 1/2, and the fact that you have just awakened adds no other information.

Others say the answer is 1/3. There are 3 possible awakenings, of which only 1 arises from the coin landing Heads. Therefore, given that you have ended up with one of these possible awakenings, the probability that it’s a “Heads” awakening is 1/3. Elga himself opted for this answer. However, the debates continued long after the publication of his paper with many ingenious arguments for both answers (and even a few others). Defenders of one position or the other became known as “halfers” and “thirders.”
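The thirder bookkeeping is easy to check with a quick Monte Carlo sketch (the setup and function name here are mine, not Elga's): run the experiment many times, tally every awakening, and see what fraction of awakenings follow a Heads toss.

```python
import random

def simulate_sleeping_beauty(trials=100_000, seed=1):
    """Count awakenings across many runs and return the fraction
    of awakenings that occur after the coin lands Heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # Heads: Beauty is woken once
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # Tails: Beauty is woken twice
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(simulate_sleeping_beauty())  # close to 1/3
```

Of course, halfers would object that counting awakenings this way already presupposes the thirder's reference class; the simulation illustrates the arithmetic, not the philosophy.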

But suppose we accept Elga's answer: What of it? It raises a problem for widely accepted ideas about how and why our beliefs should change, if we consider the probability we'd assign to the coin landing Heads before the researchers imposed their experiment on us. We'd say a fair coin has a probability of 1/2 of landing Heads. But on awakening, Elga says we should now believe that probability is 1/3. Yet we haven't received any new information about the coin or anything else relevant to the outcome of the coin-toss. In standard accounts of conditional probability, we should alter this probability only on the grounds of having acquired some new information. But all the information about the experiment was given to us before we were put to sleep!

Here’s another example, the Shangri La Journey problem from Frank Arntzenius (2003):

“There are two paths to Shangri La, the path by the Mountains, and the path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if Heads you go by the Mountains, if Tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will forever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But, just as soon as you enter Shangri La, your memory of this Beauteous Journey will be erased and be replaced by a memory of the Journey by the Mountains.”

Arntzenius takes this case to provide a counterexample to the standard account of how conditional probability works. As in the Sleeping Beauty problem, consider what the probability we'd assign to the coin landing Heads should be at different times. Our probability before the journey should be 1/2, since the only relevant information we have is that the coin is fair. Now, suppose the coin actually lands Heads. Our probability of Heads after we set out and see that we are traveling by the mountains should be 1, because we now know the outcome of the coin toss. But once we pass through the gates of Shangri La, Arntzenius argues, our probability should revert to 1/2: "for you will know that you would have had the memories that you have either way, and hence you know that the only relevant information that you have is that the coin is fair."

Well, this goes against the standard Bayesian account of conditional probabilities. Letting H = Heads and M = memory of the Journey by the Mountains, according to Bayes’ formula we should update our belief that the coin landed heads by computing
P(H|M) = P(M|H)P(H)/P(M),
where P(H) is the probability of Heads after we know the outcome of the coin toss. According to our setup, P(H) = P(M|H) = P(M) = 1. Therefore, P(H|M) = 1. Arntzenius declares that because our probability of Heads should revert to 1/2, something is wrong with Bayesian conditionalization.

The difficulty arises because the Bayesian updating rule—conditionalization—requires certainties to be permanent: once you’re certain of something, you should always be certain of it. But when we consider self-locating beliefs, there seem to be cases where this requirement clearly is false. For example, one can reasonably change from being certain that it’s one time to being certain that it’s another.

This is the kind of process my stepfather went through as he reoriented himself to what his family and medical caretakers assured him was the here-and-now. It was rather jarring for him at first, but fortunately he isn't a Bayesian. He could sneak peeks at newspapers and clocks to see whether those concurred with what he was being told, and like most of us he could be comfortable with the notion of shifting from being certain it was, say, March 20th to being certain that it was March 26th. But what if he were a Bayesian?

Bob: What’s going on?
Nearest & Dearest: You’re in Overlake Hospital and you’ve been sedated for 6 days. It’s Saturday the 26th of March.
Bob: I can see I’m in a hospital ward, but I’m certain the date is the 20th of March because my memory tells me that a moment ago that was the case.
N&D: But it really is the 26th; you’ve had major surgery and had to be sedated for 6 days. We can show you newspapers and so on if you don’t believe us.
Bob: My personal probability that it is the 20th was 1, last I recall, and it still is. No evidence you provide me can shift me from a probability of 1 because I’m using Bayes’ Rule to update my beliefs. You may as well try to convince me the sun won’t rise tomorrow.
N&D: Uhhh…

A Bayesian faces additional problems that do not and need not trouble my stepfather. One issue concerns identity. Bayesian conditionalization is only well-defined if the subject has a unique set of prior beliefs, i.e., a “unique predecessor.” How should we extend conditionalization in order to accommodate non-unique predecessors? For instance, suppose we’re willing to entertain both the 1/2 and the 1/3 answers to the Sleeping Beauty conundrum?

Meacham's (2010) prescription for multiple predecessors is to represent them with an interval: "A natural proposal is to require our credences to lie in the span of the credences conditionalization prescribes to our predecessors." But the pair of credences {1/3, 1/2} that the Sleeping Beauty puzzle leaves us with does not lend any plausibility to values in between them. For instance, it surely would be silly to average them and declare that the answer to this riddle is 5/12—Neither the thirders nor the halfers would endorse this solution.

Some time ago I (Smithson, 1999), and more recently Gajdos and Vergnaud (2011), argued that there is a crucial difference between two sharp but disagreeing predecessors {1/3, 1/2} and two vague but agreeing ones {[1/3, 1/2], [1/3, 1/2]}. Moreover, it is not irrational to prefer the second situation to the first, as I showed that many people do. Cabantous et al. (2011) recently reported that insurers would charge a higher premium for insurance when expert risk estimates are precise but conflicting than when they agree but are imprecise.

In short, standard probability theories have difficulty not only with self-locating belief updating but, more generally, with any situation that presents multiple equally plausible probability assessments. The traditional Bayesian can't handle multiple selves but the rest of us can.

Exploiting Randomness

with 3 comments

Books such as Nassim Nicholas Taleb's Fooled by Randomness and the psychological literature on our mental foibles such as the gambler's fallacy warn us to beware randomness. Well and good, but randomness actually is one of the most domesticated kinds of uncertainty. In fact, it is one form of uncertainty we can and do exploit.

One obvious way randomness can be exploited is in designing scientific experiments. To experimentally compare, say, two different fertilizers for use in growing broad beans, an ideal would be to somehow ensure that the bean seedlings exposed to one fertilizer were identical in all ways to the bean seedlings exposed to the other fertilizer. That isn’t possible in any practical sense. Instead, we can randomly assign each seedling to receive one or the other fertilizer. We won’t end up with two identical groups of seedlings, but the differences between those groups will have occurred by chance. If their subsequent growth-rates differ by more than we would reasonably expect by chance alone, then we can infer that one fertilizer is likely to have been more effective than the other.
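The logic of "more than we would reasonably expect by chance alone" can be made concrete with a permutation test: reshuffle the random assignment many times and see how often chance alone produces a group difference at least as large as the one observed. Here is a sketch (the growth-rate data and function name are hypothetical, for illustration only):

```python
import random

def permutation_p_value(group_a, group_b, reps=10_000, seed=2):
    """Estimate how often random assignment alone would produce a
    difference in group means at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b           # new list; inputs stay untouched
    n = len(group_a)
    count = 0
    for _ in range(reps):
        rng.shuffle(pooled)              # simulate a fresh random assignment
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            count += 1
    return count / reps

# Hypothetical growth rates (cm/week) for seedlings under two fertilizers:
fert_1 = [2.1, 2.4, 2.0, 2.6, 2.3]
fert_2 = [1.6, 1.8, 1.5, 1.9, 1.7]
print(permutation_p_value(fert_1, fert_2))  # small: unlikely by chance alone
```

A small result says the observed difference would rarely arise from the random assignment itself, so one fertilizer likely outperformed the other.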

Another commonplace exploitation of randomness is random sampling, which is used in all sorts of applications from quality-control engineering to marketing surveys. By randomly sampling a specific percentage of manufactured components coming off the production line, a quality-control analyst can decide whether a batch should be scrapped or not. By randomly sampling from a population of consumers, a marketing researcher can estimate the percentage of that population who prefer a particular brand of a consumer item, and also calculate how likely that estimate is to be within 1% of the true percentage at the time.
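The "within 1% of the true percentage" claim comes from the standard large-sample margin-of-error formula for a proportion, z·sqrt(p(1−p)/n). A quick sketch (the function names are mine):

```python
import math

def sample_size_for_margin(margin, z=1.96, p=0.5):
    """Sample size needed so the estimate falls within `margin` of the
    true proportion at ~95% confidence (worst case at p = 0.5)."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of the ~95% confidence interval for a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

print(sample_size_for_margin(0.01))          # 9604 respondents for +/- 1%
print(round(margin_of_error(0.4, 1000), 3))  # 0.03
```

So a marketing researcher who wants the ±1% claim needs a random sample of nearly ten thousand consumers, which is why published polls usually settle for margins of around 3%.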

There is a less well-known use for randomness, one that in some respects is quite counter-intuitive. We can exploit randomness to improve our chances of making the right decision. The story begins with Tom Cover’s 1987 chapter which presents what Dov Samet and his co-authors recognized in their 2002 paper as a solution to a switching decision that has been at the root of various puzzles and paradoxes.

Probably the most famous of these is the “two envelope” problem. You’re a contestant in a game show, and the host offers you a choice between two envelopes, each containing a cheque of a specific value. The host explains that one of the cheques is for a greater amount than the other, and offers you the opportunity to toss a fair coin to select one envelope to open. After that, she says, you may choose either to retain the envelope you’ve selected or exchange it for the other. You toss the coin, open the selected envelope, and see the value of the cheque therein. Of course, you don’t know the value of the other cheque, so regardless of which way you choose, you have a probability of ½ of ending up with the larger cheque. There’s an appealing but fallacious argument that says you should switch, but we’re not going to go into that here.

Cover presents a remarkable decisional algorithm whereby you can make that probability exceed ½.

  1. Having chosen your envelope via the coin-toss, use a random number generator to provide you with a number anywhere between zero and some value you know to be greater than the largest cheque’s value.
  2. If this number is larger than the value of the cheque you’ve seen, exchange envelopes.
  3. If not, keep the envelope you’ve been given.

Here’s a “reasonable person’s proof” that this works (for more rigorous and general proofs, see Robert Snapp’s 2005 treatment or Samet et al., 2002). I’ll take the role of the game-show contestant and you can be the host. Suppose $X1 and $X2 are the amounts in the two envelopes. You have provided the envelopes and so you know that X1, say, is larger than X2. You’ve also told me that these amounts are less than $100 (the specific range doesn’t matter). You toss a fair coin, and if it lands Heads you give me the envelope containing X1 whereas if it lands Tails you give me the one containing X2. I open my envelope and see the amount there. Let’s call my amount Y. All I know at this point is that the probability that Y = X1 is ½ and so is the probability that Y = X2.

I now use a random number generator to produce a number between 0 and 100. Let’s call this number Z. Cover’s algorithm says I should switch envelopes if Z is larger than Y and I should retain my envelope if Z is less than or equal to Y. The claim is that my chance of ending up with the envelope containing X1 is greater than ½.

As the picture below illustrates, the probability that my randomly generated Z has landed at X2 or below is X2/100, and the probability that Z has landed at X1 or below is X1/100. Likewise, the probability that Z has exceeded X2 is 1 – X2/100, and the probability that Z has exceeded X1 is 1 – X1/100.

[Figure: a number line from 0 to 100 with X2 and X1 marked; Z falls at or below X2 with probability X2/100 and at or below X1 with probability X1/100.]

The proof now needs four steps to complete it:

  1. If Y = X1 then I’ll make the right decision if I decide to keep my envelope, i.e., if Z is less than or equal to X1, and my probability of doing so is X1/100.
  2. If Y = X2 then I’ll make the right decision if I decide to exchange my envelope, i.e., if Z is greater than X2, and my probability of doing so is 1 – X2/100.
  3. The probability that Y = X1 is ½ and the probability that Y = X2 also is ½. So my total probability of ending up with the envelope containing X1 is
    ½ of X1/100, which is X1/200, plus ½ of 1 – X2/100, which is ½ – X2/200.
    That works out to ½ + X1/200 – X2/200.
  4. But X1 is larger than X2, so X1/200 – X2/200 must be larger than 0.
    Therefore, ½ + X1/200 – X2/200 is larger than ½.
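The proof's bookkeeping can also be checked by simulation. This is a sketch under assumptions of my own (the host draws the two cheque amounts uniformly from [0, 100]; for uniform draws the win rate comes out near 2/3, comfortably above ½):

```python
import random

def cover_trial(rng, upper=100.0):
    """One round: draw two cheque values, pick an envelope by fair coin,
    then apply Cover's switching rule with a random threshold Z.
    Returns True if the contestant ends with the larger cheque."""
    a, b = rng.uniform(0, upper), rng.uniform(0, upper)
    x1, x2 = max(a, b), min(a, b)          # x1 is the larger cheque
    y = x1 if rng.random() < 0.5 else x2   # fair-coin envelope choice
    z = rng.uniform(0, upper)              # random threshold
    if z > y:                              # switch envelopes
        kept = x2 if y == x1 else x1
    else:                                  # keep the chosen envelope
        kept = y
    return kept == x1

rng = random.Random(3)
trials = 200_000
win_rate = sum(cover_trial(rng) for _ in range(trials)) / trials
print(win_rate)  # reliably above 1/2 (about 2/3 for uniform amounts)
```

The exact edge over ½ depends on how the host chooses the amounts, but as the proof shows, it is strictly positive whatever the amounts are.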

Fine, you might say, but could this party trick ever help us in a real-world decision? Yes, it could. Suppose you’re the director of a medical clinic with a tight budget in a desperate race against time to mount a campaign against a disease outbreak in your region. You have two treatments available to you but the research literature doesn’t tell you which one is better than the other. You have time and resources to test only one of those treatments before deciding which one to adopt for your campaign.

Toss a fair coin, letting it decide which treatment you test. The resulting cure-rate from the chosen treatment will be some number, Y, between 0% and 100%. The structure of your decisional situation now is identical to the two-envelope setup described above. Use a random number generator to generate a number, Z, between 0 and 100. If Z is less than or equal to Y, use your chosen treatment for your campaign. If Z is greater than Y, use the other treatment instead. Your chance of having chosen the treatment that would have yielded the higher cure-rate under your test conditions will be larger than ½, and you’ll be able to defend your decision if you’re held accountable to any constituency or stakeholders.

In fact, there are ways whereby you may be able to do even better than this in a real-world situation. One is by shortening the range, if you know that the cure-rate is not going to exceed some limit, say L, below 100%. The reason this helps is that X1/(2L) – X2/(2L) is greater than X1/200 – X2/200. The highest this advantage can be is (1 – X2/X1)/2, approached as L shrinks toward X1. Another way, as Snapp (2005) points out, is by knowing the probability distribution generating X1 and X2. Knowing that distribution boosts your probability of being correct to ¾.

However, before we rush off to use Cover’s algorithm for all kinds of decisions, let’s consider its limitations. Returning to the disease outbreak scenario, suppose you have good reasons to suspect that one treatment (Ta, say) is better than the other (Tb). You could just go with Ta and defend your decision by pointing out that, according to your evidence the probability that Ta actually is better than Tb is greater than ½. Let’s denote this probability by P.

A reasonable question is whether you could do better than P by using Cover’s algorithm.  Here’s my claim:

  • If you test Ta or Tb and use the Cover algorithm to decide whether to use it for your campaign or switch to the other treatment, your probability of having chosen the treatment that would have given you the best test-result cure rate will converge to the Cover algorithm’s probability of a correct choice. This may or may not be greater than P (remember, P is greater than ½).

This time, let X1 denote the higher cure rate and X2 the lower cure rate you would have got, depending on whether the treatment you tested was the better or the worse one. Suppose the treatment you test is Ta.

  1. If the cure rate for Ta is X1 then you’ll make the right decision if you decide to use Ta, i.e., if Z is less than or equal to X1, and your probability of doing so is X1/100.
  2. If the cure rate for Ta is X2 then you’ll make the right decision if you decide to use Tb, i.e., if Z is greater than X2, and your probability of doing so is 1 – X2/100.
  3. We began by supposing the probability that the cure rate for Ta is X1 is P, which is greater than ½. The probability that the cure rate for Ta is X2 is 1 – P, which is less than ½.   So your total probability of ending up with the treatment whose cure rate is X1 is
    P*X1/100 + (1 – P)*(1 – X2/100).
    The question we want to address is when this probability is greater than P, i.e.,
    P*X1/100 + (1 – P)*(1 – X2/100) > P.
    It turns out that a rearrangement of this inequality gives us a clue.
  4. First, we subtract P*X1/100 from both sides to get
    (1 – P)*(1 – X2/100) > P – P*X1/100.
  5. Now, we divide both sides of this inequality by 1 – P to get
    1 – X2/100 > P*(1 – X1/100)/(1 – P),
    and then divide both sides by 1 – X1/100 to get
    (1 – X2/100)/(1 – X1/100) > P/(1 – P).

We can now see that the values of X2 and X1 have to make the odds of the Cover algorithm larger than the odds resulting from P. If P = .6, say, then P/(1 – P) = .6/.4 = 1.5. Thus, for example, if X2 = 40% and X1 = 70% then (1 – X2/100)/( 1 – X1/100) = .6/.3 = 2.0 and the Cover algorithm will improve your chances of making the right choice.  However, if X2 = 40% and X1 = 60% then the algorithm offers no improvement on P and if we increase X2 above 40% the algorithm will return a lower probability than P. So, if you already have strong evidence that one alternative is better than the other then don’t bother using the Cover algorithm.
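The comparison in this paragraph is easy to mechanize. A quick sketch (the function names are mine, and the inputs are the worked numbers from above):

```python
def cover_prob(x1, x2, p):
    """Probability of ending up with the better treatment when you test Ta,
    your prior probability that Ta is the better one is p, and you then
    apply the randomized rule."""
    return p * x1 / 100 + (1 - p) * (1 - x2 / 100)

def cover_beats_prior(x1, x2, p):
    """The rearranged criterion: (1 - X2/100)/(1 - X1/100) > P/(1 - P)."""
    return (1 - x2 / 100) / (1 - x1 / 100) > p / (1 - p)

print(cover_prob(70, 40, 0.6), cover_beats_prior(70, 40, 0.6))  # ~0.66, True
print(cover_prob(60, 40, 0.6), cover_beats_prior(60, 40, 0.6))  # ~0.60, False
```

With X1 = 70 and X2 = 40 the algorithm lifts the success probability from 0.6 to 0.66; with X1 = 60 it merely matches the prior, exactly as the odds test predicts.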

Nevertheless, by exploiting randomness we’ve ended up with a decisional guide that can apply to real-world situations. If you are undecided about which of two alternatives is superior but can test only one of them, test it and use Cover’s algorithm to decide which to adopt. You’ll end up with a higher probability of making the right decision than by tossing a coin.

 

Written by michaelsmithson

March 21, 2011 at 9:52 am

Why We (Usually) Know Less than We Think We Do

with one comment

Most of the time, most of us are convinced that we know far more than we are entitled to, even by our own commonsensical notions of what real knowledge is. There are good reasons for this, and let me hasten to say I do it too.

I’m not just referring to things we think we know that turn out to be wrong. In fact, let’s restrict our attention initially to those bits of knowledge we claim for ourselves that turn out to be true. If I say “I know that X is the case” and X really is the case, then why might I still be making a mistaken claim?

To begin with, I might claim I know X is the case because I got that message from a source I trust. Indeed, the vast majority of what we usually consider “knowledge” isn’t even second-hand. It really is just third-hand or even further removed from direct experience. Most of what we think we know not only is just stuff we’ve been told by someone, it’s stuff we’ve been told by someone who in turn was told it by someone else who in turn… I sometimes ask classrooms of students how many of them know the Earth is round. Almost all hands go up. I then ask how many of them could prove it, or offer a reasonable argument in its favor that would pass even a mild skeptic’s scrutiny. Very few (usually no) hands go up.

The same problem for our knowledge-claims crops up if we venture onto riskier, taboo-ridden ground, such as whether we really know who our biological parents are. As I’ve described in an earlier post, whenever obtaining first-hand knowledge is costly or risky, we’re compelled to take second- or third-hand information on faith or trust. I’ve also mentioned in an earlier post our capacity for vicarious learning; to this we can add our enormous capacity for absorbing information from others’ accounts (including others’ accounts of others’ accounts…). As a species, we are extremely well set up to take on second- and third-hand information and convert it into “knowledge.”

The difference, roughly speaking, is between asserting a belief and backing it up with supporting evidence or arguments, or at least first-hand experience. Classic social constructivism begins with the observation that most of what we think we know is “constructed” in the sense of being fed to us via parents, schools, the media, and so on. This line of argument can be pushed quite far, depending on the assumptions one is willing to entertain. A radical skeptic can argue that even so straightforward a first-hand operation as measuring the length of a straight line relies on culturally specific conventions about what “measurement,” “length,” “straight,” and “line” mean.

A second important sense in which our claims to know things are overblown arises from our propensity to fill in the blanks, both in recalling past events and interpreting current ones. A friend gestures to a bowl of food he’s eating and says, “This stuff is hot.” If we’re at table in an Indian restaurant eating curries, I’ll fill in the blank by inferring that he means chilli-hot or spicy. On the other hand if we’re in a Russian restaurant devouring plates of piroshki, I’ll infer that he means high temperature. In either situation I’ll think I know what my friend means but, strictly speaking, I don’t. A huge amount of what we think of as understanding in communication of nearly every kind relies on inferences and interpretations of this kind. Memory is similarly “reconstructive,” filling in the blanks amid the fragments of genuine recollections to generate an account that sounds plausible and coherent.

Hindsight bias is a related problem. Briefly, this is a tendency to over-estimate the extent to which we “knew it all along” when a prediction comes true. The typical psychological experiment demonstrating this finds that subjects recall their confidence in their prediction of an event as being greater if the event occurs than if it doesn’t. An accessible recent article by Paul Goodwin points out that an additional downside to hindsight bias is that it can make us over-confident about our predictive abilities.

Even a dyed-in-the-wool realist can reasonably wonder why so much of what we think we know is indirect, unsupported by evidence, and/or inferred. Aside from balm for our egos, what do we get out of our unrealistically inflated view of our own knowledge? One persuasive, if obvious, argument is that if we couldn’t act on our storehouse of indirect knowledge we’d be paralyzed with indecision. Real-time decision making in everyday life requires fairly prompt responses and we can ill afford to defer many of those decisions on grounds of less than perfect understanding. There is Oliver Heaviside’s famous declaration, “I do not refuse my dinner simply because I do not understand the process of digestion.”

Another argument invites us to consider being condemned to an endlessly costly effort to replace our indirect knowledge with first-hand counterparts or the requisite supporting evidence and/or arguments. A third reason is that communication would become nearly impossibly cumbersome, with everyone treating all messages “literally” and demanding full definitions and explications of each word or phrase.

Perhaps the most unsettling domain where we mislead ourselves about how much we know is the workings of our own minds. Introspection has a chequered history in psychology, and a majority of cognitive psychologists these days would hold it to be an invalid and unreliable source of data on mental processes. The classic modern paper in this vein is Richard Nisbett and Timothy Wilson’s 1977 work, in which they concluded that people often are unable to report accurately even the existence of responses evoked by stimuli, or that a cognitive process has occurred at all. Even when they are aware of both the stimuli and the cognitive process evoked thereby, they may be inaccurate about the effect the former had on the latter.

What are we doing instead of genuine introspection? First, we use our own intuitive causal theories to fill in the blanks. Asked why I’m in a good mood today, I riffle through recent memories searching for plausible causes rather than recalling the actual cause-effect sequence that put me in a good mood. Second, we use our own folk psychological theories about how the mind works, which provide us with plausible accounts of our cognitive processes.

Nisbett and Wilson realized that there are powerful motivations for our unawareness of our unawareness:

“It is naturally preferable, from the standpoint of prediction and subjective feelings of control, to believe that we have such access. It is frightening to believe that one has no more certain knowledge of the workings of one’s own mind than would an outsider with intimate knowledge of one’s history and of the stimuli present at the time the cognitive process occurred.”

Because self-reports about mental processes and their outcomes are bread and butter in much psychological research, it should come as no surprise that debates about introspection have continued in the discipline to the present day. One of the richest recent contributions to these debates is the collaboration between psychologist Russell Hurlburt and philosopher Eric Schwitzgebel, resulting in their 2007 book, “Describing Inner Experience? Proponent Meets Skeptic.”

Hurlburt is the inventor and proponent of Descriptive Experience Sampling (DES), a method of gathering introspective data that attempts to circumvent the usual pitfalls when people are asked to introspect. In DES, a beeper goes off at random intervals signaling the subject to pay attention to their “inner experience” at the moment of the beep. The subject then writes a brief description of this experience. Later, the subject is interviewed by a trained DES researcher, with the goal of enabling the subject to produce an accurate and unbiased description, so far as that is possible. The process continues over several sessions, to enable the researcher to gain some generalizable information about the subject’s typical introspective dispositions and experiences.

Schwitzgebel is, of course, the skeptic in the piece, having written extensively about the limitations and perils of introspection. He describes five main reasons for his skepticism about DES.

  1. Many conscious states are fleeting and unstable.
  2. Most of us have no great experience or training in introspection; and even Hurlburt allows that subjects have to be trained to some extent during the early DES sessions.
  3. Both our interests and the stimuli available to us are directed mostly outward, so we don’t have a large storehouse of evidence or descriptors for inner experiences. Consequently, we have to adapt descriptors of external matters to describe inner ones, often resulting in confusion.
  4. Introspection requires focused attention on conscious experience which in turn alters that experience. If we’re being asked to recall an inner experience then we must rely on memory, with its well-known shortcomings and reconstructive proclivities.
  5. Interpretation and theorizing are required for introspection. Schwitzgebel concludes that introspection may be adequate for gross categorizations of conscious experiences or states, but not for describing higher cognitive or emotive processes.

Their book has stimulated further debate, culminating in a recent special issue of the Journal of Consciousness Studies, whose contents have been listed in the March 3rd (2011) post on Schwitzgebel’s blog, The Splintered Mind. The articles therein make fascinating reading, along with Hurlburt’s and Schwitzgebel’s rejoinders and (to some extent) reformulations of their respective positions. Nevertheless, the state of play remains that we know a lot less about our inner selves than we’d like to.

Written by michaelsmithson

March 5, 2011 at 2:56 pm

Can We Make “Good” Decisions Under Ignorance?

with 2 comments

There are well-understood formal frameworks for decision making under risk, i.e., where we know all the possible outcomes of our acts and also know the probabilities of those outcomes. There are even prescriptions for decision making when we don’t quite know the probabilities but still know what the outcomes could be. Under ignorance we not only don’t know the probabilities, we may not even know all of the possible outcomes. Shorn of their probabilities and a completely known outcome space, normative frameworks such as expected utility theory stand silent. Can there be such a thing as a good decision-making method under ignorance? What criteria or guidelines make sense for decision making when we know next to nothing?

At first glance, the notion of criteria for good decisions under ignorance may seem absurd. Here is a simplified list of criteria for “good” (i.e., rational) decisions under risk:

  1. Your decision should be based on your current assets.
  2. Your decision should be based on the possible consequences of all possible outcomes.
  3. You must be able to rank all of the consequences in order of preference and assign a probability to each possible outcome.
  4. Your choice should maximize your expected utility, or roughly speaking, the likelihood of those outcomes that yield highly preferred consequences.

In non-trivial decisions, this prescription requires a vast amount of knowledge, computation, and time. In many situations at least one of these requirements isn’t met, and often none of them are.
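As a toy illustration of criteria 2–4, here is a tiny expected-utility calculation; the acts, probabilities, and utilities below are all invented for the example:

```python
# Hypothetical decision: each act maps to a list of (probability, utility) pairs,
# one pair per possible outcome (here: rain vs. no rain).
acts = {
    "carry umbrella": [(0.3, 60), (0.7, 80)],
    "leave it home":  [(0.3, 10), (0.7, 100)],
}

def expected_utility(lottery):
    """Criterion 4: weight each consequence's utility by its probability."""
    return sum(p * u for p, u in lottery)

# Pick the act whose expected utility is highest: 74 beats 73.
best = max(acts, key=lambda a: expected_utility(acts[a]))
print(best)
```

Even this trivial case shows the informational appetite of the prescription: it demands a complete outcome space, a probability for every outcome, and a utility for every consequence before it can say anything at all.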

This problem has been recognized for a long time, although it has been framed in rather different ways. In the 1950’s at least two spokespeople emerged for decision making under ignorance. The economist Herbert Simon proposed “bounded rationality” as an alternative to expected utility theory, in recognition of the fact that decision makers have limited time, information-gathering resources, and cognitive capacity. He coined the term “satisficing” to describe criteria for decisions that may fall short of optimality but are “good enough” and also humanly feasible to achieve. Simon also championed the use of “heuristics,” rules of thumb for reasoning and decision making that, again, are not optimal but work well most of the time. These ideas have been elaborated since by many others, including Gerd Gigerenzer’s “fast and frugal” heuristics and Gary Klein’s “naturalistic” decision making. These days bounded rationality has many proponents.

Around the same time that Simon was writing about bounded rationality, political economist Charles Lindblom emerged as an early proponent of various kinds of “incrementalism,” which he engagingly called “muddling through.” Whereas Simon and his descendants focused on the individual decision maker, Lindblom wrote about decision making mainly in institutional settings. One issue that the bounded rationality people tended to neglect was highlighted by Lindblom, namely the problem of not knowing what our preferences are when the issues are complex:

“Except roughly and vaguely, I know of no way to describe–or even to understand–what my relative evaluations are for, say, freedom and security, speed and accuracy in governmental decisions, or low taxes and better schools than to describe my preferences among specific policy choices that might be made between the alternatives in each of the pairs… one simultaneously chooses a policy to attain certain objectives and chooses the objectives themselves.” (Lindblom 1959, pg. 82).

Lindblom’s characterization of “muddling through” also is striking for its rejection of means-ends analysis. For him, the means and ends are entwined together in the policy options under consideration.  “Decision-making is ordinarily formalized as a means-ends relationship: means are conceived to be evaluated and chosen in the light of ends finally selected independently of and prior to the choice of means… Typically, …such a means- ends relationship is absent from the branch method, where means and ends are simultaneously chosen.” (Lindblom 1959, pg. 83).

In the absence of a means-end analysis, how can the decision or policy maker know what is a good decision or policy? Lindblom’s answer is consensus among policy makers: “Agreement on objectives failing, there is no standard of ‘correctness.’… the test is agreement on policy itself, which remains possible even when agreement on values is not.” (Lindblom 1959, pg. 83)

Lindblom’s prescription is to restrict decisional alternatives to small or incremental deviations from the status quo. He claims that “A wise policy-maker consequently expects that his policies will achieve only part of what he hopes and at the same time will produce unanticipated consequences he would have preferred to avoid. If he proceeds through a succession of incremental changes, he avoids serious lasting mistakes in several ways.” First, a sequence of small steps will have given the policy maker grounds for predicting the consequences of an additional similar step. Second, he claims that small steps are more easily corrected or reversed than large ones. Third, stakeholders are more likely to agree on small changes than on radical ones.

How, then, does Lindblom think his approach will not descend into groupthink or extreme confirmation bias? Through diversity and pluralism among the stakeholders involved in decision making:

“… agencies will want among their own personnel two types of diversification: administrators whose thinking is organized by reference to policy chains other than those familiar to most members of the organization and, even more commonly, administrators whose professional or personal values or interests create diversity of view (perhaps coming from different specialties, social classes, geographical areas) so that, even within a single agency, decision-making can be fragmented and parts of the agency can serve as watchdogs for other parts.”

Naturally, Lindblom’s prescriptions and claims were widely debated. There is much to criticize, and he didn’t present much hard evidence that his prescriptions would work. Nevertheless, he ventured beyond the bounded rationality camp in four important respects. First, he brought into focus the prospect that we may not have knowable preferences. Second, he realized that means and ends may not be separable and may be reshaped in the very process of making a decision. Third, he mooted the criteria of choosing incremental and corrigible changes over large and irreversible ones. Fourth, he observed that many decisions are embedded in institutional or social contexts that may be harnessed to enhance decision making. All four of these advances suggest implications for decision making under ignorance.

Roger Kasperson contributed a chapter on “coping with deep uncertainty” to Gabriele Bammer’s and my 2008 book. By “deep uncertainty” he means “situations in which, typically, the phenomena… are characterized by high levels of ignorance and are still only poorly understood scientifically, and where modelling and subjective judgments must substitute extensively for estimates based upon experience with actual events and outcomes, or ethical rules must be formulated to substitute for risk-based decisions.” (Kasperson 2008, pg. 338) His list of options open to decision makers confronted with deep uncertainty includes the following:

  1. Delay to gather more information and conduct more studies in the hope of reducing uncertainty across a spectrum of risk;
  2. Interrelate risk and uncertainty to target critical uncertainties for priority further analysis, and compare technology and development options to determine whether clearly preferable options exist for proceeding;
  3. Enlarge the knowledge base for decisions through lateral thinking and broader perspective;
  4. Invoke the precautionary principle;
  5. Use an adaptive management approach; and
  6. Build a resilient society.

He doesn’t recommend these unconditionally, but instead writes thoughtfully about their respective strengths and weaknesses. Kasperson also observes that “The greater the uncertainty, the greater the need for social trust… The combination of deep uncertainty and high social distrust is often a recipe for conflict and stalemate.” (Kasperson 2008, pg. 345)

At the risk of leaping too far and too fast, I’ll conclude by presenting my list of criteria and recommendations for decisions under ignorance. I’ve incorporated material from the bounded rationality perspective, some of Lindblom’s suggestions, bits from Kasperson, my own earlier writings and from other sources not mentioned in this post. You’ll see that the first two major headings echo the first two in the expected utility framework, but beneath each of them I’ve slipped in some caveats and qualifications.

  1. Your decision should be based on your current assets.
    a. If possible, know which assets can be traded and which are non-negotiable.
    b. If some options are decisively better (worse) than others considering the range of risk that may exist, then choose them (get rid of them).
    c. Consider options themselves as assets. Try to retain them or create new ones.
    d. Regard your capacity to make decisions as an asset. Make sure you don’t become paralyzed by uncertainty.
  2. Your decision should be based on the possible consequences.
    a. Be aware of the possibility that means and ends may be inseparable and that your choice may reshape both means and ends.
    b. Beware unacceptable ends-justify-means arguments.
    c. Avoid irreversible or incorrigible alternatives if possible.
    d. Seek alternatives that are “robust” regardless of outcome.
    e. Where appropriate, invoke the precautionary principle.
    f. Seek alternatives whose consequences are observable.
    g. Plan to allocate resources for monitoring consequences and (if appropriate) gathering more information.
  3. Don’t assume that getting rid of ignorance and uncertainty is always a good idea.
    a. See 1.c. and 2.c. above. Options and corrigibility require uncertainty; freedom of choice is positively badged uncertainty.
    b. Interventions that don’t alter people’s uncertainty orientations will be frustrated by people’s attempts to re-establish the level of uncertainty they are comfortable with.
    c. Ignorance and uncertainty underpin particular kinds of social capital. Eliminate ignorance and uncertainty and you also eliminate that social capital, so make sure you aren’t throwing any babies out with the bathwater.
    d. Other people are not always interested in reducing ignorance and uncertainty. They need uncertainty to have freedom to make their own decisions. They may want ignorance to avoid culpability.
  4. Where possible, build and utilize relationships based on trust instead of contracts.
    a. Contracts presume and require predictive knowledge, e.g., about who can pay whom how much and when. Trust relationships are more flexible and robust under uncertainty.
    b. Contracts lock the contractors in, incurring opportunity costs that trust relationships may avoid.

Managing and decision making under ignorance is, of course, far too large a topic for a single post, and I’ll be returning to it in the near future. Meanwhile, I’m hoping this piece at least persuades readers that the notion of criteria for good decisions under ignorance may not be absurd after all.

A Knowledge Economy but an Ignorance Society?

leave a comment »

In an intriguing 2008 paper sociologist Sheldon Ungar asked why, in the age of “knowledge” or “information,” ignorance not only persists but seems to have increased and intensified. There’s a useful sociological posting on Ungar’s paper that this post is intended to supplement. Along with an information explosion, we also have had an ignorance explosion: Most of us are confronted to a far greater degree than our forebears with the sheer extent of what we don’t know and what we (individually and collectively) shall never know.

I forecast this development (among others) in my 1985 paper where I called my (then) fellow sociologists’ attention to the riches to be mined from studying how we construct the unknown, accuse others of having too much ignorance, claim ignorance for ourselves when we try to evade culpability, and so forth. I didn’t get many takers, but there’s nothing remarkable about that. Ideas seem to have a time of their own, and that paper and my 1989 book were a bit ahead of that time.

Instead, the master-concepts of the knowledge economy (Peter Drucker’s 1969 coinage) and information society (Fritz Machlup 1962) were all the rage in the ‘80’s. Citizens in such societies were to become better educated and more intelligent than their forebears or their less fortunate counterparts in other societies. The evidence for this claim, however, is mixed.

On the one hand, the average IQ has been increasing in a number of countries for some time, so the kind of intelligence IQ measures has improved. On the other, we routinely receive news of apparent declines in various intellective skills such as numeracy and literacy. On the one hand, thanks to the net, many laypeople can and do become knowledgeable about medical matters that concern them. On the other, there is ample documentation of high levels of public ignorance regarding heart disease and strokes, many of the effects of smoking or alcohol consumption, and basic medication instructions.

Likewise, again thanks to the net, people can and do become better-informed about current, especially local, events so that social media such as Twitter are redefining the nature of “news.” On the other, as Putnam (1999) grimly observed, the typical recent university graduate knows less about public affairs than did the average high school graduate in the 1940’s, “despite the proliferation of sources of information.” Indeed, according to Mark Liberman’s 2006 posting, the question of how ignorant Americans are has become a kind of national sport. Other countries have joined in (for instance, the Irish).

In addition to concerns about lack of knowledge, alarms frequently are raised regarding the proliferation and persistence of erroneous beliefs, often with a sub-text saying that surely in the age of information we would be rid of these. Scott Lilienfeld, assistant professor of psychology and consulting editor at the Skeptical Inquirer, sees the prevalence of pseudoscientific beliefs as by-products of two phenomena: the (mis)information explosion and the scientific illiteracy of the general population. From the Vancouver Sun on November 25th, an op-ed piece by Janice Kennedy had this to say:

“Mis- and disinformation, old fears and prejudices, breathtaking knowledge gaps – all share the same stage, all bathe in the same spotlight glow as thoughtful contributions and informed opinions. The Internet is the great democratizer. Everyone has a voice, and every voice can be heard. Including those that should stifle themselves… Add to these realities the presence of the radio and television talk show – hardly a new phenomenon, but one that has exploded in popularity, thanks to our Internet-led dumbing down – and you have the perfect complement. Shockingly ignorant things are said, repeated and, magnified a millionfold by the populist momentum of cyberspace and sensationalist talk shows, accorded a credibility once unthinkable.”

Now, I want to set Ungar’s paper alongside the attributions of ignorance that people make to those who disagree with them. If you set up a Google alert for the word “ignorance” then the most common result will be just this kind of attribution: X doesn’t see things correctly (i.e., my way) because X is ignorant. Behind many such attributions is a notion widely shared by social scientists and other intellectuals of yore that there is a common stock of knowledge that all healthy, normally-functioning members of society should know. We should all not only speak the same language but also know the laws of the land, the first verse of our national anthem, that 2 + 2 = 4, that we require oxygen to breathe, where babies come from, where we can get food, and so on and so forth.

The trouble with this notion is that the so-called information age has made it increasingly difficult for everyone to agree on what this common stock of knowledge should include while still being small enough for the typical human to absorb it all before reaching adulthood.

For instance, calculators may have made mental arithmetic unnecessary for the average citizen to “get by.” But what about the capacity to think mathematically? Being able to understand a graph, compound interest, probability and risk, or the difference between a two-fold increase in area versus in volume is not obviated by calculators. Which parts of mathematics should be part of compulsory education? This kind of question does not merely concern which bits of knowledge should be retained from what people used to know. The truly vexing problem is which bits of the vastly larger and rapidly increasing storehouse of current knowledge we should require everyone to know.

Ungar suggests some criteria for deciding what is important for people to know, and of course he is not the first to do so. For him, ignorance becomes a “functional deficit” when it prevents people from being able “to deal with important social, citizenship, and personal or practical issues.” Thus, sexually active people should know about safe sex and the risks involved if it isn’t practiced; sunbathers should know what a UV index is; automobile drivers should understand the relevant basic physics of motion and the workings of their vehicles; and smokers should know about the risks they take. These criteria are akin to the “don’t die of ignorance” public health and safety campaigns that began with the one on AIDS in the 1980’s.

But even these seemingly straightforward criteria soon run into difficulties. Suppose you’re considering purchasing a hybrid car such as the Prius. Is the impact of the Prius on the environment less than that of a highly fuel-efficient conventional automobile? Yes, if you consider the impact of running these two vehicles. No, if you consider the impacts from their manufacture. So how many miles (or kilometers) would you have to run each vehicle before the net impact of the conventional car exceeds that of the Prius? It turns out that the answer to that question depends on the kind of driving you’ll be doing. To figure all this out on your own is not a trivial undertaking. Even consulting experts who may have done it for you requires a reasonably high level of technological literacy, not to mention time. And yet, this laboriously informed purchasing decision is just what Ungar means by “an important social, citizenship, and personal or practical issue.” In fact, it ticks all four of those boxes.
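A back-of-envelope version of that break-even calculation might look like this. I must stress that every number below is invented purely to show the shape of the computation, not real lifecycle data:

```python
# Purely illustrative figures (NOT real lifecycle data): extra manufacturing
# emissions for the hybrid, and per-mile emissions for each car by driving type.
extra_manufacturing_kg = 2000.0          # hybrid's extra CO2 debt at the factory
per_mile_kg = {                          # (hybrid, conventional) kg CO2 per mile
    "city":    (0.18, 0.30),
    "highway": (0.22, 0.26),
}

def break_even_miles(driving):
    """Miles needed before the hybrid's running savings repay its factory debt."""
    hybrid, conventional = per_mile_kg[driving]
    return extra_manufacturing_kg / (conventional - hybrid)

for kind in per_mile_kg:
    print(kind, round(break_even_miles(kind)))
```

With these made-up figures the hybrid “pays off” after roughly 17,000 city miles but only after about 50,000 highway miles, which is why the answer depends on the kind of driving you’ll be doing.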

Now imagine trying to be a well-informed citizen not only on the merits of the Prius, but the host of other issues awaiting your input such as climate change mitigation and adaptation, the situation in Afghanistan, responses to terrorism threats, the socio-economic consequences of globalization, and so on and on. Thus, in the end Ungar has to concede that “it is impossible to produce a full-blown, stable or consensual inventory of a stock of knowledge that well-informed members should hold.” The instability of such an inventory has been a fact of life for eons (e.g., the need to know Latin in order to read nearly anything of importance vanished long ago). Likewise, the impossibility of a “full blown” inventory is not new; that became evident well before the age of information. What is new is the extraordinary difficulty in achieving consensus on even small parts of this inventory.

It has become increasingly difficult to be a well-informed citizen on a variety of important issues, and these issues are therefore difficult to discuss in general public forums, let alone dinner-table conversation. Along with the disappearance of the informed citizen, we have witnessed the disappearance of the public intellectual. What have replaced both of these are the specialist and the celebrity. We turn to specialists to tell us what to believe; we turn to celebrities to tell us what to care about.

One of Ungar’s main points is that we haven’t ended up with a knowledge society, but only a knowledge economy. The key aspects of this economy are that knowledge and information are multiplier resources, whereas interest is bounded and attention is strictly zero-sum. Public ignorance of key issues and reliance on specialists is the norm, whereas pockets and snippets of widely shared public knowledge are the exception. Thus, we live not only in a risk society but, increasingly, in an ignorance society.

Written by michaelsmithson

December 14, 2010 at 12:45 pm
