ignorance and uncertainty

All about unknowns and uncertainties

Scientists on Trial: Risk Communication Becomes Riskier

Back in late May 2011, there were news stories of charges of manslaughter laid against six earthquake experts and a government advisor responsible for evaluating the threat of natural disasters in Italy, on grounds that they allegedly failed to give sufficient warning about the devastating L’Aquila earthquake in 2009. In addition, plaintiffs in a separate civil case are seeking damages in the order of €22.5 million (US$31.6 million). The first hearing of the criminal trial occurred on Tuesday the 20th of September, and the second session is scheduled for October 1st.

According to Judge Giuseppe Romano Gargarella, the defendants gave inexact, incomplete and contradictory information about whether smaller tremors in L’Aquila six months before the 6.3 magnitude quake on 6 April, which killed 308 people, were to be considered warning signs of the quake that eventuated. L’Aquila was largely flattened, and thousands of survivors lived in tent camps or temporary housing for months.

If convicted, the defendants face up to 15 years in jail and almost certainly will suffer career-ending consequences. While manslaughter charges for natural disasters have precedents in Italy, they have previously concerned breaches of building codes in quake-prone areas. Interestingly, no action has yet been taken against the engineers who designed the buildings that collapsed, or government officials responsible for enforcing building code compliance. However, there have been indications of lax building codes and the possibility of local corruption.

The trial has, naturally, outraged scientists and others sympathetic to the plight of the earthquake experts. An open letter by the Istituto Nazionale di Geofisica e Vulcanologia (National Institute of Geophysics and Volcanology) said the allegations were unfounded and amounted to “prosecuting scientists for failing to do something they cannot do yet — predict earthquakes”. The AAAS has presented a similar letter, which can be read here.

In pre-trial statements, the defence lawyers also have argued that it was impossible to predict earthquakes. “As we all know, quakes aren’t predictable,” said Marcello Melandri, defence lawyer for defendant Enzo Boschi, who was president of Italy’s National Institute of Geophysics and Volcanology. The implication is that because quakes cannot be predicted, the accusations that the commission’s scientists and civil protection experts should have warned that a major quake was imminent are baseless.

Unfortunately, the Istituto Nazionale di Geofisica e Vulcanologia, the AAAS, and the defence lawyers were missing the point. It seems that failure to predict quakes is not the substance of the accusations. Instead, it is poor communication of the risks, inappropriate reassurance of the local population and inadequate hazard assessment. Contrary to earlier reports, the prosecution apparently is not claiming the earthquake should have been predicted. Instead, their focus is on the nature of the risk messages and advice issued by the experts to the public.

Examples raised by the prosecution include a memo issued after a commission meeting on 31 March 2009 stating that a major quake was “improbable,” a statement to local media that six months of low-magnitude tremors was not unusual in the highly seismic region and did not mean a major quake would follow, and an apparent discounting of the notion that the public should be worried. Against this, defence lawyer Melandri has been reported saying that the panel “never said, ‘stay calm, there is no risk’”.

It is at this point that the issues become both complex (by their nature) and complicated (by people). Several commentators have pointed out that the scientists are distinguished experts, by way of asserting that they are unlikely to have erred in their judgement of the risks. But they are being accused of “incomplete, imprecise, and contradictory information” communication to the public. As one of the civil parties to the lawsuit put it, “Either they didn’t know certain things, which is a problem, or they didn’t know how to communicate what they did know, which is also a problem.”

So, the experts’ scientific expertise is not on trial. Instead, it is their expertise in risk communication. As Stephen S. Hall’s excellent essay in Nature points out, regardless of the outcome this trial is likely to make many scientists more reluctant to engage with the public or the media about risk assessments of all kinds. The AAAS letter makes this point too. And regardless of which country you live in, it is unwise to think “Well, that’s Italy for you. It can’t happen here.” It most certainly can and probably will.

Matters are further complicated by the abnormal nature of the commission meeting on the 31st of March at a local government office in L’Aquila. Boschi claims that these proceedings normally are closed whereas this meeting was open to government officials, and he and the other scientists at the meeting did not realize that the officials’ agenda was to calm the public. The commission did not issue its usual formal statement, and the minutes of the meeting were not completed, until after the earthquake had occurred. Instead, two members of the commission, Franco Barberi and Bernardo De Bernardinis, along with the mayor and an official from Abruzzo’s civil-protection department, held a now (in)famous press conference after the meeting where they issued reassuring messages.

De Bernardinis, an expert on floods but not earthquakes, incorrectly stated that the numerous earthquakes of the swarm would lessen the risk of a larger earthquake by releasing stress. He also agreed with a journalist’s suggestion that residents enjoy a glass of wine instead of worrying about an impending quake.

The prosecution also is arguing that the commission should have reminded residents in L’Aquila of the fragility of many older buildings, advised them to make preparations for a quake, and reminded them of what to do in the event of a quake. This amounts to an accusation of a failure to perform a duty of care, a duty that many scientists providing risk assessments may dispute that they bear.

After all, telling the public what they should or should not do is a civil or governmental matter, not a scientific one. Thomas Jordan’s essay in New Scientist brings in this verdict: “I can see no merit in prosecuting public servants who were trying in good faith to protect the public under chaotic circumstances. With hindsight their failure to highlight the hazard may be regrettable, but the inactions of a stressed risk-advisory system can hardly be construed as criminal acts on the part of individual scientists.” As Jordan points out, there is a need to separate the role of science advisors from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms. This would seem to be a key issue that urgently needs to be worked through, given the need for scientific input into decisions about extreme hazards and events, both natural and human-caused.

Scientists generally are not trained in communication or in dealing with the media, and communication about risks is an especially tricky undertaking. I would venture to say that the prosecution, defence, judge, and journalists reporting on the trial will not be experts in risk communication either. The problems in risk communication are well known to psychologists and social scientists specializing in its study, but not to non-specialists. Moreover, these specialists will tell you that solutions to those problems are hard to come by.

For example, Otway and Wynne (1989) observed in a classic paper that an “effective” risk message has to simultaneously reassure by saying the risk is tolerable and panic will not help, and warn by stating what actions need to be taken should an emergency arise. They coined the term “reassurance arousal paradox” to describe this tradeoff (which of course is not a paradox, but a tradeoff). The appropriate balance is difficult to achieve, and is made even more so by the fact that not everyone responds in the same way to the same risk message.

It is also well known that laypeople do not think of risks in the same way as risk experts (for instance, laypeople tend to see “hazard” and “risk” as synonyms), nor do they rate risk severity in line with the product of probability and magnitude of consequence, nor do they understand probability—especially low probabilities. Given all of this, it will be interesting to see how the prosecution attempts to establish that the commission’s risk communications contained “incomplete, imprecise, and contradictory information.”

Scientists who try to communicate risks are aware of some of these issues, but usually (and understandably) uninformed about the psychology of risk perception (see, for instance, my posts here and here on communicating uncertainty about climate science). I’ll close with just one example. A recent International Commission on Earthquake Forecasting (ICEF) report argues that frequently updated hazard probabilities are the best way to communicate risk information to the public. Jordan, chair of the ICEF, recommends that “Seismic weather reports, if you will, should be put out on a daily basis.” Laudable as this prescription is, there are at least three problems with it.

Weather reports with probabilities of rain typically present probabilities that are neither close to 0 nor to 1. Moreover, they usually are anchored on tenths (e.g., .2 or .6, but not precise numbers like .23162 or .62947). Most people have reasonable intuitions about mid-range probabilities such as .2 or .6. Earthquake forecasting, by contrast, deals in very low probabilities, as was the case in the lead-up to the L’Aquila event. Italian seismologists had estimated that the probability of a large earthquake in the next three days had increased from 1 in 200,000, before the earthquake swarm began, to 1 in 1,000 following the two large tremors the day before the quake.

The first problem arises from the small magnitude of these probabilities. Because people are limited in their ability to comprehend and evaluate extreme probabilities, highly unlikely events usually are either ignored or overweighted. The tendency to ignore low-probability events has been cited to account for the well-established phenomenon that homeowners tend to be under-insured against low probability hazards (e.g., earthquake, flood and hurricane damage) in areas prone to those hazards. On the other hand, the tendency to over-weight low-probability events has been used to explain the same people’s propensity to purchase lottery tickets. The point is that low-probability events either excite people out of proportion to their likelihood or fail to excite them altogether.

The second problem is in understanding the increase in risk from 1 in 200,000 to 1 in 1,000. Most people are readily able to comprehend the differences between mid-range probabilities, such as an increase in the chance of rain from .2 to .6. However, they may not appreciate the magnitude of the difference between the two low probabilities in our example. For instance, an experimental study with jurors in mock trials found that although DNA evidence is typically expressed in terms of probability (specifically, the probability that the DNA sample could have come from a randomly selected person in the population), jurors were equally likely to convict on the basis of a probability of 1 in 1,000 as on a probability of 1 in 1 billion. At the very least, the public would need some training in, and accustoming to, minuscule probabilities.
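
To make the arithmetic concrete, here is a minimal sketch (using only the figures quoted above) contrasting the relative and absolute size of that increase: a 200-fold jump in risk that nevertheless leaves the absolute probability at one tenth of one percent.

```python
# Relative vs. absolute change in the quoted 3-day earthquake probabilities.
p_before = 1 / 200_000   # before the earthquake swarm began
p_after = 1 / 1_000      # after the two large tremors the day before the quake

print(f"Relative increase: {p_after / p_before:.0f}-fold")           # 200-fold
print(f"Absolute change: {p_before:.6f} -> {p_after:.6f}")           # 0.000005 -> 0.001
print(f"Probability of no major quake, even after the tremors: {1 - p_after:.1%}")  # 99.9%
```

Framed one way, the risk has exploded; framed the other way, “no major quake” remains overwhelmingly the most likely outcome. That is exactly the communicator’s dilemma.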

All this leads us to the third problem. Otway and Wynne’s “reassurance arousal paradox” is exacerbated by risk communications about extremely low-probability hazards, no matter how carefully they are crafted. Recipients of such messages will be highly suggestible, especially when the stakes are high. So, what should the threshold probability be for determining when a “don’t ignore this” message is issued? It can’t be the imbecilic Dick Cheney zero-risk threshold for terrorism threats, but what should it be instead?

Note that this is a matter for policy-makers to decide, not scientists, even though scientific input regarding potential consequences of false alarms and false reassurances should be taken into account. Criminal trials and civil lawsuits punishing the bearers of false reassurances will drive risk communicators to lower their own alarm thresholds, thereby ensuring that they will sound false alarms increasingly often (see my post about making the “wrong” decision most of the time for the “right” reasons).

Risk communication regarding low-probability, high-stakes hazards is one of the most difficult kinds of communication to perform effectively, and most of its problems remain unsolved. The L’Aquila trial probably will have an inhibitory impact on scientists’ willingness to front the media or the public. But it may also stimulate scientists and decision-makers to work together for the resolution of these problems.

Can Greater Noise Yield Greater Accuracy?

I started this post in Hong Kong airport, having just finished one conference and heading to Innsbruck for another. The Hong Kong meeting was on psychometrics and the Innsbruck conference was on imprecise probabilities (believe it or not, these topics actually do overlap). Anyhow, Annemarie Zand Scholten gave a neat paper at the math psych meeting in which she pointed out that, contrary to a strong intuition that most of us have, introducing and accounting for measurement error can actually sharpen up measurement. Briefly, the key idea is that an earlier “error-free” measurement model of, say, human comparisons between pairs of objects on some dimensional characteristic (e.g., length) could only enable researchers to recover the order of object length but not any quantitative information about how much longer people were perceiving one object to be than another.

I’ll paraphrase (and amend slightly) one of Annemarie’s illustrations of her thesis, to build intuition about how her argument works. In our perception lab, we present subjects with pairs of lines and ask them to tell us which line they think is the longer. One subject, Hawkeye Harriet, perfectly picks the longer of the two lines every time—regardless of how much longer one is than the other. Myopic Myra, on the other hand, has imperfect visual discrimination and thus sometimes gets it wrong. But she’s less likely to choose the wrong line if the two lines’ lengths considerably differ from one another. In short, Myra’s success-rate is positively correlated with the difference between the two line-lengths whereas Harriet’s uniformly 100% success rate clearly is not.

Is there a way that Myra’s success- and error-rates could tell us exactly how long each object is, relative to the others? Yes. Let p_ij be the probability that Myra picks the ith object as longer than the jth object, and p_ji = 1 – p_ij be the probability that Myra picks the jth object as longer than the ith object. If the ith object has length L_i and the jth object has length L_j, then if p_ij/p_ji = L_i/L_j, Myra’s choice-rates perfectly mimic the ratio of the ith and jth objects’ lengths. This neat relationship owes its nature to the fact that a characteristic such as length has an absolute zero, so we can meaningfully compare lengths by taking ratios.
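
Here is a minimal simulation sketch (my own illustration, not Annemarie’s actual model) of how Myra’s error rates carry ratio information. It assumes one simple choice rule consistent with the relation above, p_ij = L_i/(L_i + L_j), so that p_ij/p_ji = L_i/L_j, and then recovers length ratios from her observed choice rates.

```python
import random

random.seed(1)
lengths = {"A": 2.0, "B": 3.0, "C": 6.0}   # true lengths (arbitrary units)
n_trials = 20_000                          # paired comparisons per pair

def myra_picks_i(L_i, L_j):
    # Assumed choice rule: P(pick i over j) = L_i / (L_i + L_j),
    # which implies p_ij / p_ji = L_i / L_j.
    return random.random() < L_i / (L_i + L_j)

for i, j in [("A", "B"), ("A", "C"), ("B", "C")]:
    wins = sum(myra_picks_i(lengths[i], lengths[j]) for _ in range(n_trials))
    p_ij = wins / n_trials
    print(f"{i} vs {j}: estimated ratio {p_ij / (1 - p_ij):.2f}, "
          f"true ratio {lengths[i] / lengths[j]:.2f}")

# Hawkeye Harriet's choice rates would be 0 or 1 for every pair,
# which recovers only the ordering of the lengths, not their ratios.
```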

How about temperature? This is slightly trickier, because if we’re using a popular scale such as Celsius or Fahrenheit then the zero-point of the scale isn’t absolute in the sense that length has an absolute zero (i.e., you can have Celsius and Fahrenheit readings below zero, and each scale’s zero-point differs from the other). Thus, 60 degrees Fahrenheit is not twice as warm as 30 degrees Fahrenheit. However, the differences between temperatures can be compared via ratios. For instance, 40 degrees F is twice as far from 20 degrees F as 10 degrees F is.

We just need a common “reference” object against which to compare each of the others. Suppose we’re asking Myra to choose which of a pair of objects is the warmer. Assuming that Myra’s choices are transitive, there will be an object she chooses less often than any of the others in all of the paired comparisons. Let’s refer to that object as the Jth object. Now suppose the ith object has temperature T_i, the jth object has temperature T_j, and the Jth object has temperature T_J, which is lower than both T_i and T_j. Then if Myra’s choice-rate ratio is
p_iJ/p_jJ = (T_i – T_J)/(T_j – T_J),
she functions as a perfect measuring instrument for temperature comparisons between the ith and jth objects. Again, Hawkeye Harriet’s choice-rates will be p_iJ = 1 and p_jJ = 1 no matter what T_i and T_j are, so her ratio always is 1.
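
And a companion sketch for the interval-scale case, again just a toy illustration: any set of choice rates that satisfies the ratio condition above lets one known temperature difference calibrate all the others.

```python
# Toy choice rates against the common reference object J that satisfy
# p_iJ / p_jJ = (T_i - T_J) / (T_j - T_J).
temps = {"J": 10.0, "X": 20.0, "Y": 30.0, "Z": 45.0}   # degrees, interval scale
C = 100.0   # arbitrary scaling constant keeping all rates below 1

p = {k: (temps[k] - temps["J"]) / C for k in temps if k != "J"}

known_diff_X = 10.0   # suppose we only know that X is 10 degrees warmer than J
for k in p:
    estimated_diff = (p[k] / p["X"]) * known_diff_X
    print(f"{k}: estimated {estimated_diff:.1f}, true {temps[k] - temps['J']:.1f}")
```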

If we didn’t know what the ratios of those lengths or temperature differences were, Myra would be a much better measuring instrument than Harriet even though Harriet never makes mistakes. Are there such situations? Yes, especially when it comes to measuring mental or psychological characteristics for which we have no direct access, such as subjective sensation, mood, or mental task difficulty.

Which of 10 noxious stimuli is the more aversive? Which of 12 musical rhythms makes you feel more joyous? Which of 20 types of puzzle is the more difficult? In paired comparisons between each possible pair of stimuli, rhythms or puzzles, Hawkeye Harriet will pick what for her is the correct pair every time, so all we’ll get from her is the rank-order of stimuli, rhythms and puzzles. Myopic Myra will less reliably and less accurately choose what for her is the correct pair, but her choice-rates will be correlated with how dissimilar each pair is. We’ll recover much more precise information about the underlying structure of the stimulus set from error-prone Myra.

Annemarie’s point about measurement is somewhat related to another fascinating phenomenon known as stochastic resonance. Briefly paraphrasing the Wikipedia entry for stochastic resonance (SR), SR occurs when a measurement or signal-detecting system’s signal-to-noise ratio increases when a moderate amount of noise is added to the incoming signal or to the system itself. SR usually is observed either in bistable or sub-threshold systems. Too little noise results in the system being insufficiently sensitive to the signal; too much noise overwhelms the signal. Evidence for SR has been found in several species, including humans. For example, a 1996 paper in Nature reported a demonstration that subjects asked to detect a sub-threshold impulse via mechanical stimulation of a fingertip maximized the percentage of correct detections when the signal was mixed with a moderate level of noise. One way of thinking about the optimized version of Myopic Myra as a measurement instrument is to model her as a “noisy discriminator,” with her error-rate induced by an optimal random noise-generator mixed with an otherwise error-free discriminating mechanism.
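
If you’d like to see the effect rather than take Wikipedia’s word for it, here is a small simulation sketch of a sub-threshold, hard-threshold detector (my own toy example, not the design of the fingertip study): the detector’s ability to distinguish signal trials from noise-only trials is near zero when the added noise is very small or very large, and peaks at a moderate noise level.

```python
import random

random.seed(0)
SIGNAL = 0.5       # sub-threshold signal strength
THRESHOLD = 1.0    # the detector fires only above this level
N_TRIALS = 50_000

def detection_score(noise_sd):
    """Hit rate minus false-alarm rate for the hard-threshold detector."""
    hits = sum(SIGNAL + random.gauss(0, noise_sd) > THRESHOLD for _ in range(N_TRIALS))
    false_alarms = sum(random.gauss(0, noise_sd) > THRESHOLD for _ in range(N_TRIALS))
    return (hits - false_alarms) / N_TRIALS

for sd in [0.01, 0.1, 0.3, 0.7, 1.5, 5.0]:
    print(f"noise sd = {sd:>4}: score = {detection_score(sd):.3f}")
# Scores hover near zero for very small and very large noise levels
# and peak in between: stochastic resonance.
```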

Written by michaelsmithson

August 14, 2011 at 10:47 am

Expertise on Expertise

Hi, I’m back again after a few weeks’ travel (presenting papers at conferences). I’ve already posted material on this blog about the “ignorance explosion.” Numerous writings have taken up the theme that there is far too much relevant information for any of us to learn and process and the problem is worsening, despite the benefits of the internet and effective search-engines. We all have had to become more hyper-specialized and fragmented in our knowledge-bases than our forebears, and many of us find it very difficult as a result to agree with one another about the “essential” knowledge that every child should receive in their education and that every citizen should possess.

Well, here is a modest proposal for one such essential: We should all become expert about experts and expertise. That is, we should develop meta-expertise.

We can’t know everything, but knowing an expert when we see one, being able to tell the difference between an expert and an impostor, and knowing what it takes to become an expert can guide our search for assistance in all things about which we’re ignorant. A meta-expert should:

  1. Know the broad parameters of and requirements for attaining expertise;
  2. Be able to distinguish a genuine expert from a pretender or a charlatan;
  3. Know whether, and when, expertise is attainable in a given domain;
  4. Possess effective criteria for evaluating expertise, within reasonable limits; and
  5. Be aware of the limitations of specialized expertise.

Let’s start with that strongly democratic source of expertise: Wikipedia’s take on experts:

“In many domains there are objective measures of performance capable of distinguishing experts from novices: expert chess players will almost always win games against recreational chess players; expert medical specialists are more likely to diagnose a disease correctly; etc.”

That said, the Wikipedia entry also raises a potentially vexing point, namely that “expertise” may come down to merely a matter of consensus, often dictated by the self-same “experts.” Examples readily spring to mind in areas where objective measures are hard to come by, such as the arts. But consider also domains where objective measures may be obtainable but not assessable by laypeople. Higher mathematics is a good example. Only a tiny group of people on the planet were capable of assessing whether Andrew Wiles really had proven Fermat’s Last Theorem. The rest of us have to take their word for it.

A crude but useful dichotomy splits views about expertise into two camps: Constructivist and performative. The constructivist view emphasizes the influence of communities of practice in determining what expertise is and who is deemed to have it. The performative view portrays expertise as a matter of learning through deliberative practice. Both views have their points, and many domains of expertise have elements of both. Even domains where objective indicators of expertise are available can have constructivist underpinnings. A proficient modern-day undergraduate physics student would fail late 19th-century undergraduate physics exams; and experienced medical practitioners emigrating from one country to another may find their qualifications and experience unrecognized by their adopted country.

What are the requirements for attaining deep expertise? Two popular criteria are talent and deliberative practice. Regarding deliberative practice, a much-discussed rule of thumb is the “10,000 hour rule.” This rule was popularized in Malcolm Gladwell’s book Outliers, and some authors misattribute it to him. It actually dates back to studies of chess masters in the 1970s (see Ericsson, K. A., R. Th. Krampe, and C. Tesch-Römer, 1993), and its generalizability to other domains still is debatable. Nevertheless, the 10K rule has some merit, and unfortunately it has been routinely ignored in many psychological studies comparing “experts” with novices, where the “experts” often are undergraduates who have been given a few hours’ practice on a relatively trivial task.

The 10K rule can be a useful guide but there’s an important caveat. It may be a necessary but it is by no means a sufficient condition for guaranteeing deep expertise. At least three other conditions have to be met: Deliberative and effective practice in a domain where deep expertise is attainable. Despite this quite simple line of reasoning, plenty of published authors have committed the error of viewing the 10K rule as both necessary and sufficient. Gladwell didn’t make this mistake, but Jane McGonigal’s recent book on video and computer games devotes considerable space to the notion that because gamers are spending upwards of 10K hours playing games they must be attaining deep “expertise” of some kind. Perhaps some may be, provided they are playing games of sufficient depth. But many will not. (BTW, McGonigal’s book is worth a read despite her over-the-top optimism about how games can save the world—and take a look at her game-design collaborator Bogost’s somewhat dissenting review of her book).

Back to the caveats. First, no deliberation makes practice useless. Having spent approximately 8 hours every day sleeping for the past 61 years (178,120 hours) hasn’t made me an expert on sleep. Likewise, deliberative but ineffective practice methods deny us top-level expertise. Early studies of Morse Code experts demonstrated that mere deliberative practice did not guarantee best performance results; specific training regimes were required instead. Autodidacts with insight and aspirations to attain the highest performative levels in their domains eventually realise how important getting the “right” coaching or teaching is.

Finally, there is the problem of determining whether effective, deliberative practice yields deep expertise in any domain. The domain may simply not be “deep” enough. In games of strategy, tic-tac-toe is a clear example of insufficient depth, checkers is a less obvious but still clear example, whereas chess and go clearly have sufficient depth.

Tic-tac-toe aside, are there domains that possess depth where deep expertise nevertheless is unattainable? There are, at least, some domains that are deeply complex where “experts” perform no better than less trained individuals or simple algorithms. Psychotherapy is one such domain. There is a plethora of studies demonstrating that clinical psychologists’ predictions of patient outcomes are worse than simple linear regression models (cf. Dawes’ searing indictment in his 1994 book) and that sometimes experts’ decisions are no more accurate than beginners’ decisions and simple decision aids. Similar results have been reported for financial planners and political experts. In Philip Tetlock’s 2005 book on so-called “expert” predictions, he finds that many so-called experts perform no better than chance in predicting political events, financial trends, and so on.

What can explain the absence of deep expertise in these instances? Tetlock attributes experts’ poor performance to two factors, among others: Hyperspecialization and overconfidence. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.” Tetlock also claims that cognitive style counts: “Foxes” tend to outperform “hedgehogs.” These terms are taken from Isaiah Berlin’s popular essay: Foxes know a little about lots of things, whereas hedgehogs know one big thing.

Another contributing factor may be a lack of meta-cognitive insight on the part of the experts. A hallmark of expertise is ignoring (not ignorance). This proposition may sound less counter-intuitive if it’s rephrased to say that experts know what to ignore. In an earlier post I mentioned Mary Omodei and her colleagues’ chapter in a 2005 book on professionals’ decision making in connection with this claim. Their chapter opens with the observation of a widespread assumption that domain experts also know how to optimally allocate their cognitive resources when making judgments or decisions in their domain. Their research with expert fire-fighting commanders cast doubt on this assumption.

The key manipulations in the Omodei simulated fire-fighting experiments determined the extent to which commanders had unrestricted access to “complete” information about the fires, weather conditions, and other environmental matters. They found that commanders performed more poorly when information access was unrestricted than when they had to request information from subordinates. They also found that commanders performed more poorly when they believed all available information was reliable than when they believed that some of it was unreliable. The disquieting implication of these findings is that domain expertise doesn’t include meta-cognitive expertise.

Cognitive biases and styles aside, another contributing set of factors may be the characteristics of the complex, deep domains themselves that render deep expertise very difficult to attain. Here is a list of tests you can apply to such domains by way of evaluating their potential for the development of genuine expertise:

  1. Stationarity? Is the domain stable enough for generalizable methods to be derived? In chaotic systems long-range prediction is impossible because of initial-condition sensitivity. In human history, politics and culture, the underlying processes may not be stationary at all.
  2. Rarity? When it comes to prediction, rare phenomena simply are difficult to predict (see my post on making the wrong decisions most of the time for the right reasons).
  3. Observability? Can the outcomes of predictions or decisions be directly or immediately observed? For example in psychology, direct observation of mental states is nearly impossible, and in climatology the consequences of human interventions will take a very long time to unfold.
  4. Objective or even impartial criteria? For instance, what is “good,” “beautiful,” or even “acceptable” in domains such as music, dance or the visual arts? Are such domains irreducibly subjective and culture-bound?
  5. Testability? Are there clear criteria for when an expert has succeeded or failed? Or is there too much “wiggle-room” to be able to tell?

Finally, here are a few tests that can be used to evaluate the “experts” in your life:

  1. Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?
  2. Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?
  3. Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high then your expert may suffer from overconfidence (see the sketch after this list).
  4. Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?
  5. Hedgehog-Fox test: Tetlock found that Foxes were better-calibrated and more able to entertain self-disconfirming counterfactuals than hedgehogs, but allowed that hedgehogs can occasionally be “stunningly right” in a way that foxes cannot. Is your expert a fox or a hedgehog?
  6. Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?
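
For test 3, a minimal scoring sketch (hypothetical numbers, and a deliberately crude measure) might look like this: compare the expert’s claimed hit rate with the hit rate they actually achieve once the yes-no predictions can be checked.

```python
def overconfidence_gap(claimed_accuracy, predictions, outcomes):
    """Positive gap = the expert claimed a higher hit rate on their
    yes/no predictions than they actually achieved."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return claimed_accuracy - hits / len(predictions)

# Hypothetical expert: claims 90% accuracy, gets 6 of 10 predictions right.
predictions = [True, True, False, True, False, True, True, False, True, True]
outcomes    = [True, False, False, True, True, False, True, False, True, False]
print(overconfidence_gap(0.90, predictions, outcomes))   # 0.30: overconfident
```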

Written by michaelsmithson

August 11, 2011 at 11:26 am

Ignoring Stuff and People

“Don’t pay any attention to the critics—Don’t even ignore them.” ~ Samuel Goldwyn

When I first started exploring ignorance and related topics, it occurred to me that not-knowing has a passive and an active voice. To be ignorant of something is the passive voice—Ignorance is a state. Ignoring something is an action. I want to explore various aspects of ignoring in this and perhaps some subsequent posts.

To begin, ignoring attracts a moral charge that ignorance usually doesn’t. For instance, innocence can be construed as a special case of ignorance. Innocents don’t ignore corrupting information; they’re just unaware of its existence. Lots of communications to people who are ignoring something or someone are chastisements. Ignoring is akin to commission, whereas being ignorant is more like omission. Ignoring has an element of will or choice about it that being ignorant does not. So people are more likely to ascribe a moral status to an act of ignoring than to a state of ignorance.

For instance, reader response to a recent “Courier Mail” story on April 11 whose main point was “Three men have been rescued after they drove around Road Closed signs and into floodwaters in central Queensland” was uncharitable, to say the least. Comments and letters to the editor expressed desires for the men to be named, shamed, fined and otherwise punished for wasting taxpayers’ money and needlessly imperiling the rescuers.

Criminal negligence cases often make it clear that while the law may regard ignorance as scant excuse, ignoring is even worse. Ignoring imputes culpability straightaway. Halah Touryalai’s blog on Forbes in March: “Irving Picard, the Trustee seeking to reclaim billions for Madoff’s victims, claims Merrill Lynch International was creating and selling products tied to Madoff feeder funds even though it was aware of possible fraud within Bernard L. Madoff Investment Securities.”

Despite the clear distinction between ignorance and ignoring, people can and do confuse the two. Andrew Rotherham’s May 12 blog at Time accuses American educators and policy-makers of ignoring the burgeoning crisis regarding educational outcomes for Hispanic schoolchildren. But it isn’t clear whether the educators are aware of this problem (and ignoring it) or not (and therefore ignorant about it). There are so many festering and looming crises to keep track of these days that various sectors of the public regularly get caned for “ignoring” crises when in all likelihood they are just ignorant of them.

In a more straightforward case, the Sydney Herald Sun’s March 1 headline, “One-in-four girls ignoring cervical cancer vaccine,” simply has got it wrong. The actual message in the article is not that the schoolgirls in question are ignoring the vaccine, but that they’re ignorant of it and also of the cancer itself.

Communicators of all stripes take note: The distinction between ignoring and ignorance is important and worth preserving. Let us not tolerate, on our watch, a linguistically criminal slide into the elision of that distinction through misusage or mental laziness.

Because it is an act and therefore can be intentional, ignoring has uses as a social or psychological tactic that ignorance never can have. There is a plethora of self-help remedies out there which, when you scratch the surface, boil down to tactical or even strategic ignoring. I’ll mention just two examples of such injunctions: “Don’t sweat the small stuff” and “live in the present.”

The first admonishes us to discount the “small stuff” to some extent, presumably so we can pay attention to the “big stuff” (whatever that may be). This simple notion has spawned several self-help bestsellers. The second urges us to disregard the past and future and focus attention on the here-and-now instead. This advice has been reinvented many times; in my short lifetime I’ve seen it crop up all the way from the erstwhile hippie sensibilities embodied in slogans such as “be here now” to the present-day therapeutic cottage industry of “mindfulness.”

Even prescriptions for rational decision-making contain injunctions to ignore certain things. Avoiding the “sunk cost fallacy” is one example. Money, time, or other irrecoverable resources that already have been spent in pursuing a goal should not be considered along with future potential costs in deciding whether to persist in pursuing the goal. There’s a nice treatment of this on the less wrong site. The Mind Your Decisions blog also presents a few typical examples of the sunk cost fallacy in everyday life. The main point here is that a rational decisional framework prescribes ignoring sunk costs.
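
As a toy illustration of that prescription (made-up numbers, and a deliberately bare-bones comparison): the rational choice between continuing and abandoning a project depends only on future costs and benefits, so the amount already sunk cannot change which option comes out ahead.

```python
# Sunk-cost illustration with made-up numbers.
sunk_cost = 80_000                    # already spent and irrecoverable either way
future_cost_to_finish = 50_000
expected_value_if_finished = 40_000

net_if_continue = expected_value_if_finished - future_cost_to_finish   # -10,000
net_if_abandon = 0

print("continue" if net_if_continue > net_if_abandon else "abandon")   # "abandon"
# The decision is identical whether sunk_cost is 0 or 80,000:
# it appears on neither side of the comparison.
```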

Once we shift attention from ignoring things to ignoring people, the landscape becomes even more interesting. Ignoring people, it would seem, occupies important places in commonsense psychology. The earliest parental advice I received regarding what to do about bullies was to ignore them. My parents meant well, and it turned out that this worked in a few instances. But some bullies required standing up to.

For those of us who aren’t sure how to go about it, there are even instructions and guides on how to ignore people.

Ignoring people also gets some airplay as part of a strategy or at least a tactic. For instance, how should parents deal with disrespectful behavior from their children? Well, one parenting site says not to ignore such behavior. Another admonishes you to ignore it. Commonsense psychology can be self-contradicting. It’s good old commonsense psychology that tells us “opposites attract” and yet “birds of a feather flock together,” “look before you leap” but “(s)he who hesitates is lost,” “many hands make light work” but “too many cooks spoil the broth,” and so on.

Given that ignoring has a moral valence, what kinds of justifications are there for ignoring people? There are earnest discussion threads on such moral quandaries as ignoring people who are nice to you. In this thread, by the way, many of the contributors conclude that it’s OK to do so, especially if the nice person has traits that they can’t abide.

Some social norms or relationships entail ignoring behaviors or avoiding communication with certain people. One of the clearest examples of this is the kin-avoidance rules in some Australian Indigenous cultures. An instance is the ban on speaking with or even being in close proximity to one’s mother-in-law. The Central Land Council site describes the rule thus: “This relationship requires a social distance, such that they may not be able to be in the same room or car.”

Some religious communities such as the Amish have institutionalized shunning as a means of social control. As Wenger (1953) describes it, “The customary practice includes refusal to eat at the same table, even within the family, the refusal of ordinary social intercourse, the refusal to work together in farming operations, etc.” So, shunning entails ignoring. Wenger’s article also details some of the early religious debates over when and to what extent shunning should be imposed.

Ostracism has a powerful impact because it feels like rejection. Social psychologist Kipling Williams has studied the effects of ostracism for a number of years now, and without any apparent trace of irony remarks that it was “ignored by social scientists for 100 years.” Among his ingenious experiments is one demonstrating that people feel the pain of rejection when they’re ignored by a cartoon character on a computer screen. Williams goes as far as to characterize ostracism as an “invisible form of bullying.”

So, for an interesting contrast between the various moral and practical justifications you can find for ignoring others, try a search on the phrase “ignoring me.” There, you’ll find a world of agony. This is another example to add to my earlier post about lies and secrecy, where we seem to forget about the Golden Rule. We lie to others but hate being lied to. We also are willing to ignore others but hate being ignored in turn. Well, perhaps unless you’re being ignored by your mother-in-law.

Written by michaelsmithson

May 19, 2011 at 3:32 pm

Delusions: What and Why

Any blog whose theme is ignorance and uncertainty should get around to discussing delusions sooner or later. I am to give a lecture on the topic to third-year neuropsych students this week, so a post about it naturally follows. Delusions are said to be a concomitant, and indeed a product, of other cognitive or psychological pathologies, and traditionally research on delusions was conducted in clinical psychology and psychiatry. Recently, though, some others have got in on the act: Neuroscientists and philosophers.

The connection with neuroscience probably is obvious. Some kinds of delusion, as we’ll see, beg for a neurological explanation. But why have philosophers taken an interest? To get ourselves in the appropriately philosophical mood let’s begin by asking, what is a delusion?

Here’s the Diagnostic and Statistical Manual definition (2000):

“A false belief based on incorrect inference about external reality that is firmly sustained despite what almost everyone else believes and despite what constitutes incontrovertible and obvious proof or evidence to the contrary.”

But how does that differ from:

  1. A mere error in reasoning?
  2. Confirmation bias?
  3. Self-enhancement bias?

There’s a plethora of empirical research verifying that most of us, most of the time, are poor logicians and even worse statisticians. Likewise, there’s a substantial body of research documenting our tendency to pay more attention to and seek out information that confirms what we already believe, and to ignore or avoid contrary information. And then there’s the Lake Wobegon effect—The one where a large majority of us think we’re a better driver than average, less racially biased than average, more intelligent than average, and so on. But somehow none of these cognitive peccadilloes seem to be “delusions” on the same level as believing that you’re Napoleon or that Barack Obama is secretly in love with you.

Delusions are more than violations of reasoning (in fact, they may involve no pathology in reasoning at all). Nor are they merely cases of biased perception or wishful thinking. There seems to be more to a psychotic delusion than any of these characteristics; otherwise all of us are deluded most of the time and the concept loses its clinical cutting-edge.

One approach to defining them is to say that they entail a failure to comply with “procedural norms” for belief formation, particularly those involving the weighing and assessment of evidence. Procedural norms aren’t the same as epistemic norms (for instance, most of us are not Humean skeptics, nor do we update our beliefs using Bayes’ Theorem or think in terms of subjective expected utility calculations—But that doesn’t mean we’re deluded). So the appeal to procedural norms excuses “normal” reasoning errors, confirmation and self-enhancement biases. Instead, these are more like widely held social norms. The DSM definition has a decidedly social constructionist aspect to it. A belief is delusional if everyone else disbelieves it and everyone else believes the evidence against it is incontrovertible.

So, definitional difficulties remain (especially regarding religious beliefs or superstitions). In fact, there’s a website here making an attempt to “crowd-source” definitions of delusions. The nub of the problem is that it is hard to define a concept such as delusion without sliding from descriptions of what “normal” people believe and how they form beliefs into prescriptions for what people should believe or how they should form beliefs. Once we start down the prescriptive track, we encounter the awkward fact that we don’t have an uncontestable account of what people ought to believe or how they should arrive at their beliefs.

One element common to many definitions of delusion is the lack of insight on the part of the deluded. They’re meta-ignorant: They don’t know that they’re mistaken. But this notion poses some difficult problems for the potential victim of a delusion. In what senses can a person rationally believe they are (or have been) deluded? Straightaway we can knock out the following: “My current belief in X is false.” If I know believing X is wrong, then clearly I don’t believe X. Similarly, I can’t validly claim that all my current beliefs are false, or that the way I form beliefs always produces false beliefs.

Here are some defensible examples of self-insight regarding one’s own delusions:

  1. I believe I have held false beliefs in the past.
  2. I believe I may hold false beliefs in the future.
  3. I believe that some of my current beliefs may be false (but I don’t know which ones).
  4. I believe that the way I form any belief is unreliable (but I don’t know when it fails).

As you can see, self-insight regarding delusions is like self-insight into your own meta-ignorance (the stuff you don’t know you don’t know).  You can spot it in your past and hypothesize it for your future, but you won’t be able to self-diagnose it in the here-and-now.

On the other hand, meta-ignorance and delusional thinking are easy to spot in others. For observers, it may seem obvious that someone is deluded generally in the sense that the way they form beliefs is unreliable. Usually generalized delusional thinking is a component in some type of psychosis or severe brain trauma.

But what’s really difficult to explain is monothematic delusions. These are what they sound like, namely specific delusional beliefs that have a single theme. The explanatory problem arises because the monothematically deluded person may otherwise seem cognitively competent. They can function in the everyday world, they can reason, their memories are accurate, and they form beliefs we can agree with except on one topic.

Could some monothematic delusions have a different basis from others?

Some theorists have distinguished Telic (goal-directed) from Thetic (truth-directed) delusions. Telic delusions (functional in the sense that they satisfy a goal) might be explained by a motivational basis. A combination of motivation and affective consequences (e.g., believing Q is distressing, therefore better to believe not-Q) could be a basis for delusional belief. An example is the de Clerambault syndrome, the belief that someone of high social status is secretly in love with oneself.

Thetic delusions are somewhat more puzzling, but also quite interesting. Maher (1974, etc.) said long ago that delusions arise from normal responses to anomalous experiences. Take Capgras syndrome, the belief that one’s nearest & dearest have been replaced by lookalike impostors. A recent theory about Capgras begins with the idea that if face recognition depends on a specific cognitive module, then it is possible for that to be damaged without affecting other cognitive abilities. A two-route model of face recognition holds that there are two sub-modules:

  • A ventral visuo-semantic pathway for visual encoding and overt recognition, and
  • A dorsal visuo-affective pathway for covert autonomic recognition and affective response to familiar faces.

For prosopagnosia sufferers the ventral system has been damaged, whereas for Capgras sufferers the dorsal system has been damaged. So here seems to be the basis for the “anomalous” experience that gives rise to Capgras syndrome. But not everyone whose dorsal system is damaged ends up with Capgras syndrome. What else could be going on?

Maher’s claim amounts to a one-factor theory about thetic delusions. The unusual experience (e.g., no longer feeling emotions when you see your nearest and dearest) becomes explained by the delusion (e.g., they’ve been replaced by impostors). A two-factor theory claims that reasoning also has to be defective (e.g., a tendency to leap to conclusions) or some motivational bias has to operate. Capgras or Cotard syndrome (the latter is a belief that one is dead) sounds like a reasoning pathology is involved, whereas de Clerambault syndrome or reverse Othello syndrome (deluded belief in the fidelity of one’s spouse) sounds like it’s propelled by a motivational bias.

What is the nature of the “second factor” in the Capgras delusion?

  1. Capgras patients are aware that their belief seems bizarre to others, but they are not persuaded by counter-arguments or evidence to the contrary.
  2. Davies et al. (2001) propose that, specifically, Capgras patients have lost the ability to refrain from believing that things are the way they appear to be. However, Capgras patients are not susceptible to visual illusions.
  3. McLaughlin (2009) posits that Capgras patients are susceptible to affective illusions, in the sense that a feeling of unfamiliarity leads straight to a belief in that unfamiliarity. But even if true, this account still doesn’t explain the persistence of that belief in the face of massive counter-evidence.

What about the patients who have a disconnection between their face recognition modules and their autonomic nervous systems but do not have Capgras? Turns out that the site of their damage differs from that of Capgras sufferers. But little is known about the differences between them in terms of phenomenology (e.g., whether loved ones also feel unfamiliar to the non-Capgras patients).

Where does all this leave us? To begin with, we are reminded that a label (“delusion”) doesn’t bring with it a unitary phenomenon. There may be distinct types of delusions with quite distinct etiologies. The human sciences are especially vulnerable to this pitfall, because humans have fairly effective commonsensical theories about human beings—folk psychology and folk sociology—from which the human sciences borrow heavily. We’re far less likely to be (mis)guided by common sense when theorizing about things like mitochondria or mesons.

Second, there is a clear need for continued cross-disciplinary collaboration in studying delusions, particularly between cognitive and personality psychologists, neuroscientists, and philosophers of mind. “Delusion” and “self-deception” pose definitional and conceptual difficulties that rival anything in the psychological lexicon.  The identification of specific neural structures implicated in particular delusions is crucial to understanding and treating them. The interaction between particular kinds of neurological trauma and other psychological traits or dispositions appears to be a key but is at present only poorly understood.

Last, but not least, this gives research on belief formation and reasoning a cutting edge, placing it at the clinical neuroscientific frontier. There may be something to the old commonsense notion that logic and madness are closely related. By the way, for an accessible and entertaining treatment of this theme in the history of mathematics,  take a look at LogiComix.

Why We (Usually) Know Less than We Think We Do

Most of the time, most of us are convinced that we know far more than we are entitled to, even by our own commonsensical notions of what real knowledge is. There are good reasons for this, and let me hasten to say I do it too.

I’m not just referring to things we think we know that turn out to be wrong. In fact, let’s restrict our attention initially to those bits of knowledge we claim for ourselves that turn out to be true. If I say “I know that X is the case” and X really is the case, then why might I still be making a mistaken claim?

To begin with, I might claim I know X is the case because I got that message from a source I trust. Indeed, the vast majority of what we usually consider “knowledge” isn’t even second-hand. It really is just third-hand or even further removed from direct experience. Most of what we think we know not only is just stuff we’ve been told by someone, it’s stuff we’ve been told by someone who in turn was told it by someone else who in turn… I sometimes ask classrooms of students how many of them know the Earth is round. Almost all hands go up. I then ask how many of them could prove it, or offer a reasonable argument in its favor that would pass even a mild skeptic’s scrutiny. Very few (usually no) hands go up.

The same problem for our knowledge-claims crops up if we venture onto riskier taboo-ridden ground, such as whether we really know who our biological parents are. As I’ve described in an earlier post, whenever obtaining first-hand knowledge is costly or risky, we’re compelled to take second- or third-hand information on faith or trust. I’ve also mentioned in an earlier post our capacity for vicarious learning; to this we can add our enormous capacity for absorbing information from others’ accounts (including other’s accounts of others’ accounts…). As a species, we are extremely well set-up to take on second- and third-hand information and convert it into “knowledge.”

The difference, roughly speaking, is between asserting a belief and backing it up with supporting evidence or arguments, or at least first-hand experience. Classic social constructivism begins with the observation that most of what we think we know is “constructed” in the sense of being fed to us via parents, schools, the media, and so on. This line of argument can be pushed quite far, depending on the assumptions one is willing to entertain. A radical skeptic can argue that even so straightforward a first-hand operation as measuring the length of a straight line relies on culturally specific conventions about what “measurement,” “length,” “straight,” and “line” mean.

A second important sense in which our claims to know things are overblown arises from our propensity to fill in the blanks, both in recalling past events and interpreting current ones. A friend gestures to a bowl of food he’s eating and says, “This stuff is hot.” If we’re at table in an Indian restaurant eating curries, I’ll fill in the blank by inferring that he means chilli-hot or spicy. On the other hand if we’re in a Russian restaurant devouring plates of piroshki, I’ll infer that he means high temperature. In either situation I’ll think I know what my friend means but, strictly speaking, I don’t. A huge amount of what we think of as understanding in communication of nearly every kind relies on inferences and interpretations of this kind. Memory is similarly “reconstructive,” filling in the blanks amid the fragments of genuine recollections to generate an account that sounds plausible and coherent.

Hindsight bias is a related problem. Briefly, this is a tendency to over-estimate the extent to which we “knew it all along” when a prediction comes true. The typical psychological experiment demonstrating this finds that subjects recall their confidence in their prediction of an event as being greater if the event occurs than if it doesn’t. An accessible recent article by Paul Goodwin points out that an additional downside to hindsight bias is that it can make us over-confident about our predictive abilities.

Even a dyed-in-the-wool realist can reasonably wonder why so much of what we think we know is indirect, unsupported by evidence, and/or inferred. Aside from balm for our egos, what do we get out of our unrealistically inflated view of our own knowledge? One persuasive, if obvious, argument is that if we couldn’t act on our storehouse of indirect knowledge we’d be paralyzed with indecision. Real-time decision making in everyday life requires fairly prompt responses and we can ill afford to defer many of those decisions on grounds of less than perfect understanding. There is Oliver Heaviside’s famous declaration, “I do not refuse my dinner simply because I do not understand the process of digestion.”

Another argument invites us to consider being condemned to an endlessly costly effort to replace our indirect knowledge with first-hand counterparts or the requisite supporting evidence and/or arguments. A third reason is that communication would become nearly impossibly cumbersome, with everyone treating all messages “literally” and demanding full definitions and explications of each word or phrase.

Perhaps the most unsettling domain where we mislead ourselves about how much we know is the workings of our own minds. Introspection has a chequered history in psychology and a majority of cognitive psychologists these days would hold it to be an invalid and unreliable source of data on mental processes. The classic modern paper in this vein is Richard Nisbett and Timothy Wilson’s 1977 work, in which they concluded that people often are unable to accurately report even the existence of their responses evoked by stimuli or that a cognitive process has occurred. Even when they are aware of both the stimuli and the cognitive process evoked thereby, they may be inaccurate about the effect the former had on the latter.

What are we doing instead of genuine introspection? First, we use our own intuitive causal theories to fill in the blanks. Asked why I’m in a good mood today, I riffle through recent memories searching for plausible causes rather than recalling the actual cause-effect sequence that put me in a good mood. Second, we use our own folk psychological theories about how the mind works, which provide us with plausible accounts of our cognitive processes.

Wilson and Nisbett realized that there are powerful motivations for our unawareness of our unawareness:

“It is naturally preferable, from the standpoint of prediction and subjective feelings of control, to believe that we have such access. It is frightening to believe that one has no more certain knowledge of the workings of one’s own mind than would an outsider with intimate knowledge of one’s history and of the stimuli present at the time the cognitive process occurred.”

Because self-reports about mental processes and their outcomes are bread and butter in much psychological research, it should come as no surprise that debates about introspection have continued in the discipline to the present day. One of the richest recent contributions to these debates is the collaboration between psychologist Russell Hurlburt and philosopher Eric Schwitzgebel, resulting in their 2007 book, “Describing Inner Experience? Proponent Meets Skeptic.”

Hurlburt is the inventor and proponent of Descriptive Experience Sampling (DES), a method of gathering introspective data that attempts to circumvent the usual pitfalls when people are asked to introspect. In DES, a beeper goes off at random intervals signaling the subject to pay attention to their “inner experience” at the moment of the beep. The subject then writes a brief description of this experience. Later, the subject is interviewed by a trained DES researcher, with the goal of enabling the subject to produce an accurate and unbiased description, so far as that is possible. The process continues over several sessions, to enable the researcher to gain some generalizable information about the subject’s typical introspective dispositions and experiences.

Schwitzgebel is, of course, the skeptic in the piece, having written extensively about the limitations and perils of introspection. He describes five main reasons for his skepticism about DES.

  1. Many conscious states are fleeting and unstable.
  2. Most of us have no great experience or training in introspection; and even Hurlburt allows that subjects have to be trained to some extent during the early DES sessions.
  3. Both our interest and available stimuli are external to us, so we don’t have a large storehouse of evidence or descriptors for inner experiences. Consequently, we have to adapt descriptors of external matters to describing inner ones, often resulting in confusion.
  4. Introspection requires focused attention on conscious experience which in turn alters that experience. If we’re being asked to recall an inner experience then we must rely on memory, with its well-known shortcomings and reconstructive proclivities.
  5. Interpretation and theorizing are required for introspection. Schwitzgebel concludes that introspection may be adequate for gross categorizations of conscious experiences or states, but not for describing higher cognitive or emotive processes.

Their book has stimulated further debate, culminating in a recent special issue of the Journal of Consciousness Studies, whose contents have been listed in the March 3rd (2011) post on Schwitzgebel’s blog, The Splintered Mind. The articles therein make fascinating reading, along with Hurlburt’s and Schwitzgebel’s rejoinders and (to some extent) reformulations of their respective positions. Nevertheless, the state of play remains that we know a lot less about our inner selves than we’d like to.

Written by michaelsmithson

March 5, 2011 at 2:56 pm
