ignorance and uncertainty

All about unknowns and uncertainties


A Few (More) Myths about “Big Data”


Following on from Kate Crawford’s recent and excellent elaboration of six myths about “big data”, I should like to add four more that highlight important issues about such data that can misguide us if we ignore them or are ignorant of them.

Myth 7: Big data are precise.

As with analyses of almost any other kind of data, big data analyses largely consist of estimates. Often these estimates are based on sample data rather than population data, and the samples may not be representative of their referent populations (as Crawford points out, but also see Myth 8). Moreover, big data are even less likely than “ordinary” data to be free of recording errors or deliberate falsification.

Even when the samples are good and the sample data are accurately recorded, estimates still are merely estimates, and the most common mistake decision makers and other stakeholders make about estimates is treating them as if they are precise or exact. In a 1990 paper I referred to this as the fallacy of false precision. Estimates always are imprecise, and ignoring how imprecise they are is equivalent to ignoring how wrong they could be. Major polling companies gradually learned to report confidence intervals or error-rates along with their estimates and to take these seriously, but most government departments apparently have yet to grasp this obvious truth.

Why might estimate error be a greater problem for big data than for “ordinary” data? There are at least two reasons. First, it is likely to be more difficult to verify the integrity or veracity of big data simply because they are integrated from numerous sources. Second, if big datasets are constructed from multiple sources, each consisting of an estimate with its own imprecision, then these imprecisions may propagate. To give a brief illustration, if estimate X has variance x², estimate Y has variance y², X and Y are independent of one another, and our “big” dataset consists of adding X+Y to get Z, then the variance of Z will be x² + y².
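
Here is a minimal simulation sketch of that propagation (in Python, with illustrative numbers chosen purely for the example): two independent estimates are combined by addition, and the combined quantity inherits the sum of their variances.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two independent estimates with standard deviations 3 and 4 (variances 9 and 16)
X = rng.normal(loc=100, scale=3, size=n)
Y = rng.normal(loc=50, scale=4, size=n)

# The "big" dataset combines them by addition
Z = X + Y

print(X.var(), Y.var(), Z.var())  # Z's variance comes out close to 9 + 16 = 25

The same logic applies to any chain of independent estimates: each additional source adds its own imprecision to the total.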

Myth 8: Big data are accurate.

There are two senses in which big data may be inaccurate, in addition to random variability (i.e., sampling error): Biases, and measurement confounds. Economic indicators of such things as unemployment rates, inflation, or GDP are biased in most countries. The bias stems from “shadow” (off the books) economic activity. There is little evidence that economic policy makers pay any attention to such distortions when using economic indicators to inform policies.

Measurement confounds are a somewhat more subtle issue, but the main idea is that data may not measure what we think they are measuring because they are influenced by extraneous factors. Economic indicators are, again, good examples but there are plenty of others (don’t get me started on the idiotic bibliometrics and other KPIs that are imposed on us academics in the name of “performance” assessment). Web analytics experts are just beginning to face up to this problem. For instance, webpage dwell times are not influenced only by how interested the visitor is in the content of a webpage; they may also reflect such things as how difficult the contents are to understand, the visitor’s attention span, or the fact that the visitor left their browsing device to do something else and then returned much later. As in Myth 7, bias and measurement confounds may be compounded in big data to a greater extent than they are in small data, simply because big data often combine multiple measures.

Myth 9: Big data are stable.

Data often are not recorded just once, but re-recorded as better information becomes available or as errors are discovered. In a recent Wall Street Journal article, economist Samuel Rines presented several illustrations of how unstable economic indicator estimates are in the U.S. For example, he observed that in November 2012 the first official estimate of net employment increase was 146,000 new jobs. By the third revision that number had increased by 68% to 247,000. In another instance, he pointed out that American GDP annual estimates each year typically are revised several times, and often substantially, as the year slides into the past.

Again, there is little evidence that people crafting policy or making decisions based on these numbers take their inherent instability into account. One may protest that decisions often must be made before “final” revisions can be completed. However, where past revisions have been recorded, the degree of instability in these indicators should not be difficult to estimate, and it could be taken into account, at the very least, in worst- and best-case scenario generation.

Myth 10: We have adequate computing power and techniques to analyse big data.

Analysing big data is a computationally intense undertaking, and at least some worthwhile analytical goals are beyond our reach, in terms of computing power and even, in some cases, techniques. I’ll give just one example. Suppose we want to model the total dwell time per session of a typical user who is browsing the web. The number of items on which the user dwells is a random variable, and so is the amount of dwell time for each item. The total dwell time, then, is what is called a “randomly stopped sum”. The expression for the probability distribution of a randomly stopped sum doesn’t have a closed form (it’s an infinite sum), so it can’t be modelled via conventional statistical estimation techniques (least-squares or maximum likelihood). Instead, there are two viable approaches: Simulation and Bayesian hierarchical MCMC. I’m writing a paper on this topic, and from my own experience I can declare that either technique would require a super-computer for datasets of the kind dealt with, e.g., by NRS PADD.
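
For readers who want a feel for what a randomly stopped sum looks like, here is a minimal simulation sketch (in Python; the geometric and lognormal choices below are purely illustrative assumptions, not the model from my paper). The number of items per session is itself random, so the total dwell time is a sum with a random number of terms, and simulation is the straightforward way to get at its distribution.

import numpy as np

rng = np.random.default_rng(7)
sessions = 20_000

# Illustrative assumptions: items viewed per session ~ geometric,
# dwell time per item ~ lognormal (in seconds)
n_items = rng.geometric(p=0.2, size=sessions)
totals = np.array([rng.lognormal(mean=3.0, sigma=1.0, size=k).sum()
                   for k in n_items])

# No closed-form distribution for the totals, but simulation gives
# whatever summaries we need, e.g. the median and upper tails
print(np.percentile(totals, [50, 90, 99]))

Even this toy version takes a noticeable amount of computation; scale it up to millions of users and panel-sized item sets and the computational burden becomes obvious.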

Written by michaelsmithson

July 26, 2013 at 6:57 am

Statistical Significance On Trial


There is a long-running love-hate relationship between the legal and statistical professions, and two vivid examples of this have surfaced in recent news stories, one situated in a court of appeal in London and the other in the U.S. Supreme Court. Briefly, the London judge ruled that Bayes’ theorem must not be used in evidence unless the underlying statistics are “firm;” while the U.S. Supreme Court unanimously ruled that a drug company’s non-disclosure of adverse side-effects cannot be justified by an appeal to the statistical non-significance of those effects. Each case, in its own way, shows why it is high time to find a way to establish an effective rapprochement between these two professions.

The Supreme Court decision has been applauded by statisticians, whereas the London decision has appalled statisticians of similar stripe. Both decisions require some unpacking to understand why statisticians would cheer one and boo the other, and why these are important decisions not only for both the statistical and legal professions but for other domains and disciplines whose practices hinge on legal and statistical codes and frameworks.

This post focuses on the Supreme Court decision. The culprit was a homoeopathic zinc-based medicine, Zicam, manufactured by Matrixx Initiatives, Inc. and advertised as a remedy for the common cold. Matrixx had ignored reports from users and doctors since 1999 that Zicam caused some users to experience burning sensations or even to lose the sense of smell. When this story was aired by a doctor on Good Morning America in 2004, Matrixx’s stock price plummeted.

The company’s defense was that these side-effects were “not statistically significant.” In the ensuing fallout, Matrixx was faced with more than 200 lawsuits by Zicam users, but the case in point here is Siracusano vs Matrixx, in which Mr. Siracusano was suing on behalf of investors on grounds that they had been misled. After a few iterations through the American court system, the question that the Supreme Court ruled on was whether a claim of securities fraud is valid against a company that neglected to warn consumers about effects that had been found to be statistically non-significant. As Stephen Ziliak’s insightful insider’s essay points out, the decision will affect drug supply regulation, securities regulation, liability and the nature of adverse side-effects disclosed by drug companies. Ziliak was one of the “friends of the court” providing expert advice on the case.

A key point in this dispute is whether statistical nonsignificance can be used to infer that a potential side-effect is, for practical purposes, no more likely to occur when using the medicine than when not. Among statisticians it is a commonplace that such inferences are illogical (and illegitimate). There are several reasons for this, but I’ll review just two here.

These reasons have to do with common misinterpretations of the measure of statistical significance. Suppose Matrixx had conducted a properly randomized double-blind experiment comparing Zicam-using subjects with those using an indistinguishable placebo, and observed the difference in side-effect rates between the two groups of subjects. One has to bear in mind that random assignment of subjects to one group or the other doesn’t guarantee equivalence between the groups. So, it’s possible that even if there really is no difference between Zicam and the placebo regarding the side-effect, a difference between the groups might occur by “luck of the draw.”

The indicator of statistical significance in this context would be the probability of observing a difference at least as large as the one found in the study if the hypothesis of no difference were true. If this probability is found to be very low (typically .05 or less) then the experimenters will reject the no-difference hypothesis on the grounds that the data they’ve observed would be very unlikely to occur if that hypothesis were true. They will then declare that there is a statistically significant difference between the Zicam and placebo groups. If this probability is not sufficiently low (i.e., greater than .05) the experimenters will decide not to reject the no-difference hypothesis and announce that the difference they found was statistically non-significant.
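
To make that logic concrete, here is a minimal sketch of one way such a probability can be computed (in Python, with invented numbers, not Matrixx’s actual data): we ask how often a difference in side-effect rates at least as large as the observed one would arise under the no-difference hypothesis, simply by reshuffling the pooled outcomes between the two groups.

import numpy as np

rng = np.random.default_rng(0)

# Invented illustration: 200 subjects per arm, 7 side-effect reports
# in the treatment group, 3 in the placebo group
n = 200
treat_events, placebo_events = 7, 3
observed_diff = treat_events / n - placebo_events / n

# Simulate the no-difference hypothesis by pooling all outcomes and
# randomly reassigning them to the two groups
pooled = np.array([1] * (treat_events + placebo_events)
                  + [0] * (2 * n - treat_events - placebo_events))
diffs = np.empty(20_000)
for i in range(diffs.size):
    rng.shuffle(pooled)
    diffs[i] = pooled[:n].mean() - pooled[n:].mean()

# The p-value: probability of a difference at least this large under no-difference
p_value = (diffs >= observed_diff).mean()
print(p_value)

If that p-value comes out above .05, the result is declared “statistically non-significant”, which, as the next paragraphs explain, does not by itself license the conclusion that the side-effect risk is absent.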

So the first reason for concern is that Matrixx acted as if statistical nonsignificance entitles one to believe in the hypothesis of no-difference. However, failing to reject the hypothesis of no difference doesn’t entitle one to believe in it. It’s still possible that a difference might exist and the experiment failed to find it because it didn’t have enough subjects or because the experimenters were “unlucky.” Matrixx has plenty of company in committing this error; I know plenty of seasoned researchers who do the same, and I’ve already canvassed the well-known bias in fields such as psychology not to publish experiments that failed to find significant effects.

The second problem arises from a common intuition that the probability of observing a difference at least as large as the one found in the study if the hypothesis of no difference were true tells us something about the inverse—the probability that the no-difference hypothesis is true if we find a difference at least as large as the one observed in our study, or, worse still, the probability that the no-difference hypothesis is true. However, the first probability on its own tells us nothing about the other two.

For a quick intuitive, if fanciful, example let’s imagine randomly sampling one person from the world’s population and our hypothesis is that s/he will be Australian. On randomly selecting our person, all that we know about her initially is that she speaks English.

There are about 750 million first- or second-language English speakers world-wide, and about 23 million Australians. Of the 23 million Australians, about 21 million of them fit the first- or second-language English description. Given that our person speaks English, how likely is it that we’ve found an Australian? The probability that we’ve found an Australian given that we’ve picked an English-speaker is 21/750 = .03. So there goes our hypothesis. However, had we picked an Australian (i.e., given that our hypothesis were true), the probability that s/he speaks English is 21/23 = .91.
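
The same arithmetic in a few lines (Python, using the rounded population figures quoted above) makes it plain that the two conditional probabilities are entirely different quantities:

# Rounded figures from the example (in millions)
english_speakers = 750.0
australians = 23.0
australian_english_speakers = 21.0

# P(Australian | English speaker), analogous to P(hypothesis | data)
p_aus_given_eng = australian_english_speakers / english_speakers   # about .03

# P(English speaker | Australian), analogous to P(data | hypothesis)
p_eng_given_aus = australian_english_speakers / australians        # about .91

print(round(p_aus_given_eng, 2), round(p_eng_given_aus, 2))

A significance test only ever gives us something like the second kind of probability (data given hypothesis); it tells us nothing by itself about the first kind (hypothesis given data).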

See also Ziliak and McCloskey’s 2008 book, which mounts a swinging demolition of the unquestioned application of statistical significance in a variety of domains.

Aside from the judgment about statistical nonsignificance, the most important stipulation of the Supreme Court’s decision is that “something more” is required before a drug company can justifiably decide to not disclose a drug’s potential side-effects. What should this “something more” be? This sounds as if it would need judgments about the “importance” of the side-effects, which could open multiple cans of worms (e.g., Which criteria for importance? According to what or whose standards?). Alternatively, why not simply require drug companies to report all occurrences of adverse side-effects and include the best current estimates of their rates among the population of users?

A slightly larger-picture view of the Matrixx defense resonates with something that I’ve observed in even the best and brightest of my students and colleagues (oh, and me too). And that is the hope that somehow probability or statistical theories will get us off the hook when it comes to making judgments and decisions in the face of uncertainty. It can’t and won’t, especially when it comes to matters of medical, clinical, personal, political, economic, moral, aesthetic, and all the other important kinds of importance.

Written by michaelsmithson

October 22, 2011 at 11:31 pm

Scientists on Trial: Risk Communication Becomes Riskier


Back in late May 2011, there were news stories of charges of manslaughter laid against six earthquake experts and a government advisor responsible for evaluating the threat of natural disasters in Italy, on grounds that they allegedly failed to give sufficient warning about the devastating L’Aquila earthquake in 2009. In addition, plaintiffs in a separate civil case are seeking damages in the order of €22.5 million (US$31.6 million). The first hearing of the criminal trial occurred on Tuesday the 20th of September, and the second session is scheduled for October 1st.

According to Judge Giuseppe Romano Gargarella, the defendants gave inexact, incomplete and contradictory information about whether smaller tremors in L’Aquila six months before the 6.3 magnitude quake on 6 April, which killed 308 people, were to be considered warning signs of the quake that eventuated. L’Aquila was largely flattened, and thousands of survivors lived in tent camps or temporary housing for months.

If convicted, the defendants face up to 15 years in jail and almost certainly will suffer career-ending consequences. While manslaughter charges for natural disasters have precedents in Italy, they have previously concerned breaches of building codes in quake-prone areas. Interestingly, no action has yet been taken against the engineers who designed the buildings that collapsed, or government officials responsible for enforcing building code compliance. However, there have been indications of lax building codes and the possibility of local corruption.

The trial has, naturally, outraged scientists and others sympathetic to the plight of the earthquake experts. An open letter by the Istituto Nazionale di Geofisica e Vulcanologia (National Institute of Geophysics and Volcanology) said the allegations were unfounded and amounted to “prosecuting scientists for failing to do something they cannot do yet — predict earthquakes”. The AAAS has presented a similar letter, which can be read here.

In pre-trial statements, the defence lawyers also have argued that it was impossible to predict earthquakes. “As we all know, quakes aren’t predictable,” said Marcello Melandri, defence lawyer for defendant Enzo Boschi (who was president of Italy’s National Institute of Geophysics and Volcanology). The implication is that because quakes cannot be predicted, the accusations that the commission’s scientists and civil protection experts should have warned that a major quake was imminent are baseless.

Unfortunately, the Istituto Nazionale di Geofisica e Vulcanologia, the AAAS, and the defence lawyers were missing the point. It seems that failure to predict quakes is not the substance of the accusations. Instead, it is poor communication of the risks, inappropriate reassurance of the local population and inadequate hazard assessment. Contrary to earlier reports, the prosecution apparently is not claiming the earthquake should have been predicted. Instead, their focus is on the nature of the risk messages and advice issued by the experts to the public.

Examples raised by the prosecution include a memo issued after a commission meeting on 31 March 2009 stating that a major quake was “improbable,” a statement to local media that six months of low-magnitude tremors was not unusual in the highly seismic region and did not mean a major quake would follow, and an apparent discounting of the notion that the public should be worried. Against this, defence lawyer Melandri has been reported saying that the panel “never said, ‘stay calm, there is no risk’”.

It is at this point that the issues become both complex (by their nature) and complicated (by people). Several commentators have pointed out that the scientists are distinguished experts, by way of asserting that they are unlikely to have erred in their judgement of the risks. But they are being accused of communicating “incomplete, imprecise, and contradictory information” to the public. As one of the civil parties to the lawsuit put it, “Either they didn’t know certain things, which is a problem, or they didn’t know how to communicate what they did know, which is also a problem.”

So, the experts’ scientific expertise is not on trial. Instead, it is their expertise in risk communication. As Stephen S. Hall’s excellent essay in Nature points out, regardless of the outcome this trial is likely to make many scientists more reluctant to engage with the public or the media about risk assessments of all kinds. The AAAS letter makes this point too. And regardless of which country you live in, it is unwise to think “Well, that’s Italy for you. It can’t happen here.” It most certainly can and probably will.

Matters are further complicated by the abnormal nature of the commission meeting on the 31st of March at a local government office in L’Aquila. Boschi claims that these proceedings normally are closed whereas this meeting was open to government officials, and he and the other scientists at the meeting did not realize that the officials’ agenda was to calm the public. The commission did not issue its usual formal statement, and the minutes of the meeting were not completed, until after the earthquake had occurred. Instead, two members of the commission, Franco Barberi and Bernardo De Bernardinis, along with the mayor and an official from Abruzzo’s civil-protection department, held a now (in)famous press conference after the meeting where they issued reassuring messages.

De Bernardinis, an expert on floods but not earthquakes, incorrectly stated that the numerous earthquakes of the swarm would lessen the risk of a larger earthquake by releasing stress. He also agreed with a journalist’s suggestion that residents enjoy a glass of wine instead of worrying about an impending quake.

The prosecution also is arguing that the commission should have reminded residents in L’Aquila of the fragility of many older buildings, advised them to make preparations for a quake, and reminded them of what to do in the event of a quake. This amounts to an accusation of a failure to perform a duty of care, a duty that many scientists providing risk assessments may dispute that they bear.

After all, telling the public what they should or should not do is a civil or governmental matter, not a scientific one. Thomas Jordan’s essay in New Scientist brings in this verdict: “I can see no merit in prosecuting public servants who were trying in good faith to protect the public under chaotic circumstances. With hindsight their failure to highlight the hazard may be regrettable, but the inactions of a stressed risk-advisory system can hardly be construed as criminal acts on the part of individual scientists.” As Jordan points out, there is a need to separate the role of science advisors from that of civil decision-makers who must weigh the benefits of protective actions against the costs of false alarms. This would seem to be a key issue that urgently needs to be worked through, given the need for scientific input into decisions about extreme hazards and events, both natural and human-caused.

Scientists generally are not trained in communication or in dealing with the media, and communication about risks is an especially tricky undertaking. I would venture to say that the prosecution, defence, judge, and journalists reporting on the trial will not be experts in risk communication either. The problems in risk communication are well known to psychologists and social scientists specializing in its study, but not to non-specialists. Moreover, these specialists will tell you that solutions to those problems are hard to come by.

For example, Otway and Wynne (1989) observed in a classic paper that an “effective” risk message has to simultaneously reassure by saying the risk is tolerable and panic will not help, and warn by stating what actions need to be taken should an emergency arise. They coined the term “reassurance arousal paradox” to describe this tradeoff (which of course is not a paradox, but a tradeoff). The appropriate balance is difficult to achieve, and is made even more so by the fact that not everyone responds in the same way to the same risk message.

It is also well known that laypeople do not think of risks in the same way as risk experts (for instance, laypeople tend to see “hazard” and “risk” as synonyms), nor do they rate risk severity in line with the product of probability and magnitude of consequence, nor do they understand probability—especially low probabilities. Given all of this, it will be interesting to see how the prosecution attempts to establish that the commission’s risk communications contained “incomplete, imprecise, and contradictory information.”

Scientists who try to communicate risks are aware of some of these issues, but usually (and understandably) uninformed about the psychology of risk perception (see, for instance, my posts here and here on communicating uncertainty about climate science). I’ll close with just one example. A recent International Commission on Earthquake Forecasting (ICEF) report argues that frequently updated hazard probabilities are the best way to communicate risk information to the public. Jordan, chair of the ICEF, recommends that “Seismic weather reports, if you will, should be put out on a daily basis.” Laudable as this prescription is, there are at least three problems with it.

Weather reports with probabilities of rain typically present probabilities neither close to 0 nor to 1. Moreover, they usually are anchored on tenths (e.g., .2 or .6, but not precise numbers like .23162 or .62947). Most people have reasonable intuitions about mid-range probabilities such as .2 or .6. But earthquake forecasting deals in very low probabilities, as was the case in the lead-up to the L’Aquila event. Italian seismologists had estimated that the probability of a large earthquake in the next three days had increased from 1 in 200,000, before the earthquake swarm began, to 1 in 1,000 following the two large tremors the day before the quake.

The first problem arises from the small magnitude of these probabilities. Because people are limited in their ability to comprehend and evaluate extreme probabilities, highly unlikely events usually are either ignored or overweighted. The tendency to ignore low-probability events has been cited to account for the well-established phenomenon that homeowners tend to be under-insured against low probability hazards (e.g., earthquake, flood and hurricane damage) in areas prone to those hazards. On the other hand, the tendency to over-weight low-probability events has been used to explain the same people’s propensity to purchase lottery tickets. The point is that low-probability events either excite people out of proportion to their likelihood or fail to excite them altogether.

The second problem is in understanding the increase in risk from 1 in 200,000 to 1 in 1,000. Most people are readily able to comprehend the differences between mid-range probabilities such as an increase in the chance of rain from .2 to .6. However, they may not appreciate the magnitude of the difference between the two low probabilities in our example: a 200-fold increase in relative terms that nevertheless leaves the absolute probability at only 0.1%. For instance, an experimental study with jurors in mock trials found that although DNA evidence is typically expressed in terms of probability (specifically, the probability that the DNA sample could have come from a randomly selected person in the population), jurors were equally likely to convict on the basis of a probability of 1 in 1,000 as a probability of 1 in 1 billion. At the very least, the public would need some training in, and accustoming to, minuscule probabilities.

All this leads us to the third problem. Otway and Wynne’s “reassurance arousal paradox” is exacerbated by risk communications about extremely low-probability hazards, no matter how carefully they are crafted. Recipients of such messages will be highly suggestible, especially when the stakes are high. So, what should the threshold probability be for determining when a “don’t ignore this” message is issued? It can’t be the imbecilic Dick Cheney zero-risk threshold for terrorism threats, but what should it be instead?

Note that this is a matter for policy-makers to decide, not scientists, even though scientific input regarding potential consequences of false alarms and false reassurances should be taken into account. Criminal trials and civil lawsuits punishing the bearers of false reassurances will drive risk communicators to lower their own alarm thresholds, thereby ensuring that they will sound false alarms increasingly often (see my post about making the “wrong” decision most of the time for the “right” reasons).

Risk communication regarding low-probability, high-stakes hazards is one of the most difficult kinds of communication to perform effectively, and most of its problems remain unsolved. The L’Aquila trial probably will have an inhibitory impact on scientists’ willingness to front the media or the public. But it may also stimulate scientists and decision-makers to work together for the resolution of these problems.

The Stapel Case and Data Fabrication


By now it’s all over the net (e.g., here) and international news media: Tilburg University sacked high-profile social psychologist Diederik Stapel, after he was outed as having faked data in his research. Stapel was director of the Tilburg Institute for Behavioral Economics Research, a successful researcher and fundraiser, and as a colleague expressed it, “the poster boy of Dutch social psychology.” He had more than 100 papers published, some in the flagship journals not just of psychology but of science generally (e.g., Science), and won prestigious awards for his research on social cognition and stereotyping.

Tilburg University Rector Philip Eijlander said that Stapel had admitted to using faked data, apparently after Eijlander confronted him with allegations by graduate student research assistants that his research conduct was fraudulent. The story goes that the assistants had identified evidence of data entry by Stapel via copy-and-paste.

Willem Levelt, psycholinguist and former president of the Royal Netherlands Academy of Arts and Sciences, is to lead a panel investigating the extent of the fraud. That extent could be very widespread indeed. In a press conference the Tilburg rector made it clear that Stapel’s fraudulent research practices may have ranged over a number of years. All of his papers would be suspected, and the panel will advise on which papers will have to be retracted. Likewise, the editors of all journals that Stapel published in are also investigating details of his papers that were published in these journals. Then there are Stapel’s own students and research collaborators, whose data and careers may have been contaminated by his.

I feel sorry for my social psychological colleagues, who are reeling in shock and dismay. Some of my closest colleagues knew Stapel (one was a fellow graduate student with him), and none of them suspected him. Among those who knew him well and worked with him, Stapel apparently was respected as a researcher and trusted as a man of integrity. They are asking themselves how his cheating could have gone undetected for so long, and how such deeds could be prevented in the future. They fear its impact on public perception of their discipline and trust in scientific researchers generally.

An understandable knee-jerk reaction is to call for stricter regulation of scientific research, and alterations to the training of researchers. Mark Van Vugt and Anjana Ahuja’s blog post exemplifies this reaction, when they essentially accuse social psychologists of being more likely to engage in fraudulent research because some of them use deception of subjects in their experiments:

“…this means that junior social psychologists are being trained to deceive people and this might be a first violation of scientific integrity. It would be good to have a frank discussion about deception in our discipline. It is not being tolerated elsewhere so why should it be in our field.”

They make several recommendations for reform, including the declaration that “… ultimately it is through training our psychology students into doing ethically sound research that we can tackle scientific fraud. This is no easy feat.”

The most obvious problems with Van Vugt’s and Ahuja’s recommendations are, first, that there is no clear connection between using deception in research designs and faking data, and second, many psychology departments already include research ethics in researcher education and training. Stapel isn’t ignorant of research ethics. But a deeper problem is that none of their recommendations and, thus far, very few of the comments I have seen about this or similar cases, address three of the main considerations in any criminal case: Means, opportunity, and motive.

Let me speak to means and opportunity first. Attempts to more strictly regulate the conduct of scientific research are very unlikely to prevent data fakery, for the simple reason that it’s extremely easy to do in a manner that is extraordinarily difficult to detect. Many of us “fake data” on a regular basis when we run simulations. Indeed, simulating from the posterior distribution is part and parcel of Bayesian statistical inference. It would be (and probably has been) child’s play to add fake cases to one’s data by simulating from the posterior and then jittering them randomly to ensure that the false cases look like real data. Or, if you want to fake data from scratch, there is plenty of freely available code for randomly generating multivariate data with user-chosen probability distributions, means, standard deviations, and correlational structure. So, the means and opportunities are on hand for virtually all of us. They are the very same means that underpin a great deal of (honest) research. It is impossible to prevent data fraud by these means through conventional regulatory mechanisms.
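
To underline how low the technical barrier is, here is a minimal sketch (Python with NumPy; the means, spreads and correlation below are arbitrary choices for the illustration) of the kind of freely available routine the previous paragraph refers to: generating a plausible-looking multivariate dataset with whatever structure one wants.

import numpy as np

rng = np.random.default_rng(42)

# Pick the "results" you want: means, standard deviations and a correlation
means = np.array([5.0, 3.2])
sds = np.array([1.0, 0.8])
corr = np.array([[1.0, 0.45],
                 [0.45, 1.0]])
cov = np.outer(sds, sds) * corr

# 120 fabricated "subjects" whose summary statistics will look entirely ordinary
fake = rng.multivariate_normal(means, cov, size=120)
print(fake.mean(axis=0), fake.std(axis=0), np.corrcoef(fake.T)[0, 1])

Nothing about the resulting numbers flags them as fabricated, which is exactly why regulation aimed at the means and opportunities is unlikely to succeed.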

Now let us turn to motive. The most obvious and comforting explanations of cheats like psychologists Stapel or Hauser, or plagiarists like statistician Wegman and political scientist Fischer, are those appealing to their personalities. This is the “X cheated because X is psychopathic” explanation. It’s comforting because it lets the rest of us off the hook (“I wouldn’t cheat because I’m not a psychopath”). Unfortunately this kind of explanation is very likely to be wrong. Most of us cheat on something somewhere along the line. Cheating is rife, for example, among undergraduate university students (among whom are our future researchers!), so psychopathy certainly cannot be the deciding factor there. What else could be the motivational culprit? How about the competitive pressures on researchers generated by the contemporary research culture?

Cognitive psychologist E. J. Wagenmakers (as quoted in Andrew Gelman’s thoughtful recent post) is among the few thus far who have addressed possible motivating factors inherent in the present-day research climate. He points out that social psychology has become very competitive, and

“high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself. There is a slippery slope here though; although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….”

I would add to E.J.’s observations the following points.

First, social psychology journals (and journals for other areas in psychology) exhibit a strong bias towards publishing only studies that have achieved a statistically significant result. This bias is widely believed in by researchers and their students. The obvious temptation arising from this is to ease an inconclusive finding into being conclusive by adding more “favorable” cases or making some of the unfavorable ones more favorable.

Second, and of course readers will recognize one of my hobby-horses here, the addiction in psychology to hypothesis-testing over parameter estimation amounts to an insistence that every study yield a conclusion or decision: Did the null hypothesis get rejected? The obvious remedy for this is to develop a publication climate that does not insist that each and every study be “conclusive,” but instead emphasizes the importance of a cumulative science built on multiple independent studies, careful parameter estimates and multiple tests of candidate theories. This adds an ethical and motivational rationale to calls for a shift to Bayesian methods in psychology.

Third, journal editors and reviewers routinely insist on more than one study per article. On the surface, this looks like what I’ve just asked for, a healthy insistence on independent replication. It isn’t, for two reasons. First, even if the multiple studies are replications, they aren’t independent because they come from the same authors and/or lab. Genuinely independent replicated studies would be published in separate papers written by non-overlapping sets of authors from separate labs. However, genuinely independent replication earns no kudos and therefore is uncommon (not just in psychology, either—other sciences suffer from this problem, including those that used to have a tradition of independent replication).

The second reason is that journal editors don’t merely insist on study replications, they also favor studies that come up with “consistent” rather than “inconsistent” findings (i.e., privileging “successful” replications over “failed” replications). By insisting on multiple studies that reproduce the original findings, journal editors are tempting researchers into corner-cutting or outright fraud in the name of ensuring that that first study’s findings actually get replicated. E.J.’s observation that surprising hypotheses are unlikely to be supported by data goes double (squared, actually) when it comes to replication—Support for a surprising hypothesis may occur once in a while, but it is unlikely to occur twice in a row. Again, remedies are obvious: Develop a publication climate which encourages or even insists on independent replication, that treats well-conducted “failed” replications identically to well-conducted “successful” ones, and which does not privilege “replications” from the same authors or lab of the original study.

None of this is meant to say I fall for cultural determinism—Most researchers face the pressures and motivations described above, but few cheat. So personality factors may also exert an influence, along with circumstances specific to those of us who give in to the temptations of cheating. Nevertheless if we want to prevent more Stapels, we’ll get farther by changing the research culture and its motivational effects than we will by exhorting researchers to be good or lecturing them about ethical principles of which they’re already well aware. And we’ll get much farther than we would in a futile attempt to place the collection and entry of every single datum under surveillance by some Stasi-for-scientists.

Written by michaelsmithson

September 14, 2011 at 9:36 am

An Ignorance Economy?


It’s coming up to a year since I began this blog. In my usual fashion, I set myself the unrealistic goal of writing a post every week. This is only the 37th, so I’ve fallen short by a considerable margin. On the other hand, most of those posts have been on the order of 1500 words long, for a total of about 55,500 words thus far. That seems a fair whack of the keyboard, and it’s been fun too.

In an earlier post I proposed that because of the ways in which knowledge economies work, we increasingly live in an “ignorance society.” In the same year that Sheldon Ungar’s paper on ignorance as a public problem appeared, another paper came out by Joanne Roberts and John Armitage with the intriguing title “The Ignorance Economy.” Their stated purpose was to critique the notion of a knowledge economy via an investigation of ignorance from an economic standpoint.

As Roberts and Armitage (and many others) have said, knowledge as a commodity has several distinctive features. Once knowledge is consumed, it does not disappear and indeed its consumption may result in the development of more knowledge. The consumption of knowledge is non-zero-sum and can be non-excludable. Knowledge is a multiplier resource in this sense. Finally, knowledge is not subject to diminishing returns.

Interestingly, Roberts and Armitage do not say anything substantial about ignorance as a commodity. We already have some characterizations handy from this blog and elsewhere. Like knowledge, ignorance can be non-zero-sum and non-excludable in the sense that my being ignorant about X doesn’t prevent you from also being ignorant about X, nor does greater ignorance on my part necessarily decrease your ignorance. Ignorance also does not seem susceptible to diminishing returns. And of course, new knowledge can generate ignorance, and an important aspect of an effective knowledge-based economy is its capacity for identifying and clarifying unknowns. Even in a booming knowledge economy, ignorance can be a growth industry in its own right.

There are obvious examples of economies that could, in some sense, be called “ignorance economies.” Education and training are ignorance economies in the sense that educators and trainers make their living via a continual supply of ignoramuses who are willing to pay for the privilege of shedding that status. Likewise, governmental and corporate organizations paying knowledge experts enable those experts to make a living out of selling their expertise to those who lack it. This is simply the “underbelly” of knowledge economies, as Roberts and Armitage point out.

But what about making ignorance pay? Roberts and Armitage observe that agents in knowledge economies set about this in several ways. First, there is the age-old strategy of intellectual property protection via copyright, patents, or outright secrecy. Hastening the obsolescence of knowledge and/or skills is another strategy. Entire trades, crafts and even professions have been de-skilled or rendered obsolete. And how about that increasingly rapid deluge of updates and “upgrades” imposed on us?

A widespread observation about the knowledge explosion is that it generates an ensuing ignorance explosion, both arising from and resulting in increasing specialization. The more specialized a knowledge economy is, the greater are certain opportunities to exploit ignorance for economic gains. These opportunities arise in at least three forms. First, there are potential coordination and management roles for anyone (or anything) able to pull a large unstructured corpus of data into a usable structure or, better still, a “big picture.” Second, making sense of data has become a major industry in its own right, giving rise to ironically specialized domains of expertise such as statistics and information technology.

Third, Roberts and Armitage point to the long-established trend for consumer products to require less knowledge for their effective use. So consumers are enticed to become more ignorant about how these products work, how to repair or maintain them, and how they are produced. You don’t have to be a Marxist to share a cynical but wry guffaw with Roberts and Armitage as they confess, along with the rest of us, to writing their article using a computer whose workings they are happily ignorant about. One must admit that this is an elegant, if nihilistic solution to Sheldon Ungar’s problem that the so-called information age has made it difficult to agree on a human-sized common stock of knowledge that we all should share.

Oddly, Roberts and Armitage neglect two additional (also age-old) strategies for exploiting ignorance for commercial gain and/or political power. First, an agent can spread disinformation and, if successful, make money or power out of deception. Second, an agent can generate uncertainty in the minds of a target population, and leverage wealth and/or power out of that uncertainty. Both strategies have numerous exemplars throughout history, from legitimate commercial or governmental undertakings to terrorism and organized crime.

Roberts and Armitage also neglect the kinds of ignorance-based “social capital” that I have written about, both in this blog and elsewhere. Thus, for example, in many countries the creation and maintenance of privacy, secrecy and censorship engage economic agents of considerable size in both the private and public sectors. All three are, of course, ignorance arrangements. Likewise, trust-based relations have distinct economic advantages over relations based on assurance through contracts, and trust is partially an ignorance arrangement.

More prosaically, do people make their living by selling their ignorance? I once met a man who claimed he did so, primarily on a consulting basis. His sales-pitch boiled down to declaring “If you can make something clear to me, you can make it clear to anyone.” He was effectively making the role of a “beta-tester” pay off. Perhaps we may see the emergence of niche markets for specific kinds of ignoramuses.

But there already is, arguably, a sustainable market for generalist ignoramuses. Roberts and Armitage moralize about the neglect by national governments of “regional ignorance economies,” by which they mean subpopulations of workers lacking any qualifications whatsoever. Yet these are precisely the kinds of workers needed to perform jobs for which everyone else would be over-qualified and, knowledge economy or not, such jobs are likely to continue abounding for some time to come.

I’ve watched seven children on my Australian middle-class suburban cul-de-sac grow to adulthood over the past 14 years. Only one of them has gone to university. Why? Well, for example, one of them realized he could walk out of school after 10th grade, go to the mines, drive a big machine and immediately command a $90,000 annual salary. The others made similar choices, although not as high-paying as his but still favorable in short-term comparisons to their age-mates heading off to uni to rack up tens-of-thousands-of-dollars debts. The recipe for maintaining a ready supply of generalist ignoramuses is straightforward: Make education or training sufficiently unaffordable and difficult, and/or unqualified work sufficiently remunerative and easy. An anti-intellectual mainstream culture helps, too, by the way.

Written by michaelsmithson

September 11, 2011 at 1:31 pm

Can Greater Noise Yield Greater Accuracy?


I started this post in Hong Kong airport, having just finished one conference and heading to Innsbruck for another. The Hong Kong meeting was on psychometrics and the Innsbruck conference was on imprecise probabilities (believe it or not, these topics actually do overlap). Anyhow, Annemarie Zand Scholten gave a neat paper at the math psych meeting in which she pointed out that, contrary to a strong intuition that most of us have, introducing and accounting for measurement error can actually sharpen up measurement. Briefly, the key idea is that an earlier “error-free” measurement model of, say, human comparisons between pairs of objects on some dimensional characteristic (e.g., length) could only enable researchers to recover the order of object length but not any quantitative information about how much longer people were perceiving one object to be than another.

I’ll paraphrase (and amend slightly) one of Annemarie’s illustrations of her thesis, to build intuition about how her argument works. In our perception lab, we present subjects with pairs of lines and ask them to tell us which line they think is the longer. One subject, Hawkeye Harriet, perfectly picks the longer of the two lines every time—regardless of how much longer one is than the other. Myopic Myra, on the other hand, has imperfect visual discrimination and thus sometimes gets it wrong. But she’s less likely to choose the wrong line if the two lines’ lengths considerably differ from one another. In short, Myra’s success-rate is positively correlated with the difference between the two line-lengths whereas Harriet’s uniformly 100% success rate clearly is not.

Is there a way that Myra’s success- and error-rates could tell us exactly how long each object is, relative to the others? Yes. Let p_ij be the probability that Myra picks the ith object as longer than the jth object, and p_ji = 1 – p_ij be the probability that Myra picks the jth object as longer than the ith object. If the ith object has length L_i and the jth object has length L_j, then if p_ij/p_ji = L_i/L_j, Myra’s choice-rates perfectly mimic the ratio of the ith and jth objects’ lengths. This neat relationship owes its nature to the fact that a characteristic such as length has an absolute zero, so we can meaningfully compare lengths by taking ratios.
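
Here is a small simulation sketch (Python; the choice rule P(pick i over j) = L_i/(L_i + L_j) is an assumption made for this illustration, chosen so that p_ij/p_ji = L_i/L_j holds) showing how Myra’s error-prone choices recover a length ratio that Harriet’s perfect choices cannot.

import numpy as np

rng = np.random.default_rng(3)
lengths = np.array([2.0, 3.0, 5.0])   # true lengths, unknown to the experimenter
trials = 50_000

# Assumed choice rule for Myra: P(pick i over j) = L_i / (L_i + L_j)
i, j = 0, 2
p_ij = lengths[i] / (lengths[i] + lengths[j])
picks = rng.random(trials) < p_ij     # Myra's simulated choices over many trials
p_hat = picks.mean()

# The observed choice-rate ratio estimates the length ratio
print(p_hat / (1 - p_hat), lengths[i] / lengths[j])   # both close to 0.4

Harriet, by contrast, would give p_hat = 1 for every pair, so her choice rates tell us only the order of the lengths, not their ratio.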

How about temperature? This is slightly trickier, because if we’re using a popular scale such as Celsius or Fahrenheit then the zero-point of the scale isn’t absolute in the sense that length has an absolute zero (i.e., you can have Celsius and Fahrenheit readings below zero, and each scale’s zero-point differs from the other). Thus, 60 degrees Fahrenheit is not twice as warm as 30 degrees Fahrenheit. However, the differences between temperatures can be compared via ratios. For instance, 40 degrees F is twice as far from 20 degrees F as 10 degrees F is.

We just need a common “reference” object against which to compare each of the others. Suppose we’re asking Myra to choose which of a pair of objects is the warmer. Assuming that Myra’s choices are transitive, there will be an object she chooses less often than any of the others in all of the paired comparisons. Let’s refer to that object as the Jth object. Now suppose the ith object has temperature T_i, the jth object has temperature T_j, and the Jth object has temperature T_J, which is lower than both T_i and T_j. Then if Myra’s choice-rate ratio is
p_iJ/p_jJ = (T_i – T_J)/(T_j – T_J),
she functions as a perfect measuring instrument for temperature comparisons between the ith and jth objects. Again, Hawkeye Harriet’s choice-rates will be p_iJ = 1 and p_jJ = 1 no matter what T_i and T_j are, so her ratio always is 1.

If we didn’t know what the ratios of those lengths or temperature differences were, Myra would be a much better measuring instrument than Harriet even though Harriet never makes mistakes. Are there such situations? Yes, especially when it comes to measuring mental or psychological characteristics for which we have no direct access, such as subjective sensation, mood, or mental task difficulty.

Which of 10 noxious stimuli is the more aversive? Which of 12 musical rhythms makes you feel more joyous? Which of 20 types of puzzle is the more difficult? In paired comparisons between each possible pair of stimuli, rhythms or puzzles, Hawkeye Harriet will pick what for her is the correct pair every time, so all we’ll get from her is the rank-order of stimuli, rhythms and puzzles. Myopic Myra will less reliably and less accurately choose what for her is the correct pair, but her choice-rates will be correlated with how dissimilar each pair is. We’ll recover much more precise information about the underlying structure of the stimulus set from error-prone Myra.

Annemarie’s point about measurement is somewhat related to another fascinating phenomenon known as stochastic resonance. Briefly paraphrasing the Wikipedia entry for stochastic resonance (SR), SR occurs when a measurement or signal-detecting system’s signal-to-noise ratio increases when a moderate amount of noise is added to the incoming signal or to the system itself. SR usually is observed either in bistable or sub-threshold systems. Too little noise results in the system being insufficiently sensitive to the signal; too much noise overwhelms the signal. Evidence for SR has been found in several species, including humans. For example, a 1996 paper in Nature reported a demonstration that subjects asked to detect a sub-threshold impulse via mechanical stimulation of a fingertip maximized the percentage of correct detections when the signal was mixed with a moderate level of noise. One way of thinking about the optimized version of Myopic Myra as a measurement instrument is to model her as a “noisy discriminator,” with her error-rate induced by an optimal random noise-generator mixed with an otherwise error-free discriminating mechanism.
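
Here is a toy sketch of the SR idea (Python; the sinusoidal signal, hard threshold and noise levels are all made up for the demonstration): a detector that only fires above a threshold misses a sub-threshold signal entirely when there is too little noise, tracks it best at a moderate noise level, and loses it again when the noise becomes overwhelming.

import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0, 20, 4000)
signal = 0.8 * np.sin(2 * np.pi * t)   # sub-threshold signal
threshold = 1.0                        # detector fires only above this level

def detection_quality(noise_sd, reps=30):
    # How well the detector's firing pattern correlates with the signal
    corrs = []
    for _ in range(reps):
        noisy = signal + rng.normal(0, noise_sd, size=signal.size)
        fired = (noisy > threshold).astype(float)
        # If the detector never fires, it carries no information about the signal
        corrs.append(0.0 if fired.std() == 0 else np.corrcoef(fired, signal)[0, 1])
    return np.mean(corrs)

for sd in [0.05, 0.3, 0.7, 1.5, 3.0]:
    print(sd, round(detection_quality(sd), 3))
# the correlation is near zero with very little noise, peaks at a moderate
# noise level, and falls away again as the noise swamps the signal

One way of thinking about the optimized Myra is that she sits near that middle noise level: enough randomness to make her choice rates informative, not so much that they become pure noise.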

Written by michaelsmithson

August 14, 2011 at 10:47 am

Expertise on Expertise


Hi, I’m back again after a few weeks’ travel (presenting papers at conferences). I’ve already posted material on this blog about the “ignorance explosion.” Numerous writings have taken up the theme that there is far too much relevant information for any of us to learn and process and the problem is worsening, despite the benefits of the internet and effective search-engines. We all have had to become more hyper-specialized and fragmented in our knowledge-bases than our forebears, and many of us find it very difficult as a result to agree with one another about the “essential” knowledge that every child should receive in their education and that every citizen should possess.

Well, here is a modest proposal for one such essential: We should all become expert about experts and expertise. That is, we should develop meta-expertise.

We can’t know everything, but knowing an expert when we see one, being able to tell the difference between an expert and an impostor, and knowing what it takes to become an expert can guide our search for assistance in all things about which we’re ignorant. A meta-expert should:

  1. Know the broad parameters of and requirements for attaining expertise;
  2. Be able to distinguish a genuine expert from a pretender or a charlatan;
  3. Know whether expertise is attainable in a given domain, and when it is not;
  4. Possess effective criteria for evaluating expertise, within reasonable limits; and
  5. Be aware of the limitations of specialized expertise.

Let’s start with that strongly democratic source of expertise: Wikipedia’s take on experts:

“In many domains there are objective measures of performance capable of distinguishing experts from novices: expert chess players will almost always win games against recreational chess players; expert medical specialists are more likely to diagnose a disease correctly; etc.”

That said, the Wikipedia entry also raises a potentially vexing point, namely that “expertise” may come down to merely a matter of consensus, often dictated by the self-same “experts.” Examples readily spring to mind in areas where objective measures are hard to come by, such as the arts. But consider also domains where objective measures may be obtainable but not assessable by laypeople. Higher mathematics is a good example. Only a tiny group of people on the planet were capable of assessing whether Andrew Wiles really had proven Fermat’s Last Theorem. The rest of us have to take their word for it.

A crude but useful dichotomy splits views about expertise into two camps: Constructivist and performative. The constructivist view emphasizes the influence of communities of practice in determining what expertise is and who is deemed to have it. The performative view portrays expertise as a matter of learning through deliberative practice. Both views have their points, and many domains of expertise have elements of both. Even domains where objective indicators of expertise are available can have constructivist underpinnings. A proficient modern-day undergraduate physics student would fail late 19th-century undergraduate physics exams; and experienced medical practitioners emigrating from one country to another may find their qualifications and experience unrecognized by their adopted country.

What are the requirements for attaining deep expertise? Two popular criteria are talent and deliberative practice. Regarding deliberative practice, a much-discussed rule of thumb is the “10,000 hour rule.” This rule was popularized in Malcolm Gladwell’s book Outliers, and some authors misattribute it to him. It actually dates back to studies of chess masters in the 1970s (see Ericsson, K. A., R. Th. Krampe, and C. Tesch-Römer, 1993), and its generalizability to other domains still is debatable. Nevertheless, the 10K rule has some merit, and unfortunately it has been routinely ignored in many psychological studies comparing “experts” with novices, where the “experts” often are undergraduates who have been given a few hours’ practice on a relatively trivial task.

The 10K rule can be a useful guide but there’s an important caveat. It may be a necessary but it is by no means a sufficient condition for guaranteeing deep expertise. At least three other conditions have to be met: Deliberative and effective practice in a domain where deep expertise is attainable. Despite this quite simple line of reasoning, plenty of published authors have committed the error of viewing the 10K rule as both necessary and sufficient. Gladwell didn’t make this mistake, but Jane McGonigal’s recent book on video and computer games devotes considerable space to the notion that because gamers are spending upwards of 10K hours playing games they must be attaining deep “expertise” of some kind. Perhaps some may be, provided they are playing games of sufficient depth. But many will not. (BTW, McGonigal’s book is worth a read despite her over-the-top optimism about how games can save the world—and take a look at her game-design collaborator Bogost’s somewhat dissenting review of her book).

Back to the caveats. First, no deliberation makes practice useless. Having spent approximately 8 hours every day sleeping for the past 61 years (178,120 hours) hasn’t made me an expert on sleep. Likewise, deliberative but ineffective practice methods deny us top-level expertise. Early studies of Morse Code experts demonstrated that mere deliberative practice did not guarantee best performance results; specific training regimes were required instead. Autodidacts with insight and aspirations to attain the highest performative levels in their domains eventually realise how important getting the “right” coaching or teaching is.

Finally, there is the problem of determining whether effective, deliberative practice yields deep expertise in any domain. The domain may simply not be “deep” enough. In games of strategy, tic-tac-toe is a clear example of insufficient depth, checkers is a less obvious but still clear example, whereas chess and go clearly have sufficient depth.

Tic-tac-toe aside, are there domains that possess depth where deep expertise nevertheless is unattainable? There are, at least, some domains that are deeply complex where “experts” perform no better than less trained individuals or simple algorithms. Psychotherapy is one such domain. There is a plethora of studies demonstrating that clinical psychologists’ predictions of patient outcomes are worse than those of simple linear regression models (cf. Dawes’ searing indictment in his 1994 book) and that sometimes experts’ decisions are no more accurate than beginners’ decisions and simple decision aids. Similar results have been reported for financial planners and political experts. In Philip Tetlock’s 2005 book on so-called “expert” predictions, he finds that many so-called experts perform no better than chance in predicting political events, financial trends, and so on.

What can explain the absence of deep expertise in these instances? Tetlock attributes experts’ poor performance to two factors, among others: Hyperspecialization and overconfidence. “We reach the point of diminishing marginal predictive returns for knowledge disconcertingly quickly,” he reports. “In this age of academic hyperspecialization, there is no reason for supposing that contributors to top journals—distinguished political scientists, area study specialists, economists, and so on—are any better than journalists or attentive readers of the New York Times in ‘reading’ emerging situations.” And the more famous the forecaster the more overblown the forecasts. “Experts in demand,” Tetlock says, “were more overconfident than their colleagues who eked out existences far from the limelight.” Tetlock also claims that cognitive style counts: “Foxes” tend to outperform “hedgehogs.” These terms are taken from Isaiah Berlin’s popular essay: Foxes know a little about lots of things, whereas hedgehogs know one big thing.

Another contributing factor may be a lack of meta-cognitive insight on the part of the experts. A hallmark of expertise is ignoring (not ignorance). This proposition may sound less counter-intuitive if it’s rephrased to say that experts know what to ignore. In an earlier post I mentioned Mary Omodei and her colleagues’ chapter in a 2005 book on professionals’ decision making in connection with this claim. Their chapter opens with the observation of a widespread assumption that domain experts also know how to optimally allocate their cognitive resources when making judgments or decisions in their domain. Their research with expert fire-fighting commanders cast doubt on this assumption.

The key manipulations in the Omodei simulated fire-fighting experiments determined the extent to which commanders had unrestricted access to “complete” information about the fires, weather conditions, and other environmental matters. They found that commanders performed more poorly when information access was unrestricted than when they had to request information from subordinates. They also found that commanders performed more poorly when they believed all available information was reliable than when they believed that some of it was unreliable. The disquieting implication of these findings is that domain expertise doesn’t include meta-cognitive expertise.

Cognitive biases and styles aside, another contributing set of factors may be the characteristics of the complex, deep domains themselves that render deep expertise very difficult to attain. Here is a list of tests you can apply to such domains by way of evaluating their potential for the development of genuine expertise:

  1. Stationarity? Is the domain stable enough for generalizable methods to be derived? In chaotic systems long-range prediction is impossible because of initial-condition sensitivity. In human history, politics and culture, the underlying processes may not be stationary at all.
  2. Rarity? When it comes to prediction, rare phenomena simply are difficult to predict (see my post on making the wrong decisions most of the time for the right reasons).
  3. Observability? Can the outcomes of predictions or decisions be directly or immediately observed? For example in psychology, direct observation of mental states is nearly impossible, and in climatology the consequences of human interventions will take a very long time to unfold.
  4. Objective or even impartial criteria? For instance, what is “good,” “beautiful,” or even “acceptable” in domains such as music, dance or the visual arts? Are such domains irreducibly subjective and culture-bound?
  5. Testability? Are there clear criteria for when an expert has succeeded or failed? Or is there too much “wiggle-room” to be able to tell?

Finally, here are a few tests that can be used to evaluate the “experts” in your life:

  1. Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?
  2. Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?
  3. Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested ask them to estimate the percentage of time they’re going to be correct. Compare that estimate with the resulting percentage correct (a small sketch of this comparison appears after this list). If their estimate was too high, your expert may suffer from overconfidence.
  4. Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?
  5. Hedgehog-Fox test: Tetlock found that foxes were better calibrated and more able to entertain self-disconfirming counterfactuals than hedgehogs, but allowed that hedgehogs can occasionally be “stunningly right” in a way that foxes cannot. Is your expert a fox or a hedgehog?
  6. Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?
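
By way of illustration for test 3, here is a minimal Python sketch of the comparison. The predictions and the claimed accuracy figure are hypothetical, invented purely for the example:

```python
# Minimal sketch of the overconfidence check in test 3 (hypothetical data).
# predictions: list of (predicted_outcome, actual_outcome) pairs for yes-no calls.
# claimed_accuracy: the expert's own estimate of how often they'll be right.

def overconfidence_gap(predictions, claimed_accuracy):
    """Return claimed accuracy minus observed accuracy (positive = overconfident)."""
    hits = sum(1 for predicted, actual in predictions if predicted == actual)
    observed_accuracy = hits / len(predictions)
    return claimed_accuracy - observed_accuracy

# Example: the expert claims 90% accuracy but gets 7 of 10 predictions right.
calls = [(True, True), (True, False), (False, False), (True, True), (False, True),
         (True, True), (False, False), (True, False), (True, True), (False, False)]
print(overconfidence_gap(calls, 0.90))  # 0.90 - 0.70 = 0.20, a sizeable gap
```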

Written by michaelsmithson

August 11, 2011 at 11:26 am

You Can Never Plan the Future by the Past

with 2 comments

The title of this post is, of course, a famous quotation from Edmund Burke. This is a personal account of an attempt to find an appropriate substitute for such a plan. My siblings and I persuaded our parents that the best option for financing their long-term in-home care is a reverse mortgage. At first glance, the problem seems fairly well-structured: choose the best reverse mortgage setup for my elderly parents. After all, this is the kind of problem for which economists and actuaries claim to have appropriate methods.

There are two viable strategies for utilizing the loan from a reverse mortgage: take out a line of credit from which my parents can draw as they wish, or a tenure (fixed) schedule of monthly payments to their nominated savings account. The line of credit (LOC) option’s main attraction is its flexibility. However, the LOC runs out when the equity in my parents’ property is exhausted, whereas the tenure payments (TP) continue as long as they live in their home. So if either of them is sufficiently long-lived then the TP could be the safer option. On the other hand, the LOC may be more robust against unexpected expenses (e.g., medical emergencies or house repairs). Of course, one can opt for a mixture of TP and LOC.

So, this sounds like a standard optimization problem: What’s the optimal mix of TP and LOC? Here we run into the first hurdle: “Optimal” by what criteria? One criterion is to maximize the expected remaining equity in the property. This criterion might be appealing to their offspring, but it doesn’t do my parents much good. Another criterion that should appeal to my parents is maximizing the expected funds available to them. Fortunately, my siblings and I are more concerned for our parents’ welfare than what we’d get from the equity, so we’re happy to go with the second criterion. Nevertheless, it’s worth noting that this issue poses a deeper problem in general—How would a family with interests in both criteria come up with an appropriate weighting for each of them, especially if family members disagreed on the importance of these criteria?

Meanwhile, having settled on an optimization criterion, the next step would seem to be computing the expected payout to my parents for various mixtures of TP and LOC. But wait a minute. Surely we also should be worried about the possibility that some financial exigency could exhaust their funds altogether. So, we could arguably consider a third criterion: Minimizing the probability of their running out of funds. So now we encounter a second hurdle: How do we weigh up maximizing expected payout to our parents against the likelihood that their funds could run out? It might seem as if maximizing payout would also minimize that probability, but this is not necessarily so. A strategy that maximized expected payout could also increase the variability of the available funds over time so that the probability of ruin is increased.
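
To see how this can happen, here is a toy Python simulation. The dollar figures and distributions are made up for illustration and bear no relation to our actual models; the point is simply that a strategy with a higher expected annual payout but much greater variability can carry a higher probability of running out of funds.

```python
import random

# Toy illustration (made-up numbers): two payout strategies with similar expected
# values but different variability, and hence different probabilities of ruin.
def simulate_ruin(annual_mean, annual_sd, start_funds=100_000, expenses=40_000,
                  years=20, runs=10_000, seed=1):
    random.seed(seed)
    ruined = 0
    for _ in range(runs):
        funds = start_funds
        for _ in range(years):
            funds += random.gauss(annual_mean, annual_sd) - expenses
            if funds < 0:
                ruined += 1
                break
    return ruined / runs

print(simulate_ruin(annual_mean=42_000, annual_sd=2_000))   # steadier payouts
print(simulate_ruin(annual_mean=45_000, annual_sd=25_000))  # higher mean, more volatile
```

The particular numbers don’t matter; what matters is that expected payout and probability of ruin can pull in different directions, so they have to be weighed against one another.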

Then there are the unknowns: how long our parents might live, what expenses they might incur (e.g., medical or in-home care), inflation, the behaviour of the LIBOR index that determines the interest rate on what is drawn down from the mortgage, and appreciation or depreciation of the property value. It is possible to come up with plausible-looking models for each of these by using standard statistical tools, and that’s exactly what I did.

I pulled down life-expectancy tables for American men and women born when my parents were born, more than two decades of monthly data on inflation in the USA, a similar amount of monthly data on the LIBOR, and likewise for real-estate values in the area where my parents live. I fitted several “lifetime” distributions to the relevant parts of the life-expectancy tables to model the probability of my parents living 1, 2, 3, … years longer given that they have survived to their mid-80s, and arrived at a model that fitted the data very well. I modeled the inflation, LIBOR and real-estate data with standard time-series (ARIMA) models whose squared correlations with the data were .91, .98, and .91 respectively: all very good fits.
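
For readers who like to see what this kind of fitting looks like in practice, here is a rough Python sketch using scipy and statsmodels. It is not the code I actually used: the data arrays are placeholders standing in for the real series, and the Weibull and ARIMA(1,1,1) choices are stand-ins for whichever lifetime distribution and ARIMA orders fit best.

```python
# Rough sketch of the kind of fitting described above (not the code I actually used).
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
ages_at_death = rng.weibull(8.0, size=5000) * 90           # placeholder cohort data
libor_monthly = np.cumsum(rng.normal(0, 0.1, 240)) + 3.0   # placeholder monthly series

# Fit a Weibull "lifetime" distribution, then condition on survival to the mid-80s.
shape, loc, scale = stats.weibull_min.fit(ages_at_death, floc=0)

def prob_survive_extra(years, current_age=85):
    """P(living at least `years` more, given already alive at `current_age`)."""
    sf = stats.weibull_min.sf
    return (sf(current_age + years, shape, loc=loc, scale=scale)
            / sf(current_age, shape, loc=loc, scale=scale))

# Fit an ARIMA model to the (placeholder) LIBOR series and simulate future paths
# forward from the end of the observed data.
libor_model = ARIMA(libor_monthly, order=(1, 1, 1)).fit()
libor_paths = libor_model.simulate(nsimulations=120, anchor="end", repetitions=1000)
```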

Finally, my brothers and sisters-in-law obtained the necessary information from my mother regarding our parents’ expenses in the recent past, their income from pensions and so on, and we made some reasonable forecasts of additional expenses that we can foresee in the near term. The transition in this post from “I” to “we” is crucial. This was very much a joint effort. In particular, my youngest brother’s sister-in-law made most of the running on determining the ins and outs of reverse mortgages. She has a terrifically analytical intelligence, and we were able to cross-check one another’s perceptions, intuitions, and calculations.

Armed with all of this information and well-fitted models, it would seem that all we should need to do is run a large enough batch of simulations of the future for each reverse-mortgage scenario under consideration to get reliable estimates of expected payout, expected equity, the probability of ruin, and so on. The inflation model would simulate fluctuations in expenses, the LIBOR model would do so for the interest-rates, the real-estate model for the property value, and the life-expectancy model for how long our parents would live.
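
A stripped-down sketch of one scenario’s simulation loop looks roughly like the following. The stand-in model draws and dollar amounts here are hypothetical and much cruder than the fitted models described above; the point is the structure: draw a lifetime, simulate year-by-year expenses, payments and line-of-credit drawdowns, and tally ruin frequencies and final balances over many runs.

```python
import random

# Stripped-down sketch of one scenario's simulation loop. The draws below are crude
# stand-ins for the fitted ARIMA/lifetime models; all figures are hypothetical.
def run_scenario(tenure_payment, loc_limit, runs=10_000, seed=42):
    random.seed(seed)
    ruins, balances = 0, []
    for _ in range(runs):
        funds, loc_left = 20_000, loc_limit
        years_alive = max(1, int(random.weibullvariate(8, 4)))   # stand-in lifetime draw
        for _ in range(years_alive):
            expenses = 45_000 * (1 + random.gauss(0.03, 0.01))   # stand-in inflation draw
            if random.random() < 0.1:                            # occasional shock expense
                expenses += random.uniform(5_000, 75_000)
            funds += tenure_payment - expenses
            if funds < 0 and loc_left > 0:                       # draw on the line of credit
                draw = min(-funds, loc_left)
                funds, loc_left = funds + draw, loc_left - draw
            if funds < 0:                                        # funds and LOC both exhausted
                ruins += 1
                break
        balances.append(funds)
    return ruins / runs, sum(balances) / len(balances)

# Compare a TP-heavy mix with an LOC-heavy mix (hypothetical figures).
print(run_scenario(tenure_payment=50_000, loc_limit=50_000))
print(run_scenario(tenure_payment=20_000, loc_limit=350_000))
```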

But there are at least two flaws in my approach. First, it assumes that my parents’ life-spans can best be estimated by considering them as if they are randomly chosen from the population of American men and women born when they were born who have survived to their mid-80’s. Should I take additional characteristics about them into account and base my estimates on only those who share those characteristics as well as their nation and birth-year? What about diet, or body-mass index, or various aspects of their medical histories? This issue is known as the reference-class problem, and it bedevils every school of statistical inference.

What did I do about this? I fudged my life-expectancy model to be “conservative,” i.e., so that it assumes my parents have a somewhat longer life-span than the original model suggests. In short, I tweaked my model as a risk-averse agent would: the longer my parents live, the greater the risk that they will run short of funds.

The second flaw in my approach is more fundamental. It assumes that the future is going to be just like the past. And before anyone says anything, yes, I’ve read Taleb’s The Black Swan (and was aware of most of the material he covered before reading his book), and yes, I’m aware of most criticisms that have been raised against the kind of models I’ve constructed. The most problematic assumption in my models is what is called stationarity, i.e., that the process driving the ups and downs of, say, the LIBOR index has stable characteristics. There were clear indications that the real-estate market fluctuations in the area where my parents live do not resemble a stationary process, and therefore I should not trust my ARIMA model very much despite its high correlation with the data.

Let me also point out the difference between my approach and the materials provided to us by potential lenders and the HUD counsellor. Their scenarios and forecasts are one-shot spreadsheets that don’t simulate my parents’ expenses, the impact of inflation, or fluctuations in real-estate markets. Indeed, the standard assumption about the latter in their spreadsheets is a constant appreciation in property value of 4% per year.

My simulations are literally equivalent to 10,000 spreadsheets for each scenario, each spreadsheet an appropriate random sample from an uncertain future, and capable of being tweaked to include possibilities such as substantial real-estate downturns. I also incorporated random “shock” expenditures on the order of $5-$75K to see how vulnerable each scenario was to unexpected expenses.

The upshot of all this was that the mix of LOC and TP had a substantial effect on the probability of running out of money, but not a large impact on expected balance or equity (the other factors had large impacts on those). So at least we could home in on a robust mix of LOC and TP, one that would have a lower risk of running out of money than others. This criterion became the primary driver in our choice. We also can monitor how our parents’ situation evolves and revise the mix if necessary.

What about maximizing expected utility? Or optimizing in any sense of the term? No, and no. The deep unknowns inherent even in this relatively well-structured problem make those unattainable goals. What can we do instead? Taleb’s advice is to pay attention to consequences instead of probabilities. This is known as “dominance reasoning.” If option A yields better outcomes than option B no matter what the probabilities of those outcomes are, choose option A. But life often isn’t that simple. We can’t do that here because the comparative outcomes of alternative mixtures of LOC and TP depend on probabilities.
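
For what it’s worth, dominance is trivial to check when it does apply; here is a minimal sketch with hypothetical payoffs:

```python
# Minimal sketch of a dominance check: option A dominates option B if it does at
# least as well in every possible state and strictly better in at least one.
def dominates(outcomes_a, outcomes_b):
    pairs = list(zip(outcomes_a, outcomes_b))
    return all(a >= b for a, b in pairs) and any(a > b for a, b in pairs)

# Hypothetical payoffs across three states of the world (no probabilities needed).
print(dominates([5, 7, 9], [4, 7, 8]))  # True: A is never worse and sometimes better
print(dominates([5, 7, 9], [6, 2, 8]))  # False: the ranking depends on which state occurs
```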

Instead, we have ended up closer to the “bounded rationality” that Herbert Simon wrote about. We can’t claim to have optimized, but we do have robustness and corrigibility on our side, two important criteria for good decision making under ignorance (described in my recent post on that topic). Perhaps most importantly, the simulations gave us insights none of our intuitions could, into how variable the future can be and the consequences of that variability. Burke was right. We can’t plan the future by the past. But sometimes we can chart a steerable course into that future armed with a few clues from the past to give us an honest check on our intuitions, and a generous measure of scepticism about relying too much on those clues.

Communicating about Uncertainty in Climate Change, Part II

with 5 comments

In my previous post I attempted to provide an overview of the IPCC 2007 report’s approach to communicating about uncertainties regarding climate change and its impacts. This time I want to focus on how the report dealt with probabilistic uncertainty. It is this kind of uncertainty that the report treats most systematically. I mentioned in my previous post that Budescu et al.’s (2009) empirical investigation of how laypeople interpret verbal probability expressions (PEs, e.g., “very likely”) in the IPCC report revealed several problematic aspects, and a paper I have co-authored with Budescu’s team (Smithson, et al., 2011) yielded additional insights.

The approach adopted by the IPCC is one that has been used in other contexts, namely identifying probability intervals with verbal PEs. Their guidelines are as follows:
  Virtually certain: >99%
  Extremely likely: >95%
  Very likely: >90%
  Likely: >66%
  More likely than not: >50%
  About as likely as not: 33% to 66%
  Unlikely: <33%
  Very unlikely: <10%
  Extremely unlikely: <5%
  Exceptionally unlikely: <1%

One unusual aspect of these guidelines is their overlapping intervals. For instance, “likely” takes the interval [.66,1] and thus contains the interval [.90,1] for “very likely,” and so on. The only interval that doesn’t overlap with others is “as likely as not.” Other interval-to-PE guidelines I am aware of use non-overlapping intervals. An early example is Sherman Kent’s attempt to standardize the meanings of verbal PEs in the American intelligence community.
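
For what follows, it helps to have the guideline intervals in a machine-checkable form. Here is my own encoding (not an official IPCC artifact), together with a simple consistency check of the kind used later when assessing subjects’ estimates:

```python
# The IPCC guideline intervals, encoded as (lower, upper) probability bounds.
# This encoding is mine, for illustration; it is not an official IPCC artifact.
IPCC_INTERVALS = {
    "virtually certain":      (0.99, 1.00),
    "extremely likely":       (0.95, 1.00),
    "very likely":            (0.90, 1.00),
    "likely":                 (0.66, 1.00),
    "more likely than not":   (0.50, 1.00),
    "about as likely as not": (0.33, 0.66),
    "unlikely":               (0.00, 0.33),
    "very unlikely":          (0.00, 0.10),
    "extremely unlikely":     (0.00, 0.05),
    "exceptionally unlikely": (0.00, 0.01),
}

def is_consistent(phrase, estimate):
    """Does a reader's numerical estimate fall inside the guideline interval?"""
    lower, upper = IPCC_INTERVALS[phrase]
    return lower <= estimate <= upper

print(is_consistent("very likely", 0.80))  # False: below the 0.90 threshold
```

Note how the overlap shows up directly in the encoding: the interval for “likely” contains the one for “very likely”, and so on up the positive end of the scale.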

Attempts to translate verbal PEs into numbers have a long and checkered history. Since the earliest days of probability theory, the legal profession has steadfastly refused to quantify its burdens of proof (“balance of probabilities” or “reasonable doubt”) despite the fact that these seem to refer explicitly to probabilities or at least degrees of belief. Weather forecasters debated the pros and cons of verbal versus numerical PEs for decades, with mixed results. A National Weather Service report on a 1997 survey of Juneau, Alaska residents found that although the rank-ordering of the mean numerical probabilities residents assigned to verbal PEs reasonably agreed with those assumed by the organization, the residents’ probabilities tended to be less extreme than the organization’s assignments. For instance, “likely” had a mean of 62.5% whereas the organization’s assignments for this PE were 80-100%.

And thus we see a problem that has long been noted regarding individual differences in the interpretation of PEs, but largely ignored when it comes to organizations. Since at least the 1960s, empirical studies have demonstrated that people vary widely in the numerical probabilities they associate with a verbal PE such as “likely.” It was this difficulty that doomed Sherman Kent’s attempt at standardization for intelligence analysts. Well, here we have the NWS associating “likely” with 80-100% whereas the IPCC assigns it 66-100%. A failure of organizations and agencies to agree on number-to-PE translations leaves the public with an impossible brief. I’m reminded of the introduction of the now widely-used cyclone (hurricane) category 1-5 scheme (higher numerals meaning more dangerous storms) at a time when zoning for cyclone danger where I was living also had a 1-5 numbering system that went in the opposite direction (higher numerals indicating safer zones).

Another interesting aspect is the frequency of the PEs in the report itself. There are a total of 63 PEs therein. “Likely” occurs 36 times (more than half), and “very likely” 17 times. The remaining 10 occurrences are “very unlikely” (5 times), “virtually certain” (twice), “more likely than not” (twice), and “extremely unlikely” (once). There is a clear bias towards fairly extreme positively-worded PEs, perhaps because much of the IPCC report’s content is oriented towards presenting what is known and largely agreed on about climate change by climate scientists. As we shall see, the bias towards positively-worded PEs (e.g., “likely” rather than “unlikely”) may have served the IPCC well, whether intentionally or not.

In Budescu et al.’s experiment, subjects were assigned to one of four conditions. Subjects in the control group were not given any guidelines for interpreting the PEs, as would be the case for readers unaware of the report’s guidelines. Subjects in a “translation” condition had access to the IPCC guidelines at any time during the experiment. Finally, subjects in two “verbal-numerical translation” conditions saw a range of numerical values next to each PE in each sentence. One verbal-numerical group was shown the IPCC intervals and the other was shown narrower intervals (with widths of 10% and 5%).

Subjects were asked to provide lower, upper and “best” estimates of the probabilities they associated with each PE. As might be expected, these figures were most likely to be consistent with the IPCC guidelines in the verbal-numerical translation conditions, less likely in the translation condition, and least likely in the control condition. They were also less likely to be IPCC-consistent the more extreme the PE was (e.g., less consistent for “very likely” than for “likely”). Consistency rates were generally low, and for the extremal PEs the deviations from the IPCC guidelines were regressive (i.e., subjects’ estimates were not extreme enough, thereby echoing the 1997 National Weather Service report findings).

One of the ironic claims by the Budescu group is that the IPCC 2007 report’s verbal probability expressions may convey excessive levels of imprecision and that some probabilities may be interpreted as less extreme than intended by the report authors. As I remarked in my earlier post, intervals do not distinguish between consensual imprecision and sharp disagreement. In the IPCC framework, the statement “The probability of event X is between .1 and .9” could mean “All experts regard this probability as being anywhere between .1 and .9” or “Some experts regard the probability as .1 and others as .9.” Budescu et al. realize this, but they also have this to say:

“However, we suspect that the variability in the interpretation of the forecasts exceeds the level of disagreement among the authors in many cases. Consider, for example, the statement that ‘‘average Northern Hemisphere temperatures during the second half of the 20th century were very likely higher than during any other 50-year period in the last 500 years’’ (IPCC, 2007, p. 8). It is hard to believe that the authors had in mind probabilities lower than 70%, yet this is how 25% of our subjects interpreted the term very likely!” (pg. 8).

One thing I’d noticed about the Budescu article was that their graphs suggested the variability in subjects’ estimates for negatively-worded PEs (e.g., “unlikely”) seemed greater than for positively worded PEs (e.g., “likely”). That is, subjects seemed to have less of a consensus about the meaning of the negatively-worded PEs. On reanalyzing their data, I focused on the six sentences that used the PE “very likely” or “very unlikely”. My statistical analyses of subjects’ lower, “best” and upper probability estimates revealed a less regressive mean and less dispersion for positive than for negative wording in all three estimates. Negative wording therefore resulted in more regressive estimates and less consensus regardless of experimental condition. You can see this in the box-plots below.

[Figure: box-plots of subjects’ lower, “best” and upper probability estimates for positively-worded versus (reverse-scored) negatively-worded PEs, by experimental condition.]

In this graph, the negative PEs’ estimates have been reverse-scored so that we can compare them directly with the positive PEs’ estimates. The “boxes” (the blue rectangles) contain the middle 50% of subjects’ estimates and these boxes are consistently longer for the negative PEs, regardless of experimental condition. The medians (i.e., the score below which 50% of the estimates fall) are the black dots, and these are fairly similar for positive and (reverse-scored) negative PEs. However, due to the negative PE boxes’ greater lengths, the mean estimates for the negative PEs end up being pulled further away from their positive PE counterparts.
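
The reverse-scoring step and the “box length” comparison are simple enough to show in a few lines of Python. The estimates below are hypothetical, not the Budescu et al. data:

```python
import numpy as np

# Sketch of the reverse-scoring step: a "best" estimate p for "very unlikely" is
# compared with "very likely" via 1 - p. The estimates here are hypothetical.
very_likely   = np.array([0.85, 0.90, 0.92, 0.95, 0.80, 0.88])
very_unlikely = np.array([0.05, 0.20, 0.10, 0.30, 0.02, 0.15])

reverse_scored = 1 - very_unlikely

def iqr(x):
    """Length of the 'box' in the box-plots: the spread of the middle 50% of estimates."""
    return np.percentile(x, 75) - np.percentile(x, 25)

print(iqr(very_likely), iqr(reverse_scored))  # wider spread for the negative wording
```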

There’s another effect that we confirmed statistically but also is clear from the box-plots. The difference between the lower and upper estimates is, on average, greater for the negatively-worded PEs. One implication of this finding is that the impact of negative wording is greatest on the lower estimates, and these are the subjects’ translations of the very thresholds specified in the IPCC guidelines.

If anything, these results suggest the picture is worse even than Budescu et al.’s assessment. They noted that 25% of the subjects interpreted “very likely” as having a “best” probability below 70%. The boxplots show that in three of the four experimental conditions at least 25% of the subjects provided a lower probability of less than 50% for “very likely”. If we turn to “very unlikely” the picture is worse still. In three of the four experimental conditions about 25% of the subjects returned an upper probability for “very unlikely” greater than 80%!

So, it seems that negatively-worded PEs are best avoided where possible. This recommendation sounds simple, but it could open a can of syntactical worms. Consider the statement “It is very unlikely that the MOC will undergo a large abrupt transition during the 21st century.” Would it be accurate to equate it with “It is very likely that the MOC will not undergo a large abrupt transition during the 21st century”? Perhaps not, despite the IPCC guidelines’ insistence otherwise. Moreover, turning the PE positive entails turning the event into a negative. In principle, we could have a mixture of negatively- and positively-worded PEs and events (“It is (un)likely that A will (not) occur”). It is unclear at this point whether negative PEs or negative events are the more confusing, but inspection of the Budescu et al. data suggested that double negatives were decidedly more confusing than any other combination.

As I write this, David Budescu is spearheading a multi-national study of laypeople’s interpretations of the IPCC probability expressions (I’ll be coordinating the Australian component). We’ll be able to compare these interpretations across languages and cultures. More anon!

References

Budescu, D.V., Broomell, S. and Por, H.-H. (2009) Improving the communication of uncertainty in the reports of the Intergovernmental panel on climate change. Psychological Science, 20, 299–308.

Intergovernmental Panel on Climate Change (2007). Summary for policymakers: Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Retrieved May 2010 from http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf.

Smithson, M., Budescu, D.V., Broomell, S. and Por, H.-H. (2011) Never Say “Not:” Impact of Negative Wording in Probability Phrases on Imprecise Probability Judgments. Accepted for presentation at the Seventh International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 25-28 July 2011.

Communicating about Uncertainty in Climate Change, Part I

with 5 comments

The Intergovernmental Panel on Climate Change (IPCC) guidelines for their 2007 report stipulated how its contributors were to convey uncertainties regarding climate change scientific evidence, conclusions, and predictions. Budescu et al.’s (2009) empirical investigation of how laypeople interpret verbal probability expressions (e.g., “very likely”) in the IPCC report revealed several problematic aspects of those interpretations, and a paper I have co-authored with Budescu’s team (Smithson, et al., 2011) raises additional issues.

Recently the IPCC has amended their guidelines, partly in response to the Budescu paper. Granting a broad consensus among climate scientists that climate change is accelerating and that humans have been a causal factor therein, the issue of how best to represent and communicate uncertainties about climate change science nevertheless remains a live concern. I’ll focus on the issues around probability expressions in a subsequent post, but in this one I want to address the issue of communicating “uncertainty” in a broader sense.

Why does it matter? First, the public needs to know that climate change science actually has uncertainties. Otherwise, they could be misled into believing either that scientists have all the answers or that they suffer from unwarranted dogmatism. Likewise, policy makers, decision makers and planners need to know the magnitudes (where possible) and directions of these uncertainties. Thus, the IPCC is to be commended for bringing uncertainties to the fore in its 2007 report, and for attempting to establish standards for communicating them.

Second, the public needs to know what kinds of uncertainties are in the mix. This concern sits at the foundation of the first and second recommendations of the Budescu paper. Their first suggestion is to differentiate between the ambiguous or vague description of an event and the likelihood of its occurrence. The example the authors give is “It is very unlikely that the meridional overturning circulation will undergo a large abrupt transition during the 21st century” (emphasis added). The first italicized phrase expresses probabilistic uncertainty whereas the second embodies a vague description. People may have different interpretations of both phrases. They might disagree on what range of probabilities is referred to by “very likely” or on what is meant by a “large abrupt” change. Somewhat more worryingly, they might agree on how likely the “large abrupt” change is while failing to realize that they have different interpretations of that change in mind.

The crucial point here is that probability and vagueness are distinct kinds of uncertainty (see, e.g., Smithson, 1989). While the IPCC 2007 report is consistently explicit regarding probabilistic expressions, it only intermittently attends to matters of vagueness. For example, in the statement “It is likely that heat waves have become more frequent over most land areas” (IPCC 2007, pg. 30) the term “heat waves” remains undefined and the time-span is unspecified. In contrast, just below that statement is this one: “It is likely that the incidence of extreme high sea level [3] has increased at a broad range of sites worldwide since 1975.” Footnote 3 then goes on to clarify “extreme high sea level” by the following: “Excluding tsunamis, which are not due to climate change. Extreme high sea level depends on average sea level and on regional weather systems. It is defined here as the highest 1% of hourly values of observed sea level at a station for a given reference period.”

The Budescu paper’s second recommendation is to specify the sources of uncertainty, such as whether these arise from disagreement among specialists, absence of data, or imprecise data. Distinguishing between uncertainty arising from disagreement and uncertainty arising from an imprecise but consensual assessment is especially important. In my experience, the former often is presented as if it is the latter. An interval for near-term ocean level increases of 0.2 to 0.8 metres might be the consensus among experts, but it could also represent two opposing camps, one estimating 0.2 metres and the other 0.8.

The IPCC report guidelines for reporting uncertainty do raise the issue of agreement: “Where uncertainty is assessed qualitatively, it is characterised by providing a relative sense of the amount and quality of evidence (that is, information from theory, observations or models indicating whether a belief or proposition is true or valid) and the degree of agreement (that is, the level of concurrence in the literature on a particular finding).” (IPCC 2007, pg. 27) The report then states that levels of agreement will be denoted by “high,” “medium,” and so on, while the amount of evidence will be expressed as “much,” “medium,” and so on.

As it turns out, the phrase “high agreement and much evidence” occurs seven times in the report and “high agreement and medium evidence” occurs twice. No other agreement phrases are used. These occurrences are almost entirely in the sections devoted to climate change mitigation and adaptation, as opposed to assessments of previous and future climate change. Typical examples are:
“There is high agreement and much evidence that with current climate change mitigation policies and related sustainable development practices, global GHG emissions will continue to grow over the next few decades.” (IPCC 2007, pg. 44) and
“There is high agreement and much evidence that all stabilisation levels assessed can be achieved by deployment of a portfolio of technologies that are either currently available or expected to be commercialised in coming decades, assuming appropriate and effective incentives are in place for development, acquisition, deployment and diffusion of technologies and addressing related barriers.” (IPCC 2007, pg. 68)

The IPCC guidelines for other kinds of expert assessments do not explicitly refer to disagreement: “Where uncertainty is assessed more quantitatively using expert judgement of the correctness of underlying data, models or analyses, then the following scale of confidence levels is used to express the assessed chance of a finding being correct: very high confidence at least 9 out of 10; high confidence about 8 out of 10; medium confidence about 5 out of 10; low confidence about 2 out of 10; and very low confidence less than 1 out of 10.” (IPCC 2007, pg. 27) A typical statement of this kind is “By 2080, an increase of 5 to 8% of arid and semi-arid land in Africa is projected under a range of climate scenarios (high confidence).” (IPCC 2007, pg. 50)

That said, some parts of the IPCC report do convey disagreeing projections or estimates, where the disagreements are among models and/or scenarios, especially in the section on near-term predictions of climate change and its impacts. For instance, on pg. 47 of the 2007 report the graph below charts mid-century global warming relative to 1980-99. The six stabilization categories are those described in the Fourth Assessment Report (AR4).

[Figure: mid-century global warming relative to 1980-99, charted for the six stabilisation categories described in the Fourth Assessment Report (AR4); from pg. 47 of the IPCC 2007 report.]

Although this graph effectively represents both imprecision and disagreement (or conflict), it slightly underplays both by truncating the scale at the right-hand side. The next figure shows how the graph would appear if the full range of categories V and VI were included. Both the apparent imprecision of V and VI and the extent of disagreement between VI and categories I-III are substantially greater once we have the full picture.

[Figure: the same chart re-drawn with the full ranges of categories V and VI included, showing greater imprecision within those categories and greater disagreement between category VI and categories I-III.]

There are understandable motives for concealing or disguising some kinds of uncertainty, especially those that could be used by opponents to bolster their own positions. Chief among these is uncertainty arising from conflict. In a series of experiments Smithson (1999) demonstrated that people regard precise but disagreeing risk messages as more troubling than informatively equivalent imprecise but agreeing messages. Moreover, they regard the message sources as less credible and less trustworthy in the first case than in the second. In short, conflict is a worse kind of uncertainty than ambiguity or vagueness. Smithson (1999) labeled this phenomenon “conflict aversion.” Cabantous (2007) confirmed and extended those results by demonstrating that insurers would charge a higher premium for insurance against mishaps whose risk information was conflictive than if the risk information was merely ambiguous.

Conflict aversion creates a genuine communications dilemma for disagreeing experts. On the one hand, public revelation of their disagreement can result in a loss of credibility or trust in experts on all sides of the dispute. Laypeople have an intuitive heuristic that if the evidence for any hypothesis is uncertain, then equally able experts should have considered the same evidence and agreed that the truth-status of that hypothesis is uncertain. When Peter Collignon, professor of microbiology at The Australian National University, cast doubt on the net benefit of the Australian Fluvax program in 2010, he attracted opprobrium from colleagues and health authorities on grounds that he was undermining public trust in vaccines and the medical expertise behind them. On the other hand, concealing disagreements runs the risk of future public disclosure and an even greater erosion of trust (lying experts are regarded as worse than disagreeing ones). The problem of how to communicate uncertainties arising from disagreement and vagueness simultaneously and distinguishably has yet to be solved.

References

Budescu, D.V., Broomell, S. and Por, H.-H. (2009) Improving the communication of uncertainty in the reports of the Intergovernmental panel on climate change. Psychological Science, 20, 299–308.

Cabantous, L. (2007). Ambiguity aversion in the field of insurance: Insurers’ attitudes to imprecise and conflicting probability estimates. Theory and Decision, 62, 219–240.

Intergovernmental Panel on Climate Change (2007). Summary for policymakers: Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Retrieved May 2010 from http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer Verlag.

Smithson, M. (1999). Conflict Aversion: Preference for Ambiguity vs. Conflict in Sources and Evidence. Organizational Behavior and Human Decision Processes, 79: 179-198.

Smithson, M., Budescu, D.V., Broomell, S. and Por, H.-H. (2011) Never Say “Not:” Impact of Negative Wording in Probability Phrases on Imprecise Probability Judgments. Accepted for presentation at the Seventh International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 25-28 July 2011.
