ignorance and uncertainty

All about unknowns and uncertainties

Posts Tagged 'Trust'

The Stapel Case and Data Fabrication

with 6 comments

By now it’s all over the net (e.g., here) and international news media: Tilburg University sacked high-profile social psychologist Diederik Stapel, after he was outed as having faked data in his research. Stapel was director of the Tilburg Institute for Behavioral Economics Research, a successful researcher and fundraiser, and as a colleague expressed it, “the poster boy of Dutch social psychology.” He had more than 100 papers published, some in the flagship journals not just of psychology but of science generally (e.g., Science), and won prestigious awards for his research on social cognition and stereotyping.

Tilburg University Rector Philip Eijlander said that Stapel had admitted to using faked data, apparently after Eijlander confronted him with allegations by graduate student research assistants that his research conduct was fraudulent. The story goes that the assistants had identified evidence of data entry by Stapel via copy-and-paste.

Willem Levelt, psycholinguist and former president of the Royal Netherlands Academy of Arts and Sciences, is to lead a panel investigating the extent of the fraud. That extent could be very wide indeed. In a press conference the Tilburg rector made it clear that Stapel's fraudulent research practices may have ranged over a number of years. All of his papers are now suspect, and the panel will advise on which of them will have to be retracted. Likewise, the editors of the journals in which Stapel published are investigating the papers that appeared there. Then there are Stapel's own students and research collaborators, whose data and careers may have been contaminated by his.

I feel sorry for my social psychological colleagues, who are reeling in shock and dismay. Some of my closest colleagues knew Stapel (one was a fellow graduate student with him), and none of them suspected him. Among those who knew him well and worked with him, Stapel apparently was respected as a researcher and trusted as a man of integrity. They are asking themselves how his cheating could have gone undetected for so long, and how such deeds could be prevented in the future. They fear the scandal's impact on public perception of their discipline and on trust in scientific researchers generally.

An understandable knee-jerk reaction is to call for stricter regulation of scientific research, and alterations to the training of researchers. Mark Van Vugt and Anjana Ahuja’s blog post exemplifies this reaction, when they essentially accuse social psychologists of being more likely to engage in fraudulent research because some of them use deception of subjects in their experiments:

“…this means that junior social psychologists are being trained to deceive people and this might be a first violation of scientific integrity. It would be good to have a frank discussion about deception in our discipline. It is not being tolerated elsewhere so why should it be in our field.”

They make several recommendations for reform, including the declaration that “… ultimately it is through training our psychology students into doing ethically sound research that we can tackle scientific fraud. This is no easy feat.”

The most obvious problems with Van Vugt's and Ahuja's recommendations are, first, that there is no clear connection between using deception in research designs and faking data, and second, that many psychology departments already include research ethics in researcher education and training. Stapel isn't ignorant of research ethics. But a deeper problem is that none of their recommendations and, thus far, very few of the comments I have seen about this or similar cases, address three of the main considerations in any criminal case: means, opportunity, and motive.

Let me speak to means and opportunity first. Attempts to more strictly regulate the conduct of scientific research are very unlikely to prevent data fakery, for the simple reason that it’s extremely easy to do in a manner that is extraordinarily difficult to detect. Many of us “fake data” on a regular basis when we run simulations. Indeed, simulating from the posterior distribution is part and parcel of Bayesian statistical inference. It would be (and probably has been) child’s play to add fake cases to one’s data by simulating from the posterior and then jittering them randomly to ensure that the false cases look like real data. Or, if you want to fake data from scratch, there is plenty of freely available code for randomly generating multivariate data with user-chosen probability distributions, means, standard deviations, and correlational structure. So, the means and opportunities are on hand for virtually all of us. They are the very same means that underpin a great deal of (honest) research. It is impossible to prevent data fraud by these means through conventional regulatory mechanisms.
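To give a sense of how low the technical barrier is, here is a minimal sketch in Python/NumPy (all of the numbers are invented for illustration) of generating multivariate data with user-chosen means, standard deviations, and correlation structure. It is the same routine machinery that honest simulation studies rely on.

```python
import numpy as np

# Illustrative only: simulate multivariate normal data with chosen means,
# standard deviations, and correlations (the same tools used in legitimate
# simulation studies).
rng = np.random.default_rng(seed=1)

means = np.array([5.0, 3.0, 7.0])        # chosen means for three variables
sds = np.array([1.0, 2.0, 1.5])          # chosen standard deviations
corr = np.array([[1.0, 0.4, 0.2],        # chosen correlation structure
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])

cov = np.outer(sds, sds) * corr          # convert correlations to a covariance matrix
data = rng.multivariate_normal(means, cov, size=200)

print(data.mean(axis=0))                 # sample means track the chosen means
print(np.corrcoef(data, rowvar=False))   # sample correlations track the chosen structure
```

A dozen lines, all of them standard, which is precisely why conventional regulation cannot police this.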

Now let us turn to motive. The most obvious and comforting explanations of cheats like psychologists Stapel or Hauser, or plagiarists like statistician Wegman and political scientist Fischer, are those appealing to their personalities. This is the “X cheated because X is psychopathic” explanation. It’s comforting because it lets the rest of us off the hook (“I wouldn’t cheat because I’m not a psychopath”). Unfortunately this kind of explanation is very likely to be wrong. Most of us cheat on something somewhere along the line. Cheating is rife, for example, among undergraduate university students (among whom are our future researchers!), so psychopathy certainly cannot be the deciding factor there. What else could be the motivational culprit? How about the competitive pressures on researchers generated by the contemporary research culture?

Cognitive psychologist E.J. Wagenmakers (as quoted in Andrew Gelman's thoughtful recent post) is among the few thus far who have addressed possible motivating factors inherent in the present-day research climate. He points out that social psychology has become very competitive, and

“high-impact publications are only possible for results that are really surprising. Unfortunately, most surprising hypotheses are wrong. That is, unless you test them against data you’ve created yourself. There is a slippery slope here though; although very few researchers will go as far as to make up their own data, many will “torture the data until they confess”, and forget to mention that the results were obtained by torture….”

I would add to E.J.’s observations the following points.

First, social psychology journals (and journals for other areas in psychology) exhibit a strong bias towards publishing only studies that have achieved a statistically significant result. Researchers and their students widely believe in this bias. The obvious temptation arising from it is to ease an inconclusive finding into being conclusive by adding more "favorable" cases or by making some of the unfavorable ones more favorable.

Second, and of course readers will recognize one of my hobby-horses here, the addiction in psychology to hypothesis-testing over parameter estimation amounts to an insistence that every study yield a conclusion or decision: Did the null hypothesis get rejected? The obvious remedy for this is to develop a publication climate that does not insist that each and every study be “conclusive,” but instead emphasizes the importance of a cumulative science built on multiple independent studies, careful parameter estimates and multiple tests of candidate theories. This adds an ethical and motivational rationale to calls for a shift to Bayesian methods in psychology.

Third, journal editors and reviewers routinely insist on more than one study per article. On the surface, this looks like what I've just asked for, a healthy insistence on independent replication. It isn't, for two reasons. First, even if the multiple studies are replications, they aren't independent because they come from the same authors and/or lab. Genuinely independent replications would be published in separate papers written by non-overlapping sets of authors from separate labs. However, genuinely independent replication earns no kudos and therefore is uncommon (not just in psychology, either; other sciences suffer from this problem, including some that used to have a tradition of independent replication).

The second reason is that journal editors don't merely insist on study replications; they also favor studies that come up with "consistent" rather than "inconsistent" findings (i.e., privileging "successful" replications over "failed" replications). By insisting on multiple studies that reproduce the original findings, journal editors are tempting researchers into corner-cutting or outright fraud in the name of ensuring that the first study's findings actually get replicated. E.J.'s observation that surprising hypotheses are unlikely to be supported by data goes double (squared, actually) when it comes to replication: support for a surprising hypothesis may occur once in a while, but it is unlikely to occur twice in a row. Again, the remedies are obvious: develop a publication climate that encourages or even insists on independent replication, that treats well-conducted "failed" replications identically to well-conducted "successful" ones, and that does not privilege "replications" from the original study's authors or lab.

None of this is meant to say I fall for cultural determinism: most researchers face the pressures and motivations described above, but few cheat. So personality factors may also exert an influence, along with circumstances specific to those of us who give in to the temptations of cheating. Nevertheless, if we want to prevent more Stapels, we'll get farther by changing the research culture and its motivational effects than we will by exhorting researchers to be good or lecturing them about ethical principles of which they're already well aware. And we'll get much farther than we would in a futile attempt to place the collection and entry of every single datum under surveillance by some Stasi-for-scientists.

Written by michaelsmithson

September 14, 2011 at 9:36 am

An Ignorance Economy?

with 2 comments

It’s coming up to a year since I began this blog. In my usual fashion, I set myself the unrealistic goal of writing a post every week. This is only the 37th, so I’ve fallen short by a considerable margin. On the other hand, most of those posts have been on the order of 1500 words long, for a total of about 55,500 words thus far. That seems a fair whack of the keyboard, and it’s been fun too.

In an earlier post I proposed that because of the ways in which knowledge economies work, we increasingly live in an “ignorance society.” In the same year that Sheldon Ungar’s paper on ignorance as a public problem appeared, another paper came out by Joanne Roberts and John Armitage with the intriguing title “The Ignorance Economy.” Their stated purpose was to critique the notion of a knowledge economy via an investigation of ignorance from an economic standpoint.

As Roberts and Armitage (and many others) have said, knowledge as a commodity has several distinctive features. Once knowledge is consumed, it does not disappear and indeed its consumption may result in the development of more knowledge. The consumption of knowledge is non-zero-sum and can be non-excludable. Knowledge is a multiplier resource in this sense. Finally, knowledge is not subject to diminishing returns.

Interestingly, Roberts and Armitage do not say anything substantial about ignorance as a commodity. We already have some characterizations handy from this blog and elsewhere. Like knowledge, ignorance can be non-zero-sum and non-excludable in the sense that my being ignorant about X doesn’t prevent you from also being ignorant about X, nor does greater ignorance on my part necessarily decrease your ignorance. Ignorance also does not seem susceptible to diminishing returns. And of course, new knowledge can generate ignorance, and an important aspect of an effective knowledge-based economy is its capacity for identifying and clarifying unknowns. Even in a booming knowledge economy, ignorance can be a growth industry in its own right.

There are obvious examples of economies that could, in some sense, be called “ignorance economies.” Education and training are ignorance economies in the sense that educators and trainers make their living via a continual supply of ignoramuses who are willing to pay for the privilege of shedding that status. Likewise, governmental and corporate organizations paying knowledge experts enable those experts to make a living out of selling their expertise to those who lack it. This is simply the “underbelly” of knowledge economies, as Roberts and Armitage point out.

But what about making ignorance pay? Roberts and Armitage observe that agents in knowledge economies set about this in several ways. First, there is the age-old strategy of intellectual property protection via copyright, patents, or outright secrecy. Hastening the obsolescence of knowledge and/or skills is another strategy. Entire trades, crafts and even professions have been de-skilled or rendered obsolete. And how about that increasingly rapid deluge of updates and “upgrades” imposed on us?

A widespread observation about the knowledge explosion is that it generates an ensuing ignorance explosion, both arising from and resulting in increasing specialization. The more specialized a knowledge economy is, the greater are certain opportunities to exploit ignorance for economic gains. These opportunities arise in at least three forms. First, there are potential coordination and management roles for anyone (or anything) able to pull a large unstructured corpus of data into a usable structure or, better still, a “big picture.” Second, making sense of data has become a major industry in its own right, giving rise to ironically specialized domains of expertise such as statistics and information technology.

Third, Roberts and Armitage point to the long-established trend for consumer products to require less knowledge for their effective use. So consumers are enticed to become more ignorant about how these products work, how to repair or maintain them, and how they are produced. You don't have to be a Marxist to share a cynical but wry guffaw with Roberts and Armitage as they confess, along with the rest of us, to writing their article using a computer whose workings they are happily ignorant about. One must admit that this is an elegant, if nihilistic, solution to Sheldon Ungar's problem that the so-called information age has made it difficult to agree on a human-sized common stock of knowledge that we all should share.

Oddly, Roberts and Armitage neglect two additional (also age-old) strategies for exploiting ignorance for commercial gain and/or political power. First, an agent can spread disinformation and, if successful, make money or power out of deception. Second, an agent can generate uncertainty in the minds of a target population, and leverage wealth and/or power out of that uncertainty. Both strategies have numerous exemplars throughout history, from legitimate commercial or governmental undertakings to terrorism and organized crime.

Roberts and Armitage also neglect the kinds of ignorance-based “social capital” that I have written about, both in this blog and elsewhere. Thus, for example, in many countries the creation and maintenance of privacy, secrecy and censorship engage economic agents of considerable size in both the private and public sectors. All three are, of course, ignorance arrangements. Likewise, trust-based relations have distinct economic advantages over relations based on assurance through contracts, and trust is partially an ignorance arrangement.

More prosaically, do people make their living by selling their ignorance? I once met a man who claimed he did so, primarily on a consulting basis. His sales-pitch boiled down to declaring “If you can make something clear to me, you can make it clear to anyone.” He was effectively making the role of a “beta-tester” pay off. Perhaps we may see the emergence of niche markets for specific kinds of ignoramuses.

But there already is, arguably, a sustainable market for generalist ignoramuses. Roberts and Armitage moralize about the neglect by national governments of “regional ignorance economies,” by which they mean subpopulations of workers lacking any qualifications whatsoever. Yet these are precisely the kinds of workers needed to perform jobs for which everyone else would be over-qualified and, knowledge economy or not, such jobs are likely to continue abounding for some time to come.

I've watched seven children on my Australian middle-class suburban cul-de-sac grow to adulthood over the past 14 years. Only one of them has gone to university. Why? Well, for example, one of them realized he could walk out of school after 10th grade, go to the mines, drive a big machine and immediately command a $90,000 annual salary. The others made similar choices; theirs were not as high-paying as his, but they still compared favorably in the short term with the prospects of their age-mates heading off to uni to rack up tens of thousands of dollars in debt. The recipe for maintaining a ready supply of generalist ignoramuses is straightforward: make education or training sufficiently unaffordable and difficult, and/or unqualified work sufficiently remunerative and easy. An anti-intellectual mainstream culture helps, too, by the way.

Written by michaelsmithson

September 11, 2011 at 1:31 pm

E-Transparency

with 2 comments

A recent policy paper by Frank Bannister and Regina Connolly asks whether transparency is an unalloyed good in e-government. As the authors point out, the advent of Wikileaks has brought the issue of “e-transparency” into the domain of public debate. Broadly, e-transparency in government refers to access to the data, processes, decisions and actions of governments mediated by information communications technology (ICT).

Debates about the extent to which governments should (or can) be transparent have a lengthy history. The prospect of e-transparency adds considerations of affordability and potential breadth of citizen response and participation. Bannister and Connolly begin their discussion by setting aside the most common objections to transparency: Clear requirements for national security and commercial confidentiality in the service of protecting citizenry or other national interests. What other reasonable objections to transparency, let alone e-transparency, might there be?

Traditional arguments for transparency in government are predicated on three assertions about values.

  1. The public has a right to know. Elected office-holders and appointed public or civil servants alike are accountable to their constituencies. Accountability is impossible without transparency; therefore good government requires transparency.
  2. Good government requires building trust between the governors and the governed, which can only arise if the governors are accountable to the governed.
  3. Effective citizen participation in a democracy is possible only if the citizenry is sufficiently educated and informed to make good decisions. Education and information both entail transparency.

Indeed, you can find affirmations of these assertions in the Obama administration’s White House Press Office statement on this issue.

Note that the first of these arguments is a claim to a right, whereas the second and third are claims about consequences. The distinction is important. A right is, by definition, non-negotiable and, in principle, inalienable. Arguments for good consequences, on the other hand, are utilitarian instead of deontological. Utilitarian arguments can be countered by “greater good” arguments and therefore are negotiable.

Official Japanese pronouncements about the state of the recent Fukushima plant disaster, for example, were expected to be more or less instantaneous and accurate. Even commentary from sources such as the Bulletin of the Atomic Scientists averred that official reports should have been forthcoming sooner about the magnitude and scope of the disaster: "Denied such transparency, media outlets and the public may come to distrust official statements." The gist of this commentary was that transparency would pay off better than secrecy, and that the primary payoff would be increased trust in the Japanese government.

However, there are counter-arguments to the belief that transparency is a necessary or even contributing factor in building trust in government. A recent study by Stephan Grimmelikhuijsen (2010) suggests that when the minutes of local council deliberations were made available online, citizens' evaluations of council competence declined in comparison with those of citizens who did not have access to that information. If transparency reveals incompetence, then it may not increase trust after all. This finding is congruent with observations that a total accountability culture often also is a blame culture.

There's another, more subtle issue: insistence on accountability and the levels of surveillance it requires are incompatible with trust relations. People who trust one another do not place each other under 24-7 surveillance, nor do they hold each other accountable for every action or decision. Trust may be built up via surveillance and accountability, but once it has been established, the social norms around trust relations sit somewhat awkwardly alongside norms regarding transparency. The latter are more compatible with contractual relations than with trust relations.

Traditional arguments against transparency (or at least, in favour of limiting transparency) also come in deontological and utilitarian flavors. The right of public servants and even politicians to personal privacy stands against the right of the public to know: One deontological principle versus another. ICT developments have provided new tools to monitor in increasing detail what workers do and how they do it, but as yet there seem to be few well thought-out guidelines for how far the public (or anyone else) should be permitted to go in monitoring government employees or office-holders.

Then there are the costs and risks of disclosure, which these days include exposure to litigation and the potential for data to be hacked. E-transparency is said to cost considerably less than traditional transparency and to deliver much greater volumes of data. Nonetheless, Bannister and Connolly caution that some costs can increase: first, in the formalization, recording and editing of what previously were informal and unrecorded processes or events, and second, in the maintenance and updating of databases. The advent of radio and television shortened the expected time for news to reach the public and expanded the expected proportion of the public who would receive it. ICT developments have boosted both of these expectations enormously.

Even if the lower-cost argument is true, lower costs and increased capabilities also bring new problems and risks. Chief among these, according to Bannister and Connolly, are misinterpretation and misuse of data, and the inadvertent enablement of such misuses. On the one hand, ICT has provided the public with tools to process and analyse information that were unavailable to the radio and TV generations. On the other hand, data seldom speak for themselves, and what they have to say depends crucially on how they are selected and analyzed. Bannister and Connolly mention school league tables as a case in point. For a tongue-in-cheek example of the kind of simplistic analysis Bannister and Connolly fear, look no further than Crikey's treatment of data on the newly-fledged Australian My School website.

Here's another anti-transparency argument, not considered by Bannister and Connolly, grounded in a solid democratic tradition: the secret ballot. Secret ballots stifle vote-buying because the buyer cannot be sure whom their target voted for. This argument has been extended (see, for instance, the relevant Freakonomics post) to defend secrecy regarding campaign contributions. Anonymous donations deter influence-peddling, so the argument runs, because candidates can't be sure the supposed contributors actually contributed. It would not be difficult to generalize the argument further to cover office-holders' votes on crucial bills, or certain kinds of decisions. There are obvious objections to this argument, but it also has some appeal. After all, there is plenty of vote-buying and influence-peddling purveyed by lobby groups outfitted and provisioned for just such undertakings.

Finally, there is a transparency bugbear known to any wise manager who has tried to implement systems to make their underlings accountable: gaming the system. Critics of school league tables claim they motivate teachers to tailor curricula to the tests or even indulge in outright cheating (there are numerous instances of the latter, here and here for a couple of recent examples). Nor is this limited to underling-boss relations. You can find it in any competitive market. Last year Eliot Van Buskirk posted an intriguing piece on how marketers are gaming social media via artificially inflated metrics such as numbers of friends or YouTube views.

In my 1989 book, I pointed out that information has come to resemble hard currency, and the “information society” is also an increasingly regulated, litigious society. This combination motivates those under surveillance, evaluation, or accountability regimes to distort or simply omit potentially discrediting information. Bannister and Connolly point to the emergence of a “non-recording” culture in public service: “Where public servants are concerned about the impact of data release, one solution is to not create or record the data in the first place.” To paraphrase the conclusion I came to in 1989, the new dilemma is that the control strategies designed to enhance transparency may actually increase intentional opacity.

I should close by mentioning that I favor transparency. My purpose in this post has been to point out some aspects of the arguments for and against it that need further thought, especially in this time of e-everything.

Written by michaelsmithson

April 15, 2011 at 1:17 am

Can We Make “Good” Decisions Under Ignorance?

with 2 comments

There are well-understood formal frameworks for decision making under risk, i.e., where we know all the possible outcomes of our acts and also know the probabilities of those outcomes. There are even prescriptions for decision making when we don’t quite know the probabilities but still know what the outcomes could be. Under ignorance we not only don’t know the probabilities, we may not even know all of the possible outcomes. Shorn of their probabilities and a completely known outcome space, normative frameworks such as expected utility theory stand silent. Can there be such a thing as a good decision-making method under ignorance? What criteria or guidelines make sense for decision making when we know next to nothing?

At first glance, the notion of criteria for good decisions under ignorance may seem absurd. Here is a simplified list of criteria for “good” (i.e., rational) decisions under risk:

  1. Your decision should be based on your current assets.
  2. Your decision should be based on the possible consequences of all possible outcomes.
  3. You must be able to rank all of the consequences in order of preference and assign a probability to each possible outcome.
  4. Your choice should maximize your expected utility, or roughly speaking, the likelihood of those outcomes that yield highly preferred consequences.

In non-trivial decisions, this prescription requires a vast amount of knowledge, computation, and time. In many situations at least one of these requirements isn’t met, and often none of them are.
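For concreteness, here is a toy sketch of what criteria 2 through 4 demand computationally (the acts, outcomes, probabilities, and utilities are all invented for illustration):

```python
# Toy illustration (invented numbers): expected utility needs a complete list of
# outcomes, a probability for each, and a utility for each consequence.
acts = {
    "take umbrella":  [(0.3, 8), (0.7, 6)],   # (probability of outcome, utility of its consequence)
    "leave umbrella": [(0.3, 0), (0.7, 9)],
}

def expected_utility(prob_utility_pairs):
    """Sum of probability-weighted utilities over all possible outcomes."""
    return sum(p * u for p, u in prob_utility_pairs)

for act, pairs in acts.items():
    print(act, expected_utility(pairs))

best_act = max(acts, key=lambda act: expected_utility(acts[act]))
print("choose:", best_act)
```

Under ignorance the probabilities, and possibly the outcome list itself, are unavailable, so this computation cannot even get started.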

This problem has been recognized for a long time, although it has been framed in rather different ways. In the 1950s at least two spokespeople for decision making under ignorance emerged. The economist Herbert Simon proposed "bounded rationality" as an alternative to expected utility theory, in recognition of the fact that decision makers have limited time, information-gathering resources, and cognitive capacity. He coined the term "satisficing" to describe criteria for decisions that may fall short of optimality but are "good enough" and humanly feasible to achieve. Simon also championed the use of "heuristics," rules of thumb for reasoning and decision making that, again, are not optimal but work well most of the time. These ideas have been elaborated since by many others, including Gerd Gigerenzer's "fast and frugal" heuristics and Gary Klein's "naturalistic" decision making. These days bounded rationality has many proponents.

Around the same time that Simon was writing about bounded rationality, political economist Charles Lindblom emerged as an early proponent of various kinds of “incrementalism,” which he engagingly called “muddling through.” Whereas Simon and his descendants focused on the individual decision maker, Lindblom wrote about decision making mainly in institutional settings. One issue that the bounded rationality people tended to neglect was highlighted by Lindblom, namely the problem of not knowing what our preferences are when the issues are complex:

“Except roughly and vaguely, I know of no way to describe–or even to understand–what my relative evaluations are for, say, freedom and security, speed and accuracy in governmental decisions, or low taxes and better schools than to describe my preferences among specific policy choices that might be made between the alternatives in each of the pairs… one simultaneously chooses a policy to attain certain objectives and chooses the objectives themselves.” (Lindblom 1959, pg. 82).

Lindblom's characterization of "muddling through" also is striking for its rejection of means-ends analysis. For him, means and ends are entwined in the policy options under consideration. "Decision-making is ordinarily formalized as a means-ends relationship: means are conceived to be evaluated and chosen in the light of ends finally selected independently of and prior to the choice of means… Typically, …such a means-ends relationship is absent from the branch method, where means and ends are simultaneously chosen." (Lindblom 1959, pg. 83).

In the absence of a means-end analysis, how can the decision or policy maker know what is a good decision or policy? Lindblom’s answer is consensus among policy makers: “Agreement on objectives failing, there is no standard of ‘correctness.’… the test is agreement on policy itself, which remains possible even when agreement on values is not.” (Lindblom 1959, pg. 83)

Lindblom’s prescription is to restrict decisional alternatives to small or incremental deviations from the status quo. He claims that “A wise policy-maker consequently expects that his policies will achieve only part of what he hopes and at the same time will produce unanticipated consequences he would have preferred to avoid. If he proceeds through a succession of incremental changes, he avoids serious lasting mistakes in several ways.” First, a sequence of small steps will have given the policy maker grounds for predicting the consequences of an additional similar step. Second, he claims that small steps are more easily corrected or reversed than large ones. Third, stakeholders are more likely to agree on small changes than on radical ones.

How, then, does Lindblom think his approach will not descend into groupthink or extreme confirmation bias? Through diversity and pluralism among the stakeholders involved in decision making:

“… agencies will want among their own personnel two types of diversification: administrators whose thinking is organized by reference to policy chains other than those familiar to most members of the organization and, even more commonly, administrators whose professional or personal values or interests create diversity of view (perhaps coming from different specialties, social classes, geographical areas) so that, even within a single agency, decision-making can be fragmented and parts of the agency can serve as watchdogs for other parts.”

Naturally, Lindblom's prescriptions and claims were widely debated. There is much to criticize, and he didn't present much hard evidence that his prescriptions would work. Nevertheless, he ventured beyond the bounded rationality camp in four important respects. First, he brought into focus the prospect that we may not have knowable preferences. Second, he realized that means and ends may not be separable and may be reshaped in the very process of making a decision. Third, he mooted the criterion of choosing incremental and corrigible changes over large and irreversible ones. Fourth, he observed that many decisions are embedded in institutional or social contexts that may be harnessed to enhance decision making. All four of these advances suggest implications for decision making under ignorance.

Roger Kasperson contributed a chapter on “coping with deep uncertainty” to Gabriele Bammer’s and my 2008 book. By “deep uncertainty” he means “situations in which, typically, the phenomena… are characterized by high levels of ignorance and are still only poorly understood scientifically, and where modelling and subjective judgments must substitute extensively for estimates based upon experience with actual events and outcomes, or ethical rules must be formulated to substitute for risk-based decisions.” (Kasperson 2008, pg. 338) His list of options open to decision makers confronted with deep uncertainty includes the following:

  1. Delay to gather more information and conduct more studies in the hope of reducing uncertainty across a spectrum of risk;
  2. Interrelate risk and uncertainty to target critical uncertainties for priority further analysis, and compare technology and development options to determine whether clearly preferable options exist for proceeding;
  3. Enlarge the knowledge base for decisions through lateral thinking and broader perspective;
  4. Invoke the precautionary principle;
  5. Use an adaptive management approach; and
  6. Build a resilient society.

He doesn’t recommend these unconditionally, but instead writes thoughtfully about their respective strengths and weaknesses. Kasperson also observes that “The greater the uncertainty, the greater the need for social trust… The combination of deep uncertainty and high social distrust is often a recipe for conflict and stalemate.” (Kasperson 2008, pg. 345)

At the risk of leaping too far and too fast, I’ll conclude by presenting my list of criteria and recommendations for decisions under ignorance. I’ve incorporated material from the bounded rationality perspective, some of Lindblom’s suggestions, bits from Kasperson, my own earlier writings and from other sources not mentioned in this post. You’ll see that the first two major headings echo the first two in the expected utility framework, but beneath each of them I’ve slipped in some caveats and qualifications.

  1. Your decision should be based on your current assets.
    a. If possible, know which assets can be traded and which are non-negotiable.
    b. If some options are decisively better (worse) than others considering the range of risk that may exist, then choose them (get rid of them).
    c. Consider options themselves as assets. Try to retain them or create new ones.
    d. Regard your capacity to make decisions as an asset. Make sure you don’t become paralyzed by uncertainty.
  2. Your decision should be based on the possible consequences.
    a. Be aware of the possibility that means and ends may be inseparable and that your choice may reshape both means and ends.
    b. Beware unacceptable ends-justify-means arguments.
    c. Avoid irreversible or incorrigible alternatives if possible.
    d. Seek alternatives that are “robust” regardless of outcome.
    e. Where appropriate, invoke the precautionary principle.
    f. Seek alternatives whose consequences are observable.
    g. Plan to allocate resources for monitoring consequences and (if appropriate) gathering more information.
  3. Don’t assume that getting rid of ignorance and uncertainty is always a good idea.
    a. See 1.c. and 2.c. above. Options and corrigibility require uncertainty; freedom of choice is positively badged uncertainty.
    b. Interventions that don't alter people's uncertainty orientations will be frustrated by people's attempts to re-establish the level of uncertainty they are comfortable with.
    c. Ignorance and uncertainty underpin particular kinds of social capital. Eliminate ignorance and uncertainty and you also eliminate that social capital, so make sure you aren’t throwing any babies out with the bathwater.
    d. Other people are not always interested in reducing ignorance and uncertainty. They need uncertainty to have freedom to make their own decisions. They may want ignorance to avoid culpability.
  4. Where possible, build and utilize relationships based on trust instead of contracts.
    a. Contracts presume and require predictive knowledge, e.g., about who can pay whom how much and when. Trust relationships are more flexible and robust under uncertainty.
    b. Contracts lock the contractors in, incurring opportunity costs that trust relationships may avoid.

Managing and decision making under ignorance is, of course, far too large a topic for a single post, and I’ll be returning to it in the near future. Meanwhile, I’m hoping this piece at least persuades readers that the notion of criteria for good decisions under ignorance may not be absurd after all.

Trust, Gullibility and Skepticism

leave a comment »

In my first post on this blog, I claimed that certain ignorance arrangements are socially mandated and underpin some forms of social capital. One of these is trust. Trust lives next-door to gullibility but also has faith, agnosticism, skepticism and paranoia as neighbors. I want to take a brief tour through this neighborhood.

Trust enhances our lives in ways that are easy to take for granted. Consider the vast realm of stuff most of us "just know": that our planet is round, that our biological parents really are our biological parents, and that we can digest protein but not cellulose. These are all things we've learned essentially third-hand, from trusted sources. We don't have direct evidence for them. Hardly any of us could offer proofs or even solid evidence for the bulk of our commonsense knowledge and ideas about how the world works. Instead, we trust the sources, in some cases because of their authority (e.g., parents, teachers) and in others because of their sheer numbers (e.g., everyone we know and everything we read agrees that the planet is round). Those sources, in turn, rarely are people who actually tested commonsense claims first-hand.

Why is this trust-based network of beliefs generally good for us? Most obviously, it saves us time, effort and disruption. Skepticism is costly, and not just in terms of time and energy (imagine the personal and social cost of demanding DNA tests from your parents to prove that you really are their offspring). Perhaps a bit less obviously, it conveys trust from us to those who provided the information in the first place. As a teacher, I value the trust my students place in me. Without it, teaching would be very difficult indeed.

And even less obviously, it’s this trust that makes learning and the accumulation of knowledge possible. It means each of us doesn’t have to start from the beginning again. We can build on knowledge that we trust has been tested and verified by competent people before us. Imagine having to fully test your computer to make sure it does division correctly, and spare a thought for those number-crunchers among us who lived through the floating-point-divide bug in the Intel P5 chip.

More prosaically, about 10 years ago I found myself facing an awkward pair of possibilities. Either (a) sample estimates of certain statistical parameters don't become more precise with bigger random samples (i.e., all the textbooks are wrong), or (b) there's a bug in a very popular stats package that's been around for more than three decades. Of course, after due diligence I found the bug. "Eternal vigilance!" a colleague crowed when I described to him the time and trouble it took, not only to diagnose the bug but to convince the stats package company that their incomplete beta function algorithm was haywire. But there's the rub: if eternal vigilance really were required, then none of us would make any advances at all.
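For what it's worth, possibility (a) is easy to dismiss with a few lines of simulation. This is a generic sketch (nothing to do with the actual package or the incomplete beta function): the spread of sample means shrinks roughly as one over the square root of the sample size, just as the textbooks say.

```python
import numpy as np

# Quick check of the textbook claim that estimates become more precise as
# samples grow (illustrative only; unrelated to the bug described above).
rng = np.random.default_rng(seed=7)

for n in (10, 100, 1000, 10000):
    # Standard deviation of the sample mean across 1000 replications of size n
    sample_means = rng.normal(loc=0.0, scale=1.0, size=(1000, n)).mean(axis=1)
    print(n, round(sample_means.std(), 4))   # shrinks roughly as 1/sqrt(n)
```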

Back to trust: when does it cease to be trust and become gullibility? Checking out all those unsubstantiated clichés is costly, and we also suffer from confirmation bias. Both tendencies make us vulnerable to gullibility.

Gullibility often is taken to mean "easily deceived." Closer to the mark would be "excessively trusting," but that raises the question of what counts as "excessive." Can gullibility be measured? Perhaps, but one way not to do so is via the various "gullibility tests" abounding on the web. A fairly typical example is this one, in which you can attain a high score as a "free thinker" simply by choosing answers that go against conventional beliefs or that incline towards conspiracy theories. However, distrust is no evidence of a lack of gullibility. In fact, extreme distrust and paranoia are as obdurate against counter-evidence as any purblind belief; they become a kind of gullibility as well.

Can gullibility be distinguished from trust? Yes, according to studies such as the Psychological Science study claiming that oxytocin makes people more trusting but not more gullible (see the MSN story on this study). Participants received either a placebo or an oxytocin nasal spray, and were then asked to play a trust game in which they received a certain amount of money they could share with a "trustee," in whose hands the shared money would then triple. The trustee could then transfer all, part, or none of it back to the participant. So the participant could make a lot of money, but only if the trustee was reliable and fair.

Participants played multiple rounds of the trust game with a computer and with virtual partners, half of whom appeared to be reliable and half unreliable. Oxytocin recipients offered more money to the computer and to the reliable partners than placebo recipients did. However, oxytocin and placebo recipients were equally reluctant to share money with unreliable partners. So gullibility may be less an excess of trust than a failure to read social cues.
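The payoff structure of a trust game like this is simple enough to sketch (the endowment, multiplier, and return fractions below are invented for illustration, not the study's actual parameters):

```python
# Illustrative trust-game payoffs (invented parameters, not those of the study).
ENDOWMENT = 10.0    # what the participant starts with
MULTIPLIER = 3.0    # transferred money triples in the trustee's hands

def payoffs(amount_sent, fraction_returned):
    """Return (participant payoff, trustee payoff) for one round."""
    pot = amount_sent * MULTIPLIER
    returned = pot * fraction_returned
    return ENDOWMENT - amount_sent + returned, pot - returned

print(payoffs(10.0, 0.5))   # reliable trustee: full trust pays -> (15.0, 15.0)
print(payoffs(10.0, 0.0))   # unreliable trustee: full trust leaves nothing -> (0.0, 30.0)
print(payoffs(0.0, 0.5))    # no trust: participant keeps the endowment -> (10.0, 0.0)
```

Trusting a reliable partner is lucrative and trusting an unreliable one is ruinous, which is why withholding money from unreliable partners reads as an absence of gullibility rather than an absence of trust.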

It may be comforting to know that clever people can be gullible too. After all, academic intelligence is no guarantee of social or practical intelligence. The popular science magazine Discover ran an April Fool's article in 1995 on the hotheaded naked ice borer, which attracted more mail than any previous article published therein. It was widely distributed by newswires and treated seriously in Ripley's Believe It or Not! and The Unofficial X-Files Companion. While the author of this and other hoaxes expressed a few pangs of guilt, none of this inhibited Discover's blog site from carrying this piece on the "discovery" that gullibility is associated with the "inferior supra-credulus," a region in the brain "unusually active in people with a tendency to believe horoscopes and papers invoking fancy brain scans," and with a single gene called WTF1. As many magicians and con artists have known, people who believe they are intelligent often also believe they can't be fooled and, as a result, can be caught off-guard. A famous recent case is Stephen Greenspan, professor and author of a book on gullibility, who became a victim of Bernie Madoff. At the other end of the scale is comedian Joe Penner's catchphrase, "You can't fool me, I'm too ignorant."

OK, but what about agnosticism? Why not avoid gullibility by refusing to believe any proposition for which you lack first-hand evidence or logico-mathematical proof? This sounds attractive at first, but the price of agnosticism is indecisiveness. Indecision can be costly (should you refuse your food because you don't understand digestion?). Even if you try to evade indecision by tossing a coin, you can still have accountability issues because, by most definitions, a decision is a deliberate choice of the most desirable option. Carried far enough, radical agnosticism would leave you unable to commit to important relationships, because those require trust.

So now let's turn the tables. We place trust in much of our third-hand stock of "knowledge" because the alternatives are too costly. But our trust-based relationships actually require that we forgo complete first-hand knowledge about those whom we trust. It is true that we usually trust only people whom we know very well. Nevertheless, the fact remains that people who trust one another don't monitor one another all the time, nor do they hold each other constantly accountable for every move they make. A hallmark of trust is allowing the trusted party privacy, room to move, and discretion.

Imagine that you’re a tenured professor at a university with a mix of undergraduate teaching, supervision of several PhD students, a research program funded by a National Science Foundation grant, a few outside consultancies each year, and the usual administrative committee duties. You’ve worked at your university for a dozen years and have undergone summary biannual performance evaluations. You’ve never had an unsatisfactory rating in these evaluations, you have a solid research record, and in fact you were promoted three years ago.

One day, your department head calls you into her office and tells you that there’s been a change in policy. The university now demands greater accountability from its academic staff. From now on, she will be monitoring your activities on a weekly basis. You will have to keep a log of how you spend each week on the job, describing your weekly teaching, supervision, research, and administrative efforts. Every cent you spend from your grant will be examined by her, and she will inspect the contents of your lectures, exams, and classroom assignments. Your web-browsing and emailing at work also will be monitored. You will be required to clock in and out of work to document how much time you spend on campus.

What effect would this have on you? Among other things, you would probably think your department head distrusted you. "But why?" she might ask. After all, this is purely an information-gathering exercise. You aren't being evaluated. Moreover, it's nothing personal; all the other academics are going to be monitored in the same way.

Most of us would still think that we weren't trusted anymore. Intensive surveillance of that kind doesn't fit the norms of trust behavior. It doesn't seem plausible that all the effort invested in monitoring your activities is "purely" for information gathering. Surely the information is going to be used for evaluation, or at the very least it could be used for such purposes in the future. Who besides your department head may have access to this information? And so on…

Chances are this would also stir up some powerful emotions. A 1996 paper by sociologists Fine and Holyfield observed that we don't just think trust, we feel trust (pg. 25). By the same token, feeling entrusted by others is crucial for our self-esteem. Treatment of the kind described above would feel belittling, insulting, perhaps even humiliating. At first glance, the contract-like assurance of this kind of performance accountability seems to be sound managerial practice. However, it has hidden costs: distrusted employees become disaffected and insulted employees, who are less likely to be cooperative, volunteering, responsible or loyal, all qualities that spring from feeling entrusted.

Written by michaelsmithson

December 6, 2010 at 3:07 pm
