ignorance and uncertainty

All about unknowns and uncertainties


Communicating about Uncertainty in Climate Change, Part I


The Intergovernmental Panel on Climate Change (IPCC) guidelines for its 2007 report stipulated how its contributors were to convey uncertainties regarding climate change scientific evidence, conclusions, and predictions. Budescu et al.’s (2009) empirical investigation of how laypeople interpret verbal probability expressions (e.g., “very likely”) in the IPCC report revealed several problematic aspects of those interpretations, and a paper I have co-authored with Budescu’s team (Smithson et al., 2011) raises additional issues.

Recently the IPCC has amended their guidelines, partly in response to the Budescu paper. Granting a broad consensus among climate scientists that climate change is accelerating and that humans have been a causal factor therein, the issue of how best to represent and communicate uncertainties about climate change science nevertheless remains a live concern. I’ll focus on the issues around probability expressions in a subsequent post, but in this one I want to address the issue of communicating “uncertainty” in a broader sense.

Why does it matter? First, the public needs to know that climate change science actually has uncertainties. Otherwise, they could be misled into believing either that scientists have all the answers or that they suffer from unwarranted dogmatism. Likewise, policy makers, decision makers and planners need to know the magnitudes (where possible) and directions of these uncertainties. Thus, the IPCC is to be commended for bringing uncertainties to the fore in its 2007 report, and for attempting to establish standards for communicating them.

Second, the public needs to know what kinds of uncertainties are in the mix. This concern sits at the foundation of the first and second recommendations of the Budescu paper. Their first suggestion is to differentiate between the ambiguous or vague description of an event and the likelihood of its occurrence. The example the authors give is “It is very unlikely that the meridional overturning circulation will undergo a large abrupt transition during the 21st century” (emphasis added). The first of these phrases (“very unlikely”) expresses probabilistic uncertainty whereas the second (“large abrupt transition”) embodies a vague description. People may have different interpretations of both phrases. They might disagree on what range of probabilities is referred to by “very unlikely” or on what is meant by a “large abrupt” change. Somewhat more worryingly, they might agree on how likely the “large abrupt” change is while failing to realize that they have different interpretations of that change in mind.

The crucial point here is that probability and vagueness are distinct kinds of uncertainty (see, e.g., Smithson, 1989). While the IPCC 2007 report is consistently explicit regarding probabilistic expressions, it only intermittently attends to matters of vagueness. For example, in the statement “It is likely that heat waves have become more frequent over most land areas” (IPCC 2007, pg. 30) the term “heat waves” remains undefined and the time-span is unspecified. In contrast, just below that statement is this one: “It is likely that the incidence of extreme high sea level3 has increased at a broad range of sites worldwide since 1975.” Footnote 3 then goes on to clarify “extreme high sea level” by the following: “Excluding tsunamis, which are not due to climate change. Extreme high sea level depends on average sea level and on regional weather systems. It is defined here as the highest 1% of hourly values of observed sea level at a station for a given reference period.”

The Budescu paper’s second recommendation is to specify the sources of uncertainty, such as whether these arise from disagreement among specialists, absence of data, or imprecise data. Distinguishing between uncertainty arising from disagreement and uncertainty arising from an imprecise but consensual assessment is especially important. In my experience, the former often is presented as if it is the latter. An interval for near-term ocean level increases of 0.2 to 0.8 metres might be the consensus among experts, but it could also represent two opposing camps, one estimating 0.2 metres and the other 0.8.

The IPCC report guidelines for reporting uncertainty do raise the issue of agreement: “Where uncertainty is assessed qualitatively, it is characterised by providing a relative sense of the amount and quality of evidence (that is, information from theory, observations or models indicating whether a belief or proposition is true or valid) and the degree of agreement (that is, the level of concurrence in the literature on a particular finding).” (IPCC 2007, pg. 27) The report then states that levels of agreement will be denoted by “high,” “medium,” and so on, while the amount of evidence will be expressed as “much,” “medium,” and so on.

As it turns out, the phrase “high agreement and much evidence” occurs seven times in the report and “high agreement and medium evidence” occurs twice. No other agreement phrases are used. These occurrences are almost entirely in the sections devoted to climate change mitigation and adaptation, as opposed to assessments of previous and future climate change. Typical examples are:
“There is high agreement and much evidence that with current climate change mitigation policies and related sustainable development practices, global GHG emissions will continue to grow over the next few decades.” (IPCC 2007, pg. 44) and
“There is high agreement and much evidence that all stabilisation levels assessed can be achieved by deployment of a portfolio of technologies that are either currently available or expected to be commercialised in coming decades, assuming appropriate and effective incentives are in place for development, acquisition, deployment and diffusion of technologies and addressing related barriers.” (IPCC 2007, pg. 68)

The IPCC guidelines for other kinds of expert assessments do not explicitly refer to disagreement: “Where uncertainty is assessed more quantitatively using expert judgement of the correctness of underlying data, models or analyses, then the following scale of confidence levels is used to express the assessed chance of a finding being correct: very high confidence at least 9 out of 10; high confidence about 8 out of 10; medium confidence about 5 out of 10; low confidence about 2 out of 10; and very low confidence less than 1 out of 10.” (IPCC 2007, pg. 27) A typical statement of this kind is “By 2080, an increase of 5 to 8% of arid and semi-arid land in Africa is projected under a range of climate scenarios (high confidence).” (IPCC 2007, pg. 50)

That said, some parts of the IPCC report do convey disagreeing projections or estimates, where the disagreements are among models and/or scenarios, especially in the section on near-term predictions of climate change and its impacts. For instance, on pg. 47 of the 2007 report the graph below charts mid-century global warming relative to 1980-99. The six stabilization categories are those described in the Fourth Assessment Report (AR4).

[Figure: Estimated mid-century global warming relative to 1980–99 for the six AR4 stabilization categories (IPCC 2007, pg. 47), with the horizontal scale truncated on the right.]

Although this graph effectively represents both imprecision and disagreement (or conflict), it slightly underplays both by truncating the scale at the right-hand side. The next figure shows how the graph would appear if the full range of categories V and VI were included. Both the apparent imprecision of V and VI and the extent of disagreement between VI and categories I-III are substantially greater once we have the full picture.

[Figure: The same graph redrawn with the full ranges of stabilization categories V and VI included.]

There are understandable motives for concealing or disguising some kinds of uncertainty, especially those that could be used by opponents to bolster their own positions. Chief among these is uncertainty arising from conflict. In a series of experiments Smithson (1999) demonstrated that people regard precise but disagreeing risk messages as more troubling than informatively equivalent imprecise but agreeing messages. Moreover, they regard the message sources as less credible and less trustworthy in the first case than in the second. In short, conflict is a worse kind of uncertainty than ambiguity or vagueness. Smithson (1999) labeled this phenomenon “conflict aversion.” Cabantous (2007) confirmed and extended those results by demonstrating that insurers would charge a higher premium for insurance against mishaps whose risk information was conflictive than if the risk information was merely ambiguous.

Conflict aversion creates a genuine communications dilemma for disagreeing experts. On the one hand, public revelation of their disagreement can result in a loss of credibility or trust in experts on all sides of the dispute. Laypeople have an intuitive heuristic that if the evidence for any hypothesis is uncertain, then equally able experts should have considered the same evidence and agreed that the truth-status of that hypothesis is uncertain. When Peter Collignon, professor of microbiology at The Australian National University, cast doubt on the net benefit of the Australian Fluvax program in 2010, he attracted opprobrium from colleagues and health authorities on grounds that he was undermining public trust in vaccines and the medical expertise behind them. On the other hand, concealing disagreements runs the risk of future public disclosure and an even greater erosion of trust (lying experts are regarded as worse than disagreeing ones). The problem of how to communicate uncertainties arising from disagreement and vagueness simultaneously and distinguishably has yet to be solved.

References

Budescu, D.V., Broomell, S. and Por, H.-H. (2009) Improving the communication of uncertainty in the reports of the Intergovernmental Panel on Climate Change. Psychological Science, 20, 299–308.

Cabantous, L. (2007). Ambiguity aversion in the field of insurance: Insurers’ attitudes to imprecise and conflicting probability estimates. Theory and Decision, 62, 219–240.

Intergovernmental Panel on Climate Change (2007). Summary for policymakers: Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Retrieved May 2010 from http://www.ipcc.ch/pdf/assessment-report/ar4/wg1/ar4-wg1-spm.pdf.

Smithson, M. (1989). Ignorance and Uncertainty: Emerging Paradigms. Cognitive Science Series. New York: Springer Verlag.

Smithson, M. (1999). Conflict aversion: Preference for ambiguity vs. conflict in sources and evidence. Organizational Behavior and Human Decision Processes, 79, 179–198.

Smithson, M., Budescu, D.V., Broomell, S. and Por, H.-H. (2011) Never Say “Not:” Impact of Negative Wording in Probability Phrases on Imprecise Probability Judgments. Accepted for presentation at the Seventh International Symposium on Imprecise Probability: Theories and Applications, Innsbruck, Austria, 25-28 July 2011.

Not Knowing When or Where You’re At


My stepfather Bob, at 86 years of age, just underwent major surgery to remove a considerable section of his colon. He’d been under heavy sedation for several days, breathing through a respirator, with drip-feeds for hydration and sustenance. After that, he was gradually reawakened. He found himself in a room he’d never seen before, and he had no sense of what day or time it was. He had no memory of events between arriving at the hospital and waking up in the ward. He had to figure out where and when he was.

These are what philosophers are fond of calling “self-locating beliefs.” They say we learn two quite different kinds of things about the world: things about what goes on in this world, and things about where and when we are in it.

Bob has always had an excellent sense of direction. He’s one of those people who can point due North when he’s in a basement. I, on the other hand, have such a poor sense of direction that I’ve been known to blunder into a broom-closet when attempting to exit from a friend’s house. So I find the literature on self-locating beliefs personally relevant, and fascinating for the problems they pose for reasoning about uncertainty.

Adam Elga’s classic paper published in 2000 made the so-called “Sleeping Beauty” self-locating belief problem famous: “Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking.”

You have just woken up. What probability should you assign to the proposition that the coin landed Heads? Some people answer 1/2 because the coin is fair, your prior probability that it lands Heads should be 1/2, and the fact that you have just awakened adds no other information.

Others say the answer is 1/3. There are 3 possible awakenings, of which only 1 arises from the coin landing Heads. Therefore, given that you have ended up with one of these possible awakenings, the probability that it’s a “Heads” awakening is 1/3. Elga himself opted for this answer. However, the debates continued long after the publication of his paper with many ingenious arguments for both answers (and even a few others). Defenders of one position or the other became known as “halfers” and “thirders.”
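For readers who like to see the thirder’s counting argument in concrete form, here is a minimal simulation sketch (my own illustration, not Elga’s): it runs the experiment many times, tallies every awakening, and reports what fraction of awakenings occur on a Heads run. Whether that long-run frequency is the right quantity to equate with your credence upon awakening is, of course, precisely what halfers and thirders dispute.

```python
import random

def heads_awakening_fraction(trials=100_000, seed=1):
    """Fraction of all awakenings that occur on a Heads run.

    Heads: the subject is woken once; Tails: twice. We tally every
    awakening across many runs of the experiment.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5        # fair coin toss
        awakenings = 1 if heads else 2    # Heads: once; Tails: twice
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(heads_awakening_fraction())  # approximately 1/3
```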

But suppose we accept Elga’s answer: what of it? It raises a problem for widely accepted ideas about how and why our beliefs should change. Consider the probability we’d have assigned to the coin landing Heads before the researchers imposed their experiment on us. We’d say a fair coin has a probability of 1/2 of landing Heads. But on awakening, Elga says, we should now believe that probability is 1/3. Yet we haven’t received any new information about the coin or anything else relevant to the outcome of the coin-toss. In standard accounts of conditional probability, we should alter this probability only on the grounds of having acquired some new information, but all the information about the experiment was given to us before we were put to sleep!

Here’s another example, the Shangri La Journey problem from Frank Arntzenius (2003):

“There are two paths to Shangri La, the path by the Mountains, and the path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if Heads you go by the Mountains, if Tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will forever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But, just as soon as you enter Shangri La, your memory of this Beauteous Journey will be erased and be replaced by a memory of the Journey by the Mountains.”

Arntzenius takes this case to provide a counterexample to the standard account of how conditional probability works. As in the Sleeping Beauty problem, consider what the probability we’d assign to the coin landing Heads should be at different times. Our probability before the journey should be 1/2, since the only relevant information we have is that the coin is fair. Now, suppose the coin actually lands Heads. Our probability of Heads after we set out and see that we are traveling by the mountains should be 1, because we now know the outcome of the coin toss. But once we pass through the gates of Shangri La, Arntzenius argues, our probability should revert to 1/2: “for you will know that you would have had the memories that you have either way, and hence you know that the only relevant information that you have is that the coin is fair.”

Well, this goes against the standard Bayesian account of conditional probabilities. Letting H = Heads and M = memory of the Journey by the Mountains, according to Bayes’ formula we should update our belief that the coin landed heads by computing
P(H|M) = P(M|H)P(H)/P(M),
where P(H) is the probability of Heads after we know the outcome of the coin toss. According to our setup, P(H) = P(M|H) = P(M) = 1. Therefore, P(H|M) = 1. Arntzenius declares that because our probability of Heads should revert to 1/2, something is wrong with Bayesian conditionalization.

The difficulty arises because the Bayesian updating rule—conditionalization—requires certainties to be permanent: once you’re certain of something, you should always be certain of it. But when we consider self-locating beliefs, there seem to be cases where this requirement clearly is false. For example, one can reasonably change from being certain that it’s one time to being certain that it’s another.

This is the kind of process my stepfather went through as he reoriented himself to what his family and medical caretakers assured him was the here-and-now. It was rather jarring for him at first, but fortunately he isn’t a Bayesian. He could sneak peeks at newspapers and clocks to see if those concurred with what he was being told, and like most of us he could be comfortable with the notion of shifting from being certain it was, say, March 20th to being certain that it was March 26th. But what if he were a Bayesian?

Bob: What’s going on?
Nearest & Dearest: You’re in Overlake Hospital and you’ve been sedated for 6 days. It’s Saturday the 26th of March.
Bob: I can see I’m in a hospital ward, but I’m certain the date is the 20th of March because my memory tells me that a moment ago that was the case.
N&D: But it really is the 26th; you’ve had major surgery and had to be sedated for 6 days. We can show you newspapers and so on if you don’t believe us.
Bob: My personal probability that it is the 20th was 1, last I recall, and it still is. No evidence you provide me can shift me from a probability of 1 because I’m using Bayes’ Rule to update my beliefs. You may as well try to convince me the sun won’t rise tomorrow.
N&D: Uhhh…

A Bayesian faces additional problems that do not and need not trouble my stepfather. One issue concerns identity. Bayesian conditionalization is only well-defined if the subject has a unique set of prior beliefs, i.e., a “unique predecessor.” How should we extend conditionalization to accommodate non-unique predecessors? Suppose, for instance, that we’re willing to entertain both the 1/2 and the 1/3 answers to the Sleeping Beauty conundrum.

Meacham’s (2010) prescription for multiple predecessors is to represent them with an interval: “A natural proposal is to require our credences to lie in the span of the credences conditionalization prescribes to our predecessors.” But the pair of credences {1/3, 1/2} that the Sleeping Beauty puzzle leaves us with does not lend any plausibility to values in between them. For instance, it surely would be silly to average them and declare that the answer to this riddle is 5/12; neither the thirders nor the halfers would endorse this solution.

A while ago I (Smithson, 1999), and more recently Gajdos and Vergnaud (2011), argued that there is a crucial difference between two sharp but disagreeing predecessors {1/3, 1/2} and two vague but agreeing ones {[1/3, 1/2], [1/3, 1/2]}. Moreover, it is not irrational to prefer the second situation to the first, as I showed many people do. Cabantous et al. (2011) recently reported that insurers would charge a higher premium for insurance when expert risk estimates are precise but conflicting than when they agree but are imprecise.

In short, standard probability theories have difficulty not only with self-locating belief updating but, more generally, with any situation that presents multiple equally plausible probability assessments. The traditional Bayesian can’t handle multiple selves, but the rest of us can.

Can We Make “Good” Decisions Under Ignorance?


There are well-understood formal frameworks for decision making under risk, i.e., where we know all the possible outcomes of our acts and also know the probabilities of those outcomes. There are even prescriptions for decision making when we don’t quite know the probabilities but still know what the outcomes could be. Under ignorance we not only don’t know the probabilities, we may not even know all of the possible outcomes. Shorn of their probabilities and a completely known outcome space, normative frameworks such as expected utility theory stand silent. Can there be such a thing as a good decision-making method under ignorance? What criteria or guidelines make sense for decision making when we know next to nothing?

At first glance, the notion of criteria for good decisions under ignorance may seem absurd. Here is a simplified list of criteria for “good” (i.e., rational) decisions under risk:

  1. Your decision should be based on your current assets.
  2. Your decision should be based on the possible consequences of all possible outcomes.
  3. You must be able to rank all of the consequences in order of preference and assign a probability to each possible outcome.
  4. Your choice should maximize your expected utility, or roughly speaking, the likelihood of those outcomes that yield highly preferred consequences.

In non-trivial decisions, this prescription requires a vast amount of knowledge, computation, and time. In many situations at least one of these requirements isn’t met, and often none of them are.
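To make those knowledge demands concrete, here is a minimal sketch of what the expected-utility recipe asks for, using entirely hypothetical acts, outcomes, utilities and probabilities of my own invention: every act must come with a complete list of outcomes, a utility for each outcome, and a probability for each outcome.

```python
# Hypothetical decision under risk: every act lists (probability, utility)
# pairs for a fully enumerated outcome space.
acts = {
    "take the new job": [(0.7, 80), (0.3, 20)],
    "stay put":         [(1.0, 55)],
}

def expected_utility(lottery):
    """Probability-weighted sum of utilities for one act."""
    return sum(p * u for p, u in lottery)

for act, lottery in acts.items():
    print(act, expected_utility(lottery))

best = max(acts, key=lambda act: expected_utility(acts[act]))
print("choose:", best)  # the act with the highest expected utility
```

Even this toy version presupposes that the outcome space, the utilities and the probabilities are all known, which is exactly what decision making under ignorance denies us.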

This problem has been recognized for a long time, although it has been framed in rather different ways. In the 1950’s at least two spokespeople emerged for decision making under ignorance. The economist Herbert Simon proposed “bounded rationality” as an alternative to expected utility theory, in recognition of the fact that decision makers have limited time, information-gathering resources, and cognitive capacity. He coined the term “satisficing” to describe criteria for decisions that may fall short of optimality but are “good enough” and also humanly feasible to achieve. Simon also championed the use of “heuristics,” rules of thumb for reasoning and decision making that, again, are not optimal but work well most of the time. These ideas have been elaborated since by many others, including Gerd Gigerenzer’s “fast and frugal” heuristics and Gary Klein’s “naturalistic” decision making. These days bounded rationality has many proponents.
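By contrast, here is an equally minimal sketch of satisficing in roughly Simon’s sense (my own illustration, not his formalism, and with hypothetical numbers): instead of evaluating every option, examine them one at a time and take the first that clears an aspiration level.

```python
def satisfice(options, score, aspiration):
    """Return the first option whose score meets the aspiration level.

    Options are examined in the order encountered; no attempt is made
    to find the best one, only one that is 'good enough'.
    """
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # nothing cleared the bar; lower the aspiration or keep searching

# Hypothetical apartments scored 0-100 on overall suitability.
ratings = {"apartment A": 62, "apartment B": 78, "apartment C": 91}
print(satisfice(ratings, lambda k: ratings[k], aspiration=75))  # "apartment B"
```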

Around the same time that Simon was writing about bounded rationality, political economist Charles Lindblom emerged as an early proponent of various kinds of “incrementalism,” which he engagingly called “muddling through.” Whereas Simon and his descendants focused on the individual decision maker, Lindblom wrote about decision making mainly in institutional settings. One issue that the bounded rationality people tended to neglect was highlighted by Lindblom, namely the problem of not knowing what our preferences are when the issues are complex:

“Except roughly and vaguely, I know of no way to describe–or even to understand–what my relative evaluations are for, say, freedom and security, speed and accuracy in governmental decisions, or low taxes and better schools than to describe my preferences among specific policy choices that might be made between the alternatives in each of the pairs… one simultaneously chooses a policy to attain certain objectives and chooses the objectives themselves.” (Lindblom 1959, pg. 82).

Lindblom’s characterization of “muddling through” also is striking for its rejection of means-ends analysis. For him, means and ends are entwined in the policy options under consideration. “Decision-making is ordinarily formalized as a means-ends relationship: means are conceived to be evaluated and chosen in the light of ends finally selected independently of and prior to the choice of means… Typically, …such a means-ends relationship is absent from the branch method, where means and ends are simultaneously chosen.” (Lindblom 1959, pg. 83).

In the absence of a means-end analysis, how can the decision or policy maker know what is a good decision or policy? Lindblom’s answer is consensus among policy makers: “Agreement on objectives failing, there is no standard of ‘correctness.’… the test is agreement on policy itself, which remains possible even when agreement on values is not.” (Lindblom 1959, pg. 83)

Lindblom’s prescription is to restrict decisional alternatives to small or incremental deviations from the status quo. He claims that “A wise policy-maker consequently expects that his policies will achieve only part of what he hopes and at the same time will produce unanticipated consequences he would have preferred to avoid. If he proceeds through a succession of incremental changes, he avoids serious lasting mistakes in several ways.” First, a sequence of small steps will have given the policy maker grounds for predicting the consequences of an additional similar step. Second, he claims that small steps are more easily corrected or reversed than large ones. Third, stakeholders are more likely to agree on small changes than on radical ones.

How, then, does Lindblom think his approach will not descend into groupthink or extreme confirmation bias? Through diversity and pluralism among the stakeholders involved in decision making:

“… agencies will want among their own personnel two types of diversification: administrators whose thinking is organized by reference to policy chains other than those familiar to most members of the organization and, even more commonly, administrators whose professional or personal values or interests create diversity of view (perhaps coming from different specialties, social classes, geographical areas) so that, even within a single agency, decision-making can be fragmented and parts of the agency can serve as watchdogs for other parts.”

Naturally, Lindblom’s prescriptions and claims were widely debated. There is much to criticize, and he didn’t present much hard evidence that his prescriptions would work. Nevertheless, he ventured beyond the bounded rationality camp in four important respects. First, he brought into focus the prospect that we may not have knowable preferences. Second, he realized that means and ends may not be separable and may be reshaped in the very process of making a decision. Third, he mooted the criteria of choosing incremental and corrigible changes over large and irreversible ones. Fourth, he observed that many decisions are embedded in institutional or social contexts that may be harnessed to enhance decision making. All four of these advances suggest implications for decision making under ignorance.

Roger Kasperson contributed a chapter on “coping with deep uncertainty” to Gabriele Bammer’s and my 2008 book. By “deep uncertainty” he means “situations in which, typically, the phenomena… are characterized by high levels of ignorance and are still only poorly understood scientifically, and where modelling and subjective judgments must substitute extensively for estimates based upon experience with actual events and outcomes, or ethical rules must be formulated to substitute for risk-based decisions.” (Kasperson 2008, pg. 338) His list of options open to decision makers confronted with deep uncertainty includes the following:

  1. Delay to gather more information and conduct more studies in the hope of reducing uncertainty across a spectrum of risk;
  2. Interrelate risk and uncertainty to target critical uncertainties for priority further analysis, and compare technology and development options to determine whether clearly preferable options exist for proceeding;
  3. Enlarge the knowledge base for decisions through lateral thinking and broader perspective;
  4. Invoke the precautionary principle;
  5. Use an adaptive management approach; and
  6. Build a resilient society.

He doesn’t recommend these unconditionally, but instead writes thoughtfully about their respective strengths and weaknesses. Kasperson also observes that “The greater the uncertainty, the greater the need for social trust… The combination of deep uncertainty and high social distrust is often a recipe for conflict and stalemate.” (Kasperson 2008, pg. 345)

At the risk of leaping too far and too fast, I’ll conclude by presenting my list of criteria and recommendations for decisions under ignorance. I’ve incorporated material from the bounded rationality perspective, some of Lindblom’s suggestions, bits from Kasperson, my own earlier writings and from other sources not mentioned in this post. You’ll see that the first two major headings echo the first two in the expected utility framework, but beneath each of them I’ve slipped in some caveats and qualifications.

  1. Your decision should be based on your current assets.
    a. If possible, know which assets can be traded and which are non-negotiable.
    b. If some options are decisively better (worse) than others considering the range of risk that may exist, then choose them (get rid of them); a sketch of this screening idea appears after this list.
    c. Consider options themselves as assets. Try to retain them or create new ones.
    d. Regard your capacity to make decisions as an asset. Make sure you don’t become paralyzed by uncertainty.
  2. Your decision should be based on the possible consequences.
    a. Be aware of the possibility that means and ends may be inseparable and that your choice may reshape both means and ends.
    b. Beware unacceptable ends-justify-means arguments.
    c. Avoid irreversible or incorrigible alternatives if possible.
    d. Seek alternatives that are “robust” regardless of outcome.
    e. Where appropriate, invoke the precautionary principle.
    f. Seek alternatives whose consequences are observable.
    g. Plan to allocate resources for monitoring consequences and (if appropriate) gathering more information.
  3. Don’t assume that getting rid of ignorance and uncertainty is always a good idea.
    a. See 1.c. and 2.c. above. Options and corrigibility require uncertainty; freedom of choice is positively badged uncertainty.
    b. Interventions that don’t alter people’s uncertainty orientations will be frustrated by people’s attempts to re-establish the level of uncertainty they are comfortable with.
    c. Ignorance and uncertainty underpin particular kinds of social capital. Eliminate ignorance and uncertainty and you also eliminate that social capital, so make sure you aren’t throwing any babies out with the bathwater.
    d. Other people are not always interested in reducing ignorance and uncertainty. They need uncertainty to have freedom to make their own decisions. They may want ignorance to avoid culpability.
  4. Where possible, build and utilize relationships based on trust instead of contracts.
    a. Contracts presume and require predictive knowledge, e.g., about who can pay whom how much and when. Trust relationships are more flexible and robust under uncertainty.
    b. Contracts lock the contractors in, incurring opportunity costs that trust relationships may avoid.
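As promised under 1.b, here is a minimal sketch of the screening idea behind it (my own illustration, with hypothetical options and payoff ranges): when all we can supply for each option is a range of plausible payoffs, an option whose worst case is at least as good as another option’s best case can be kept, and the dominated option discarded, without ever settling on precise probabilities.

```python
# Hypothetical options, each with a [worst-case, best-case] payoff range
# reflecting the spread of risk we are willing to entertain.
options = {
    "option A": (30, 45),
    "option B": (5, 60),
    "option C": (-20, 10),
}

def interval_dominates(x, y):
    """x dominates y if x's worst case is at least y's best case."""
    return x[0] >= y[1]

# Screen out any option that some other option interval-dominates.
admissible = {
    name: rng for name, rng in options.items()
    if not any(interval_dominates(other, rng)
               for other_name, other in options.items() if other_name != name)
}
print(admissible)  # option C is dominated by option A and drops out
```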

Managing and decision making under ignorance is, of course, far too large a topic for a single post, and I’ll be returning to it in the near future. Meanwhile, I’m hoping this piece at least persuades readers that the notion of criteria for good decisions under ignorance may not be absurd after all.

Privacy, Censorship, Ignorance and the Internet


Wikileaks releases hundreds of thousands more classified US documents this weekend, and my wife and I have recently boogied our way through the American airport “naked scan” process (try googling ‘naked scan youtube’ to get a sample of public backlash against it). So, I have both censorship and privacy on my mind. They belong together. Concerns about privacy and censorship regarding the internet have been debated for more than a decade. All of these concerns are about who should have access to what and about whom, and whether regulation of such access is feasible.

Attempts to censor internet materials have largely been futile. In Australia (where I live) efforts to draft legislation requiring internet service providers to filter content have stalled for more than two years. Indeed, the net has undermined censorship powers over earlier mass communications media such as television. On grounds that it could prejudice criminal trials at the time, censorship authorities in the state of Victoria attempted to prevent its residents from watching “Underbelly,” a TV series devoted to gangland wars centered in Melbourne. They discovered to their chagrin that pirated versions of the program could be downloaded from various sources on the net.

How about privacy? Recently Facebook has been taken to task over privacy issues, and not unreasonably, although both Facebook and its users have contributed to those problems. On the users’ side, anyone can be tagged in a personally invasive or offensive photo, and before Facebook can remove the photo it may already have been widely distributed or shared. Conventional law does not protect people who are captured by a photograph in public because that doesn’t constitute an invasion of privacy. On Facebook’s part, in 2007 it launched the Beacon program, whereby users’ rental records were made public. Many people regarded this as a breach of privacy, and a lawsuit ensued, resulting in the shutdown of Beacon.

And then there was Kate’s party. Unbeknown to Kate Miller, an invitation to a party at her apartment was sent out on Facebook. After prankster David Thorne posted the link on Twitter, people started RSVP’ing. By the time Facebook took the event down a night later, 60,000 people had said they were coming, with a further 180,000 invitees unconfirmed. According to Thorne, this hoax was motivated by a desire to point out problems with privacy on Facebook and Twitter.

A cluster of concerns boils down to a dual use dilemma of the kind I described in an earlier post. The same characteristics of the net that defeat secrecy or censorship and democratize self-expression also can be used to invade privacy, steal identities, and pirate intellectual property or copyright material. For example, cookies are a common concern in the field of privacy, especially tracking cookies. Although most website developers use cookies for legitimate technical purposes, the potential for abuse is there. General concerns regarding Internet user privacy have become sufficient for a UN agency to issue a report on the dangers of identity fraud.

The chief dividing-line in debates about privacy and censorship is whether privacy is an absolute right or a negotiable privilege. Security expert Bruce Schneier’s 2006 essay on privacy weighs in on the side of privacy as a right. He points out that anti-privacy arguments such as “if you’re doing nothing wrong you have nothing to hide” assume that privacy is in the service of concealing wrongs. But love-making, diary-keeping, and intimate conversations are not examples of wrongdoing, and they indicate that privacy is a basic need.

Contrast this view with Google CEO Eric Schmidt’s vision of the future, in which children change their names at adulthood to escape embarrassing online dossiers of the kind compiled by Google. In a 2010 interview with him, Wall Street Journal columnist Holman Jenkins, Jr. records Mr. Schmidt predicting, “apparently seriously, that every young person one day will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends’ social media sites.” Mr. Schmidt goes on to opine that regulation of privacy isn’t needed because users will abandon Google if it does anything “creepy” with their personal information. Among the more amusing comments posted in response, one respondent noted that Mr. Schmidt has blocked the Google Street View images of his own holiday home on Nantucket.

Back to Wall Street Journal columnist Jenkins’ interview with the Google CEO: “Mr. Schmidt is surely right, though, that the questions go far beyond Google. ‘I don’t believe society understands what happens when everything is available, knowable and recorded by everyone all the time,’ he says.” This goes to the heart of the matter, and Joe and Jane Public aren’t the only ones who don’t understand this.

Theories and research about human communication have largely been hobbled by a default assumption that miscommunication or misunderstanding is aberrant and should be eliminated. For example, the overwhelming emphasis is on studying how to detect deception rather than investigating how it is constituted and the important roles it plays in social interaction. Likewise, the literature on miscommunication is redolent with negative metaphors: mechanistic terms like “breakdowns,” “disrepair,” and “distortion,” and critical-theoretic terms such as “disadvantage,” “denial,” and “oppression.” In the human relations school, “unshared” and “closed” communications carry moral opprobrium. So these perspectives are blind to the benefits of unshared communication, such as privacy.

Some experts from other domains concur. Steve Rambam famously declares that “Privacy is dead. Get over it.” David Brin claims (in his rebuttal to a critique of his book, The Transparent Society) that “we already live in the openness experiment, and have for 200 years.” The implicit inference from all this is that if only we communicated fully and honestly with one another, all would go well.

Really?

Let’s cut to the chase. Imagine that all of us—ZAP!—are suddenly granted telepathy. Each of us has instant access to the innermost thoughts and feelings of our nearest and dearest, our bosses, subordinates, friends, neighbors, acquaintances and perfect strangers. The ideal of noise-free, transparent, totally honest communication finally is achieved. Forget the internet—Now there really is no such thing as privacy anymore. What would the consequences be?

In the short term, cataclysmic. Many personal relationships, organizations, governments, and international relations would fall (or be torn) apart. There would be some pleasant surprises, yes, but I claim there would be many more negative ones, for two reasons. First, we have a nearly universal tendency to self-bolster by deluding ourselves somewhat about how positively others regard us. Second, many of us would be outraged at finding out how extensively we’ve been hoodwinked by others, not just those with power over us but also our friends and allies. Look at who American governmental spokespeople rushed to forewarn and preempt about the Wikileaks release. It wasn’t their enemies. It was their friends.

What about the longer term? With the masks torn off and the same information about anyone’s thoughts available to everyone, would we all end up on equal footing? As Schneier pointed out in his 2008 critique of David Brin’s book, those who could hang onto positions of power would find their power greatly enhanced. Knowledge may be power, but it is a greater power for those with more resources to exploit it. It would be child’s play to detect any heretic failing to toe the party or corporate line. And the kind of targeted marketing ensuing from this would make today’s cookie-tracking efforts look like the fumbling in the dark that they are.

In all but the most benign social settings, there would be no such thing as “free thinking.” Yes, censorship and secrecy would perish, but so would privacy and therefore the refuge so poignantly captured by “Die Gedanken sind frei.” The end result would certainly not be universal freedom of thought or expression.

Basic kinds of social capital would be obliterated and there would be massive cultural upheavals. Lying would become impossible, but so would politeness, tact, and civility. These all rely on potentially face-threatening utterances being softened, distorted, made ambiguous, or simply unsaid. Live pretence, play-acting or role-playing of any kind would be impossible. So would most personnel management methods. The doctor’s “bedside manner” or any professional’s mien would become untenable. For example, live classroom teaching would be very difficult indeed (“Pay attention, Jones, and stop fantasizing about Laura in the third row.” “I will, sir, when you stop fantasizing about her too.”). There would be no personas anymore, only personalities.

I said earlier that privacy and censorship belong together. As long as we want one, we’ll have to live with the other (and I leave it to you to decide which is the “one” and which the “other”). The question of who should know what about what or whom is a vexed question, and so it should be for anyone with a reasonably sophisticated understanding of the issues involved. And who should settle this question is an even more vexed question. I’ve raised only a few of the tradeoffs and dilemmas here. The fact that internet developments have generated debates about these issues is welcome. But frankly, those debates have barely scratched the surface (and that goes for this post!). Let’s hope they continue. We need them.

Written by michaelsmithson

November 29, 2010 at 10:46 am

How can Vagueness Earn Its Keep?


Late in my childhood I discovered that if I was vague enough about my opinions no-one could prove me wrong. It was a short step to realizing that if I was totally vague I’d never be wrong. What is the load-bearing capacity of the elevator in my building? If I say “anywhere from 0 tons to infinity” then I can’t be wrong. What’s the probability that Samantha Stosur will win a Grand Slam tennis title next year? If I say “anywhere from 0 to 1” then once again I can’t be wrong.

There is at least one downside to vagueness, though: Indecision. Vagueness can be paralyzing. If the elevator’s designers’ estimate of its load-bearing capacity had been 0-to-infinity then no-one could decide whether any load should be lifted by it. If I can’t be more precise than 0-to-1 about Stosur’s chances of winning a Grand Slam title then I have no good reason to bet on or against that occurrence, no matter what odds the bookmakers offer me.

Nevertheless, in situations where we don’t know much but are called upon to estimate something, some amount of vagueness seems not only natural but also appropriately cautious. You don’t have to look hard to find vagueness; you can find it anywhere from politics to measurement. When you weigh yourself, your scale may only determine your weight within plus or minus half a pound. Even the most well-informed engineer’s estimate of load-bearing capacity will have an interval around it. The amount of imprecision in these cases is determined by what we know about the limitations of our measuring instruments or our mathematical models of load-bearing structures.

How would a sensible decision maker work with vague estimates?  Suppose our elevator specs say the load-bearing capacity is between 3800 and 4200 pounds. Our “worst-case” estimate is 3800. We would be prudent to use the elevator for loads weighing less than 3800 pounds and not for ones weighing more than that. But what if our goal was to overload the elevator? Then to be confident of overloading it, we’d want a load weighing more than 4200 pounds.

A similar prescription holds for betting on the basis of probabilities. We should use the lowest estimated probability of an event for considering bets that the event will happen and the highest estimated probability for bets that the event won’t happen. If I think the probability that Samantha Stosur will win a Grand Slam title next year is somewhere from 1/2 to 3/4 then I should accept any odds longer than an even bet that she’ll win one (i.e., I’ll use 1/2 as my betting probability on Stosur winning), and if I’m going to bet against it then I should accept any odds longer than 3 to 1 (i.e., I’ll use 1 – 3/4 = 1/4 as my betting probability against Stosur). For any odds offered to me corresponding to a probability inside my 1/2 to 3/4 interval, I don’t have any good reason to accept or reject a bet for or against Stosur.
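Here is a minimal sketch of that betting rule (my own rendering of the prescription above, using the Stosur interval as the example): bet on the event when the odds imply a probability below your lower bound, bet against it when they imply a probability above your upper bound, and otherwise abstain.

```python
def bet_decision(offered_prob, lower, upper):
    """Decide a bet given an interval probability [lower, upper] for the event.

    offered_prob is the probability implied by the bookmaker's odds
    (e.g., odds of 3 to 1 against imply 0.25).
    """
    if offered_prob < lower:
        return "bet on the event"        # the event is more likely than the price implies
    if offered_prob > upper:
        return "bet against the event"   # the event is less likely than the price implies
    return "no good reason to bet either way"

# Stosur example: judged probability of a Grand Slam win somewhere in [0.5, 0.75].
for implied in (0.4, 0.6, 0.8):
    print(implied, "->", bet_decision(implied, 0.5, 0.75))
```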

Are there decisions where vagueness serves no distinct function? Yes: Two-alternative forced-choice decisions. A two-alternative forced choice compels the decision maker to elect one threshold to determine which alternative is chosen. Either we put a particular load into the elevator or we do not, regardless of how imprecise the load-bearing estimate is. The fact that we use our “worst-case” estimate of 3800 pounds for our decisional threshold makes our decisional behavior no different from someone whose load-bearing estimate is precisely 3800 pounds.

It’s only when we have a third “middle” option (such as not betting either way on Sam Stosur) that vagueness influences our decisional behavior distinctly from a precise estimation. Suppose Tom thinks the probability that Stosur will win a Grand Slam title is somewhere between 1/2 and 3/4 whereas Harry thinks it’s exactly 1/2. For bets on Stosur, Harry won’t accept anything shorter than even odds. For any odds shorter than that Harry will be willing to bet against Stosur. Offered any odds corresponding to a probability between 1/2 and 3/4, Tom could bet either way or even decide not to bet at all. Tom may choose the middle option (not betting at all) under those conditions, whereas Harry never does.

If this is how they always approach gambling, and if (let’s say) on average the midpoints of Tom’s probability intervals are as accurate as Harry’s precise estimates, it would seem that in the long run they’ll do about equally well. Harry might win more often than Tom but he’ll lose more often too, because Tom refuses more bets.

But is Tom missing out on anything? According to philosopher Adam Elga, he is. In his 2010 paper, “Subjective Probabilities Should Be Sharp,” Elga argues that perfectly rational agents always will have perfectly precise judged probabilities. He begins with his version of the “standard” rational agent, whose valuation is strictly monetary, and who judges the likelihood of events with precise probabilities. Then he offers two bets regarding the hypothesis (H) that it will rain tomorrow:

  • Bet A: If H is true, you lose $10. Otherwise you win $15.
  • Bet B: If H is true, you win $15. Otherwise you lose $10.

First he offers the bettor Bet A. Immediately after the bettor decides whether or not to accept Bet A, he offers Bet B.

Elga then declares that any perfectly rational bettor who is sequentially offered bets A and B with full knowledge in advance about the bets and no change of belief in H from one bet to the next will accept at least one of the bets. After all, accepting both of them nets the bettor $5 for sure. If Harry’s estimate of the probability of H, P(H), is less than .6 then he accepts Bet A, whereas if it is greater than .6 Harry rejects Bet A. If Harry’s P(H) is low enough he may decide that accepting Bet A and rejecting Bet B has higher expected return than accepting both bets.
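To see where the .6 (and a corresponding .4 for Bet B) come from, here is a quick sketch of the expected monetary values of the two bets as functions of P(H); the arithmetic is mine, not Elga’s presentation.

```python
def ev_bet_A(p):  # Bet A: lose $10 if H, win $15 otherwise
    return -10 * p + 15 * (1 - p)      # = 15 - 25p, break-even at p = 0.6

def ev_bet_B(p):  # Bet B: win $15 if H, lose $10 otherwise
    return 15 * p - 10 * (1 - p)       # = 25p - 10, break-even at p = 0.4

for p in (0.3, 0.5, 0.7):
    print(p, round(ev_bet_A(p), 2), round(ev_bet_B(p), 2))
# At p = 0.3 Harry accepts A ($7.50 expected) and rejects B (-$2.50), doing better
# in expectation than the sure $5 from taking both; at p = 0.7 he does the reverse.
```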

On the other hand, suppose Tom’s assessment of P(H) is somewhere between 1/4 and 3/4. Bet A is optional for Tom because its break-even probability, .6, lies inside this interval, and Bet B is optional because its break-even probability, .4, does too. According to the rules for imprecise probabilities, Tom could (optionally) reject both bets. But rejecting both would net him $0, so he would miss out on $5 for sure. According to Elga, this example suffices to show that vague probabilities do not lead to rational behavior.

But could Tom rationally refuse both bets? He could do so if not betting, to him, is worth at least $5 regardless of whether it rains tomorrow or not. In short, Tom could be rationally vague if not betting has sufficient positive value (or if betting is sufficiently aversive) for him. So, for a rational agent, vagueness comes at a price. Moreover, the greater the vagueness, the greater value the middle option has to be in order for vagueness to earn its keep. An interval for P(H) from 2/5 to 3/5 requires the no-bet option to have value equal to $5, whereas an interval of 1/4 to 3/4 requires that option to be worth $8.75.
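Where do the $5 and $8.75 figures come from? One way to reconstruct them (this is my reading of the arithmetic, not a derivation spelled out above or in Elga’s paper) is that refusing a bet is defensible whatever the true probability in your interval only if the no-bet option is worth at least that bet’s best-case expected value over the interval.

```python
def ev_A(p):  # Bet A: lose $10 if H, win $15 otherwise
    return 15 - 25 * p

def ev_B(p):  # Bet B: win $15 if H, lose $10 otherwise
    return 25 * p - 10

def upper_ev(ev, interval):
    """Best-case expected value of a bet over a probability interval for H."""
    lo, hi = interval
    return max(ev(lo), ev(hi))  # both EVs are linear in p, so the max sits at an endpoint

for interval in ((0.4, 0.6), (0.25, 0.75)):
    needed = max(upper_ev(ev_A, interval), upper_ev(ev_B, interval))
    print(f"interval {interval}: no-bet option must be worth at least ${needed:.2f}")
# interval (0.4, 0.6): $5.00; interval (0.25, 0.75): $8.75
```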

Are there situations like this in the real world? A very common example is medical diagnostic testing (especially in preventative medicine), where the middle option is for the patient to undergo further examination or observation. Another example is courtroom trials. There are trials in which jurors may face more than the two options of conviction or acquittal, such as verdicts of mental illness or diminished responsibility. The Scottish system with its Not Proven alternative has been the subject of great controversy, due to the belief that it gives juries a let-out from returning Guilty verdicts. Not Proven is essentially a type of acquittal. The charges against the defendant are dismissed and she or he cannot be tried again for the same crime. About one-third of all jury acquittals in Scotland are the product of a Not Proven verdict.

Can an argument be made for Not Proven? A classic paper by Terry Connolly (1987) pointed out that a typical threshold probability of guilt associated with the phrase “beyond reasonable doubt” is in the [0.9, 1] range. For a logically consistent juror a threshold probability of 0.9 implies the difference between the subjective value of acquitting versus convicting the innocent is 9 times the difference in the value of convicting versus acquitting the guilty. Connolly demonstrated that the relative valuations of the four possible outcomes (convicting the guilty, acquitting the innocent, convicting the innocent and acquitting the guilty) that are compatible with such a high threshold probability are counterintuitive. Specifically, “. . . if one does [want to have a threshold of 0.9], one must be prepared to hold the acquittal of the guilty as highly desirable, at least in comparison to the other available outcomes” (pg. 111). He also showed that more intuitively reasonable valuations lead to unacceptably low threshold probabilities.
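The relationship Connolly exploits drops out of a standard threshold calculation, sketched below with generic utility labels of my own (not Connolly’s notation): a juror should convict when the expected utility of convicting exceeds that of acquitting, and solving for the probability of guilt yields the threshold.

```python
def conviction_threshold(u_convict_guilty, u_acquit_guilty,
                         u_acquit_innocent, u_convict_innocent):
    """Probability of guilt above which convicting has higher expected utility.

    Solve p*u_CG + (1-p)*u_CI > p*u_AG + (1-p)*u_AI for p.
    """
    cost_false_conviction = u_acquit_innocent - u_convict_innocent
    cost_false_acquittal = u_convict_guilty - u_acquit_guilty
    return cost_false_conviction / (cost_false_conviction + cost_false_acquittal)

# Hypothetical utilities in which a false conviction is 9 times worse than a
# false acquittal: the threshold comes out at 0.9, in line with Connolly's point.
print(conviction_threshold(u_convict_guilty=1, u_acquit_guilty=0,
                           u_acquit_innocent=1, u_convict_innocent=-8))  # 0.9
```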

So there seems to be a conflict between the high threshold probability required by “beyond reasonable doubt” and the relative value we place on acquitting the guilty versus convicting them. In a 2006 paper I showed that incorporating a third middle option with a suitable mid-range threshold probability can resolve this quandary, permitting a rational juror to retain a high conviction threshold and still evaluate false acquittals as negatively as false convictions. In short, a middle option like “Not Proven” would seem to be just the ticket. The price paid for this solution is a more stringent standard of proof for outright acquittal: because Not Proven occupies the range between .9 and some lower threshold probability, outright acquittal now requires the probability of guilt to fall below that lower threshold rather than merely below .9.

But what do people do? Two of my Honours students, Sara Deady and Lavinia Gracik, decided to find out. Sara and Lavinia both have backgrounds in law and psychology. They designed and conducted experiments in which mock jurors deliberated on realistic trial cases. In one condition the Not Proven, Guilty and Acquitted options were available to the jurors, whereas in another condition only the conventional two were available.

Their findings, published in our 2007 paper, contradicted the view that the Not Proven option attracts jurors away from returning convictions. Instead, Not Proven more often supplanted outright acquittals. Judged probabilities of guilt from jurors returning Not Proven were mid-range, just as the “rational juror” model said they should be. They acquitted defendants only if they thought the probability of guilt was quite low.

Vagueness therefore earns its keep through the value of the “middle” option that it invokes. Usually that value is determined by a tradeoff between two desirable properties. In measurement, the tradeoff often is speed vs accuracy or expense vs precision. In decisions, it’s decisiveness vs avoidance of undesirable errors. In betting or investment, cautiousness vs opportunities for high returns. And in negotiating agreements such as policies, it’s consensus vs clarity.

Written by michaelsmithson

November 16, 2010 at 1:15 am
