
## Not Knowing When or Where You’re At

My stepfather, at 86 years of age, just underwent major surgery to remove a considerable section of his colon. He’d been under heavy sedation for several days, breathing through a respirator, with drip-feeds for hydration and sustenance. After that, he was gradually reawakened. Bob found himself in a room he’d never seen before, and he had no sense of what day or time it was. He had no memory of events between arriving at the hospital and waking up in the ward. He had to figure out where and when he was.

These are what philosophers are fond of calling “self-locating beliefs.” On this view, we learn two quite different kinds of things about the world: things about what goes on in this world, and things about where and when we are in it.

Bob has always had an excellent sense of direction. He’s one of those people who can point due North from a basement. I, on the other hand, have such a poor sense of direction that I’ve been known to blunder into a broom-closet when attempting to exit a friend’s house. So I find the literature on self-locating beliefs personally relevant, and fascinating for the problems such beliefs pose for reasoning about uncertainty.

Adam Elga’s classic paper published in 2000 made the so-called “Sleeping Beauty” self-locating belief problem famous: “Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking.”

You have just woken up. What probability should you assign to the proposition that the coin landed Heads? Some people answer 1/2 because the coin is fair, your prior probability that it lands Heads should be 1/2, and the fact that you have just awakened adds no other information.

Others say the answer is 1/3. There are 3 possible awakenings, of which only 1 arises from the coin landing Heads. Therefore, given that you have ended up with one of these possible awakenings, the probability that it’s a “Heads” awakening is 1/3. Elga himself opted for this answer. However, the debates continued long after the publication of his paper with many ingenious arguments for both answers (and even a few others). Defenders of one position or the other became known as “halfers” and “thirders.”
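For readers who like to see what each camp is counting, here is a quick Monte Carlo sketch (a Python illustration of my own, not anything from the debate itself). It tallies both the fraction of tosses that land Heads and the fraction of awakenings that follow a Heads toss:

```python
import random

random.seed(0)

trials = 100_000
heads_tosses = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5   # fair coin
    if heads:
        heads_tosses += 1
        total_awakenings += 1       # Heads: one awakening
        heads_awakenings += 1
    else:
        total_awakenings += 2       # Tails: two awakenings

print(heads_tosses / trials)                 # ~0.5: the halfer's quantity
print(heads_awakenings / total_awakenings)   # ~0.333: the thirder's quantity
```

The simulation doesn’t settle the dispute, of course; it only makes vivid that the two camps are computing proportions over different reference classes (tosses versus awakenings).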

But suppose we accept Elga’s answer: what of it? It raises a problem for widely accepted ideas about how and why our beliefs should change. Consider the probability we’d have assigned to the coin landing Heads before the researchers imposed their experiment on us: the coin is fair, so that probability is 1/2. Yet on awakening, Elga says, we should believe the probability is 1/3. But we haven’t received any new information about the coin or anything else relevant to the outcome of the coin-toss. On standard accounts of conditional probability, we should alter a probability only on grounds of having acquired some new information, and all the information about the experiment was given to us *before* we were put to sleep!

Here’s another example, the Shangri La Journey problem from Frank Arntzenius (2003):

“There are two paths to Shangri La, the path by the Mountains, and the path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if Heads you go by the Mountains, if Tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La you will forever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But, just as soon as you enter Shangri La, your memory of this Beauteous Journey will be erased and be replaced by a memory of the Journey by the Mountains.”

Arntzenius takes this case to provide a counterexample to the standard account of how conditional probability works. As in the Sleeping Beauty problem, consider what the probability we’d assign to the coin landing Heads should be at different times. Our probability before the journey should be 1/2, since the only relevant information we have is that the coin is fair. Now, suppose the coin actually lands Heads. Our probability of Heads after we set out and see that we are traveling by the mountains should be 1, because we now know the outcome of the coin toss. But once we pass through the gates of Shangri La, Arntzenius argues, our probability should revert to 1/2: “for you will know that you would have had the memories that you have either way, and hence you know that the only relevant information that you have is that the coin is fair.”

Well, this goes against the standard Bayesian account of conditional probabilities. Letting H = Heads and M = memory of the Journey by the Mountains, according to Bayes’ formula we should update our belief that the coin landed heads by computing

P(H|M) = P(M|H)P(H)/P(M),

where P(H) is the probability of Heads after we know the outcome of the coin toss. According to our setup, P(H) = P(M|H) = P(M) = 1. Therefore, P(H|M) = 1. Arntzenius declares that because our probability of Heads should revert to 1/2, something is wrong with Bayesian conditionalization.
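The arithmetic is easy to check mechanically. A minimal sketch (the `bayes` helper is mine, purely illustrative) plugs the setup’s values into the formula:

```python
def bayes(p_e_given_h, p_h, p_e):
    """Bayes' formula: P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# Shangri La setup, after traveling by the Mountains:
p_h = 1.0          # P(H): Heads is already known with certainty
p_m_given_h = 1.0  # P(M|H): Heads guarantees the mountain memory
p_m = 1.0          # P(M): we'd have that memory either way

print(bayes(p_m_given_h, p_h, p_m))  # 1.0, not the 1/2 Arntzenius argues for
```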

The difficulty arises because the Bayesian updating rule, conditionalization, requires certainties to be permanent: once you’re certain of something, you should always be certain of it. But when we consider self-locating beliefs, there seem to be cases where this requirement clearly fails: one can reasonably change from being certain that it’s one time to being certain that it’s another.
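The permanence of certainty falls straight out of the updating rule: once the prior on a hypothesis is 1, no evidence can move the posterior. A small illustration (the `update` helper is hypothetical, just restating Bayes’ rule for a hypothesis and its negation):

```python
def update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior for H given evidence E, via Bayes' rule."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

p = 1.0  # certain that H is true
for _ in range(5):
    # Evidence that would strongly favor not-H for any non-extreme prior:
    p = update(p, p_e_given_h=0.01, p_e_given_not_h=0.99)

print(p)  # still 1.0: conditionalization cannot dislodge a certainty
```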

This is the kind of process my stepfather went through as he reoriented himself to what his family and medical caretakers assured him was the here-and-now. It was rather jarring for him at first, but fortunately he isn’t a Bayesian. He could sneak peeks at newspapers and clocks to see whether they concurred with what he was being told, and like most of us he could be comfortable with the notion of shifting from being certain it was, say, March 20th to being certain that it was March 26th. But what if he *were* a Bayesian?

Bob: What’s going on?

Nearest & Dearest: You’re in Overlake Hospital and you’ve been sedated for 6 days. It’s Saturday the 26th of March.

Bob: I can see I’m in a hospital ward, but I’m certain the date is the 20th of March because my memory tells me that a moment ago that was the case.

N&D: But it really is the 26th; you’ve had major surgery and had to be sedated for 6 days. We can show you newspapers and so on if you don’t believe us.

Bob: My personal probability that it is the 20th was 1, last I recall, and it still is. No evidence you provide me can shift me from a probability of 1 because I’m using Bayes’ Rule to update my beliefs. You may as well try to convince me the sun won’t rise tomorrow.

N&D: Uhhh…

A Bayesian faces additional problems that do not and need not trouble my stepfather. One issue concerns identity. Bayesian conditionalization is only well-defined if the subject has a unique set of prior beliefs, i.e., a “unique predecessor.” How should we extend conditionalization to accommodate non-unique predecessors? For instance, what if we’re willing to entertain both the 1/2 and the 1/3 answers to the Sleeping Beauty conundrum?

Meacham’s (2010) prescription for multiple predecessors is to represent them with an interval: “A natural proposal is to require our credences to lie in the span of the credences conditionalization prescribes to our predecessors.” But the pair of credences {1/3, 1/2} that the Sleeping Beauty puzzle leaves us with does not lend any plausibility to the values in between them. It surely would be silly to average them and declare that the answer to the riddle is 5/12; neither the thirders nor the halfers would endorse that solution.

A while ago I (Smithson, 1999), and more recently Gajdos & Vergnaud (2011), argued that there is a crucial difference between two sharp but disagreeing predecessors {1/3, 1/2} and two vague but agreeing ones {[1/3, 1/2], [1/3, 1/2]}. Moreover, it is not irrational to prefer the second situation to the first, as I showed many people do. Cabantous et al. (2011) reported that insurers would charge a higher premium for insurance when expert risk estimates are precise but conflicting than when they agree but are imprecise.

In short, standard probability theories have difficulty not only with self-locating belief updating but, more generally, with any situation that presents multiple equally plausible probability assessments. The traditional Bayesian can’t handle multiple selves, but the rest of us can.