Trust, Gullibility and Skepticism
In my first post on this blog, I claimed that certain ignorance arrangements are socially mandated and underpin some forms of social capital. One of these is trust. Trust lives next-door to gullibility but also has faith, agnosticism, skepticism and paranoia as neighbors. I want to take a brief tour through this neighborhood.
Trust enhances our lives in ways that are easy to take for granted. Consider the vast realm of stuff most of us “just know”: that our planet is round, that our biological parents really are our biological parents, and that we can digest protein but not cellulose. These are all things we’ve learned essentially third-hand, from trusted sources. We don’t have direct evidence for these ideas. Hardly any of us would be able to offer proofs or even solid evidence for the bulk of our commonsense knowledge and ideas about how the world works. Instead, we trust the sources, in some cases because of their authority (e.g., parents, teachers) and in others because of their sheer numbers (e.g., everyone we know and everything we read agrees that the planet is round). Those sources, in turn, rarely are people who actually tested commonsense first-hand.
Why is this trust-based network of beliefs generally good for us? Most obviously, it saves us time, effort and disruption. Skepticism is costly, and not just in terms of time and energy (imagine the personal and social cost of demanding DNA tests from your parents to prove that you really are their offspring). Perhaps a bit less obviously, it conveys trust from us to those who provided the information in the first place. As a teacher, I value the trust my students place in me. Without it, teaching would be very difficult indeed.
And even less obviously, it’s this trust that makes learning and the accumulation of knowledge possible. It means each of us doesn’t have to start from the beginning again. We can build on knowledge that we trust has been tested and verified by competent people before us. Imagine having to fully test your computer to make sure it does division correctly, and spare a thought for those number-crunchers among us who lived through the 1994 floating-point-divide (FDIV) bug in Intel’s Pentium P5 chip.
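To make the point concrete, here is a minimal sketch (in Python, not anything the affected users actually ran) of the kind of spot-check almost nobody bothers to perform: verifying that hardware division gives back a sensible answer. The operand pair below is the widely reported test case for the Pentium FDIV bug; on a correct FPU the check passes.

```python
def division_ok(x: float, y: float, tol: float = 1e-9) -> bool:
    """Check that x / y, multiplied back by y, approximately recovers x."""
    return abs((x / y) * y - x) <= tol * abs(x)

# 4195835 / 3145727 is the famous operand pair that exposed the FDIV flaw.
print(division_ok(4195835.0, 3145727.0))  # True on a correct FPU
```

Of course, the whole point of the anecdote is that running such checks on every machine, for every operation, would make ordinary work impossible; we trust the hardware instead.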
More prosaically, about 10 years ago I found myself facing an awkward pair of possibilities. Either (a) sample estimates of certain statistical parameters don’t become more precise with bigger random samples (i.e., all the textbooks are wrong), or (b) there’s a bug in a very popular stats package that’s been around for more than three decades. Of course, after due diligence I found the bug. “Eternal vigilance!” a colleague crowed when I described to him the time and trouble it took, not only to diagnose the bug but to convince the stats package company that their incomplete beta function algorithm was haywire. But there’s the rub: if eternal vigilance really were required, then none of us would make any advances at all.
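The textbook claim in (a) is easy to sanity-check for oneself. The sketch below (a hedged illustration, not the author’s actual diagnosis) simulates estimating a population mean at two sample sizes and compares the spread of the estimates; larger samples should yield a visibly tighter spread.

```python
import random
import statistics

random.seed(42)  # fixed seed so the comparison is reproducible

def estimate_spread(n: int, reps: int = 2000) -> float:
    """Std. dev. of the sample-mean estimator across `reps` samples of size n."""
    means = [statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n))
             for _ in range(reps)]
    return statistics.stdev(means)

spread_small = estimate_spread(10)    # roughly 1/sqrt(10)  ≈ 0.32
spread_large = estimate_spread(1000)  # roughly 1/sqrt(1000) ≈ 0.03
print(spread_small > spread_large)    # precision improves with sample size
```

When a package’s output contradicts a pattern this basic, the sensible bet is on a software bug rather than on the textbooks, which is exactly how the story above resolved.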
Back to trust: when does it cease to be trust and become gullibility? Checking out all those unsubstantiated clichés is costly, and we also suffer from confirmation bias, the tendency to seek and accept evidence that fits what we already believe. Both tendencies make us vulnerable to gullibility.
Gullibility often is taken to mean “easily deceived.” Closer to the mark would be “excessively trusting,” but that raises the question of what counts as “excessive.” Can gullibility be measured? Perhaps, but one way not to do so is via the various “gullibility tests” abounding on the web. A fairly typical example is this one, in which you can attain a high score as a “free thinker” simply by choosing answers that go against conventional beliefs or that incline towards conspiracy theories. However, distrust is no evidence of a lack of gullibility. In fact, extreme distrust and paranoia are as obdurate against counter-evidence as any purblind belief; they become a kind of gullibility as well.
Can gullibility be distinguished from trust? Yes, according to studies such as the Psychological Science study claiming that oxytocin makes people more trusting but not more gullible (see the MSN story on this study). Participants received either a placebo or an oxytocin nasal spray, and were then asked to play a trust game in which they received a certain amount of money they could share with a “trustee,” in whose hands the money would then triple. The trustee could then transfer all, part, or none of it back to the participant. So the participant could make a lot of money, but only if the trustee was reliable and fair.
Participants played multiple rounds of the trust game with a computer and virtual partners, half of whom appeared to be reliable and half unreliable. Oxytocin recipients offered more money to the computer and the reliable partners than placebo recipients did. However, oxytocin and placebo recipients were equally reluctant to share money with unreliable partners. So, gullibility may be a failure to read social cues.
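The payoff structure of the trust game is simple enough to sketch. The numbers below are illustrative only, not those used in the study, but they show why sharing is rational only with a reliable trustee.

```python
def trust_game_payoff(endowment: float, shared: float,
                      return_fraction: float) -> float:
    """Participant keeps (endowment - shared); the shared amount triples
    in the trustee's hands, and the trustee returns a fraction of it."""
    assert 0 <= shared <= endowment and 0 <= return_fraction <= 1
    kept = endowment - shared
    returned = 3 * shared * return_fraction
    return kept + returned

print(trust_game_payoff(10, 10, 0.5))  # fair trustee: 15.0
print(trust_game_payoff(10, 10, 0.0))  # unreliable trustee: 0.0
print(trust_game_payoff(10, 0, 0.5))   # sharing nothing: 10.0
```

Trusting a fair partner beats hoarding, while trusting an unreliable one is worse than sharing nothing, so the game rewards reading one’s partner correctly, which is precisely the cue-reading that the oxytocin recipients did not lose.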
It may be comforting to know that clever people can be gullible too. After all, academic intelligence is no guarantee of social or practical intelligence. The popular science magazine Discover ran an April Fool’s article in 1995 on the hotheaded naked ice borer, which attracted more mail than any previous article published therein. It was widely distributed by newswires and treated seriously in Ripley’s Believe It or Not! and The Unofficial X Files Companion. While the author of this and other hoaxes expressed a few pangs of guilt, none of this inhibited Discover’s blog site from carrying this piece on the “discovery” that gullibility is associated with the “inferior supra-credulus,” a region in the brain “unusually active in people with a tendency to believe horoscopes and papers invoking fancy brain scans,” and a single gene called WTF1. As many magicians and con artists have known, people who believe they are intelligent also often believe they can’t be fooled and, as a result, can be caught off-guard. A famous recent case is Stephen Greenspan, professor and author of a book on gullibility, who became a victim of Bernie Madoff. At the other end of the scale is comedian Joe Penner’s catchphrase “You can’t fool me, I’m too ignorant.”
OK, but what about agnosticism? Why not avoid gullibility by refusing to believe any proposition for which you lack first-hand evidence or logico-mathematical proof? This sounds attractive at first, but the price of agnosticism is indecisiveness. Indecision can be costly (should you refuse your food because you don’t understand digestion?). Even if you try to evade indecision by tossing a coin you can still have accountability issues because, by most definitions, a decision is a deliberate choice of the most desirable option. Carried far enough, a radical agnostic would be unable to commit to important relationships, because those require trust.
So now let’s turn the tables. We place trust in much of our third-hand stock of “knowledge” because the alternatives are too costly. But our trust-based relationships actually require that we forgo complete first-hand knowledge about those whom we trust. It is true that we usually trust only people whom we know very well. Nevertheless, the fact remains that people who trust one another don’t monitor one another all the time, nor do they hold each other constantly accountable for every move they make. A hallmark of trust is allowing the trusted party privacy, room to move, and discretion.
Imagine that you’re a tenured professor at a university with a mix of undergraduate teaching, supervision of several PhD students, a research program funded by a National Science Foundation grant, a few outside consultancies each year, and the usual administrative committee duties. You’ve worked at your university for a dozen years and have undergone summary biannual performance evaluations. You’ve never had an unsatisfactory rating in these evaluations, you have a solid research record, and in fact you were promoted three years ago.
One day, your department head calls you into her office and tells you that there’s been a change in policy. The university now demands greater accountability from its academic staff. From now on, she will be monitoring your activities on a weekly basis. You will have to keep a log of how you spend each week on the job, describing your weekly teaching, supervision, research, and administrative efforts. Every cent you spend from your grant will be examined by her, and she will inspect the contents of your lectures, exams, and classroom assignments. Your web-browsing and emailing at work also will be monitored. You will be required to clock in and out of work to document how much time you spend on campus.
What effect would this have on you? Among other things, you would probably think you were distrusted by your department head. “But why?” she might ask. After all, this is purely an information-gathering exercise. You aren’t being evaluated. Moreover, it’s nothing personal; all the other academics are going to be monitored in the same way.
Most of us would still think that we weren’t trusted anymore. Intensive surveillance of that kind doesn’t fit with norms for trust behavior. It doesn’t seem plausible that all that effort invested in monitoring your activities is “purely” for information gathering. Surely the information is going to be used for evaluation, or at the very least it could be used for such purposes in the future. Who besides your department head may have access to this information? And so on…
Chances are this would also stir up some powerful emotions. A 1996 paper by sociologists Fine and Holyfield observed that we don’t just think trust, we feel trust (p. 25). By the same token, feeling entrusted by others is crucial to our self-esteem. Treatment of the kind described above would feel belittling, insulting, perhaps even humiliating. At first glance, the contract-like assurance of this kind of performance accountability seems to be sound managerial practice. However, it has hidden costs: distrusted employees become disaffected, insulted employees who are less likely to be cooperative, responsible, or loyal, or to volunteer their effort. All of these qualities spring from feeling entrusted.