Science for Good and Evil: Dual Use Dilemmas
For fascinating examples of attempts to control curiosity, look no further than debates and policies regarding scientific research and technological development. There are long-running debates about the extent to which scientists, engineers, and other creative professionals can and should be regulated, and if so by whom. Popular images of scholars and scientists pursuing knowledge with horrific consequences for themselves and others range from the 16th-century legend of Faustus (reworked by numerous authors, e.g., Marlowe) to Bruce Banner.
The question of whether experiment X should be performed or technology Y developed is perennial. The most difficult versions of this question arise when pursuing the object of one’s curiosity violates others’ deeply held values or ethical principles, or induces great fear. These issues have a lengthy history. Present-day examples of scientific research and technological developments evoking this kind of conflict include stem cell research, human cloning, the Large Hadron Collider, and genetic modification of food crops.
Before it began operations, fears that the Large Hadron Collider (LHC) could generate a black hole that would swallow the Earth made international headlines, and debates over its safety have continued, including lawsuits intended to halt its operation. The nub of the problem, of course, is risk, and a peculiarly modern version of risk at that. The sociologist Ulrich Beck's (1992) classic work crystallized a distinction between older and newer risks associated with experimentation and exploration. The older risks were localized and often restricted to the risk-takers themselves. The new risks, according to writers like Beck, are global and catastrophic. The concerns about the LHC fit Beck's definition of the new risks.
When fears about proposed experiments or technological developments concern the potential misuse of otherwise beneficial research or technology, the resulting debates are known as "dual use dilemmas." There's an active network of researchers working on this topic. I recently participated in a workshop on it at The Australian National University, from which a book should emerge next year.
Probably the most famous example is the controversy arising from the development of nuclear fission technology, which gave us the means for nuclear warfare on the one hand and numerous peacetime applications on the other. The fiercest debates these days on dual use dilemmas focus on biological experiments and nanotechnology. The Federation of American Scientists maintains a webpage of fascinating case studies in dual-use dilemmas involving biological research. The American National Research Council (NRC) 2004 report "Biotechnology Research in an Age of Terrorism" is an influential source. Until recently, much of the commentary came from scientists, security experts, or journalists. However, for a book-length treatment of this issue by ethicists, see Miller and Selgelid's (2008) interesting work.
The NRC report defined "experiments of concern" as those involving any of the following capabilities:
- demonstrating how to render a vaccine ineffective;
- enhancing resistance to therapeutically useful antibiotics or antiviral agents;
- enhancing the virulence of a pathogen or rendering a non-pathogen virulent;
- increasing the transmissibility of a pathogen;
- altering the host range of a pathogen;
- enabling the evasion of diagnosis and/or detection by established methods; and
- enabling the weaponization of a biological agent or toxin.
There are three kinds of concern underpinning dual-use dilemmas. The first arises from foreseeable misuses that could ensue from an experiment or new technology. Most obvious are experiments or developments intended to create weapons in the first place (e.g., the German scientists responsible for gas warfare in World War I or the American scientists responsible for atomic warfare at the end of World War II). Less obvious are the opportunities to exploit nonmilitary research or technology. An example involving a rather quotidian technology would be terrorists or organized crime networks exploiting illegal botox manufacturing facilities as a source of botulinum toxin (see the recent Scientific American article on this).
Research results published in 2005 announced the complete genetic sequencing of the 1918 influenza A (H1N1) virus (a.k.a. the "Spanish flu") and its resurrection using reverse genetic techniques. This is the virus that killed between 20 and 100 million people in 1918–1919. Prior to publication of the reverse-engineering paper, the US National Science Advisory Board for Biosecurity (NSABB) was asked to consider the consequences. The NSABB decided that the scientific benefits flowing from publication of this information about the Spanish flu outweighed the risk of misuse. Understandably, publication aroused concerns that malign agents could use the information to reconstruct H1N1. The same issues have been raised concerning the publication of the H5N1 influenza ("bird flu") genome.
The second type of concern is foreseeable catastrophic accidents that could arise from unintended consequences of research or technological developments. The possibility that current stockpiles of smallpox could be accidentally let loose is the kind of event to be concerned about here. Such an event also is, for some people, an argument against research enterprises such as the reengineering of H1N1.
The third type of concern is in some ways more worrying: unforeseeable potential misuses or accidents. After all, a lot of research yields unanticipated findings and/or opportunities for new technological developments. A 2001 paper on mousepox virus research at The Australian National University is an example of this kind of unintended discovery. The researchers were on the trail of a genetically engineered sterility treatment that would combat mouse plagues in Australia. But the project also led to the creation of a highly virulent strain of mousepox, one that killed both mice with natural resistance to mousepox and mice vaccinated against it.
Moreover, the principles by which this new strain was engineered were readily generalizable, raising the possibility of creating a highly virulent strain of smallpox resistant to available vaccines. Indeed, in 2003 a team of scientists at St Louis University published research extending the mousepox results to cowpox, a virus that can infect humans. The fear, of course, is that these technological possibilities could be exploited by terrorists. Recent advances in synthetic genomics have magnified the problem: it is now possible not only to enhance the virulence of extant viruses but also to create new viruses from scratch.
The moral and political questions raised by cases like these are not easy to resolve, for at least three reasons. First, the pros and cons often are unknown to at least some extent. Second, even the known pros and cons usually weigh heavily on both sides of the argument. There are very good reasons for going ahead with the research and very good reasons for prohibiting it.
The third reason applies only to some cases, and it makes those cases the toughest of all. "Dual use dilemmas" is, in a technical sense related to this third reason, a slightly misleading phrase. Many cases are really just tradeoffs, where at least in principle rational negotiators could weigh the costs and benefits according to their lights and arrive at an optimal decision. But some cases genuinely are social dilemmas, in the following sense of the term: choices dictated by rational, calculating self-interest nevertheless lead to the destruction of the common good and, ultimately, of everyone's own interests.
Social dilemmas aren't new. Garrett Hardin's "tragedy of the commons" is a famous and much-debated example. A typical arms race is another obvious one. Researchers in countries A and B know that each country has the means to revive an extinct, virulent pathogen that could be exploited as a bioweapon. If the researchers in country A revive the pathogen and researchers in country B do not, country A temporarily enjoys a tactical advantage over country B. However, it also imposes on both A and B a risk of theft or accidental release of the pathogen. If country B responds by duplicating the feat, then B regains equal footing with A but has now multiplied the risk of accidental release or theft. Conversely, if A refrains from reviving the pathogen, then B could play A for a sucker by reviving it. It is in each country's self-interest to revive the pathogen in order to avoid being trumped by the other, but the result is the creation of dread risks that neither country wants to bear. You may have heard of the "Prisoner's Dilemma" or the game of "Chicken." These are types of social dilemmas, and some dual use dilemmas are structurally similar to them.
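The arms-race logic above can be made concrete as a small two-player game. Here is a minimal sketch in Python; the payoff numbers are hypothetical choices of my own (only their ordering matters), and the best-response check is the standard way of finding the game's stable outcome:

```python
from itertools import product

# Illustrative payoffs (higher is better) for the two-country game described
# above. Each country chooses to "restrain" or "revive" the pathogen.
# The numbers are hypothetical; only their ordering matters:
# sole reviver (3) > mutual restraint (2) > mutual revival (1)
# > restraining while the other revives, i.e., being the "sucker" (0).
PAYOFFS = {
    ("restrain", "restrain"): (2, 2),  # no dread risk, no advantage
    ("revive",   "restrain"): (3, 0),  # A gains a tactical edge over B
    ("restrain", "revive"):   (0, 3),
    ("revive",   "revive"):   (1, 1),  # equal footing, but multiplied risk
}
CHOICES = ["restrain", "revive"]

def best_response(opponent_choice, player):
    """Return the self-interested best reply to the opponent's fixed choice."""
    if player == 0:
        return max(CHOICES, key=lambda c: PAYOFFS[(c, opponent_choice)][0])
    return max(CHOICES, key=lambda c: PAYOFFS[(opponent_choice, c)][1])

def stable_outcomes():
    """An outcome is stable when each choice is a best reply to the other
    (a Nash equilibrium): neither country can gain by changing unilaterally."""
    return [(a, b) for a, b in product(CHOICES, repeat=2)
            if best_response(b, 0) == a and best_response(a, 1) == b]

print(stable_outcomes())  # [('revive', 'revive')]
# Yet mutual restraint pays (2, 2), better for both than (1, 1):
# rational self-interest drives both countries to the outcome neither wants.
```

Whatever specific numbers you plug in, as long as they preserve that ordering of outcomes, the only stable result is mutual revival, which is exactly what makes this a dilemma rather than a mere tradeoff.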
Social dilemmas present a very difficult problem in the regulation of curiosity (and other kinds of choice behavior) because solutions cannot be worked out solely on the basis of appeals to rational self-interest. Once you know what to look for, you can find social dilemmas in all sorts of places. The research literature on how to deal with and resolve them includes contributions from economics, psychology, political science, anthropology, sociology and applied mathematics (I co-edited a book on social dilemmas back in 1999). This research has had applications ranging from environmental policy formation to marriage guidance counseling. But it shouldn’t surprise you too much to learn that most of the early work on social dilemmas stemmed from and was supported by the American military.
So, to conclude, let's try putting the proverbial shoe on the other foot. The dual-use dilemmas literature focuses almost exclusively on scientific research and technological developments that could be weaponized. But what about the reverse process: military research and development with nonmilitary benefits? Or, for that matter, R&D from illicit or immoral sources that yields legitimate spinoffs and applications? These prospects appear to have been relatively neglected.
Nevertheless, it isn't hard to find examples: the internet, for one. The net originated with the American Defense Advanced Research Projects Agency (DARPA), was rapidly taken up by university-based researchers via their defense-funded research grants, and morphed by the late 1980s into the NSFnet. Once the net escaped its military confines, certain less-than-licit industries spearheaded its development. As Peter Nowak portrays it in his entertaining and informative (if sometimes overstated) book Sex, Bombs and Burgers, the pornography industry was responsible for internet-defining innovations such as live video streaming, video-conferencing, and key aspects of internet security provision. Before long, mainstream businesses were quietly adopting ways of using the net pioneered by the porn industry.
Of course, I’m not proposing that the National Science Foundation should start funding R&D in the porn industry. My point is just that this “other” dual use dilemma merits greater attention and study than it has received so far.