ignorance and uncertainty

All about unknowns and uncertainties

A Few (More) Myths about “Big Data”


Following on from Kate Crawford’s recent and excellent elaboration of six myths about “big data”, I should like to add four more that highlight important issues about such data that can misguide us if we ignore them or are ignorant of them.

Myth 7: Big data are precise.

As with analyses of almost any other kind of data, big data analyses largely consist of estimates. Often these estimates are based on sample data rather than population data, and the samples may not be representative of their referent populations (as Crawford points out; but see also Myth 8). Moreover, big data are even less likely than “ordinary” data to be free of recording errors or deliberate falsification.

Even when the samples are good and the sample data are accurately recorded, estimates still are merely estimates, and the most common mistake decision makers and other stakeholders make about estimates is treating them as if they were precise or exact. In a 1990 paper I referred to this as the fallacy of false precision. Estimates always are imprecise, and ignoring how imprecise they are is equivalent to ignoring how wrong they could be. Major polling companies gradually learned to report confidence intervals or error rates along with their estimates and to take these seriously, but most government departments apparently have yet to grasp this obvious truth.
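To make the polling point concrete, here is a minimal sketch (the poll numbers are hypothetical) of how an estimate and its imprecision can be reported together, using the standard normal-approximation interval for a proportion:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a polled proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the estimate
    return p - z * se, p + z * se

# A poll of 1,000 respondents with 520 saying "yes" is not simply "52%":
lo, hi = proportion_ci(520, 1000)
print(f"estimate 52%, but plausibly anywhere in [{lo:.1%}, {hi:.1%}]")
```

Reporting the interval rather than the bare point estimate is exactly the practice the polling companies adopted; the point estimate alone conveys false precision.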

Why might estimate error be a greater problem for big data than for “ordinary” data? There are at least two reasons. First, it is likely to be more difficult to verify the integrity or veracity of big data, simply because they are integrated from numerous sources. Second, if big datasets are constructed from multiple sources, each consisting of an estimate with its own imprecision, then these imprecisions may propagate. To give a brief illustration: if estimate X has variance x², estimate Y has variance y², X and Y are independent of one another, and our “big” dataset consists of adding X + Y to get Z, then the variance of Z will be x² + y².
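That additivity of variances is easy to verify numerically. The sketch below (with arbitrary means and standard deviations of 3 and 4) simulates two independent estimates and confirms that the variance of their sum is close to the sum of their variances:

```python
import random

random.seed(1)
N = 100_000

# Two independent estimates: standard deviations 3 and 4, so variances 9 and 16
X = [random.gauss(100, 3) for _ in range(N)]
Y = [random.gauss(200, 4) for _ in range(N)]
Z = [x + y for x, y in zip(X, Y)]  # the combined "big" quantity

def var(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / len(v)

# var(Z) should come out near 9 + 16 = 25
print(var(X), var(Y), var(Z))
```

With correlated sources the covariance term would enter as well, so in practice the propagated imprecision can be even larger than this independent-sources case suggests.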

Myth 8: Big data are accurate.

There are two senses in which big data may be inaccurate, over and above random variability (i.e., sampling error): biases and measurement confounds. Economic indicators of such things as unemployment rates, inflation, or GDP are biased in most countries, largely because of “shadow” (off the books) economic activity. There is little evidence that economic policy makers pay any attention to such distortions when using these indicators to inform policy.

Measurement confounds are a somewhat subtler issue, but the main idea is that data may not measure what we think they are measuring, because they are influenced by extraneous factors. Economic indicators are, again, good examples, but there are plenty of others (don’t get me started on the idiotic bibliometrics and other KPIs that are imposed on us academics in the name of “performance” assessment). Web analytics experts are just beginning to face up to this problem. For instance, webpage dwell times are not just influenced by how interested the visitor is in the content of a webpage, but may also reflect such things as how difficult the contents are to understand, the visitor’s attention span, or the fact that they left their browsing device to do something else and then returned much later. As in Myth 7, bias and measurement confounds may be compounded in big data to a greater extent than they are in small data, simply because big data often combine multiple measures.

Myth 9: Big data are stable.

Data often are not recorded just once, but re-recorded as better information becomes available or as errors are discovered. In a recent Wall Street Journal article, economist Samuel Rines presented several illustrations of how unstable economic indicator estimates are in the U.S. For example, he observed that in November 2012 the first official estimate of net employment increase was 146,000 new jobs. By the third revision that number had increased by 69% to 247,000. In another instance, he pointed out that annual American GDP estimates typically are revised several times, and often substantially, as the year slides into the past.

Again, there is little evidence that people crafting policy or making decisions based on these numbers take their inherent instability into account. One may protest that often decisions must be made before “final” revisions can be completed. However, where such revisions in the past have been recorded, the degree of instability in these indicators should not be difficult to estimate. These could be taken into account, at the very least, in worst- and best-case scenario generation.
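Where past first and final estimates are on record, instability is indeed not difficult to estimate. The sketch below uses made-up revision histories (only the 146 → 247 pair is the one Rines cites) to show one crude way to turn recorded revisions into a best-/worst-case bracket around a new first estimate:

```python
# Hypothetical (first, final) estimates of a monthly jobs number, in thousands.
# Only the (146, 247) pair comes from the Rines example; the rest are invented.
history = [
    (146, 247), (160, 150), (200, 230), (120, 140), (180, 165),
]
ratios = [final / first for first, final in history]

current_first_estimate = 170  # a hypothetical new first estimate
worst = current_first_estimate * min(ratios)
best = current_first_estimate * max(ratios)
print(f"first estimate 170k; plausible final range "
      f"[{worst:.0f}k, {best:.0f}k]")
```

Even this crude bracket would be an improvement over treating the first estimate as final; a more careful analysis would model the distribution of revision ratios rather than just their extremes.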

Myth 10: We have adequate computing power and techniques to analyse big data.

Analysing big data is a computationally intensive undertaking, and at least some worthwhile analytical goals are beyond our reach, in terms of computing power and even, in some cases, techniques. I’ll give just one example. Suppose we want to model the total dwell time per session of a typical user who is browsing the web. The number of items on which the user dwells is a random variable, and so is the dwell time for each item. The total dwell time, then, is what is called a “randomly stopped sum”. The probability distribution of a randomly stopped sum doesn’t have a closed-form expression (it’s an infinite sum), so it can’t be modelled via conventional statistical estimation techniques (least squares or maximum likelihood). Instead, there are two viable approaches: simulation and Bayesian hierarchical MCMC. I’m writing a paper on this topic, and from my own experience I can declare that either technique would require a super-computer for datasets of the kind dealt with, e.g., by NRS PADD.
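A toy version of the simulation approach looks like this. The distributional choices here are my own illustrative assumptions, not taken from the dwell-time literature: the number of items viewed is Poisson, and each item’s dwell time is lognormal. The point is that the distribution of the total is approximated by brute-force sampling rather than by a closed-form likelihood:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """One Poisson draw via Knuth's method (fine for small lambda)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def session_dwell(lam=8.0, mu=2.0, sigma=1.0):
    """Total dwell time for one session: a randomly stopped sum.
    Items viewed ~ Poisson(lam); each item's dwell time (seconds)
    ~ lognormal(mu, sigma). Both distributions are assumptions."""
    n = poisson(lam)
    return sum(random.lognormvariate(mu, sigma) for _ in range(n))

# Approximate the distribution of the randomly stopped sum by simulation
samples = [session_dwell() for _ in range(20_000)]
mean = sum(samples) / len(samples)
# Sanity check via Wald's identity: E[total] = lam * exp(mu + sigma^2/2) ~ 97.5
print(f"simulated mean total dwell time: {mean:.1f} s")
```

At this toy scale the simulation is instant; the computational burden the post refers to arises when such simulations sit inside an estimation loop (fitting lam, mu, sigma to millions of observed sessions), which multiplies the cost enormously.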


Written by michaelsmithson

July 26, 2013 at 6:57 am

Science for Good and Evil: Dual Use Dilemmas


For fascinating examples of attempts to control curiosity, look no further than debates and policies regarding scientific research and technological development. There are long-running debates about the extent to which scientists, engineers, and others in creative enterprises can and should be regulated, and if so, by whom. Popular images of scholars and scientists pursuing knowledge with horrific consequences for themselves and others range from the 16th-century legend of Faustus (reworked by numerous authors, e.g., Marlowe) to Bruce Banner.

The question of whether experiment X should be performed or technology Y developed is perennial. The most difficult versions of this question arise when pursuing the object of one’s curiosity violates others’ deeply held values or ethical principles, or induces great fear. These issues have a lengthy history. Present-day examples of scientific research and technological developments evoking this kind of conflict include stem cell research, human cloning, the Large Hadron Collider, and genetic modification of food crops.

Before it began operations, fears that the Large Hadron Collider (LHC) could generate a black hole that would swallow the Earth made international headlines, and debates over its safety have continued, including lawsuits intended to halt its operation. The nub of the problem of course is risk, and a peculiarly modern version of risk at that. The sociologist Ulrich Beck’s (1992) classic work crystallized a distinction between older and newer risks associated with experimentation and exploration. The older risks were localized and often restricted to the risk-takers themselves. The new risks, according to writers like Beck, are global and catastrophic. The concerns about the LHC fit Beck’s definition of the new risks.

When fears about proposed experiments or technological developments concern the potential misuse of otherwise beneficial research or technology, the resulting debates are known as “dual use dilemmas.” There’s an active network of researchers in this area. Recently I participated in a workshop on the topic at The Australian National University, from which a book should emerge next year.

Probably the most famous example is the controversy arising from the development of nuclear fission technology, which gave us the means for nuclear warfare on the one hand and numerous peacetime applications on the other. The fiercest debates these days on dual use dilemmas focus on biological experiments and nanotechnology. The Federation of American Scientists has provided a webpage of fascinating case studies in dual-use dilemmas involving biological research. The American National Research Council’s (NRC) 2004 report “Biotechnology Research in an Age of Terrorism” is an influential source. Until recently, much of the commentary came from scientists, security experts, or journalists. However, for a book-length treatment of this issue by ethicists, see Miller and Selgelid’s (2008) interesting work.

The NRC report listed “experiments of concern” as those including any of the following capabilities:

  1. demonstrating how to render a vaccine ineffective;
  2. enhancing resistance to therapeutically useful antibiotics or antiviral agents;
  3. enhancing the virulence of a pathogen or rendering a non-pathogen virulent;
  4. increasing the transmissibility of a pathogen;
  5. altering the host range of a pathogen;
  6. enabling the evasion of diagnosis and/or detection by established methods; and
  7. enabling the weaponization of a biological agent or toxin.

There are three kinds of concern underpinning dual-use dilemmas. The first arises from foreseeable misuses that could ensue from an experiment or new technology. Most obvious are experiments or developments intended to create weapons in the first place (e.g., German scientists responsible for gas warfare in World War I or American scientists responsible for atomic warfare at the end of World War II). But not as obvious are the opportunities to exploit nonmilitary research or technology. An example of potential misuse of a rather quotidian technology would be terrorists or organized crime networks exploiting illegal botox manufacturing facilities to distill botulinum toxin (see the recent Scientific American article on this).

Research results published in 2005 announced the complete genetic sequencing of the 1918 influenza A (H1N1) virus (a.k.a. the “Spanish flu”) and also its resurrection using reverse genetic techniques. This is the virus that killed between 20 and 100 million people in 1918–1919. Prior to publication of the reverse-engineering paper, the US National Science Advisory Board for Biosecurity (NSABB) was asked to consider the consequences. The NSABB decided that the scientific benefits flowing from publication of this information about the Spanish flu outweighed the risk of misuse. Understandably, publication of this information aroused concerns that malign agents could use it to reconstruct H1N1. The same issues have been raised concerning the publication of the H5N1 influenza (“bird flu”) genome.

The second type of concern is foreseeable catastrophic accidents that could arise from unintended consequences of research or technological developments. The possibility that current stockpiles of smallpox could be accidentally let loose is the kind of event to be concerned about here. Such an event also is, for some people, an argument against research enterprises such as the reengineering of H1N1.

The third type of concern is in some ways more worrying: Unforeseeable potential misuses or accidents. After all, a lot of research yields unanticipated findings and/or opportunities for new technological developments. A 2001 paper on mousepox virus research at The Australian National University is an example of this kind of serendipity. The researchers were on the track of a genetically engineered sterility treatment that would combat mouse plagues in Australia. But this research project also led to the creation of a highly virulent strain of mousepox. The strain the researchers created killed both mice with resistance to mousepox and mice vaccinated against mousepox.

Moreover, the principles by which this new strain was engineered were readily generalizable, and raised the possibility of creating a highly virulent strain of smallpox resistant to available vaccines. Indeed, in 2003 a team of scientists at St Louis University published research in which they had extended the mousepox results to cowpox, a virus that can infect humans. The fear, of course, is that these technological possibilities could be exploited by terrorists. Recent advances in synthetic genomics have magnified this problem. It is now possible not only to enhance the virulence of extant viruses, but also create new viruses from scratch.

The moral and political questions raised by cases like these are not easy to resolve, for at least three reasons. First, the pros and cons often are unknown to at least some extent. Second, even the known pros and cons usually weigh heavily on both sides of the argument. There are very good reasons for going ahead with the research and very good reasons for prohibiting it.

The third reason applies only to some cases, and it makes those cases the toughest of all. “Dual use dilemmas” is a slightly misleading phrase, in a technical sense that relates to this third reason. Many cases are really just tradeoffs, where at least in principle rational negotiators could weigh the costs and benefits according to their lights and arrive at an optimal decision. But some cases genuinely are social dilemmas, in the following sense of the term: choices dictated by rational, calculating self-interest nevertheless lead to the destruction of the common good and, ultimately, of everyone’s own interests.

Social dilemmas aren’t new. Garrett Hardin’s “tragedy of the commons” is a famous and much-debated example. A typical arms race is another obvious one. Researchers in countries A and B know that each country has the means to revive an extinct, virulent pathogen that could be exploited as a bioweapon. If the researchers in country A revive the pathogen and researchers in country B do not, country A temporarily enjoys a tactical advantage over country B. However, it also imposes on both A and B the risk of theft or accidental release of the pathogen. If country B responds by duplicating this feat, then B regains equal footing with A but the risk of accidental release or theft is multiplied. On the other hand, if A refrains from reviving the pathogen, then B could play A for a sucker by reviving it. It is in each country’s self-interest to revive the pathogen in order to avoid being trumped by the other, but the result is the creation of dread risks that neither country wants to bear. You may have heard of the “Prisoner’s Dilemma” or the “Chicken Game.” These are types of social dilemmas, and some dual use dilemmas are structurally similar to them.
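The arms-race story can be written down as a 2×2 game. In the sketch below the payoff numbers are hypothetical; what matters is their ordering, which gives the game the Prisoner’s Dilemma structure just described: reviving dominates for each country individually, yet mutual revival leaves both worse off than mutual restraint.

```python
# Hypothetical payoffs (higher is better) for the revive-the-pathogen game.
# Keys are (A's choice, B's choice); values are (A's payoff, B's payoff).
payoffs = {
    ("restrain", "restrain"): (3, 3),   # no dread risk for anyone
    ("restrain", "revive"):   (0, 4),   # A played for a sucker
    ("revive",   "restrain"): (4, 0),   # A's temporary tactical advantage
    ("revive",   "revive"):   (1, 1),   # shared, multiplied dread risk
}

def best_reply(player, other_choice):
    """The choice that maximizes a player's payoff, given the other's choice."""
    acts = ("restrain", "revive")
    if player == "A":
        return max(acts, key=lambda a: payoffs[(a, other_choice)][0])
    return max(acts, key=lambda b: payoffs[(other_choice, b)][1])

# Whatever the other country does, "revive" is the dominant choice...
for other in ("restrain", "revive"):
    print("A's best reply to", other, "->", best_reply("A", other))
# ...yet mutual revival (1, 1) is worse for both than mutual restraint (3, 3).
```

This is exactly why appeals to self-interest cannot resolve such cases: the dominant strategies jointly produce the outcome neither party wants.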

Social dilemmas present a very difficult problem in the regulation of curiosity (and other kinds of choice behavior) because solutions cannot be worked out solely on the basis of appeals to rational self-interest. Once you know what to look for, you can find social dilemmas in all sorts of places. The research literature on how to deal with and resolve them includes contributions from economics, psychology, political science, anthropology, sociology and applied mathematics (I co-edited a book on social dilemmas back in 1999). This research has had applications ranging from environmental policy formation to marriage guidance counseling. But it shouldn’t surprise you too much to learn that most of the early work on social dilemmas stemmed from and was supported by the American military.

So to conclude, let’s try putting the proverbial shoe on the other foot. The dual-use dilemmas literature focuses almost exclusively on scientific research and technological developments that could be weaponized. But what about the reverse process: military research and development with nonmilitary benefits? Or, for that matter, R&D from illicit or immoral sources that yields legitimate spinoffs and applications? These prospects appear to have been relatively neglected.

Nevertheless, it isn’t hard to find examples: the internet, for one. The net originated with the American Defense Advanced Research Projects Agency (DARPA), was rapidly taken up by university-based researchers via their defense-funded research grants, and had morphed by the late 1980s into the NSFnet. Once the net escaped its military confines, certain less than licit industries spearheaded its development. As Peter Nowak portrays it in his entertaining and informative (if sometimes overstated) book Sex, Bombs and Burgers, the pornography industry was responsible for internet-defining innovations such as live video streaming, video-conferencing, and key aspects of internet security provision. Before long, mainstream businesses were quietly adopting ways of using the net pioneered by the porn industry.

Of course, I’m not proposing that the National Science Foundation should start funding R&D in the porn industry. My point is just that this “other” dual use dilemma merits greater attention and study than it has received so far.

Written by michaelsmithson

November 3, 2010 at 10:49 am