Regulating Nanotechnology Via Analogy, Pt. 2

[Blogger’s note: This post – a continuation of the previous one – is adapted from a talk I gave in March 2012 at the annual Business History Conference. Like the last post, it draws on research done by Roger Eardley-Pryor, an almost-finished graduate student I’m advising at UCSB, and me. I’m posting it here with his permission. Some of the images come from slides we put together for the talk.]


Can historical analogies help us approach technological risks better?

In my last post, I discussed the ways in which policy makers could use historical analogies as tools when considering how nanotechnologies might be regulated. I ended by suggesting that the multiple definitions of nanotechnology pose a challenge for finding the one best analogy. So what analogies have been drawn between nanotech and other technologies, and what do they have to say about possible paths to regulation? Consider the following examples:

Example #1 – Genetically Modified Organisms


Engineered nanomaterials bear some relation to GMOs…but it’s not necessarily a strong one.

In April 2003, Prof. Vicki Colvin testified before Congress. A chemist at Rice University, Colvin also directed that school’s Center for Biological and Environmental Nanotechnology. This “emerging technology,” Colvin said, had a considerable “wow index.”1 However, Colvin warned, every promising new technology came with concerns that could drive it from “wow into yuck and ultimately into bankrupt.” To make her point, Colvin compared nanotech to researchers’ and industry’s recent experiences with genetically modified organisms. Colvin’s analogy – “wow to yuck” – made an effective sound bite. But it also conflated the very different histories of two specific emerging technologies.

While some lessons from GMOs are appropriate for controlling the development of nanotechnology, the analogy isn’t watertight. Unlike GMOs, nanotechnology does not always involve biological materials. And genetic engineering in general never enjoyed any sort of unalloyed “wow” period. There was “yuck” from the outset; criticism accompanied GMOs from the very start. Furthermore, giant agribusiness firms prospered handsomely even after the public’s widespread negative reactions to their products. Lastly, living organisms – especially those associated with food – designed for broad release into the environment were almost guaranteed to generate concerns and protests.2 Rhetorically, the GMO analogy was powerful…but a deeper analysis suggests there were more differences than similarities.

Example #2 – Asbestos


Are engineered nanoparticles analogous to asbestos fibers?

A different definition of nanotech treats it as a new form of matter…stuff requiring special oversight, particularly in the workplace. Such a material definition of nanotechnology suggests a ready analogy to asbestos. Given decades of enormous and expensive asbestos litigation, the analogies between asbestos and nanotechnology have prompted substantial toxicological analysis of new materials.3 Carbon nanotubes (CNTs) are the best known of these new nano-materials. Because their long, thin structure resembles that of asbestos fibers, numerous toxicological studies indicate that nanotubes share a similar toxicity. These similarities, along with the historical circumstances of attempts to regulate asbestos in the United States, offer suggestions for how to proceed toward the regulation of certain nanotechnologies.

Given the known threats of asbestos, the U.S. EPA attempted an all-out ban on its manufacture and use. However, in 1991, the U.S. Court of Appeals for the Fifth Circuit found that EPA had not met the requirement to impose the “least burdensome” controls, and it lifted the ban for all but the most dangerous existing asbestos products. The inability of EPA to ban asbestos, despite decades of evidence confirming its hazards, indicates the need for serious reform of the Toxic Substances Control Act (TSCA), the existing U.S. law for chemical regulation.4 While this need for reform applies to existing substances like asbestos, it applies even more to novel and analogous nanotechnologies like CNTs.

Example #3 – Fallout


Per capita thyroid doses in the continental United States resulting from atmospheric nuclear tests at the Nevada Test Site, 1951-1962.

With their planetary fears about grey goo and self-replicating nanobots, figures like Michael Crichton, Bill Joy, Prince Charles, and, at times, even K. Eric Drexler seemed to define nanotechnology as something so broad, diverse, and nebulous that they rendered it a questionable, minute, and invisible unknown. This line of thinking suggested nanotechnology might be analogous to another invisible, existential, and life-threatening technological byproduct – radioactive fallout.

Each of the hundreds of open-air nuclear devices exploded between 1945 and 1980 released minute, invisible, radioactive debris that circulated around the planet’s stratosphere before falling back to earth, exposing humans and the environment to its harmful radioactivity.5 The global spread of these materials throughout ecosystems and into human bodies occurred without full public or private consideration of their risks by policy-makers, by scientists, or by unknowingly exposed publics. In WWII and during the Cold War, the dictates of national security instigated the development and open-air testing of nuclear weapons.  However, by the end of the Cold War, national security came to be defined increasingly in terms of economic security.  Along those lines, American scientists and policy-makers in the late 1990s and early 2000s framed the need for the federal development of nanotechnology in the rhetoric of economic national security.

The nanotechnology enterprise has also yielded novel engineered particles that exist only at invisible scales; new particles that have found wide commercial distribution around the world before full public or private consideration of their potential risks to human health, or full consideration of their threats to our environmental security. In 2003, Oregon Congressman David Wu hinted at the analogy between nanotechnology and nuclear fallout by citing a historic example of regulating fallout’s novel and invisible threat: the Partial Nuclear Test Ban Treaty.6 Though Representative Wu celebrated the Test Ban Treaty for its international cooperation and control of hazardous fallout, he noted that “In many respects, the Nuclear Test Ban Treaty is nothing but a ban on experimentation.”7 At the time, organizations like ETC, Greenpeace, and Friends of the Earth-Australia had also called for a ban on nanotechnology production until researchers clearly understood all of nanotechnology’s environmental, health, and safety (EHS) risks. As with the other examples, one’s definition of nanotechnology – here as an invisible, existential, and global threat – determined the appropriate analogy to prior technologies. That definition, in turn, indicated to various nano-stakeholders particular forms of precaution, regulation, and control. If nanotechnology were analogous to fallout, maybe the analogous regulation would be an outright ban that would forestall all future risks?

Example #4 – Recombinant DNA


A fourth definition of nanotechnology moves us beyond consideration of novel forms of matter and instead identifies nanotechnology as a suite of technological practices for manipulating nature – techniques that render the natural world unnatural. This identification of nanotechnology with particular lab practices yields an analogy to the debates over recombinant DNA (rDNA) techniques in the 1970s.

In the mid-1970s, scientists agreed to a moratorium on rDNA practices until they better understood the technology and until the U.S. National Institutes of Health (NIH) could establish proper guidelines. After the famous 1975 Asilomar Conference, the NIH’s Recombinant DNA Advisory Committee produced its research guidelines. These guidelines clearly defined specific biological techniques and instituted multiple layers of control, including requirements for biological containment. This ensemble of lab practices helped stimulate the rapid commercialization of modern biotech research and, one could argue, consumer acceptance.

Nanotechnology stakeholders have identified a similar goal of early anticipation and mutually agreeable control through their framework of anticipatory governance. For some nanotech stakeholders – particularly entrepreneurs affiliated with commercialized industry – the NIH’s decision to institute guidelines for rDNA technology, rather than push for legally binding regulations, offers a possible path for the eventual oversight of nanotechnology. Government guidelines consist of procedures that people are expected to follow when receiving federal dollars, whereas regulations are substantive rules for all actors that carry the authority of law. However, drawing lessons from rDNA and applying them to nano comes with drawbacks – for example, guidelines similar to those from the NIH might apply only to federally funded research. This would leave privately funded research in a different regulatory regime, subject not to guidelines but only to the hard law of regulation.

Some concluding thoughts…

“Nanotechnology” is a socially constructed collection of techno-scientific ideas, practices, and materials. But with such a broad and sometimes vague set of definitions for nanotechnology used by scientists, policy-makers, activists, and businesses, how can nano-stakeholders know what to regulate?

Some scholars, including Andrew Maynard, a leading expert on risk science, suggest that regulators’ wish for strict definitions is misplaced. Maynard believes that a precise definition for nanotechnology would actually impede proper regulation.8 Instead of a categorical definition, he now argues that regulation must focus on various “trigger points,” or empirical points at which a material transitions from conventional to risky. Here, one could imagine officials looking to historical examples to find other such ‘tipping points’ that catalyzed regulatory reform.

But policy makers have moved in the opposite direction. In late 2011, Health Canada and the European Commission announced specific, politically designed definitions for nanomaterials to be used explicitly “for all regulatory purposes.” Similarly, the United States’ most recent environmental, health, and safety research strategy emphasized the need for federal agencies to establish agreed-upon definitions for nanomaterials. But even as regulators moved toward a “one size fits all” model, analogies with other materials, techniques, and industries still prove useful. The U.S. EPA, for instance, has considered whether certain materials should be regulated under rules that apply to insecticides. So perhaps we can look forward to the drawing of new analogies, not to GMOs and asbestos and fallout but to DDT…


Advert for DDT, c. 1947

So — if historical analogies can teach us anything about the potential regulation of nano and other emerging technologies, it is that we need to take a little risk in forming socially and politically constructed definitions of nano. These definitions should be based not just on science; they should also mirror the complex and messy realm of research, policy, and application. No single analogy fits all cases, but an ensemble of several (properly chosen, of course) can suggest possible regulatory options.

  1. House of Representatives Committee on Science, “The Societal Implications of Nanotechnology,” Hearing before the Committee on Science, House of Representatives (108th Congress, 1st Session), April 9, 2003, p. 49. A quick web search on “Vicki Colvin + wow to yuck” yields some 360,000 hits including several presentations and papers she and her Rice colleagues gave that use the phrase. []
  2. Ronald Sandler, “The GMO-Nanotech (Dis)Analogy?,” Bulletin of Science, Technology, and Society 26:1 (2006): 57-62; Arie Rip, “Folk Theories of Nanotechnologists,” Science as Culture, 2006, 15, 4: 349-65. []
  3. A good article on this is Geoffrey Tweedale and Jock McCulloch, “Chrysophiles versus Chrysophobes: The White Asbestos Controversy, 1950s–2004,” Isis, Vol. 95, No. 2 (June 2004), pp. 239-259. []
  4. Marc Landry, “EPA and Nanotechnology: The Need for a Grand Bargain?,” in Governing Uncertainty: Environmental Regulation in the Age of Nanotechnology, edited by Christopher Bosso (Washington, DC: RFF Press, 2010), p. 87. []
  5. Harold L. Beck and Burton G. Bennett, “Historical Overview of Atmospheric Nuclear Weapons Testing and Estimates of Fallout in the Continental United States,” Health Physics 82:5 (May 2002): 591-608. []
  6. This treaty, signed in 1963 by the United States, the Soviet Union, and Great Britain after years of international negotiation, banned nuclear test explosions in the air, above the atmosphere, or at sea. []
  7. House of Representatives Committee on Science, “The Societal Implications of Nanotechnology,” Hearing before the Committee on Science, House of Representatives (108th Congress, 1st Session), April 9, 2003: Wu, p. 91. []
  8. Andrew Maynard, “Don’t Define Nanomaterials,” Nature 475 (7 July 2011): 31. []

Regulating Nanotechnology Via Analogy, Pt. 1

[Blogger’s note: This post is adapted from a talk I gave in March 2012 at the annual Business History Conference; it draws on research done by Roger Eardley-Pryor, an almost-finished graduate student I’m advising at UCSB, and me. I’m posting it here with his permission. This is the first of a two-part essay…some of the images come from slides we put together for the talk.]


Take a Little Risk?

Over the last decade, a range of actors – scientists, policy makers, and activists – have used historical analogies to suggest different ways that risks associated with nanotechnology – especially those concerned with potential environmental implications – might be minimized. Some of these analogies make sense…others, while perhaps effective, are based on a less than ideal reading of history.

Analogies have been used before as tools to evaluate new technologies. In 1965, NASA requested comparisons between the American railroad of the 19th century and the space program. In response, MIT historian Bruce Mazlish wrote a classic article that analyzed the utility and limitations of historical analogies.1 Analogies, he explained, function as both model and myth. Mythically, they offer meaning and emotional security through an original archetype of familiar knowledge. Analogies also furnish models for understanding by construing either a structural or a functional relationship. As such, analogies function as devices of anticipation in what today is fashionably called “anticipatory governance.”2 They also can serve as a useful tool for risk experts.3


Analogies may serve as a tool for crafting better governance of new technologies

Policy makers recognize the importance of analogies. In 2003, participants at an NSF-sponsored workshop on nanotechnology’s societal implications warned that “one of the more disturbing possibilities is that policy makers and leaders of social movements may respond to nanotechnology not as it actually is, but in terms of false analogies.”4 At the time, policy makers and scientists were especially concerned about public perceptions of nano.

When the U.S. government first launched its National Nanotechnology Initiative in 2000, few if any policy makers expressed concerns about its environmental implications.5 But by 2003, it had become impossible for the people charged with managing nano to ignore its environmental, health, and safety (EHS) issues. So how did EHS issues get on the radar screen of policy makers and journalists? There are several causal factors; their common feature is that they all originated not in the laboratory but in the realms of popular culture, celebrity, and social activism.

An early shot across the bows came from an unexpected source. Bill Joy was a Berkeley-trained computer researcher and dot-com millionaire. His incendiary article – published by Wired in April 2000 – was titled “Why the Future Doesn’t Need Us.” It highlighted perils he saw in several emerging technologies. Motivated partly by controversies over corporate development of genetically-modified crops, Joy identified self-replication of newly emerging nanotechnologies as a clear and future danger. The solution? Joy proposed “relinquishment” and limiting development of “technologies that are too dangerous.” Accompanied by a flurry of international publicity, Joy’s article came at an inconvenient time for nano-boosters as Congress was preparing to vote on Clinton’s proposed new national nano initiative in 2000. Controversy stirred by articles like Joy’s threatened this initiative.

Nano-anxieties were fanned anew in late 2002 when HarperCollins published Prey by blockbuster novelist Michael Crichton. Central to its plot was the deliberate release of autonomous, self-replicating nanobots. Created by an amoral corporation working under contract to the Pentagon, the predatory swarm of millions of nanobots attacked people until it was destroyed. Crichton’s book hit every button that might stoke public alarm about nanotechnology: a greedy, high-tech firm; lack of government regulation; new technologies turned into military applications.

Non-governmental organizations helped keep controversies over nanotechnology in front of North American and European citizens. In January 2003, the Action Group on Erosion, Technology, and Concentration (ETC), a small Canadian NGO, released a report called The Big Down.


ETC had previously led campaigns against genetically modified foods. Not surprisingly, their report savaged the idea of nanotechnology. ETC’s report reflected the group’s larger agenda, which was less about so-called emerging technologies per se and more about restricting corporate power and maintaining cultural diversity and human rights.

But none of these examples concerned a specific existing technology. Instead, these spurs to regulation referred to hypothetical technologies and the creation of planet-threatening dangers. Soon, however, concerns about nano’s regulation transcended vague existential threats and moved to specific and potentially troubling techniques and materials.

But what exactly was to be regulated? Was nanotechnology something with the capacity to spread across wide swaths of land and wreak tremendous environmental damage, a fear amplified in part by its minute size? Or was nanotechnology less an existential threat than a suite of scientific techniques and tools that required regulation? If not a particular technique, was nanotechnology a particular product, a specific category of material, a hazardous form of matter that should be controlled for the health and safety of workers and consumers? Or did nanotechnology represent an entirely new industry in need of care and control in order to reap its economic benefits?


How you define something helps determine how you regulate it…

So…in order to draw fitting analogies that might suggest an ideal path toward the appropriate oversight or regulation of nanotechnology, stakeholders first had to agree on its definition. And depending on what definition one chose, a different historical analogy could be found which suggested a different approach to regulation…more on this next time.  But, to give a hint, the frequent comparisons between nano and genetically-modified organisms were not necessarily the best way to build a regulatory policy.6

To be continued…

  1. Bruce Mazlish, “Historical Analogy: The Railroad and the Space Program and Their Impact on Society,” in The Railroad and the Space Program: An Exploration in Historical Analogy, edited by Bruce Mazlish (Cambridge, MA: M.I.T. Press, 1965), pp. 1-52. []
  2. Daniel Barben, Erik Fisher, Cynthia Selin, and David H. Guston, “Anticipatory Governance of Nanotechnology,” in The Handbook of Science and Technology Studies, Third Edition, edited by Edward J. Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman (Cambridge, MA: MIT Press, 2008), 979-1000. []
  3. Scott Knowles at Drexel University has a great book out on the role of “disaster experts” in modern America that is worth looking at. []
  4. “Workshop Breakout Session Reports,” in Nanotechnology: Societal Implications I, Maximizing Benefits for Humanity, edited by Mihail C. Roco and William Sims Bainbridge (Dordrecht, The Netherlands: Springer, 2007), 73. []
  5. When the National Academies of Science reviewed the NNI in 2002, its report shows that, out of $464 million allocated for nano in FY2001, less than 1% went to the Environmental Protection Agency. NAS, Small Wonders, Endless Frontiers report, 2002. []
  6. A rhetorically powerful example of this came from 2003 Congressional testimony given by Rice University chemist Vicki Colvin. Colvin, director of Rice’s Center for Biological and Environmental Nanotechnology, spoke about societal implications of nanotechnology. This “emerging technology,” Colvin said, had a considerable “wow index.” Nanotech offered “potential benefits to the economy, human health, and quality of life.” However, Colvin warned, every new such emerging technology came with its own particular set of concerns. If improperly handled, these concerns “can turn wow into yuck and ultimately into bankrupt.” To drive her point home, Colvin shrewdly drew an analogy between a future in which nano might go bankrupt and an example that would resonate with policy makers – the “genetically modified foods industry.” A quick web search on “Vicki Colvin + wow to yuck” yields some 360,000 hits including several presentations and papers she and her Rice colleagues gave that use the phrase. Her original testimony appears in House of Representatives Committee on Science, “The Societal Implications of Nanotechnology,” Hearing before the Committee on Science, House of Representatives (108th Congress, 1st Session), April 9, 2003. []

Let Us Prey


Crichton’s book hit bookshelves November 2002

Like so many academics, I sometimes have a hard time telling non-specialists what I do. My parents are classic examples of such folks. Usually, they seem content to tell their friends “Our son’s a history teacher” or “He’s a writer.” I’m OK with either.

But back in 2006, when I was helping start a center that looks at nanotechnology’s societal implications, my dad surprised me with “Nanotechnology? Oh that…” Given that polls and surveys showed that a relatively small percentage of people had heard of nanotech, let alone knew what it was, this was startling.1 And then I put it together – when my folks visited me in 2004, my dad, looking at my bookshelf for some pot-boiling fiction, must have instead grabbed my copy of Crichton’s novel Prey.

Crichton, of course, was one of the most successful fiction authors of the late 20th century. Before his death in 2008 (at age 66), he wrote such classics as The Andromeda Strain and Timeline and, especially, Jurassic Park. The latter went on to become a Hollywood franchise – the first film in the series grossed almost a billion dollars – the basis for several amusement park rides, and the inspiration for a “Weird Al” Yankovic parody. (When “Weird Al” targets you, you know you’ve made it).

Prey appeared just before Thanksgiving in 2002. With the 10th anniversary of its release just having passed, I thought I’d reflect on its place in nano-history. This seems especially important because, if my father is an accurate gauge, Prey was how a good many Americans first learned anything about nanotech.

When I asked my dad what he thought nanotech was, his answer was basically: “It’s those tiny machines that scientists are trying to build. We have to be careful because they might take over.” This is a pretty good summary of more than one of Crichton’s novels. The Andromeda Strain and Jurassic Park, for example, both deal with rogue organisms of some sort (dinosaurs, alien microbes) generated or released by unscrupulous scientists or government agents. Seen in a positive light, these novels suggest the need to take a cautionary approach to novel technologies.

Prey takes a similar tack – an unscrupulous company called Xymos, operating out in the desert with secret military funding. Scientists at Xymos, including the protagonist’s wife, have developed the ability to make semi-sentient and autonomous nanobots. Coalescing into a predatory swarm, the nanobots attack and contaminate people until they are destroyed.

The ideas that animate Prey can be traced to the visioneering of nanotech pioneer K. Eric Drexler. Starting in 1981 with a peer-reviewed article in the Proceedings of the National Academy of Sciences, Drexler vocally advocated a biologically-infused vision of what he initially called “molecular engineering.” Researchers’ future ability to design protein molecules could lead to the manufacturing of molecular-scale devices which, in turn, could make “second-generation machines” and enable the eventual “construction of devices and materials to complex atomic specifications.” Drexler insisted that what he called “exploratory engineering” was similar to John von Neumann’s work on the theoretical capabilities of computers c. 1950 or Konstantin Tsiolkovsky’s design of rocket motors c. 1920.

With the publication of his 1986 book Engines of Creation, he and his supporters started regularly using the n-word (i.e., nanotechnology). In Engines, Drexler famously suggested that out-of-control and self-replicating machines might pose a serious hazard – i.e., “grey goo” – that could, if not controlled, threaten the planet. Long on enthusiastic ideas but short on specific scientific details, Drexler’s books and articles offered an optimistic view of a technological future in which engineers had precise control over the material world.


“When the world ends, will you be covered in grey goo?”

By the early 1990s, technology-oriented magazines like Mondo 2000 and Wired that catered strongly to the techno-cognoscenti provided positive coverage of the Drexlerian nano-future, as did mainstream venues like Time, Fortune, and The Economist. However, the publicity and popularization of his ideas, compounded by the fact that Drexler wasn’t doing traditional lab research, made him a controversial figure. “The apostle of nanotechnology” had already become a lightning rod for praise and scorn from fellow scientists.

Prey was, in essence, a fictionalized mélange of Drexler’s ideas. Crichton even provided his readers with an epigraph from Drexler, a short introduction to “artificial evolution in the 21st century” (it cited Drexler) and a multi-page bibliography that listed Drexler (twice). The book appeared on the Monday before Thanksgiving in 2002.

HarperCollins timed its carefully choreographed release to coincide with the holiday weekend. Crichton appeared on network talk shows, gave a seven-city book tour, and wrote about nanotechnology for the Sunday supplement Parade, which millions read. Rumors circulated, after it became a #1 bestseller, that Hollywood would turn Prey into a major motion picture which, if Jurassic Park (Crichton’s earlier tale of escaped, marauding techno-creatures) gave any indication, tens of millions of people would see. Reviews of the book were positive – one reviewer for the Times said it might be Crichton’s “most ambitious techno-thriller yet,” one that, despite some absurd plot twists, brought renewed attention to the Drexlerian conceit of “grey goo.”

Crichton’s book hit every button that might stoke public alarm about nanotechnology: a greedy, high-tech firm; lack of government regulation; new technologies turned into military applications. Reality reflected this last aspect all too well. In 2002, for example, MIT announced it would establish a $90 million Institute for Soldier Nanotechnologies. To some watchdog groups, the idea of nano-enhanced soldiers sounded quite ominous.

Moreover, Prey appeared in bookstores in the midst of a larger furor over the implications of “emerging technologies” like nanotechnology, robotics, genetic engineering, and artificial intelligence. This started in 2000 when Sun Microsystems co-founder Bill Joy broke ranks with fellow technologists and wrote an incendiary article titled “Why the Future Doesn’t Need Us.” Joy’s venue for publication – Wired magazine – was especially ironic given its cyber-libertarian ideology that deified markets and disparaged regulation. In January 2003, the Action Group on Erosion, Technology, and Concentration (ETC), an unwieldy name for a small Canadian organization, released a report called The Big Down. ETC had previously led campaigns against genetically modified foods. Not surprisingly, their report savaged the idea of nanotechnology. Even Prince Charles and Astronomer Royal Martin Rees got in on the act and warned of the existential threats new technologies like nanotech posed.2 Given the global panic after the 2002-2003 outbreak of “severe acute respiratory syndrome” (SARS) and the existential fears about terrorists getting weapons of mass destruction, statements such as Rees’ guaranteed headlines.

The timing of all this negative press regarding nanotech was dreadful for scientists and policy makers in the U.S. and Europe who were trying to build support and maintain funding for various national nanotechnology initiatives. The U.S. effort had been launched in 2000 as a Clinton initiative, and its advocates were trying to see its research agenda codified into law under the new Bush administration.

In response, mainstream scientists and science managers took some of the wind out of the storm that Joy, Crichton, et al. had helped stoke by supporting the need for more research on the societal and ethical implications of nanotechnology. In late 2004, the National Science Foundation announced it wanted to start a Center for Nanotechnology in Society and, a year later, funding was given for two such centers: one at Arizona State and another at UC-Santa Barbara (disclosure: I helped write the proposal for the UCSB center and currently lead one of its three research groups).

It would be overstating the case to say that Prey catalyzed a national effort to look more closely at the implications of emerging technologies like nano. By the same token, it would be an exaggeration to say that researchers wouldn’t be studying nanotechnology were it not for Drexler’s advocacy…they still would be, though it might be called something else and it might exist as a less coordinated research agenda.

However, the historical record is clear on the fact that policy makers and some scientists were very concerned about the possible effects that Prey might have on public perceptions of nanotech. I can recall going to several nano and society meetings c. 2005-6, and a regular topic of conversation around the coffee table was what effects a cinematic version of Prey might have on public views of nanotech.3

My Google-based forays show that Prey doesn’t seem likely to arrive in theatres any time soon (although I found an amateur version on YouTube…5 minutes of my life I’ll never get back).4 Nonetheless, Prey has already had two main impacts. One was giving the millions of people who bought the book some sense, albeit a highly distorted and hyperbolic one, of what nanotechnology is. The other was, even if only indirectly, to help propel scholarship that considers the societal implications of a host of new technologies.

  1. A 2006 survey done on behalf of the Woodrow Wilson International Center for Scholars showed that about 70% of Americans had never heard of nanotech (42%) or knew just a little about it (27%). []
  2. An example of a headline from spring 2003; this was from an Edinburgh weekly. []
  3. Interestingly, one study showed just the opposite! Prey actually made people think nanotech’s implications would be more positive for society, not less: Michael Cobb and Jane Macoubrie, “Public Perceptions About Nanotechnology: Risks, Benefits, and Trust,” Journal of Nanoparticle Research, 2004, 6, 4: 395-405. []
  4. One version has it that Crichton gave up on a movie version after seeing the 2003 Hollywood adaptation of his book Timeline. Crichton also took a drubbing from scientists after his book State of Fear (in which environmentalists and eco-terrorists are portrayed as a threat and global warming as a hoax) came out in 2004. Crichton himself died in 2008. []