Brain Mapping and the *-omics of Everything

“It is not down in any map; true places never are.”

-Herman Melville


I recently gave a talk at the Tech Museum of Innovation in San Jose. Afterwards, John Markoff from The New York Times came up and introduced himself. I’m a fan of his work – I enjoyed his 2005 book What the Dormouse Said on the interplay between the counterculture and the computer industry and have also used it as source material for my own writing.1 I asked John what he was working on and he just told me to keep an eye on tomorrow’s paper.

Markoff’s story – front-page, above the fold – revealed how the Obama administration is planning a major new initiative to map the active human brain. (Note: On February 25, a day after I first posted this, a follow-up article by Markoff appeared on-line.) As Markoff described it, the Brain Activity Map (BAM) project “would do for the brain what the Human Genome Project did for genetics.” The BAM project – which would be much more challenging than mapping the human genome – even got a shout out in Obama’s 2013 State of the Union address when the President noted that “our scientists are mapping the human brain to unlock the answers to Alzheimer’s” and hinted at the good things to follow if this work continued.

BAM’s basic goal is to create a dynamic map of the brain – it’s not clear whose grey matter will be studied, but I can only imagine the field day that Institutional Review Boards will have with this one day – rather than a static one. A June 2012 paper published in Neuron explains BAM’s goals.2 (The paper’s six co-authors are listed alphabetically. Together they are an odd ensemble of star scientists including Berkeley chemist Paul Alivisatos and Harvard geneticist George Church, whom I wrote about in an earlier post.) According to their paper, the BAM project will “transcend the ‘structural connectome’” studied (and popularized) by people such as MIT’s Sebastian Seung.


Title and abstract of the 2012 Neuron article

What intrigued me even more than BAM’s ambitious goals were the proposed methods. These would draw on some of the radical ideas for nanotechnology that have circulated since the mid-1980s in pop-science works like Eric Drexler’s Engines of Creation. For example, as Markoff described it, a “fleet of molecule-sized machines” – called “nanoprobes” in the Neuron paper – could noninvasively sense and record spikes of activity from every neuron.3 This massive amount of data would be wirelessly transmitted and recorded, an activity that would require formidable information processing capabilities. One approach might be “novel techniques” enabled by advances in synthetic biology. For example, DNA polymerases could be used as “spike sensors” while “prechosen DNA molecules could be synthesized” to record data patterns.

BAM’s advocates make an economic as well as scientific argument for why the government should fund this decade-long endeavor at about $300 million/year. As Markoff points out, a recent Battelle study states that the $3.8 billion the feds spent on the Human Genome Project “drove $796 billion in economic impact” and “created 310,000 jobs.” Obama referenced these figures in his SOTU address. The Neuron paper predicted the BAM project might lead to similar “technological breakthroughs” at the interface of biotechnology and nanotechnology. These could include “intelligent nanosystems for fundamental investigations in the life sciences, medicine, engineering, and environmental applications; capabilities for storage and manipulation of massive data sets; and development of biologically inspired, computational devices.” In short – few tech sectors would be untouched by a major brain mapping project.

Several aspects of the BAM project intrigue me. As a historian, I’m fascinated by the invocation of the Human Genome Project (HGP) as an analogy for why BAM should be done. George Church explained that the “genome project arguably began in 1984, where there were a dozen of us who were kind of independently moving in that direction but didn’t really realize there were other people who were as weird as we were.” Church clearly hopes to replicate (in a good way) some of that weird chemistry with BAM. I’d be really interested to hear from my history of biology colleagues as to ways in which the HGP has been harnessed to make the case for other big science projects. Also – can/should we think of ourselves as being products of our genome or our connectome?4


Are you your connectome or your genome?

As I’ve explored in previous posts here and here, historical analogies have great power and their use should also come with some scrutiny. For example, neither Markoff’s article nor the Neuron piece commented on the fact that the HGP was spurred on by competition with Craig Venter and Celera. Might BAM eventually become a public-private hybrid or a competition between federally-funded researchers and those operating with private monies?5 Moreover, comparisons between BAM and HGP reminded me that when scientists were lobbying for a National Nanotechnology Initiative (NNI) there would be the occasional allusion to the glory years of the Apollo program.6 Obama’s 2013 SOTU also drew on the ’60s-era push to space: “Now is not the time to gut these job-creating investments in science and innovation,” the President said. “Now is the time to reach a level of research and development not seen since the height of the Space Race.” Likewise, advocates of the NNI made economic, environmental, and health arguments as to why nano should be the next tech frontier Uncle Sam funded.7

I am also intrigued by how advocates of BAM echo ideas from the early years of nanotechnology’s popularization. Although scientists like Richard Smalley successfully marginalized images of bio-inspired nanobots – my Visioneers book has a chapter devoted to this – these ideas clearly had a powerful hold on the imagination of the public and policy-makers. So it’s intriguing to see references to “nanoprobes” and “biologically inspired, computational devices.” The self-replicating nanobots that Drexler popularized, which could build things “atom by atom,” are ancestors of a sort.

Finally, the field of brain mapping – like nanotech of the early 1990s – has a small population of its own strange and/or fringy characters. In July 2012, The Chronicle Review ran an article called “The Strange Neuroscience of Immortality.”


Fabulous illustration that accompanied the Chronicle piece

It focused on the work of Kenneth Hayworth, a postdoctoral researcher in Jeff Lichtman’s lab at Harvard University. A “self-described futuristic thinker,” Hayworth has developed a series of new machines that can help map the brain’s neural circuitry (the “connectome”). Hayworth’s automated “ultramicrotome,” for example, uses a tiny diamond saw to cut ultra-thin slices of brain tissue. Imaging these slices with an electron microscope can produce high-resolution maps of them – analogous to a circuit diagram. What makes Hayworth’s research controversial is his belief that mapping the brain in exquisite detail is one path to brain preservation and eventual immortality. As The Chronicle described it: “Then one day, not too long from now, his consciousness will be revived on a computer. By 2110, Hayworth predicts, mind uploading—the transfer of a biological brain to a silicon-based operating system—will be as common as laser eye surgery is today.” In other words, connectomics is, at least for Hayworth, one path to fulfilling the posthumanist dream of transcending the limits of the human body. “We’ve had a lot of breakthroughs—genomics, space flight—but those are trivial in comparison to mind uploading,” Hayworth said. “This will be earth-shattering because it will open up possibilities we’ve never dreamed of.”

To help realize his dream, Hayworth started the Brain Preservation Foundation. Currently, the Foundation offers a cash prize “for the first individual or team to rigorously demonstrate a surgical technique capable of inexpensively and completely preserving an entire human brain for long-term (>100 years) storage with such fidelity that the structure of every neuronal process and every synaptic connection remains intact…” Evan Goldstein interviewed me for his Chronicle piece. My response was that people like Hayworth have “ideas that stand out there as something to be looked at, maybe shot down, proven or disproven, but they are part of the process of staking out where the frontier of science is.”

Parallels between Hayworth’s activities and the early years of nano are striking. The Foresight Institute, a nanotech advocacy group that Eric Drexler helped start, offers large prizes for significant advances in molecular engineering. Moreover, Hayworth’s goal and his approach (a non-profit funded by donations) resonate with those of the early nanoists. Like Drexler, Hayworth has turned to deep-pocketed entrepreneurs – Peter Diamandis, Peter Thiel, et al. – for support. Finally, among the first communities to embrace radical ideas for nanotechnology in the 1980s were those with a keen interest in life extension technologies such as cryonics and mind uploading. As MIT’s Sebastian Seung – an advisor for the Brain Preservation Foundation – put it, “Mind uploading is part of the zeitgeist…People have become believers in virtual worlds because of their experience with computers. That makes them more willing to consider far-out ideas.” Similar affinities brought people from the computer science and software communities to the radical flavors of nanotech c. 1990.

So, bringing this all back to BAM – I am looking forward to seeing whether and how a major national initiative to do dynamic brain mapping takes shape. But already I’m fascinated by the ways in which BAM’s advocates have connected it to previous major national science initiatives as well as the dark matter of unusual ideas and individuals that forms a diffuse halo around the project.

  1. Markoff’s book preceded Fred Turner’s more scholarly treatment From Counterculture to Cyberculture by a few years.
  2. An earlier version of this paper from 2011 is available here. The report originated at a September 2011 conference at the Kavli Royal Society International Center and was organized by the Kavli Foundation, the Gatsby Charitable Foundation, and the Allen Institute for Brain Science.
  3. I discussed the “molecule-sized machines” idea at length with a colleague of mine at Caltech. He noted that these are passive sensors, not active nanobots – i.e. millions of silicon-based nanoprobes spaced out at the micron scale. Caltech’s Michael Roukes, who is one of the BAM report authors, is a specialist in this line of research.
  4. It seems as if everything has its own -omics these days: proteomics, epigenomics, and now connectomics. This meme has spread far beyond the life sciences – in the humanities we have “culturomics” while, in 2011, the Obama administration announced a “materials genome initiative” to “double the speed with which we discover, develop, and manufacture new materials.” I’m surprised there’s no nano-omics yet but I guess that lexicographical ship already sailed.
  5. The 2011 Kavli workshop that led to the BAM report was funded by some private money, including the Allen Institute for Brain Science…so, there is private money that has skin in the game.
  6. As Nobel laureate Richard Smalley advised Congress in 1999, “Somebody has to go out and put a flag in the ground and say: Nanotechnology, this is where we are going to go and we are going to have a serious national initiative in this area.”
  7. Debates among scientists as to which is more important – your connectome or your genome – should be interesting topics for historians to explore. Perhaps these mirror nature vs. nurture debates which are, of course, not an either/or proposition. Also of interest would be the development of technologies to actually do brain mapping exercises…how is the field of neuroscience driven by technological and instrumental developments? How have these shaped research topics, etc. etc. – all the sorts of questions that STS scholars are well-equipped to study!

Regulating Nanotechnology Via Analogy, Pt. 2

[Blogger’s note: This post – a continuation of the previous one – is adapted from a talk I gave in March 2012 at the annual Business History Conference. Like the last post, it draws on research done by Roger Eardley-Pryor, an almost-finished graduate student I’m advising at UCSB, and me. I’m posting it here with his permission. Some of the images come from slides we put together for the talk.]


Can historical analogies help us approach technological risks better?

In my last post, I discussed the ways in which policy makers could use historical analogies as tools when considering how nanotechnologies might be regulated. At the end, however, I suggested that the multiple definitions of nanotechnology posed a challenge for finding the one best analogy. So – what are examples of the analogies made between nanotech and other technologies, and what do they have to say about possible regulation paths? Consider the following examples:

Example #1 – Genetically Modified Organisms


Engineered nanomaterials bear some relation to GMOs…but it’s not necessarily a strong one.

In April 2003, Prof. Vicki Colvin testified before Congress. A chemist at Rice University, Colvin also directed that school’s Center for Biological and Environmental Nanotechnology. This “emerging technology,” Colvin said, had a considerable “wow index.”1 However, Colvin warned, every promising new technology came with concerns that could drive it from “wow into yuck and ultimately into bankrupt.” To make her point, Colvin compared nanotech to researchers’ and industry’s recent experiences with genetically modified organisms. Colvin’s analogy – “wow to yuck” – made an effective sound bite. But it also conflated the very different histories of two specific emerging technologies.

While some lessons from GMOs are appropriate for controlling the development of nanotechnology, the analogy doesn’t prove watertight. Unlike GMOs, nanotechnology does not always involve biological materials. And genetic engineering in general never enjoyed any sort of unalloyed “wow” period – there was “yuck,” and criticism, from the very start. Furthermore, giant agribusiness firms prospered handsomely even after the public’s widespread negative reactions to their products. Lastly, living organisms – especially those associated with food – designed for broad release into the environment were almost guaranteed to generate concerns and protests.2 Rhetorically, the GMO analogy was powerful…but a deeper analysis clearly suggests there were more differences than similarities.

Example #2 – Asbestos


Are engineered nanoparticles analogous to asbestos fibers?

A different definition of nanotech treats it like a new form of matter…stuff requiring special oversight, particularly in the workplace. Such a material definition of nanotechnology suggests a ready analogy to asbestos. Given decades of enormous and expensive asbestos litigation, the analogies between asbestos and nanotechnology have prompted substantial toxicological analysis of new materials.3 Carbon nanotubes (CNTs) are the best known of these new nano-materials. Nanotubes have a long, thin structure resembling that of asbestos fibers, and numerous toxicological studies indicate that they share a similar toxicity. These similarities, and the historical circumstances of attempts to regulate asbestos in the United States, offer suggestions for how to proceed toward the regulation of certain nanotechnologies.

Given the known threats of asbestos, the U.S. EPA attempted an all-out ban on its manufacture and use. However, in 1991, the U.S. Fifth Circuit Court of Appeals ruled that the EPA had not met the requirements to impose the “least burdensome” controls. The court promptly lifted the ban for all but the most dangerous existing asbestos products. The inability of the EPA to ban asbestos, despite decades of evidence confirming its hazards, indicates the need for serious reform of the Toxic Substances Control Act (TSCA), the United States’ existing law for chemical regulation.4 While this need for reform applies to existing substances like asbestos, it applies even more so to novel and analogous nanotechnologies like CNTs.

Example #3 – Fallout


Per capita thyroid doses in the continental United States resulting from atmospheric nuclear tests at the Nevada Test Site, 1951-1962.

With planetary fears about grey goo and self-replicating nanobots, figures like Michael Crichton, Bill Joy, Prince Charles, and, at times, even K. Eric Drexler, defined nanotechnology so broadly, diversely, and nebulously that they rendered it a questionable, minute, and invisible unknown. This line of thinking suggested nanotechnology might be analogous to another existential, invisible, yet life-threatening technological byproduct – radioactive fallout.

Each of the hundreds of open-air nuclear devices exploded between 1945 and 1980 released minute, invisible, radioactive debris that circulated around the planet’s stratosphere before falling back to earth, exposing humans and the environment to its harmful radioactivity.5 The global spread of these materials throughout ecosystems and into human bodies occurred without full public or private consideration of their risks by policy-makers, by scientists, or by unknowingly exposed publics. In WWII and during the Cold War, the dictates of national security instigated the development and open-air testing of nuclear weapons.  However, by the end of the Cold War, national security came to be defined increasingly in terms of economic security.  Along those lines, American scientists and policy-makers in the late 1990s and early 2000s framed the need for the federal development of nanotechnology in the rhetoric of economic national security.

The nanotechnology enterprise has also yielded novel engineered particles that exist only at invisible scales; new particles that have found wide commercial distribution around the world before full public or private consideration of their potential risks to human health, or full consideration of their threats to our environmental security. In 2003, Oregon Congressman David Wu hinted at the analogy between nanotechnology and nuclear fallout by citing a historic example of regulating fallout’s novel and invisible threat: the Partial Nuclear Test Ban Treaty.6 Though Representative Wu celebrated the Test Ban Treaty for its international cooperation and control of hazardous fallout, he noted that “In many respects, the Nuclear Test Ban Treaty is nothing but a ban on experimentation.”7 At the time, organizations like ETC, Greenpeace, and Friends of the Earth-Australia had also called for a ban on nanotechnology production until researchers clearly understood all of nanotechnology’s environmental, health, and safety (EHS) risks. As with other examples, one’s definition of nanotechnology – here as an invisible, existential, and global threat – determined the appropriate analogy to prior technologies. That definition, in turn, indicated to various nano-stakeholders particular forms of precaution, regulation, and control. If nanotechnology was analogous to fallout, maybe the analogous regulation would be an outright ban that would forestall all future risks?

Example #4 – Recombinant DNA


A fourth definition for nanotechnology moves us beyond consideration of novel forms of matter and instead identifies nanotechnology as a suite of technological practices for manipulating nature – techniques that render the natural world as unnatural. This identification of nanotechnology with particular lab practices yields an analogy to the debates over recombinant DNA (rDNA) techniques in the 1970s.

In the mid-1970s, scientists agreed to a moratorium on rDNA practices until they better understood the technology and until the U.S. National Institutes of Health (NIH) could establish proper guidelines. After the famous 1975 Asilomar Conference, the NIH’s Recombinant DNA Advisory Committee produced its research guidelines. These guidelines clearly defined specific biological techniques and instituted multiple layers of control, including requirements for biological containment. This ensemble of lab practices helped stimulate the rapid commercialization of modern biotech research and, one could argue, consumer acceptance.

Nanotechnology-stakeholders have identified a similar goal of early anticipation and mutually agreeable control through their framework of anticipatory governance. For some nanotech stakeholders – particularly entrepreneurs affiliated with commercialized industry – the NIH’s decision to institute guidelines for rDNA technology, rather than push for legally binding regulations, offers possible paths for the eventual oversight of nanotechnology. Government guidelines consist of procedures that people are expected to follow when receiving federal dollars, whereas regulations are substantive rules for all actors that carry the authority of law. However, drawing lessons from rDNA and applying them to nano comes with drawbacks – for example, guidelines similar to those from the NIH might only apply to federally funded research. This would leave privately funded research in a different regulatory regime, subject not to guidelines but to the hard law of regulation.

Some concluding thoughts…

“Nanotechnology” is a socially constructed collection of techno-scientific ideas, practices, and materials. But with such a broad and sometimes vague set of definitions for nanotechnology used by scientists, policy-makers, activists, and businesses, how can nano-stakeholders know what to regulate?

Some scholars suggest that regulators’ wish for strict definitions is misplaced. Andrew Maynard, a leading expert on risk science, believes that a precise definition for nanotechnology would actually impede proper regulation.8 Instead of a categorical definition, Maynard now argues that regulation must focus on various “trigger points” – empirical points at which a material transitions from conventional to risky. Here, one could imagine officials looking to historical examples to find other such ‘tipping points’ that catalyzed regulatory reform.

But policy makers have moved in the opposite direction. In late 2011, Health Canada as well as the European Commission announced a specific set of politically designed definitions for nanomaterials to be used explicitly “for all regulatory purposes.” Similarly, the United States’ most recent research strategy for environmental, health, and safety emphasized the need for federal agencies to establish agreed-upon definitions for nanomaterials. But even as regulators moved toward a “one-size-fits-all” model, analogies with other materials, techniques, and industries still prove useful. The US EPA, for instance, has considered whether certain nanomaterials should be regulated under rules that apply to insecticides. So, perhaps we can look forward to the drawing of new analogies – not to GMOs and asbestos and fallout, but to DDT…


Advert for DDT, c. 1947

So — if historical analogies can teach us anything about the potential regulation of nano and other emerging technologies, it is that we need to take a little risk in forming socially and politically constructed definitions of nano. These definitions should not be based on science alone but should mirror the complex and messy realm of research, policy, and application. No single analogy fits all cases, but an ensemble of several (properly chosen, of course) can suggest possible regulatory options.

  1. House of Representatives Committee on Science, “The Societal Implications of Nanotechnology,” Hearing before the Committee on Science, House of Representatives (108th Congress, 1st Session), April 9, 2003, p. 49. A quick web search on “Vicki Colvin + wow to yuck” yields some 360,000 hits including several presentations and papers she and her Rice colleagues gave that use the phrase.
  2. Ronald Sandler, “The GMO-Nanotech (Dis)Analogy?,” Bulletin of Science, Technology, and Society 26:1 (2006): 57-62; Arie Rip, “Folk Theories of Nanotechnologists,” Science as Culture 15:4 (2006): 349-65.
  3. A good article on this is Geoffrey Tweedale and Jock McCulloch, “Chrysophiles versus Chrysophobes: The White Asbestos Controversy, 1950s–2004,” Isis 95:2 (June 2004): 239-259.
  4. Marc Landry, “EPA and Nanotechnology: The Need for a Grand Bargain?,” in Governing Uncertainty: Environmental Regulation in the Age of Nanotechnology, edited by Christopher Bosso (Washington, DC: RFF Press, 2010), p. 87.
  5. Harold L. Beck and Burton G. Bennett, “Historical Overview of Atmospheric Nuclear Weapons Testing and Estimates of Fallout in the Continental United States,” Health Physics 82:5 (May 2002): 591-608.
  6. This treaty, signed in 1963 by the United States, the Soviet Union, and Great Britain after years of international negotiation, banned nuclear test explosions in the air, above the atmosphere, or at sea.
  7. House of Representatives Committee on Science, “The Societal Implications of Nanotechnology,” Hearing before the Committee on Science, House of Representatives (108th Congress, 1st Session), April 9, 2003, p. 91.
  8. Andrew Maynard, “Don’t Define Nanomaterials,” Nature 475 (7 July 2011): 31.

Regulating Nanotechnology Via Analogy, Pt. 1

[Blogger’s note: This post is adapted from a talk I gave in March 2012 at the annual Business History Conference; it draws on research done by Roger Eardley-Pryor, an almost-finished graduate student I’m advising at UCSB, and me. I’m posting it here with his permission. This is the first of a two-part essay…some of the images come from slides we put together for the talk.]


Take a Little Risk?

Over the last decade, a range of actors – scientists, policy makers, and activists – have used  historical analogies to suggest different ways that risks associated with nanotechnology – especially those concerned with potential environmental implications – might be minimized. Some of these analogies make sense…others, while perhaps effective, are based on a less than ideal reading of history.

Analogies have been used before as tools to evaluate new technologies. In 1965, NASA requested comparisons between the American railroad of the 19th century and the space program. In response, MIT historian Bruce Mazlish wrote a classic article that analyzed the utility and limitations of historical analogies.1 Analogies, he explained, function as both model and myth. Mythically, they offer meaning and emotional security through an original archetype of familiar knowledge. Analogies also furnish models for understanding by construing either a structural or a functional relationship. As such, analogies function as devices of anticipation for what today is fashionably called “anticipatory governance.”2 They can also serve as a useful tool for risk experts.3


Analogies may serve as a tool for crafting better governance of new technologies

Policy makers recognize the importance of analogies. In 2003, participants at an NSF-sponsored workshop on nanotechnology’s societal implications warned that “one of the more disturbing possibilities is that policy makers and leaders of social movements may respond to nanotechnology not as it actually is, but in terms of false analogies.”4 In 2003, policy makers and scientists were especially concerned about the public perceptions of nano.

When the U.S. government first launched its National Nanotechnology Initiative in 2000, few if any policy makers expressed concerns about its environmental implications.5 But by 2003, it was impossible for the people charged with managing nano to ignore its environmental, health, and safety issues. So how did EHS issues get on the radar screen of policy makers and journalists? There are several causal factors; their common feature is that they all originated not in the laboratory but in the realms of popular culture, celebrity, and social activism.

An early shot across the bows came from an unexpected source. Bill Joy was a Berkeley-trained computer researcher and dot-com millionaire. His incendiary article – published by Wired in April 2000 – was titled “Why the Future Doesn’t Need Us.” It highlighted perils he saw in several emerging technologies. Motivated partly by controversies over corporate development of genetically-modified crops, Joy identified self-replication of newly emerging nanotechnologies as a clear and future danger. The solution? Joy proposed “relinquishment” and limiting development of “technologies that are too dangerous.” Accompanied by a flurry of international publicity, Joy’s article came at an inconvenient time for nano-boosters as Congress was preparing to vote on Clinton’s proposed new national nano initiative in 2000. Controversy stirred by articles like Joy’s threatened this initiative.

Nano-anxieties were fanned anew in late 2002 when HarperCollins published Prey by blockbuster novelist Michael Crichton. Central to its plot was the deliberate release of autonomous, self-replicating nanobots. Created by an amoral corporation working under contract to the Pentagon, the predatory swarm of millions of nanobots attacked people until it was destroyed. Crichton’s book hit every button that might stoke public alarm about nanotechnology: a greedy, high-tech firm; lack of government regulation; new technologies turned into military applications.

Non-governmental organizations helped keep controversies over nanotechnology in front of North American and European citizens. In January 2003, the Action Group on Erosion, Technology, and Concentration (ETC), a small Canadian NGO, released a report called The Big Down.


ETC had previously led campaigns against genetically modified foods. Not surprisingly, their report savaged the idea of nanotechnology. ETC’s report reflected the group’s larger agenda, which was less about so-called emerging technologies per se and more about restricting corporate power and maintaining cultural diversity and human rights.

But none of these examples concerned a specific existing technology. Instead, these spurs to regulation referred to hypothetical technologies and the creation of planet-threatening dangers. Soon, however, concerns about nano’s regulation transcended vague existential threats and moved to specific and potentially troubling techniques and materials.

But what exactly was to be regulated? Was nanotechnology something with the capacity to spread across wide swaths of land and wreak tremendous environmental damage, the fear amplified in part because of its minute size? Or was nanotechnology less an existential threat and instead a suite of scientific techniques and tools that required regulation? If not a particular technique, was nanotechnology a particular product, a specific category of material, a hazardous form of matter that should be controlled for the health and safety of workers and consumers? Or did nanotechnology represent an entire new industry in need of care and control in order to reap its economic benefits?


How you define something helps determine how you regulate it…

So…in order to draw fitting analogies that might suggest an ideal path toward the appropriate oversight or regulation of nanotechnology, stakeholders first had to agree on its definition. And depending on what definition one chose, a different historical analogy could be found which suggested a different approach to regulation…more on this next time.  But, to give a hint, the frequent comparisons between nano and genetically-modified organisms were not necessarily the best way to build a regulatory policy.6

To be continued…

  1. Bruce Mazlish, “Historical Analogy: The Railroad and the Space Program and Their Impact on Society,” in The Railroad and the Space Program: An Exploration in Historical Analogy, edited by Bruce Mazlish (Cambridge, MA: MIT Press, 1965), pp. 1-52.
  2. Daniel Barben, Erik Fisher, Cynthia Selin, and David H. Guston, “Anticipatory Governance of Nanotechnology,” in The Handbook of Science and Technology Studies, Third Edition, edited by Edward J. Hackett, Olga Amsterdamska, Michael Lynch, and Judy Wajcman (Cambridge, MA: MIT Press, 2008), pp. 979-1000.
  3. Scott Knowles at Drexel University has a great book out on the role of “disaster experts” in modern America that is worth looking at.
  4. “Workshop Breakout Session Reports,” in Nanotechnology: Societal Implications I, Maximizing Benefits for Humanity, edited by Mihail C. Roco and William Sims Bainbridge (Dordrecht, The Netherlands: Springer, 2007), p. 73.
  5. When the National Academy of Sciences reviewed the NNI in 2002, its report showed that, out of $464 million allocated for nano in FY2001, less than 1% went to the Environmental Protection Agency. NAS, Small Wonders, Endless Frontiers report, 2002.
  6. A rhetorically powerful example of this came from 2003 Congressional testimony given by Rice University chemist Vicki Colvin. Colvin, director of Rice’s Center for Biological and Environmental Nanotechnology, spoke about societal implications of nanotechnology. This “emerging technology,” Colvin said, had a considerable “wow index.” Nanotech offered “potential benefits to the economy, human health, and quality of life.” However, Colvin warned, every new such emerging technology came with its own particular set of concerns. If improperly handled, these concerns “can turn wow into yuck and ultimately into bankrupt.” To drive her point home, Colvin shrewdly drew an analogy between a future in which nano might go bankrupt and an example that would resonate with policy makers – the “genetically modified foods industry.” A quick web search on “Vicki Colvin + wow to yuck” yields some 360,000 hits including several presentations and papers she and her Rice colleagues gave that use the phrase. Her original testimony appears in House of Representatives Committee on Science, “The Societal Implications of Nanotechnology,” Hearing before the Committee on Science, House of Representatives (108th Congress, 1st Session), April 9, 2003.