A (Con)Fusion of Money and Politics

In 2010, I had the good fortune to spend several months in southern France. It wasn’t all rosé wine and boules though. I also researched and wrote an article (which you can read here) about the International Thermonuclear Experimental Reactor, now known simply as ITER. Pronounced something close to “eater” (as in eater of budgets), the name also means “the way” in Latin (as in, advocates hope, “the way” to boundless clean energy). ITER is the global fusion community’s most ambitious project to date. It is also one of the longest-running, most expensive, and most troubled science projects ever.

Under construction in southern France near the tiny town of St. Paul les Durance, just east of Aix-en-Provence and adjacent to the French nuclear facility at Cadarache, ITER is a fusion energy facility built around a massive tokamak reactor. The name comes from a Russian acronym meaning “toroidal chamber with magnetic coils” – a tokamak magnetically confines a plasma in a doughnut-shaped space. The fuel will be a mixture of deuterium and tritium, two isotopes of hydrogen, that when heated to temperatures in excess of 150 million °C form a hot plasma. The heat drawn from the plasma can be used to boil water, make steam, and generate electricity. Like conventional nuclear reactors, ITER is, in essence, a giant teakettle.


Schematic of the ITER reactor…projects like this often include a human figure as a marker of scale. Look in the lower center right.

Since the project really got started in 2006, its expected completion date has slipped from 2016 to 2018 to 2020. Meanwhile, costs have ballooned. The project’s official website doesn’t even give a price: “Based on the European evaluation, we can estimate the cost of ITER construction… at approximately EUR 13 billion” with the caveat that this is “if all the manufacturing was done in Europe.” (ITER’s components will be manufactured by all the project’s partners in a manner proportional – somewhat – to their contributions.) However, a recent article in Science estimates a cost of some €16 billion (i.e. about $22 billion).1

Perhaps even more impressive than its cost is its scope. ITER is a joint international effort; its partners include the E.U., the U.S., Russia, China, India, Japan, and South Korea. Together, these partners represent over half of the world’s population. Like the former British empire, the sun never sets on ITER.


ITER’s logo reflects its goal – bottling the power of the sun inside a magnetic field – as well as the partners involved.

Wavering support for ITER in the U.S. is nothing new. ITER began as a creature formed by the fusion of money and politics. In her prize-winning 1998 book The Radiance of France, my colleague Gabrielle Hecht deploys the term “technopolitics,” defined as “designing or using technology to constitute, embody, or enact political goals.” Hecht’s book asked what was French about the French nuclear program.


ITER’s history shows similar technopolitics at the very roots of this megaproject. The key difference is that ITER’s technopolitics transcended national identity and state politics and moved to a much larger global arena. Consider ITER’s origins:2

After World War Two, the most active fusion programs were in the U.S. and the USSR while initial European efforts remained quite modest. Military classification initially prevented the international exchange of scientific and technical information about fusion but these constraints began to loosen following the Eisenhower administration’s “Atoms for Peace” initiative.

In the late 1970s, there were plans to build a new Europe-only fusion reactor. This was imagined as a cooperative transnational European project that would provide a bridge to a future commercially viable reactor. Before these efforts gained any traction, however, a Soviet delegation proposed building an international “Large Tokamak of the next generation” under the auspices of the International Atomic Energy Agency (IAEA). The IAEA referred the Soviet proposal to its own advisory body which then organized a series of workshops to discuss cooperation between Europe, the U.S., the Soviets, and Japan to build an “International Tokamak Reactor.” Negotiations followed but the sudden end of U.S.-Soviet détente after the ’79 Soviet invasion of Afghanistan quashed progress.

Meanwhile, European plans became entangled in U.S. science policy. European fusion managers expressed reservations about the relative prioritization of their goals versus what American science managers were interested in. Were American researchers proposing a fusion research collaboration in order to better understand plasma physics or as steps toward building a future power-generating device? The Americans, as European researchers saw it, were more oriented towards basic science, a belief reinforced by George A. Keyworth, Reagan’s science advisor. Keyworth’s emphasis on basic science conflicted with the underlying long-term European goal of developing fusion technologies for energy applications and eventual “industrial production and marketing.” Meanwhile, some U.S. politicians indicated reluctance to support international collaborations which involved sharing manufacturing technology with the Soviets (a risk to national security) or the Europeans and Japanese (a risk to U.S. economic competitiveness).

Hints of new-found political will to move forward with some sort of international fusion project eventually appeared but came, rather unexpectedly, from heads of state and not laboratory directors. In the fall of 1985, before the first summit between Reagan and Mikhail Gorbachev, Secretary of State George Shultz met with Soviet Foreign Minister Eduard Shevardnadze. At the suggestion of Evgeny Velikhov, a fusion physicist who advised Gorbachev on science, Shevardnadze proposed adding international cooperation in nuclear fusion to the agenda.


As an arena for Cold War superpower collaboration, fusion made sense for several reasons. For one thing, an international research community and pathways for information exchange already existed. Second, clean energy via nuclear fusion offered the semblance of broad societal benefits. It also had the potential to put a more positive face on nuclear applications at a time when the superpowers’ nuclear arsenals drew widespread global condemnation.3 Finally, while technically possible, practicable applications of fusion energy still remained many years off, making it a fairly safe arena for U.S.-Soviet collaboration in terms of technology sharing.

When the Geneva summit concluded, Reagan and Gorbachev issued a joint statement which was pure technopolitics. To further superpower cooperation and perhaps ease global tensions, the two leaders endorsed “the potential importance of…utilizing controlled thermonuclear fusion” and advocated “practicable development of international cooperation” to achieve this. When Reagan briefed Congress on his summit trip, he identified international cooperation in fusion as one part “of a long-term effort to build a more stable relationship” with the Soviets and the two leaders repeated their backing for an international fusion reactor at subsequent meetings. The seeds for ITER – planted some years back – slowly started to germinate.

This was nearly thirty years ago. Think about it…when Ronny and Gorbie got ITER started, the Norwegian band A-Ha was at the top of the pop charts.


Since 1985, the U.S. has left the project, re-joined it, and threatened to leave it again more than once. (I’ve actually lost count.) Along the way, all sorts of complicated politics – where to build the fusion facility (France vs. Spain vs. Japan?) as well as European opposition to the 2003 Iraq War – threatened to derail the whole thing. Meanwhile, anti-nuclear and anti-corporate groups in France protested the entire project (and the massive investments regional and national French officials had made in it). Technopolitics indeed.


Cartoon from a French anti-ITER group.

In 1995, Hubert Curien, then serving as president of the Organisation Européenne pour la Recherche Nucléaire (CERN), described ITER as “globalization with hardware.”4 From its inception, ITER and its predecessors were enmeshed in politics at many levels, from the regional and national to the global. First conceived as a tool to improve relations between Cold War superpowers, ITER became an opportunity for “Old Europe” to project an image of unity and a tool to reward countries if they aligned themselves with American foreign policies.

Since 1985, a whole new generation of fusion scientists has entered the fusion community. Do they know the project’s long and tortured history? This – to me – is a fascinating question. I’m sure that younger scientists have heard stories and myths about ITER’s early days. What are these creation tales like, I wonder? Do the veterans tell the newcomers about how ITER – if or when it’s built – will be a site not just for the fusion of light elements, but of technological ambitions, money, and politics of all kinds?

Update: Just after I posted this, I learned about a lengthy feature piece on ITER from the brand-new (3 March 2014) issue of the New Yorker. It’s worth a read and it gets at the same issues – politics, money, and technological ambitions – noted here… 

  1. Moreover, a recent story in Physics Today notes that if estimates from the U.S. Office of Science are correct, the actual cost might approach $50 billion. This seems to be an extremely high estimate, however.
  2. Note: a massive repository for fusion and ITER history is here…it’s a fascinating place to explore.
  3. Ironically, similar motivations were behind Eisenhower’s “Atoms for Peace” program.
  4. Comment made during a 1995 round table discussion in Florence, Italy, about European research cooperation; John Krige and Luca Guzzetti, eds., History of European Scientific and Technological Collaboration (Brussels: European Commission, 1997).

Getting Away From It All

Silicon Valley’s public image has been taking some body blows lately.


Perkins on Bloomberg…critics pointed to the rather posh watch on display.

The most shocking example of this was a series of statements by Thomas J. Perkins, a Valley venture capitalist. With a net worth north of $8 billion, Perkins last month had the audacity to compare the “persecution” (his term, not mine…) of America’s 1% with the Nazi treatment of Jews prior to the Holocaust. That’s sure one way to Godwin a conversation.1 Determined to dig the hole deeper, Perkins later gave a speech in San Francisco in which he suggested that the rich should get more votes than the poor.2

OK, maybe Perkins is just a tone-deaf idiot or his inflammatory statements are part of some larger anti-tax strategy being ginned up by the billionaire class. This interests me far less than the underlying message from Perkins – the über-rich of Silicon Valley are not just different from me and (probably) you but, to be honest, they would like to see less of us. And private jets just aren’t enough anymore.


Protesters block a Google bus

Commensurate with Perkins’ statements is the rising tide of protests – as seen in this video – in San Francisco against the fleet of private buses run by Google, Facebook, Apple, et al. The protesters’ basic gripe is simple – these buses use public roads and public bus stops yet the companies owning them don’t contribute enough to the public transportation system. Underlying this, of course, are deeper resentments about rising rents, affordable housing, and so forth.

However, like the statements by Perkins, the “Google bus” issue taps into a deeper one – the Silicon Valley tech crowd would like to get away from it all. And that includes having less to do with everyone who isn’t part of their technological ecosystem.

The surge of praise from the tech elite for Bitcoin is another reflection of this “get away from the huddled masses” attitude – and, in the process, a way to abandon decaying public infrastructure and what they see as oppressive tax brackets. What Bitcoin offers is an escape hatch for some people – apparently including scam artists and drug dealers – to avoid the responsibility and social contract that come with using traditional legal tender.

There are a host of things about Bitcoin that I find problematic. Sci-fi writer Charles Stross captured them in a fine blog post called “Why I want Bitcoin to die in a fire.” First of all, as Stross and others have noted, a very real environmental cost goes into producing this virtual currency.

Graphs of Bitcoin’s ever-growing energy consumption

Look at the graphs above. All the processing power needed to solve Bitcoin’s cryptographic puzzles uses an ever-increasing amount of energy. Look down into the Bitcoin mine shaft…stacks upon rows of energy-sucking servers and coolant systems. Bitcoin’s virtuality is just an illusion.
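For readers who want a feel for why mining burns so much electricity, here is a toy sketch of the proof-of-work idea (greatly simplified – real Bitcoin mining hashes block headers with double SHA-256 against a far harder target; the block text and difficulty below are invented for illustration):

```python
# Toy proof-of-work sketch: find a nonce whose SHA-256 hash starts with
# enough zeros. There is no shortcut - only brute-force guessing - which
# is why mining at scale consumes so much energy.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return the first nonce whose hash has `difficulty` leading hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # every failed guess is wasted electricity

nonce = mine("example block", 4)
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
print(digest.startswith("0000"))  # -> True
```

Each extra zero of difficulty multiplies the expected number of guesses by sixteen, so the hardware (and power bill) scales up accordingly.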


Bitcoin “mining” center in Hong Kong

Besides the environmental aspects – as Chinese regulators recently found out – Bitcoin offers a good way for investors to avoid taxes. And you know you have a problem when the Chinese government starts crying foul. Bitcoin has developed an especially strong appeal among Silicon Valley libertarians. As Stross (and Paul Krugman) have noted, libertarians like Bitcoin because it “pushes the same buttons as their gold fetish.” Don’t like the Federal Reserve? Here’s your Bitcoin.

But the most serious issue I have with Google’s city-clogging buses or Bitcoin is how these signal a continued erosion of civil society. Each speaks to a violation of today’s moral economy, the unspoken rules and assumptions that have been tacitly agreed on and through which society functions.

The most extreme example of this elite desire for flight, sublimation, and exclusivity might be the Seasteading movement.


Image from a Seasteading promotional video

The idea is simple – create a permanent dwelling at sea, outside of any government’s territorial claims. Once past those limits, all sorts of social and economic experimentation can happen. As the Seasteading Institute – based (where else?) in San Francisco – says on its web page, the goal is “to enable seasteading communities – floating cities – which will allow the next generation of pioneers to peacefully test new ideas for government.” Who is the leader of this group of wannabe escapists? Patri Friedman is chairman of Seasteading’s board. The name sound familiar? Yes, he’s the grandson of free market guru Milton Friedman.

Last summer’s movie Elysium was just the sci-fi version of this “get me away from you” mentality. Whether it’s Bitcoin, human-built islands of free enterprise floating in open waters, or roaming fleets of buses for techie riders, the message is clear – there is a group of powerful people who want to leave it all – and us – behind. It’s really no surprise, I suppose. After all, when some of the most visible tech billionaires – Jeff Bezos, Elon Musk, Paul Allen, Richard Branson – have made their fortunes, what did they want to do? Start a private space company…and really get away from it all.

  1. As reported by the Wall Street Journal, he said: “I perceive a rising tide of hatred of the successful one percent,” calling it “a very dangerous drift in our American thinking. Kristallnacht was unthinkable in 1930; is its descendant ‘progressive’ radicalism unthinkable now?”
  2. As he said, “The Tom Perkins system is: You don’t get to vote unless you pay a dollar of taxes…But what I really think is, it should be like a corporation. You pay a million dollars in taxes, you get a million votes. How’s that?”

Checks and Stripes


What do you see here? Stripes? Or feedback?

Leaping Robot note: Historians, like most of us, like controversies. They provide a valuable lens to look at a range of issues. This post was inspired by an article that appeared recently in Science. It concerns a simmering feud between two groups of researchers over how to interpret some images of gold nano-particles made with a scanning tunneling microscope (STM). My colleague, Cyrus Mody, a historian of science at Rice University, wrote a prize-winning book called Instrumental Community that tells the story of the invention and spread of scanning probe microscopy.1 So Cyrus is an excellent person to offer a more nuanced reading of this controversy. One might think that the nano-feud is, as one person notes, just a “minor storm in a nano teapot.” Mody’s guest blog post goes deeper than this and shows how controversies like this, besides being rather common, also tell us something important about how scientists (and science journalists) communicate with each other and the public. Here’s Cyrus…

***

The January 24 issue of Science has a news item by Robert Service with the juicy title “Nano-Imaging Feud Sets Online Sites Sizzling” describing a multiyear tussle over a decade’s worth of science based on some scanning tunneling microscope (STM) images of gold nanoparticles.  The STM images are from Francesco Stellacci’s group, first at MIT then at the Swiss Federal Institute of Technology in Lausanne.  They purport to show “stripes” of organic molecules attached to particles with diameters of 20 nanometers or less, in an arrangement resembling lines of latitude.  However, a number of critics have insisted – in blogs, Twitter feeds, and elsewhere – that Stellacci’s stripes are actually instrumental artifacts.

I’m not going to weigh in on whether Stellacci’s stripes are real or not.  I know some very smart STMers on either side, so I doubt consensus will be reached soon.  Instead, let me do what historians of science usually do: put controversial research in perspective, followed by a point for the prosecution and a point for the defense.

Controversies like this are pretty common.  The point of doing forefront research is to push our ability to make and measure stuff right to the limits.  Robert Service has made a career out of reporting such disputes, for which historians of science should be grateful.  For instance, I’ve made frequent use of his article on a related dispute from 2003, “Molecular Electronics – Next Generation Technology Hits an Early Mid-Life Crisis.”  If you look at the history of science you’ll find lots of disputes, sometimes extending over decades, over matters of fact that you would think could be easily resolved.  One of my favorite examples is the more than thirty-year debate, from the early 1920s to the late 1950s, in which cytogeneticists nearly uniformly agreed that a normal human somatic cell contains 48 chromosomes (rather than the now accepted 46).2 How hard can it be to count chromosomes that are almost a thousand times larger than Stellacci’s nanoparticles?  And yet, even something as simple as counting can remain indeterminate (or determined but incorrect) for a very long time.

The instrument Stellacci used to image his nanoparticles, the scanning tunneling microscope, has a particularly rich history of disputes.  STM images are made by bringing a sharp metal probe very close to the surface being imaged while maintaining a voltage difference between the probe and the sample.  This encourages some electrons to “tunnel” from the sample to the probe and vice versa.  Generally, the closer the probe is to the surface the higher the probability of tunneling in one direction rather than the other, so the number of tunneling electrons is a reasonable proxy for the z-height of the sample for any given x-y position of the probe.  As you move the probe around in x and y, you build up a matrix of z values for the strength of the tunnel current, which you can then convert into a three-dimensional image of the sample.


Schematic of how an STM works
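The raster-scan bookkeeping described above can be sketched in a few lines of toy code (this is not instrument code – the smooth “surface” function and the grid size are invented for illustration). One z value is recorded per (x, y) probe position, building up the matrix that becomes the image:

```python
# Toy raster scan: sample a simulated surface on an x-y grid, recording
# one height value per probe position, the way an STM builds its image.
import math

WIDTH, HEIGHT = 8, 8  # grid of probe positions

def surface_height(x, y):
    """Hypothetical smooth sample surface with a single gentle bump."""
    return math.exp(-((x - 4) ** 2 + (y - 4) ** 2) / 8.0)

# Build the matrix of z values, one row per scan line.
image = [[surface_height(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]

# The brightest pixel should sit at the center of the bump, (4, 4).
peak = max((z, x, y) for y, row in enumerate(image) for x, z in enumerate(row))
print(peak[1], peak[2])  # -> 4 4
```

In a real STM, of course, each “sample” is a tunnel-current measurement fed through feedback electronics rather than a direct height readout.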

Roughly, the computer to which the STM is hooked up registers a single value for the tunnel current per increment of probe position.  That is, the STM periodically samples the sample, and each sampled value is fed both to the imaging output and to the circuit controlling the height of the probe at the next increment.  Too strong a feedback can make the probe constantly overshoot and then play catch-up, so even a smooth surface will seem to have undulations.  Unfortunately, the phenomena researchers are looking for on a surface are also often periodic in nature – as, for instance, are Stellacci’s stripes.  If you think you’ve made a sample with stripy features, you want to get an STM image with alternating patches of light and dark, up and down.  A nice analogy, suggested to me by my Rice colleague Kevin Kelly, is of a strobe light illuminating water coming out of a tap.  If the strobe samples the water at the right rate, we see a series of drops accelerating under the force of gravity.  If the strobe frequency and duration are varied, though, we might see a smooth flow of water or we might see water drops that appear to move upward into the tap.
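Kelly’s strobe analogy can be made concrete with a toy numerical sketch (the sampling step and period below are invented for illustration): sampling a perfectly periodic “surface” at an increment just under its true period produces apparent stripes with a much longer period – an alias, not a real feature.

```python
# Toy aliasing demo: a surface with true period 1.0, sampled every 0.9
# units, appears to have stripes with a period of about 9.0 units.
import math

true_period = 1.0          # spacing of the real surface features
step = 0.9 * true_period   # probe increment, just under one period

def height(x):
    return math.cos(2 * math.pi * x / true_period)

samples = [height(i * step) for i in range(200)]

# Estimate the apparent period from successive maxima of the sampled trace.
peaks = [i for i in range(1, 199)
         if samples[i] > samples[i - 1] and samples[i] > samples[i + 1]]
apparent = (peaks[-1] - peaks[0]) / (len(peaks) - 1) * step

print(round(apparent, 2))  # -> 9.0 (nine x-units, versus a true period of 1.0)
```

The sampled trace looks convincingly stripy, but the stripe spacing is an artifact of the measurement rate, which is exactly the worry skeptics raise about periodic features near an instrument’s resolution limit.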

So it’s not hard to find people who are skeptical of claims that a particular STM image indicates the existence of some periodic nanostructure, particularly if the distance between periodic features is near the limits of the instrument’s resolution.  Perhaps the most famous such case involved STM images of DNA made in the late 1980s and early 1990s.


2009 cover of Nature showing false-colour STM image with a DNA molecule running from bottom left to top right.

At the time, many STMers harbored hopes that their instrument could resolve the base pairs in a strand of DNA and might therefore be used in genetic sequencing.  Lots of STMers tried to image DNA; several groups published such images; one Caltech group even managed to get an atomic resolution image of DNA on the cover of Nature that appeared to show a helix with the right pitch distance (the linear distance between two turns of the helix).  And yet, there were lots of good reasons to think that an organic molecule like DNA should be difficult to image via electron tunneling.  Whispers began to circulate that some of the best images of “DNA” were actually images of complex defects in a graphite substrate that might or might not have any DNA deposited on it.  In the end, images such as the 1990 Nature cover were never definitively disproved, but the uncertainties surrounding their validity became so insurmountable that almost everyone moved away from trying to image DNA with an STM.  In the late ‘90s and early 2000s, the three or four remaining groups showed that DNA could be imaged, but only under very specific and difficult conditions completely unlike those used in the early ‘90s, conditions that, so far, have barred using an STM for genetic sequencing.

The STM of DNA case, then, leads to my two points about Stellacci’s stripes.  My point for the defense is that even a result that turns out, after vigorous debate, to be wrong can be extraordinarily productive.  Lots of people got into STM on the basis of those images of “DNA”, even if it’s still uncertain whether there was actually any DNA there.  The debate about those images led to a general tightening of standards for STM image production and interpretation, and a better understanding of which applications STM was and was not good for.  The DNA boom provided an early market for commercial STMs that gave manufacturers the revenue to build better versions that could be used in more appropriate ways.  Since the vast majority of scientific findings are ignored, a finding that turns out to be questionable but which is actually taken up for productive debate is doing pretty well.  I don’t have the expertise to say whether Stellacci is correct or not, but even if the consensus emerges that he is wrong, it looks to me like he and his skeptics will still have managed to move the field forward, not back.  (Conversely, if his skeptics turn out to be wrong, they also will still have done the field – and Stellacci himself – a great service).

My point for the prosecution is that some of the criticism of Stellacci’s skeptics’ methods is misplaced.  Service’s article offers a number of quotes from Stellacci and his allies complaining that the skeptics have used extrascientific means to carry out the debate – that they have resorted to blog posts and non-peer-reviewed articles instead of remaining within the arena of peer-reviewed journals.  That’s a rather ahistorical view of how scientific controversies proceed.  Peer-reviewed journals are, of course, an important mechanism for fostering the validity of scientific contributions, but we all know that peer review is slow and hardly error-free.  Often, peer-reviewed articles only make sense within some ecology of other forms of communication.3  In the “STM of DNA” controversy, the thread of argument left quite a light footprint in peer-reviewed articles – it’s difficult to piece together, just from published texts, who said what when, much less when and why various actors changed their minds about STM of DNA.  Most of the influential voices used conference presentations and post-presentation conversations to persuade themselves, each other, and the rest of the community that there were serious problems with STM of DNA.  Some of the forms of communication used by Stellacci’s skeptics today (blogs and Twitter) didn’t exist back then, but if they had you can bet they would’ve been used too.  If history is anything to go by (and I hope it is!) there’s nothing inherently unscientific about using any mode of communication you can to get your point across.


If you liked the analysis and narrative here, you’ll probably like Cyrus’s book (which won the 2013 Cushing Memorial Prize).

  1. By forming a community, he argues, these researchers were able to innovate rapidly, share the microscopes with a wide range of users, and generate prestige (including the 1986 Nobel Prize in Physics) and profit (as the technology found applications in industry). Mody shows that both the technology of probe microscopy and the community model offered by the probe microscopists contributed to the development of political and scientific support for nanotechnology and the global funding initiatives that followed.
  2. Described in Aryn Martin, “Can’t Any Body Count? Counting as an Epistemic Theme in the History of Human Chromosomes,” Social Studies of Science (2004).
  3. For a particularly engaging article on this topic with a self-explanatory title, see Bruce Lewenstein, “From Fax to Facts: Communication in the Cold Fusion Saga,” Social Studies of Science (1995).