The Technologists’ Siren Song


Note: In 2013, I wrote a few blog posts (here and here) about “technology intellectuals” who largely promote a view of innovation that is corporate-friendly and rooted in the Silicon Valley ecosystem. More than anything, I was hoping to encourage more public engagement from my fellow academics with issues around controversial technologies. The Chronicle of Higher Education encouraged me to write an essay on the topic, which recently came out. The CHE version was edited for length, so some references to colleagues’ work, etc. were left on the cutting room floor. I thought it might be useful to publish the full version. Here it is…

In July 1969 – less than two weeks after Neil Armstrong and Buzz Aldrin cavorted on the moon – the New York Review of Books published a controversial essay by John McDermott. His “Technology: The Opiate of the Intellectuals” presented a sharp rejoinder to those who viewed the Apollo 11 mission as both a triumph of modern technology and a harbinger of even more ambitious technological frontiers. McDermott, his view galvanized by the United States’ escalating high-tech war in Southeast Asia, concluded that technologists’ prevailing ideology – that technology offered solutions to all problems – was especially attractive to those best positioned to reap its benefits and avoid its unattractive costs.


Technology comforts, surrounds, and confounds us. It shapes our professional activities as witnessed by academics’ appropriate consternation about MOOCs. We wonder about the toxic consequences of hydraulic fracturing or the blowback from using drones to assassinate our enemies. On-going revelations about the National Security Agency’s surveillance programs make headlines even as we willingly reveal and record ourselves via social media.

Unfortunately, we lack a sufficiently robust cadre of scholars active in the marketplace of ideas who can engage with and critique today’s technological systems. As a pale substitute, we instead have “tech intellectuals” like Chris Anderson, Jeff Jarvis, Andrew Keen, and Eric Schmidt. A fairly homogenous group of mostly white men with elite degrees, inclined to champion innovation, disruption, and the free market, these pundits have usurped the role of explaining technology to policy makers, investors, and the public. Affiliated with science-based industries, they are cosmopolitan in their circles, speaking to millions via TED talks and hobnobbing at the World Economic Forum in Davos. With a reach digitally enhanced by YouTube and Twitter, today’s tech intellectuals use their skills to craft buzzword-laden arguments, often with one another. Generating more heat than light, these narrow exchanges tend to polarize debate and infuse it with self-interest. More attention, manufactured via on-line baiting, generates more Internet traffic, higher speaker fees, and lucrative book contracts.

There are at least three main problems with how many tech intellectuals frame public discourse about technology. First, they both foster and reflect an intellectual monoculture that favors Silicon Valley’s corporate viewpoint. Innovation from the producer’s perspective is almost always championed. In his 1969 essay, McDermott castigated this as laissez innover, a producer-centric perspective in which technological innovation is assumed “to work for the general welfare in the long run.” Consider two recent examples. Jeff Jarvis’s 2011 book Public Parts is representative of much of this genre. One doesn’t have to venture much beyond his subtitle – How Sharing in the Digital Age Improves the Way We Work and Live – to see that the corporate culture that produces technological innovation is something to be praised, not probed. A similar contribution is Eric Schmidt and Jared Cohen’s The New Digital Age: Reshaping the Future of People, Nations, and Business. The two men wield enormous influence not just in Silicon Valley – both are top-level executives at Google – but also in Washington, DC, where they’ve advised presidents and cabinet officials.

Rarely do tech intellectuals like Jarvis or Schmidt question the overall value of innovation. And indeed why would they? They are in the innovation business, part of a corporate culture that boasts of its ability to thrive, not just survive, in a climate of constant disruption. But is innovation always beneficial? Schmidt – then Google’s CEO, who used to tout his company’s “don’t be evil” policy – stirred headlines when he was asked whether Google’s users should have concerns about sharing information with the company. “If you have something that you don’t want anyone to know,” Schmidt said, “maybe you shouldn’t be doing it in the first place.”

Privacy issues notwithstanding, innovation doesn’t always create jobs. It sometimes destroys them. As Amy Sue Bix argued in her book Inventing Ourselves Out of Jobs, automation and innovation, from the 1920s through the 1950s, displaced tens of thousands of workers. Recall the conflict between Spencer Tracy (a proponent of automation) and Katharine Hepburn (an anxious reference librarian) in the 1957 film Desk Set. And what of the broader societal benefits innovation brings? In Technological Medicine: The Changing World of Doctors and Patients, Stanley Joel Reiser suggests that, at least in the world of healthcare, innovation is not always an unalloyed good. In his study of several medical innovations, Reiser concludes that it produces winners and losers – and the winners are not always the patients. Sometimes, instead, they are hospital administrators, physicians, or Big Pharma. As Reiser argues in the case of some medical technologies, technology for its own sake can lead to unexpected moral dilemmas. An especially compelling example he gives is the invention of the artificial respirator: while saving countless lives, this medical innovation also created ethical, legal, and policy debates over, literally, questions of life and death. Moreover, there is the broader ethical question of whether it is better to spend large amounts of money on medical technologies and treatments that will benefit future generations if this means less funding to address current medical needs.

A second key flaw is that today’s prominent tech intellectuals, who rarely venture beyond a few high-tech hothouses – Silicon Valley being Exhibit A – often ignore what is most central to understanding technology’s broader implications: its “thinginess.” Technology involves stuff. A persistent flaw in today’s digital boosterism is forgetting that all of the stuff that makes the Internet and the Web work is actually made of something – silicon, plastic, rare-earth minerals mined in Bolivia or China. The Foxconn workers in Shenzhen who assemble iPhones and other high-tech devices certainly see it that way.

Neglecting technology’s “thinginess” – the labor and materials essential to its creation – fosters another blind spot. Even if we just delimit our consideration of technology to the world of digital devices – a common circumscription among tech intellectuals – we must recognize the profound environmental impact these have on the non-digital (i.e. real) world. According to the Silicon Valley Toxics Coalition, Santa Clara county – where Intel was born and where Google and Facebook now call home – hosts the largest concentration of Superfund sites in the U.S. Some 29 sites remain heavily polluted with the by-products of making semiconductors and other high-tech commodities. As the SVTC notes, tens of thousands of residents – primarily lower-income people of color – “live, work, and go to school near or right on top of these polluted areas.”

Once we recognize the thinginess of technology, it becomes easier to see that the environmental impact of the digital world extends far beyond Silicon Valley. Those server farms that make today’s cloud computing initiatives possible (and capitalized at some $100 billion) aren’t powered by good intentions. According to Nathan Ensmenger, a historian of computing at Indiana University, the digital infrastructure that tech intellectuals praise has a voracious appetite. The equivalent of some thirty nuclear power plants is needed to keep the digital world uploading and tweeting. Running the “clean” digital world that tech intellectuals laud also implicates server farms, which require water, just like real farms. A single data center might pump hundreds of thousands of gallons of chilled water through its facilities every day as coolant. Meanwhile, the detritus of our disembodied digital lives presents very real problems in the developing countries where so much of it ends up. The “disposable” smart phone that we discard at the end of our telecom contract might end its product lifecycle, Ensmenger notes, in a landfill outside a slum in Ghana.

A third and final critique is that many of today’s prominent tech intellectuals are more inclined to celebrate technology than to deeply question its deleterious effects on society and labor. Their writings and TED talks have far too little in common with the critiques of public intellectuals like McDermott or David Noble. Before Noble passed away in 2010, the academic gadfly probed the history of technology while simultaneously pushing his readers to see how technology represented and reinforced social relations between men and women or labor and management. In his now-classic book Forces of Production, Noble argued, for instance, that Cold War military priorities and corporate concerns shaped industrial automation. As a result, sophisticated machine tools took precedence over more labor-friendly but lower-tech options despite the latter’s economic benefits. To be fair, Noble was a polarizing figure whose Marxist critiques of capitalism were never fully embraced by the academy. Noble once remarked that his first book, America by Design (1977), got him hired at MIT while Forces got him fired. Few tech intellectuals active today have much to say about how hyped innovations like 3-D printing or driverless cars will affect the workforce of the future.

Back in the age of Apollo, critics like McDermott branded technology as an opiate, one that seduced intellectuals with the idea of a technological fix that could resolve social and political problems. A similar observation applies today as we are dulled, lulled, and anesthetized by today’s technology intellectuals. Recapturing territory lost to the tech intellectuals doesn’t require that we start giving TED talks. But more academics whose expertise is the history and social implications of technology are needed to contribute to what is an increasingly shallow public discourse. To leave so much of technology’s story to just the tech intellectuals would be a tragic abdication – but one we can avoid.