The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.” One year earlier, F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
Other prominent interwar intellectuals worried about developments in nonmilitary technologies. Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
The Czech playwright Karel Capek’s 1920 drama, “R.U.R.,” imagined a future in which artificially intelligent robots wiped out humanity. In a scene that would strike fear into the hearts of Silicon Valley doomers, a character in the play observes: “They’ve ceased to be machines. They’re already aware of their superiority, and they hate us as they hate everything human.” As the A.I. godfather Geoffrey Hinton, who quit his job at Google so he could warn the world about the very technology he helped create, explained, “What we want is some way of making sure that even if” these systems are “smarter than us, they’re going to do things that are beneficial for us.”
This fear of a new machine age wasn’t quarantined to fiction. The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation — a book The New York Times reviewed with enthusiasm. Others went to even greater lengths to act on their machine-age angst. In 1923, when “R.U.R.” opened in Tokyo, a Japanese biology professor, Makoto Nishimura, became so alarmed by the machine-facilitated extinction the play depicts that he sought to create other, benevolent robots to prevent the human species from being “destroyed by the pinnacle of its creation,” artificial man.
One way to understand extinction panics is as elite panics: fears created and curated by social, political and economic movers and shakers during times of uncertainty and social transition. Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change. Today it’s politicians, executives and technologists. A century ago it was eugenicists and right-leaning politicians like Churchill and socialist scientists like Haldane. That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.