Against Safetyism

why safetyism — and not climate change or artificial intelligence — has become one of the biggest existential risks facing humanity
Tobias Huber & Byrne Hobart

From demonic apparition to a future history of total chaos, I hope I’ve made it clear I’m not a blind accelerationist when it comes to artificial intelligence. The technology’s potential is fundamentally unpredictable. That makes work in the field as dangerous as it is important, and I do think concern, generally, is justified. There are important questions here worth raising. There are important conversations here worth having. But chatbots breaking up your marriage and racist computers are not reasonable critiques, nor is firebombing data centers a reasonable alternative to progress in the field. The AI safety people and the writers entranced by them are, frankly, a little bit out of control. The philosophy of Safetyism? Also, perhaps, more dangerous than any of the field’s most existential concerns.

Byrne Hobart writes The Diff, a newsletter on inflections in finance and tech. Tobias Huber is a writer and tech founder. Their book on technological stagnation and the nature of innovation is forthcoming with Stripe Press.

-Solana 

---

Elon Musk once tweeted that “with artificial intelligence, we’re summoning the demon.” Given some of the hyperbolic responses to ChatGPT, it seems that we’ve indeed summoned a demon — or “god,” according to Sam Altman. Some think these chatbots, or similar applications built on foundation models such as large language models (LLMs), herald what’s called AGI (artificial general intelligence) — an AI demon capable of unleashing the apocalypse.

Among the most extreme sci-fi speculations of AI doomsday are “Roko’s basilisk” and the “paperclip maximizer” thought experiments, which are designed to illustrate the risks of a superintelligent, self-replicating, and constantly self-improving future AGI — one that might become uncontrollable and incomprehensible, even to its creators. But these hypothetical scenarios rest on questionable, and often highly anthropomorphizing, assumptions: that safety measures can’t be built into these systems, that AI can’t be contained, that a future AGI is subject to the selection pressures of natural evolution, or that a superintelligent AI will invariably turn evil.

And a deeper problem with these extreme scenarios is that it’s essentially impossible to predict the medium-term, let alone the long-term, impact of emerging technologies. Over the past half-century, even leading AI researchers completely failed in their predictions of AGI timelines. So, instead of worrying about sci-fi paperclips or Terminator scenarios, we should be more concerned, for example, with all the diseases we won’t be able to cure and the scientific breakthroughs that won’t materialize if we prematurely ban AI research and development based on the improbable scenario of a sentient AI superintelligence annihilating humanity.

Last year, a Google engineer and AI ethicist claimed that Google’s chatbot had achieved sentience. And leading AI researchers and others, including Elon Musk, just recently signed a letter calling for a moratorium on AI research — all it will take, in other words, is six months to “flatten the curve” of AI progress (obviously, China’s CCP would be very supportive). Signatories seem to fear not only the shimmering cyborg exoskeletons crushing human skulls — immortalized in Terminator 2’s opening scene — but also the automation of jobs and, predictably, fake news, propaganda, misinformation, and other “threats to democracy.” But the call for a moratorium on AI — which conflates existential risks with concerns about unemployment — doesn’t define what the risks and their probabilities are, and lacks any criteria for when and how to lift the ban. So, given that regulations for AI applications, such as autonomous driving and medical diagnostics, already exist, it’s unclear why a ban on basic AI research and development is needed in the first place.

But, as if a temporary ban on AI research weren’t enough, Eliezer Yudkowsky, a leading proponent of AI doomerism — who expects humanity to very soon go extinct in a superhuman-intelligence-induced Armageddon — has called for a complete shutdown of all large GPU clusters, restrictions on the computing power anyone is allowed to use in training AI systems, and, if necessary, the destruction of data centers by airstrike. The only way to prevent the apocalypse, according to this extreme form of AI safety doomerism, is for America to reserve the right to launch a preemptive strike on a nuclear power to defeat Ernie, the chatbot of Chinese search engine Baidu. (This would be doubly ironic, because China’s Great Firewall turns out to be a pioneering effort to censor text at scale, exactly what generative AI companies are being called on to do today.) It’s one thing to generate an infinite number of improbable apocalypse scenarios; it’s another to advocate for nuclear war based on the release of a chatbot and purely speculative sci-fi scenarios involving Skynet.

Now, a cynical response could be that AI doomerism is simply marketing for AI startups: it’s a potential instrument of Armageddon, not a glorified linear regression! Or the hyperbolic calls might just express the desire of emerging incumbents, such as OpenAI, to entrench their leading positions, and of competitors to catch up to the bleeding edge of AI research. So, in case AI capabilities don’t become a defensible moat, regulatory capture based on AI safety concerns could become one.

But such hysterical responses to emerging technologies are nothing new. In Silicon Valley, the obsession with existential risks, or “x-risks,” which today plays out on online forums such as LessWrong, has a long tradition. We can trace the current wave of AI safetyism back to the Asilomar Conference on Recombinant DNA, which took place in 1975 in response to the technological breakthroughs in genetic recombination by Herbert Boyer and others. Genetic engineering’s prospect of God-like capabilities to create or manipulate life itself triggered similar calls for regulations and bans. Despite these calls for regulation and a moratorium, Boyer co-founded Genentech a year later — instead of a ban, we got the biotech industry.

Not surprisingly, the recent call for an AI research moratorium echoes the Asilomar AI Principles, a set of guidelines drafted at the 2017 Asilomar Beneficial AI conference. By contrast, just a few decades before the first Asilomar biotech conference, nuclear physicists involved in the Manhattan Project, which built the first atomic bomb, faced the speculative possibility that a nuclear detonation might “ignite” the atmosphere through a hypothetical fusion reaction. After a few calculations and informal discussions, they moved ahead as planned — without any moratorium, ethics board, or safety committee. And instead of a ban on nuclear science, we got nuclear power.

Now, while concerns about the safety of emerging technologies might be reasonable in some cases, they are symptoms of a deeply entrenched societal risk aversion. Over the past decades, we’ve become extremely risk intolerant. And it’s not just in AI or genetic engineering that this risk aversion manifests. From the abandonment of nuclear energy and the bureaucratization of science to the eternal recurrence of formulaic and generic reboots, sequels, and prequels, this collective risk intolerance has infected and paralyzed society and culture at large (think of the Marvel Cinematic Universe, or of startups pitched as “X for Y,” where X is something unique and impossible to replicate).

Take nuclear energy. Over the last decades, irrational fear-mongering resulted in the abandonment and demonization of the cleanest, safest, and most reliable energy source available to humanity. Despite an abundance of scientific evidence demonstrating its safety, we abandoned an energy source that could have powered civilization indefinitely for unreliable and dirty substitutes, all while worrying about catastrophic climate change. It’s hard to conceive now, but nuclear energy once encapsulated the utopian promise of infinite progress, and nuclear engineering was, up until the 1960s, one of the most prestigious scientific fields. Today, mainly because of Hiroshima, Fukushima, and the pop-culture imagery of a nuclear holocaust, the narrative has shifted from “alchemy,” “transmutation,” and “renewal” to dystopian imagery of “contamination,” “mutation,” and “destruction.” Although most deaths during the Fukushima incident resulted from the evacuation measures — and more people died from Japan’s shutdown of nuclear reactors than from the accident itself — many Western nations responded to the meltdown by obstructing the construction of new reactors or phasing out nuclear energy altogether. This has resulted in the perverse situation where Germany, which has obsessively focused on green and sustainable energy, now relies on highly polluting coal for up to 40% of its electricity. The rise of irrational nuclear fear illustrates a fundamental problem with safetyism: obsessively attempting to eliminate all visible risks often creates invisible risks that are far more consequential for human flourishing. Just imagine what would have happened if we hadn’t phased out nuclear reactors — would we now have to obsess over “net-zero” or “2°C targets”?

Or consider another recent and even more controversial example: biological gain-of-function research. Zhengli Shi, a virologist who directs the Center for Emerging Infectious Diseases at the Wuhan Institute of Virology, and Peter Daszak, a disease ecologist and president of the EcoHealth Alliance who frequently collaborated with the Wuhan Institute of Virology, have found themselves at the center of the Covid-19 lab-leak controversy. Both specialized in the study of viral infections. By sequencing viral genomes, isolating live viruses, and genetically mixing and matching them, their research attempted to better understand how viruses evolve and gain the ability to infect human hosts, so that better drugs and vaccines could be developed to protect humanity from future pandemics. In other words, gain-of-function research, which modifies viruses to understand their evolution, was designed to mitigate the existential risk of pandemics. If we assume that the lab-leak theory is true, then gain-of-function research on coronaviruses caused precisely the pandemic it was meant to prevent.

Now, whether we think an AI apocalypse is imminent or the lab-leak hypothesis is correct, the pattern is the same: by mitigating or suppressing visible risks, safetyism often creates invisible or hidden risks that are far more consequential than the ones it attempts to mitigate. In a way, this makes sense: creating a new technology and deploying it widely entails a definite vision for the future. But a focus on risks entails a definite vision of the past, and a more stochastic model of what the future might hold. Given time’s annoying habit of only moving in one direction, we have no choice but to live in somebody’s future — the question is whether it’s somebody with a plan or somebody with a neurosis.

One of the most worrying manifestations of our collective risk aversion is occurring in science. Many studies have documented a decreasing risk tolerance in scientific research. A core driver has been the dominance of citation-driven metrics to evaluate, fund, and promote scientific research — a process that parallels the ever-increasing bureaucratization of science itself (interestingly, the onset of this trend, as measured by the growth of academic administrative staff, coincides with the first safetyism conferences of the 1970s). Citations have become the decisive factor in publications, grant-making, and tenure. Consequently, as crowded scientific fields attract the most citations, high-risk, exploratory science gets less attention and funding. And, in addition to the risk aversion of individual scientists, ethics committees, peer reviewers, and commissions are now slowing down scientific progress. This risk aversion, coupled with increasing bureaucratization, helps explain why scientific productivity has been declining significantly over the past decades. Scientists simply don’t want to risk their careers to explore radically novel and risky ideas. It isn’t surprising, then, that a large-scale study published earlier this year confirmed that scientific progress is slowing across several major fields and that papers and patents are becoming less disruptive over time. For example, it took more than 20 years for CRISPR, the gene-editing technique, to be recognized; for a very long time, CRISPR research attracted few citations and almost no funding. How many breakthroughs could we have had if scientists didn’t have to spend most of their time writing funding proposals that are safe for their careers?

Culturally, we experience risk aversion and safetyism in almost every domain of our lives. We’re trapped in an eternal recurrence of uninspiring sequels, prequels, and reboots — the ten highest-grossing movies last year were all sequels or reboots. Machine-learning algorithms, trained on what has come before in Netflix’s library, churn out formulaic movie scripts; pop music is stuck in an endless loop of repetition and nostalgia; art and architecture are dominated by generic conformity; and, whether by a syringe of filler that costs a few hundred dollars or by an app filter, influencers and their followers converge on the so-called “Instagram Face,” the apotheosis of averageness.

The same goes for education: we don’t take real risks anymore — heretical ideas get “canceled,” and careerism, which ideally progresses from an MBA to an investment bank or consultancy, or more recently a big tech company, leaves no space for experimentation or deviant paths. We could dismiss the fact that Spider-Man or Star Wars gets yet another reboot as irrelevant, but a generic culture that fails to inspire heroic risk-taking and lacks any positive vision of the future results in large-scale failures of imagination.

Whether it’s nuclear energy, AI, biotech, or any other emerging technology, what all these cases have in common is that — by obstructing technological progress — safetyism carries an extremely high civilizational opportunity cost. From fire and the printing press to antibiotics and nuclear reactors, every technology has a dual nature: it involves trade-offs. Earth’s carrying capacity for human life has been expanded by technologies that have created existential risks. Hydrocarbons bootstrapped our civilization but contributed to climate change; artificial nitrogen fixation made it cheaper to manufacture explosives and poison gas but is also necessary for feeding the current population; and the Green Revolution created monoculture risk in agriculture but also feeds billions. If there had been a moratorium and an ethics committee for every emerging technology, we most likely would have killed progress in its larval stage.

A leading proponent of “x-risks,” the Oxford philosopher Nick Bostrom, recently published a paper that considers several counterfactual and speculative futures to outline a set of policy prescriptions for forestalling the devastation of civilization by runaway technological acceleration. They include the “restriction of technological development,” “extremely effective preventive policing,” “effective global governance,” and “comprehensive surveillance.” Here we detect the dark undercurrent of safetyism — in many instances, it is simply a tool for top-down control, one that increases centralization and, as Bostrom suggests, might even bring about “global governance.” Safetyism not only shifts control from AI engineers and programmers to a cabal of ethicists and policymakers; the centralization of tech development in the name of safety also compromises the entire system in question. It’s abundantly clear that many regulations — even in areas far less complex than AI or genetic engineering — don’t really work. So it’s highly unlikely that a board of ethicists or a group of regulators will bureaucratically solve, for example, the so-called “alignment problem” between AI and human values, which is infinitely more complex than, say, regulating the gender politics of public bathrooms. Policy responses such as bans or moratoria will, in all likelihood, simply kill any progress and result precisely in a state of stasis that’s far more dangerous than the supposed risks these regulations try to mitigate.

Perversely, safetyism itself has become one of the most important existential risks confronting humanity. So it would be wise to heed the Bible’s prophetic warning in 1 Thessalonians 5:3: “For when they shall say, Peace and safety; then sudden destruction cometh upon them.”

-Tobias Huber and Byrne Hobart
