
There’s an important connection between arguments that AGI (Artificial General Intelligence) is imminent and DOGE. AGI-imminence isn’t at all the sole justification for DOGE, but it is a real part of the intellectual mix. And if you genuinely believe that AGI is right around the corner, then what is happening with DOGE right now is the merest hint of what we need to prepare for over the next few years. If you don’t so believe (and I don’t), then DOGE is a visible example of how horribly AGI-prepping can go wrong. The beginnings of the ‘we need to prepare for AGI’ thesis are laid out in Kevin Roose’s NYT article, which came out last week. The beginnings of one counterargument, as I at least see it,* can be found in the piece on Large Models as Social and Cultural Technologies, by Alison Gopnik, Cosma Shalizi, James Evans and myself, that came out in Science right before.
Here’s what I think is happening. The case for imminent AGI more or less reduces to the notion that creative problem solving can be commoditized via large-model-based technologies. Such technologies include language models like the GPT family and Claude, the diffusion models that produce art, and others.
The thesis is that these models will soon be able to solve difficult problems better than humans ever could. They will be able to do this because of the “bitter lesson” that the “secret to intelligence” is, in Dario Amodei’s formulation, scaling up simple objective functions by throwing data and compute at them. We will soon live in a world where “geniuses in a datacenter” can conduct fundamental research, solve the aging problem and propel us into a material paradise like that in Iain M. Banks’ Culture novels.
Under this theory, we should prioritize building AI over solving other problems because AGI (or whatever you want to call it: Amodei doesn’t like that term) will be a superior and independent means for solving those problems, exceeding the problem solving capacity of mere humans. Thus, for example, both Eric Schmidt and Bill Gates say that we should build lots of power capacity to fuel AI, even if this has short term repercussions for the climate. In Schmidt’s summation, human beings are not going to hit the climate change targets anyway, “because we’re not organized to do it.” Hence, the better bet is to build out the power infrastructure for AI, to build automated systems that are better able to solve these problems than flawed human social institutions.
The proponents of this theory acknowledge strategic complications. Amodei and Matt Pottinger want the U.S. to get there first, to avoid being propelled into an autocratic AI future with Chinese characteristics. Schmidt and his co-authors fear that getting close to AGI dominance might destabilize international politics without some means of deterrence, precisely because it is so awesomely powerful. So getting to super-powerful AI may be politically hard, but once you do, many insoluble problems will be soluble, perhaps even trivial.
Our account provides a different understanding of large models and problem solving. Specifically, it claims that large models are a social and cultural technology through which human beings can solve problems and coordinate in new and sometimes useful ways. We explain large models as “‘lossy JPEGs’ of the data corpora on which they have been trained,” statistical machines that “sample and generate text and images.” The implication is that they will never be intelligent in the ways that humans, or even bumble-bees are intelligent, but that they may reflect, mediate, compress and remix human intelligence in useful ways. If they become smarter than individual humans it will be in ways that are roughly analogous to how markets are sometimes ‘smarter.’ As Herbert Simon argues in The Sciences of the Artificial, artificial systems can create composites of collective human intelligence, allow for new means of coordination and so on.
The implication of this is that large models will not be a substitute for human problem solving, but an extension of existing collective capabilities, which will also generate their own problems and conflicts, much as markets, bureaucracies, and democracies have. They are not an exit door through which we can escape the human condition, delegating decisions to independent Minds-to-be that are wiser than us. Instead, they are a collective extension of our own minds, founded on the cultural substrates through which we communicate and coordinate.
This has implications for how we ought to think about the prospects of AGI right now. Roose suggests that we do not pay nearly enough attention to the people who believe that AGI is right around the corner. Our piece implies instead that we pay far too much attention and give these people too much leeway. We ought to be listening to other voices.
As Roose himself notes, there is a strong body of opinion that AGI will:
tilt the balance of political and military power toward the nations that control it — and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they’re spending to get there first.
Roose rightly argues that such beliefs reflect a near consensus among the people who work at these companies, but does acknowledge that “some experts” disagree. If you click through his link, you will find a Nature piece reporting on a recent survey of members of the Association for the Advancement of Artificial Intelligence, where 84% of respondents say that the neural net architectures that large models rely on are “insufficient to achieve AGI” on their own. Such views - which are common in my experience talking to Johns Hopkins colleagues who work on these technologies - are not nearly as colorful as speculation about imminent AGI, and do not get nearly as much attention in places like the New York Times, but do suggest that the claims of grand changes real soon are so much whistling past the graveyard.**
Even so, AGI-prepping is reshaping our politics. Wildly ambitious claims for AGI have not only shaped America’s grand strategy, but are plausibly among the justifying reasons for DOGE.
After the announcement of DOGE, but before it properly got going, I talked to someone who was not formally affiliated, but was very definitely DOGE adjacent. I put it to this individual that tearing out the decision making capacities of government would not be good for America’s ability to do things in the world. Their response (paraphrased slightly) was: so what? We’ll have AGI by late 2026. And indeed, one of DOGE’s major ambitions, as described in a new article in WIRED, appears to have been to pull as much government information as possible into a large model that could then provide useful information across the totality of government.
The point - which I don’t think is understood nearly widely enough - is that radical institutional revolutions such as DOGE follow naturally from the AGI-prepper framework. If AGI is right around the corner, we don’t need to have a massive federal government apparatus, organizing funding for science via the National Science Foundation and the National Institutes of Health. After all, in Amodei and Pottinger’s prediction:
By 2027, AI developed by frontier labs will likely be smarter than Nobel Prize winners across most fields of science and engineering. … It will be able to … complete complex tasks that would take people months or years, such as designing new weapons or curing diseases.
Who needs expensive and cumbersome bureaucratic institutions for organizing funding for scientists in a near future where a “country of geniuses [will be] contained in a data center,” ready to solve whatever problems we ask them to? Indeed, if these bottled geniuses are cognitively superior to humans across most or all tasks, why do we need human expertise at all, beyond describing and explaining human wants? From this perspective, most human-based institutions are obsolescing assets that need to be ripped out, and DOGE is only the barest of beginnings.
Of course, you might hold this perspective and think that DOGE was premature. Perhaps DOGE-style reformers should have waited 18 or 24 months until the new systems were in place. But even under the moderate perspective, the basic point stands. The adjustment pains that we are experiencing now thanks to DOGE are just an anticipatory twinge of the radical changes coming in a few years, when we realize that most information workers across most sectors of the economy are economically worthless.
There will be political questions to be answered in this world - I’ve already mentioned the strategic interactions with China. Perhaps we need to buy the unemployed masses off with basic income, as Sam Altman proposes. Perhaps, as per Marc Andreessen, we don’t buy them off, but assume that they will enjoy the benefits of massively cheaper production. Either way, the shift to AGI is coming. Under this account, we don’t have many choices, other than preparing to weather the storm.
If, alternatively, large models are not the near-term harbingers of AGI, but social and cultural technologies, then DOGE style changes are plausibly a horrific mistake. We are lobotomizing government, and ripping out the funding structures that support key forms of basic and applied scientific research without any plausible replacement. The short, medium and long term consequences are likely to be terrible, unless these changes are reversed quickly.
Under this perspective, there is indeed likely to be value in adopting large models, but doing it well will require complex and careful reform. Government is not an obsolescent institution that can largely be replaced by AI awesomeness, but there are ways in which we can use these technologies to make government work better.
Figuring out how to do this will require a lot of deep knowledge about the data that government actually has, what it tells us, and what it does not. It will involve a lot of experimentation, not simply with technology but with the human social institutions that integrate with it. And finally, it will require the development of skills and knowledge across computerized and human-institutional knowledge systems. All of this will be messy, complicated and difficult in ways that sweeping invocations of “geniuses in a data center” are not. And it will be far more difficult in the U.S. case, because not only are institutions being pulled apart, but the alternative centers of technological knowledge and skill in the federal government that might help guide this through are being systematically dismantled (including, most recently, the Pentagon’s Office of Net Assessment).
There may be intermediary positions - I don’t think that there is any logical contradiction in having mildly pro-AGI beliefs, but still wanting institutional back-up in case the predictions don’t pan out. But there is also a broader point. Pretty well every conversation that I read about AGI treats it as a miracle of rare device, a generic means of problem solving that is inherently better than individual and collective human institutions of decision making. I very rarely see specific discussion of what these technologies can, and more importantly, cannot, do. We know that they are brittle in many ways - but these weaknesses don’t get much attention (perhaps on the assumption that self-improving AI will lead to a Singularity-style take-off in which AI pulls itself up by its own bootstraps, eliminating its flaws as it improves itself in a virtuous feedback loop). I regularly see lapses into what seems to me to be magical thinking about magical thinking - Panglossian assumptions that we will soon have sufficiently advanced technology, which can reason its way out of the mess that humanity has gotten itself into.
I am sure that there are plenty of holes to be poked in the alternative conception of large models that Alison, Cosma, James*** and I put forward - all theories are inadequate to the empirics they purport to describe. But what it does, I think, is to provide a prima facie plausible account of the broad limits of these technologies, the ways in which they are not miraculous, nor likely to be. Large scale means of summarizing and remixing cultural information can be incredibly useful things to have. They may also generate large scale social problems, and huge new questions about how to divide gains and minimize costs. All this is liable to be messy and divisive, at best. But that’s the human condition that we are stuck with, and it isn’t likely to be transcended any time soon.
* This is my interpretation of our common argument, which is a short-hand way of saying that my co-authors deserve credit for any good ideas, but are not to blame for any idiocies committed herein.
** To be clear - these are not claims about AGI as such. I suspect there is a variety of beliefs among academic experts about whether or not AGI is plausible in the medium to longish term, and that there are very few AI experts who think that AGI (understood as better-than-human machine intelligence) is ex ante impossible or inconceivable.
*** Again - they should not be held responsible for the arguments in this piece, which they may or may not agree with.
I find the current AI/AGI discussions make a lot more sense if I view them as complaints about having to hire servants. Complaining about 'the help', a favored pastime of the servant-hiring classes going back in perpetuity. Only now the servant-hiring class is lured by fantasies of never having to be bogged down in other people, robotics will replace all those annoying servant-people with their needs & their wants & their foibles & limitations. Perfected machine-servants will fix so many problems, starting with all that money spent on administering non-disclosure agreements. No more worrying about servants spying & gossiping & knowing. The Shakespearean bonus that is firing all those lawyers needed to administer all those NDAs. Above & beyond the more obvious advantages fantasy pseudo-sentient machinery would have over actual people.
The way that AI talk gets venture-capital wallets open has been much discussed & is kinda obvious, but the underlying emotion-logic of servant-keeping has not been & is not.
I think it's incredibly telling how involved some people who could reasonably be called 'private equity vultures' are with the whole DOGE project. To me, this demonstrates that even if some of the people supporting it are motivated by a belief that AGI is imminent, there are others who view it purely as an extractive enterprise. I suppose those beliefs aren't mutually exclusive though.
https://www.thenation.com/article/society/doges-private-equity-playbook/