Import AI 375: GPT-2 five years later; decentralized training; new ways of thinking about consciousness and AI
…Are today's AGI obsessives trafficking more in fiction than in fact?…
Welcome to Import AI, a newsletter about AI research. Import AI runs on lattes, ramen, and feedback from readers. If you’d like to support this (and comment on posts!) please subscribe.

SPECIAL EDITION! GPT-2, Five Years On: …A cold-eyed reckoning about that time in 2019 when wild-eyed technologists created a (then) powerful LLM and used it to make some ver…
Mikhail Samin • 3d
> I've found myself increasingly at odds with some of the ideas being thrown around in AI policy circles, like those relating to needing a license to develop AI systems; ones that seek to make it harder and more expensive for people to deploy large-scale open source AI models; shutting down AI development worldwide for some period of time; the creation of net-new government or state-level bureaucracies to create compliance barriers to deployment
Sane policies would indeed be "like" those, but this list doesn't represent any of the ideas accurately and doesn't engage with the justifications for them.
Frontier AI labs are locked in a race; their local incentives push them to continue regardless of the risks. They publicly say they should be regulated, while lobbying against any regulation in private.
As a lead investor of Anthropic puts it (https://twitter.com/liron/status/1656929936639430657), “I’ve not met anyone in AI labs who says the risk [from a large-scale AI experiment] is less than 1% of blowing up the planet”.
Pointing at the complicated processes around nuclear safety to argue that we shouldn't give governments the power to regulate this field seems invalid in this context.
If the CEO and many employees of your company believe there's a 10-90% chance of your product, or your competitors' products, killing everyone on the planet, it seems very reasonable for governments to step in. It's much worse than developing a nuclear bomb in a lab in the center of a populated city.
Stopping frontier general AI training worldwide until we understand it to be safe is different from shutting down all AI development (including beneficial safe narrow AI systems) "for a period of time". Similarly, a sane idea with licenses wouldn't be about all AI applications; it'd be about a licensing mechanism specifically for technologies that the companies themselves believe might kill everyone.
Ideally, right now there should be a lot of effort focusing on helping the governments to have visibility into what's going on in AI, increasing their capability to develop threat models, and developing their capacity to have future regulation be effective (such as with compute governance measures like on-chip licensing mechanisms that'd allow controlling what GPUs can be used for if some uses are deemed existentially unsafe).
Imagine if all the scientists developing nuclear power plants at a lab estimated a 10-90% chance that everyone will die in the coming decades (probably as a result of a power plant they developed), but wanted to race nonetheless, because the closer you are to a working power plant, the more gold it already generates, and others are racing too. We wouldn't find it convincing if a blog post from a lab's cofounder and policy chief argued that it's better for the labs to self-govern and for governments to have no capacity to regulate, impose licenses, or stop any development.
Bernard • 4d
You mentioned the p(doom) debate. I’m concerned that this debate focuses too much on the risk of extinction with AGI, without discussing the risk of extinction without AGI. For a proper risk assessment, that probability should also be estimated. I see the current p(doom) as very high if we make no changes to our current course. We are indeed making changes, but not fast enough. In this framing, AGI lowers the total risk overall, even though AGI itself carries a small extinction risk.
It’s a plausible story to me that we entered a potential extinction event a few hundred years ago when we started the Industrial Revolution. Our capability to affect the world has been expanding much faster than our ability to understand and control the consequences of our changes. If this divergence continues, we will crash. AI, and other new tools, give us the chance to make effective changes at the needed speed, and chart a safe course. The small AGI risk is worthwhile in the crisis we face.
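The comparison this argument rests on can be made explicit with a toy calculation. All of the probabilities below are illustrative assumptions for the sake of the framing, not estimates anyone in the thread has given:

```python
# Toy comparison of total extinction risk on two courses.
# Every number here is a hypothetical assumption, chosen only to
# illustrate the structure of the argument above.

p_doom_without_agi = 0.50  # assumed risk from non-AI causes on our current course
p_doom_from_agi = 0.10     # assumed extinction risk introduced by AGI itself
p_doom_residual = 0.15     # assumed remaining non-AI risk if AGI helps us course-correct

# Without AGI, total risk is just the baseline.
risk_without = p_doom_without_agi

# With AGI, we face its own risk, plus the reduced residual risk
# in the worlds where AGI doesn't kill us.
risk_with = p_doom_from_agi + (1 - p_doom_from_agi) * p_doom_residual

print(f"Total risk without AGI: {risk_without:.3f}")
print(f"Total risk with AGI:    {risk_with:.3f}")
```

Under these made-up numbers the AGI course comes out less risky (0.235 vs 0.50), but the conclusion is entirely driven by the assumed inputs; the point is only that the debate needs estimates on both sides of the comparison.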