> John Collison: To put numbers on this, you've talked about the potential for a 10% annual economic growth powered by AI. Doesn't that mean that when we talk about AI risk, it's often harms and misuses of AI, but isn't the big AI risk that we slightly misregulated or we slowed down progress, and therefore there's just a lot of human welfare that's missed out on because you don't have enough AI?
Dario's former colleague at OpenAI, Paul Christiano, has a great 2014 blog post "On Progress and Prosperity" that does a good job explaining why I don't believe this: forum.effectivealtruism…
In short, "It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course."
"For example, if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, Robin Hanson points out each atom in our galaxy would need to be about 10^140 times as valuable as modern society."
"So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants--they will live in a world that is "saturated," where progress has run its course and has only very modest further effects."
"I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn't have much effect on very long-term outcomes."
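Hanson's back-of-envelope bound quoted above can be sketched numerically. The parameters below are illustrative assumptions (roughly 2% annual growth, ~5 billion years of solar lifetime, ~10^70 atoms in the galaxy), not the exact figures behind the 10^140 estimate, but any plausible choice yields a similarly absurd per-atom value:

```python
import math

# Illustrative assumptions -- not Hanson's exact inputs:
current_growth = 0.02                   # ~2%/yr world economic growth (assumed)
slowed_growth = current_growth * 0.01   # "1% of its current rate"
sun_remaining_years = 5e9               # ~5 billion years left for the sun (assumed)
horizon = sun_remaining_years * 0.01    # "1% of the remaining lifetime"

# Work in log10 to avoid overflow: total growth factor over the horizon.
log10_growth = horizon * math.log10(1 + slowed_growth)
atoms_in_galaxy_log10 = 70              # ~10^70 atoms in the Milky Way (rough)

log10_value_per_atom = log10_growth - atoms_in_galaxy_log10
print(f"total growth factor ~ 10^{log10_growth:.0f}")
print(f"value per atom ~ 10^{log10_value_per_atom:.0f} times today's economy")
```

Whatever the assumed inputs, the exponent dwarfs the ~10^70 atoms available, which is the point: exponential growth must saturate long before such horizons.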