I agree that it’s very plausible that machine “genetic evolution” into a superhuman state is possible, and that it’s important not to dismiss the chance of breakthroughs out of hand. In general I’m a fan of the precautionary principle.
But it’s a meaningful, qualitative difference whether you can get there through self-play or whether you need the inoculation to develop the immune response. What I’m attacking is not “there’s no chance of generally superhuman AI” but the “there is no AI fire alarm” argument: that it can become superhuman based solely on its internal machinations, without interacting with the world publicly on human-legible timescales (like Waymos do).
Preparation is fundamentally different if you’re preparing for a threat that will grow increasingly legible in stages (“Okay, now the meteor has a 3% chance to strike in 2027…”) vs. a threat that comes at you all at once with no warning. Much of the AI doomer crowd is pointing out the unique dynamics that come with that second style of threat. Since I don’t believe that sort of threat is plausible, it’s important to say why, even if I don’t think it disproves machines ever being superintelligent.