
>> "Come on Scott, you're just not understanding this...for a start, consider the whole post!"

I'm a big fan of your work and don't want to misrepresent you, but I've re-read the post and here is what I see:

The first thirteen paragraphs establish that if AI continues advancing at its current rate, history will re-begin in a way people aren't used to, and it's hard to predict how this will go.

Fourteen ("I am a bit distressed") argues that because of this, you shouldn't trust long arguments about AI risk on Less Wrong.

Fifteen through seventeen claim that since maybe history will re-begin anyway, we should just go ahead with AI. But the argument that history was going to re-begin was based on going ahead with AI (plus a few much weaker arguments like the Ukraine war). If people successfully prevented AI, history wouldn't really re-begin. Or at least you haven't established that there's any reason it should. But also, this argument doesn't even make sense on its own terms. Things could get really crazy, therefore we should barge ahead with a dangerous technology that could kill everyone? Maybe you have an argument here, but you'll need to spell it out in more detail for me to understand it.

Eighteen just says that AI could potentially also have giant positives, which everyone, including Eliezer Yudkowsky and the 100%-doomers, agrees with.

Nineteen, twenty, and twenty-one just sort of make a vague emotional argument that we should do it.

I'm happy to respond to any of your specific arguments if you develop them at more length, but I have trouble seeing them here.

>> "Scott ignores my critical point that this is all happening anyway (he should talk more to people in DC)"

Maybe I am misunderstanding this. Should we not try to prevent global warming, because global warming is happening? If you actually think something is going to destroy the world, you should try really hard to prevent it, even if it does seem to be happening quite a lot and to be hard to prevent.

>> "Does not engage with the notion of historical reasoning (there is only a narrow conception of rationalism in his post)"

If you mean your argument that history has re-begun and so I have to agree to random terrible things, see above.

>> "Does not consider Hayek and the category of Knightian uncertainty"

I think my entire post is about how to handle Knightian uncertainty. If you have a more specific argument about how to handle Knightian uncertainty, I would be interested in seeing it laid out in further detail.

>> "and does not consider the all-critical China argument, among other points"

The only occurrence of the word "China" in your post is "And should we wait, and get a “more Chinese” version of the alignment problem?"

I've definitely discussed this before (see the section "Xi risks" in https://astralcodexten.substack.com/p/why-not-slow-ai-progress). I'm less concerned about it than I was when I wrote that post, because the CHIPS Act seems to have seriously crippled China's AI abilities, and I would be surprised if they can keep up from here. I agree that this is the strongest argument for pushing ahead in the US, but I would like to build the capacity now to potentially slow down US research if it seems like CHIPS has crippled China enough that we don't have to worry about them for a few years. It's possible you have arguments that CHIPS hasn't harmed China that much, or that this isn't the right way to think about things, but this is exactly the kind of argument I would appreciate seeing you present fully instead of gesturing at with one sentence.

>> "Or how about the notion that we can't fix for more safety until we see more of the progress?"

I discussed that argument in the section "Why OpenAI Thinks Their Research Is Good Now" in https://astralcodexten.substack.com/p/openais-planning-for-agi-and-beyond.

I know it's annoying for me to keep linking to thousand-word treatments of each of the sentences in your post, but I think that's my point. These are really complicated issues that many people have thought really hard about - for each sentence in your post, there's a thousand-word treatment on my blog, and a book-length treatment somewhere in the Alignment Forum. You seem aware of this, talking about how you need to harden your heart against any arguments you read on Less Wrong. I think our actual crux is why people should harden their hearts against long, well-explained Less Wrong arguments and accept your single-sentence quips instead of evaluating both on their merits, and I can't really figure out where in your post you explain this, unless it's the part about radical uncertainty, in which case I continue to accuse you of using the Safe Uncertainty Fallacy.

Overall I do believe you have good arguments. But if you were to actually make them instead of gesturing at them, then people could counterargue against them, and I think you would find the counterarguments are pretty strong. I think you're trying to do your usual Bangladeshi train station style of writing here, but this doesn't work when you have to navigate controversial issues, and I think it would be worth doing a very boring, Bangladeshi-train-station-free post where you explain all of your positions in detail: "This is what I think, and here are my arguments for thinking it."

Also, part of what makes me annoyed is that you present some arguments for why it would be difficult to stop - China, etc, whatever, okay - and then act like you've proven that the risk is low! "Existential risk from AI is . . . a distant possibility". I know many smart people who believe something like "Existential risk is really concerning, but we're in a race with China, so we're not sure what to do." I 100% respect those people's opinions and wouldn't accuse them of making any fallacies. This doesn't seem to be what you're doing, unless I'm misunderstanding you.
