I am sympathetic to much of what you say in this post, but I was disappointed with the section on the orthogonality thesis. You say

>>The ability to create explanatory knowledge is a binary property: once you have it, you can in principle explain anything that is explicable. Morality is not magically excluded from this process!

Morality may very well be excluded, and there's nothing necessarily magical about that exclusion. It's not at all clear that moral standards even comprise a distinctive body of "explanatory knowledge." If they don't, then morality wouldn't be an appropriate subject of the process you describe. For comparison, consider a gastronomic orthogonality thesis: would we imagine that all AGIs would converge on the exact same food preferences? That they'd enjoy the same kinds of wines, the same toppings on their pizzas, and so on?

No. This is absurd. Which food you like isn’t simply a matter of discovering which foods have intrinsic tastiness properties; it’s a dynamic relation between your preferences and the features of the things you consume. Just so, if agents simply have different moral values, no amount of discovering descriptive facts about the world extraneous to those values would necessarily change what those values were or lead them to converge on having the same values as other agents.

You go on to say:

>>Philosophers and religious gurus and other moral entrepreneurs come up with new explanations; we criticise them, keep the best ones, discard the rest.

But note your use of the term "best." People can and do have different conceptions of what's "best." It's not at all clear why the process you describe would necessarily lead to convergence in moral values. What process of proposing new explanations, criticizing them, keeping the best ones, and discarding the rest would lead to convergence in taste preferences, or favorite colors, or judgments about the best music?

There may very well be no answer to this, because such a question may be fundamentally misguided. It may presuppose an implicit form of realism according to which facts about what’s “best” aren’t entangled with our preferences and always converge with the acquisition of greater knowledge and intelligence. But this is precisely what proponents of the orthogonality thesis are questioning. I don’t see how you’ve shown they’re mistaken at all.

You also say:

>>It’s not a coincidence that science and technology has accelerated at the same time as universal suffrage, the abolition of slavery, global health development, animal rights, and so on.

It probably isn’t a coincidence. But that two things co-occur doesn’t mean that what’s true of one is true of the other. What matters is why they co-occurred. You also say this:

>>There may not be a straight-line relationship between moral and material progress, but they’re both a product of the same cognitive machinery.

This is too underspecified for me to say much about. I do think changes in technology lead to changes in social structures, which prompt shifts in how human societies and institutions are organized, which in turn lead to changes in our moral standards. But the connection between moral and material progress is a difficult question, and most positions on it would be speculative and underdeveloped. It's not at all clear that we know both moral and material progress are products of the same cognitive machinery; it's not even clear yet what cognitive machinery either is the product of, so we're far from being in a position to claim it's the same.

You quote yourself in a review saying:

>>A mind with the ability to create new knowledge will necessarily be a universal explainer, meaning it will converge upon good moral explanations. If it’s more advanced than us, it will be morally superior to us: the trope of a superintelligent AI obsessively converting the universe into paperclips is exactly as silly as it sounds.

I reject the first claim. Even if a mind were a universal explainer, it does not follow that it would "converge upon good moral explanations." This presumes that there is a distinct set of moral explanations to converge on. But I deny that this is the case, and I know of no good arguments that something like this is true. It sounds a bit like you're alluding to some kind of moral realism, though I can't tell. Perhaps you could clarify: are you a moral realist?

Maybe you have arguments for this elsewhere, but I see no good arguments in this post or in this quote for believing that if something is more advanced than us, it'd be morally superior to us.

As for the idea that a superintelligent AI converting the universe into paperclips is "exactly as silly as it sounds": this doesn't sound silly to me at all.
