
As promised, I read the paper, and I must say that from a moral psychology point of view, I'm not convinced. I think the paper shows quite clearly that some people have a strong anti-AI attitude. This is unsurprising. The paper's importance lies in claiming that this amounts to more than an attitude, that it is, at least partially, a moral conviction.

What indicates that a strong negative attitude has been elevated to a moral one? The authors seem to suggest (at least) three indicators:

  1. The use of moral language

  2. Consequence insensitivity

  3. Domain generality

(Obviously the finding that aversion to AI leads to forgoing its use is an indication of aversion, but not of a moral conviction per se).

The problem with the use of moral language is that, as always, language use is complicated. The words people use to describe things they really don't like are similar whether the dislike is moral or not. Once text goes through tokenizing/lemmatizing, embeddings and so on, much of the context is lost, so I'm not sure that what we're left with is "moral" opposition to AI rather than general opposition.
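To make this concrete, here is a toy sketch (not the paper's actual pipeline; the small lexicon below is made up, standing in for wordlists like the Moral Foundations Dictionary) of how lexicon-style moral-language coding can flag strong dislike just as easily as genuine moral conviction:

```python
import re

# Hypothetical mini-lexicon of "moral" words, for illustration only.
moral_lexicon = {"wrong", "harm", "unfair", "disgusting", "corrupt", "evil"}

def flagged_moral_tokens(text):
    """Lower-case, strip punctuation, and return tokens that hit the lexicon."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return set(tokens) & moral_lexicon

moral_claim = "Using AI this way is wrong and causes real harm."
taste_claim = "Chopped liver is disgusting and tastes wrong to me."

print(flagged_moral_tokens(moral_claim))  # {'wrong', 'harm'}
print(flagged_moral_tokens(taste_claim))  # {'disgusting', 'wrong'}
# Both sentences light up the lexicon, but only one expresses a moral conviction.
```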

How can this be solved? One might look for issues toward which people hold no moral stance but still hold a negative one, and compare the use of moral language in those areas (BTW, the topics used as comparisons in the paper, Covid-19, GMOs and vaccines, might be such topics, thereby providing a counter-argument to the paper's claim!). Obviously finding such issues is not easy, because topics tend to make headlines systematically only once they start being at least partially moralized.

As for the two other indicators, the fear is that once an attitude is strong enough, it will always seem insensitive to consequences, simply because the initial cost is so high. If I wholeheartedly despise chopped liver, I will seem insensitive to factors like chopped liver's price or health benefits, because its awful spoiled-rust-poison taste strongly outweighs anything else.

Domain generality suffers from a similar problem. If I hate something enough, there will be a spillover effect onto anything similar to it. So anything similar to chopped liver - foie gras, stuffed spleen, kidney pie - I will hate as well, and indeed a single latent factor will emerge. But again, this just indicates how much I hate chopped liver, not that I have a moral conviction against it.

So, to summarize, I think the paper provides strong evidence of a powerful, emotional aversion to AI, but not so much of a moral stance against it.

Last but not least - one could ask: how could you EVER tease apart a moral conviction from a strong attitude? This is a good question that I have struggled with myself, and I don't believe I have a good answer. One option, briefly mentioned in the paper but not directly measured or reported, is to focus on the perceived objectivity of a stance as a defining feature of morality. Both defining and measuring perceived moral objectivity is tricky (I spent my entire PhD thesis on it), but I think it would be strong support for any claim of moralization.

Will write something more thorough later, but for some reason many of the commentators on this post seem to think that the paper claims people who think AI is immoral are wrong.

But this is not a philosophy paper, it's a moral psychology paper. The only point is that (some) people hold strong moral convictions masquerading as non-…
