I think it’s the opposite - if it is conscious, then it likely can suffer. In fact, I think it’s even more confusing, because for AI the actual experiences might be decoupled from the behavior, unlike in humans, who express the pain they’re feeling (or at least, most of the time). In other words, maybe we end up in some terrible situation where training an AI involves terrible pain, but it never shows in the behavior. I’m not saying that’s true, I’m just saying it’s a possibility, and without a good theory of consciousness we’re flying blind, morally speaking.
Sep 21, 2023 at 6:04 PM