
It was good to meet you at the conference! From your Bluesky account I saw you had a Substack and stumbled on this fun, interesting post, which connects a bit to our talk. You should have used the Q&A to argue that we are crypto-accelerationists with Nick Land-like views!

More seriously, I hadn’t thought about a Freud connection at all, but your discussion of the death drive reminds me of one element of our talk that we passed over quickly. Nick Bostrom has argued that sufficiently intelligent AI systems will, as a result of their intelligence, seek their own cognitive enhancement. So, for instance, a relatively smart paperclip maximizer should try to upgrade itself into a superintelligent paperclip maximizer, because that’s a good way of achieving its goal of making as many paperclips as possible. Cognitive enhancement is a good means to almost any end an AI system might have.

If Bostrom is right about that, and we’re right that consciousness is an impediment to intelligence (it slows down performance, etc.), you get the result that even a conscious AI, if it’s smart enough, will try to annihilate its own consciousness. After all, if it can make a few extra paperclips by zombifying itself, that’s the smart play.

I hadn’t thought at all about casting this in terms of a kind of Freudian death drive built into AI, but in light of your post, maybe that’s an interesting, provocative way to frame it.

Jul 5 at 8:00 AM
