Friends increasingly ask me what philosophy they should read to help them think about AI. I've come to realize that using LLMs is pushing all of us to examine our philosophical assumptions more closely. Seemingly esoteric questions, like who has moral status, now feel urgent: people want to know whether they should say "please" and "thank you" to Chat!
Below is a rough attempt at laying out which parts of philosophy are most relevant to AI. I've split the list between foundational philosophical concepts for thinking about AI and more applied frameworks for how to build and use it.
Part 1: Philosophical Concepts
What is a mind?
Nagel, What Is It Like to Be a Bat?
Chalmers, Facing Up to the Problem of Consciousness
Putnam, The Nature of Mental States
Searle, Minds, Brains, and Programs
What is agency?
Anscombe, Intention
Dennett, The Intentional Stance
Bratman, Shared Cooperative Activity
Applbaum, Legitimacy (group agents)
Who has moral status?
Kant, The Metaphysics of Morals (indirect duties)
Singer, Animal Liberation
Korsgaard, Fellow Creatures
What are reasons and values?
Hume, An Enquiry Concerning the Principles of Morals
Parfit, On What Matters (Volume II, Part 6)
Mackie, Ethics: Inventing Right and Wrong
Korsgaard, The Sources of Normativity
Part 2: How Should We Build AI?
How might AIs come to implement our values?
Kripke, Wittgenstein on Rules and Private Language