
> MacAskill must take Lifland’s side here. Even though long-termism and near-termism are often allied, he must think that there are some important questions where they disagree, questions simple enough that the average person might encounter them in their ordinary life.

I think there's a much simpler argument for pushing longtermism that doesn't involve this at all: the default behavior of humanity is so short-term that pushing people to consider long-term issues at all is critical.

For example, AI risk. As I've argued before, many AI-risk skeptics hold that we're decades away from AGI, so we don't need to worry, while many AI-safety researchers worry precisely because we might have as little as a few decades until AGI. Is 30 years "long-term"? For countries, companies, and most people, it's unimaginably far away for planning purposes. So if MacAskill's suggestion that we should care about the long-term future gets people to discuss AI risk, and I think we'd all agree it has, then we're all better off for it.

Ditto climate change, which receives remarkably little action for all the attention it gets. The same goes for pandemic prevention. It's even worse for nuclear-war prevention and food-supply security, which don't even get attention. And to be clear, all of these seem obviously under-resourced with a discount rate of 2%, rather than MacAskill's suggested 0%. I'd argue this remains true for the neglected issues even if we discounted at 5%, where the 30-year future is only worth about a quarter as much as the present - though at that rate the case for economic responses to climate change, like imposing a tax of $500/ton CO2, which I think is probably justified using a more reasonable discount rate, is harmed.
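As a quick sanity check on that arithmetic, here's a minimal sketch computing the 30-year discount factor at the rates mentioned above (the function name is mine, purely for illustration):

```python
def discount_factor(rate: float, years: int) -> float:
    """Value today of $1 received `years` from now, discounted at `rate` per year."""
    return 1.0 / (1.0 + rate) ** years

for rate in (0.00, 0.02, 0.05):
    print(f"rate={rate:.0%}: $1 in 30 years is worth ${discount_factor(rate, 30):.2f} today")

# Output:
# rate=0%: $1 in 30 years is worth $1.00 today
# rate=2%: $1 in 30 years is worth $0.55 today
# rate=5%: $1 in 30 years is worth $0.23 today
```

Even at 5%, benefits 30 years out retain about 23% of their present value, which is why the under-resourcing claim survives the higher discount rate.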
