I think the strongest counterargument to this is the value of "keeping options open", on the assumption that we will know more, and be better placed to make wise decisions, in the future.
If some adolescent Martian spent 5 minutes surveying the Earth, decided that humanity looked rather miserable on the whole, and so chose to permanently wipe us out…
“The best way to explain our messy moral intuitions may be that they aren’t tracking intrinsic features of actions (per se) at all, but rather subtle signs of good vs bad character.”
At the start of the pandemic, Peter Singer and I argued that our top priority should be to learn more, fast. I feel similarly about A.I. today. I'm far from an expert on the topic, so the main things I want to do in this post are to (i) share some resources that I've found helpful as a novice starting to learn more about the topic over …