EA is a philosophy that has captured the minds of tech billionaires and elite university students, but it is morally bankrupt at its core.
EA's most famous adherent, Sam Bankman-Fried, epitomizes its flaws. He used "expected value" thinking to justify fraud at FTX in pursuit of supposedly altruistic ends.
EA philosophers like Toby Ord and Will MacAskill make grandiose claims about "saving lives" based on flimsy evidence and reasoning. They downplay potential harms and unintended consequences.
Organizations like GiveWell, which recommends EA charities, lack transparency about possible negative impacts and uncertainties in their analyses. They should report potential deaths caused alongside "lives saved."
EA thinking is based on flawed "expected value" reasoning that rationalizes unethical actions in the name of doing good. The EA community suffers from groupthink, overconfidence, and a lack of accountability to the people impacted by its actions.
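The failure mode of naive expected-value maximization can be made concrete with a toy gamble (my illustration, not drawn from any EA source): a double-or-nothing bet with a 51% win chance has positive expected value on every round, yet taking it repeatedly all but guarantees losing everything. The numbers and function below are hypothetical, chosen only to show the arithmetic.

```python
# Toy illustration (hypothetical numbers): repeatedly taking a
# positive-EV double-or-nothing bet with a 51% chance of winning.
def repeated_bet(p_win: float = 0.51, rounds: int = 50, stake: float = 1.0):
    # Expected value compounds: each round multiplies it by 2 * p_win (= 1.02).
    expected_value = stake * (2 * p_win) ** rounds
    # But with double-or-nothing, you keep money only by winning EVERY round.
    p_not_ruined = p_win ** rounds
    return expected_value, p_not_ruined

ev, p = repeated_bet()
print(f"expected value after 50 rounds: {ev:.2f}")        # ~2.69x the stake
print(f"chance of not losing everything: {p:.2e}")        # ~2.4e-15
```

The expected value grows without bound as rounds are added, while the probability of avoiding ruin shrinks toward zero. A decision rule that looks only at the first number and ignores the second is exactly the pattern the critique above is pointing at.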
EA has shifted to an even more speculative "longtermist" philosophy focused on far-future scenarios. Because predictions about the far future can never be checked, this shift conveniently insulates its proponents from accountability for being wrong.
The tech billionaires funding EA seem to care more about being "heroes who save humanity" than carefully considering real-world consequences. The philosophers enable this grandiosity.
EA needs more epistemic humility, concern for potential harms, and accountability to those impacted by its actions. Its current philosophy is primitive and dangerous.