I can’t comment on Yascha Mounk’s post on effective altruism without being a paid subscriber to his blog, so I’ll cheat the system by posting my angry rant here.
———————————————————
I appreciate the sort-of-attempted defense, but overall I think this article is bad.
Your first sentence claims that effective altruists are "obsessed" with colonizing Mars. I've interacted with EAs near-daily over the past ten years and I've never met any who think that's a good idea. My impression is that EAs have generally argued against the people who think colonizing Mars is an existential risk intervention (possibly just Elon Musk, I don't know if there's anyone else), on the grounds that it's extremely low-value compared to more earthly interventions. I think this reveals a level of unfamiliarity with EA (or willingness to collapse everyone in Silicon Valley together) that makes you the wrong person to write this article.
I feel the same way about the toaster story. I can't say, with certainty, that no EA has ever thought this way - just that I've never met one who did. It's as if someone wrote that his views on progressivism were shaped by a story in which a progressive once refused to return a toaster, on the grounds that white males asking for their property back is an act of colonialist violence. It's what would happen if someone who had never met a liberal in real life tried to imagine what they were like based on the worst rumors they'd heard on the Internet.
Then you...admit this is probably fake, but that you chose to spread it anyway because it "encapsulates" something. I feel like this tactic has aged poorly recently. "Those Haitians, always eating cats - or who knows, maybe they don't eat cats, but the fact that it sounds like something that could happen sure encapsulates the core of Haitians, don't you think?"
I think this ignorance of real effective altruists and unwillingness to look into the philosophy beyond stereotypes serves you badly through the rest of the article:
[Effective altruists] failed to ask basic questions about what makes human beings tick; how well we can predict the impact of our own actions.
I challenge you to look at (for example) the effective altruist research on mosquito nets, where EAs have spent thousands of man-hours investigating whether people actually use them, whether they misuse them, whether people get tired of using them, whether there is compensatory behavior change, whether funding them will make other people fund them less, and what the best ways are to monitor how people use them, et cetera. You can find some of this at givewell.org/internatio…
[Effective altruists] fail to ask basic questions [about] whether we can influence events in the distant future in any meaningful way.
I think effective altruists have published dozens, maybe hundreds of papers on this question. I would ask whether you have ever spent one hour talking about this question with another person. If not, I think maybe you have failed to think about this (by assuming the answer is no), while effective altruists actually think about it quite a lot.
I’m impressed, for example, with effective altruists’ research into funding forecasting engines, including conditional forecasting engines that use wisdom of crowds and Tetlock-style superforecasting techniques to predict whether a certain action can have an effect. Nate Silver's recent book on forecasting and risk spends a couple of chapters on effective altruists' attempts to grapple with this topic, not because Nate is an EA but just because (IMHO) we're doing so much of the really novel work here.
I do agree that “long-termism” is not the most productive framing for some of this work - see my post at forum.effectivealtruism… . I think most of EA has now absorbed this critique and very rarely talks about affecting the far future anymore - not because we’re sure we can’t do it, but because it’s an unnecessary assumption.
[Effective altruism] tacitly assumes that a 22-year old who devotes himself to making a ton of money for purely altruistic reasons will still be guided by a desire to do good thirty or sixty years later
I don’t think we tacitly assume it, I think we’ve done about a dozen studies to quantify dropout rates and decide whether it’s still worth it. You can start here for an introduction to the topic - forum.effectivealtruism… . I would summarize the conclusion as earn-to-give dropout rates ranging from about 15% per five years in the more engaged cohorts to 65% per five years in the least engaged. I think studies like these have helped inspire work to improve the retention rate, like the Giving What We Can pledge; surveys suggest that about 73% of people who have taken their pledge and joined their community continue to be compliant with it.
But I think aside from research like this, one thing EAs consider that your essay doesn't is counterfactuals. If there were some form of charity that nobody ever dropped out of, maybe that would be better than EA. But as far as I can tell nobody else is even trying. It's easy to say "EA might have dropouts, now I've proven it's bad!" It's harder to plan how you should donate and what advice you should give in a world where the dropout rates are whatever they are.
(also, I'm confused how this meshes with your later claim to have worked for McKinsey. You're concerned about people working in lucrative-but-morally-gray industries in order to donate and help the world, because perhaps they won’t donate as much as they think — but you endorse working in those same industries without donating or helping the world? What am I missing?)
Even when [effective altruists] succeed, they can have unintended consequences…In some parts of sub-Saharan Africa, for example, noble efforts to reduce the prevalence of AIDS have devoted so many resources and the attention of so many local doctors on the fight against one deadly disease that mortality from other causes started to skyrocket.
Sorry, this one really makes me angry. Fighting AIDS in Africa has saved about 20 million lives (this was mostly before effective altruists, and I don't think we deserve credit for it, but I think we do similar things, and if you want to lump us in with them I will proudly accept it). It's the midwittiest of midwit moves to say "Yes, but perhaps someone was also harmed, I am very smart". Have you tried to quantify the harm? Can you compare that quantity of harm to twenty million lives saved? Have you checked to see whether everyone involved thought about this really hard before trying to save the 20 million lives, realized it was wildly implausible that their actions caused enough harm to counterbalance that, and pushed forward while also trying their best to minimize collateral damage? I know I'm being a jerk here, but I feel like there really is a dynamic where people who save 20 million lives get mildly praised, but people who smugly assert that perhaps this has unintended consequences (while doing no work to demonstrate or quantify them) make people excited and become bigshot public intellectuals. This is also how I feel about any sentence that includes the words "mosquito net" and "fishing", which (to your great credit) you didn't deploy here.
None of this is a reason to throw the baby out with the bathwater.
I appreciate your grudging concession here. But I think what you recommend - an effective altruism with modesty - is what we already have.
A reconstituted effective altruism must give up on long-termist ambitions that make for fascinating sci-fi but completely fail to guide action of which we can be reasonably confident that it will actually have a positive impact
I think this reinforces my point above. You are absolutely certain - I would guess without ever thinking about it for an hour, or reading anything by Bostrom (whose name you spell “Bostrum”) - that no cause that sounds “sci-fi” to you can possibly be effective. The last person I had this argument with was saying we were arrogant for worrying about sci-fi problems like global pandemics (this was a few months before COVID). I think actual humility involves thinking really hard about things, trying to quantify them, and then going with the results even if they don’t match our very simple intuitions about which causes feel right or not. I am proud of effective altruism’s record on this and will continue to defend it.
(I will say in camaraderie that I also got career advice from Ben Todd which I ended up not taking, but which I still think about. Maybe we should start a club.)