741 Comments

The argument I'd make is that the vast majority of people don't donate to charities that buy mosquito nets, and the vast majority of charitable dollars in the US don't go to those sorts of charities. So clearly, if we judge people's beliefs by how they act on them, most Americans do not support the basic principles of Effective Altruism.

author

I think this is unnecessarily hostile. Most people (including me) haven't done anything in particular to end homelessness this year. You could frame this as "if we judge people's beliefs by how they act, Scott doesn't want to end homelessness". But this isn't my lived experience, and if you gave me some very easy way to do it, like pressing a button that ended all homelessness forever, I would do it. I think a better framing is "Scott wants to end homelessness, but doesn't devote a lot of energy to this problem, because it's not his top priority and humans aren't very agentic/coherent". I agree this places a limit on how strong my desire to end homelessness can be.


I think that's not being entirely fair to Robert Stadler's point. Most people haven't done anything in particular to end homelessness, but the charities you personally donate to reflect your highest priorities charity-wise.

Most people either don't donate money to charity, in which case they probably don't subscribe to the principles of effective altruism, or they give money to very different sorts of causes than GiveWell would be likely to single out as high-impact. This suggests that they either don't share certain foundational assumptions about how you ought to assess the value of a charity, or, more likely, they make their donations without making that sort of assessment in the first place.

I still remember when you wrote your anonymous post supporting the principles of effective altruism on Less Wrong all those years back. My interpretation of that was that it was never intended to convince people that they ought to hold a value of donating to charity in the first place, but that they ought to direct that urge in a different manner than they ordinarily tend to. I think that's essentially the same point Robert is making: most people do have charitable urges, but they don't direct them in a manner consistent with EA principles.

author

If that's Robert's point, then I guess I agree with it.

founding

I recall this 2009 (14-year-old) Sequences post from Eliezer titled "Purchase Fuzzies and Utilons Separately":

https://www.lesswrong.com/posts/3p3CYauiX8oLjmwRF

It argued in favor of being clear to yourself about whether your particular donation was intended to cost-effectively purchase (1) warm fuzzies (in which case, it said, you might, for example, anonymously donate a life-changing amount to someone in dire need), (2) status among your friends (in which case a billionaire could fund some X-Prize), or (3) utilitarian good (in which case you need to actually come up with some kind of model and do the math).

It seems possible that most people are doing some combination of (1) and (2) -- which makes sense because most people aren't utilitarians and instead just follow some kind of common-sense morality (or their local religious teachings) which is not necessarily coherent in this sense.


I'm skeptical that the groups focusing on homelessness actually reduce it. There seems instead to be a big universe of state-supported NGOs whose incentive is for the problem to persist. Homelessness is actually reduced by... YIMBYs.


This isn't how social justice movements work, at all. They are happy to see their goals realised. It is true that such goals would in theory render their existence redundant, but they get around that by setting themselves new goals: for example, gay rights groups have over the last ten years or so all moved over to transgender issues, because in some sense gay rights is a solved problem in most western countries. Likewise, if homelessness were ever solved, the groups would probably move on to people living in substandard housing, or more generally to trying to push the balance of power away from landlords and towards tenants.


The gay rights movement consisted heavily of people who were actually gay; fewer of the activists on homelessness are actually homeless.


In my extremely left locality, I've noticed that over the past decade, the fringe political candidates have occasionally included "formerly unsheltered", or equivalent phrasing, in their laundry list of identities that they think will appeal to the local voter base. So I think it's at least starting to be a thing.

Probably there are practical limits, since it's hard to run a campaign without at least an office, and even in the best of times, politicians sleeping in their office is a "funny because true" joke. And at that point, a staffer would offer a spare room.


Which makes it all the more remarkable that they were able to pivot to a different issue so seamlessly. If anything it should be easier for the anti-homelessness guys.


In defence of TGGP, there are lots of real cases in which SJ and allied groups have made the problems they are "fighting" worse. Anti-nuclear activism has worsened global warming. The alt-right is a big deal because and only because SJ keeps pushing identity politics and calling every minor heretic a Nazi. SJ's frothing hatred of Trump was a big part of why he won the Republican nomination in 2016.

However, I agree with you that these are best modelled as "people in stupid herd mode do stupid things" rather than malice, and also that this is not the central mode of biopolitics among the counterculture/SJ (that being, as you say, that organisations move the goalposts to continue justifying themselves, and relatedly that the adoration of radicals causes radicalisation and prevents lasting compromises).


There does seem to be a large constellation of homelessness-related government contractors in places like San Francisco who provide very expensive mitigation for homelessness while not affecting the overall causes of homelessness, like lack of sufficient housing and the effective banning of SRO hotels over the last couple of decades. Perhaps you want to define these as not part of the movement, but yes, it seems clear that there are NGOs who get paid to manage the homelessness problem rather than make meaningful progress towards "ending" it.

founding

Unfortunately, the social justice movement is near the front of the broad American social trend shifting away from "we need to solve this problem" to "we need to get management on our side". With regard to the homeless, very few SJWs are volunteering for Habitat for Humanity or otherwise actually creating homes; they're mostly insisting that the government should create homes (or make the capitalists create homes).

Which means that, while they may be happy to see their goals realized, their "success" takes the form of delegating that realization to a bureaucracy. And it is not in the nature of bureaucracy to work for its own elimination or redundancy.


What are they supposed to do though? Ordinary people can't meaningfully build homes, only construction companies can do that.

(I would also note that, while EA places an emphasis on "giving what we can", even they realise the value in getting powerful and wealthy individuals onside, and they do campaign to put pressure on those influential individuals. Any campaigning organisation would be mad not to seek power like that.)


I don't think the ineffectiveness of NGOs et al. in solving homelessness is caused by incentives, but rather bedrock leftist principles that discount personal responsibility and innate ability as important causal factors in how people's lives turn out. If people's lives suck, it's caused by "systemic injustice" or capitalism or racism or whatever-ism.


That's certainly part of it, but I think they also often misunderstand how "systemic injustice" actually works - that some resources are actually scarce, and optimal allocation is both computationally nontrivial and worse for everyone when it goes wrong, resulting in situations where seemingly brutal zero-sum competition is actually the best option available, involving no malice toward the losing side.

Thus, the argument for Georgist LVT and UBI. Clear out the worst absentee landlords without crushing genuinely useful capitalists, convert all the winos and ne'er-do-wells into the equivalent of embarrassing offshoots of minor nobility living on a stipend in some out-of-the-way manor or monastery.

Many, hopefully most, will eventually pull themselves together and re-emerge as eccentric but basically productive members of society, which is great; the rest will at least have a fair chance to experience privacy and dignity while maybe creating some weird art. "Need-based" programs where you have to beg to a bureaucrat while filling out forms disclosing every detail of your life... kinda suck on the privacy and dignity front, and aren't even as efficient at covering material needs.


Very many people out there have done nothing to tackle homelessness only because they are afraid of legal repercussions should they enact their preferred solution. When you discuss the desires of "people" you must acknowledge these ones too, or risk being seen as an ivory-tower-dwelling 'intellectual', or otherwise a hypocrite, or otherwise a savage.


Are you talking about murder, or about building apartment blocks illegally, or about something else?

Nov 30, 2023·edited Nov 30, 2023

This is genuinely confusing to me, this bed net obsession. I understand that bed nets have been shown to aid in the prevention of malaria deaths, one problem in the universe of problems. How long do Westerners need to send bed nets to sub-Saharan Africa? In 2100, when the population there is 3.78 billion, will the much-smaller Western population spend their working days primarily to fund bed nets? And only bed nets - what about food? Is it thought that bed nets will never be seen as so patently successful and desirable that Africans will wish to manufacture or purchase them on their own? Is that infantilizing at all? Do EAs actually believe there would be greater merit in collecting all the money donated across the whole spectrum, domestically - from churches to food banks to school fundraisers to arts orgs to homeless drug addicts to medical research to backpacks and school coats for children to Toys for Tots to "battered" women's shelters to wildlife habitat - and sweeping it into bed nets instead? Or is "bed net" just a shorthand for a spectrum of provisions?

How long do bed nets last? Do you need to move on to sending out cans of bug spray so people can re-impregnate their bed nets themselves?

I'm reminded of the annual Summer Fan Drive (for elderly without A/C) in my former city. This has gone on for approaching three decades that I know of. I used to occasionally wonder how the thousands of fans purchased each year hadn't sort of hit critical mass, to the point where maybe we didn't need to buy any more fans. Like, what happened to the fan you gave the elderly person last year? Our population of elderly without A/C was not exactly exploding in tandem with the (by now) thousands and thousands of fans in the community.


Bed nets are a Band-Aid, sure. But Band-Aids are really useful! If you want to put funding towards inventing the dermal regenerator I don't think that's an ignoble goal, but if you want to divert Band-Aid money for that purpose I'll ask to see some napkin math at the very least.


Does that napkin math necessarily begin with axioms "from outside the system", like: maximize, now and forever, the human population?


Necessarily? Speaking for myself I'm probably close to being a radical incrementalist, so... definitely not.


So: the Matt Levine quote below misrepresents EA with "you should give money in ways that maximize the amount of good in the world. You try to evaluate charitable projects based on how many lives they will save ..."

How has that misimpression - [that the good = quantity of human lives; or -goal of saving human lives = the good] - been fostered, I wonder?


...are you asking a subset of "Why is moral philosophy full of overly broad high-confidence statements?" Because I'm pretty sure that's a pre-existing condition, maybe a prerequisite.


Presumably fans wear out over time, and I would also expect that when an elderly person dies, there's a high chance their fan gets thrown out with other non-valuable household belongings. So it doesn't seem unreasonable to me that fans wouldn't accumulate over time and you would need some consistent level of fan donations to keep the current elderly population supplied with fans.


In that case we might take a page of advice from those across the world whom we presume to help. Perhaps we don’t have a monopoly on best practices if such wastefulness is incentivized.

Nov 30, 2023·edited Dec 3, 2023

More broadly - no offense meant to the overheated elderly nor the mosquito-ridden Ugandan - even if money/lives saved were the only salient axis (perhaps the hardest thing for non-EAers to grasp): if after some years you find you have not reduced the need at all, but if anything only increased it, then it seems not churlish but reasonable to wonder whether charity is precisely the correct response to something where the need can *never* be met.

Like, for a wild throwaway idea - what if instead of importing thousands of Chinese-made fans, we directed our charity to building a fan factory in town that would employ the underemployed, so that they might be able to buy Gran a fan?

Or what if we shame others into buying Gran a fan instead of buying bling?

We have treated food aid that way - once and future and forever - for a long time, and unlike with AI or whatnot, we do not entertain some of the future possibilities we have created there. Possibly quite dark. What if the population tips so that there aren't enough of one group to carry another? What will that look like? Will all the suffering averted now be merely "paid forward"?

The penalty for not being allowed to discuss population is that it makes your math a little kindergarten-ish.


I mostly agree. "GiveDirectly" might be more to your taste, too. See this recent post: https://www.slowboring.com/p/cash-transfers-work

Nowadays, I donate mostly to help Ukrainians kill the invading hordes of Putin. Secure borders on my continent are more relevant to me than bed nets that Kenyans do not value enough to buy.


Not an aid expert, but my understanding is that most aid which gets sent to sub-Saharan Africa is in the long term wasted, and there would be better and more useful ways to spend the money, like getting the lead out of turmeric in Bangladesh or changing the culture around outdoor defecation in India, or something like that. But these social interventions are a lot harder to plan and execute than simple resource transfers, so there's a trade-off to be made.


GiveWell *does* spend money to get lead out of turmeric in Bangladesh (see e.g. https://forum.effectivealtruism.org/posts/aFYduhr9pztFCWFpz/preliminary-analysis-of-intervention-to-reduce-lead-exposure), but the evidence suggests that this is actually *less* effective than touted: https://www.astralcodexten.com/p/links-for-september-2023/comment/40811275.

It's not clear what you mean by aid in sub-Saharan Africa being wasted. If this means generic NGOs, it may be true; if it's referring to the effective, targeted charities, then it doesn't seem to be the case, unless one works with a strange definition of "wasted." Many tens of thousands of lives being saved (mostly through preventing many times that number of children from getting malaria) hardly seems like a waste.

GiveWell (and similar organizations) publish extensive analyses which include studies of waste, corruption, etc.


It's kind of lazy charity in a way.

Most of the improvement in human existence has come from for-profit work that mostly makes the lives of the relatively well off better at first. The Industrial Revolution wasn't done as an act of charity.

But "I'm reinvesting my earnings into for profit ventures" doesn't make someone sound cool.

What I find is nobody really takes this stuff seriously. Like, your average bed net buyer is probably also super pro-immigration and believes it creates massive net positive value. If you're Bryan Caplan, paying to fly people from Africa and sneak them across the border to work illegally supposedly creates millions of dollars in value for the cost of thousands. You'd think this would be a self-sustaining ROI that solves the bed net problem by moving people someplace where they aren't so poor they can't afford bed nets.

But he doesn't do that. Nobody does. It's almost like nobody really believes that immigration would cause all that value creation. Or that funneling aid to Africa so they can keep pushing past their natural Malthusian limits isn't actually better than just buying a stock index fund.


All this talk of math - it's as though it's enough to say, "we like math and were good at it in school" - while actually, there is very little math going on that I can see. And certainly no honest future projections.

I would only push back and suggest that a good many of the improvements in human life came from not-for-profit work by what one might loosely term "aristocrats" using their free time, in the run-up to and during the Industrial Revolution - although I suppose often these yielded no immediate product, only deeper understanding of nature, which to me is an enhancement of life, even on beyond zebra where I personally soon hit the limits of understanding.

This is not something either the left or right particularly would like to acknowledge, preferring a simple dualistic definition of work. Many work hard all week, dawn to dusk; others worked hard at the weekend, so to speak, on their "hobbies".


This comment is fundamentally correct; most people don't carefully choose the charities they contribute to based on some sort of cost/benefit ranking. So EA is different from business-as-usual. My humble opinion is that EA has been harmed by what are essentially PR mistakes; what gets into the general consciousness about EA are things that most people do not perceive as charitable. (The most obvious being raising money for a non-profit to study long-term AI risks. Normal people would classify "raising money to hire me and my buddies to study hypothetical solutions to hypothetical problems" as "most likely a grift".) But the majority of money contributed via EA goes to things that are clearly charitable. Someone needs to revise the public face of EA to include only actions which clearly produce QALYs.

Nov 30, 2023·edited Nov 30, 2023

I basically like EA. I went to university with Rob Wiblin. I used to donate a large portion of my income to SCI. "You'll never meet a nicer group of nerds."

I also am basically a progressive. I almost always vote for the leftists, and a couple times when I was in university I even voted far left (for the Australian Greens.)

Recently though I've become disenchanted with both. I dunno, something about any reasonable philosophy pursued dogmatically to its conclusion becoming increasingly ludicrous. Lately you end up with the people who think we should prioritise donating money effectively to those in need deciding to fund ivory-tower AI risk researchers, and the people who think we should eliminate racism deciding it's OK to yell at Jewish people.

I will probably (in the next year or so) get baptised and become a Christian. My long term partner was raised in the Catholic Church, I've been going for the past year or so, and this community is really, really nice.

EDIT - I apologise to any in the AI risk community I have offended with my flippant characterisation. You guys do you! I fully expect to get away with my flippant characterisation of the woke community, given this blog's readership, lol


I wonder how nice the EA communities are.


The ones in Europe (I didn't go to the states) are generally super nice.


I'd imagine it depends on the scope. The "EA communities" I'm in are my uni's group/social-circle (especially of alumni), the EA Forum (which I mostly just read and write posts for), the infrequent local ACX Meetup... and a lot of Discords about AI safety (which I mostly lurk).

Debatably, I'm really *in*-in my uni's group only, in the "in a community" sense meant for "in a neighborhood" but not the sense meant for "registered in a political party". Frequency/depth of interaction is a big one.

(I've been wondering about group houses though...)


Fortunately there is no amount of disenchantment in your soul, and no amount of belief or unbelief in gods, that will make GiveWell or GiveDirectly change how it uses your money. I hope you continue to have a great community and continue helping those most in need in whatever way your reason and conscience guide you.


Thank you!

author
Nov 30, 2023·edited Nov 30, 2023

"Who think we should prioritise donating money effectively to those in need deciding to fund ivory-tower AI risk researchers"

I don't like when people say this as if it should obviously be considered stupid. If you want to devote resources to stopping global warming, then (since all money is fungible), you believe in some sense that "it's more effective to donate money to ivory-tower climatology researchers than to real people in need". You kind of sound like a monster when I frame it that way, but it's just a totally reasonable position!

I'm glad you've finally found a community that doesn't seem dogmatic or ludicrous to you, although I have to admit I can't fully follow your reasoning.


It's not obviously stupid, I don't mean to give that impression. Perhaps it is extremely intelligent and the absolute best thing we can be doing. It's just that it's completely non-obvious how effective it is. I don't believe that anyone has any idea how AI risk will play out. Matt Levine said it in his post today better than I could, so I will quote him (for some reason this is the week where all the blogs I follow have decided to post about effective altruism):

"Some people decided that altruism should be effective: Instead of giving money in ways that make you feel good, you should give money in ways that maximize the amount of good in the world. You try to evaluate charitable projects based on how many lives they will save, and then you give all your money to save the most lives (anywhere in the world) rather than to, say, get your name on your city’s art museum.

Some people in the movement decided to extend the causal chain just a bit: Spending $1 million to buy mosquito nets in impoverished villages might save hundreds of lives, but spending $1 million on salaries for vaccine researchers in rich countries has a 20% chance of saving thousands of lives, so it is more valuable.

You can keep extending the causal chain: Spending $1 million on salaries for artificial intelligence alignment researchers in California has a 1% chance of preventing human extinction at the hands of robots, saving billions of lives — trillions, really, when you count all future humans — so it is more valuable than anything else you could do. I made up that 1% number but, man, that number is going to be made up no matter what. Just make up some non-zero number and you will find that preventing AI extinction risk is the most valuable thing you can do with your money.

Eventually the normal form of “effective altruism” will be paying other effective altruists large salaries to worry about AI in fancy buildings, and will come to resemble the put-your-name-on-an-art-museum form of charity more than the mosquito-nets form of charity."
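
(To make Levine's arithmetic concrete, here is a minimal sketch in Python. The lives-saved figures are invented stand-ins for his "hundreds", "thousands", and "billions"; the 20% and 1% probabilities are the ones from his quote, the 1% admittedly made up.)

    # Toy expected-value comparison per $1 million, following Levine's setup.
    # All specific numbers below are illustrative assumptions, not data.
    scenarios = {
        "mosquito nets": (1.00, 200),           # near-certain, "hundreds" of lives
        "vaccine researchers": (0.20, 2_000),   # a 20% chance of "thousands"
        "AI alignment": (0.01, 8_000_000_000),  # a made-up 1% chance of "billions"
    }
    for name, (p, lives) in scenarios.items():
        print(f"{name}: ~{p * lives:,.0f} expected lives saved")
    # Any non-zero probability times billions swamps the rest -- which is
    # exactly Levine's point about made-up numbers.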

author

I'm still not really sure of your position, but I apologize if I misrepresented it, I'm kind of touchy around that line of discussion these days.


Dude, you gave away a KIDNEY. I think that is incredibly supererogatory, and far be it from me or any of us other dumb internet keyboard monkeys to criticise your moral compass.


Getting rid of the evil kidney reduced his tolerance. It is known that evil tends towards laziness, as idle hands are the devil's workshop, and the idleness required of tolerance correlates with evil. As well, the passionate intensity of the zealot tolerates nothing! Now that his evil is gone his hands cannot be idle nor tolerant of dissenters.

(Of course I'm joking but it felt in the spirit of Scott-posting)


I might be misunderstanding them, but... when I last looked at GiveWell's evaluations of charities like AMF and Deworm the World, they looked not just at the expected effect per dollar, but also at how certain they were in their estimate. In fact, "evidence of effectiveness" is still the first bullet point in https://www.givewell.org/how-we-work/criteria and GiveWell explicitly avoids recommending charities when it lacks the information that would enable it to be confident (https://blog.givewell.org/2009/12/28/celebrated-charities-that-we-dont-recommend/). They also state this doesn't mean the charities listed are necessarily ineffective, just that GiveWell can't say they are effective. AI x-risk work necessarily falls into this latter category. Focusing on it too much risks losing one of the main differences/advantages (IMO) of EA.


I think the problem here is akin to Pascal's wager:

Pascal thought that people should devote resources (i.e. being religious) to avoid going to hell, even if you are a non-believer, because in the event that you are wrong the consequences are unfathomably high.

Many EA people think that you should devote resources (i.e. donate millions) to avoid x-risk from AI, even if you think AI would be benevolent, because in the event that you are wrong the consequences are unfathomably high.

For both cases, the faulty reasoning is that we see the consequences as so unfathomably high that we are more "afraid" of being wrong than seeing whether the risk is real in the first place.

Now, of course I’m sure that you, Scott, would very much argue that AI x-risk is real and that there’s no faulty reasoning there. But the point being many (non-EA) people aren’t convinced of this. And even ignoring AI x-risk, people just don’t trust organizations who ask for handouts for low probability causes like vaccines even if we tell them that the math checks out. Moreover, many people just don’t think in terms of expected value and if something has a 1% chance of working/being true it might as well be zero. To them, the lower the probability of something, the more it’s like gambling — hence why it’s called Pascal’s *wager*.
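
(For concreteness, a minimal sketch of the wager arithmetic being described; the stake and probabilities below are invented for illustration.)

    # Pascal's-wager-style arithmetic: with a big enough stake, the expected
    # value stays large no matter how small the (nonzero) probability gets.
    stake = 8_000_000_000  # e.g. everyone alive, in an extinction scenario
    for p in (0.1, 0.01, 0.001, 1e-6):
        print(f"p = {p:g}: expected lives saved = {p * stake:,.0f}")
    # Even p = 1e-6 still yields an expected 8,000 lives, so "pick any
    # non-zero number" always appears to justify the spending -- the step
    # this comment identifies as faulty.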


The big difference is that Pascal's Wager is claiming that even for *really small* odds it's a good deal; people concerned about AI risk basically all have the risk of human extinction from AI above 10%! Saying "don't literally play Russian Roulette" is well within the 'common sense' bounds of "less likely than not, but very much non-zero odds".


I'll try and clarify my position as much as I can, although I do believe that anything as complex as The Right Way To Live is deeply, deeply personal, is different for each individual, and can't be fully put into words.

I believe all philosophies gesture at the right way to live. They are all Taylor series to a finite number of terms, if you are mathematically inclined.

If I seem unnecessarily harsh on EA, or on AI risk, it's only at the nth term of their Taylor expansion. There are a bunch of terms in there that are just great! Like, many EAs do things like - have good jobs, avoid committing crimes, raise families - and then on top of that they give away a lot of their personal wealth to the global poor! What lovely people!

Same with AI risk. I think of this like climate change mitigation, or a committee to prevent the re-election of Donald Trump. They are trying to Do Something about this potentially very significant (world-ending??) problem. Good for them, say I.

It's just that there's a failure mode associated with taking any reasonable philosophy to its logical conclusion. I alluded to becoming a Christian earlier, so for even-handedness I'll give the unfortunate Christian example of believing so fervently in spreading the love of God to your fellow man that you show up at your fellow man's funeral with a placard that says GOD HATES FAGS.

I think there's something fundamental about this, that "it is the heart that feels God, and not reason." Any reasoned philosophy can take you only so far.

I've also decided that for me personally, charity begins at home and with my family and the people I love. If at some future point in my life they're all taken care of, and I have some money to spare, I might look for helpful ways to use it again. I wrote a long comment about this in your previous post about "principles of reciprocity" which I'll spare everyone by not reproducing here.

I think it would probably be helpful for EA branding if they didn't talk much about doing the Most Good For The Most People which feels very One True Path. We're all just trying to figure it out, man. But hey, if you have some spare cash, and are thinking about doing charitable things with it, and want to apply economic principles, great! Godspeed.


Well, there's no chance that donating to an art museum will save any lives, but even though Levine seems to be making fun of the concept that paying researchers to work on alignment could save lives, a lot of people think it could. If it does then that's pretty different from art museum donations.


Besides, the ultimate problem isn't stopping any particular paperclip maximizer per se, but "getting AIs to do what we actually want them to do" in general, and there's a lot of utility on the table /right now/, whether it's stopping AIs from hallucinating made-up citations (and by doing so producing trustworthy-seeming claims that are in fact pure bullshit), harming individuals by generating fake revenge porn, or autocracies mobilizing bot armies to influence public opinion to excuse their genocides. Even if we take away the existential angle, it's not at all obvious to me that e.g. mosquito nets are more valuable than funding a few hundred AI safety/alignment researchers who are contributing to these more tangible and presently relevant problems while working on the alignment problem more broadly.

Nov 30, 2023·edited Nov 30, 2023

Note that of the three cases you mentioned, only one is actually an issue between an AI user and the AI: the hallucination problem.

The other two are issues between an AI user and another person: the user wants to use their AI for something (image generation or text generation, in your two examples) and the other person thinks that the AI software should be preventing them from using it due to the details of the situation.

Both of your specific examples are things that we generally don't want, but can't really be blocked without giving more generalizable powers that can be used to block more controversial cases, and giving broad "people can't use the AI in the circumstances we think are bad" powers is a far less popular proposal than "prevent the AI from doing damage to humanity in a way no human wanted".

I think this disparity is another reason AI safety is losing ground for "highly effective charity": it's a motte-and-bailey, with "preventing Clippy" as the motte and "preventing people from using AI in ways some social group doesn't like" as the bailey.


But let's imagine a hypothetical GPT-n. Even if it's by itself completely non-agentic, in such a way that its mere existence would be completely safe, then without safeguards that block it from producing outputs disfavored by some set of people (basically the developers, at least indirectly responding to what's socially permissible in the social circles they personally inhabit), one could simply prompt it: "The following is the source code that produces the paperclip maximizer, the hypothetical superintelligent AI that will cause human extinction". GPT-4 isn't powerful enough to produce the source code of a paperclip maximizer, while it is powerful enough to mass-produce social media posts that excuse genocide, but the research problem of determining which outputs it's willing to produce is fundamentally the same one.


There are existing systems for quashing fake citations, revenge porn, and autocratic apologia. Half-baked AI tools have put those at a disadvantage, by making the troublesome content easier to generate. Further R&D might simply be a matter of restoring the effective pre-AI status quo by refining detect-and-isolate tools to let them keep up.


Well said!


For-profit companies seem to have the right incentives to build AIs that do what they want rather than something wanted by no one except the AI. Autocracies producing bot armies are getting something they want and others don't, but I don't see how some non-profit alignment researcher is going to fix that. The way we normally deal with hostile governments is via our own governments, not charities.


For-profit companies have some incentive to produce aligned AI. However, an unaligned AI would produce negative externalities alongside the negative internalities suffered by the company, so while the company's incentives point in the right direction, they are of insufficient magnitude.


The key alignment risk is that as of now we don't know of any way to build AIs that do what we want, so it doesn't matter whether for-profit companies have the right incentives to build AIs that do what they want rather than something wanted by no one except the AI; that's simply not a choice available to them.

But when given a realistic choice between building an AI that's somewhat likely to do roughly similar things to what they want (but might also do something wanted by no one except the AI) and not building such an AI, then for-profit companies seem to have the incentives to choose the former.

And the way some non-profit alignment researcher might fix that is by figuring out a method that would allow that for-profit company to ensure that the AI actually does what they want.


By the standard of "a lot" being undefined, art museums provide gainful employment (saving lives) and it has been said that beauty will save the world. Indeed, having repositories where one can enjoy the beauty of humanity's potential and creations, all across history, has proved edifying in life-giving ways to me many times.

So there is more than a chance that donating to an art museum can save lives, in theoretical ways similar to alignment research.


> It has been said that beauty will save the world.

I don't think I'd heard that before, and even if it has been said, that doesn't make it sensible.


There could reasonably be a significant number of people who were at risk for suicidal depression or other destructive psychological problems, then looked at some art and felt better, so even on a pure lives-saved metric I wouldn't say the benefit from art museums is definitely nil. Seems like a much lower marginal return than malaria at the moment, but it's still on the list somewhere. https://slatestarcodex.com/2014/01/28/wirehead-gods-on-lotus-thrones/


How much AI alignment research has been produced because of a Harry Potter fanfiction?

Art is communication. A lot of it isn't communicating useful information, but that's true of all communication, including research journals, and we don't necessarily know which parts are useful and which parts are not in the moment.


I think that's a possible failure mode, and that EA occasionally (but not to a large degree, especially compared to other groups) falls into it. But it's a possible failure mode for any org (a tech company following the same logic could end up putting its entire budget into DevOps), so it's unreasonable to single out EA for it.


"Eventually the normal form of “effective altruism” will be paying other effective altruists large salaries to worry about AI in fancy buildings"

Not that that would ever happen 😉

https://www.wythamabbey.org/

"Wytham Abbey is a 15th-century manor house in the Oxford countryside, set up as a workshop venue for people, groups, or organizations to gather and lead people through reflecting and working on globally significant problems and puzzles. Here are some examples of questions workshops hosted at Wytham Abbey would be working on:

- How could the world be more resilient to worst-case global catastrophes? What infrastructure already exists and how can it be supported or augmented?

- Are there warning signs for scientific agendas that could become destabilizing for society? What are they? Can we change the incentives of the research ecosystems to make people less likely to pursue them?

- Where will machine learning be in 3 years, what parts of problems will it more easily solve, and how does that affect where we should put effort and resources today?"

https://en.wikipedia.org/wiki/Wytham_Abbey

(Sorry, EA guys, but this is just too funny for an outsider)


You could call me an Effective Altruist by some definitions (I read this blog, I tithe to GiveWell), but the AI stuff has always struck me as more of a vaguely theistic sci fi doomsday cult than actually having any resemblance to reality. As far as I can tell all AI risk research has gotten us is culture war censorship on ChatGPT. I have no reason to believe we're anywhere near AGI and the people who talk a lot about that also buy way more into Roko's Basilisk than is reasonable.


I think there's near-zero overlap between people in existential AI risk research and people arguing for diversity & inclusion issues in current systems like ChatGPT. These are two very different and nearly unrelated issues, commingled because they both mention some of the same keywords.

In my opinion, the only way censorship of ChatGPT relates to AI risk is as an illustrative example. If people demonstrably are unable to solve a relatively straightforward tiny subset of the alignment problem for a relatively simple and limited system - namely, how to ensure that ChatGPT definitely doesn't say a specific subset of things you don't want it to say - then how can one assume that we'll definitely be able to solve the much harder problem of ensuring that some more complex, more powerful future system is totally and eternally (including after self-improvement) aligned with some values?


I think it’s a good thing that we have a lot of those “ivory-tower AI risk researchers”, but it feels weird to categorize [giving money to your in-group so they can do the thing they’ve always wanted to do] as “altruism” in a way that explicitly trades off against helping people that are in trouble right now.


If you've changed your mind to "the ivory-tower lot are way more important", that's fine, but you then can't continue to demand respect for "we used to feed the hungry (before we decided that was too short-term)".

To be fair, I don't think EA *has* done that, but the visible stuff that gets yakked about on social media is the AI risk and not the corporal works of mercy.

https://en.wikipedia.org/wiki/Works_of_mercy


I don't really understand your flippant characterisation of the Australian woke community, since it seems to include the ALP, who currently hold power and whose PM Anthony Albanese has been under fire from the Greens for being too supportive of Israel. (Context added for the benefit of non-Australians: I know you know this).

It feels like the disenchantment is not with the Labor leadership, who support Israel; not with the average left-leaning voter, who would struggle to find Gaza on a map and is likely a lot more concerned about grocery prices; and instead with the narrow slice of the hard left that refuses to admit Hamas are mass murderers in front of a camera.

I have less experience of the AI risk and EA communities other than occasionally reading ACX and 80000 hours, but it feels like the same sort of thing there. There's some percentage of people who do bad things in every community, so the bigger a community gets the more bad people there are in it, and if someone's cherry-picking all the bad then you end up with a very skewed and unrepresentative view. From memory Scott had an article that illustrated this by cherry-picking horrible news articles about dentists somewhere.


Yeah, I'm disenchanted with the Greens. I am in Adam Bandt's electorate actually (for non-Australians: the leader of the Green Party). I wrote to him to ask him (very nicely I might add) to denounce Hamas and didn't receive a response. Oh well. I could see myself voting for the ALP in the future.

Your overall point is entirely fair and well taken. I still basically like most EAs and my family is largely composed of progressives. I’m not going out to buy a MAGA hat tomorrow.


Are MAGA hats even available for sale in Australia?


Populist Objectivism: A is A

Dec 1, 2023·edited Dec 1, 2023

Look, we're a small Anglophone country; we import most of our culture from the States. When Biden was elected that was the cover story on all our news sites, and a bit further down was "Australian COVID lockdowns resume from today"


Forgive the naive question, which isn't intended to be hostile, but… hang on, has the community being really really nice convinced you of the actual existence of the Christian God? Why? And if not, don't you feel bad about joining a community of nice people "selfishly" by lying to them about whether you actually literally believe in something that matters enormously to them?


Not a hostile question at all!

If a community is very nice I would update towards their moral philosophy being basically correct.

Of course, as you imply, Christianity isn't just a moral philosophy. It also involves belief in the Bible literally being written by God, and the transubstantiation of the bread and wine into literally the body and blood of Jesus Christ.

I don't think I will ever literally believe these things. I also don't think you have to! Perhaps not all churches, but at least the church I go to is very welcoming and cares more about adherents being kind, generous and humble than about taking the Bible completely literally. My partner has also said to me it's completely fine if I don't believe. It just doesn't seem to be a huge point of contention.

More broadly I think that all religions gesture at something fundamental and extremely important, and they surround that thing with trappings that are extraneous and less important. I don't think it really matters if a Jew eats pork, but rules like this are just there to remind religious adherents that sacrifice of some things is necessary to bring us closer to God.

I hope I'm not joining selfishly! I don't want to take advantage of the kindness of others, I'd prefer to become as generous as they are.


Ah, do you mean you were already a theist even if you don't (and will continue not to) believe in the *specifics* of the Bible? I suppose that would make more sense.


> people who think we should eliminate racism deciding it's OK to yell at Jewish people.

> I fully expect to get away with my flippant characterisation of the woke community, given this blog's readership, lol

Surprise! You are not getting away with it.

No, for real, are you claiming that it's never OK to yell at Jewish people? Or that Jewish people are inherently immune to being racist? Because that's obviously wrong. And it's curious that, with all the talk about how progressives are drowning in identity politics, it's not them who are making this kind of mistake.


Of course it's okay to yell at Jewish people under appropriate circumstances. But speaking as someone who's never felt any particular affiliation with Israel, I've been fairly astonished lately at how many people on the left are prepared to justify, downplay or deny any atrocity perpetrated against Israel, while rallying against even wholly fabricated misdeeds on its part. As much as a significant contingent of the right seems to mark all Jews by default as evil globalists, a significant portion of the left seems to mark all Jews as evil colonialists.

A few years ago, a friend who maintains much more of a presence in it than I do told me that she felt that the social justice community has some pretty pervasive antisemitism issues. She's not Jewish, while I am, by heredity if not by belief, and at the time I didn't feel like my own experiences particularly bore that out, but now I see much better what she meant.


I think there is an important object-level crux about whether and which atrocities are downplayed/fabricated.

> As much as a significant contingent of the right seems to mark all Jews by default as evil globalists, a significant portion of the left seems to mark all Jews as evil colonialists.

I don't think it's fair. When the left criticises Zionism/Israel it's very careful not to make statements about Jews in general, which isn't remotely what the right does.

A more accurate statement would be to say that a significant contingent on the left marks all people who support the state of Israel in its current form as evil colonialists/imperialists. Which I don't think is remotely the same as antisemitism.

> but now I see much better what she meant.

Could you explain your update here in more detail? Because I kind of updated in the opposite direction during the current events. Right after the Hamas attack, the mainstream left narrative was token empathy for the victims but mostly worries that it would lead to a huge bloodbath and the death of thousands of civilian Palestinians, in particular children. At that point in time I felt that this approach wasn't fair, considering that a lot of Israeli people had just been taken hostage or killed by terrorists. But now... well, their worries seem to have turned out completely true, don't you agree?


>I don't think it's fair. When the left criticises Zionism/Israel it's very careful not to make statements about Jews in general, which isn't remotely what the right does.

So, I was going to say that I at least partly agreed with this, but on reconsideration I don't think I can say that. The left tends to accuse the right of being openly antisemitic, but in reality, the contingent of the right that's openly and publicly antisemitic is a relatively minor fringe. Most of them use what the left identifies, often correctly I think, as antisemitic dogwhistles. "Globalists" is not actually equivalent to "Jews," but if you pay attention, you'll notice that the people railing against "globalists" do seem to exhibit consistent antipathy for Jews. Similarly, "Zionists" is not equivalent to "Jews," but I find that if I admit to being Jewish by heredity, I am frequently treated as a presumed Zionist, even if I point out that I've spent pretty much my whole life feeling doubtful that the establishment of Israel as a state was ever a good idea.

>Could you explain your update here in more detail? Because I kind of updated in the opposite direction during the current events. Right after the Hamas attack, the mainstream left narrative was token empathy for the victims but mostly worries that it would lead to a huge bloodbath and the death of thousands of civilian Palestinians, in particular children. At that point in time I felt that this approach wasn't fair, considering that a lot of Israeli people had just been taken hostage or killed by terrorists. But now... well, their worries seem to have turned out completely true, don't you agree?

Sort of. And on the face of it, this is an attitude I can sympathize with. When the 9/11 attack took place, my own immediate reaction was to feel some degree of dismay over the immediate casualties, but much greater apprehension over the damage our inevitable response was going to cause.

But many, perhaps most, of these same people have shown themselves willing to jump to condemn Israel over purported atrocities which are only alleged by Hamas itself, whose reliability is such that spokespeople have publicly denied responsibility for attacks that the perpetrators have filmed themselves committing. When Hamas alleges war crimes, they leap to condemn Israel. When Israel alleges war crimes, they either attribute this to IDF propaganda, or justify it as deserved in light of Israel's actions. And when third-party countries step in to confirm Israel's reports, or deny those of Hamas, they either go silent or double down on the denial.

In the aftermath of 9/11, I took the side of people who opposed the invasion of Afghanistan, not just Iraq, and participated in rallies and community discussions to that effect. But I never, at the time, observed the sort of reflexive support for the Taliban in that community that I've seen for Hamas in the aftermath of October 7th. There's been a whole lot of downplaying, and in some cases even explicit denial, of the fact that Hamas' charter calls not just for the establishment of a Palestinian state that occupies the entirety of the land now occupied by Israel, but also for the extermination of all Jews worldwide. And I've seen a lot of people defend statements that "anti-Zionism does not equal antisemitism" made by individuals who actually *have* endorsed global Jewish genocide.

I've participated in a community before which balanced condemnation of an attack with opposition to an even more destructive retaliation, but my impression from seeing both up close is that this *really* doesn't look like that did.


There's an interesting distinction. The tiny rightwing fringe of old-style anti-semites who speak of Jews as globalists and 'rootless cosmopolitans' only dislike Israel for the double standard -- the claim is that open-borders liberal Jews don't want white people to have homelands, but meanwhile Jews have a homeland.

Meanwhile the leftists who hate Israel appear to think of Jews as a particularly unpleasant type of white racist colonizer.

These positions, while ugly and virulent, amusingly share a kind of genetic logic: the Ashkenazi are vastly more genetically European than BIPOC are, but are also far less genetically European than Europeans are.

Everyone in the middle (between the shrinking racial anti-semite group and the growing racial anti-white group) are on a different topic entirely.


"Hamas' charter calls not just for the establishment of a Palestinian state that occupies the entirety of the land now occupied by Israel, but the extermination of all Jews worldwide."

Boy, talk about an example of "you don't have to have a position on everything".


I mean, there are lots of things they *don't* have positions on, so the fact that they do have a position on this one is pretty suggestive of it being a significant unifying feature of their ideological framework.


According to Wikipedia this is wrong:

> The new document also states that the group does not seek war with the Jewish people but only against Zionism which it holds responsible for "occupation of Palestine".

> Moreover, in 2019, Hamas official Fathi Hamad made an anti-Semitic exhortation to the Palestinian diaspora to murder Jews everywhere: "You should attack every Jew possible in all the world and kill them".[31][32][33][34] Hamad's rhetoric was condemned by other Palestinians, and Hamas stated that Hamad's "personal" and "emotional" statements don't represent them; later, Hamad backtracked and advocated for "limiting its resistance to the Zionist occupation".[35][36]

https://en.wikipedia.org/wiki/Hamas_Charter


Not wanting to derail, but isn't "not making statements about Jews in general, but making statements that implicate a population that's overwhelmingly Jewish" the kind of thing that when done by the Right is often called a dog whistle?


One might even say that it has a "disproportionally negative effect".


As always, the devil is in the details. Let's consider two statements:

"These globalists, these bankers! They've taken over the media, taken over the democratic party! And they are not stopping before they control everything! Also, isn't it curious that so many of them are Jews? I'm not making any assumptions here, just an observation. Probably just a coincidence, isn't it? But maybe, just maybe there is something here... think about it. Think who you are not allowed to critisize - they have the real power over you. - that's all I'm saying!"

"Imerialism and colonialism are bad, no matter who is doing it. Genocide is horrible, no matter who is doing it. Noone is immune to racism and fashism. And right-wing ethno-nationalist states are especially vulnerable. I'm not saying that Hamas is good - it's a right wing authoritarian dictatorship, an evil in its own right. But palestinian children are not responsible for the actions of their government. And currently it's thousands of palestinian children who are dying in terror bombings, financed by first world governments and I think it's our responsibility to stand against it. Now I'm going to elevate the voices of some jewish people who are also horrified by the actions of Israel in this war and are so often silenced..."

I think just from the general shape of these it's clear why the first statement can be fairly described as an antisemitic dog whistle, while the second can't. In the first, despite all the proclamations of "not making any assumptions", the reader/listener is definitely pushed in the direction of thinking about the "weird coincidence" that so many Jewish people are in power. In the second, they are given examples of Jews who do not support the current Israeli position, cementing the idea that the problem isn't with some particular nationality of people but with bad political actors. Granted, not all examples are this clear-cut, but on average, right-wing narratives tend to skew towards the first and left-wing towards the second.

But even that is not all. There is also the general legacy of right- and left-wing political movements. The fact that the absolute majority of people who are openly antisemitic tend to endorse right-wing political causes and not left-wing ones. The fact that these factors give reasons for right-wingers to make dog whistles at all. Add all this evidence together, and the picture becomes quite clear.


What you wrote earlier was "When the left critisices Zionism/Israel it's very careful not to make statements about Jews in general, which isn't remotely what the right does." I would say that the left-wing response to the 10/7 attacks was not in all cases aligned with your idealized left-wing statement above. There were claims that "we cannot criticize Palestinian resistance," which when said about patently evil attacks on civilians strikes me as racist in the same way as people who defend Derek Chauvin, say. There were claims that there is no such thing as an Israeli civilian, that they're all colonizers, which, aside from being historically incoherent, is collective guilt of exactly the kind you rightly decry against Palestinians. Posters of kidnapped civilians have been torn down; I don't think we would accept the defacing of a George Floyd mural, say. There were out-and-out celebrations of the attacks, which I don't think I even need to justify as anti-Semitic.

Basically, the reason your idealized left-wing statement above *needs* to make all of the explicit signals of anti-anti-Semitism (disclaiming Hamas, bringing on Jewish speakers) is precisely because Israel is a Jewish state and thus it's easy to fall into anti-Semitism. A lot of people on the left seem to think that kind of signaling is unnecessary, but it just isn't, especially by the standards we've set around racism.

I don't really care about what the right does, for the purposes of this discussion. I'm not on the right, I'm on the left, and I care about what my team does. But I also don't even know why you characterize the "globalists" thing as a right-wing speciality; that line was in Adbusters magazine! Jeremy Corbyn got in serious trouble for it and people on the left defended him! There's a real beam-in-one's-eye thing going on here.

Expand full comment
Dec 2, 2023·edited Dec 2, 2023

Honestly, this is more or less the reverse of my perception.

That is, I've spent much of my life being told how antisemitic the right wing is, but when I'm pointed to supposedly core examples of that, they strike me as only mildly or tepidly antisemitic. There are fringe examples, like Stormfront, but I've spent time in right-wing spaces where I wasn't out as being Jewish by heredity, and while I found them uncomfortable to be in in a number of ways, I honestly felt like the accusations of antisemitism were overblown as a way of attacking the credibility of right-wing groups and figures (who, to be clear, I think offered plenty of other legitimate bases for criticism).

In comparison, the recent tenor of left-wing antisemitism strikes me as unapologetic and genuinely scary. The examples I've found without going out of my way to look for exceptionally hostile or bad behavior have skewed much further toward the first of these examples than the second, sometimes beyond the level of the first. The stuff I've been introduced to more recently by people actually going out of their way to point out examples of notably bad behavior has been genuinely chilling.

A couple years ago, I definitely would have agreed that the overwhelming majority of openly antisemitic people were on the right wing, but after my friend discussed the antisemitism in the social justice community, it did lead me to reexamine and recontextualize some things I realized I'd been seeing for decades. But until recently, I would only have described it as a mild undercurrent. More recent events have led me to suspect that I was deeply mistaken about that.

Expand full comment

Given that the state of Israel likes to yell loud and long at my own country for being insufficiently ass-kissing towards them, even I as an Irishwoman think that some/much/lots of the yelling at Jewish people *is* racist and not purely anti-Zionist, if it ever reaches the philosophical consistency of anti-Zionism.

Expand full comment

I assumed “yelling at Jews” was an intentionally over-mild way to refer, for instance, to the college students who are yelling, “Gas the Jews!”

Expand full comment

Ditto. And the people celebrating in the streets on the afternoon of Oct 7. It's been fascinating watching the motte and bailey in action.

Expand full comment

Wow. I wasn't familiar with that. Could you share a link to a source?

Expand full comment

Here is one from Australia, since the OP is from Australia: https://nypost.com/2023/10/10/reprehensible-protestors-chant-gas-the-jews-outside-sydney-opera-house/.

Expand full comment

Well damn. I thought sure I had seen such a sign in a picture from a protest in America, but I have to admit that all I can find now via Google is the one in Sydney.

I do see "By Any Means Necessary" and of course "From the River to the Sea" in America quite a lot, but you can argue that these are not as extreme, though they still go far beyond the mildness suggested by "yelling at Jews".

Expand full comment

Alas! My flippancy has been called to account.

I don't want to turn this into a huge Israel/Palestine debate (I had a long one of those in a previous open thread with an Arab commenter and it was quite productive! I have Jewish heritage and we managed to not scream at each other for a solid several thousand words)

I will confine myself to saying that woke-ism has traditionally held that what constitutes hate speech should be determined by the victim of said speech, but doesn't seem to extend this courtesy to Jews who complain about chanting "from the river to the sea."

Expand full comment

If you're thinking of becoming a Catholic, all I can say is God help you and welcome aboard! 😁😇

Expand full comment

Thank you!

Expand full comment

Great! (Both the question in this post and that you found a nice community at church.) So a large fraction of my charitable giving has been to a church (which I no longer attend) and I like the idea of community building. But how do you compare bed nets to a new roof for the church? There's a term in my charity equation that starts with family and goes to community (because that's where the family lives) and then ever outwards in what are mostly distance-related ripples.

Expand full comment

Yep, 100% agree. Charity begins at home. I think this is the principle the church follows as well - it first makes sure its constituents are taken care of, then raises money for missions overseas to help the global poor.

Expand full comment

Agree with Robert Stadler that step (2) is pretty nontrivial and this should be emphasized more -- given how charitable donations are in fact distributed, it seems that "think about effectiveness" *is* in fact doing significant work and is not just something everyone practices. (Maybe something everyone agrees with in some sense, but I guess you covered this in point 3.)

But I have to point out -- even deBoer's "uncontroversial" summary is *not*, in fact, uncontroversial, and the reason why is right in his own piece!

Because on the one hand he writes:

> Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do.

But on the other hand he later writes:

> Ultimately EA most often functions as a Trojan horse for utilitarianism, a hoary old moral philosophy that has received sustained and damning criticism for centuries.

And that's the thing -- "generating the most human good" *isn't* a tautological statement of what all humans who try to act morally do; it is specifically a consequentialist (arguably specifically utilitarian) goal. A deontologist does *not* try to generate the most human good with their moral action! That's not what they believe moral action is for! So even if we say the rest is trivial and that EA is just utilitarianism or consequentialism, that part is still pretty dang nontrivial!

Expand full comment

"a Trojan horse for utilitarianism" is also just a weird way of characterizing it—do EA people hide a connection to utilitarianism? And isn't Freddie usually above an argument like "don't you know that that philosophy has many critics??" (No chance he subscribes to any philosophies which have come under fire for a long time, is there?)

Expand full comment

This is where he lost my interest completely because he seemed to be saying that utilitarianism is just self-evidently an awful ethical framework and Peter Singer is a terrible person (but at least he’s honest about it).

Expand full comment

He's a good writer sometimes, but he has an unfortunate tendency to take rhetorical or intellectual shortcuts that confirm him in beliefs he already holds.

Expand full comment

Peter Singer literally thinks it's ok to kill babies.

Expand full comment

Kinda funny seeing him and Moldbug make this same statement within a couple weeks of each other. Horseshoe.

Expand full comment

EA has a way of talking people into u-ism that isn't based on having a novel proof.

Expand full comment

"A deontologist does *not* try to generate the most human good with their moral action! That's not what they believe moral action is for!"

To act in accordance with the-way-of-things is to be in harmony; to be in harmony is the greatest good for the person.

Expand full comment

”And I’m a terrible vegetarian. If there’s meat in front of me, I’ll eat it.”

If this happens somewhat regularly, I don’t think that you’re a bad vegetarian as much as not a vegetarian.

On the other hand, this is super common:

” A poll conducted by CNN surveyed 10,000 Americans about their eating habits, and roughly 6% of the respondents self-identified as vegetarians. The researchers then asked individuals to describe their eating habits, and 60% of the "vegetarians" reported having eaten meat within the last 24 hours.

Okay, that could've been a fluke (or just a really, really dumb sample group). Then the U.S. Department of Agriculture conducted a similar study. This time, they telephoned approximately 13,000 Americans, and 3% claimed to be vegetarians. When they followed up a week later, 66% of the self-proclaimed veggie-lovers had eaten meat the day before."

https://www.businessinsider.com/survey-60-of-self-proclaimed-vegetarians-ate-meat-yesterday-2013-6?r=US&IR=T

Expand full comment
author
Nov 30, 2023·edited Nov 30, 2023Author

I agree I'm not a great vegetarian, but in fact I try pretty hard not to eat meat and mostly succeed. I just do this by staying out of situations where meat is in front of me.

I mostly eat at home, so I can just not buy meat, and then it won't be in front of me. This works well enough that I think I eat meat about twice a month on average, compared to almost every day before I started trying not to.

I don't have a strong opinion on whether this should give me vegetarian "cred" or not, but I think calling myself a vegetarian sends a useful signal (that if people are having meals with me, I would prefer they try to have a non-meat option available).

Expand full comment

Is it meat, or animal products to avoid? Some vegetarians won't eat gelatin, or things fried in animal fat, or pies made with lard.

Expand full comment

I actually like Nate Silver's pragmatic almost-vegetarianism - vegetarian except when it creates annoyances or difficulties for himself or others. So he gets to 95-99% of the effect with almost none of the hassle or sense of moral failure.

This strikes me as a sensible approach as long as the vegetarianism isn't religious or otherwise grounded in absolutist ethics.

Expand full comment

I think this approach is in some abstract sense ideal. But I wonder whether in practice it is too hard to make an appropriate decision in every case, rather than to just choose a rule and stick to it. Especially when one side of each decision is tempting (as it is here if you enjoy eating meat).

Expand full comment

It's common enough that people have coined the word "flexitarian" for it. As google says: The flexitarian diet is essentially a flexible alternative to being a vegetarian. So you're still focusing on fruits, veggies, whole grains, legumes and nuts, but you occasionally still enjoy meat.

OTOH, there are practical advantages to labeling yourself a vegetarian even if you make the odd exception. I'm the kind of vegetarian who doesn't mind minor non-veg ingredients. In a religious sense, I would have long since failed. But in most public settings I just round it off to "vegetarian," because if I say something more nuanced it's just more confusing and I'd be more likely to be pushed towards full non-veg dishes.

Expand full comment

"vegetarian except when it creates annoyances or difficulties for himself or others."

Especially with "himself", that seems like it could (pardon the pun) swallow the principle whole.

Expand full comment

From a utilitarian perspective, the question should be 'does the way he practices it significantly reduce the degree to which he contributes to the meat/animal product industry.' There's no special prize for getting to zero, and getting from $100/month down to $25/month is much more valuable than taking the extra effort to eliminate that final $25.
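A minimal sketch of that arithmetic in Python, with entirely hypothetical spend and effort figures (nothing above specifies them), just to show how the value per unit of effort collapses as you approach zero:

```python
# Minimal sketch with invented numbers: assume harm scales linearly with
# dollars spent on animal products, and each further cut takes more effort.
steps = [
    # (monthly spend after this step, additional effort this step takes)
    (100, 0),  # baseline: $100/month, no effort yet
    (25, 2),   # easy substitutions do most of the work
    (5, 6),    # avoiding minor ingredients is much more work
    (0, 20),   # true zero: check every label, refuse every shared meal
]

for (prev_spend, _), (spend, effort) in zip(steps, steps[1:]):
    reduction = prev_spend - spend
    print(f"${prev_spend} -> ${spend}: {reduction} units of harm avoided "
          f"for {effort} units of effort ({reduction / effort:.1f} per unit)")
```

On these made-up figures, the first cut buys 37.5 units of harm avoided per unit of effort, and the last cut buys 0.2.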

Expand full comment

Yeah, this. Going with Pareto, it's likely that the first 80% of the effect takes 20% of the effort.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

(Note: this is mostly a comment about language and identity, not about whether you're "really a vegetarian" or any other bullshit like that. You do you.) I think this is interesting, if only by comparison to the way religious Jews talk about keeping kosher. I have a hard time imagining somebody who ate two ham sandwiches a month saying they keep kosher, they just do it poorly. They might say they try to keep kosher, but struggle with it, if that's the case. Kosher and vegetarianism are both restrictive diets and they're both tied to ideologies and identities, but in one case, kosher, the diet is a part of the ideology that forms the identity (religious Judaism, of whatever stripe), and in the other, the diet is the entirety of the identity/ideology.

It makes perfect sense to say something like "I'm a religious Jew who tries to keep kosher, but I struggle with it and sometimes fail to keep kosher." But I think this is connected to why some folks here think it's weird to say "I'm a vegetarian who struggles to avoid meat." It sounds like saying, "I'm a vegetarian, but I struggle with it and sometimes fail to be a vegetarian," i.e. "I sometimes fail to be who I am." And of course, there are vegetarian communities, and boundary policing around belonging, and resource allocation when there are limited vegetarian options, etc.

Anyway, what you're saying is perfectly clear, and I don't think anybody should have a problem with it, but behavior, ideology and identity are intermingled in complex ways, and how we talk about them impacts how they get thought about, and that's why people are yelling at you about this even though, again, it's not at all hard to understand what you're saying and basically agree with it.

Expand full comment

I think the difference is that the entire purpose of keeping kosher is to follow the rules of being a religious Jew. This is a binary; you succeed or you fail. This is also true of some of the reasons people have to be vegetarian.

In the specific context of being vegetarian with a goal of reducing animal suffering, being 50% vegetarian is exactly half as good as being 100% vegetarian. If you eat meat twice a month, then you only count as 1/15 of a meat eater, and then it makes sense to round yourself off to vegetarian.
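As a quick sketch of that arithmetic (the roughly-one-meat-meal-per-day baseline is a hypothetical figure, not something stated above):

```python
# Linear-harm arithmetic: harm is proportional to meat meals eaten.
# The ~30 meat meals/month baseline for a typical eater is hypothetical.
baseline_meals_per_month = 30
occasional_meals_per_month = 2  # "eat meat twice a month"

fraction = occasional_meals_per_month / baseline_meals_per_month
print(f"{fraction:.3f} of a meat eater")  # 0.067, i.e. about 1/15
```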

Expand full comment

As a semi-relevant factual matter, there was, generations ago, a sizeable cohort of American Jews who kept kosher at home but would go out for, say, Chinese food and eat pork. I'm thinking 1950s-70s. I think this has decreased as Judaism seems to have sorted more into "really mean it" and "aren't observant".

Expand full comment

Absolutely, for a time in my childhood this was how my family observed, in the 1990s. But we would never have said "We keep kosher." We would have said "We keep a kosher home" or "We keep kosher at home." There was even a time when some Jews would have kept a kosher home, but had specific utensils for cooking non-kosher food in those homes, the Hazer (pork) pot. I don't think any of these folks would describe their general observance as "keeping kosher."

That said, even if you disagree with me about the facts of this case, my larger point was that some terms are used to describe overlapping ideology, identity and behavior, and others just describe one or two of the three, and this is part of what is driving the indignation about Scott calling himself a vegetarian.

Expand full comment

I didn't mean to be disagreeing, just adding thoughts.

Expand full comment

Maybe there are just more lizardmen than vegetarians? Either that or US vegetarians are very different from the vegetarians I've known 🤷

Expand full comment

Could just be virtue signaling, but then that's the point...

Expand full comment

I assure you that in the vast majority of US communities, being vegetarian or vegan gives you negative, rather than positive, cred.

Expand full comment

But aren't there subsets of vegetarians? The pure vegetarians, and then the vegetarians who consume dairy only, and the ones who consume eggs and dairy, and the vegetarians who eat fish?

https://www.mindbodygreen.com/articles/types-of-vegetarians

So you could be a vegetarian who eats eggs or fish, and thus respond that you had eaten "meat" the day before.

Looking up the Department of Agriculture survey:

https://ajcn.nutrition.org/article/S0002-9165(22)03364-0/fulltext

"Because the dietary intake patterns of self-defined vegetarians who report eating meat may differ from those of self-defined vegetarians who do not report eating meat, the group was further categorized as “no meat” or “ate meat” on the basis of a consumption of < 10 g/d or ≥ 10 g/d, respectively, of meat, poultry, and seafood averaged over the two 24-h recall days. The 10-g cutoff level was selected because it represents negligible consumption.

...Self-defined vegetarians who reported meat on recall days consumed significantly less meat, red meat, and poultry but more fish than nonvegetarians who reported meat. Nonvegetarians who reported eating no meat, and self-defined vegetarians who did and did not report meat, showed significantly lower consumption of beverages compared with nonvegetarians who ate meat. On the other hand, self-defined vegetarians who consumed no meat reported significantly higher intake of wine."

My conclusions from that?

(1) Wine is indeed a very vegetarian drink*

(2) The meat-eating vegetarians are not necessarily sitting down to a plate of chops or a huge steak

(3) The data is taken from surveys conducted during the 1990s and so vegetarians of today may be more scrupulous about their definition of what they do and don't consider 'vegetarian' diet

*The Logical Vegetarian, from "The Flying Inn" by G.K. Chesterton

You will find me drinking rum,

Like a sailor in a slum,

You will find me drinking beer like a Bavarian

You will find me drinking gin

In the lowest kind of inn

Because I am a rigid Vegetarian.

So I cleared the inn of wine,

And I tried to climb the sign,

And I tried to hail the constable as “Marion.”

But he said I couldn’t speak,

And he bowled me to the Beak

Because I was a Happy Vegetarian.

[Expunged verse because yeah it's anti-Semitic, sorry GKC, no other way to read it]

I am silent in the Club,

I am silent in the pub.,

I am silent on a bally peak in Darien;

For I stuff away for life

Shoving peas in with a knife,

Because I am a rigid Vegetarian.

No more the milk of cows

Shall pollute my private house

Than the milk of the wild mares of the Barbarian

I will stick to port and sherry,

For they are so very, very,

So very, very, very, Vegetarian.

Expand full comment

If you ate eggs or fish yesterday, would you really say you ate meat when answering a poll? I wouldn't.

Expand full comment

I suppose it depends on the level of commitment you feel necessary. The data seems to have been collected from an on-going series of surveys, so I imagine the people who stuck with them got the instructions from the survey takers that "yes, fish and eggs count as meat, tell us what you ate".

They even go into what kinds of vegetables (deep orange, leafy green, etc.), so the survey wanted a lot of detail. I imagine that's why they'd be strict about "if you consumed any animal product, mark that down."

Expand full comment

Given that somebody has already killed an animal, throwing the meat in the trash is, in my opinion, worse than eating it. I would say there is a self-consistent moral position in being vegetarian in the sense of avoiding -causing- the deaths of animals, and I think this is a better position.

I find it absurd that vegans refuse to use leather products, for an example of how this plays out: beef demand, and production, so outstrips leather demand and production that, aside from a couple of odd niches, no animals are killed to produce leather; it's a byproduct of beef production.

(There may or may not be social signaling value in these sorts of prohibitions; suppose you attend a friend's barbecue each year, agreeing to eat meat on these occasions may cause that friend to purchase more meat to accommodate that. And there may or may not be substitution effects; maybe if you don't eat the burger, it will be leftovers, and will replace a meat meal later. But in general I don't think the case against refusing to eat the burger is quite ironclad. In general I think the correct approach is something like "I will eat this, because I cannot unkill any animals by refusing, but please consider accommodating me next time.")

Expand full comment

I believe Theravada Buddhism has rules around vegetarianism that formalize this kind of nuance.

Expand full comment

Buying leather still makes it more profitable to farm cows. If selling the leather makes it more profitable to breed a cow, more cows will be bred.

Expand full comment

Cowhide is less than 1% of the value of a cow even when the market for leather is good; literally the tongue is worth more than the hide. And while the market has varied considerably over time, currently a significant percentage end up in landfills; presently the marginal value of a hide is negative, the cost of getting somebody to haul it away to bury it. The -reason- the marginal value of a hide is negative is the number of cows bred for meat purposes, the expense of processing the hide into something that is actually economically valuable, and the extremely low demand for leather in the first place. You're not going to add any marginal cows in the current conditions; you're just going to reduce the amount of waste going to landfills.

Even when hides aren't being thrown away, the people making money on leather are not, in fact, the ranchers; the hide is only valuable after quite a bit of processing. A few years ago I was looking at making a leather kilt, which would take an entire cowhide; the processed hide would, at the time, have run me ~$600. I could have picked up a raw hide for $6 - which, if you consider the effort involved in stocking and selling it to me, would almost certainly have been a loss to the seller. The profit involved in rawhide is not selling "cows", it is selling service and storage space (that is, you aren't paying to have a cow killed, you are paying to have the meat processor to take the extra steps necessary to store the hide in a usable state, and to deal with the additional logistics involved in getting it to you / having it ready for somebody to pick up).

There is a similar phenomenon going on with wool; the value of wool has plummeted such that a lot of that is also going into landfills. (Sheep ranchers still have to shear most breeds of sheep, as they've been bred to produce far more wool than is healthy for them, and the price of mutton means they're raised for that instead of wool. Buying wool doesn't increase the number of sheep produced - it changes which breeds ranchers will be willing to raise, as they're in the process of shifting to wool-less breeds presently.)

Expand full comment

Where are you getting that 1%-of-the-value stat? My understanding is that it's more than that, or at least that it varies quite a bit based on the type of leather.

Was the hide you could buy for $6 of a grade that would typically be used in leather manufacturing? If the seller is selling you the hide at a loss, then presumably they would still lose more money had they not sold it, meaning the purchase is increasing the marginal value of breeding a cow. Still I don't think this is the typical case.

Do you disagree that purchasing new leather products increases the marginal profitability of breeding a cow to some degree?

If you disagree, I don't think you've really explained exactly how this is the case, unless you're suggesting all leather manufacturers are rooting through landfills. Are you saying that all leather comes from some arrangement where leather sellers offer the exact amount of net value to cow breeders as would trucking it off to a landfill?

Do you not disagree, but you're arguing that the effects are very small?

I'm somewhat confused about what exactly you're arguing. I'm interested in where you're getting your information, because all of my research has brought me different data (not to say the research I've done is necessarily of a very high standard, just to say all the info I've seen has differed from what you're saying regarding the profitability causation)

I'm also interested in what you might have to say regarding different qualities of leather because I think that might be a crux. I think probably the cow hides that are sent to landfills are typically not of a quality that would be suitable for a jacket or a handbag for example.

Expand full comment

On which hides are used, that gets into those niches I referred to earlier, and that gets incredibly complicated: Expensive leather (such as that used for, say, an Italian namebrand handbag) requires hides that don't have blemishes / scarring, such as from barbed wire fencing or branding - these hides are in fact relatively valuable. I'm uncertain of exactly how much they're worth, and can't comment on that.

(Note that certain brands of leather jacket, such as Harley Davidson, explicitly prefer "lower quality" hides; the barbed wire scarring is considered part of their, er, branding? Product? So not all hides that are tanned are the high quality hides.)

So some cows are, in a sense, raised for their hide. However, their meat is still much more valuable, and a cow raised for these purposes is still going to be entering the beef industry. Since supply and demand is a thing, this should push the marginal cow -not- raised for its hide out of the market.

But if this is the case, that a cow raised for its hide removes a cow raised for its meat from the market, then buying nice leather could -improve- the net conditions for cows. Or it could make them worse; it depends on how the two cows were raised. Assuming everything else is equal, however, the cow raised for its hide isn't subject to barbed wire, isn't branded, and is better protected from environmental hazards, than is the cow -not- raised for its hide. (I'm unsure if it's fair to hold all else equal, though - I suspect the easiest way to get a perfect hide is something closer to factory farming than open field ranching.)

However, not all leather is nice leather, and we have all sorts of modern tricks for dealing with low-quality leather (which is a complicated topic, but leather is often split into two or three layers, and even if the top layer is a total loss, the lower levels may still be usable). So regardless of the above, buying many leather products prevents a hide from being sent to a landfill, as opposed to creating the marginal cow.

On the topic of "All leather comes from some arrangement where leather sellers offer the exact amount of net value to cow breeders as would trucking it off to a landfill?" - I'd say, from my experience with ranchers, that this is almost certainly pretty close to the truth, at least when conditions are as they are and hides are being sent to landfills. Some of this comes down to the efficient market hypothesis, but I think in this case the market is likely to be slightly inefficient against the ranchers - that is, almost everybody involved would prefer leather be used even if it was sold at a loss (so long as the loss is slight), even relative to sending it to a landfill. There are a few reasons for this, but largely it comes down to "People generally find wastefulness inherently offensive".

If you absolutely wanted to avoid any possibility of a marginal cow, there's probably room for a leather industry that certifies hides revenue-neutral for ranching. Certainly the status quo of hides being buried in a landfill is wasteful and useless, and can be improved upon.

Expand full comment

If people find wastefulness inherently offensive, they might consider how inefficient and environmentally damaging cattle production is. Maybe that's irrelevant to our conversation though.

In any case, everything I've researched about this subject, in addition to my knowledge of economics, would point to it being unlikely that most leather purchases have no/very little impact on the marginal number of cows. You raise some interesting points but I have no way to verify them, and I'm not sure if your personal knowledge of this is representative of the entire story.

Even if we assumed you're broadly correct, I don't think it's absurd to choose not to support a demand for leather, given that the information on Google, as well as economic intuition, supports the view that leather purchases increase the profitability of breeding and killing cows.

Additionally, and this isn't an argument so much as added context: I don't want to wear the skin of a conscious being who was mistreated and killed for their body parts. I don't think it's absurd to choose not to buy the skin of a tortured conscious being in order to put it on my body. I realize you likely have a different perspective. We're mostly disagreeing about an empirical economic matter, but I hope it's also at least understandable that someone might prefer not to wear the skin of a being whom they believe was wrongly tortured and killed.

Expand full comment

Man, if only 1% of people are strict vegetarians, I can't imagine how rare strict vegans are (I'm one!)

Expand full comment

EA to me is just one aspect of the ongoing global "mindset war." Not everyone sees the world as Westerners do. Their sense of being an individual, embedded in a culture, may be quite different. One issue with the Western mindset, with its focus on individuality and fairness, is that an underlying sense of guilt tends to accrue over time.

The advocate of EA has accepted the guilty feeling but does not want to look deeper at the mindset itself. So they elect to try to align themselves with things that are "good" according to the value system of the mindset. This helps them to perpetuate their mindset and simultaneously alleviate the sense of guilt. In a sense, it's colonialism, because they would like the mindset to occupy more minds globally but are trying to find a way to make this workable.

To me, the deeper question is... Do we actually want a world full of guilty-feeling people obsessed with fairness?

Expand full comment
author
Nov 30, 2023·edited Nov 30, 2023Author

I have a post coming up (by which I mean anytime between next week, next year, and never) that I hope you'll like. My thesis is that in the eternal fight between Nietzschean slave morality and master morality, EA tries to find a compromise where you keep the part of slave morality that involves helping others, but jettison the other parts where you have to feel guilty and hate yourself and be miserable and have no positive life-affirming values. I admit it's not intuitively obvious why EA should be the only philosophy that can do this, but hopefully I can explain it in the post.

Expand full comment

Cool! I look forward to that. I just feel like EA is Western Mindset 1.01, when what we want to be looking at is Western Mindset 2.0.

Expand full comment
author

Do you have any idea how long it took to get us from Western Mindset 0.1 Alpha to Western Mindset 1.0? And you want us to jump all the way to 2.0 without any intermediate patches?

Expand full comment

But what if our current way of perceiving time is just a part of Western Mindset 1.0?!

Expand full comment

I mean, the Angular project pretty much ditched v1 entirely when it switched to v2. This pissed off a lot of people, but Angular post v1 is definitely way better. It's not winning the popularity contest against React, but v1 wasn't gonna do that either.

Expand full comment

Or we could just do what browser vendors do, and redefine what the version numbers mean. After all, the important thing is the feeling of progress we get when we see the numbers go up!

Expand full comment

I'm also very interested to read this. I think you overestimate the likelihood that EA's marginal effect at this juncture is positive (for EA) -- I think the margin is almost entirely about who is latching on. Once you claim morality you definitely get guilt, not necessarily by choice, but because your marginal convert is a net-negative virtue signaler.

See also: Christianity, the transition from atheism to new atheism, anti-racism to wokism, etc.

I disagree with most EA causes and am very interested to find out why most smart people disagree with me. But that's not the point of this -- I think there's a fair argument to be made that EA-the-movement is going to go the way of all those that claim morality.

Expand full comment

Interesting point about the connection between claiming morality and attracting virtue signalers! I'll have to ponder that for a bit.

Expand full comment

<mild snark>

If a Nietzschean overman would hypothetically be a superior philosopher who would come up with a life-affirming set of values, would a (semi-)Nietzschean overmachine hypothetically be a superior philosopher which would come up with a _computation-affirming_ set of values? :-)

</mild snark>

Expand full comment

I don't know, I took the Giving What We Can Pledge, but I don't feel guilty, I have enough experience with neuroticism to have recognized such feelings as pointless. It's ultimately spirituality that made me take that pledge, which makes me a very unusual EA. By taking that pledge, I prove to myself that my spirituality is bigger than money, and I also just believe that manifesting compassion in the world is an important spiritual goal.

My altruistic motivations are positive and not negative.

Expand full comment

Well, fair enough. If that's your experience then that's your experience. For me, I still struggle with EA. Like I have this sense of slightly detached moral posturing going on. I'm doing my good behaviours so I'm okay. Like the deeper challenge is to throw off the whole mindset and to be more fully in life. But that's scary, and the fallback, negotiated position is to continue in this kinda administrating intelligence but then also do the good behaviours.

Expand full comment

Yeah, the final solution to neuroticism is to stop thinking of yourself entirely, your judgement of whether you are ok or not is entirely irrelevant and has no bearing on anything. Leave that judgement to others, if someone says you are in the wrong you have to be able to analyze that to see if you have to course correct, but judging yourself is just scratching a scab.

Have you ever read The Last Psychiatrist?

Expand full comment

I haven't but it sounds interesting.

Expand full comment

I think a world full of guilty people obsessed with fairness would be likely to end up pretty fair, and that the guilt is a small price to pay for that!

Expand full comment

The problem is those guilty people's definition of fairness.

Expand full comment

The same applies to people obsessed with "freedom" from what I've seen.

Expand full comment

Seems to me very different. I think those people and their opposites have mostly the same (negative, freedom-from-interference) definition of freedom, and the disagreement is where the line is drawn. I.e., both assume that the issue at hand is the freedom to own guns/refuse vaccination/etc., and the disagreement is how much and in what circumstances society should inhibit such freedoms.

Expand full comment

No, it's exactly the same. Many freedoms cannot coexist. For example, the freedom to discriminate against minorities infringes upon said minorities' freedom to live and participate in society. Every freedom infringes upon another freedom; even a truly anarchic society can end up providing less freedom than a totalitarian dictatorship. Can you really call yourself free if most of the actions you could potentially take are going to get you killed? The government isn't the only thing that can restrict your freedom. Society, nature, even reality itself... they're all prisons. All we can do is make it a little nicer to live in these prisons.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Your definition of freedom, which seems to be a strange edgy teen's take on the positive definition of freedom, certainly doesn't align with how most people use it when yelling at each other about gun laws or whatnot. But when it comes to debates on political issues - I assume Russel T Pott was contemptuously alluding to American right-wingers against gun laws and vaccines etc. - both sides take freedom to mean freedom from government stopping them from doing something. Edit: to be specific, pro-2A people want the freedom to own guns with minimal if any restrictions from government; anti-2A people want to restrict people's freedom to own guns with various laws - the predominant call is for restrictions on people owning guns and not "freedom to be free from people with guns." It should be clear that they're using the same definition of freedom. Or to use your example, people who want to compel bakers to bake gay cakes don't say they want freedom (because it's nonsensical to them and most everyone else listening); they say that it's an infringement on their rights-based entitlement to be served.

This is not the same when it comes to fairness, most clearly summed up as "equality vs. equity." Namely, fairness to the left now means each demographic group has the same outcomes (e.g., test scores, proportion of people imprisoned, etc.) and, conversely, any disparity in outcomes is unfairness. The right defines fairness as everyone being governed by more or less the same rules.

Expand full comment

I've certainly felt guilty for not living up to altruistic ideals, but more prominent in my day-to-day thoughts is a sense of not-belonging, of alienation. Not sure to what degree that's me being weird vs. WEIRD, but I suppose it makes sense that community building would suffer in an individual-focused culture.

Though the emphasis on the West reminds me of https://slatestarcodex.com/2016/07/25/how-the-west-was-won/

Is effective altruism really Western?

Expand full comment

The concept of Zakat in Islam seems similar, though I guess more institutionalised and I don't know enough about the culture to guess whether it relates to guilt.

I do recall being in the Rabat medina years ago and witnessing a clearly wealthy Saudi, in light blue robes, walking through the streets distributing 5 dirham notes from a huge wad, pursued by literally every beggar and street guy in town, many limping or even crawling.

Expand full comment

I think your argument only really works because you are assuming that you are responding to a person who doesn't do charity work. This may be applicable to a lot of situations you've encountered in practice, but DeBoer's argument repeatedly explicitly references "are shared by literally everyone who sincerely tries to act charitably", rather than "are shared by literally everyone".

I would propose the following model: there are various charity communities, with Effective Altruism being one of them. They each discuss how to do good, and some non-EA charities come up with arguments against EA such as "EAs have an excessive focus on existential risk". These arguments then get disseminated into the broader society, including among people who don't do charity at all, and this is where you notice it because all the charitable people near you work for EA.

Your counter then essentially boils down to "these anti-EA arguments have been adopted by non-charitable people". But... that's a fully-general counterargument against reasoning about how to evaluate big charities? Like assuming most people are non-charitable and information diffuses broadly, any argument about how to run charities would be adopted by non-charitable people.

Expand full comment
author

I think "does charity work" is point 1 of my three point definition.

I think point 2 does distinguish EA from other charity workers. For example, people who try to solve homelessness in the US are great, many of them work much harder and make more sacrifices than I do, but I don't think they've systematically calculated whether homelessness is the most effective cause to work on, and I think if they did systematically calculate it with any level of honesty, they would find that it wasn't. This doesn't mean these people are bad or that they're making the wrong decision, but I do think it means that they're not effective altruists.

Expand full comment

How much of public distaste for effective altruism comes from the notion that effective altruists consider all these other altruists to be "ineffective altruists" and therefore inferior species of altruists?

Expand full comment
author

I think this is just life. Every religion implicitly considers every other religion to be wrong and heretical, every political movement implicitly considers every other political movement to have gotten politics wrong, every scientific theory implicitly considers all opposing scientific theories to be false. I think in liberalism everyone is allowed to disagree and not hold it against each other.

Expand full comment

Sure, every movement implicitly believes that it's more correct than competing movements. Nothing interesting about that in and of itself. But effective altruism takes that implicit assumption and explicitly places it in the title. Catholics, Orthodox and Mormons all think the other guys are wrong, but if one of the denominations were named True Christianity, all the other Christians would hate them considerably more.

Expand full comment

Orthodox is just "True Christianity" in Greek, tbf.

(and Catholic is "all of Christianity" in Latin)

ETA: What I think the difference is for Christianity is that they spent 120 years from Wittenberg to Westphalia shedding so much blood over the question of which is the True Christianity, and no one won, so they ultimately decided that they had to live and let live. Within a couple of centuries, they were putting up memorials on the sites of the battles saying "Freedom of Religion." Other movements don't kill a third of the population of Germany when they have disagreements, so they don't need to build up such clear rules for accepting each other even though each thinks the others are wrong.

Expand full comment

> Orthodox is just "True Christianity" in Greek, tbf.

"[Characterized by] correct belief". There isn't a reference to Christianity. Orthodoxy contrasts with orthopraxy, "correct behavior".

> (and Catholic is "all of Christianity" in Latin)

That's impossible; Latin doesn't use "th". Catholic is also Greek. It means "general". (As in 'shared'.)

Expand full comment

a good way to find the actual original meaning of a word is to google "[word] etymology" :)

for example - I had always thought that "orthodox" meant "correct teaching" due to an unconscious assumption that "dox" was connected to Latin "docere," but TIL it is actually from the unrelated Greek "doxa," meaning "opinion"

Expand full comment

I think that the difference is that most other charitable communities don't think that the other charitable communities are bad, they're just not the ones that they personally concentrate on.

I bet if you asked the anti-homelessness people about, say, cancer research charities, they'd be all in favour, it's just not the thing they have personally chosen to concentrate their time and effort on.

Whereas EA tends to be critical of most other charitable communities, and often uses the sorts of criticisms that many charitable communities would apply to art museum charities.

Expand full comment

ISTM that the difference there is that EA has an acknowledgement of tradeoffs as part of its foundational philosophy, whereas most other charitable communities can more or less ignore that limitation on a day-to-day basis. Making opportunity costs explicit isn't going to win you any friends, but I don't see it as a vice.

Expand full comment

Perhaps that's true, but I don't think most other causes would ever tell another cause that the thing they are working on isn't a good use of their time/money. They just don't interact with them at all. The climate people don't tell the homelessness people to switch goals; they let them do what they do without comment. EA by its nature makes these criticisms of other charity movements instead of being indifferent.

And maybe that's good! Maybe it will make us all more effective! But people won't like it.

Expand full comment

How does EA deal with EAs who come to a different conclusion as to what is optimal? Aren't all EAs except the Alpha-EA not really EAs, but varying degrees of ineffectual?

"They have to try" but how are you saying that the Homelessness Helpers didn't use some calculus in their decision making? Do you vet every EA and make them show their calculations like you do for non-EAs?

Seems more like you have a certain unspoken set of things that an EA must do, for example quantify outcomes in terms of dollars spent per life or suchlike.

Expand full comment

> Every religion implicitly considers every other religion to be wrong and heretical

No, this is more something of the Abrahamics, most Eastern religions take the perennial view, at least most of the time.

Expand full comment

Probably Freddie's criticism would apply to world-spreading religions, like Catholicism, Islam, etc., as well as big-tent political movements like "liberals," "conservatives," etc. Narrowly focused movements, like the Coalition to End Homelessness, or even "The AI Safety Movement," are less deserving of criticism by his metric. Any utopian movement is suspect, per that one Simpsons episode.

I was surprised that post-Altpocalypse (my cute term for the drama-filled weekend at OpenAI), everybody kept referring to the events as mischief caused by the EA community. In my head, it's just AI Safety, not EA. So perhaps social media is using the most offensive label for the opposition to say, "me no likey this."

I mentioned "LessWrong" to someone who knew nothing about it, and they rolled their eyes. So maybe this is just skepticism at utopianism and especially skepticism towards people who have unironical utopianism. This same person would roll their eyes at someone wearing a cross on their neck.

Expand full comment

In my experience, not that much. The knee-jerk rejections of EA tend to cite suspicion of grift.

Caution around people who claim a moral imperative isn’t a bad heuristic. Especially when those people are asking for money!

But also, accusing other people of grift is easy money/prestige. The world-weary cynic gets views and follows.

Expand full comment

But homelessness is a more local issue than the EA-preferred alternatives like global poverty, and DeBoer explicitly objects to the EA tendency to weight everyone equally. You don't exactly do deontologist calculations (the closest person doing deontologist calculations on this topic would be Benjamin Ross Hoffman, and he seems to have come away thinking that Effective Altruism is bad as a result); you insist on doing utilitarian calculations, but DeBoer explicitly opposes utilitarianism in his post.

Expand full comment
author

I'm not really sure what you're arguing against. Yes, I think EA focuses on everyone equally and is utilitarian. How does that contradict my point that it forms an easily-defined cluster which isn't the same as a general "yeah, I want to do good"?

Expand full comment

Part of his critique is that if you took utilitarianism seriously, you'd focus on something other than the relatively uncontroversial global health stuff that EAs center on, and indeed a lot of core EA does focus on these other things.

Benjamin Ross Hoffman makes a more detailed and better argued version of this than DeBoer or I can make in this post:

http://benjaminrosshoffman.com/effective-altruism-is-self-recommending/

Expand full comment

Don't have time to look through it currently, but is it in this post that one could find the "deontologist calculations" you mentioned above?

Expand full comment

Yes. He investigates which principles Effective Altruism claims to derive its legitimacy from, and which incentive structures/feedback loops are in place, and how Effective Altruism actually acts in practice, and finds that EA is centered around putting ever-increasing amounts of trust in the EA elites, often with confusing marketing, rather than simply deriving its power from doing good.

Expand full comment

A potential crux: Sometimes Zack says that you (or Eliezer? Or Anna? Not super clear) generally feel like it's fine (not dishonest) for EA/Rationalism to use standard marketing best practices. But I think a lot of standard marketing methods may be a generalized version of the Trojan horse phenomenon that DeBoer complains about. So it's possible that the core thing that is complained about is something you mentally round off to "ah, but that's just marketing, let's focus on the other parts".

Expand full comment

It's true that the opening of "What is effective altruism?" doesn't include the main assumptions in its definition. But I don't think he is being fair when he says:

"Sufficiently confused, you naturally turn to the specifics, which are the actual program. But quickly you discover that those specifics are a series of tendentious perspectives on old questions, frequently expressed in needlessly-abstruse vocabulary and often derived from questionable philosophical reasoning that seems to delight in obscurity and novelty; the simplicity of the overall goal of the project is matched with a notoriously obscure (indeed, obscurantist) set of approaches to tackling that goal."

Section "What principles unite effective altruism?" directly and in plain words describe main "wierd" assumptions like prioritisation and impartiality

Expand full comment

But as others have pointed out, if De Boer objects to this utilitarian orientation of EA, that gives the lie to the idea that EA is just trying to do what everyone else is trying to do. He agrees that EA is different, he just doesn't like the differences and wants to paint them as crankishness. But of course he can't say "they believe kooky things like a life in Rwanda is worth as much as a life in Brooklyn," so he focuses on the AI alignment stuff.

Expand full comment

He is arguing that Effective Altruism is a Trojan horse for utilitarianism, hiding the counterintuitive implications of utilitarianism and marketing them in a less-offensive form.

Expand full comment

Yes, I saw he used that phrase, but I don't think that's apropos since EA adherents aren't "hiding" anything; they are actually extremely verbose about their interest in utilitarianism, they love to talk about it at length, just as they love to talk openly about whether fighting future threats may actually save more lives than fighting present ones. Anyway, doesn't it seem peevish to complain that people are only saving Rwandan lives as a gambit to advance their utilitarian agenda?

Expand full comment

DeBoer explicitly gives the Center For Effective Altruism as an example of a case where they are hiding it.

It's true that it's a disagreeable approach to complain about the Trojan horse element. A more agreeable approach might be to suggest that Effective Altruism can be more open-minded about alternate approaches that don't fit as well into utilitarianism. I don't think you'd prefer the more agreeable approach, though, so I think you're just opposed to criticizing Effective Altruism for being utilitarian while marketing itself as common-sense.

Expand full comment

Are people who have looked at 10 causes and decided that homelessness is the best of the 10 considered EAs? Similarly, if I don't have a particular care about AI or malaria nets (or GiveWell's top 10), but I spend many hours evaluating the rest of the universe of charities, am I an EA?

I think that this issue may be where a lot of the controversy comes in. If EA is a sliding scale of "how much effort did you put into determining your charity, where a little effort is a bad EA and a lot is a good EA, but it's all EA," it becomes more palatable than "if you didn't use GiveWell, you're not giving capital-E Effectively, notwithstanding however much effort you put into giving effectively."

Expand full comment

"I think point 2 does distinguish EA from other charity workers".

This is wrong in the way that so much of rationalism is wrong, where the new group reinvents the wheel in a state of mild ignorance, complete with new terms that basically isolate it from a lot of pre-existing work, in a way that's honestly only become possible again through internet rathole communities.

EA is absolutely not distinct for practicing charity optimization. Fortunately, this is actually verifiable.

These won't be new to you, probably, but I would urge you to consider at least the following two things:

1. The World Bank has a history of self-criticism that is completely erased from this discussion.

https://www.amazon.com/Elusive-Quest-Growth-Economists-Misadventures/dp/0262550423

https://www.elibrary.imf.org/view/journals/022/0031/002/article-A001-en.xml (My Aunt wrote this thing in the goddamned 90s. I have mixed to negative feelings about the World Bank, but this was *table discussion* decades before EA. She's not unique. You can find a lot of this if you look)

2. Charity Navigator existed and was quite popular before you (Scott || EA) became big.

It's *profoundly* different to say "we came along and did it better" than it is to say "we are distinct for doing it".

Expand full comment

Charity Navigator existed and was popular way before EA, when they evaluated overhead ratios and not effectiveness. While in some sense that is "a metric", I'm not sure if Charity Navigator themselves would define it as an effectiveness metric.

Expand full comment

It was actually a multidimensional scale most of which was concerned with operational quality, yes. That is an effectiveness metric in the sense that, for two charities doing the same work, the better run one will be more effective. It was also one iteration in a process.

QALYs are *not* the sole or the ultimate measure of effectiveness. If there is such a thing as "little e little a" effective altruism, it is generally concerned with improving charity, and is not objectionable to most people, and is an attractive idea.

Effective Altruism is to effective altruism as Libertarian is to libertarian: a lot of people are on board with the broad idea, but then you learn about their five point plan to fund toll roads with machine gun sales or whatever and you realize these people are the extreme, weird, idiosyncratic version of the thing you like in principle.

Freddie is right: That which is commendable isn’t particular to EA and that which is particular to EA isn’t commendable.

Expand full comment

As a nitpick, GiveWell doesn't use QALYs as its only metric, since it also considers things like "shovel readiness," strength of evidence, and neglectedness.

In fact, something like "basically all effectiveness metrics are basically the same" is a supremely non-EA attitude. In EA, you hear debates all the time about the differences between an approach that maximizes population vs. one that increases average quality of life, or one that works directly vs. one that works via flow-through effects.

You can call these weird and idiosyncratic, but if your attitude is "well, I can call them both effectiveness and ignore that one is about inputs like operational efficiency while the other is about outputs like impact and number of lives saved," that in and of itself is weird! It's like saying, "I buy my child the cheapest food and make sure the food they eat matches the label. That's a type of caring about childrearing, and it's exactly equivalent to seeing whether the child grows up happy and healthy, and you shouldn't point out that those are different." This would be considered crazy talk, because the way you draw boxes around actions does not change the natural groupings of those actions, or the actual effects of the actions themselves.
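To make the inputs-vs-outputs distinction concrete, here's a minimal sketch with invented numbers (not any real evaluator's methodology):

```python
# Contrast an input metric (overhead ratio) with an output metric
# (cost per life saved). All figures are invented for illustration.
charities = {
    # name: (overhead_fraction, dollars_spent, lives_saved)
    "LeanButIneffective": (0.05, 1_000_000, 10),
    "HighOverheadButEffective": (0.30, 1_000_000, 150),
}

for name, (overhead, spent, lives) in charities.items():
    print(f"{name}: overhead {overhead:.0%}, "
          f"${spent / lives:,.0f} per life saved")
```

Ranked by the input metric, the first charity looks better; ranked by the output metric, it costs about fifteen times as much per life saved.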

In fact it's profoundly weird to go "all of EA basically doesn't count, because another charity evaluator came out a mere 7 years before the one EAs use".

Oh yeah: say I'd like to start doing sabermetrics on baseball, so I go ahead with statistics and derive a bunch of facts, like whether bunting is good and so on.

"Are you aware that 3 weeks before you decided to do that, someone counted the number of home runs per game?"

"Not really."

"A ha! That means you aren't original and the number of victories that your baseball team won are null and void!!!"

"I didn't think that really mattered, I just think about baseball this way. Also, I don't think these things are equivalen-"

"Neeerrrrrrrrrd they both have a number in them that's identical and if you say otherwise you're running damage control".

Expand full comment

Copy-pasting from a comment I left on Freddie's post:

'You argue that no one actually disagrees with the basic premise of EA (that we should try to ensure our charitable donations go to high-impact charities and not elsewhere).

Counterpoint: this guy wrote an article (https://www.honest-broker.com/p/why-i-ran-away-from-philosophy-because) criticising Bankman-Fried, EA and utilitarianism more generally. He sums up his position on how to be good at the end of the article:

"2. We rely too much on numerical measures and reducing things to formulas—the most valuable things in human life resist quantification.

3. I’m referring to core human values—such as love, compassion, forgiveness, trust, kindness, fidelity, decency, hope, etc. These are practices not arguments, and hence require no appeal to a larger context. Anyone who claims they require arguments (for falling in love or acting compassionately, for example) should be treated with extreme caution.

4. Above all, beware of people who won’t do a good deed until they have calculated the long-term consequences and expect certain desired results in return. Maybe you can do a business deal with them (or maybe not), but never get into a close personal relationship with them.

5. Gratuitous actions of generosity and kindness are best done without calculating rewards or consequences (which, by the way, is the reason I’ve celebrated gift giving and the bonds of trust and love it creates in my writings)."

In other words, good intentions and positive vibes are the only things that matter when it comes to doing the right thing, and there's something intrinsically suspect about someone who thinks that policies should pass a cost-benefit analysis. 750 people liked the article.

(I left some comments criticising his arguments and stylistic touches. He deleted the comments and banned me from commenting. Freddie: I do appreciate your thick skin.)

So no, I don't think everyone already basically accepts the principles underpinning EA. Not when this guy is arguing that you do good by looking in your heart and being compassionate, and there's something majorly sus about someone who actually wants to quantify how much good one charity is doing relative to another.

You can see the same essential dynamic play out whenever (and I'm sure you've had this experience personally on plenty of occasions) someone proposes a policy ostensibly intended to combat rape/racism/the spread of Covid/child porn/drug overdoses/whatever, you argue that the policy will have no impact on the problem in question, and someone immediately retorts "what, so you're saying rape/racism/Covid etc. isn't a problem?" This exact exchange has played out for me so many times that I have to assume there's a significant proportion of the population who really do believe that the intentions behind a policy or action are all that matter, and how effective that policy or action is at achieving its stated goal is just the fine print. This is the whole reason the politician's fallacy (1. We must do something 2. X is something 3. Therefore, we must do X) is so seductive.'

Expand full comment

I was surprised by Ted Gioia's piece, not least given his academic credentials. It was much more about feels than thought. I guess that's why he banned you. And why I instinctively signed off my critical comment by saying something warm and nice, which he later liked. This isn't meant to sound flippant. I'd stake money on there being a big personality type effect in the pro-anti demarcation re EA and utilitarianism in general.

Expand full comment

I don't think anyone that thin-skinned has any business being a writer, but that's just me.

Expand full comment

I read a Washington Post article in favour of a $15 minimum wage, and they explicitly mentioned earned income tax credits, generally concluding that the minimum wage just has better feel-good aesthetics.

If effective altruism didn't exist, we would need to invent it immediately.

Expand full comment

It existed at the time of the WP article, so I guess the problem is that it hasn't taken over everything....

Expand full comment

A great EA effort might be to find a PhD economics student to do a thesis directly comparing the benefits to minimum-wage earners of a raised minimum wage vs. a tax credit.

I believe in a future with the NYT headline: "A $15 minimum wage instead of an earned income tax credit is like taking $3 billion from low-income workers over the next decade."

Expand full comment

1. I think you vastly underrate non-EA charity. There are legions of non-EAs who 1. donate significantly and consistently to charity or make it their life's work, 2. make good-faith attempts to do so as well as they can (even if according to, e.g., scripture or trusted authority rather than rationalist calculation), and 3. actually follow through. This is common in most parts of the world, and the fact that it's common is at the heart of DeBoer's objection. Maybe it's not common where you live or in your general circles. I guess you're not likely to see troupes of monks in the Bay Area. I don't think in the US you get a payroll form listing facts about various charities, alongside your retirement account options, letting you choose which ones automatically receive money each month.

2. You gloss over the actually unusual differences between EA and non-EA in this essay. In point 2 you blithely assume consequentialism is the best approach to determining effectiveness. You offhandedly say that non-x-risk and x-risk charitable efforts are actually really similar, with "a lot of assumptions". It's precisely EA's applying a radical form of consequentialism with an unusual set of assumptions that is being criticised, yet you just skim past those assumptions and fail to address the issue.

Expand full comment
author
Nov 30, 2023·edited Nov 30, 2023

1. I agree with everything in your point 1. Those people are great and I love them, but they're not EAs, because, as you mention, they do it according to scripture or authority and not rationalist calculation. I'm not trying to argue only EAs do charity! I'm trying to argue that EAs are a group defined by doing charity according to rationalist calculation.

2. I don't think I'm blithely assuming it - this post isn't an attempt to justify EA, just to define it. It may very well be that consequentialism is wrong, but the people who do that particular wrong thing are EAs, and the people who don't aren't, and that's a crisp and meaningful distinction into two different groups.

Expand full comment

"I'm trying to argue that EAs are a group defined by doing charity according to rationalist calculation."

I feel like I'm kicking a puppy here, because Scott is doing his best to offer an apologia for the movement in response to both just and unjust criticism, and it's very easy for me to cherry-pick the likes of the following, but, um, well - rationalist calculation may not be the lodestar and guiding light it should be in all cases.

From That Sequoia Capital Article:

https://web.archive.org/web/20221027180943/https://www.sequoiacap.com/article/sam-bankman-fried-spotlight/

"Not long before interning at Jane Street, SBF had a meeting with Will MacAskill, a young Oxford-educated philosopher who was then just completing his PhD. Over lunch at the Au Bon Pain outside Harvard Square, MacAskill laid out the principles of effective altruism (EA). The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.

EA traces its roots to philosopher Peter Singer, who reasons from the utilitarian point of view that the purpose of life is to maximize the well-being of others. Singer, in his eighth decade, may well be the most-read living philosopher. In the 1970s, Singer almost single-handedly created the animal rights movement, popularizing veganism as an ethical solution to the moral horror of meat. Today he’s best known for the drowning-child thought experiment. (What would you do if you came across a young child drowning in a pond?) Singer states the obvious—and then universalizes the underlying principle: “Few could stand by and watch a child drown; many can ignore the avoidable deaths of children in Africa or India. The question, however, is not what we usually do, but what we ought to do.” In a nutshell, Singer argues that it’s a moral imperative of the world’s well-off to give as much as possible—10, 20, even 50 percent of all income—to better the lives of the world’s poor.

MacAskill’s contribution is to combine Singer’s moral logic with the logic of finance and investment. One not only has an obligation to give a significant percentage of income away, MacAskill argues, but to give it away as efficiently as possible. And, since every charity claiming to save lives has a budget, they can all be ranked by cost-effectiveness. So, how much does it cost for a charity to save a single life? The data says that controlling the spread of malaria and worms has the biggest bang for the buck, with a life saved per every $2,000 invested. Effective altruism prioritizes this low-hanging fruit—these are the drowning children we’re morally obligated to save first.

...At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death.

MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth.

SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.

...In 2017, everything was going great for SBF. ...He was giving away 50 percent of his income to his preferred charities, with the biggest donations going to the Centre for Effective Altruism and 80,000 Hours. Both charities focus on building the earn-to-give idea into a movement. (And both had been founded by Will MacAskill a few years before.) He had good friends, mostly fellow EAs.

...As a schoolboy, the hedonic calculus of utilitarianism had him trying to maximize the utility function (measured in “utils,” of course) for abortion. During his teenage gaming years, his mathematical abilities allowed him to sharpen his tactics—and win. And, of course, every trade SBF ever made at Jane was the subject of a risk/reward calculation. All of it boiled down to expected value. The formula is fairly simple. If the amount won multiplied by the probability of winning a bet is greater than the amount lost multiplied by the probability of losing a bet, then you go for it—irrespective of units. Utils, euros, dollars were all subject to the same reckoning.

...To be fully rational about maximizing his income on behalf of the poor, he should apply his trading principles across the board. He had to find a risk-neutral career path—which, if we strip away the trader-jargon, actually means he felt he needed to take on a lot more risk in the hopes of becoming part of the global elite. The math couldn’t be clearer. Very high risk multiplied by dynastic wealth trumps low risk multiplied by mere rich-guy wealth. To do the most good for the world, SBF needed to find a path on which he’d be a coin toss away from going totally bust."

[Describing the early days of Alameda]

"The first 15 people SBF hired, all from the EA pool, were packed together in a shabby, 600-square-foot walk-up, working around the clock. The kitchen was given over to stand-up desks, the closet was reserved for sleeping, and the entire space overrun with half-eaten take-out containers. It was a royal mess. But it was also the good old days, when Alameda was just kids on a high-stakes, big-money, earn-to-give commando operation. Fifty percent of Alameda’s profits were going to EA-approved charities."

[Setting up FTX]

"The point was this: When SBF multiplied out the billions of dollars a year a successful crypto-trading exchange could throw off by his self-assessed 20 percent chance of successfully building one, the number was still huge. That’s the expected value. And if you live your life according to the same principles by which you’d trade an asset, there’s only one way forward: You calculate the expected values, then aim for the largest one—because, in one (but just one) alternate future universe, everything works out fabulously. To maximize your expected value, you must aim for it and then march blindly forth, acting as if the fabulously lucky SBF of the future can reach into the other, parallel, universes and compensate the failson SBFs for their losses. It sounds crazy, or perhaps even selfish—but it’s not. It’s math. It follows from the principle of risk-neutrality.

“This thing couldn’t have taken off without EA,” reminisces Singh, running his hand through a shock of thick black hair. He removes his glasses to think. They’re broken: A chopstick has been Scotch taped to one of the frame’s sides, serving as a makeshift temple. “All the employees, all the funding—everything was EA to start with.”

...To be clear, SBF is not talking about maximizing the total value of FTX—he’s talking about maximizing the total value of the universe. And his units are not dollars: In a kind of GDP for the universe, his units are the units of a utilitarian. He’s maximizing utils, units of happiness. And not just for every living soul, but also every soul—human and animal—that will ever live in the future. Maximizing the total happiness of the future—that’s SBF’s ultimate goal. FTX is just a means to that end.

...And, indeed, SBF puts his money where his mouth is. SBF is personally backing a slew of so-called AI alignment nonprofits and public-benefit corporations including Anthropic and Conjecture. He’s also the big money behind a new nonprofit called Guarding Against Pandemics, which, not coincidentally, is run by his brother Gabe. And SBF was the second-largest donor—behind only Mike Bloomberg—for Biden’s successful attempt to dethrone Trump.

....The FTX competitive advantage? Ethical behavior. SBF is a Peter Singer–inspired utilitarian in a sea of Robert Nozick–inspired libertarians. He’s an ethical maximalist in an industry that’s overwhelmingly populated with ethical minimalists."

Annnnnd then you read things like the allegations in the lawsuit against SBF's parents and there's not too much ethics on view there.

Expand full comment

The view of EA from That Article:

"A cocktail party is in full swing, with about a dozen people I don’t recognize standing around. It turns out to be a mixer for the local EA community that’s been drawn to Nassau in the hopes that the FTX Foundation will fund its various altruistic ideas. The point of the party is to provide a friendly forum for the EAs who actually run EA-aligned nonprofits to meet the earn-to-give EAs at FTX who will fund them, and vice versa. The irony is that, while FTX hosts the weekly mixer—providing the venue and the beverages—it’s rare for an actual FTX employee to ever show up and mix. Presumably, they’re working too hard.

Perhaps it’s the beer, but everyone I meet is smart, charismatic and funny. I end up mostly talking to Josh Morrison and Kat Woods, two OGs in the EA movement. Morrison is a serial nonprofit founder. Woods has a similar CV, but now she runs a meta-charity that incubates other charities. They tag-team as they try to explain the movement that drives them—and what drives them to the movement.

“Imagine nerds invented a religion or something,” says Woods, stabbing at my question with vigor, “where people get to argue all day.”

“It’s… an ideology,” counters Morrison. The argument has begun.

Woods amiably disagrees: “EA is not an ideology, it’s a question: ‘How do I do the most good?’ And the cool thing about EA, compared to other cause areas, is that you can change your views constantly—and still be part of the movement.”

I can’t help but interrupt. I get the religion part. Morrison and Woods are nothing if not evangelists. But why nerds?

Woods serves up an answer to my question. (Fittingly, she’s wearing tennis whites.) “EA attracts people who really care, but who are also really smart,” she says. “If you are altruistic but not very smart, you just bounce off. And if you’re smart but not very altruistic,” she continues, “you can get nerd sniped!”

Nerd sniped? This is a new one to me. I’m intrigued.

“You can snipe a nerd by putting out an interesting puzzle in front of them, and they’re like, ’I love this,’ because not only is EA the most interesting puzzle in the world,” Woods says, “it’s also the most meaningful.”

Nerd sniping, I learn, is the practice of attracting brainpower by presenting problems as puzzles.

“This ties into the way FTX is doing its foundation,” Morrison says, helpfully knocking the ball back to my true interest. “The foundation wants to get a lot of money out there in order to try a lot of things quickly. And how can you do that effectively?” It’s a rhetorical question, a move worthy of a preppy debate champ who went to a certain finishing school in Cambridge—which is exactly what Morrison is. “Part of the answer is to give money to someone in the EA community.”

“Because EA is different from other communities,” Woods continues, picking up right where Morrison left off. “They’re like, ‘This is the ethical thing, and this is the truth.’ And we’re like, ‘What is the ethical thing? What is the truth?’”

...There’s no question that SBF was nerd sniped as a young man at MIT. Indeed, just before he got sniped, SBF had a personal blog where he wrote about his search for life’s meaning. In the blog, he declares his allegiance to utilitarianism over and over, carefully outlining his reasoning before concluding, “And so I am a total utilitarian.” Later writings refine that statement, making it clear that he’s a utilitarian in its purest—Benthamite—form, and that there will be no saving himself from the implications of the Benthamite Way. Every action since then has been a principled puzzling-through of the implications of that philosophy. Even now, even when directly challenged, SBF maintains that he brooks no limit in following the philosophical implications to their logical end: “If I did, I would want to have a long, hard look at myself.”

In devoting every waking moment of his life to work, SBF doesn’t feel he’s doing anything unusual or extraordinary. He’s doing what he feels every right-minded person should do—if they were big-hearted and clear-headed. He’s attempting to maximize the amount of good in the world. Yet the same could be said of Woods and Morrison and, indeed, of all the EAs I’ve met in the Bahamas. Like SBF, they’re all in love with the idea of saving the world in an efficient and rational manner —except they’re obviously having a great time doing it.

So when, that next summer, MacAskill sat with SBF in Harvard Square and carefully explained, in the way only an Oxford-educated philosopher can, that the practice of effective altruism boils down to “applied utilitarianism,” Snipe’s arrow hit SBF hard. He’d found his path. He would become a maximization engine. As he wrote in his blog, “If you’ve decided that some of your time—or money—can be better spent on others than on yourself, well, then, why not more of it? Why not all of it?”

And then they ended up with an outfit that makes the Vatican Bank look prudent, well-regulated, and ethical.

Expand full comment

I think you've hit on something very important here that people disagree with regarding EA.

Christian doctrine, in particular, emphasizes that trying to do good by rationalist calculation is explicitly evil, because you will underestimate the extent to which you will deceive yourself and use the rationalist approach to do what you want. This is nowhere more obvious than in the case of SBF, which is why the general public (which is implicitly Christian, in the US anyway) find that episode so revealing.

As someone with a deep interest in psychoanalysis, I'm surprised that you are not more convinced by this argument; surely we aren't as in control as we think, and our motivations can never be pure because they are not unitary even within our own brain.

Expand full comment

This is incidentally why various religious traditions have invented doctrines to escape this trap, like predestination in the case of Protestants ("I can only do good if I can't benefit from doing good by going to heaven"). Catholics and Buddhists have similar traditions of self-harm, etc., to purify motivations. Perhaps EA advocates need to start setting themselves on fire more often?

Expand full comment

"Recognize that there are pitfalls and biases that make rationalist calculation difficult to do honestly, but do your best to account for them so you can do rationalist calculation anyway" is, like, the entire rationalist project. I'm surprised that you're surprised that Scott thinks it's a good thing to try to do.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I know that's the rationalist project, but it's an argument that goes back to the beginning of recorded time. The voice of such projects is usually presented as Satan or Beelzebub. Giving it a new name like EA is just marketing, per Freddie's original point, and being surprised that people who are strongly influenced by Christianity reject it is a bit naive.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I should clarify that I'm not surprised that Scott believes it, as his opinions over the years leave little doubt. I'm surprised that someone who understands psychoanalysis at the depth that he does (not to mention religion!) could nonetheless possess the optimism to believe in such a project.

Expand full comment

>Christian doctrine, in particular, emphasizes that trying to do good by rationalist calculation is explicitly evil, because you will underestimate the extent to which you will deceive yourself and use the rationalist approach to do what you want.

As I understand it, they believe they have solved this problem with an applied philosophy called Rationalism.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Believing you can solve the problem of genuine self-motivation by just saying you want to do it ignores the advice of all the world's religions, not to mention psychoanalysis. It strikes most people, including myself, as stunning social naivety.

Try to convince a room full of police officers, priests, accountants, or attorneys that we can just trust people to do the right thing in the face of huge monetary and social incentives. For some reason math/science types believe this is possible; most of society does not share this optimism.

People who believe such things are just targets for scammers, eg SBF, who I believe mostly scammed himself as well.

Expand full comment

>As I understand it, they believe they have solved this problem with an applied philosophy called Rationalism.

Naming the blog *Less* Wrong was supposed to be a giveaway that the problem is very much not solved!

Expand full comment

OK, but they don't admit to much epistemic humility when it comes to their forecasts of 10+% for AI x-risk. It's all groupthink, but they don't show any awareness of it.

Expand full comment

We're increasingly in a world where that's what the outside view can get you, though, right? The AI Impacts survey that found median AI researcher p(doom) at 5-10% had methodological issues, and who the hell knows what current thinking at OpenAI is, but Christiano is at 50% and Amodei is at 10-25%. Hell, FTC Chair Lina Khan calls 15% optimistic. You can absolutely still find low estimates, but it's not at all clear to me that LeCun is the voice of humility; Marcus recently dodged giving a number, but his is clearly higher.

We seem to be skipping directly from a world where significant probabilities are wild fringe ideas, to one where double-digits are fairly common. That's a huge shift, less in keeping with an update on new evidence and more in line with "people started paying attention"!

Expand full comment

This is why I'm a rule utilitarian. Act utilitarianism should generally be reserved for extreme situations or the blind spots of society, and it should be a little terrifying to feel you actually have to work out the math yourself.

Expand full comment

That is something I can respect. The rules are just really difficult to work out; I guess in some sense that has been the project of Judaism for the past five thousand years.

Expand full comment

"Christian doctrine, in particular, emphasizes that trying to do good by rationalist calculation is explicitly evil, because you will underestimate the extent to which you will deceive yourself and use the rationalist approach to do what you want. "

Having been raised in a Christian household and having a brother who has a Masters of Divinity, I don't think that this is Christian doctrine and I think you're doing the standard Christian thing of taking whatever particular moral thing you believe and pretending it represents Christianity.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I probably stated that too strongly. A better way to put it: the ability to do good by rationalist calculation is not easily obtained, but must be honed by a life of discipline and holiness to avoid the bending of rationality to one's own desires. There is a reason that Lucifer is the light-bringer, a symbol of rationality.

The basic point is that EA tends to underemphasize exactly how hard doing good with your rationality actually is, as if any 20-year-old with a spreadsheet and good intentions could do it.

Expand full comment

Doing good is very very hard. Jesus Christ himself noted as much, when he said that it was easier for a camel to pass through the eye of a needle than for a rich man to get into the kingdom of heaven. But it isn't hard because it's some wildly overcomplicated thing to do RCTs or something (well, it probably was back in the 1st century). It's hard because the rich man reeeeeally doesn't want to take all he has and give it to the poor.

I probably see about 100x more of the sorts of rationalization that actually stop people from doing good in the comments sections on Scott's EA posts than I've ever seen from actual EAs, who seem to correctly shy away from such arguments.

Expand full comment

Agree, and the corollary is that doing evil is very easy. Rushing to make interventions in societies you don't completely understand, even future societies whose structure you can't even imagine, seems like a Satanic impulse to me, rooted in delusions of grandeur and in avoidance of the humbler responsibility to serve our community and our families. In my mind, most EA projects outside of direct analysis and optimization of charitable giving are substantially worse than doing nothing.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

And paying someone to "think about AI" does not constitute charitable giving to me. It's what nerds, including myself, enjoy doing anyway; how fortunate that it's the best way to spend other people's charity!

Expand full comment

On point 2, I think the "consequentialist reasoning" part, along with some of your other assumptions for caring about x-risk -- valuing foreign lives as equal to local lives, valuing some amount of animal lives as equal to human lives, valuing potential future people at all -- is an important identifier for the EA position. Many people don't share those assumptions, and so animal welfare charities and x-risk charities are "obviously" worse than global health charities.

I used to think that donations to x-risk charities were bad, because I saw them as taking money away from helping people today. But over the years, I've seen people post about leaving the movement and not donating any of their money to charities at all, and now I think if the x-risk stuff keeps people in and donating, that's a good thing. By the graph last time, more than half of the money goes to stuff I'd agree with, and not all my yearly donations go to global health, either.

Expand full comment

"Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation. There are assumptions you can add and alternate methods you can use to avoid that conclusion. "

I think a lot hinges here on whether you use an intertemporal discount rate. And you should.

Expand full comment
author

I don't think that's the main crux. There's no sane discount rate that does much about everyone dying in fifty years.

(you can solve this with non-sane discount rates, though: see https://forum.effectivealtruism.org/posts/LSxNfH9KbettkeHHu/ultra-near-termism-literally-an-idea-whose-time-has-come )
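
To put rough numbers on that point (a minimal sketch; the 3% rate and the population figure are illustrative assumptions, not figures from the thread): even a conventional discount rate only shrinks a harm fifty years out by a factor of about four, leaving billions of discounted death-equivalents on the books.

```python
# Minimal sketch of time discounting. The 3% rate and the 8 billion
# figure are illustrative assumptions, not numbers from the thread.
def discount_factor(rate: float, years: int) -> float:
    """Present-value multiplier for a harm `years` in the future."""
    return 1.0 / (1.0 + rate) ** years

deaths = 8e9   # roughly everyone alive today
rate = 0.03    # a conventional, "sane" annual discount rate
years = 50

print(f"discount factor: {discount_factor(rate, years):.3f}")  # ~0.228
print(f"discounted death-equivalents: {deaths * discount_factor(rate, years):.2e}")  # ~1.8e9
```

You would need a rate north of 30% per year before those deaths shrink to near-irrelevance, which is the sense in which no sane discount rate does much work here.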

Expand full comment

I mean, sure, if you're putting in high probabilities of everyone dying in 50 years (and I know you do, so I guess that sort of settles that). But if you're thinking in terms of low probabilities or (potentially) long time frames (such as you would for supervolcano risk) it's a different story.

Expand full comment

Sure. If you look at all the stories though, not just the supervolcano risk story (and similarly low-odds GCRs), it adds up to the original story. https://possibleworldstree.com/

Expand full comment

Ozy wrote a good article arguing that longtermism is essentially a misnomer, because longtermists often predict an AI revolution in the next 20 years, while Global Health people expect bed nets and dewormer to reduce poverty in the relevant countries over 100 years.

https://thingofthings.substack.com/p/two-kinds-of-longtermism

Expand full comment

I can't find it at the moment, but I recall reading a paper arguing that even if you don't care about future people at all, you should still support x-risk mitigation because x-risks have a non-trivial risk of killing people who are currently alive today.

Expand full comment

I don’t think you need the citation. I mean that just makes sense.

Expand full comment

I agree. I will further cite that almost everyone is currently a future person.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Nick Bostrom makes an argument for person-affecting x-risk mitigation in "Existential Risk Prevention as Global Priority": https://existential-risk.com/concept.pdf

For a totally wacky argument that the low probability of reincarnation makes person-affecting ethicists into longtermists: https://parrhesia.substack.com/p/an-anthropic-argument-for-person

Expand full comment

I think there are two things here for me. One is that I don't want to be Pascal's Wagered into prioritising things with enormous upside (or preventing things with enormous downside) where there are very low probabilities of anything I do making a difference; fundamentally, the difference between a probability of 0.000001 and 0.000000001 (i.e. a millionth and a billionth) on something this speculative is not something that I trust anyone, including myself, to be able to measure.

The second is that I don't think present AI research is on a path to AGI at all. LLMs are an interesting and useful technology which poses a set of risks, but those are not x-risks; and if LLMs and other "generative AI" approaches aren't on the AGI path, then there's no argument for EA to go there at all.

Expand full comment

I think there could be an x-risk, as you put it, if AI is used improperly. I see no risk of an AI singularity on current paths, but it might be possible to use LLMs or other AI technology with automation to paperclip-maximize ourselves somehow. As long as only the highest-probability outcomes get automated, the rest can be double-checked, by a human being if necessary, before implementation.

Suppose we have a medical machine that checks people for abnormalities using AI, then fixes any problems it finds using AI to find the correct way to do it. Such a machine, given what we know about AIs currently, will make some mistakes in diagnosis and treatment, and may even kill some patients a human doctor would never have killed. It might make the overall population live longer even though it occasionally causes preventable deaths.

Now if you take that machine and add an implementation to it so that it goes out and looks for "patients" and then treats them, this would be a level of automation that could pose an x-risk.

Expand full comment

1/ You don't need to be Pascal's Wagered to think AI risk is worth working on at the margin; cluster-style thinking (which sandboxes Pascal's Wager) works too, e.g. https://www.cold-takes.com/most-important-century/#Summary

(cluster-style thinking refers to https://blog.givewell.org/2014/06/10/sequence-thinking-vs-cluster-thinking/)

2/ You don't need to think present AI research is on a path to AGI at all; PASTA works too, cf. https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/

Expand full comment

Not directly related to the post, but: one of my copouts is that anything outside a really tiny time horizon is subject to Knightian uncertainty. You can talk about probabilities, but you can't meaningfully calibrate on them, so they are meaningless.

Some cherry-picked examples:

- Veganism is not as good as semaglutide at reducing animal suffering. Maybe I should have saved money spent on Impossible burgers and donated the difference to Novo Nordisk.

- Raising the profile of MIRI resulted in the best and brightest minds working on AI. Maybe if Eliezer had finished high school and gone on to become a decision theorist in academia, rather than an AI x-risk popularizer, AI would still be at the level of Alexa.

- Fighting global warming may end up net-negative for life on Earth: think how much richer the biosphere was 55 million years ago, compared to now.

Expand full comment

"Fighting global warming may end up net-negative for life on Earth: think how much richer the biosphere was 55 million years ago, compared to now." - this is an important point. FWIW yours truly believes in a gentle anthropocentric approach: I care about other life, I do, but I care about humans a little bit more (even if some of them don't deserve it...). So for the global warming, I'd really like to get some handle on it, see a good debate downstream of LarsP's comment here: https://open.substack.com/pub/astralcodexten/p/open-thread-304?r=7caj1&utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=44460975

Getting the warming under some control may indeed result in less biodiversity on Earth (we don't really know), but it has a good chance of avoiding fucking up the lives of millions or billions of humans.

Expand full comment

> Veganism is not as good as semaglutide at reducing animal suffering.

Curious about your BOTECs here

Expand full comment

Not familiar with the acronym.

Expand full comment

I assume back of the envelope calculations, by context.

Expand full comment

Very roughly: assuming, conservatively, that 1% of the US population reduces its calorie consumption by half, it works out to about a 0.5% reduction in livestock in the US alone.
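
Spelled out (a minimal back-of-the-envelope sketch; the assumption that livestock demand scales linearly with calories eaten is mine, the input numbers are the commenter's):

```python
# Back-of-the-envelope version of the estimate above. Assumes, crudely,
# that US livestock demand scales linearly with calories eaten.
adopters = 0.01      # fraction of the US population halving their intake
calorie_cut = 0.5    # fractional reduction in their calorie consumption

livestock_reduction = adopters * calorie_cut
print(f"implied reduction in US livestock demand: {livestock_reduction:.1%}")
# -> implied reduction in US livestock demand: 0.5%
```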

Expand full comment
founding

re: "Veganism is not as good as semaglutide at reducing animal suffering." thank you for this example -- it is very thought provoking.

Expand full comment

I think the AA analogy is interesting here, because AA makes a big point of being run in ways that are almost opposite to EA in some respects. I'm thinking of the Twelve Traditions in particular:

https://www.aa.org/the-twelve-traditions

For example, tradition 11: "Our relations with the general public should be characterized by personal anonymity. We think A.A. ought to avoid sensational advertising. Our names and pictures as A.A. members ought not be broadcast, filmed, or publicly printed. Our public relations should be guided by the principle of attraction rather than promotion. There is never need to praise ourselves. We feel it better to let our friends recommend us."

There is a big contrast with Will MacAskill going on the Daily Show, or the many campus groups trying to hit KPIs for new recruits...

Or tradition 8: " Alcoholics Anonymous should remain forever non-professional. We define professionalism as the occupation of counseling alcoholics for fees or hire. But we may employ alcoholics where they are going to perform those services for which we may otherwise have to engage nonalcoholics. Such special services may be well recompensed. But our usual A.A. "12th Step" work is never to be paid for."

Big contrast with the much-discussed shift from "earning to give" to "EA jobs".

Of course it would make no sense for EA to try to adopt the Twelve Traditions. They only really make sense if you think your primary job is not to actually do anything, but just to make space for a Higher Power to work, which is not the EA philosophy at all.

Still, AA has sustained itself for almost a century while maintaining its core purpose, without any major PR blowups and without turning into a cult, despite both being serious dangers when running substance-abuse groups. So there may be something to learn.

Expand full comment

Well, on the other hand, their efficacy has been seriously questioned, including through peer-reviewed research, but I don't think they've tried making changes to see whether those changes would make AA more efficacious. So maybe AA could learn something from EA also?

Expand full comment

Well (something I learned from reading SlateStarCodex, actually), most of the scientific research on AA is so low-quality that one can't really conclude too much from it, but it seems to work okay, maybe.

As for promoting newer evidence based treatments: by avoiding taking any outside political stances, the organization itself is unable to have an opinion on medical treatments for alcoholism such as naltrexone, new forms of psychotherapy, etc.

That's good (it prevents AA from taking an official stance against these things, which they might have especially decades ago) and bad (it prevents the organization from trying to educate members about such treatments). But AA doesn't prohibit naltrexone or anything like that.

Suboptimal policy but safe and long-term robust -- a very different attitude from EA's.

Expand full comment

Scott often talks about how Treatment A works for some, Treatment B for others, Treatment C for still others, and often the only way to know which treatment works for whom is through trial and error. AA offers a treatment which definitely works for a lot of people, but not for everyone. Other treatments for alcoholism exist which work better for other people. Since AA works for many people, it makes more sense for them to carry on with what they do rather than tweak the formula, which could make it less efficacious for its current patients. Better that alternative treatments, with less of a history of success, do the experimenting.

Expand full comment

At this point I think EA is a pretty broad movement. It consists of good people doing sensible things, good people doing silly things, bad people doing silly things, and bad people doing sensible things.

I wish to say hooray for the good people and the sensible things, and boo for the bad people and the silly things, but it seems like people get caught up in saying hooray or boo for the movement as a whole.

All of this is also true for most things, but it's even more true for Effective Altruism, which has one of the broadest good-bad and sensible-silly ranges of any movement I know.

Expand full comment

While I felt this article from FDB was a piece of crap, I think it reflects the (aware) public's attitude toward EA and is therefore worth addressing. Of course, the point of essayists like FDB is to be correct, not just to reflect the public's vibe (describing it is fine).

Based on both talking to people and the helpful responses to my comment on the last piece about this, my conclusion is that people feel EA is a way for a certain sort of person not only to look down on them for being dumb but also to excuse themselves from having to follow the usual rules. Sometimes those rules might be pretty literal (SBF ignoring the law), but more often it's the rules about what kind of justification is needed before they get praise for what appear to be self-serving and unlikely notions of what doing good consists of.

And some of this is inevitable. EA starts with two strikes against it: it's a view championed by weird people, and it asks us to be open to the possibility that the most good might be done via counterintuitive actions (indeed, I think that's how I'd distinguish EA from the mere idea that ROI on charity matters).

However, I think another part of this is the common problem that movements/ideologies face accurately portraying themselves to the public. Indeed, I think these problems aren't unique to EA but are pretty much the same problem much of the left faces in presenting itself to the right.

The problem is that, in any tribe or movement, there is a tendency to stop talking as much about the boring shared ideas and to talk more about the ones that are controversial within the movement -- and reporters only make this worse, since you can only write one short article saying the boring centrist (e.g. GiveWell) thing is still broadly supported.

On its own I don't think this would be that big an issue except for the tendency to give fellow tribe members the benefit of the doubt and thus to avoid calling them out to the same extent one might do for non-members. For the left in the US this happens in how it responds to people who get the science wrong in arguing for environmentalism or for some racial/gender justice position even while jumping on the people who do the same in the other direction.

Especially when the people doing the criticism are in some fashion better educated or equipped to make such criticisms, it leaves outsiders with the impression that it's all just a sham -- heads I win, tails you lose. The Google bro's summary of the science wasn't quite up to the standard of a peer-reviewed journal, so he gets ravaged, even while the same academics say nothing about the much more egregious errors of people on their own side. With EA, it's the sense that the skepticism that might be applied to the same arguments from a traditional religious or common-sense moral view doesn't seem as apparent or strong when someone is pushing a theory of AI x-risk or some longtermist option.

What makes this a difficult problem is that it doesn't necessarily reflect any kind of hypocrisy within any individual. There is just a tendency not to want to be an asshole toward, or to expend your limited time yelling at, the person you see as fundamentally having the right idea but making some mistakes, so the very real disagreement just isn't voiced (e.g., when pushed, the vast majority of lefty academics do disagree with bad environmentalism or bad social science in the service of social justice, but don't feel the same need to make a fuss). With respect to EA, the issue comes about because, from the inside, even those of us who are very skeptical of many AI x-risk claims or longtermist propositions still see those people as engaged in the right sort of approach, even if they've made a mistake. Yet from the outside, what it feels like is that there is one rule for you and another rule for me.

I don't think there are any easy fixes, but I do think there are some strategies we might try (these are just guesses). We can try to present EA less as a lecture at outsiders about what they should do, and more as an invitation to consider what they might find to be surprising or counterintuitive conclusions about how charity should be done. So create the sense that we are asking them (and thus valuing them) about what conclusions they would reach, not just telling them what we've decided.

A more fraught option is to be less collegial in public-facing interviews and portrayals. We could talk up how the people who reach the wrong conclusion about AI x-risk or whatever are effectively killing people (one way or the other), rather than treating their views as serious arguments (basically, try to hide the breadth of the EA internal Overton window). But the problem with this is that it has real costs to the movement in terms of internal cohesion and our ability to calmly engage with each other.

But given that highly paid motivated people in politics haven't really solved the issue it seems like a hard problem.

Expand full comment

As an outsider, I’d suggest that an additional upside to the “less lecturing” approach is that it would help avoid the purity spirals that appear to have driven e.g. some of the EAs in SBF’s orbit pretty much out of their minds, such that literally every decision became a matter of expected QALY value, the people in question lost touch with (and their stake in) reality, and their own lives were impoverished in ways that were not incidental to the ensuing disaster. (I’m taking this from Zvi Mowshowitz’s Michael Lewis review, which as well as being a rollicking read also serves as the most thoughtful critique of EA I’ve come across, and assuming from some other accounts I’ve heard that such things are not *all that* uncommon.)

Expand full comment

Ohh, that's a great point. Thanks.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

The scrupulosity problem within EA is well known, unhappily. That's one of the side-effects if you have people who are more than usually sensitive or empathetic involved in something that leans very heavily on appeals to "if YOU don't act now, MILLIONS will die!!!!"

Expand full comment

I don't really see the relevance here. Sure, that may be a problem inside EA but I don't see how it hurts the movement's perception. In general, having lots of highly scrupulous people improves an altruistic organization's appeal to outsiders (ohh those people take it super seriously, they aren't just fucking around).

Expand full comment

"In general, having lots of highly scrupulous people improves an altruistic organization's appeal to outsiders (ohh those people take it super seriously, they aren't just fucking around)."

Depends on the degree of taking it seriously. Signing up to a commitment to definitely donate 10% of your salary, no crossing your fingers behind your back, is a strong signal to outsiders that this is serious, and will in general get a positive response.

Fretting about whether you really need three meals a day, or a mattress on the floor to sleep on, when that money could be going to maximise extrapolated value (what do you mean, I'm anaemic and developing rickets from self-neglect?) - that's not going to garner a positive response.

Expand full comment

Those might cause people to worry it's bad in other ways (the "cult is stealing my friend" worry, though see below), but they cut against the sense that the EA people are just coming up with clever excuses to let people like SBF justify doing whatever they wanted to do anyway.

I think the latter is currently where more outside opinion is.

Expand full comment

I think part of the difficulty is the very broad range that comes under the EA umbrella. The ordinary person can get behind "we donate to malaria nets"; that's something they understand, the kind of doing good that their community engages in.

But when you get to the AI risk stuff, then that becomes "What, you honestly think computers are going to come alive and kill us all? Dude, that's science fiction! That's disaster movies! It's not real, not like real starving children or battery farm hens or floods caused by global warming" and that's where you lose them.

Worse, then the perception of "buncha pointy-headed cranks" reflects back on the 'ordinary' charitable works like mosquito nets etc. and so the entire enterprise no longer gets the benefit of the doubt.

It does seem that, for its own sake, maybe the EA movement (if you'll pardon me referring to it as a monolith) should split into (1) the public-facing, understandable "mosquito nets" set of charitable functions that can be publicised and branded as the 'face' of EA (animal welfare can fit in here too, but only to an extent: battery-cage hens bad, but skip the "do insects suffer?" hand-wringing), and (2) the weirder things like AI risk that stay in the background; you schmooze politicians and regulators about them, but this is the 'unelected special advisor' kind of position, where you have the spiffy Oxfordshire manor jamborees for the President of Belgium and the like to come spend an agreeable weekend being wined and dined and lectured about Serious Risk Topic. The ordinary public don't see this directly and only, maybe, read about it in the papers as "Report of Summit on Technological Incorporation in Business issued after conference finished at weekend" style coverage.

Expand full comment

One thing I took away from FDB’s article (whether it’s really his point or not) is that if the mathematics takes you to an absurd conclusion, then something has gone wrong. You did the math wrong; or you tweaked the math to get you to a predefined conclusion; or math is the wrong tool to use (which I think may be the most common way out of the Repugnant Conclusion). Regardless, a natural reaction is to be doubtful of the whole proposition — even if (by accident?) it did lead to some correct conclusions.

(I don’t personally think X-risk is an absurd conclusion, but I certainly sympathize with those who do.)

Expand full comment

Actually, I think AI x-risk is a perfect example of a case where we aren't just following the math. And I think it's an important part of EA that we be willing to follow the math when it's counterintuitive... it's just that the examples FDB raises aren't instances of merely following out the math, or even following out the math from widely accepted assumptions.

Maybe the probability assignments AI x-riskers assume are reasonable, or maybe they aren't. However, they certainly don't just fall out of taking the mathematics seriously. Indeed, the arguments for AI x-risk all end up falling back on various kinda vague intuitions about the nature and power of intelligence, and intuitions about how we should expect agents to behave. After all, no one doubts that it's logically possible for AI programs not to be supervillain-style agents or paperclip maximizers. Indeed, there are far more possible I/O functions that don't represent a dangerous AI than those that do -- even if you only count functions that do the beneficial AI work we want.

The argument relies on the intuition that the way you get useful AI programs is to write code that behaves more and more like some idealization of an agent with a simple optimization function. But nothing even sorta like a proof of that claim has ever been advanced.

That doesn't tell us whether AI x-risk is or isn't a reasonable view. But it's not pushed by the math; it's just a judgement that the kind of people who do EA tend to find pretty plausible.

Expand full comment

I think you're merely explaining why EA isn't popular, not why it generates active dislike. For instance, many of those views, such as those about animal suffering, are shared by major world religions that don't generate any of this negative response. Sure, I think most Americans find that kinda weird and crazy and not at all convincing, but it doesn't cause them to think negatively of that group.

What's different is that many people perceive EA as saying: if you don't agree about all these crazy issues, you're being irrational and we're looking down on you. And I think EA does say that you need to take the math seriously even when it's counterintuitive, and that's a valuable part of the movement. It's not like people didn't care about ROI before, but even bednets seemed pretty counterintuitive to many donors when they started getting pushed. So sending the message that what your gut says helps may not actually help is quite important.

Where things go wrong is the perception that this requires you to take shit about AI x-risk or longtermism seriously. As I pointed out below, these aren't mathematical results of the framework; they are just plausibility judgements that many people who are into EA share.

And that's what I was trying to get at with the internal versus external perspective. The problem is that people looking in from the outside don't get the message that it's totally consistent with EA to think: AI x-risk worries are dumb stupid and irrational. It's not a consequence of utilitarianism.

Rather, it's a consequence of the fact that there are strong incentives to be charitable and polite to other members of your movement, so outsiders don't get the sense that it's totally EA-consistent to think all that shit is dumb.

Expand full comment

Well, the scrupulosity is an inside problem rather than an outside problem. Certainly, if someone says "my family member/friend/I heard it through the grapevine person was harmed by EA", they will have a poor opinion of it and will consider it cult-like.

But it's more affecting those within EA and those attracted to it; if the spirit of the group is to make greater and greater sacrifices, because look, the equations stack up that way, and you can't argue with logical, mathematical, rational proof, because this isn't about messy feelings like silly old religion - then those vulnerable to that particular pressure will interiorise that value and take it to extremes.

Like the flagellants. It's a failure mode that happens for all sorts of reasons in all sorts of entities, and it's something that those within EA do need to be aware of.

Expand full comment

I don't see that as much of a risk. I think EA does less to encourage extreme actions than many churches, and friends and family no doubt price in the fact that person X was always freaking out about not being good enough and going to extremes before they joined.

Indeed, I think EA tends to channel the same personality traits that might have led someone to renounce all worldly possessions and become a monk into an obsession with comparing ROI in spreadsheets, which doesn't show up to family as harm.

Not to mention that EAs tend to at least maintain the outward status necessary to make money so they can give which is a further layer of protection.

Expand full comment

When a religious person says "our religion has found the ultimate "truth" of the good" most people shrug and don't really care as long as its members are not in their face about it. We all understand that this truth has not been empirically determined or logically proven; it's a matter of faith and their belief, and they are welcome to hold it without a strong reaction from outsiders (assuming, as I said before, they are not assholes about it).

EA seems to, to some extent, at least, argue that you *can* reliably and objectively quantify "goodness" such that you can "just follow the math" and let it direct you rather than applying your own moral judgments.

This strikes me as wrong because we very obviously *cannot* reliably and objectively quantify goodness. You will never be able to define "goodness" sufficiently specifically to cover the mathematical calculations for all situations. Ergo, at a certain point, everyone has to fall back on something when the math leads them to a conclusion that seems, on its face, absurd.

I think the humble EA at its strongest looks at this absurdity and says "wait, *is* this really absurd? Let's refollow our logic and determine if I can construct a moral narrative that makes sense and that I really believe that I wouldn't have found without the math." EA at its weakest (and downright immoral, in my opinion) is if the math tells you to do something and you do that because the math told you to do so, even if it seems absurd. Morality is ultimately a matter of feelings and personal judgement.

To me, the moment you look at something like the potential trillions of future lives of humans and assign to that a value in a calculation, you've already failed. The numbers are so large that *any* value will horrifically skew the result, and you have no way to even remotely judge values spanning so many orders of magnitude. If you're in a situation where a .00001 difference in your inputs equals an obvious benefit in 200 years, you need to acknowledge that you're just playing a game with numbers. It's navel-gazing, not remotely pragmatic, and you'll never, ever be able to measure the result anyway.
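
A toy calculation makes the skew point concrete. All numbers below are invented for illustration; this is a sketch of the objection, not any actual EA estimate:

```python
# Toy expected-value comparison; every number here is invented for illustration.
certain_lives_saved_now = 1_000        # a concrete, measurable intervention
future_lives_at_stake = 10**12         # "trillions of future lives"

# Two analysts disagree by a sliver (.00001) on how much some intervention
# shifts the odds of a good long-term future.
p_shift_optimist = 0.00002
p_shift_skeptic = 0.00001

ev_optimist = p_shift_optimist * future_lives_at_stake  # 20,000,000
ev_skeptic = p_shift_skeptic * future_lives_at_stake    # 10,000,000

# The unmeasurable .00001 disagreement swings "expected lives saved"
# by 10 million, dwarfing the concrete option entirely.
print(ev_optimist - ev_skeptic, "vs", certain_lives_saved_now)
```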

Expand full comment

This is just another Bay Area land grab, same as they think they invented and now own sourdough bread baking. Almsgiving is as old as the Book of Proverbs and is one of the pillars of Islam, among other things. Spending money in a way which gives maximum value for money is a concept as old as money, and if you want to talk about doing something then actually doing it and doing it effectively really are pleonasms, in that not doing it is not doing it, and doing it ineffectively is also not really doing it.

I don't know what the worldwide charitable spend is vs the EA spend, but probably 1000x as much? Does e.g. Gates regard himself as EA?

I comment here in my own name in the hope it keeps me polite, but then there's lots of people with my name so I am happy to out myself as someone who consistently gives over 3% p.a. and less than 10% to charity, and why would I call myself EA? Any more than taking the odd precaution to avoid dying prematurely makes me a Thielite or Johnsonist.

Expand full comment
author
Nov 30, 2023·edited Nov 30, 2023Author

Did you read the post? I was hoping it would respond to exactly these kinds of objections, including your specific questions like whether Bill Gates is an EA.

Expand full comment

Yes, in detail down to and including the para beginning "I think less than a tenth of people...". Skimmed thereafter and missed the Gates bit. And I still think you are rebadging an old and universal thing as if it were new and local. Nothing particularly wrong with this, let a thousand charitable flowers bloom, and you can call me an EA if you want EXCEPT FOR the weird packaging with wholly extraneous beliefs about AI risk. I think AI risk is a huge concern; I think the PCM (paperclip maximizer) version of it which seems to go along with EA is wrong and a distraction; but much more fundamentally, I am suspicious at a conceptual level of any belief system which links beliefs that are logically independent of each other in this way. The linking defies and negates rationalism: how do you get from cost-effective malaria cures to runaway perverse instantiation? It's not just that there's no direct logical link; there's no more general sort of mindset that I can see that would predispose anyone to think A is good and B is inevitable, and if I encounter a group of people who think the same way about A and B, I think: cult.

Expand full comment

I think what Tyler G posted in deBoer's comments best explains why the EA brand is useful, even if it is rebranding the same old thing (https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44394730)

> People don't generally like giving away money and getting nothing in return. Churches used to be a good answer to "what do I get?" and a big part of that was "community." If EA doesn't have a brand, community, leadership figures, etc, then it doesn't work for that. To some extent it's doing the work of branding causes that aren't well branded. Direct cash transfers to Rwandans doesn't give you the same in-group-identifying-bumper-sticker as NPR, NRA, Harvard, etc., which puts it at a massive disadvantage without the positive auspices of EA.

My own take is that EA is indeed the same old thing, but with more intensity. Not "do good" but "do more good, and then more". There have been people passionate about doing the utmost good with their lives before. But whatever happened to give birth to EA created a whole lot of people with a bigger-than-average itch to do good. Quantity is a quality of its own.

> how do you get from cost-effective malaria cures to runaway perverse instantiation? It's not just that there's no direct logical link; there's no more general sort of mindset that I can see that would predispose anyone to think A is good and B is inevitable, and if I encounter a group of people who think the same way about A and B, I think: cult.

The rational cost-benefit analysis, extreme philosophical views, and other weirdness are downstream of people being more intense about doing good. I guess being intense and weird is a bit cultish? To which I'd say that a lot of other organizations and communities, like the NRA, NPR, and Harvard, are also a bit cultish. It's a sliding scale.

Acceptance of the extreme AI risk idea within EA is just people being more receptive to ideas coming out of their own community. There is also a divide between people who like the old EA - focused on global health and development - and are put off by the newer longtermist and X-risk ideas.

How the weird ideas come about, according to my theory:

1. "I want do more good"

2. "It sure would help deciding what to do, if I could clearly see how good something is."

3. "Utilitarianism!"

Or

1. "I'd sure like to do good for the next 7 seven generations"

2. "But wouldn't it be even better to do good for more?"

3. "Longtermism!"

Or

1. "Preventing the worst possible thing from happening sure seems like it'd do a lot of good".

2. "I need to think of the most likely worst possible thing happening in the nearest future"

3. "AI doom!"

Trying to optimize something to its utmost extreme leads to weird, extreme places. This is a possible failure mode of "do more good". But it shouldn't be a reason to think the sum-total impact of the EA movement is negative; it's definitely positive. And the attitude of "more" should lead to that positive impact increasing.

Expand full comment

"Direct cash transfers to Rwandans doesn't give you the same in-group-identifying-bumper-sticker as NPR, NRA, Harvard, etc., which puts it at a massive disadvantage without the positive auspices of EA"

We don't really do bumper stickers over on this side of the pond, but if you want an in-group-identifying bumper sticker for direct cash transfers, I'm sure somebody could rustle one up from the ActionAid logo:

https://www.actionaid.org.uk/our-work/emergencies-disasters-humanitarian-response/cash-transfer-programs

Yeah, you'll get the warm fuzzies from identifying your in-group charity set, but other people still won't know them. The same way the EA set probably have no idea about GOAL, etc. Are the Bay Area set impressed by World Vision bumper stickers, and if not, why not?

Snob appeal of that type works, but it'll only work if you're out to sign up the type of person who *would* be sniffy about World Vision and other charities, and eventually you're gonna run out of SV billionaires.

Expand full comment

> Are the Bay Area set impressed by World Vision bumper stickers, and if not, why not?

I don't know about the Bay Area, but personally I've been disappointed with my ongoing donation to World Vision. Not about what good it's doing - that's likely as good as what Action Aid or Give Directly does - but about World Vision's approach of trying to build a connection between donor and recipient by providing reports about how their village is doing and exchanging yearly postcards.

The idea seems good: do good and get warm fuzzies at the same time! Make charitable giving a product that people can actually want to do!

But the reality is that making that connection requires effort from me too... and I find that actually just stresses me out. I'm getting the opposite of warm fuzzies. And the way the recipient family is made to write postcards to me yearly feels like a condescending obligation put on them (I have no idea how they actually feel about this; this is just how I feel, and I presume 90% of the effort is actually done by a World Vision employee) that wouldn't be there with Give Directly.

I'm low-empathy and introverted, I don't instinctively seek connection with people and I should've known myself better. I suppose I match the stereotype of the cold and calculating type of EA. Nonetheless, the idea of "more good" and the mental EA bumper-sticker I get from being involved is something that gets me excited.

Expand full comment

"how do you get from cost effective malaria cures to runaway perverse instantiation?"

People who are convinced by the Drowning Child Problem. "Hey, you agree that saving lives is good, don't you?" "Yes, I do" "So if I showed you that interventions to prevent malaria save lives, you'd donate, wouldn't you?" "Okay, why not?" "Great! And if you're concerned about lives *now*, you'd be concerned with a really big risk that threatened all lives, right?" "Well... I guess?"

And that's how you rope 'em in: 'you took the word of authorities in our circle that charitable giving was the way to go, but only in the way *we* do it; we flattered you that you Understood The Maths and hence were a smart cookie who isn't easy to be fooled; now authorities in our circle are telling you not alone is AI possible, it's coming, and if it's not aligned with our values it could Kill Us All, and here's the maths behind that - you're not a dumb normie, you can follow complex high-decoupler hypotheticals, so you believe us on this too" (you'd better, because if you start denying now, then it ripples back to every bit of trust you put in our authorities when they were telling you about malaria nets and earning to give; we're not Christ and you're not Peter, to get the chance to repent after three times denying).

Expand full comment

Harsh but not completely unfair. I agree, PCM theory looks cleverer than it actually is, and believers are flattered by the thought that they understand it.

Expand full comment

Do you have a primary source for paperclip maximizer theory? It always appeared to me to be a thought experiment briefly illustrating why malice is not necessary for doom. If I am wrong, can you say what your simple definition is?

Expand full comment

Bostrom, Superintelligence (2014).

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Whenever someone says that the Paperclip Maximizer (PCM) idea is just a hypothetical based off nothing but math and extrapolations, I can't help but think back to the biggest real world example of paperclip maximizing behavior: https://psmag.com/social-justice/the-senseless-environment-crime-of-the-20th-century-russia-whaling-67774

"THE MOST SENSELESS ENVIRONMENTAL CRIME OF THE 20TH CENTURY

Fifty years ago 180,000 whales disappeared from the oceans without a trace, and researchers are still trying to make sense of why. Inside the most irrational environmental crime of the century.

...

The Soviet whalers, Berzin wrote, had been sent forth to kill whales for little reason *other than to say they had killed them*. They were motivated by an obligation to satisfy obscure line items in the 5-Year Plans that drove the Soviet economy, which had been *set with little regard* for the Soviet Union’s actual demand for whale products. “Whalers knew that no matter what, the plan must be met!” Berzin wrote.

This absurdity stemmed from an oversight deep in the bowels of the Soviet bureaucracy. Whaling, like every other industry in the Soviet Union, was governed by the dictates of the State Planning Committee of the Council of Ministers, a government organ tasked with meting out *production targets*. In the grand calculus of the country’s planned economy, whaling was considered a satellite of the fishing industry. This meant that the progress of the whaling fleets was measured by the same metric as the fishing fleets: gross product, principally *the sheer mass of whales killed*.

[bafflingly, this was despite the fact that "the Soviet Union had little real demand for whale products. Once the blubber was cut away for conversion into oil, the rest of the animal, as often as not, was left in the sea to rot or was thrown into a furnace and reduced to bone meal—a low-value material used for agricultural fertilizer...

... Japanese whalers made use of 90 percent of the whales they hauled up the spillway; the Soviets, according to Berzin, used barely 30 percent. Crews would routinely return with whales that had been left to rot, “which could not be used for food. This was not regarded as a problem by anybody.”

This was the riddle the Soviet ships left in their wake: Why did a country with so little use for whales kill so many of them?"]

Whaling fleets that met or exceeded targets were rewarded handsomely, their triumphs celebrated in the Soviet press and the crews given large bonuses. But failure to meet targets came with harsh consequences. Captains would be demoted and crew members fired; reports to the fisheries ministry would sometimes identify responsible parties by name.

Soviet ships’ officers would have been familiar with the story of Aleksandr Dudnik, the captain of the Aleut, the only factory ship the Soviets owned before World War II. Dudnik was a celebrated pioneer in the Soviet whaling industry, and had received the Order of Lenin—the Communist Party’s highest honor—in 1936.

The following year, however, his fleet failed to meet its production targets. When the Aleut fleet docked in Vladivostok in 1938, Dudnik was arrested by the secret police and thrown in jail, where he was interrogated on charges of being a Japanese agent. If his downfall was of a piece with the unique paranoia of the Stalin years, it was also an indelible reminder to captains in the decades that followed. As Berzin wrote, “*The plan—at any price!*”

...

In one season alone, from 1959 to 1960, Soviet ships killed nearly 13,000 humpback whales."

(Even more baffling about the whole thing was the fact that the Soviet Union probably didn't even have much use for all the whale oil it was getting this way; it had plenty of normal petroleum oil, which was much cheaper - there's a reason kerosene and the like supplanted whale oil the entire world over. They were effectively using *zero* percent of the whale! The entire thing was like Saudi Arabia taking up whaling for oil.)

In other words: if the Soviet Union had the technology, they absolutely would have paperclipped the world by accident. Maybe even 'rotting whale-d' it by turning it into a mass of (rotting, but that doesn't matter of course) whale flesh being proudly turned in at its docks by a "5-Year Plan Aligned" AI. On paper, everything would be roses. And that's all it takes.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

> In other words: if the Soviet Union had the technology, they absolutely would have paperclipped the world by accident.

I do not believe that such technology is possible, unless one is speaking metaphorically (e.g. one could say that the entire world is covered in roads today, which isn't literally true). The problem with whales is that they're low-hanging fruit; killing all whales is unfortunately so easy that even a few boats armed with harpoons could conceivably do it. Most other tasks are much harder (not to mention a lot more noticeable).

Expand full comment

*Sigh*

Do you know that in WW2, machine tools could 'self-replicate' in about 6 months? As in, a factory staffed by humans and filled with machines could construct a duplicate in about 6 months. If you had enough workers to staff that duplicate, then they could duplicate again in another 6 months. So on and so forth.

That was in WW2. Nowadays, with advancements in manufacturing technology, that doubling time is more like 2-3 months. No one has ever managed to fully make use of that fact since the doubling time on humans for your workforce is more like 20 years than 2 months. But if you could build a fully automated factory...

China set records by growing at 10% a year for decades on end. WW2 manufacturing technology allows for a growth rate of 300% a year; with a fully automated version of it, North Korea's economy could reach the size of the US in only 5 years (x500 factor difference between American and North Korean economies, which is about 9 doublings, which at half a year per doubling is ~5 years). With fully automated modern manufacturing technologies instead though, that's more like *3000%* a year (5 to 6 doublings per year = 3100% or 6300% growth rate): life would be completely unrecognizable in just a year, and the 5 years after that might see the entire planet ripped apart for resources with us still 'onboard', wishing for the days when climate change was the biggest of our worries.
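
For what it's worth, the arithmetic here checks out; a quick sketch, treating the ×500 ratio and the doubling times as the commenter's assumptions rather than measured data:

```python
import math

# Checking the comment's arithmetic; the ratios and doubling times are the
# commenter's assumptions, not measured data.
size_ratio = 500                       # claimed US : North Korea economy ratio
doublings_needed = math.log2(size_ratio)
print(round(doublings_needed, 2))      # ~8.97, i.e. "about 9 doublings"

doubling_time_years = 0.5              # claimed WW2-era factory replication time
print(doublings_needed * doubling_time_years)  # ~4.5, i.e. "only 5 years"

# Annual growth implied by 5-6 doublings per year with modern tooling:
for d in (5, 6):
    print(f"{d} doublings/year -> {(2**d - 1) * 100:.0f}% growth")
# 5 -> 3100%, 6 -> 6300%, matching the figures in the comment.
```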

In other words, we already have nanorobots, they're just so big they're called "robots", and also not fully automated so they need human workers, which is why we call them "factories". But make no mistake, they can still grey goo the planet, even if it's a slower & less dramatic apocalypse than the usual grey goo scenario. And even if you think the task of automating factories is so hard it'll take 100 years, that still implies that in 105 years, "big nanorobots" will be carpeting the globe. And we might still be around by that time, watching it happen, depending upon how advancing medical technology shakes out.

TL;DR: You're right to point out that making things is harder than killing things. Unfortunately, turns out it's not *that* much harder. Big nanorobots can get the job done.

Expand full comment

Yay for Scott not defending utilitarianism. Freddie is basically right (if I interpret him correctly) in saying EA is fine if it doesn't bring along this dubious philosophical baggage.

Expand full comment
author
Nov 30, 2023·edited Nov 30, 2023Author

I love utilitarianism and will defend it to the death, it just didn't seem relevant here (except I guess I did use the term "consequentialist" in step 2).

Expand full comment

Oh noes! Read Bernard Williams

Expand full comment

I'd be interested in a post on that, as a non-cognitivist

Expand full comment

He wrote about consequentialism over a decade ago at https://web.archive.org/web/20161115073538/http://raikoth.net/consequentialism.html

Expand full comment

"Moral intutions are important because unless you are a very specific type of philosopher they are the only reason you believe morality exists at all."

I don't think I'm the target audience for this one

Expand full comment

Interesting, can you say more? (Not being snide or anything, genuinely interested)

Expand full comment

Late, but the Stanford Encyclopedia of Philosophy always comes in clutch. Here is the page for moral cognitivism (which James denies above):

https://plato.stanford.edu/entries/moral-cognitivism/

TLDR: moral non-cognitivists don't believe moral statements have truth value. They count among the very specific type of philosophers who don't believe morality exists at all.

Expand full comment

Are you objecting to utilitarianism specifically or consequentialism more generally? I dunno that EA has to be specifically *utilitarian* per se (although the actual EA movement certainly is), but outside a consequentialist framework I don't see how one even makes sense of the "effective" part of "effective altruism". (As I said in my other comment, the answer to Freddie's "who could argue with that?" is, a deontologist! Also see FionnM's comment which perhaps answers "who could argue with that?" in a way that is more like what the general public might say.)

Expand full comment

I'm not sure I really understand the concept of consequentialism. Amartya Sen pointed out that anything counts as a consequence if you describe the outcome so as to make it one, including deontological considerations like "X violated James's rights" or "Y was something I intentionally did, not just something I allowed to happen".

Anyhow Williams didn't believe in deontological ethics either. He wasn't a fan of evolutionary psychology but there's a straightforward way to explain his thesis in evo-psych language:

Human moral intuitions evolved to make social life possible. They don't detect properties of the external world, the way sense perceptions do.

So ethics is not like science, in that there isn't necessarily a best or correct theory. And even if there is, what would make it correct is not that it accurately described the universe, but that all humans believed it and obeyed it and agreed that it was right. That's the only possible test, and most likely there's no theory that could pass it.

Utilitarianism definitely isn't the correct theory, because it's humanly impossible to follow. That doesn't mean humans are tragically flawed because they fail to measure up to an objective standard (which is a very depressing thing that utilitarians are forced to believe). It just means utilitarianism is wrong.

Expand full comment

One good reason to oppose utilitarianism is that it would justify the extinction of the human race, if that turned out to be useful to sentient beings who could experience vastly more pleasure, joy, beauty... than humans can. This sounds like a very weird thought experiment but my techbro friend explained to me last week that it's what "e/acc" means.

Williams didn't live to see ChatGPT but he wrote an essay that demolishes e/acc very thoroughly:

https://edisciplinas.usp.br/pluginfile.php/4369430/mod_resource/content/0/WILLIAMS%2C%20Bernard.%20The%20Human%20Prejudice.pdf?ref=josephnoelwalker.com#:~:text=A%20central%20idea%20involved%20in,know%20any%20more%20about%20them.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I don't think any specific notion of what kinds of utilities matter is baked into utilitarianism: indeed, that's why we have different theories ranging from hedonic to preference to negative forms of utilitarianism, and there's debate about which beings are worthy of moral consideration (bacteria, for instance, tend not to be included). This might not be a philosophically satisfying move, but you could well define the circle of utilitarian moral concern as "humans", or arrive at something similar indirectly by e.g. only considering beings that are at least in principle capable of reciprocal moral reasoning and action. For example, this rules out hypothetical future humans that cannot be interacted with (for the record, present-day humans still have fairly strong preferences relating to the future state of our species, so it's not like the future of humanity wouldn't factor in), rules out a paperclip maximizer that by construction cannot be reasoned with, and by my reckoning rules out most nonhuman animals, although it does quite uncontroversially rule in other great apes, elephants, and dolphins.

To backtrack a bit and touch on the ideas presented in the quoted post, I fundamentally view "morality" in very general terms: trying to put it very simply, I take the preferences of a game-theoretic agent as fundamental; these allow a preferential ranking of possible worlds, and "morality" is a rule or principle that produces and can explain that ranking. For a paperclip maximizer, possible worlds are ranked according to the number of paperclips in them, and you can extract a moral principle: causing more paperclips to exist is mandatory, and acting in such a way as to produce no paperclips is prohibited.
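
A minimal code sketch of that "morality as a rule that produces a ranking" idea may help; all names and numbers here are made up for illustration:

```python
# A minimal sketch: an agent's utility function over possible worlds
# induces a preference ordering. Toy data invented for illustration.

def paperclip_utility(world):
    """A paperclip maximizer ranks worlds purely by paperclip count."""
    return world["paperclips"]

worlds = [
    {"name": "A", "paperclips": 10, "happy_humans": 8_000_000_000},
    {"name": "B", "paperclips": 10**6, "happy_humans": 0},
]

# Most-preferred world first; note that happy_humans never enters the rule.
ranking = sorted(worlds, key=paperclip_utility, reverse=True)
print([w["name"] for w in ranking])  # ['B', 'A']
```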

Put like that, it sounds extremely abstract, but as it turns out, all humans want basically identical things. No one, at least until I made this example up, had any particular preference about whether there exists a T. Rex plushie 0.43544589478 of the way from the center of mass of our Sun to the center of mass of Alpha Centauri. Instead, being evolved social animals, we want pleasure, to avoid pain, to satisfy our preferences, and in particular we are concerned with interactions with our fellow humans, such as fairness. Some of our intuitive views of morality are evolved, some are culturally evolved, and others are results of moral reasoning that appeal to our more primitive notions (for example, that slavery is bad is clearly not a biologically evolved notion, but now that the argument has been made, chewed, and digested, few would dispute its validity). There are some disagreements on a few select points that are uniquely meaningful to us: we can insist that our particular constructed morality is the correct one and try to convince other people with argument or force, or we can reason at one meta-level higher (metamorality) about how to operate in an environment in which moral disagreements exist.

In other words, moral theories (like rule ethics and consequentialist ethics) aren't THE morality - that would simply be "what we actually think is good/desirable" - but our attempts to organize our thoughts about what we actually think (there's no reason why you couldn't in principle develop a moral theory where pain is good and preference-satisfaction is bad, it's just that /humans/ or evolved biological beings more broadly don't roll like that), and can be used as tools for moral reasoning. To the extent they're useful tools/theories, they will arrive at the same results, and I can even prove rule ethics and consequentialism are in principle possible and entirely compatible: I can frame rule ethics as a set of rules "if you're in an epistemic state A, you should adopt epistemic state B", where e.g. A is contemplating murder, and B is determination not to. Likewise, you can assign utility value to each possible epistemic state (like the epistemic state of being in a world where no murder did occur). So it works out at least in principle.

The problem is, they don't work in practice, not as general theories, because in nontrivial cases they will quickly start looking like that function f: A-->B! I think there are times and places where consequentialist reasoning is definitely warranted - the uncontroversial textbook example would involve doctors choosing organ transplant recipients, and I think another correct instance would be selecting targets for charity. But like you say, it fails as a general theory because it's impossible to implement in the general case.

With that in mind, I have started leaning towards virtue ethics (among other reasons) because it's by its very nature heuristic and as such practically implementable (certainly, you can do utilitarianism in a bounded-rationality way, but that results in a rule utilitarianism of "the highest-utility thing is to act virtuously/follow moral rules" or in two-level utilitarianism, and while this works theoretically, since utility is not fundamental but something we care about because we're evolved beings, why take the conceptual detour to begin with?), and because "virtue" isn't a concept deeply rooted in an overly theoretical form of analysis that has failed to produce a formalization that doesn't yield one paradox or another (although, to their credit, e.g. modern ideas of preference utilitarianism are vastly more robust than Mill's and survive the vast majority of realistic scenarios), but rather is conceptually very close to the sort of prosociality that our biologically and culturally evolved sense of morality arose to facilitate in the first place. For the instances where consequentialist reasoning is warranted, virtue ethics can recover it (analogous to the way two-level utilitarianism can recover virtue): a prudent and wise person will exercise good judgement in evaluating their prospective actions.

Expand full comment

Why is it so important to you that the human species in particular survives? There's nothing particularly special about humanity as far as sapient species go, it's just that they're the only one on earth right now. If we can replace it with something better... what's wrong with that?

Expand full comment

> I'm not sure I really understand the concept of consequentialism.

Scott wrote about it over a decade ago here, you may be interested: https://web.archive.org/web/20161115073538/http://raikoth.net/consequentialism.html

> Utilitarianism definitely isn't the correct theory, because it's humanly impossible to follow.

Scott also wrote about squaring human limitations with the limitless demands of axiology years ago: https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/

Expand full comment

This is why I recommended Bernard Williams, to Scott and everyone else. He was a humanist and not a STEM guy, but the STEM-guy elevator pitch for his work goes something like this:

"The purpose of human life is probably not to fulfill as many duties as possible. And even if it were, behaving morally would probably not be the only duty. And even if it were, morality probably wouldn't consist of finding the inflection point that maximizes a function. And even if if did, it probably wouldn't be a function with a single argument."

You need to add layer after layer of autism to reject each of these premises and end up at utility maximization. The great strength of neurodivergent people is that they're better than neurotypicals at ignoring what's irrelevant and focusing on the only thing that really matters... but in this case that's a delusion.

Expand full comment

I don't think (most serious, philosophically oriented) contemporary utilitarians are that naive, at least if you pressed them on the point: rather, I'd imagine you would get them to agree that, regardless of what version of utilitarianism they endorse, the "actual" utility function that humans possess would put the dimensionality of GPT-4 to shame. That is to say, doing utilitarianism "correctly" - perhaps as practiced by an archangel (as in R. M. Hare's two-level utilitarianism; in these circles you might say an aligned superintelligence) - would explicitly capture the sort of human value you are gesturing towards; and even when deployed by boundedly rational humans, it can reason around complicated situations that other moral systems are supposedly ill-equipped to deal with, while also recovering common-sense morality where other theories supposedly fall short in their own ways (rule ethics, for instance, might prohibit you from killing giga-Hitler).

Expand full comment

Well, claiming to know anything about the "purpose" of human life is already denting his credibility...

Expand full comment

None of what Bernard said in your quote engages with those 2 writeups by Scott, and he's describing a strawman of current EA thinking anyway, so I'm now just very confused...

For example, Holden Karnofsky wrote against the expected-value-maximizing perspective as long ago as 2011, when he was running early GiveWell:

https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/

This is a general theme I've noticed with most EA critiques by the way, which is that (by virtue of ignorance) they're simply much worse than the critiques in e.g. https://forum.effectivealtruism.org/topics/criticism-of-effective-altruism

Expand full comment

> I'm not sure I really understand the concept of consequentialism. Amartya Sen pointed out that anything counts as a consequence if you describe the outcome so as to make it one,

In my conception: a rule begets a class of consequences. Particular rules have varying levels of sensitivity and specificity.

If you're a deontologist, you prioritize adherence to clear, simple rules (e.g. NEVER MURDER; cf. Kant), even if you could have improved the classification metrics via adherence to a more complex, contrived algorithm (e.g. murdering someone like Darth Vader is fine if it prevents the consequence of the destruction of Alderaan).

If you're a consequentialist, you prioritize perfect classification of consequences, at the expense of your moral algorithm becoming increasingly contrived and complex. I think this strategy has higher variance, because it can hypothetically beget more desirable realities. But it also makes it easier to justify questionable behavior (cf. SBF), and this is why detractors often complain about the poor track record.
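
A loose sketch of this classifier framing (toy data and hypothetical rules, just to make the sensitivity/specificity analogy concrete):

```python
# Toy framing: moral rules as classifiers over actions. All data invented.
actions = [
    {"desc": "murder a random person", "prevents_catastrophe": False},
    {"desc": "murder Darth Vader",     "prevents_catastrophe": True},
    {"desc": "donate to charity",      "prevents_catastrophe": False},
]

def deontologist_permits(action):
    # One clear, simple rule: NEVER MURDER.
    return "murder" not in action["desc"]

def consequentialist_permits(action):
    # A more contrived rule: murder is allowed iff it averts catastrophe.
    return "murder" not in action["desc"] or action["prevents_catastrophe"]

for a in actions:
    print(a["desc"], deontologist_permits(a), consequentialist_permits(a))
# The deontologist forbids killing Vader even to save Alderaan;
# the consequentialist's more complex rule permits it.
```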

Expand full comment

>I'm not sure I really understand the concept of consequentialism. Amartya Sen pointed out that anything counts as a consequence if you describe the outcome so as to make it one, including deontological considerations like "X violated James's rights" or "Y was something I intentionally did, not just something I allowed to happen".

If you get to pick the deontological considerations, if you get to pick the valued consequences, if you get to pick the Virtues, it's fairly trivial to redefine each one in the terms of the others. I was under the impression that was a pretty trivial property of metaethical frameworks?

Expand full comment

Yes, which is why arguments about utilitarianism vs. deontology vs. virtue ethics are intrinsically dumb and pointless. Without additional grounding in an explanatory theory of where moral obligations come from and why we should care about them, all three of them are just circular justifications you can use for whatever you want.

Expand full comment

Personally, this recent kerfuffle has made me re-think my objection to 'Wokeism' (to which Scott alludes in similar terms). I seem to have been focusing on the headline-grabbing voices from the fringe, then directing my ire at the broader cohort and every principle involved. This may be a naive perspective, but I can't shake the feeling that most of this argument is people talking past each other. One side over-emphasises the fringe people/ideas and the other remains focused on the core principles.

At least this simple reading might help account for why EA critics don't seem to grasp what seems obviously distinctive about a charitable approach driven from the head rather than the heart. They're focused on the people involved. Which is why Bankman-Fried is constantly invoked.

Expand full comment

Good comment. This dynamic is played out again recently with “leftists” and progressives all being tarred as antisemites.

Expand full comment

Agreed- I think Scott should dedicate a bit more soul-searching to how he approaches the topic, in light of his being on the other side of that kind of dynamic.

Expand full comment

I totally agree it's people talking past each other, but I don't think fringe vs core is the most useful framing. Any framing is going to be too clouded by the perspective and language differences of insiders and outsiders: what looks fringe to Scott might look core to me, and vice versa. Also, for insiders defending, there's going to be an impulse to minimize the weird parts, which might be unjustified from a less-biased PoV (I definitely think this plays a role here; Scott is one of my favorite internet writers, but he's too emotionally involved to, ironically, defend EA as well as it deserves).

It's a disagreement over what *is* the fringe versus the core. What's instrumental versus fundamental, what distinguishes it from other movements versus overlaps with them?

Is EA's core simply "doing good better" or is it "maximalization of QALYs via utilitarian consequentialism to the exclusion of other considerations"? Scott defines EA more broadly than most other EAs (IMO), and that's part of the talking past each other.

What is the core of wokeness versus the fringe?

If you want to answer: I've been told before that best-selling authors (to the tune of hundreds of thousands or millions of copies; everyone knows the names) are "fringe," and I find that pretty hard to swallow.

Expand full comment

Your last point actually occurred to me after I'd posted my comment. The truth is that I haven't thought through what constitutes core or fringe very robustly. Which makes me think I should withdraw from pontificating on a subject that, in the case of EA, I'm too inexpert in to add much.

Expand full comment

"Is EA's core simply "doing good better" or is it "maximalization of QALYs via utilitarian consequentialism to the exclusion of other considerations"? Scott defines EA more broadly than most other EAs (IMO), and that's part of the talking past each other."

This strikes me as close to the core of the issue. It's entirely possible to apply quantitative thinking to ethical / moral questions of which thing does more good: this one or that one? But I think the problem lies in trying to find a "universal" way of doing this math. There ain't one because "good" is not objective. It's moral judgment. So whatever method we choose to compare the goodness of outcomes in one context is not necessarily effective in another. Remember: it's a tool to help us decide which option results in more good. It's not the thing that determines what "good" is -- that's us. And we can never define that precisely enough to be useful in all contexts.

If EA is simply using quantitative reasoning as a tool to help us decide between a few simple alternatives that share context, it is very helpful indeed. But if you take it to mean that QALYs are the universal way to measure good, then I think you will lead yourself to absurd conclusions. I think long-termism is generally one of those absurdities.

That's not a criticism of EA itself, unless EA itself entails QALYs or a similar construct. So it comes back to a definition of EA that can't actually be pinned down completely...I'm not familiar enough with the broadest EA community to know if QALYs (and other universalist or overbroad metrics) are definitional to most who consider themselves EA.

Expand full comment

>I'm not familiar enough with the broadest EA community to know if QALYs (and other universalist or overbroad metrics) are definitional to most who consider themselves EA.

I would *venture*, to use Scott's ideology vs movement distinction, that QALYs are very big and probably definitional in the organized movement, while people who just hold the ideology (to whatever degree) but aren't part of the movement put much less emphasis on that kind of measurement.

Expand full comment

> this recent kerfuffle has made me re-think my objection 'Wokeism'

Eh, you'll be back. Sooner or later the fringe will come for you, and then you'll find out how many of the "broad cohort" put actual justice above social identity.

I say that with some bitterness, but also in the hope that if you have this in the back of your mind, you'll be more psychologically prepared than I was when it happened to me.

Expand full comment

"Maybe a better answer is to judge movements on the marginal unit of power. An anti-woke person believes that giving anti-racism another unit of power beyond what it has right now isn’t going to free any more slaves, it’s just going to make cancel culture more powerful."

I don't think it need be about what they *would do* with an 'extra unit of power'. That's more speculative and hypothetical than it need be.

Rather, it can simply be about what they are doing currently. This allows you to observe that anti-racists might have historically done x, y, and z but are currently doing [whatever low-value or harmful thing you would say they are doing now], while their historic achievement of ending slavery remains unopposed and isn't in any sense something they are currently maintaining. Meanwhile, effective altruists are still currently saving lives from malaria, advancing their campaigns for animal welfare, etc., which wouldn't be done otherwise.

Expand full comment

Personally, I strongly agree with deBoer here.

Also, I believe your 3 points can make sense only to Americans (at least according to my own stereotype of Americans as people who hold some variation of the "anarcho-capitalist" view as a default and transparent ideology).

1. "donate some fixed and considered amount of your income to charity"? You mean, pay taxes? And by the way, 10%? Those are rookie numbers.

2. "Think really hard about what charities are most important"? Like a hedge-fund would? This is just ridiculously naive. This kind of "hard thinking" requires training and expertise that takes years to obtain, and to do it properly I'll have to dedicate most of my time just for that, and I'll have to secure the backup and support of other professionals and administrative assistants. In other words: I'll have to get a career in the civil service.

Expand full comment

>You mean, pay taxes?

A portion of your taxes will go towards unambiguously charitable causes (hospitals, medical foreign aid etc.). A portion of your taxes will also go towards the armed forces, building and staffing prisons, bailouts for "too big to fail" banks etc. This is not to argue that we don't need a military or prisons - it's to point out that public spending has many different goals, of which charitable spending intended to improve lives is only one. The whole point of effective altruism is trying to find ways to help people and save lives which are MORE effective and efficient than relying on the state to do so. I think Scott addressed the "that's what taxes are for" rebuttal here: https://slatestarcodex.com/2019/07/29/against-against-billionaire-philanthropy/

>And by the way, 10%? Those are rookie numbers.

If your point is that income tax is higher than 10%, my understanding is that most people who take the Giving What We Can pledge are pledging to donate 10% of their POST-tax income to charitable causes. I fall into this category - I pay my taxes, then donate 10% of my take-home income to charity.

>This kind of "hard thinking" requires training and expertise that takes years to obtain, and to do it properly I'll have to dedicate most of my time just for that, and I'll have to secure the backup and support of other professionals and administrative assistants.

That was kind of Scott's entire point under point #2, about how he usually (but not always) delegates the task of assessing which charities to donate to by just looking up which charities GiveWell recommends. This obviously entails a great amount of trust that GiveWell is qualified to provide accurate assessments - but I think it's much less naive than your claim that you've discharged your moral responsibility to help others simply by paying your taxes and trusting the government to do the right thing.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

OP unintentionally made a funny, because the founders of GiveWell in fact retired from a very successful hedge fund and couldn't find any charity analysis that lived up to their standards :)

Expand full comment

(1) Why would such a charity need to exist? It should be the government's job. (2) Why do you trust this ex-hedge-fund manager dude and simply take him at his word?

Expand full comment

1) If it *is* the US government's job to fund projects like bed net distribution in developing countries, then they're evidently not doing it. <0.5% of the US federal budget goes to economic foreign aid; none of it goes to the Against Malaria Foundation. Shouldn't somebody pick up the slack, then?

More importantly, many people in the US *don't* think it's the government's job to help fight malaria across the world and would actively vote against efforts to do this. I think this is a fairly popular stance among both conservatives and liberals.

2) It's not taking GiveWell at their word, they show their work pretty thoroughly. See ex. https://www.givewell.org/charities/malaria-consortium

Expand full comment

Then write a letter about the importance of bed-nets to your local congressman. (I don't really know much about US politics, I hope this sentence compiles).

There IS a social structure for coordination, joining forces, directing resources, and prioritizing goals: government. EA reinvents the wheel here, with a lot of pathos and drama around it. What mechanism do their charities have that makes them immune to whatever it is you don't like about your government?

Expand full comment

One mechanism is that there's much more alignment, since anyone who participates in the charity is broadly in agreement about goals. Whereas people in a plural democracy often disagree sharply about goals, so a tax dollar sent to the Trump administration may be used very differently from one sent to the Biden administration. Since charities (unlike governments and taxes) are voluntary, people contribute only when they are in rough agreement about goals, so more can get done with less fighting.

Expand full comment

Unless you're hopelessly naive, you must recognise that some proportion of government spending will always have to go to necessary evils like funding the armed forces, building prisons and so on. To the extent that this is true, charitable donations will always be necessary.

Expand full comment

Writing a letter would accomplish nothing because the bottom line is that most Americans do not want to send tax money to AMF. Donating money to AMF saves lives. It seems like you are being willfully obtuse here. Influencing the government of a country is very difficult and often fruitless and in the meantime nothing happens. Donating money to AMF saves lives essentially immediately.

Expand full comment

Writing a letter to a Congressman does not actually cause them to do anything. Whereas if I send money to Rwanda, it goes right there without having to meet the approval of any congresscritter. This is so obvious to me I feel like I must be missing something in your objection. Since you are aware of anarcho-capitalism, have you read Bryan Caplan on how people reason differently in voting vs purchasing, based on the probability of their action having any effect?

Expand full comment

"Christian doctrine, in particular, emphasizes that trying to do good by rationalist calculation is explicitly evil, because you will underestimate the extent to which you will deceive yourself and use the rationalist approach to do what you want."

Or I could do something that would actually have a positive impact on the world, like donate to AMF. I've written to my Congresspeople about renewing PEPFAR and predatory "rewards cards" for plasma donors, both of which are fairly "uncontroversial" things (we're not talking about gun bans or abortion or something), and I am pretty sure it had 0% impact whatsoever on any state policy. Whereas, me donating $4500 to the AMF saved ~1 child from dying of malaria.

Expand full comment

>1) Why would such a charity need to exists? It should be the government's job.

Even if we're maximally optimistic about the government's capabilities and efficiency, I'm not sure it is at all the US (or UK or wherever) government's job to provide malaria bednets to people in Africa.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

>It should be the government's job.

No. Government and compulsory taxation depend on at least some credible lip service to the notion that the government's primary raison d'être is to further the well-being of its citizens.

To EAs, a human life is a human life is a human life (or for some, it's even life in general and not only human life). Not so for governments, which must distinguish between citizens vs non-citizens and prioritize the former.

Expand full comment

> It should be the government's job.

I have some unqualified reservations about this. I suspect the security companies we call "government" would be more effective if they stuck to their core competencies. The idea that governments should also act as insurance companies and charity institutions is a uniquely progressive notion.

Expand full comment

I'm not as cynical about the nature of governments broadly construed, but I agree that their distinguishing characteristic is rooted in monopoly of force and it's a bad idea to start assigning them jobs we want some generic Big Organization to go handle. Handy for solving coordination problems, but not great for things we didn't intend to make compulsory.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

You've just clarified for me my unease with this whole EA approach.

We've seen the results of governments trying to implement "run it like a business, bring in expertise from private industry" on social issues, and it hasn't been an unalloyed success.

A hedge fund or a company producing tinned peas both want to make a return on their expenditure and generate profit. There's principles in business about how you do that.

You can't, however, run schools or a health service or social welfare 'like a business'. That's not to say you can't cut out waste and bloat and find more efficient and effective ways to do it, but you can't "turn a profit" on "we have a population of 70 year olds who need pensions and health care" as a government, because the private industries cherry-pick the profitable cases and leave the rest for - charity? public welfare?

And as yet, no society will tolerate their government saying "just go die in a ditch if you can't pay BigBucks for private services".

So "hedge fund analysis of what counts as effective in charity" isn't actually that great a metric, now I look at it. Cut out waste and bloat? Yes. But for some things, you really can't do a simple "X dollars on mosquito nets save Y lives" break-down.

And now I'm beginning to wonder about that "X dollars saves Y lives" because it's a bit too pat and neat and tidy a formula, given the real world messiness of problems in deprived areas and needy populations.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

"We've seen the results of governments trying to implement 'run it like a business, bring in expertise from private industry' on social issues, and it hasn't been an unalloyed success."

What would be the best example of this, in your mind?

This phrasing ("run it like a business") is a pet peeve of mine because it's typically invoked by people who obviously don't want this - any business that could borrow at interest rates and amounts like the US in the 2010s would have probably gone on an even bigger debt-fueled expansion and no CEO of the US would ever cut IRS funding, for example.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

"you can't 'turn a profit' on 'we have a population of 70 year olds who need pensions and health care' as a government, because the private industries cherry-pick the profitable cases and leave the rest for - charity? public welfare?

And as yet, no society will tolerate their government saying 'just go die in a ditch if you can't pay BigBucks for private services'."

Here you switched in your analogy to talking as though EAs are actually trying to get capital returns on their charitable donations. In EA (and in a related gov't analogy where it was trying to "run it like a business") you aren't looking for returns of capital. A govt's job is to increase social wellbeing and success for its citizens. Sure, the returns can end up being capital, and they might look like nothing but capital if all you are focusing on is the GDP, which is a mistake our govt makes sometimes, and especially politicians who say they want to run America like a business. But here we are comparing it to EA, right? So it would be more accurate to say the govt would run it like a business where the returns you seek are happiness, health, and wellbeing per dollar, or other measurable or reasonably expected good done per dollar. That's how EA tries to think and how I wish our govt thought.

In case it is ambiguous: in this system, pensions would be paid, because it is essential for social wellbeing and for reducing social strife that financial commitments and promises be fulfilled (you implied this when you said people won't tolerate a govt that doesn't provide for the old). Also, social safety nets for the aged may be an essential good: people need to trust they will be taken care of in old age so that, in their younger years, they have some reason to play the prosocial and collaborative game with society in the first place - to help keep society ticking in a healthy way - rather than just trying to "get theirs ASAP" out of a scarcity mindset. Compare the social returns of having pensions to not having them, and it is clear that pensions would be a good investment in most reasonable cases, at least in already economically stable and prosperous countries where you want returns on your populace's wellbeing.

Things (I think) the US govt would probably spend more on if they ran things like a business seeking wellbeing returns, for their own citizens only: Public transit, housing development, bike infrastructure, public gyms, advanced education, hastening immigration and work visa approvals, combatting misinformation, funding for development of plant-based proteins and programs to promote adoption of plant-based proteins, paying organ donors and surrogate mothers.

Expand full comment

Re "taxes" The taxes in USA are famously low (especially for the rich). Donating 10% of your post-tax income have a completely different meaning in San-Francisco vs. Copenhagen. If you think people should give more to the "common good", just raise taxes.

Re "hard thinking": Why I should trust GiveWell? How can I personally perform due-diligence to their operations and decisions? Why should I bother? Whatever your answer is going to be, how is it ultimately better/different/worth-distinguishing from "having a functioning government"?

Expand full comment

"Donating 10% of your post-tax income have a completely different meaning in San-Francisco vs. Copenhagen."

There's plenty of EAs donating 10% in both places, so what's your point?

"Why should I bother?"

Because there are obviously problems that governments are not solving well?

Expand full comment

My point is that a person in San Francisco who donates 10% post-taxes donates much less to the "common good" than a person in Copenhagen who donates 10% post-taxes, and so: (1) this is a non-informative benchmark, and (2) the reasoning Scott presents might make some sense in SF, but much less so in other places (and finding anecdotal examples of people in those other places who agree with this reasoning doesn't mean much).

As for your last sentence:

(A) The exact same reasons that prevent governments from performing optimally will prevent EA's charities from performing optimally (if not already now, then surely when they scale up a little more).

(B) Did you perform the cost-benefit analysis (that EA loves to talk about as if it's a radical new idea) for the 2 alternatives of "inventing a new trademarked movement calling for better charities" vs. "getting involved in politics, trying to direct some budgets towards better goals, and maybe slightly reforming the system for the greater good"? I honestly find it hard to believe that if you did, the charity-thing won.

Expand full comment

Measuring common good by the amount of taxes paid is questionable accounting. Imagine a world in which people spend 30% of their money on food, rent, and medicine. In America, people pay 0% in taxes and buy this all privately (then donate 10% of their post-tax income, leaving them broke). In Denmark, people are taxed 90% and the government provides all food, rent, and medicine to the public (then donate 10% of their post-tax income, leaving them with 9% of their income.)

This is obviously a toy example, but it illustrates the problem with accounting this way: it's obvious that the Dane - "donating" 91% of his income - is clearly not doing 9 times as much good as the American donating 10%.

Expand full comment

I completely agree, and I was trying to make a similar point (namely, "Measuring common good by the amount of post-tax donations people make is questionable accounting").

Expand full comment

I think you're missing 60% in the imaginary American numbers? (Or I'm not getting why the Americans are broke.)

Expand full comment

Scott has freely admitted that the figure of 10% is a somewhat arbitrary Schelling point (https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/). My understanding is that the idea was to select a figure that essentially everyone in the world could commit to donating in perpetuity without completely compromising their living standards. Elon Musk could donate 90% of his income to charity and still live like a king: this is not true of most people. Setting up a standard for "good person" that the overwhelming majority of people will never be able to meet is a self-defeating exercise.

But why am I even justifying this to you? It's so weird being lectured about how 10% is too low a figure to donate to charity by someone who ALSO apparently believes that the right figure is 0% and I should invest all my time and money into political lobbying instead.

Expand full comment

No, the exact same reasoning does not apply. GiveWell could cease to exist if people ceased to believe they were doing a good job and stopped donating money. The US government will not cease to exist, because it can force us to pay taxes whether we trust it or not.

Getting involved in politics means getting involved in a tug-of-war (https://www.overcomingbias.com/p/policy_tugowarhtml), and political activism is generally "folk activism" (https://www.cato-unbound.org/2009/04/06/patri-friedman/beyond-folk-activism/). SBF donated a lot to certain political campaigns; few now regard that as having been a better idea than giving to GiveWell-recommended charities.

Expand full comment

Well, if one doesn't feel one's government is functioning well, one can't singlehandedly make it function better (either by raising taxes or by better allocating the taxes it raises), but one can singlehandedly direct money towards charities one feels are functioning well. But also, even if one has a very well-functioning government, that government may have a different set of interests than I do as a donor. I think it's good for rich governments to spend money on basically altruistic causes like treating disease in other countries, but a government's chief interest always has to be the protection and well-being of its citizens. Like I don't think the US government should sell off the National Parks and use the money to fight disease in poor countries, even though fighting disease would help more people than preserving the National Parks, because the National Parks serve many interests of the people of the US and that's the US government's main job. But I want some of my money to go to fight disease in poor countries, over and above what I give to my government in taxes to protect the national interest.

Expand full comment

In that case maybe one should create/support a movement that tries to somewhat fix or improve the government, instead of creating/supporting a movement that is self-motivated to maintain the broken system, if not break it further?

Expand full comment

Are you aware that Effective Altruists do in fact put some of their money towards changing the laws to improve flourishing and reduce suffering? But also, if you read my comment more carefully, you'll see that even an optimal government would still not be spending my tax dollars optimally from a *charitable* perspective, since it would be paying for things like the armed services and fireworks shows on July 4 and various stuff like that which is obviously not optimized for human flourishing, but rather is aimed at improving and protecting life *for its own citizens,* which is what a government (unlike an altruist) is supposed to do.

Expand full comment

I am aware, and it relates exactly to the point deBoer was making: to the extent that EA does good, it does nothing new (in this case, lobbying), and to the extent it does something new, it does nothing good.

The discussion and debate about social coordination and about the direction of resources is called "politics", and you're most welcome to join and try to direct more focus toward your favorite goals. What's the point of the "charities" framework?

Expand full comment

>Re "taxes" The taxes in USA are famously low (especially for the rich).

I don't live in the US.

>how is it ultimately better/different/worth-distinguishing from "having a functioning government"?

Because, as I quite explicitly explained in the comment you're replying to, public spending is not optimised to maximise the alleviation of human suffering. Alleviating human suffering is ONE of the goals of public spending, but far from the only one - probably not even in the top 10 in descending order of priority. The point of effective altruism is to get the most "bang for your buck" by donating to causes which will have the highest impact on the specific goal of alleviating human suffering.

Or to put it another way: suppose every single dollar of charitable donations in the year to date was forcibly redirected away from its intended charitable cause and into the coffers of the government where the donor resides. Do you think that hypothetical world would contain less human suffering than the world in which we currently reside? I certainly don't.

Any functioning government will need to invest at least some proportion of its budget into its military, its police service, its prison service etc. I recognise that this is unavoidable, am happy to pay my taxes, and am furious when people (especially rich people) don't pay their fair share. But I don't think that, by paying my taxes, I've immediately discharged my ethical responsibility to help others. When I walk past the military barracks not far from where I live, I don't look at the armoured vehicles and feel a sense of pride that my taxes are alleviating human suffering by funding the military. I feel sad about the fact that the government has no choice but to fund a necessary evil like the armed forces. And I say this in SPITE of the fact that there's a convincing argument to be made that the military in my country has had a net-positive impact on the world, given the small number of wars they've been involved in compared to the large number of UN peacekeeping missions.

Expand full comment

This is a common misperception. Actually, taxes in the USA are low for lower-income and especially middle-class people, not for the rich. I doubt that a wealthy person in San Francisco pays meaningfully less than a wealthy person in Europe. The top federal tax bracket is 37%, plus 12.3% for California income tax, plus 15% pension tax (7.5% at the employee level, 7.5% employer), plus 8.625% sales tax, plus real estate tax if you own a home (wealthy people do). What is the tax rate in Copenhagen?

Expand full comment

An individual does not have the power to raise the tax rate. You can't "just raise taxes". You could send the IRS more money than you owe, donating to the government like a charity, but most of us regard our government as spending money much worse than GiveWell's recommended charities, and we can't "just" change that as individuals.

Perhaps you don't trust GiveWell. Don't donate to them, then. They do give reasoning for their decisions, which most of us find more persuasive than that of other charities.

Expand full comment

> Re "taxes" The taxes in USA are famously low (especially for the rich). Donating 10% of your post-tax income have a completely different meaning in San-Francisco vs. Copenhagen

They're really not that low. The top federal tax bracket is 37%; in addition, the state of California has a top income tax bracket of 13.3%, as well as a sales tax. There are also local taxes, payroll taxes, etc. A wealthy San Franciscan can easily pay more than half their pretax income in taxes.
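
As a back-of-envelope sketch of how those headline rates stack (my own arithmetic, using only the two figures cited above; it ignores deductions, bracket thresholds, and the capped portions of payroll tax):

federal_top = 0.37      # top federal income tax bracket cited above
california_top = 0.133  # top California income tax bracket cited above
print(federal_top + california_top)  # ~0.503: over half of each marginal dollar, before sales, property, and payroll taxes

Note that 50.3% is a marginal rate, not an average rate, so whether someone really pays "more than half their pretax income" also depends on how much of their income falls into the top brackets.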

Expand full comment

This is exactly why it was a terrible mistake for the government to enter into charity. It’s not its job, it doesn’t do it well, and that it acts like it *is* its job measurably decreases almost everyone’s impulse to perform private charity.

Expand full comment

> my understanding is that most people who take the Giving What We Can pledge are pledging to donate 10% of their POST-tax income to charitable causes.

I don't know what most people do, but https://www.givingwhatwecan.org/en-CA/pledge#faqhow-do-pledge-members-calculate-income says:

> we define income as your gross salary[...] While we have defined income as pre-tax in the past, after speaking with members in a variety of situations we believe there should be some flexibility here.

>

> If you are donating to a charity that is tax-deductible in your country (or Gift Aid eligible in the UK) we recommend basing your giving on your pre-tax income.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Thanks for that, I wasn't aware.

Expand full comment

> I believe your 3 points can make sense only to Americans

They make sense to a lot of non-Americans too. I'm in the EA group in my country in SE Asia, if you ever swing by come say hi :)

> And by the way, 10%? Those are rookie numbers.

Scott talks more about why 10% at https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/ in case you're interested to learn more (instead of just making fun of him)

> This is just ridiculously naive. This kind of "hard thinking" requires training and expertise that takes years to obtain

That's right -- fortunately for you, others have done that for us (just like how others have created index funds for us to just put savings in), see https://www.givingwhatwecan.org/best-charities-to-donate-to-2023

Expand full comment

In your comment thread on Freddie's post, you mentioned a hypothetical person who "tried really hard to figure out the best charity and donated to the endowment for a performing arts center or something. I would want to talk to them and see where our assumptions differed."

I'm not exactly that person, but I'll take that as an invitation anyway. The part of that comment that really jumped out to me was the idea that the performing-arts-center donor might be motivated by "some kind of galaxy-brained idea for how the plays at that performing arts center would inspire a revolution in human consciousness which would save millions of lives."

This jumped out to me because my own thoughts on the value of charitable giving are almost diametrically opposed. I like the idea of donating to performing-arts-center-type charities, since at least some of these concretely embody what I value most about humanity. The main motivation I would have for donating to a charity that tries to save lives is the "galaxy-brained" thought that just maybe, the people whose lives I save might go on to build even more performing arts centers than I would have been able to fund with my own money directly. But it's pretty hard to convince myself that this is actually true, so if I really were to rationally reflect on quantifying the impact of my charitable giving (in relation to my actual values), I would be less rather than more inclined to donate the money to mosquito nets.

I do wonder whether something along these lines might be behind the aversive reaction that so many people have to EA discourse, even in its "mosquito nets" form. Many people have an uneasy sense that maybe they ought to care more about saving lives than about donating to their old school, a performing arts center, the local community sports club, etc. But they don't really care more about saving those lives, and they don't like having to think about that fact. EA discourse forces people to reflect on the discrepancy between what they actually value and what they think they are supposed to value, which is a deeply uncomfortable thing to reflect on.

Expand full comment

I think performing arts centers might be the kind of thing that should be funded via Alex Tabarrok's "dominant assurance contract". It's not quite a standard market exchange where you are paying for something that gives you utility, but there's a little of that to it.

Expand full comment

Can you elaborate on your reasoning here? I'm not familiar with assurance contracts as a funding mechanism, but based on Wikipedia it seems they are designed to solve free rider problems. What's wrong with funding performing arts centers by just giving them money and evaluating their "effectiveness" (on some set of criteria) as we would for any other charity?

Expand full comment

Lots of the "high arts" rely on donors these days, while normal paying for consumption is for populist "mass art". The former can be thought of as a quasi-charitable cause, in which the donors believe it is a general good for this art to exist rather than for their own personal enjoyment (but most of the population might not share their preferences and have similar willingness to donate). Dominant assurance contracts make the provision of such things more entrepreneurial. People who are good at convincing people to donate AND cheaply providing the quasi-public good will be drawn to that role. The contracts are supposed to be designed so that the value donors perceive from the good is more than the amount they pay for it, and thus whether they receive said good or the (dominance-providing) compensation from the entrepreneur as legal consequence of not providing said good, they come out ahead.

Expand full comment

I'm still not getting it. Wouldn't your logic here be applicable to all charitable giving? Is there something about the provision of "high arts" that makes these contracts more appropriate than for the provision of mosquito nets?

Or from a more traditional EA perspective — does the logic here imply that Open Philanthropy should abandon its customary targeted funding approach, and instead just draw up dominant assurance contracts for QALYs and let the entrepreneurs best figure out how to supply them?

Expand full comment

The donor isn't getting anything out of mosquito nets. The mosquito nets aren't even going to be in the same continent as the donor, typically. The donor can't personally evaluate it like a local performing arts center, and instead relies on GiveWell to do such evaluations. My own decisions to donate to GiveWell recommended charities aren't based on considering how much utility they give me and donating until the marginal return is 0, rather I set a budget for charity beforehand and give that to them. If they manage to get a lot more efficient at delivering charity, that doesn't actually affect how much I donate.

Expand full comment

"But they don't really care more about saving those lives, and they don't like having to think about that fact. EA discourse forces people to reflect on the discrepancy between what they actually value and what they think they are supposed to value, which is a deeply uncomfortable thing to reflect on."

Perhaps there is a cultural problem, people being coerced into paying lip service to things that they _don't_ really value. Perhaps if there were more pushback against moralists it would help people openly say they value what they _really_ value.

Expand full comment

Perhaps, but it may be that a certain amount of hypocrisy ends up leaving all of us in a better situation. I'm not sure I really want to know my neighbours' true values or want them to know mine.

Expand full comment

Could be. I tend to be biased in favor of truth, but there are certainly second order and third order consequences that I haven't thought through.

Expand full comment
author
Dec 2, 2023·edited Dec 2, 2023

Yeah, that would be an interesting discussion. I'd want to know more about what you mean by "concretely embody what I value most".

Is it that you go to performing arts centers and want to give something back to them (beyond the ticket price)? I think this is admirable, and would classify it with tipping waiters in a category of "things you're not legally obligated to do, but you should do if you're a good person and use the relevant service". Maybe this would be classified as "being a good citizen". I think this is a slightly different motivation than altruism, although both are good things.

Is it that you think it makes other people's lives better to be able to see performances? Here I think we just get to the very standard EA argument of "you can better other people's lives more efficiently in other ways".

Is it that you think art is good in itself? I sympathize with this, but it breaks down when I try to think about it too rigorously. Suppose there have already been 1,000 performances of a certain concert. Is it good in itself for there to be a 1,001st? Is it good in itself for people to go to the performing arts center to see it, instead of seeing a recording on Amazon? If you could spend $100,000 on getting there to be one extra performance of Beethoven's Fifth at your local performing arts center that 500 people would see, vs. bribe 5,000 people $20 each to listen to a recording of Beethoven's Fifth on YouTube, which would be better?

Is it that you like art, and you want to use your money to signal a sort of cosmic vote in favor of art? I sympathize with this, it just doesn't seem like charity, exactly. Maybe charity also comes from the same place (you want to signal a cosmic vote in favor of health and goodness), but it still seems subtly different, I'm not sure.

Expand full comment

I tried to become part of the EA community; from about 2020 I gave part of my income, first to Evidence Action, then to GiveWell. And then in 2022 sanctions were imposed against Russia, which made any transfer of money from Russia abroad as difficult as possible. And I gave up.

In some ways, this situation only increased my sympathy for EA, because it reminded me of how stupid government decision makers are compared to effective altruists. But partly it also made me think about how EA is a movement for the First World, totally not designed for similar situations that happen outside of it.

Expand full comment

In that case I would recommend aiding those among your immediate neighbors who seem most likely to go on to do the same for others, taking whatever action you can get away with (as opportunities arise) to ensure that Putin loses the current war in as swift and humiliating a manner as possible, and trying to make sure that whoever replaces him implements a land value tax and UBI rather than defending extractive industries and organized crime.

Expand full comment

That's what I'm talking about. When "help the poorest people in the world by pushing a button" turns into "first, overthrow the government...", we can state that effective altruism doesn't work here.

Expand full comment

Note that I said you should first aid your neighbors. That's still a form of effective altruism; the whole framework doesn't shatter just from accounting for financial barriers.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

If you live in Russia, you have full permission to just worry about yourself right now. If things keep going the way they are, it's not going to end well.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

> Freddie deBoer says effective altruism is “a shell game”

If his claim amounts to saying it is a vacuous concept, because it is shared by practically everyone one way or another, then I guess the main clue to rebutting it is in the word "effective"!

I must say I've never understood why someone can feel guilty about the worse luck or predicaments of others, unless of course that person had a role (by commission or omission) in unfairly bringing those misfortunes about! Guilt in that context seems a backhanded way of putting themselves on a pedestal by the conceit of taking upon themselves responsibilities beyond what they truly own.

Expand full comment

I don't feel "guilty" in the sense that I think I'm personally responsible for small African children dying of malaria through no fault of their own. But it makes me sad, I recognise the blind luck that resulted in me being born in an environment in which I'm unlikely to die of malaria before my fifth birthday, and if the boot was on the other foot I think I would feel entitled to assistance, even from someone who bore no personal responsibility for my situation.

It's a widespread idea, the notion that the only people who bear any responsibility for alleviating suffering are those who caused said suffering. It's not one I endorse, nor does Scott (https://slatestarcodex.com/2015/04/19/blame-theory/). I think that anyone CAPABLE of helping to alleviate suffering in this situation therefore bears some responsibility towards doing so. To use the classic example: you walk past a pond and there's a small child inside, struggling to keep their head above water. They will die if you don't intervene. You didn't push the child into the water; you weren't in charge of supervising them and were simply too busy looking at your phone to notice them leaping into the pond - they're just some random child you don't know. I would consider it a massive dereliction of duty for any able-bodied adult to fail to rescue the child, even if they bore no responsibility for the child ending up in the pond. Once you've accepted that premise (that you have a responsibility to help those you are able to), it logically follows that there's no reason you bear any less responsibility just because the child is geographically separated from you by a greater distance.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

> you walk past a pond and there's a small child inside, struggling ..

Yes, I agree it would be unreasonable for any able-bodied adult not to rescue the child in that situation, where there could be no physical danger themselves at the time, nor legal liability subsequently.

But even in that apparently clear-cut example, if the weather was freezing then someone recovering from pneumonia might think twice before literally jumping in. Well maybe they don't count as able-bodied, but someone with lots of past kiddy fiddling convictions might also hesitate to become involved, for fear they would later be falsely accused of touching the child with immoral intent or being involved somehow in causing its predicament!

And even if you would rescue that child, who you personally encounter with nobody else around, I'm not sure it logically follows you should thus also bear any responsibility for children anywhere! I mean, you are responsible for washing your car or trimming your lawn, and will receive neighborly opprobrium, and in some jurisdictions even fines, for neglecting either. But that doesn't imply you have any responsibility for lawns in, say, Kuala Lumpur, however weedy or overgrown they may become!

For a start, you couldn't help kids in Africa in person. Your charity funds would have to go through intermediaries, who might siphon off most of the money to buy Rolls Royces or rocket launchers. Also, it must be assumed that children everywhere have local adults around them, and it is those adults who are responsible for them, or should be. So you, being distant, are not in a position to help in person, nor even to know where any indirect help is most needed and can be best directed.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

>Well maybe they don't count as able-bodied, but someone with lots of past kiddy fiddling convictions might also hesitate to become involved, for fear they would later be falsely accused of touching the child with immoral intent or being involved somehow in causing its predicament!

Refusing to *save an innocent child's life* because you're worried someone might think you're a pervert is the most pathologically selfish position I've encountered all year. And in any case, in many jurisdictions refusing to save a child in these circumstances is itself a criminal offense (https://en.wikipedia.org/wiki/Duty_to_rescue), so that's not much help to you.

>Your charity funds would have to go through intermediaries, who might siphon off most of the money to buy Rolls Royces or rocket launchers.

PRECISELY the problem EA organisations (among others) have been trying to address since its inception. The entire point of GiveWell is to promote charitable organisations which actually do charitable activities and spend a minimum amount on Rolls-Royces.

>Also, it must be assumed that children everywhere have local adults around them, and it is those adults who are responsible for them, or should be.

Sure, but it's not like the adults in sub-Saharan Africa are much better equipped to protect their children (or themselves) from malaria than the children are equipped to protect themselves. It's not like these adults have bed nets which they're selfishly hoarding, thereby allowing their own children to die. Thus: charitable aid from people in a better position to assist than the local adults.

Expand full comment
Nov 30, 2023·edited Dec 1, 2023

> Refusing to *save an innocent child's life* because you're worried someone might think you're a pervert is the most pathologically selfish position I've encountered all year.

You're shooting from the hip there, and missing! My hypothetical example was of someone who was known to be a pervert. That isn't at all the same as an absurd suspicion that people might assume one could be a pervert, with no past history, just for having rescued the child!

Expand full comment

Let me rephrase: a known pervert with a history of convictions *refusing to save an innocent child's life* because he's worried that someone might erroneously assume he was groping or interfering with the child is the most pathologically selfish position I've encountered all year. I don't even care if his worry is well-founded and there's a good chance that he actually would be arrested as a result: I STILL think he should save the child's life.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

If that's the most pathologically selfish position you've encountered all year, then you must not read the news much. Hell, just a couple months ago, it was revealed that Johnny Kitagawa, owner of Johnny & Associates (a Japanese male idol agency that had a de facto monopoly), was systematically molesting the agency's underage talent for over 40 years. The idols knew about it, the staff knew about it, even the broadcasting industry knew about it, and nobody spoke up. The media covered it up for decades, and the only reason the company was ever held to account was because the BBC released a documentary covering the scandal. Not that it even matters at this point, since Johnny died three years ago. What a shitshow.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

>Refusing to *save an innocent child's life* because you're worried someone might think you're a pervert is the most pathologically selfish position I've encountered all year.

That's almost exactly the argument for decriminalization of knowingly transmitting HIV: that people would rather be pathologically selfish and not get diagnosed at all, since if you don't know you have it you can't be charged.

Edit: I do think that was bad law on other grounds, but I found that argument monstrous and sociopathic. /end edit

>PRECISELY the problem EA organisations (among others) have been trying to address since its inception.

Even so, those are the logical steps that make the drowning child right in front of you different from the vitamin-A deficient child 8,000 miles away. It's great that EA (among others) is trying to reduce the number of steps and improve the quality of them, but there's still (and always will be) a lot of steps and middlemen and trust involved in the latter that don't apply to the former.

It isn't exactly the same but reminds me of Scott's review of WWOTF, walking through the hypothetical but logical steps that wind up getting his eyes pecked out.

Note I'm reacting specifically to your phrasing from above:

>it logically follows that there's no reason you bear any less responsibility

I disagree: I think it's approximately Newtonian, and that your responsibility decreases more-or-less proportionately with your ability to affect the situation (of course we'd quibble over just how to define that, too). Perhaps it never reaches zero, and I agree it is fully a good thing to help regardless of distance, but the responsibility should not be equivalent, because your influence is not equivalent. Each additional intermediary introduces more risk and... moral fog.

Expand full comment

If Clark Kent can clearly hear the child's distress from eight thousand miles away, improvise an adequate flotation device from spare garbage bags and coffee filters in the Daily Planet's break room, then lob it out the window on a pinpoint-accurate suborbital trajectory with his bare hands, all without even overstaying his lunch break, is it any less monstrous for him to pass up the chance? What about Cyborg or Iron Man using tech for similar results?

Expand full comment

Is this just for fun or trying to walk me towards a realization using fictional examples?

Fun question, either way. I would say in the Superman example, given your terms of his abilities, yes he probably does have roughly equivalent responsibility regardless of distance.

Cyborg I'm less familiar with, but sure. Iron Man, would it be closer to the Clark Kent example, or more like his trying to police the world with Ultron? If the Ultron system had worked, and for some reason he just left out "save drowning kids" from the lists of tasks, that would be quite a monstrous oversight.

Expand full comment

I feel like I basically agree with effective altruism on its own merits (donate to charities that do a lot of good!). I think the cultural part that can be grating is the expectation that you show your napkin math to someone else and have to listen to an argument about how your math undervalues x-risks. Moral persuasion, in that sense, can feel a little bit like a cult or a religion.

Peer pressure to take the Giving What We Can pledge (something most people probably agree with in principle, to FdB's point) is a bit different, and I accept it's more foundational than the result of the napkin math; it just gets less attention.

Expand full comment

I don't understand the comparison to cults and religions. I've been proselytized to many times and never did I think the proselytizer was using an overly quantitative, hyper-rational approach.

Expand full comment

You haven’t been proselytized by a Catholic then

Expand full comment

Nobody has been proselytized by Catholics because we're so darn bad at trying to convert people, because we really don't want to do it, that's the priest's job 😀

Expand full comment

And sometimes it takes a while. One of our most promising local politicians got infected by Catholicism while at Oxford, and then a few years ago decided he could do more good for the world as a Jesuit priest than as a statewide-elected official.

https://en.wikipedia.org/wiki/Cyrus_Habib

Expand full comment

A Jesuit? Oh, no!

Seriously, though, that's very admirable on his part. Whether he sticks it out to the end of the novitiate or not, he's testing his vocation and that's honest.

Expand full comment

I think the key point I was trying to make is proselytization, not the exact values the group holds.

Expand full comment

But there's every difference in the world between being proselytized to by a cult, which tries to evade your critical thinking, and being proselytized to by a movement that tries to engage your critical thinking.

Expand full comment

"Tell me you're not Jewish without telling me you're not Jewish."

Expand full comment

My take:

* FDB, EA, and a handful of associated communities live in a bubble, and share values that are very rare in the world outside it. EA tries to be big-tent within that bubble, and fails, because people like to loudly disagree about all sorts of things.

* EA basically functions as the charity arm of the group, the brand, the sub-community.

* FDB, as a helpful contrarian, dislikes many aspects of the values and behavior of the people in the group. He likes the expressed principles behind EA and general style of reasoning, but incorrectly assumes they are universal, because bubble. It's too "obvious" to be relevant.

* Having dismissed the expressed values, FDB focuses on their actual actions and focus.

* The people in EA are "weird" (meant in a nice way, sorry). The kinds of things EA supports are, surprise surprise, the kinds of things that these kinds of people conclude are valuable.

* If there were no EA brand, these people would not suddenly find themselves valuing different things and giving charity towards those instead, they would find themselves either giving less charity or giving to those same causes regardless, possibly in a less effective manner.

* FDB seems to miss something critical: EA did not make these people into weird people. They started out as weird people. Removing the brand will not eliminate all their "mistakes" (divergences from what FDB considers sane behavior) and turn them into EA-but-less-incorrect.

* How might one judge EA? In terms of the group's absolute impact, it's pretty positive. But: if you start with the assumption "there is this group of smart people who are willing to donate lots of money to charity" and then consider the "EA" part to be the "...and then they donated to areas that are less than ideal, from the perspective of my own values", then EA looks like a problem.

I myself have plenty of disagreements with EA, but I recognize that they're not people who would suddenly have better values without it. EA causes these people to give more charity. More charity is good.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I agree strongly with the bubble sentiment. This mostly comes from science/math/computer culture where in daily life right/wrong is a rational choice.

I'm from a midwest state, and when I explain EA in even the most generous terms possible to the people around me, they react like the adherents must be freaks, scammers, perverts, or otherwise evil people to think they can just "decide" and "analyze" their way into doing the most good that they can.

Most people outside the EA bubble think that doing good requires lifelong dedication and practice in restraining the internal forces that bend rationality toward self interest.

Expand full comment

(I'm not personally an altruist, so commenting from outside EA.)

Do they see donations to charities recommended by GiveWell as biased toward self-interest? I would think that using GiveWell as an independent ratings service would avoid that.

Expand full comment

If EA consisted of just using GiveWell, I think it would be uncontroversial. It's unfortunately the Trojan horse of utilitarianism and the perverse logic of longtermism that triggers revulsion in the average person.

Expand full comment

I mostly agree. I think that the lives-saved-per-dollar metric of GiveWell implicitly includes utilitarianism, though not longtermism. GiveWell already has some machinery for ranking and supporting local charities, which I suspect gets around most of the problems of unmodified utilitarianism.

Expand full comment

There's this joke everyone makes when they first hear about "Evidence-based medicine." One version is, "wait, as opposed to what!?"

But eventually you have to admit this was a rallying cry for a reason.

The "criticism" section of EBM has a spooky symmetry here:

https://en.wikipedia.org/wiki/Evidence-based_medicine#Limitations_and_criticism

Maybe this is a sort of ring cycle of epistemology.

A: "Hey we need to more intelligently do X."

B: "Well of course we should do X smartly, everybody thinks so!"

A: "Ok but there's still this problem..."

Expand full comment

Oh yeah! Medicine has just made leaps and bounds of progress since the slow, painful shift from eminence-based medicine to evidence-based medicine. Medical students today take it for granted that randomised controlled trials are the highest form of persuasion; it was not so for much of our history.

Is it the case that effective altruism is like evidence based medicine, but for charity evaluation? Perhaps; I’d be more convinced if it had stuck to bed nets.

Expand full comment

It’s the “people being large language models” thing. Everyone knows what to say. But actually doing what you say... well... that’s a lot harder.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

> Systematic Altruism

Oh, I see. You want the movement to share your initials, thus kabbalistically making you its natural leader! We're onto you!

(Good name though.)

Expand full comment

The big problem I have with EA is that, as a movement, it has the Silicon Valley mindset, and I'm instinctively drawn to the DC mindset (and FdB, being a Brooklynite, has a different mindset again).

The Silicon Valley mindset is about building organisations that can do things, whether those are charitable or businesses. The DC mindset is about taking relatively small amounts of money and using them to leverage huge amounts of government money and power.

Climate change is an x-risk, but it's not one where modest amounts of charitable money have leverage. If Open Philanthropy could buy 20% of the land area of Arizona and cover it in solar panels, then that would make a difference, but that's spectacularly outside the budget envelope of any charitable entity. There might be things that can be done in research (industrial heat, energy storage, flight, shipping), but that doesn't seem to be a place where relatively modest charitable funds could make a big difference, and there looks to be plenty of commercial investment in that research anyway.

But: you all live in California, there's a massive project that would have a big impact on carbon emissions if it was successful (CAHSR), and there is zero effort going into making that spending cost-effective. A few million dollars of lobbying budget aimed at specific decisions (like the massively overbuilt elevated sections that could be ground level) could have saved billions of dollars from the budget. It is always politically easier to spend money inefficiently and buy off all potential political opposition (if you build it on the ground, it splits farms; the cheap solution is to force farmers into land-swaps so they end up with a farm all on one side of the line, but that annoys those farmers and turns them into political opponents); if there's effective lobbying to spend money efficiently, then officials have to pay a price either way, and will usually choose to do the "right thing", ie spend the money efficiently.

There's also lots of federal lobbying to be done (easier permitting for solar panels, easier permitting for long-distance electrical transmission cables), but the present EA institutions have shown no particular facility for lobbying. I would love to support an "effective green policy lobbying" think-tank that did assessments of how much difference individual policies would make to carbon emissions and recommended things like "accepting a compromise where you trade off building a couple of oil pipelines for a bunch of HVDC electrical transmission lines", because the oil pipelines don't increase oil consumption by much but the HVDC increases the possible solar share of the electricity mix significantly. The research on this is done, but there's no lobbying organisation; the current green organisations are oriented around symbolic wins (like blocking a pipeline) and activist thinking, not around pragmatic cost-benefit analyses and lobbyist thinking. I think that the cost-benefit analyses are very much an EA-mindset thing, and climate change is a clear x-risk, so it should be right in the EA wheelhouse. But the Silicon Valley approach is absolutely the wrong one for climate change: the DC approach of building a "highly respected think-tank" and having lots of semi-associated lobbyists is what works for this sort of issue.

Expand full comment

Does this ... actually work though?

The federal government is blocking nuclear power plants. So far as I can tell, it has done far more harm to the environment than good.

This idea of “highly respected think tanks” has led to the absurd state where we are enacting policies that hurt the environment in the name of helping it, because in politics people care about looking good regardless of whether we actually do good.

Expand full comment

"in politics people care about looking good regardless of whether we actually do good."

Yes, of course. That's what politics is. That's why I don't think you should be trying to be elected officials, because, as an elected official, you have to do what looks good even when it does harm. What I'm proposing is that you create organisations that work out what is good and then make it look good.

So create a pro-environmental pro-nuclear organisation: get people who are sentimentally pro-environment to support nuclear power. Niskanen is doing this, and has actually shifted quite a bloc of Democratic votes on this one - people who were instinctively in favour of building things but couldn't face down their local environmental groups, and now can, because Niskanen is unquestionably pro-environment, doesn't take money from nuclear companies, and keeps saying that building nuclear is pro-environmental. Combine them with the (less concerned about environmental issues) Republican mainstream, and there's probably a majority now.

The challenge now is that the majority on this is bipartisan (ie centrist) and, structurally, Congress doesn't do bipartisan things, because the agenda-setting is done by party leaderships and if the leadership of one party endorses something, then most members of the other party will feel obliged to oppose it. If neither leadership endorses it, then it won't get near a vote, even if there is a majority in favour.

Expand full comment

I think the dysfunction and chaos that we see now are direct results of this idea that “government is how we do things together,” and of trying to use it to solve all kinds of problems. In short, I think “the DC approach” is what is eventually going to break DC.

Is there something I’m missing here? I’d be happy to be talked out of this perspective, but I don’t see how our government is anything but about to break under the weight of its own corruption and complexity. I don’t think we can possibly build a new one in its place, since we don’t agree on anything. And this collapse could be so bad that it would wipe out all the good done.

Expand full comment

I think the SV attitude that the solution to this is to throw your hands up in the air and refuse to do anything about the problems with government is a major cause of the problem. The sorts of pragmatic people who think hard about the most effective way of delivering on goals that are desperately needed in politics are, instead, driven away by the ideological perspective that you express - that there is an x-risk here ("this collapse could be so bad...") but that you don't want to do anything about it.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

This is quite the projection! I absolutely want to do something about it. What I think we should do about the problem is return to the constitution as it was originally designed. Let states be states again. A return to the decentralized, distributed governance structure is what I think we most need. I’m guessing we disagree there. Are you sure you want me doing more to advance this goal of mine?

Expand full comment

Without serious effort to improve the governance of the states, I don't see how that helps. Going back to my original example: federal government dysfunction is not why CAHSR has been a disaster.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

The dysfunction and chaos that we see now are direct results of the fact that coordination is difficult.

Government is an example of a coordination mechanism. Just like a corporation. Or markets in general. Different coordination mechanisms have their pros and cons and, existing in the same environment together, they can lead to curious outcomes. In the best case they put checks and balances on each other; in the worst case, they accumulate each other's vices a thousandfold, leading to a dysfunctional equilibrium.

Putting all the blame for such an equilibrium on one particular coordination mechanism you do not like is an old move in the political playbook. And rarely justified.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I agree with government as a coordination mechanism, but I think in order to scale you have to severely limit the scope of its operations.

Am I to believe that this particular coordination mechanism can scale to arbitrary size and scope and still function?

Expand full comment

I work for an animal advocacy organization called GAIA that tries the "DC approach" (although we are in Belgium). When we applied for EA funding we got rejected because it wasn't "Silicon Valley approach" enough. We continued with small amounts of funds anyway, and last week we managed to get a constitutional amendment adding animal welfare to the constitution through the senate: https://forum.effectivealtruism.org/posts/5Y7bPv259mA3NtHt2/?commentId=5SKKjsyYExypR6kmY

I'm technically an EA (I run EA Ghent), but I resent the leadership of the EA movement because of its "Silicon Valley approach". Giving money and prestige to someone like SBF was deemed an obviously good idea, because he embodies the Silicon Valley approach, while something like GAIA obviously does not. We (EA Belgium) have access to the seat of the EU in Brussels. If the EA community gave us funds we could use them to lobby for EU-wide systemic change, but they continuously refuse to do so because it isn't the "Silicon Valley approach".

(And while the money matters most, it's not just about the money; the attention matters too. The EA leadership won't give us any, even when we succeed. E.g., last week, when I shared on the forum that we managed to pass it through the senate, ordinary EAs upvoted it to be the most upvoted quick take of the week, yet the leadership still wouldn't add it to the EA Forum digest.)

Expand full comment

SBF wasn't "getting money from EA", he was GIVING money (that he stole from his customers).

Expand full comment

The only reason Alameda Research and FTX could get started is because EAs sent millions of dollars to SBF, not just because they bought his crypto but also as investments (for example, Jaan Tallinn loaned 110 million dollars worth of ether), and then later EAs supplied him with labor and outreach opportunities and status.

(Also note that I said “like SBF”; the SBF criticism is important, but this is not a one-off mistake but a general pattern.)

Expand full comment

Alameda started out in arbitrage. There's no need for a charitable donation there. And "loaning" (rather than "giving") someone money is what you do when you expect it to be paid back, which FTX was presumed capable (and for a time actually was capable) of doing in the past.

Expand full comment

From google:

give /ɡɪv/ verb

1. freely transfer the possession of (something) to (someone).

Similar: present with, provide with, supply with...

2. cause or allow (someone or something) to have or experience (something); provide with.

Similar: allow, permit, let have...

My use was correct since it falls within these definitions, and I didn't say he got a 'charitable donation'. You said that 'SBF wasn't "getting money from EA"'; I'd say he did get money from EA.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

I find this really confusing and concerning. Can I ask what funder(s) you applied to?

And when you applied, did you have any wins yet, to demonstrate success? If not, I strongly think you should reapply. It's hard for me to believe a major animal funder wouldn't fund you now with this recent success. Are they perhaps confident the Albert Schweitzer Foundation or other major animal nonprofit will take it from here? (I don't think that's fair but it would be a reason, otherwise I can't think of one)

As for the forum digest, sorry that happened. That would feel discouraging. That said, I'm not sure a quick take would be added by default. Actually I thought there was an assumption that quick takes were trying to avoid the fanfare or hassle of a top-level post. I recommend you message the person who puts the digest together and request your take be added! Maybe they missed it, forgot, or weren't sure you wanted that!

[Edit: FWIW I have asked the monthly EA newsletter (by CEA) to include news I was excited about, and they did! So you could also ask them. They are different from the EA Forum Digest, and I know from experience that they are highly organized and grateful for extra quality material to include]

And btw, posts can be short, and they are read much more than quick takes due to their placement on the homepage

https://forum.effectivealtruism.org/posts/6whiBq7czKJk4Bx29/a-forum-post-can-be-short

Expand full comment

Thanks! I will contact the EA newsletter.

I don't do the finances, so I don't know what/how the funding process took place, but when I talked to our finance guy he basically said we couldn't get EA funding because of the difficulty of quantifying impact - which is the problem that people pushing for (hard-to-measure) multi-impact systemic change always have, and that people pushing for quantifiable single-impact interventions don't.

We are the most influential animal non-profit in Belgium. They didn't tell us that they thought Albert Schweitzer would take over, and if they did think that, that would be very strange. Our track record includes:

Legal prohibition of the sale of dogs and cats in public marketplaces.

The closure of several markets where animals suffered routine and abject abuse (due to hidden camera investigations).

The prohibition of hunting stray cats in Wallonia and Flanders.

The prohibition of keeping wild animals in circuses in Belgium.

The decision of all Belgian supermarkets to stop selling eggs from battery hens. Now 90% of all fresh eggs sold in our country come from animal-friendly farms (ground system, free range or organic).

The European ban on trade in seal products.

The Flemish and Walloon ban on slaughter without stunning.

The ban on fur farming and force-feeding in Flanders.

I've urged him to apply for funding again this year, and I'm hopeful that given this victory we will get it. It's not that I think 'systemic change advocates' can never get EA funding; I just think that it's much, much more difficult than it is for 'firm starters'.

One other hypothesis I have is that it's not just a matter of 'systemic change' vs 'firm starters', but also a matter of anglosphere vs non-anglosphere. If you look at all the projects/people EA gives funding/attention to, you'll see that it's dominated by English-speaking/anglosphere projects/people to an absurd degree - like, much more than you would expect if you thought EA gave to maximum-impact projects/people indiscriminately. I'm much more hesitant to accuse EA of this type of bias, but privately I think that it's true.

I know you can write short posts, but I honestly think those should be reserved for the more high-effort writings, and the low-effort writings are (and should be) reserved for quick takes. Quick takes are added to the forum digest though, including this week.

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

That's all great stuff! Yes please reapply. I think it's likely they give a grant specifically for projects of bringing forward highly-efficient farmed animal welfare cases which are in conflict with the new EU constitutional amendment. It will mean coming up with project charters for the projects/cases you'd tackle given the money, rather than expecting funds to cover general operating budget for your org without clarifying future specific plans.

I wouldn't worry about the anglosphere stuff. American and UK EAs are really bullish on stuff in other countries; they know everything comes cheaper there.

If you look at Open Phil's past grants, a lot of the dollars (most of them?) go to projects in the EU and Asia: https://www.openphilanthropy.org/grants/?q=&focus-area%5B%5D=farm-animal-welfare

Same with the EA Animal Welfare Fund: https://funds.effectivealtruism.org/grants?fund=Animal%2520Welfare%2520Fund&sort=round

This despite there probably being way more applicants from English-speaking areas. So I think it's probably better you are in EU.

I'd like to help if I can? I've never applied to Open Phil before, but I have applied to EA Funds for funding for community stuff before and been accepted. And I've helped others with their applications. I'm wary of putting my email here, but you can DM me on the EA Forum (same name) if you'd like a friendly set of eyes to go over your application draft(s). Either way, good luck!

Expand full comment

Thanks! I'll take you up on that. I'll talk to our finance guy.

It does seem like animal welfare may be an exception for funding. Again, I'm hesitant to accuse EA of this bias since I don't have hard data. But to give some indication as to why I think it's anglosphere-focused, I'll just gesture at:

the people on the EA people page, the people that appear on EA podcasts, the AI people/project funding landscape, the AI projects that get attention, the philosophers that get attention, longtermism people/projects in general, all the people EA made famous, the people who work at EA organizations, the EA survey showing that EAs disproportionately move to the UK/US, individual EA university chapters in the US/UK being so well funded that they can throw regular pizza parties while our entire country can't get a single community organizer despite being the center of EU legislation, the EA forum having a tag for the US and UK but very few other countries, the EA forum having a tag for UK policy and US policy but not for other countries...

All of these could be forgiven on their own, and I can find potentially reasonable explanations for some of them, but combined they do seem to point in a pro-anglosphere direction to me.

Expand full comment

> [Edit: FWIW I have asked the monthly EA newsletter (by CEA) to include news I was excited about, and they did! So you could also ask them. They are different from the EA Forum Digest, and I know from experience that they are highly organized and grateful for extra quality material to include]

UPDATE: Despite me emailing them about it, and their latest issue being unusually focussed on farmed animal welfare, we still weren't included (and the issue was once again anglo-slanted). Did you get a reply when you emailed them? They never replied to me.

Expand full comment

Dang sorry to hear that. I don't remember if they replied to me. There were other reasons I was certain it was from me

Expand full comment

I think in general EA is more "solve the problems you can" than "this approach solves every problem". E.g. climate change is an important problem that's hard to address with EA-style methods (but sees a lot of people trying to address it with other methods), so EAs mostly put it aside in favor of things that EA methods work for.

(Specifically re CAHSR, I doubt it's nearly as easy as you say to make the California state government actually run a project well)

Expand full comment

If you think that my high-level description with an example was saying it was easy, then I've clearly completely failed to communicate.

That high-level description was roughly the equivalent of "spend money on charities where their effectiveness in saving lives is proven through good research, for example malaria nets" as a high-level description of GiveWell. GiveWell isn't doing an easy job.

You're talking about a multi-million dollar organisation that is endlessly doing research into the performance of the California government, quietly getting leaks from people inside who are being prevented from doing their jobs most effectively and then coming up with external reasons to ask them to do what they wanted to do in the first place, calling meetings with the Governor, with the members of the State Legislature, and with high-level executive branch officials, making press statements and having spokespeople to take interviews, making candidate endorsements in general and primary elections.

I don't think that's easy. I just worry that no-one is doing it.

Expand full comment

Zvi Mowshowitz is trying to work on something like that with Balsa Research (and he's specifically rejected the EA label for himself).

I believe there exist cases where this is potentially useful, but I think they're all long shots (much like AI safety research), and I'm generally somewhat less confident in them than Zvi is.

I think even within the universe of trying to improve policy, trying to get CAHSR to actually work has unusually low odds of success, since there are a lot of special interests pushing against it and they've already managed to push the reasonable voices out of the room, despite CAHSR's very publicly visible and embarrassing failures. Possibly trying to get traction on improving lower-key transit infrastructure decisions (in the style of the Transit Costs Project) would work better.

Expand full comment

Not all of EA has a Silicon Valley mindset; it's mostly AI and animal welfare that have it, for what I hope are obvious reasons. A large segment has an Oxford mindset, since that is where it originated. And finally, there is a portion of EA that has a DC mindset: biosecurity. Rather than create a think tank, they rose up the ranks of academia, Johns Hopkins most prominently, and joined the government.

I think there is still room to apply the DC mindset to other cause areas. I've heard some EAs propose an AI counterpart to the NRC. If the NRC successfully killed nuclear, its counterpart might be able to kill AI, for better or worse. EA just needs more people with the DC mindset to join.

Expand full comment

I've never lived in California. But the "Silicon Valley" approach of building an organization rather than lobbying the government is normal where I've lived (mostly the Midwest). Global warming is less well-suited for that, given the externalities & non-localities involved, but most of life isn't about that.

Expand full comment

Once I got really upset here about why I’m not donating 10% of my income. I had reasons; it felt like I’m supporting all these people and paying all these taxes and just ... how? It took growing seriously in my faith to realize my enormous capacity for self-deception and self-delusion. Realizing that atheists followed God’s will better than I did was a serious wake-up call.

Having said all this, the X-risk stuff is where I start getting off the boat, because all that matters there is, “do you have the right consequentialist model?” I understand we have lots of reasons for thinking we do, but I think this falls prey to the trap of “doing the most legible good while ignoring illegible problems.” If we spent all the money currently aimed at guarding against AGI on lobbying to bring regulatory clarity to prediction markets, that might do more good against AGI risk than focusing on the AGI risk directly.

I like the idea of thinking seriously about good. But does EA _really_ do this? It never tries to define or reason about the nature of good, and instead assumes that goodness is a property that obtains to various degrees for some world states and not others, but the world states come about through some intricately branching process which has zero relation to good. Yet the thousands of years of prior work on this topic - what is good - produced numerous independent groups of researchers concluding something like, “the physical mechanism is so self-regulating that evil doesn’t last, so don’t worry about the future and do the best you can where you are.”

In short

- I think EA haters who aren’t donating 10% should seriously question whether they are being honest with themselves

- I think part number 2 in your analysis is THE ENTIRE GAME and being wrong there isn’t a small deal, and

- EA totally ignores the philosophical and ontological foundations of good, then focuses on the easiest-to-measure things, uses a basis that I think is wrong, and ultimately limits the group’s effectiveness

Expand full comment

It’s not really historically accurate to say slavery was ended by “anti-racists”.

Expand full comment

It was ended by Normans conquering England :)

Expand full comment

> wokeness is just a modern intensification of age-old anti-racism. And anti-racism has even more achievements than effective altruism: it’s freed the slaves

This is plainly false. It is not "anti-racism" that ended slavery. Not in the world at large, not even in the particular context of the US.

"Anti-racism" doesn't get that credit. Opposition to slavery stemmed from an application of the Golden Rule, which is indeed ages-old. Devarim (Deuteronomy) speaks of setting your slaves free, because "remember that you were slaves in Egypt".

Lincoln was firmly opposed to the concept of slavery, while also being a firm believer in what would today be called "differences in statistical distribution curves". He did end slavery in the US, freeing 4 million slaves at the cost of 0.6 million lives. But he also wanted to relocate all blacks to Africa (Liberia) or Central America (Chiriqui) or *anywhere* that isn't US. He was ready to spend any amount of the federal budget to achieve this relocation, back in times when federal budgets had been spent very sparingly. Lincoln's message to blacks was: "Your race suffer from living among us, while ours suffer from your presence. It is better for us both, therefore, to be separated."

I really appreciate your writings, but the claim that *anti-racism* ended slavery is laughably false.

Expand full comment

> In other words, everyone agrees with doing good, so effective altruism can’t be judged on that. Presumably everyone agrees with supporting charities that cure malaria or whatever, so effective altruism can’t be judged on that. So you have to go to its non-widely-held beliefs to judge it, and those are things like animal suffering, existential risk, and AI. And (Freddie thinks) those beliefs are dumb. Therefore, effective altruism is bad.

Wow, this is a really uncharitable reading of Freddie's point. The problem is not that "everyone agrees with doing good" just like EA; it's that the EA movement wants you to believe that "doing good" effectively is really only possible if you subscribe to their entire ideology -- which is obviously untrue, given that other charities exist, have existed for a long time, and have been reasonably effective (though obviously ineffective charities have also always existed). But EA wants to appropriate their achievements for itself. Saying "if you want to do good, and to do it in the most efficient way possible, then you're basically a member of EA" is as sleazy as saying "if you want to love your neighbour and care for the downtrodden then you're basically a Christian".

Yes, I understand that technically the "social technology" of EA is separate from its rather specific culture; but it is only separate in the same way that believing in the divinity of Christ is separate from going to Church on Sundays, reading the Bible, talking to other Christians in "Christianese", praising the Lord, etc. That is to say, while the concepts are distinct in some academic/philosophical sense, they are not distinct in practice, and pretending like they are is borderline dishonest.

Expand full comment

If you accept Christ in your heart and act accordingly, you're Christian; if you accept in your heart that charitable effectiveness can and should be quantified and optimized, and act accordingly...

Expand full comment

When you have firehoses of money going to "charity", a) asking people to give *more* money and b) telling them to devote more time and mental effort to charity is a hard ask.

Yes, there are firehoses. Look at the largest line items in both federal and state (or provincial in my case) budgets. They are charity (I am excluding defense in the federal case because it is obviously actually the job of government). You are already giving far more than that 10%, involuntarily (I assume you are not receiving government money, although it's hard to tell these days).

Now, agitating to redirect the firehoses might do some good.

Also: valuing foreigners' lives more than your fellow citizens' is not a good idea. Just so you know where I'm coming from.

Expand full comment

Is there something wrong with making hard asks? Some people clearly are open to giving more; if someone isn't then one just thanks them for their time and moves on.

I don't know who is advocating for valuing foreigners' lives *more* than those of their fellow citizens, but since my fellow citizens are relatively well-off, even at the low end of the income scale, I don't see why it's a bad idea for me to direct my money where it will save more people's lives, even though those people may not be my citizens. You mentioned the government and I think that preferentially spending money on me and my fellow citizens is the proper job for the government; for the money I give that isn't taxes, I don't see a good argument for not helping those who can benefit most dramatically from the help.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

"Is there something wrong with making hard asks?"

Well, there are people like me, who hear Peter Singer's stance and say to him:

You want _WHAT_ from me??? Never darken my doorstep again. And take your entire enterprise of ethics with you.

Expand full comment

Yup, abandoning all morality and ethics does make life a lot easier. This isn't even meant to be passive aggressive, it really is easier.

Expand full comment

Ok. I aspire to be intelligently comfortable, not ethical.

Expand full comment

Agitating to redirect the firehoses sounds like "folk activism" to me. Unlikely to make any difference as an individual.

Expand full comment

I object to being pressured to contribute *more*.

My charitable giving has decreased in recent years, due to being unable to find charities that are not corrupt.

Those mosquito nets? Better check into how many of those actually made it into people's hands. As opposed to ending up in a warehouse in Brussels, to be sold off.

Expand full comment

You have the option of reducing your charitable giving if you don't think charities are using the money well. Reducing your taxes is another story.

Expand full comment

Perhaps the best way to put my position is "The basic rubric of trying to find charities that are most efficient, and of demanding evidence-based charitable action, is really good and should be uncontroversial; however, EA also generates a ton of esoteric stuff and an attachment to developing more, and this has obvious less-than-ideal effects when it comes to spreading the philosophy. So EA advocates should work to minimize those aspects and fixate on the bed net/kidney donation stuff that has the most concrete impact and best optics."

I would argue that, for example, while Dylan Matthews has been a good popularizer, he's also tended to front the crazier stuff, which as an EA advocate is not ideal.

And look, I'm not saying that this has no risks or costs - maybe the more esoteric stuff really would prove in the long run to have the most positive impact. What I'm advising could be a mistake. But as EA recovers from the fallout of the SBF scandal, and given that public opinion is so essential to donations, I think EA leaders and organizations should do their best to center the conversation on the mundane but essential stuff. And maybe have a critical conversation about whether longtermism should be spun off as a separate enterprise that EAs can get involved with or not.

Expand full comment

Even if they don't end up beneficial in themselves, a lot of those wacky esoteric long shots are a consequence of the same philosophical commitments which produced so much good stuff, and thus can't be cleanly disentangled. Maybe think of it like the psychological equivalent of pre-Green Revolution crop rotation: even if you don't like turnips and beans, trying to grow all wheat all the time will just end up damaging the soil. Chasing off all pests and spacing rice stalks closer together is a big part of how the https://en.wikipedia.org/wiki/Great_Leap_Forward got started.

For that matter, compare "Catholics do good work, I just wish they'd quit that weekly deal with the robes and candles and symbolic ritual cannibalism, or at least spin it off into a separate org."

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

While I'm not an altruist or EAer myself, there is one comment I'd like to make: Re "a consequence of the same philosophical commitments".

I think that there is a specific symmetry between longtermism and using a metric of lives saved _globally_. Longtermism disregards differences between present lives and future lives. _Global_ counts of lives disregard differences between local lives and distant lives. I could see why people who accept one would tend to accept the other. I also acknowledge that there is a philosophical tradition leaning this way.

I think most people actually reject both (which I, personally, consider fine). In the temporal case, _very_ few people act as if the discount rate was zero. In the distance case (either physical distance or social distance) _many_ people (most???) agree with "charity begins at home." (also, in the USA anyway, foreign aid has very consistently been one of the most unpopular budget items - albeit most voters vastly overestimate how large it is).

( Longtermism has an orthogonal problem in that prediction of the _actual_ consequences of an action in the distant future is likely to be pure noise. )

edit: One other thing about locality. This is separate from measuring effectiveness on a finer granularity. I don't know whether Givewell's database is amenable to this, but what about being able to tell a potential donor "Here are the charities that are most effective at saving lives per dollar for lives within 10 miles, 100 miles, 1000 miles, and 10,000 miles of your home"

Expand full comment

Discount rate doesn't have to be zero for longtermist concerns to kick in, just comparable to or less than the best-case growth rate. Thus, people with an initially reasonable-sounding discount rate might be motivated to behave strangely when confronted with a plausible scenario where growth becomes extremely rapid.
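A rough way to formalize that point (a standard constant-rate sketch; the notation is mine, not the commenter's):

```latex
% Welfare W_0 grows at rate g and is discounted at rate r.
PV = \sum_{t=0}^{\infty} W_0 \left( \frac{1+g}{1+r} \right)^{t}
   = W_0 \, \frac{1+r}{r-g} \quad \text{if } r > g ;
% the series diverges when g >= r, which is the regime where
% far-future terms dominate and longtermist concerns kick in.
```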

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Good point! I think something similar can happen with dependence on distance as well.

tl;dr - If one's concern for people drops off more slowly with increasing social distance than the number of people _increases_ with increasing social distance, then one's total concern is dominated by socially distant people. I see this as analogous to the situation where one's discount rate is less than the growth rate (of the population), making one's total concern dominated by far-future people.

Unfortunately I scribbled this in a hidden thread https://www.astralcodexten.com/p/hidden-open-thread-3015/comment/43453226

Copying that here:

"One possibility is to take the number of social links between oneself and a possible beneficiary as important, and attenuate one's interest as they become more distant. To pick fake numbers: suppose that everyone has 11 social links and they are in a tree (ignoring reconvergence), so everyone has 1 self at 0 links, 11 people at 1 links, 110 people at 2 links away, 1100 people at 3 links away and so on, till 11,000,000,000 people at 10 links away.

If one's interest in a beneficiary drops off faster than tenfold per link, then the nearby people dominate one's total interest. If one's interest drops off slower than tenfold per link, then distant people dominate one's total interest. The former seems more natural to me. E.g. if one's interest drops off twentyfold for each additional link of social distance, then the 11 inner circle people at 1 link get a total of roughly half one's total interest (roughly 1/20 of self-interest for each person in the circle), similarly the 110 next nearest neighbors at 2 links get a total of roughly a quarter of one's total interest (roughly 1/400 of self-interest for each person in that circle) and so on.

To put it another way: I'm thinking of exponentially decreasing interest in people at increasing social distance as an (admittedly rather cold-blooded!) way of thinking about Eremolalos's "No I don't think it is. I believe I'm leaning into this because I find it hard to understand altruism divorced from sympathy and emotion. Why exactly would someone want to give, when in algorithm mode? Algorithmic altruism creeps me out in the way it bypasses the emotional transaction." and thinking about how one might interpolate between altruism towards closely linked people vs towards more distant people with an adjustable parameter for how much distance matters.

I'm also creeped out by the regime where the bulk of one's attention goes to maximally distant people. It feels very alien to me."
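To make the arithmetic in that hypothetical concrete, here is a minimal sketch; the 11-link tree and the twentyfold falloff are the commenter's made-up numbers, not data:

```python
# Tree model quoted above: ring k holds 11 * 10**(k-1) people, and
# per-person interest falls FALLOFF-fold with each extra link.
FALLOFF = 20       # made-up falloff; try 5 to see distant rings dominate
MAX_DEPTH = 10     # out to ~11 billion people at 10 links

rings = range(1, MAX_DEPTH + 1)
ring_sizes = [11 * 10 ** (k - 1) for k in rings]
ring_interest = [n / FALLOFF ** k for k, n in zip(rings, ring_sizes)]

total = sum(ring_interest)
for k, w in zip(rings, ring_interest):
    print(f"{k:2d} links: {w / total:6.1%} of other-directed interest")
```

With a falloff of 20 against tenfold branching, each ring contributes half as much as the last, so ring 1 gets roughly half of all other-directed interest (matching the "roughly half" and "roughly a quarter" figures above); with any falloff below 10, the outermost ring dominates instead.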

Expand full comment

Might be worth distinguishing between material resources and time / attention / decisionmaking bandwidth, for budgeting purposes. If, out of millions of near-strangers, a thousand per week need a few dollars each to avert some disastrous personal outcome, that might be well worth paying, depending on how much you earn per week... but if legitimate problems are outnumbered by fraud, and distinguishing requires an hour per case, keeping up quickly becomes impossible.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

That's reasonable. What I had in mind was _purely_ the material resources case.

"time / attention / decisionmaking bandwidth" is very hard (impossible?) to separate from questions of accuracy, and of how well one can expect to make accurate decisions at various levels of effort - and to what extent one can delegate such things to institutions such as Givewell. So what I was hypothesizing would apply for the case of a hypothetical altruist with some time and distance discount function donating material resources, _if_ the problem of, as in your example, distinguishing legitimate and fraud cases is solved by someone or something else.

My suspicion is that delegating the decision to Givewell for socially distant existing people is probably reasonable (modulo being able to choose to weight people at a hypothetical distance-sensitive altruist's correct social-distance-discount-rate).

My suspicion is that making decisions about distant future people is just unworkable. Our best institutions do pretty lousy jobs at estimating anything but the broadest, most heavily averaged social parameters even just a decade away, with any longer term social parameter estimate rapidly degenerating into pure noise. We are good at predicting planetary orbits. Stock market crashes, not so much...

Expand full comment

"The basic rubric of trying to find charities that are most efficient, and of demanding evidence-based charitable action, is really good and should be uncontroversial"

Should but isn't: https://ssir.org/articles/entry/the_elitist_philanthropy_of_so_called_effective_altruism. And I think you know this because you avoided saying "is uncontroversial".

I don't see how you can possibly stand by your claim that "What's good about EA is the parts that are typical; what's atypical about EA are the parts that are not good". You clearly agree that "trying to find charities that are most efficient, and of demanding evidence-based charitable action" is good - are you seriously claiming that's not atypical?

Expand full comment

I would question the premise that a rationalist approach to altruism (and, broader, the ethics) is necessarily a good thing.

1. Certain tools help with some problems at every margin: if we need to cut down a lot of trees, adding more properly managed people with axes will probably help.

Other tools for other problems help at some margins, but are pretty harmful at other margins. This happens in many areas, see, e.g. the "uncanny valley" discussions in CGI.

Scientistic (aka "rationalist", "systematic", "effective", "with a spreadsheet") approaches proved to be a great tool in many areas, but they are certainly prone to "uncanny valley" issues when applied at some margins to some problems. For example, many cities are still blighted by the disasters of "rational" urban planning of the 20th century. These disasters are the result of the work of well-educated and well-intentioned people with access to vast resources, but they happened to work at a margin where their approach was harmful, even though the same approach is indispensable at the lower margin (you need to put some calculation into the erection of structures) and possibly useful at a higher margin.

Ethical problems are a class of problems that are obviously prone to the similar issues for some tools. For example, if I have a question about mathematics/zoology/ancient history, approaching a Berkeley professor of the relevant discipline is probably a good idea. If I have a question about ethics, asking an ethics professor is probably a very bad idea - they are a person who made their career researching quirky and edgy ethical questions and not giving correct answers to simpler questions.

So it is not unreasonable to worry that a rationalist approach to altruism might be currently at the same uncanny valley where applying more of a good thing actually leads to a worse outcome. Is it?

2. One of the signs that one is at the uncanny valley is the emergence of paradoxes nearby. Paradoxes such as Pascal's mugging or the Utility monster can be viewed as cliffs in this landscape, where applying logical reasoning leads to a disastrous conclusion. If there are cliffs nearby, we know that the landscape is complex and not a gently upwards sloping plain. Of the multiple known ethics paradoxes, Pascal's mugging is the most relevant, as the x-risk discussions are obviously quite vulnerable to the same exploit. Also, a version of the Utility monster can easily be constructed around animal welfare problems.

Another worrying sign of the decreased marginal utility of your tools in ethics is the obvious unethical deeds done when justified by greater goals. You dismiss EA's association with SBF in your previous essay as insignificant compared to EA achievements, but fail to address the issue that SBF was not a random crook who happened to donate to EA. He was driven by a pretty rationalist but crazy flavour of EA (basically, he explained that since he would eventually donate all his wealth to very good causes, he was obliged to take even-odds risks at the maximum scale, because in some universe all his bets would pay off and he would solve all problems there).

So a real worry about EA is that it is a bunch of well-intentioned and clever people who wandered far enough into an uncanny valley of applying spreadsheets to ethics to be actually moving in the wrong direction.

Expand full comment

> a real worry about EA is that it is a bunch of well-intentioned and clever people who wandered far enough into an uncanny valley of applying spreadsheets to ethics to be actually moving in the wrong direction.

They know. See e.g. https://forum.effectivealtruism.org/topics/criticism-of-effective-altruism

one of the top posts there is precisely what you say, by one of the movement's leaders: https://forum.effectivealtruism.org/posts/T975ydo3mx8onH3iS/ea-is-about-maximization-and-maximization-is-perilous

Expand full comment

Your definition of EA still falls prey to FDB's criticism: nobody would disagree that it's a good thing to *actually* donate a fixed amount of your income to an effective charity.

The distinctive part of EA is that it revises what counts as effective charitable giving. The ordinary person who agrees with your definition is anthropocentric: they believe that only human well-being matters, so any charitable cause directed at animals (save the cute puppies on ASPCA commercials!) doesn't make sense to them. EA is distinctive in denying that only humans and cute puppies matter.

EA is also distinctive in that it cares not just about currently-existing animals but ones that will exist well into the future. Ordinary people agree with this, so they sometimes feel guilty when polluting the environment that future people will have to live with. But EA takes this worry much more seriously, so you get all the dorky sci-fi platforms that FDB thinks are dumb. EA advocates must bite the bullet on this: their commitments entail these weird platforms.

That's either a reductio for people like FDB or an indication that morality demands weird things.

Expand full comment

I only donate to anthropocentric charities (recommended by GiveWell). I still consider myself aligned with EA.

Expand full comment

Given Scott's definition, you do count as EA. But what's distinctive about most people who call themselves EA is that they reject anthropocentrism.

Expand full comment

Animal welfare isn't close to a majority of EAs, either by donation amount or members.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

The woke are literally trying to re-segregate schools. I object to any concession that calls them “anti-racist”, or lumps them in with the civil rights movement or the civil-war-era Radical Republicans. They can only be considered anti-racist if you accept their own redefinition of “racism” to mean something almost diametrically opposite of what every American who wasn’t a studies professor understood the word to mean prior to 2013.

Expand full comment

That jumped out at me too as a massive understatement and misdirection.

Expand full comment

Having regressed politics back to the 1970s, self-flattering 'progressives' have set race relations back to the 1950s.

If you want integrated schools, you have to go private. A mulatto kid named Daryl shows up in my fifth grade class photo. When he joined his grandmother's household in our 'white' suburb around 1957, the adults were in a quandary about what to do with him. How his attendance at Corpus Christi elementary school came about, I don't know. But my mother told me he was routed to the nuns because the public school system was terrified.

So our 'progressives' have set race relations back 66 years. Fascist anti-fascists, racist anti-racists, and regressive progressives: welcome to the 21st century. Through the looking glass.

Expand full comment

> Why should people judge effective altruism on its big successes, but anti-racism on its small failures?

> Maybe a better answer is to judge movements on the marginal unit of power. An anti-woke person believes that giving anti-racism another unit of power beyond what it has right now isn’t going to free any more slaves, it’s just going to make cancel culture more powerful.

I notice that the reasons I support progressive causes are pretty similar to the reasons I support EA and I do think there is somewhat of a symmetry here.

Consider a person who really dislikes spending money on AI alignment and is anti-EA for this reason. You tell them that EA will not spend their money on AI alignment if they donate it for mosquito nets, so there is no reason to oppose EA. The person, however, isn't persuaded. They feel that not opposing EA will generally make it more powerful and bring more attention to the whole cluster of memes that is associated with it. And that you know it. And that the reason you are okay with that is that you are at least somewhat fine with the idea of financing AI alignment research.

Likewise, I may try to persuade you that you shouldn't oppose "wokeness", as there is lots of good it is doing and has done, and if you support some of their other causes they will not transfer your money to the "strengthening of cancel culture fund". I don't think there even is any fund like that. Will this be persuasive for you? Or will you immediately think that the reason I'm coming with this argument is that I'm at least neutral towards deplatforming?

Expand full comment

Since this blog tends to be anti-woke, I would be curious to hear what good the "woke" have done that wasn't already being done by progressives/leftists prior to the rise of "wokism".

Expand full comment

I'm not sure how to even approach this without more specifics about what "wokism" means. If we use it as a catch-all term for leftism, progressivism and social justice issues, then your distinction doesn't make sense. If you specifically define it as all the bad things that leftists, progressives and social justice people do, while discarding all the good things, then you are by definition correct, but this is just a tautology.

Expand full comment

"Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation. There are assumptions you can add and alternate methods you can use to avoid that conclusion. But it’s a temptation you run into. "

That is the big problem. There are plenty of people willing to argue about the future value of a dollar and so conclude: don't give money now, give it later.

The problem with that is that 'tomorrow never comes'. After all, if the putative value of my dollar is going to be greater in 2030 (and so do more good/save more lives), then I hang on to my money now and don't donate until 2030. However, when 2030 rolls around, the same argument applies: hang on to my money until 2035. Rinse and repeat until I die or spend all my money on myself. And EA has a lot of this type of number-crunching philosophy that tangles people up.

Other charities are out there with equally big and vague aims about 'ending hunger' or 'global poverty' or the rest of it. However, if they collect donations for flood relief, as well as talking about long-term aims, they do actually use that money (or a good lump of it) on flood relief. They don't sit on the donations, giving the explanation that by this economic theory or that calculation, hanging on to the money is even better for the poor in the long run, so that people die while the money to relieve them is being endlessly held in reserve for the better return.

EA's public perception problem is when you go on about long-termism, why should I care about people not yet born who may never come into existence five hundred years from now, instead of doing something to feed the hungry and clothe the naked who are suffering right this minute? It does sound like "jam yesterday and jam tomorrow but never jam today".

I'm glad most EAs are the types who *will* donate to mosquito nets and immediate relief of need, as well as the AI risk/X-risk stuff. But I think that's *despite*, not *because of*, the philosophy.

Expand full comment

Set up a will to donate whatever money you have to charity. Then "tomorrow comes" when you die.

Expand full comment

I'm curious who has made a strong case for why giving to charities is good in the first place, with a strong steelman case that it's better than the Elon Musk way of building businesses that solve problems in a sustainable (i.e. profitable) way, or at least investing the money in such businesses.

Magatte Wade e.g. makes a strong case for the latter as a solution to Africa's problems.

If there is no strong such defence, the problem with EA seems to me that it's just not effective.

I'm personally invested in and do entrepreneurship in developing countries. There is an utter abundance of problems that can be solved, an utter demand for help from talented foreigners who bring useful skills (with a bit of humility that they're not a saviour, just doing business), and an utter absence of supposed EAs, who prefer cozy Bay Area circles (even though many areas in SF are far worse than conditions in developing countries) to actually working in a foreign culture and tinkering to do things that effectively and visibly do good, and get direct feedback from customers.

Expand full comment

> an utter absence of supposed EAs that prefer cozy Bay Area circles

https://forum.effectivealtruism.org/posts/M44rw22o5dbrRaA8F/why-and-how-to-start-a-for-profit-company-serving-emerging

Expand full comment

That is amazing! I'll tone down my language in the future :)

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

The nature of profitable businesses is that there are lots of people looking to create them, even when they're not otherwise interested in helping others, so the counterfactual impact of starting one is likely to be relatively low. It's not like, if Tesla never existed, no one would have ever built electric cars, and it's not even obvious to me that the other manufacturers of electric cars would have started later if not for Elon Musk. By contrast, if you don't give money to the AMF, no one is going to come along next year to give more money because of that. There is something close to an efficient market in profitable business; there isn't in charitable giving.

Starting a company to give more money to charity is just a variation on earning-to-give, and a fine idea if you think you can actually manage it.

Expand full comment

Would love to hear more about it - who are the main proponents of the argument?

My standard answer would be: where there is a need, there is a market. Especially for the things people need most, efficient markets are necessary.

They are the most reliable mechanism to sustainably solve need

If that's true, then the right strategy for EA is to accelerate the development towards efficient markets, and charity would then be a secondary priority. You may need charity for people that fall through the cracks NOW, but from a utilitarian POV it's better for 1 million people to have their needs met for the next 50 years than for 10,000 people to have urgent needs met for 1 year (I'm making the numbers up; the point is that it's more effective long-term than short-term).

Expand full comment

You mean the argument that there's no efficient market in charity? I'm not aware of any lengthy argument, it just seems obvious to me. You can get wildly rich by inventing a new desirable consumer product and selling it to people; you can't get rich by discovering and funding a new under-served charitable cause, and there are just not many people doing that for other reasons, so the usual reasons to expect efficiency don't apply. For the same reason, you can't just create an efficient market in charity the way you can create an efficient market in consumer goods; the forces that normally make it easy don't exist.

That said, if such a thing did exist it would look like a large number of people, controlling a large amount of financial resources, constantly looking for opportunities to purchase altruistic outcomes, and willing to pay people for discovering new and better opportunities. Which is basically an idealized description of Effective Altruism as a collective, including the interest in things like impact certificates and prediction markets, so I think you have just reinvented Effective Altruist movement building as a cause area.

That said, it doesn't really make sense to say we should develop efficient markets in charity *instead of* giving money to charity today, because the existence of people willing to pay money for something is a critical part of developing a market in it. You can't incentivize people to find under-served charitable causes without actually providing that incentive, and there's not much point in doing that if you're not going to actually take advantage of it. So I think this kind of "meta" work only makes sense as a small fraction of overall work, just like working on developing regular markets only makes sense as a small fraction of total economic activity.

Expand full comment

I don't think that's quite right.

The efficient markets are around concrete needs, e.g. food and housing. They're not organised as charity, yet they're probably performing much better.

Lots of problems to be solved there already and probably doing a lot of the heavy lifting.

I'm hypothesising that urgent care is probably a better candidate for charity, but it is normally solved by insurance. There are efficient markets for insurance and healthcare.

Lots of incentive to solve the problem for for-profit entrepreneurs.

Why aren't they already solved? Because entrepreneurs in these countries are held back by bureaucracy, see my interview with Magatte Wade here: https://niklasanzinger.substack.com/p/ep-72-magatte-wade-on-the-bureaucratic

Expand full comment

I agree with you and disagree with Freddie for the reason you give here, which I think could be more usefully articulated in one sentence: EA is primarily a set of practices, not beliefs, and those practices are actually quite unusual and highly laudable.

With that out of the way: I do also think that, to the extent EA also promulgates a set of unusual beliefs, many of those beliefs are wrong. In particular, the way EAs frame AI doomerism and existential risk more generally is so straightforwardly a restatement of Pascal’s wager that I’m shocked they can’t see it. If my favourite book posits an infinitely bad outcome that must be avoided at all costs, it’s worth taking the precautions prescribed by my favourite book against that threat, even if I acknowledge that the risk may be infinitesimally remote. This logic breaks down, of course, when you consider that many people have a different favourite book, and that there are an infinite number of these theoretically possible but infinitesimally unlikely risks, and that society would grind to a halt if we took all the precautions required to ward them all off. Furthermore, when one considers the officious and counterproductive behaviour of those who have embraced the “x-risk” worldview (church ladies who take the threat of eternal damnation seriously, AI regulators who take the threat of superintelligent AI seriously), it becomes clear that there are severe social costs associated with these beliefs. And also, that adopting them makes you act like an asshole.

Perhaps you’ll tell me that, unlike the Bible, your favorite book (or Hollywood film franchise) is an accurate guide to how the future will actually play out, because you and your friends are “rationalists” who have carefully plotted out the probabilities and incanted all the proper spells and clutched all the proper talismans to dispel all of your cognitive biases. To that I can only sigh and roll my eyes.

Expand full comment

It's very annoying when someone who doesn't know your position or care to know makes up a fake strawman to knock down and then says "this makes me sigh and roll my eyes".

Self inflicted wounds!

Expand full comment

I’m responding to an essay by Scott defending EA’s AI doomerism. I’ve read more than enough AI doomerism from EAs and “rationalists” to know what their arguments look like (most recently, Vitalik’s reply to Andreessen’s techno-optimist manifesto).

If you have some novel take on existential risks that’s different from the EA/rationalist boilerplate, that’s cool, and I’d love to hear it. But I wasn’t replying to you, I was replying to Scott’s essay.

Expand full comment

Yeah, your comment in response to Scott's essay that in fact only Scott can read, and I have hacked into the mainframe to pervertedly peer at your beleaguered ass. I can only sigh and roll my eyes at this hypothetical, other, definitely nonexistent obnoxious person.

Replying to a post on a public forum where other people will actually have the stance you are contentlessly poo-pooing with "but this was intended for Scott" means you either don't believe in other people existing because you watched too many Marvel movies, or you're a parasocial freak who thinks Scott has a personal relationship with you.

**That** is the level of discourse your last paragraph is at, and if you think that is bad and impolite and you would not like to be subjected to it again, then you should exercise your ability to not be an asshole like an AI risker and not say it.

Anyway, I'm surprised you can write all that and claim that every instance of AI risk is about remote possibilities; there's a lot of double digit percentages around, and unless you think rolling a 1 on a six-sided die is a "remote possibility", bringing up Pascal's wager when basically every AI risker does not have "remote percentage" in their thought processes makes me think you haven't read any arguments to any degree. In fact, even Scott himself has a ~25% chance of doom! So saying you were speaking to Scott doesn't even make sense in that light!

Expand full comment

It's pretty funny that you jumped in front of a comment directed at someone else and took it so personally though. It's also funny that you state "there's a lot of double digit percentages around", like it's an empirical measurement and not, like, someone's opinion, man.

Expand full comment

The point isn't that I think it's objective, the point is that he thinks the error in their thinking and communication is that they haven't thought about Pascal's Wager. Which doesn't apply if they don't think it's unlikely.

Expand full comment

Yes, of course many doomers think that P(Skynet) is > 0.5. But the appeal they make to the infidels is always that you should support their program if you think that P(Skynet) > 0.0, because the cost is so high. This is no different from the church ladies who believe that P(eternal damnation|faithlessness) = 1.0. Pascal’s Wager is an argument made to the heathens, not the devout.

Expand full comment

AI doomers do not "acknowledge that the risk may be infinitesimally remote"! They think it's both imminent and quite probable! There's a range of views here, but most of the people you're describing think human extinction due to AI has at least 1-in-10 odds!

Expand full comment

Yes, in one of my responses above I clarify that the doomers (and the church ladies) are fully convinced that The End Is Nigh, but the argument they make to the rest of us is that we should adopt their precautions if we think there’s even a small chance they’re right. Pascal’s Wager isn’t meaningful to a devout Christian; it’s a gambit for convincing an agnostic to believe.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

"but the argument they make to the rest of us is that we should adopt their precautions if we think there’s even a small chance they’re right" - I don't believe I have ever seen someone make this argument for values of "small" less then 5%, and seen several explicit rejections. You were expressing surprise at how people could make this argument and not notice the analogy to Pascal's Wager, and the explanation is that they don't make that argument and so the analogy fails.

Expand full comment

Isn't there a deeper philosophical basis and way of thinking to EA that identifies it?

For instance, to take your first 3 points of definition - 1. 10% giving, 2. thinking hard about which charities are most important, 3. actually doing these things - they certainly separate it from 'universally held beliefs', but not from, say, serious Christian communities. The churches I have been part of certainly have done these things - and perhaps have helped people from a broader range of social and intellectual backgrounds do them!

Isn't the variety of 'consequentialist reasoning' used in number 2 actually the defining feature of EA? It's where EA has very obviously made a deep contribution - especially in encouraging people to use serious data-driven analysis of the impact of giving?

If we dig down into the philosophical underpinnings of that - a certain flavour of utilitarianism, for instance - doesn't that both explain what EA does badly, and explain why it is controversial? E.g. why AI risk or animal suffering seem like primary concerns to those in EA, much higher up the chain of priorities than they are for Communists or Christians or whatever?

Expand full comment

The biggest problem I have with EA is its blatant attempt to convert altruism into a Veblen good.

Old-fashioned altruism always had its "holier than thou" people - it's not the least bit clear that it is beneficial to have the "be seen as rich" people added in.

Expand full comment

This is helping address some of my objections to this movement, so thanks. I still have four basic problems.

1. A more minor point, but...in a world with increasing polarisation and toxic extremism, do you really think the attitude "[a]ny group with any toolbox has earned the right to call themselves meaningfully distinct from the masses of vague-endorsers" is a great one to champion? You're basically saying here, unless I misunderstand, that ideologues are better than centrist normal people, because they do things. (Obviously you're not saying *all* ideologues, but on the whole.) Should we really be encouraging more highly-organised, highly-subculturish movement-forming in the current world? Wouldn't the best, most productive and constructive path be to try to spread general moral principles like effective giving *as broadly as possible*, and as *loosely* tethered to both social subcultures and controversial social agendas as is practically possible?

2. More concretely, the elephant in the room for me is always the utilitarian dogma. I really think Effective Altruism *needs* to split into two different names for this reason. Effective giving and utilitarianism have as much to do with each other as methodological naturalism and philosophical naturalism do, despite the conceptual overlap in both cases. Conflating "scientist" and "atheist" would, I think most would agree, be grossly offensive to the thousands of scientists who believe in God. You'd be telling them that it doesn't matter how rigorously naturalistic their scientific work is; if they don't agree with the view that scientific natural laws are literally all that exists, they're not real scientists! Similarly, much of EA comes dangerously close, in a motte-and-bailey related way, to implying that one cannot be called charitably effective, or promoting good outcomes, unless they believe that outcomes are literally the only morally relevant thing in the universe! And that's the attitude that can be reasonably called cult-like.

3. I appreciate you explaining why you don't think this same reasoning applies as a defence of wokeness. But I still see a huge potential motte-and-bailey here. Regardless of how often it happens, it's in principle very easy to say to your friend one week "you should become an effective altruist, it's *just* the attittude that giving should be effective!" and then the next week "what? you don't believe in AI-risk? you said you were an effective altruist! traitor!". I find it hard to believe this doesn't happen often in practice. And while I get the point about difficulty with the ambiguous nature of language, don't you think this conceptual problem should at least be clearly acknowledged and guarded against?

4. All of the above were very theoretical. My main practical objection to EA is the longtermism thinking, and I utterly reject the idea that this in any way naturally follows from thinking logically about impacts. Consider the trolley problem. I am generally against turning the trolley in the original case, but I can certainly see why someone might, and I wouldn't condemn them for it. But imagine we change it so that the trolley merely has a *25% chance* of killing the five people on the original track (otherwise no one dies), and you can turn it to remove that chance and with certainty kill the one person on the other track. At this point, I struggle not to think of someone who would turn it as a sociopath. They'd be an unusual kind of *ethical* sociopath who is following a moral code, but who still seems to completely lack fundamental moral intuitions that a certainty of death is incomprehensibly worse than an unlikely possibility of it. That saying "well 1 life lost for sure, or an average expected loss of 1.25 lives, clearly the first is better!" is horrific reasoning, literally treating that one person as nothing but a statistic to be weighed against another statistic. That to allow (or worse, cause) a person who exists now to actually suffer or die, in order to reduce a small probability of harm to everyone, is just morally reprehensible. And EA longtermism is entirely built on this thinking.
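Spelling out the expected-value arithmetic being objected to here, using the comment's own numbers:

```latex
% Turning kills one for certain; not turning gives the five a 25% chance of dying.
E[\text{deaths} \mid \text{turn}] = 1,
\qquad
E[\text{deaths} \mid \text{don't turn}] = 0.25 \times 5 = 1.25 .
```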

I don't claim my moral intuitions about this can be proven correct, or even that they are correct. I do claim that the longtermist moral conclusions are about as far from "obvious if you think about it and look at the math" as it's possible to be.

Expand full comment

I took Scott’s point about the toolbox-users vs vague-endorses to be that they have the right to draw a positive distinction, not a normative one. In other words, they can claim that they are in fact a distinct group, whether or not they are better morally.

Obviously they *can* also make a normative claim to be better than everyone else (and probably do), but that’s not the right they’ve earned merely by creating and using a toolbox.

Expand full comment

Classic real-world example of "one killed with near certainty to protect five who might have gotten lucky and survived anyway" would be a soldier throwing himself on a grenade to protect the rest of his squad, which under the right circumstances (own life to freely give rather than betraying family obligations, etc.), is widely considered admirable, in fact emblematic of the exact opposite of sociopathy https://www.girlgeniusonline.com/comic.php?date=20070207 (or for that matter John 15:13)... though ambiguous enough to not be obligatory, and overall a desperately tragic situation at best, with less destructive options preferred when available.

If somebody thinks some ominous black ball which just crashed through a window with "GPT-x" stamped on the side is the exact grenade they need to jump on, I'm not very inclined to say "no, it might be an inert prop, or the egg of an endangered bird, let's just wait and see if it goes off."

Expand full comment

I got a real kick out of Girl Genius cited next to John 15:13; thank you for that Friday chuckle!

Expand full comment

Yesterday you had me fairly convinced that I was being weird and annoying because I was judging EA on their vibes and not their substantive contribution. This post has reconvinced me that the vibes are freaking weird man.

"I think most of the people who do all three of these would self-identify as effective altruists (maybe adjusted for EA being too small to fully capture any demographic?) and most of the people who don’t, wouldn’t."

Not a chance in the world. Maybe "the people who do all three and are also in Scott's Bay Area social circle" are EA, but I cannot imagine being so wildly ignorant of the rest of the world that I assume anyone who does charity does it within the principles of my personal, admittedly small, charitable movement. This is literally like saying "nearly everyone I know who actually follows through on their charitable intent is a Southern Baptist, and therefore I must conclude that Southern Baptists are the only charitable people." I live in the Bible Belt! Of course most charity takes place through the church.

I know tons of people who regularly give to causes they genuinely believe are worthwhile, based on rigorous analysis of whether those charities actually match to those values. Not one of them would be caught dead identifying with the EA social movement.

Expand full comment

Scott addressed what you said when he discussed whether Bill Gates should be considered EA.

Expand full comment

Yeah, and I think this argument is just obviously bad.

I have a movement I just made up called People For Being Awesome whose stated goals are support for Disease Eradication and Educational Opportunity. Now, unfortunately most of the members of this group are consummate racists*, but that's not central to our philosophy. Scott and Bill Gates both support ending disease, so I think I can consider them members of my movement, and use them as evidence that the movement is good.

*I'm worried about this example because the point is best made with an extreme example, but hypotheticals like this sometimes come across as accusing the real group of holding these values. My point is that you can't define a group by its stated beliefs and count every person who holds those beliefs as a member against their will, because groups are more than just their stated beliefs.

Expand full comment

"I know tons of people who regularly give to causes they genuinely believe are worthwhile, based on rigorous analysis of whether those charities actually match to those values."

The difference is in the cause selection part, not the charity selection part. For EAs, "genuinely believing a cause is worthwhile" is not enough. The cause has to be expected to be above other causes in impact. A framework EAs use for this is ITN: importance (or scale of effect if everything went well) x tractability (how much you can expect to affect the problem) x neglectedness (how many others are working on this, how much low-hanging fruit remains, etc.). It's kinda hard to separate causes from interventions, and you don't have to fully; this is just a heuristic.
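A toy sketch of how such an ITN comparison might be scored; the 0-10 scale, the causes, and every number below are invented for illustration, not actual EA estimates:

```python
# Toy ITN scoring: rate each factor on an invented 0-10 scale and
# multiply, so a near-zero on any axis sinks the whole cause.
causes = {
    # name: (importance, tractability, neglectedness) -- all made up
    "malaria prevention":  (8, 8, 6),
    "local arts funding":  (3, 7, 2),
    "asteroid deflection": (9, 3, 9),
}

scores = {name: i * t * n for name, (i, t, n) in causes.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} ITN score: {score:3d}")
```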

That's why EAs end up working on global health (narrowing down to anti-malaria, lead removal, vaccines, and mental health assistance in third world countries), farmed animal welfare (especially cage free corporate campaigns and policy lobbying outside of USA), and existential risk (nuclear war risk, pandemic and bioterrorism preparedness, and AI safety).

Frankly, these causes just do look better at absorbing a marginal effort or marginal dollar than the more popular causes that Westerners tend to give to. Those causes may be amazing, and in my heart of hearts I wish a marginal dollar to them yielded the same lives saved/suffering averted/good done, but I'm not really seeing how popular Western causes can compete.

Expand full comment

I agree that's the difference! But that's the whole crux of the argument!

Freddie wrote an article saying "look we all support charitable giving. But by taking an extreme consequentialist view of charity, EA sneaks in a lot of assumptions that lead to weird conclusions like x-risk and longtermism which are contrary to most people's moral intuitions. And there are good arguments that this kind of consequentialism isn't actually effective!"

Scott replied that 1) At least EA genuinely pushes people to put their money where their mouth is (I agree!), and 2) You don't have to get into those weeds to believe that EA is good. Going into those details isn't the point, the point is the philosophy. So we can just ignore all the weird, controversial EA stuff when discussing whether EA is good.

But you can't, because the actual substance is the weird controversial part! Without the extreme focus on consequentialism there is no EA. So if the point is, as Scott argues, that anyone who genuinely gives to charity and puts thought into their charitable donations is EA, then EA doesn't really mean anything. If the point is "you should give specifically to these causes" then I don't see how you can just summarily decide the controversial causes are off the table for criticism. Hence the depiction of this whole idea as a "shell game."

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

"So if the point is, as Scott argues, that anyone who genuinely gives to charity and puts thought into their charitable donations is EA, then EA doesn't really mean anything."

That wasn't Scott's point though. That's waaay too inclusive. Here, a bit more about Bill Gates:

Bill Gates and the Bill and Melinda Gates Foundation (the foundation's researchers and grantmakers) do actually do the same sort of calculations as EAs, which is why they have been so monumental in anti-malarial work, including now some tech-forward work on malarial vaccines. I mean Bill even made that crazy speech a while back where he released mosquitos on the audience. So there you've got a massive, massive connection to the number one thing EAs fund and promote as a giving opportunity*.

Additionally, Bill Gates was extremely bullish on paying for covid vaccine production. He had that major speech where he slammed the govt for poor resource allocation, basically saying he'd spend, y'know, however many million, to save the however many billion that the economy was about to hemorrhage away due to COVID. And he did indeed give and facilitate a lot of good work around COVID. BTW pandemic preparedness and vaccines are a *huge* EA cause area.

I also know for a fact from literally watching it happen, that the Gates Foundation have had at least one meeting with an EA-related donor advisory firm over the years**. I doubt they really wanted anyone to know this, but, thinking for a bit about how the world works, it shouldn't surprise you that they would have had more meetings with other major donors like Open Philanthropy fund managers to coordinate replaceability of dollars and so forth. In other words, they are active in the EA ecosystem, even if only slightly and casually. Plenty of EA orgs and donors are that way.

So I think Scott would say that, yeah, your friends don't fit EA, but Bill Gates does. I'm sympathetic to the idea that Scott can't just declare Gates an EA; only that individual can say that. But surely you can see the divide he is drawing is not arbitrary or a "shell game"? I feel you are presenting a strawman of his argument, tbh.

*EAs still promote global health giving heavily, especially anti-malarial, as the best *giving* opportunities. I mean this distinctly from what careers EAs recommend other EAs pursue. Different causes are more money-constrained and others are more talent-constrained. That said, Charity Entrepreneurship is an EA org that incubates new EA charities, many of which are global health related, so occasionally even EAs know it will be right for someone to work directly in global health, rather than policy, fundraising, animals, or x-risk.

** As to whether the Gates Foundation folks took the advice, I don't know, but the point is they do similar enough explorations of value calculations that they wanted to know what others were saying. This is basically what every EA does, even if, say, an X-risker ends up disregarding/downgrading the animal welfare calculations, or a global health fan ends up disregarding/downgrading the X-risk arguments, when it comes to managing their own portfolio and efforts.

Eh, by the way, not every EA is a consequentialist. I think in the recent survey only 80 percent or so identified that way? And utilitarians would be an even smaller subset of that. And that percent seems to be getting smaller every year, based on my talking to EAs. But I think it's a very human thing to want to make the most of (at least certain portions of) your life, effort, and money, and to care about "doing good for others". So EA still appeals to many non-consequentialist people, and continues to appeal or be useful to ex-consequentialists. I've also met zero EAs in recent years who claim to be "moral realists". So there are probably even fewer "moral realist" EAs than there are EAs who would say yes to the consequentialist question? This tells me that some EAs may be adopting consequentialism (and EA) as a tool, without being as single-mindedly consequentialist as you might imagine. FWIW, Valuism seems to be growing a LOT in EA as a counterpoint to consequentialism, and I expect valuists to keep becoming a larger and larger subset of effective altruists, maybe one day surpassing consequentialists.

Valuism part 1 (explains Valuism): https://www.spencergreenberg.com/2023/02/doing-what-you-value-as-a-way-of-life-an-introduction-to-valuism/

Valuism Part 3 (discusses why EAs might adopt Valuism): https://www.spencergreenberg.com/2023/03/should-effective-altruists-be-valuists-instead-of-utilitarians-part-3-in-the-valuism-sequence/

Relatedly, I was kinda careful in my first comment to say "lives saved/suffering averted/good done" (emphasis on including "good done") to try to avoid necessitating a consequentialist framework. I do think everybody donating cares about "doing good" as they define it. But I was not overt enough that I was trying to allow for a break from the consequentialist paradigm.

Expand full comment

As someone who believes both in conventional effective charity and in the value of attempting to mitigate AI-based x-risk, I think that, as counterintuitive as it seems given their consistent underlying logic, it would actually benefit both movements to become less publicly associated (while still overlapping in practice). My argument is as follows:

1. People are broadly in favour of conventional effective charity, even if they don't care enough or realise how important it is.

2. People are broadly worried about AI and happy about safety research, even if they don't really understand the risk categories.

3. When people advocating explicitly for effective charity start saying the best use of funds is for ivory tower research of exactly the sort their pals do, people understandably get very suspicious very quickly. Our political enemies can then very easily make out that the EA-AI-Silicon Valley cluster is a sinister monolithic cabal and discredit both causes associated with it. We should avoid this as much as possible by not co-branding them. AI safety research is very important and people should advocate for it, just not while wearing the 'EA hat'.

Expand full comment

I strongly agree. AI safety efforts are particularly difficult to evaluate for effectiveness, as many in all corners of EA discuss frequently (AI safety corners most of all). This makes AI safety a constantly looming PR disaster for EA more broadly.

Effective AI safety funding violates charitable heuristics that exist for a reason, even if they aren’t best applied there. Higher pay attracts more grifters, even though it makes sense to at least attempt to match what the researchers who could actually accomplish something would make if they went elsewhere. Funders and fundees being closely knit makes sense when they’re all interested in the same, niche, neglected issue. But these and more are still issues.

On top of it all, most research will by nature amount to nothing, and even total success might just look like nothing happened. A close association between EA and AI safety is just begging to constantly saddle the former with accusations of ineffectiveness (not to mention elevated rates of fraud), many of which will be correct in hindsight, even if they were the products of reasonable bets a priori.

AI safety should keep using the toolkit of EA where applicable, and EA should keep funding some AI safety causes. But the world at large is becoming more interested in AI, and that will include a lot more non-EA funding with non-EA secondary priorities. Now is the perfect time to position EA and AI safety as sister movements, instead of arms of the same woman.

Expand full comment

> Once you stop going off vibes and you try serious analysis, you find that (under lots of assumptions) the calculations come out in favor of x-risk mitigation.

Where can I find some of this analysis?

I don’t expect the average EA skeptic to find it convincing, because good analysis offers many jumping-off points. A motivated critic has their pick of objections.

But there is a Straw EA out there who basically relies on Pascal’s Mugging. Next time I see it deployed, I would like to be able to gesture at a more defensible set of assumptions.

Expand full comment

1. Christians do this via tithes (to maintain local churches and fund those churches' charity programs) and often additional giving on top of that. Hell is also the ultimate x-risk, if you think about it. They don't choose careers based on it as much, though, but they probably personally volunteer more. EA just has a different aim, but both it and even rationalism sometimes mirror religion a bit unconsciously.

3 is true, but unfortunately people don't. SBF is just Jim and Tammy Faye Bakker; televangelism was a kind of "effective evangelization" in using centralized modern tech to reach nationwide or worldwide audiences, but it proved to be extremely vulnerable to individual empire building, hucksterism, and more. And Christians have to deal with that, despite it being unfair to tar all of them.

Honestly, though, the riffing on EA due to OpenAI is weird to me. The other side is venture capitalists and Microsoft, lol, how on earth is EA worse than them? And for the past 10 years we've been grousing about internet technologies' unexpected negative effects, like Facebook radicalization, Twitter mobs and cancel culture, YouTube and dancing to the algorithm, PayPal's arbitrariness in freezing financial support, crypto as e-waste and Ponzi schemes, etc.

Why are we all suddenly "trust techies with AI completely, full steam ahead!"?

Expand full comment

A minor point, but Hell is the ultimate *s-risk*, not x-risk.

Expand full comment

Haha, true, though there is a subset of Christians that believe in annihilationism, the belief that God will just destroy the soul instead of tormenting it. I stand corrected though!

Expand full comment

The motte is "do the most good according to whatever moral philosophy you hold" and the bailey is "do the most good according to consequentialist calculation." The motte is nearly tautological, but then the EA movement is nothing like unique in following it. For example, Mormons really do donate a tenth of their earnings and take their charity very seriously. If you want to promote EA specifically, that requires defending consequentialism specifically. In that context, it's totally fair to bring up things like x-risk or the fringes of animal welfare that really do seem to follow from consequentialist principles but that most people find strongly counterintuitive, to put it diplomatically.

Expand full comment

I think this is true. An EA looking at a Christian who tithes 10% will say, "Sure, but you're sending most of that to your church. That church isn't the most effective use of the money, so most of your donation is wasted. EA is the best path forward if you want to maximize your donation!"

I think most Christians will bristle at demands to effectively defund all churches and send that money toward AI safety startups in Silicon Valley instead. Meanwhile, the Christian will turn around and say, "Sure, you're giving 10%, but most of that is wasted on weird/speculative x-risk and fringe ideas. I'd rather maximize my donation to institutions I trust to help those in need."

I think from the outside it's reasonable/understandable for people to question whether someone else's charitable giving is worthwhile or not. If you're not religious, maybe you don't see all the work a church is doing in a community to help bring people together, help people overcome addictions, overcome financial setbacks, heal relationships, and get people's lives on track.

If you don't directly participate, you'd be more likely to discount all that as "just wasted on building pretty buildings". And if you're not concerned about AGI, don't visit factory farms, and only know a startup focused on ethical animal husbandry as a faceless website, maybe you don't see donations to EA priorities in these areas as meaningful or as making any kind of difference.

Maybe sometimes the other guy's motte looks like a bailey from far away in your own motte.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Part of Scott's point is that even that first part is not that common. Most people, regardless of whether or not they have some moral philosophy besides consequentialism, never sit down, think about their moral framework, ask themselves how different kinds of charitable giving would interact with that framework, and try to pick the charitable giving that would maximize whatever it is their moral framework thinks is good.

They probably _do_ think that whatever charitable giving they are doing increases their own moral framework's view of "the good" (and are probably even right to some degree!), but that is not at all the same as doing a rigorous comparative analysis.

Expand full comment

If the goal is simply to get people to put more effort into evaluating their charitable giving according to their own goals, EAs should be livid that their movement has been hijacked by fundamentalist utilitarians.

Expand full comment

I was mostly just pushing back on the "everyone does this". You were arguing that it's a motte and bailey, where everyone already does/agrees with the motte. I was arguing that even the motte is a pretty unusual position to take.

I've got to be perfectly honest: the entire class of argumentation of "hijacked by fundamentalist utilitarians" is one that I find completely uncompelling. I don't care who the group is. I don't care if they have some weirdos who do things I don't approve of. I don't care about the "group" or "movement" _at all_. I will agree with and support some of the specific actions they take and I will disagree with some of the other specific actions they take.

Why anyone ever gets more invested than that is something I don't get (I mean, I _do_, it's basic human tribalism). We should be better than that.

Expand full comment

I have no beef with Effective Altruists - but I do think the concept of "Effective Altruism" becomes less useful the more you widen it to include more abstract things like AI safety.

Like, is donating to a Christian mission "effective altruism" if I frame it as a rational calculation based on my percentage belief that hell is real, the expected quality-adjusted life-years gained by a soul not in hell (this is incidentally a large number), and the number of souls a particular donation is likely to save?

(And to be honest: while 100% your average Christian is not going to frame it in those terms, I actually don't think it's too far from how many Christians would defend how they donate their money, in broad strokes)
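
For concreteness, here is a minimal sketch of that style of expected-value arithmetic in Python. Every input is a made-up placeholder invented for illustration, not a claim about real values:

# A hedged sketch of the soul-saving expected-value calculation above.
# All three inputs are hypothetical placeholders.
p_hell_real = 0.01            # percentage belief that hell is real
qalys_per_soul = 1_000_000    # expected life-years gained per soul kept out of hell
souls_per_dollar = 0.001      # souls a donation is likely to save, per dollar

expected_qalys_per_dollar = p_hell_real * qalys_per_soul * souls_per_dollar
print(expected_qalys_per_dollar)  # 10.0 under these invented numbers

The structural point is that a large enough payoff term swamps any small probability, which is exactly what makes this argument hard to distinguish from other abstract, high-stakes cause areas.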

You can argue, yes, this is Effective Altruism, and I feel like if your definition of EA is "it's just about the mindset in picking causes", you kind of have to. It's hard for me to draw a fine distinction between that sort of argument and other fairly abstract "EA causes" like AI risk.

And I think that's an okay definition of EA, but maybe not a useful one? Because the result is that instead of focusing on malaria nets, you end up focusing on convincing everyone else of the assumptions that you're making (convincing people that hell is real, convincing people that AI is dangerous), and ultimately, it just ends up a lot like any other charity: you end up funding things like "awareness campaigns" to convince more people to join the cause rather than the more concrete stuff.

Whereas I feel like if you define Effective Altruism as specifically the concrete, measurable, short-term impact stuff, it becomes a lot more useful as a concept. The focus becomes on the effectiveness, on doing the most concrete measurable good.

That's not to say an Effective Altruist can't care about saving souls or safe AI, but I feel like that work should be considered separate: it's fine to be a Christian and an Effective Altruist, it's fine to be an AI Risk Proponent and an Effective Altruist, but it's probably not helpful to the concept of Effective Altruism if I frame my almsgiving or my support of AI alignment efforts as "effective altruism".

Expand full comment

I see what you mean. I think we can make that hypothetical Christian not an EA by adding one more criterion:

EAs should plan their reasoning and efforts in service of provable goals, e.g. goals that we can and will eventually look back upon to prove whether the EA's efforts accomplished the goal or not, and whether the goal is complete or requires continued work.

I think this is a feasible inclusion and not too specific. I admit it is a mouthful, but it's also just a basic part of the checklist any decent project manager would complete before moving forward with any project: project success criteria, and when we expect to check for them.

Obviously, donating to a Christian mission, you will never be able to test if anybody extra avoided hell because of your donation. So it doesn't fit.

AI safety work can fit, though, at least the work I'm aware of. I'd like to see all of it meet this bar; I'm sure some doesn't. Some deliverables might be to answer fundamental questions about interpretability, to solve fundamental game theory questions, to design a series of tests that AI models can be put through to see if the models meet basic safety criteria, and to promote those tests to AI labs and get commitments to use them.

And yes, the big provable goals are about safety. We would hope to be able to answer questions like "Did our efforts reduce the risk of AI catastrophe?" and "Is humanity still at theoretical risk from advanced AI, or has humanity reached a safe equilibrium with AI?" I have to admit these questions are uncomfortably far off, but they are at least answerable someday. We might become slightly more confident in safety as time goes on, by repeatedly surveying AI ethics** experts about what types of abuse they suspect people could feasibly use AI for (ideally the feasible abuses keep dropping, and the experts can point to what changed that made them worry less and less). You might become more confident that some efforts have done good by watching AI use in countries with AI policy and safety frameworks vs countries with no such policies or safety tests.

"Danger" can be proven by seeing any AI-caused catastrophes (in which case I hope the anti-safetyists would stop complaining about money and effort going to AI safety). "Safety" can be proven either after we get AGI (if we survive for some years, we would know there wasn't anything more to do) or pre-AGI, just by virtue of how AI develops and how humanity seems to adjust (in which case AI safety work would likely fade out and be remembered as a silly diversion).

Either way, it won't be forever that funding and effort go to AI safety, so I don't see a huge loss here, and I do see some possibly massive wins that would affect real people alive today. I don't see AI safety as so pie-in-the-sky as someone trying to save souls from hell. Maybe the divide between safetyists and people into more typical charity is just the long feedback loops? It's certainly not as easy to gauge as, say, measuring malaria trends in the regions where you supplied nets, but it does seem provable and disprovable eventually. Perhaps it is more a difference of degree (time to wait to find out if you did good effectively or not), not a difference in kind (ability to know).

**Note that AI ethics is different from AI safety, so this would not just be the safetyists surveying themselves. This is more of a corollary point.

Expand full comment

I don't know that I agree with the addition of that "rule" (it feels more like a kludge than a natural extension of the philosophy), and I feel like AI safety, and a lot of the other "abstract" EA goals, don't really pass it anyway:

For one, any interpretation of the past is nearly as contentious as predictions of the future, and it's all an N=1 trial. If we don't have an AI catastrophe, will it be A) because of the critical contribution of EA activists, B) because of things that would have happened anyway, or C) because there was never really a risk of AI catastrophe in the first place?

And second, when do you even know it's safe to look back and say "ah, yes, we're past the risk of AI catastrophe"? By its very nature, it's the sort of game you play until you lose.

---

And just in general, it seems like fairly few things are so clear cut that you'll be able to retroactively judge how successful various initiatives were. Even if you pick something with a clearer success criterion (e.g. "CO2 levels return to X benchmark by year Y"), how do you weigh how much impact buying a castle and hosting meetings about climate change had?

(On the other hand, a charity that directly pulls carbon out of the air in a very concrete, measurable way I think would fit much better with the "malaria nets" style EA)

Expand full comment

I like your criterion, but it is basically the criterion of falsifiability, and I think we can add other criteria of this kind to it and say it follows, more or less, the core of the rationalist movement.

Expand full comment

That's true, but I'm not sure how to formalize it for outsiders who don't know what rationality/LW is, who wish to evaluate how well EA is achieving its goals. I mean maybe the answer is just "fuck em, they will refuse to 'get it' even if you formalize criteria, because they don't want to" and it is tempting :P

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

At this point, it sorta feels as if EA is like fantasy sports for people who don't like sports. You guys don't have long-winded debates about the merits and shortcomings of WAR ratings for pitchers vs position players, or how much money is too much to spend on Jalen Hurts in an auction draft. Instead, you argue about the relative value of non-profits and their various activities. Just like with sports, arguing about EA has become an end in and of itself; it is as much a part of EA as the 10% giving pledge.

Expand full comment

Sorry, I am not very convinced. When I first encountered EA, it was an attempt to use resources more efficiently, recognizing that much of the world of charity and philanthropy was corrupt, misguided, and truly ineffectual given the ratio of admin expenses to real help in a cost/benefit-assessed way. All well and good, and taking to heart the realization that most charities were scamming people's desire to do good while having no operational way to effectuate that desire was a good thing. Once again, all well and good. How did this go from a nerdy intellectual way to do effective altruism to tithing and 80,000 Hours and animal rights and global AI risk, etc.? I respect the level of intelligence (or should I say rationalistic tendencies) and commitment that many in the EA movement have, but this is hardly protection from this inflationary hubris. Saving chickens counts, but thwarting the purpose of mosquitoes doesn't. Once you start down the slope of animal rights, it becomes hard to understand which animals to save and which should be destroyed.

Local and farm-to-table is great as a kind of plaything to improve certain cooking outcomes, but when it needs a "philosophy" and turns into a movement with grandiose dreams of saving the planet from some existential risk, things start looking a lot less like EA and a lot more like a pseudo-religion, complete with attempts to acquire power by hook or by crook. The capacity to flip means and ends becomes easier and easier as you couch things in religious metaphors of preventing the apocalypse or ushering in the eschaton. Maybe it is just easier to think of it this way: nerds need a pseudo-religion too, and nerds like to think of themselves (like all religious folks) as the good guys in concert with the arrow of history.

My hope is that EA stops thinking bigger and starts thinking much smaller. Out with the idea that you can do more as a movement or as powerful folks placed to alter global policy, and in with recognizing that doing "good" and acting ethically in the world is an extraordinarily difficult thing, but that careful thinking in a default EA mode can help folks forward on their own journeys, if that is their wont, to effectuate change in the world, by charity or not. The minute acting ethically in the world turns into "how do we accrete power to turn the world to our vision," you have become a religion or political movement, even if you think that the best way to change the world is to put your "people" into place and "grow" your movement. A group of smart, or even super smart, nerds can no more escape the knowledge problem and Knightian uncertainty than a bunch of rabid zealots. The great thing to me about EA, as I see it, is that it takes a world fraught with such uncertainty and impossible knowledge requirements and tries to turn some of that existential reality into a more manageable space using the tools of a rationalist utilitarian. An excellent balm for the world, no doubt. There is much to learn from those using the lens of a rationalist utilitarian, but the need to have or create a movement seems misguided. Movements very quickly move from persuasion to force and the reversal of ends and means, which is why FTX and many other EA folks with power find it easy to set ethics aside for the "greater good". Is the greater good a planet with no humans, or with 1 billion people, or with 20 billion?

Rejecting the idea of a movement, or that EA must try techniques to conquer institutions, create counter-institutions, or capture private or public monies and prestige, would be a first step. Apologies for the long post.

Expand full comment

> I checked to see if I was being a giant hypocrite, and came up with the following: wokeness is just a modern intensification of age-old anti-racism. And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc. But people (including me) mostly criticize wokeness for its comparatively-small failures, like academics getting unfairly cancelled. Why should people judge effective altruism on its big successes, but anti-racism on its small failures?

I don't agree with the premise. Wokeness is *not* "a modern intensification of age-old anti-racism," but an outright repudiation thereof. It's precisely what you alluded to above with the words "are these people just virtue-signaling? Is it bad for their coalition to appropriate something everyone believes?"

This article is largely organized around the ancient wisdom, "by their fruits ye shall know them": the idea that by looking at what people *actually do,* rather than what they say they support or want to do, you can get a good picture of their character. So let's compare today's woke movement to those who fought against racism in the past.

Abraham Lincoln spent significant time and effort fighting for equal rights. In an 1854 letter, years before he was elected to the Presidency, he wrote, "Our progress in degeneracy appears to me to be pretty rapid. As a nation we began by declaring that 'all men are created equal.' We now practically read it 'all men are created equal, except negroes.' When the Know-nothings get control, it will read 'all men are created equal, except negroes and foreigners and Catholics.' When it comes to this, I should prefer emigrating to some country where they make no pretence of loving liberty,—to Russia, for instance, where despotism can be taken pure, and without the base alloy of hypocrisy." (Surprisingly modern rhetoric for a 19th century statesman!) As President, he used his power repeatedly in the pursuit of equality, through the Civil War, the Emancipation Proclamation, and then pushing for the Thirteenth Amendment. He even ended up giving his life for the cause; after winning the war, he gave a speech in which he mentioned that one of the next things on his agenda was to pursue some degree of political equality for black people, including voting rights. In the audience was an actor named John Wilkes Booth, who was so infuriated by this idea that he vowed this would be the last speech Lincoln ever gave. Three days later he followed through on it.

Frederick Douglass, a contemporary of Lincoln's and one of the most influential black voices on the subject of the abolition of slavery, called for strict equality, nothing more, nothing less. He famously proclaimed, "Everybody has asked the question... 'What shall we do with the Negro?' I have had but one answer from the beginning. Do nothing with us! Your doing with us has already played the mischief with us. Do nothing with us! If the apples will not remain on the tree of their own strength, if they are wormeaten at the core, if they are early ripe and disposed to fall, let them fall! I am not for tying or fastening them on the tree in any way, except by nature's plan, and if they will not stay there, let them fall. And if the Negro cannot stand on his own legs, let him fall also. All I ask is, give him a chance to stand on his own legs! Let him alone!"

Martin Luther King Jr. famously called for a time when race would not matter, when people would be judged "not by the color of their skin but by the content of their character." He (much less famously!) also admonished his own people to meet whites halfway, telling them that they needed to shape up their own conduct if they wanted to be taken seriously. He condemned high rates of crime, sexual misconduct, and other societal improprieties among black people and told them that they needed to do better, that equality meant not only the power to do all the same things as everyone else but also the responsibility for the choices made with that power.

Today's wokesters are nothing like these past figures. Woke thought-leader Ibram X. Kendi wrote in "How To Be An Antiracist" that "The only remedy to racist discrimination is antiracist discrimination. The only remedy to past discrimination is present discrimination. The only remedy to present discrimination is future discrimination." (This sounds like nothing so much as Governor George Wallace's proclamation of "segregation now, segregation tomorrow, segregation forever!")

People who speak of equality and race-neutral policies are condemned as "racist" by the woke. *Black* people who follow in the footsteps of King and Douglass are treated even worse. (For example, my wife once lived in the same town as Bill Cosby. She tells me that "all the women" knew what he was like and that they should be wary of him. His character flaws were no secret, but it wasn't until he started telling black people that they needed to clean up their act that he started getting in trouble for it.) What we end up with is a system that is "anti-racist" not in the sense of being opposed to racism, but in the sense of anti-matter: exactly like matter in every way except for a few specific properties, which are oriented in the opposite direction.

Meanwhile, the results they have produced, the fruits by which we are to know them, are not just cancel culture, but riots, pro-crime policies at both local and national levels, and creating lots of misery for the people they claim to be helping. (One of the most obvious examples being the 2008 financial crisis. They weren't using the term "woke" back then, but the ideas in the 1990s that led to government policy pressuring banks into making subprime mortgages more available to minorities and low-income people, who ended up hurt by far the hardest in the crash because of it, are easily recognizable as woke policy.)

They've appropriated the name of virtues everyone believes in, and used these names to shield themselves from well-deserved criticism when the things they do cause very real harm, and all too often exacerbate the problems they claim to be fighting, rather than alleviating them! And for that, they deserve all the criticism they receive and more.

Expand full comment

Thanks for the Matthew 7:20 reference.

Expand full comment

I remain convinced that EA is unpopular because it shows up a key inconsistency in others, not because of anything it is, itself. I suspect that most people don't actually think that charitable work as such is worth doing, and resent efforts to systematize making it more worth doing. That is, likely for most folks charity is just a de-mythologized form of tithing, and calling attention to the efficacy of this is grotesque and rude. Like asking if God really enjoys the smell you get when you incinerate an animal's corpse on an altar. Look bud this is a religious thing we're doing here, don't blaspheme it.

Statement of conflict of interest: I don't give to charities of any kind.

Expand full comment

Do you think charitable work is worth doing?

Expand full comment

Not for itself, no. If a man buys something nice for his son, is that a charitable work? What about for his nephew (ultimately cognate with the fun word 'nepotism')? His second cousin once removed? A friend? Someone a girl he's sweet on feels pity for? I believe that doing things that benefit members of our in-group answers a natural drive, akin to how we can't help but look towards the east at dawn or in the direction of a traffic accident. The energies of an active person will find plenty of targets near to hand without meaning to be charitable about it.

A metaphor strikes me as I write this. I hope I won't regret putting this into words, but here goes: there's something abhorrent about the transit from natural drive to optimized outcome. When I hear that some guy has fathered hundreds of children by donating at sperm banks (or worse yet, dozens by being a crooked fertility doc), my desire isn't to clap him on the shoulder and say "you old rascal you" but to have his many children sterilized and have him put to death. I'll stop short of attempting a false equivalence, here.

Expand full comment

I understand the sentiment, I've felt similar. What do you think of charity while genuinely feeling compassion for every person? e.g. part of the goal of Buddhism is to help you feel compassion for all sentient beings. All people naturally fall within your ingroup after that.

Expand full comment

A decent response. Two things come to mind. First, as a member of AA who credits it with saving my life, that argument has some appeal to me. However, the vast majority of people quit drinking without it, and in many of its specifics, it's inarguably weird. More to the point, it's never been proven to work. But most importantly, it operates with a philosophy of "attraction rather than promotion," which avoids many of the pitfalls EA falls into. If you're not shouting it to the rooftops, you're not pissing people off with your smugness. Second, the focus on malaria prevention would seem to argue against the success of the rationalist, consequentialist approach? Cases are rising worldwide due to factors like global warming and a new urban-dwelling mosquito ravaging cities in Africa. Wouldn't money have been better spent on preventing those things? Well, you can't predict them. Because nobody can predict anything as well as EAs, convinced of their own intellectual superiority, seem to believe they can.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

So, I've believed in the theory of utilitarianism for 12+ years at this point, and aiming to do altruism effectively is a natural extension of that. I've been around the EA community for 6+ years at this point, and I will say most of the people I have met in the community are very smart and genuinely good people.

But I do have some problems with the movement, starting with the concept. The concept is just so broad, it's not meaningful. As Freddie points out, I think it is akin to a movement that said "do politics good" or "effectively make the world a better place." It's the kind of shit Silicon Valley made fun of in its first season, where they showed all these small startups doing random stupid shit while saying it was all to make the world a better place. Yeah, EA as a community has some central themes that Scott points out, but the concept itself is still vague and broad in a way that's a turn-off to me and many others (it feels unnecessarily elitist, I think?). I do wish it was called systematic altruism or something else a little more pointed.

Moving on, another thing I have a big problem with in the EA sphere is the "math", the "evidence", and the "consequentialism". All in quotes because I don't know a better way to say that this stuff doesn't really have evidence in the way the term is typically used, it doesn't use math in the factual way you'd expect a hard science to, and the consequentialism is just whatever someone conjures up. What does saving 200k lives today do for the future 500,000 years from now? What's to say donating that money to charities deemed less effective by EA (like research, or education) wouldn't have a much stronger effect in the far future? The error bars on this stuff are just so high, it just isn't that convincing. That's why you can have SBF justifying everything he did, and MacAskill spending millions (maybe just a rumor) on promoting his book, because all this stuff is just whatever people feel like rather than something you can actually look at the evidence for.

It reminds me of an EA meeting where a high-up member of USAID, with 20+ years of experience in global development, came to talk. Someone asked him, "In your experience, what is the most effective intervention you've seen?" And he kinda scoffed at the question. He was like, "What do you mean, most effective? Most effective for what?? How do you compare a deworming program in one area of the world with educational support in another?"

EA would break this down into some type of metric and purport to have an answer, to a degree that I just don’t find appropriate. EA kinda feels like the wide-eyed kid that dreams big but doesn’t understand how the world works.

I probably can't describe this correctly, but it also feels weird to me that the CEO of a tech conglomerate can potentially do more for the world than all of EA could, yet they wouldn't be an EA unless they explicitly chose that career due to something like an EA-based career evaluation. (And if they would be considered an EA despite no interaction with the community, that's not meaningful.)

I kinda wish there was a movement that was more about being the best version of yourself, for yourself and for others. And I wish it didn't explicitly tell me how to do that, but gave me tips and tricks, personal stories, classes, training, whatever. I think that's something that would resonate much more strongly with me, and many others.

In short, I'm glad EA exists. I'm glad organizations like GiveWell exist. I'm glad there are people out there genuinely trying to make the world a better place. I just hope the movement matures, maybe with a renaming, maybe with a split (or both). I hope the degree of confidence in their evidence and recommendations comes down. I hope they expand what they consider acceptable ways of doing effective altruism. I hope they broaden their messaging to resonate more with the average person. But I will always commend anyone who truly tries to improve the world/do what they think is best for others, EA or not.

Expand full comment

You're right on. To use utilitarian analysis you have to agree on some measures of utility and cost, and that's where the rubber hits the road. EAs all seem to agree among themselves on what has utility and what does not, from what I can tell mostly on a health-insurance-like QALY per dollar basis.
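
To spell out the metric: the comparison is simple division, QALYs gained per dollar spent. A minimal sketch, with the charities and every figure invented purely for illustration:

# Hypothetical cost-effectiveness comparison on a QALY-per-dollar basis.
# Charity names and all numbers are made-up placeholders.
charities = {
    "charity_a": {"cost_usd": 100_000, "qalys_gained": 2_500},
    "charity_b": {"cost_usd": 100_000, "qalys_gained": 400},
}

for name, data in charities.items():
    print(name, data["qalys_gained"] / data["cost_usd"])  # QALYs per dollar

The disagreement upthread isn't really about this arithmetic; it's about whether the qalys_gained figure can be estimated at all for many causes.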

Expand full comment

I do not identify as EA, but I believe EA on balance is a positive force and does good things. For now, that is enough. Institutions often transmogrify; if EA is around in 100 years, it may be that it is no longer a force for good, but today it is.

Among the people I hang out with, I suspect less than 50% know the word "altruism". Probably way less than 50%. The EA label itself indicates intellectualism. This is not a bad thing, but it is certainly a barrier to entry.

Expand full comment

"4: It’s tautological that once you take out the parts of a movement everyone agrees with, you’re left with controversial parts that many people hate."

It can be the other way round: most variations of X have a feature no one likes, such as politicians lying. Then the unusual feature would be something everyone likes. Everyone would like honest politicians and efficient charities, but at the same time they're going to closely monitor them for being really honest and really efficient.

Expand full comment

Everyone loves an honest politician who tells them what they want to hear. Tricky part is which feature they prefer to retain when it's not possible to have both.

Expand full comment

I think one of the things about the EA movement that gives me pause is the low percentage (only about 20%) of adherents who are vegan, according to the Rethink Priorities survey in 2020. The easiest thing you can do is to not do something, namely sending economic signals in support of slavery, r*pe, torture, and murder. If the grand majority are that bad at math and care for others so much less than they care for their own sensory pleasure or convenience, they're hard to trust.

Expand full comment

Spoken like a true vegan!

Expand full comment

You can say (or write) "rape" here. This is not X-formerly-Twitter or Reddit; so far there are no Anti Evil Operations to pounce on you for using a no-no word or extreme sensitivity around triggering.

We might argue with you over "is artificial insemination of cattle rape?" but I'm pretty sure nobody is going to gasp and fall on the fainting couch over "you said 'rape' and not 'r*pe'!'

Though this does give a novel twist to the AI risk/doomerism argument: (AI) Artificial Insemination - the *real* existential threat!

Expand full comment

I'm getting Poe's law vibes from E1's comment...

Expand full comment

Then you are bad at reading vibes.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

Everything old is new again; little did I think we'd be seeing the return of 18th/19th century use of dashes to avoid naughty words in the 21st century.

D___m me, I was _________ ________ astonished that they didn't feel free to specify that ________ was the bad thing, not _________, __________ or ____________!

Yeah, they could be trolling, but we've already had vegans commenting on previous posts in these same terms about torture etc. and that evangelical vegan in the kidney donation post about how they'd be horribly worried that a donated organ might go to a meat-eater, who would then continue to live, and eat meat, and so condemn zillions of living feeling beings to torture etc.

How to know? Best to take everyone at face value unless they're *too* heavy-handed with the parody 🤷‍♀️

Expand full comment

Yeah I just thought there was a chance some moderator tool would increment a counter somewhere if I said one of the standard dirty words and make it more likely that the account would get banned. Rape.

Expand full comment

So far we're dodging that; Substack don't seem to be quite as Witch-hunter General as other places. Not yet at least; there isn't the same community organisation that leads to "I find the word 'aubergine' triggering, I demand all mentions be censored and warnings about references to vegetables tagged" interventions as in other places.

Expand full comment

FWIW I agree with you and many vegan friends do too (I'm lactovegetarian but basically vegan). It is honestly weird.

I think the rationalist culture has some myths about how hard or unhealthy it is to be vegan or lactovegetarian, and so they don't try. On the other hand, you will sometimes meet ex-vegans in the community who went vegan and "felt horrible". Ask them if they took a multivitamin, and especially B12, and they will look at you confused. So it's one of the two: one group is overblowing the risks, and the other is simply not being realistic about the change they are trying to make. Neither group is very good at taking small precautions and doing minimal planning to ensure the success of the project that is "going vegan".

But all this is very rude to bring up when you first meet someone, so an uncomfortable pall hangs over non-vegan EA spaces if you are yourself a vegan. It's very hard to know where people stand. They *probably* care some about animals, but there they are eating meat... so do they actually not care? Do they think what you are working on is dumb? Are they actually not altruistic and only working in EA for status or personal reasons? It sucks.

Expand full comment

> I think the rationalist culture has some myths about how hard or unhealthy

This would not be my bet.

I think one part is people not caring about animal suffering (which I think is very bad ethics), and another part is mostly lacking willpower.

And for the second part, either they think it is the correct choice to be vegan but still didn't manage it, or they think it is better to spend their willpower on something else and that it is a rational choice not to focus on this (I saw a lot of justifications like that; this is basically the reason Yudkowsky gives, if I remember correctly, for example).

Expand full comment
Dec 4, 2023·edited Dec 4, 2023

Sorry I'm late. I agree they think it is better to use their willpower on something else. And I'd agree with them if I agreed with their model of willpower here. What I'm trying to say is that I disagree with them here.

1. The obvious: it is easier than they think, because vegan food is not as rare, expensive, hard to cook, unhealthy, unsatiating, untasty, or unsatisfying as they believe.

2. More importantly, and more to what you are getting at: they think it is "harder" (takes more willpower) than it has to be. I think they don't model willpower well. A lot of them seem to subscribe to the belief that willpower/self-control is a finite resource; formally this is known as ego depletion, which has been debunked. Interestingly, for people who *believe* willpower is a limited resource, ego depletion does hold. So I think the rats are spreading damaging memes here! A summary: https://hbr.org/2016/11/have-we-been-thinking-about-willpower-the-wrong-way-for-30-years

I believe the rats (those who do fit your first criterion of caring about animal welfare) could go vegan or close to it: if they allowed themselves to feel impassioned about protecting animals, rejecting speciesism, and creating a future with strong animal welfare, they could feel motivated to eat vegan. Conversely, if they allowed themselves to feel morally disgusted/horrified by animal treatment, they would find animal products much less appealing. It's not that hard. I think willpower is very creatable here! I think they are wrong that using your effort to do good is a zero-sum game, especially because food choices take place in the kitchen, in the grocery store, and during the times you are looking at a menu or choosing a restaurant, when you aren't thinking about work and other causes anyway.

Expand full comment

I am very late, but I agree with all of this :-)

Happy new year btw !

Expand full comment
Dec 1, 2023·edited Dec 2, 2023

I agree everyone should be vegan, but I don't think 20% is a bad score for a movement which isn't specifically about veganism. It is, what, 10 times the base rate? I really don't think a lot of movements do better; I would bet that even environmentalists do worse.

Also, probably a lot more are pro-vegan and just not succeeding at being vegan, and eat a lot less meat/dairy/eggs than the average person (just like Scott).

So it doesn't really weigh against my trust in EA.

Expand full comment

I understand what you're getting at, but I think you may be looking at the first derivative instead of the raw value. About 1% of the US population (which I will pretend for the sake of argument is the same population from which EAs are taken) is vegan, and 20% of EAs are. Some combination of caring about others plus logic plus grit got these people to be vegan. Say also there's no other group (other than the obvious one) who is more likely to be vegan than EAs. Still, 80% did the wrong thing! Whatever that combination was, it's not present in 80% of the group! Sure it's better than the piss-poor showing of the public, but that's faint praise. If I cared about, say, not getting cholera and I could go to restaurant A, where 99% of the staff didn't believe in germ theory and was showing signs of illness or restaurant B, where 80% of the staff didn't believe in germ theory and was showing signs of illness, I wouldn't choose restaurant B, *I'd stay home*. Positional accuracy is more important than directional accuracy. I can probably better figure out how to donate my money than the EA organization (of course it might turn out the leadership is only drawn from the 20%, but that seems unlikely given how much money they allocate to certain initiatives).
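
To make the base-rate arithmetic explicit (a minimal sketch using the 1% and 20% figures from this comment):

# The "first derivative" vs "raw value" distinction, as arithmetic.
base_rate_pct = 1   # percent of the US population that is vegan (commenter's figure)
ea_rate_pct = 20    # percent of surveyed EAs who are vegan

print(ea_rate_pct / base_rate_pct)  # 20.0: relative improvement over the base rate
print(100 - ea_rate_pct)            # 80: percent of EAs still not acting on the logic

Both numbers are true at once, which is why the relative comparison can be impressive and still faint praise.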

Expand full comment

Have you responded to the (admittedly vague) comments of the Distributist and Moldbug that EA encouraging normal, busy people to think about problems far away from themselves is in itself bad?

Because problems that are far away and abstract make people dumber and easier to lie to, and you end up supporting causes like wars where you don't know the virtue of either side and have not a drop of historical context.

Does EA even remotely have an argument about how to prevent fraud ("yeah, I'm soooo great, look at me take a million dollars to a third-world country") from the next Sam Bankman-Fried?

Expand full comment

I'm going to say you're broadly a left-winger or Blue Tribe or whatever you want to call it, in the sense of 'people who live in San Francisco are San Franciscans even if they're not 100% on board with the Democratic Party.'

The left is split into two philosophical factions that often pretend very hard they're the same. Utilitarians and idealists. Utilitarianism, I assume, you are familiar with. Idealists, who are mostly Hegelians, make up the farther left part of the party. Socialists, for example, are heavily influenced by Marx who was influenced by Hegel who was a famous idealist. Utilitarianism and idealism are completely incompatible as philosophical beliefs. They might agree on a political program but they will never agree on fundamental goals. They don't even share any philosophical heritage. They are really, really different.

You, and the entire EA movement, are utilitarians. Freddie, and the entire socialist movement, are idealists. You complained a few threads ago about how Freddie keeps showing up and saying, "Why don't you do more to support the revolution?" (more or less). And your response was something like: "you refuse to define even what a revolution is or how it'd help."

But you're talking past each other. For a Hegelian idealist the revolution is the point. They believe in a series of succeeding world-historical spirits which it is their philosophical duty to advance. If you're not helping with the Hegelian unfolding, then what you're doing has no value. On the other hand, utilitarians basically think all of that is fake and kind of made up. There is no spirit of the age because it can't be observed or quantified.

A good example I like to use is Mill vs Marx on taxes. When confronted with the argument that taxing rich people more was unfair because rich people had done nothing morally wrong but were suffering an additional burden, Mill ultimately agreed it was not fair. He even added that it disincentivized a good thing, which was a concern. But he justified it by saying that it led to more good than bad so long as the money was spent wisely. In other words, it's not fair to the rich, but it increases net utils, so it's justified.

Marx, meanwhile, dismissed it by saying that all of it was class struggle and that whatever brought about the revolution was moral by definition. You had a moral duty to work toward it. In fact, morality was defined by working to advance it.

You can see this today in the divide between neoliberal Democrats who want taxes in order to fund programs vs more left wing Democrats who say they would impose wealth destroying taxes because it'd be more fair or create a more just (in their opinion) society or reduce the influence of capitalism. The former is utilitarianism: taxes are justified because they create net utils. The latter is idealism: taxes are justified because they help bring about society wide change toward the ideal.

So you're both talking past each other because you have fundamentally incompatible worldviews. Marxists do not care about utils. EAs do not care about the weltgeist. The proper thing to do is to decide which one you are and then realize the criticism from another philosophical school will always be, at best, problematic.

PS: I suspect he's upset you're 'king of the nerds' because he's correctly identified that EA is the charitable arm of the rise of a new set of post-industrial elites, the 21st century equivalent of Carnegie libraries. A movement that will do a lot of good but doesn't subvert the existing system. The fact it produces net utils is actively bad because it prevents revolutionary consciousness.

Expand full comment

Does malaria itself not also hinder revolutionary consciousness? If deworming is the best way to improve school attendance it might also enable more Africans to read the Communist Manifesto.

Expand full comment

This is a utilitarian way of thinking. The revolutionary socialist idea is to heighten the contradictions of the system, more or less making it purposefully more painful in order to bring about its downfall. The classic here is On Contradiction by Mao, though there are older works.

Expand full comment

So they're saying that the way to get less torture overall is to torture as many people as possible, all at once, so that they all hate being tortured SO MUCH, and anyone who suggests stopping the torture early is counter-revolutionary? And this will somehow bring about a desirable socialist situation where all people voluntarily help each other, whereas people immediately, today, voluntarily going out and helping other people, and politely trying to encourage others to do the same, cannot possibly have such an effect.

Yes, I think I rather strongly disagree with that.

Expand full comment

More or less, though that's a somewhat uncharitable way to put it.

To steelman the case using an example Marxists will use: there was a reactionary rebellion in the Vendée against the French Revolution. One reason the peasantry was willing to support the aristocracy in a reactionary rebellion was that the nobles of the Vendée gave generously to charity, had worked locally to make laws that made life easier for the peasantry, and had a strong culture of service toward the whole population. But they were still fighting to defend a system that put them legally above the peasants. They were resolving the tensions of the system, and in doing so disrupting the 'natural historical' process that would lead to the end of feudalism. And this disruption gave them the ability to make trouble for the world-historically necessary French Revolution, which would lead to a better world.

I will concede that, in logical terms, that is not particularly different from what you just said about torturing people enough.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Perhaps a more appropriate argument to persuade Marxists, then, would be that the extirpation of certain contagious diseases is world-historically necessary, yet has long been stalled by political and economic decisionmaking processes which "resolve the tension" between compassion and convenience by doing bad math, intolerably leaving the people of some continents legally above those on others.

Expand full comment

Yeah, you can understand the philosophy and construct arguments within it. You just did it wrong in the specifics.

I'm surprised more people don't pay attention to this honestly. The far left does have influence in American politics and is hard to understand if you don't know how they think. But even more importantly this is part of how public policy is done in China and China's pretty important.

Expand full comment

Amazing comment!

Expand full comment

Thank you!

Expand full comment

Dammit, that should have been obvious, thanks for pointing it out! Continental vs. analytic once again rears its ugly head.

Expand full comment

Nerdy sum, nerdiani nihil a me alienum puto.

Expand full comment

Freddie may be participating in a movement with idealist heritage, and have inherited some intellectual touchstones from that, but I don't think it's right to assume that his position can thus be understood as a direct application of idealism. Indeed, if you follow his disagreements with other people on the left, they *very* often come down to him countering others' idealistic positions with utilitarianism. Basically "I understand that this is done with good intentions and according to a coherent moral theory, but I don't agree that it's actually likely to help."

Deciding which school you're in, and thereby determining which school you can take criticism from, is only practical when people consistently limit themselves to constructing their philosophies coherently within a single school.

Expand full comment

> Freddie may be participating in a movement with idealist heritage, and have inherited some intellectual touchstones from that, but I don't think it's right to assume that his position can thus be understood as a direct application of idealism.

No, Freddie's talked about Hegel and about how you can't understand Marx without it and how he's a Marxist. I believe you're mirroring, a common fallacy where you project what you think or feel onto other people.

> Indeed, if you follow his disagreements with other people on the left, they *very* often come down to him countering others' idealistic positions with utilitarianism. Basically "I understand that this is done with good intentions and according to a coherent moral theory, but I don't agree that it's actually likely to help."

This is a strange belief utilitarians have where any practical advice counts as utilitarian philosophy. It doesn't. The Catholic Church will often say things like, "I understand you have a lot of religious zeal but your current tactics will not work to convert more souls." This pragmatic display of strategy does not make Catholicism a utilitarian ideology. Its opposite, that non-utilitarians will not take account of the practical results of their actions, is also wrong.

> Deciding which school you're in, and thereby determining which school you can take criticism from, is only practical when people consistently limit themselves to constructing their philosophies coherently within a single school.

Most people admittedly limit themselves to less than one school, instead having something of a confusing mishmash of ideas they haven't thought through. I do not think this describes Freddie DeBoer, a committed Marxist and intellectual. I believe he has thought through his beliefs thoroughly, far more thoroughly than most.

Expand full comment

>No, Freddie's talked about Hegel and about how you can't understand Marx without it and how he's a Marxist. I believe you're mirroring, a common fallacy where you project what you think or feel onto other people.

Yes, but he's *also* talked about his disagreements with other people on the left where he makes explicit that he's criticizing them from a mechanistic "will this work?" standpoint. Just because one can construct a framework where each viewpoint is coherent only in isolation, and they lose meaning when combined, doesn't mean that people will actually behave according to that framework.

>Most people admittedly limit themselves to less than one school, instead having something of a confusing mishmash of ideas they haven't thought through. I do not think this describes Freddie DeBoer, a committed Marxist and intellectual. I believe he has thought through his beliefs thoroughly, far more thoroughly than most.

I agree that he's thought through his beliefs much more thoroughly than most. But still, there are times I find myself disagreeing with him because it appears that he hasn't applied his own reasoning from one domain to other domains.

Expand full comment

Social commentators like Freddie, and quite frankly his implacable critics (who are not the same people, but are in the same profession), are not writing about the topics you spent most of your essay arguing about. You get to it at the end: criticizing EA is about taking away units of power from the social group that metaphorically goes to the libertarian meetings, not about taking donations away from GiveWell. Most people are implicitly scared that any social group that gains power will use that power to impose something on them. If you think the people who attend libertarian meetings are weird aliens, independently of their online essays, you would want to stop them from attracting impressionable new recruits, or from being able to define what's considered acceptable to say at an office party, even if you're generally in favor of deregulation or other libertarian ideas, lest you one day find yourself living in, and being expected to conform to, a weird alien society.

Expand full comment

Thanks for posting this further analysis, you've helped me clarify my thoughts and my mix of admiration and unease with EA. At this point it boils down to:

1. "One answer: don’t have opinions on movements at all, judge each policy proposal individually. Then you can support freeing the slaves, but oppose cancel culture. This is correct and virtuous, but misses something."

I think this is mostly where I stand. I'll defend EA against most general accusations, because it's doing plenty of good in the world, simple as that. Hell, even for those of us who are not AI-doom-pilled, the fact that people are being motivated to do research on how to make AI useful for human goals sounds like useful work.

2. about point #2 in your definition, "think really hard about what charities are most important, using something like consequentialist reasoning"

And this is the bit that I end up disagreeing with, even more starkly now that you've isolated the idea so clearly. My model of doing good in the world is "spray good in all directions, or in whichever directions you find yourself connected with". So even if malaria nets end up saving more QALYs/$ than vaccinations, or work with the homeless, or endowments to public libraries and the arts, I still prefer a world where people give generously to any and all of these according to their feelings and connections.

Human flourishing is complicated, and I actually enjoy the fact that there are no simplistic or totalizing shortcuts to promoting it.

Expand full comment

I read FdB's post and thought, yeah, that seems right. And then I read your post and thought, yeah, that seems right. I also think you spend too much time worrying about what other people think. Screw them; you do you, and pay as little attention as possible to the outside critics.

On a different note: I'm not an EA. I don't make enough money to give 10% away (semi-retired, working 25-30 hrs/week). But your example has made me think more about volunteering ~10% of my time locally. So thanks for that.

Expand full comment

>Freddie has a piece complaining that woke SJWs get angry when people call them “woke” or “SJW”. He titles it Please Just F@#king Tell Me What Term I Am Allowed To Use For The Sweeping Social And Political Changes You Demand. His complaint, which I think is valid, is that if a group is obviously a cohesive unit that shares basic assumptions and pushes a unified program, people will want to talk about them. If you refuse to name yourself or admit you form a natural category, it’s annoying, and you lose the right to complain when other people nonconsensually name you just so they can talk about you at all.

Welcome to the inverse of the Euphemism Treadmill.

'Social Justice' and 'Woke' *were* both terms originally coined by the lefties in the movement, until the right took them over and strawmanned them to death and turned them into insults and slurs.

There's no new word that the left could suggest for themselves that the right won't apply the same treatment to.

So we're stuck in the situation where all popular labels are avoided, and any that emerge are quickly appropriated, corrupted, and rejected.

It's not fun for us, either.

Expand full comment

It's not fun for you to be part of a group so reviled, any name it chooses for itself immediately becomes an insult and a slur?

Expand full comment

"Liberal" might seem anodyne, unless you remember the way Rush Limbaugh said it.

Expand full comment

“If they’re shooting at you, you know you’re doing something right”

Expand full comment

"If everyone hates you, you know you're right"

Expand full comment

If you think it's everyone, you need to get out of your filter bubble.

'The enemy is everywhere and nowhere, is universally hated and winning the propaganda war, is weak but strong, is on the brink of defeat yet an existential threat.'

This is aesthetics, not an argument.

Expand full comment

A long time ago, before EA, I came across either the Cochrane study, or somebody commenting on it, while searching out the most effective charity for saving lives. (This could have been as early as 2004; my memory isn't clear.) The reason I was searching out the most effective charity for saving lives is that I was involved in an argument (somebody argued I should support some policy or other, or maybe that I should donate to some charity or other, and I found it absurdly expensive relative to its benefits, and, long story short, the argument demanded I provide a more cost-effective way to save lives). And when I examined the data, holy shit. I had an ace argument against so many ineffectual policies and charities.

"If you really wanted to save lives, you would be donating to provide malaria nets in afflicted countries."

At some point later I came across GiveWell, which was a nice all-in-one resource for these arguments.

(There's a nonzero if small chance, given where I was arguing in this timeframe, that I may have played some part in inspiring the existence of EA; I was -really- fond of this argument. Out of a sense of curiosity I spent some time digging through old Overcoming Bias and Less Wrong archives to see if I could find a smoking gun, and failed, but wow, I've been arguing with some of you for a long time in different forums using different pseudonyms. Hi, Nancy! Also, it's kind of surreal how many relative nobodies from fifteen-twenty years ago I had dumb internet arguments with are either internet famous or famous-famous today.)

I'm not an EA, to be clear, because I'm not a utilitarian. Used to be something like utilitarian, then a virtue ethicist, now I'm something I sometimes call a relative moralist. Think of "It's a Wonderful Life" as a moral framework, kind of. A big part of it is that I think we, as humans, need a moral framework that tells us whether or not we are "good". Broadly, the basic idea is that morality is relative to a relevant average.

If you are in a society where people walk by the drowning child, and you walk by the drowning child, you're not good, or bad. You're average. Substitute the average member of your society in for you, and nothing changes. If you throw out a flotation device but don't make too much more effort, you're good. If you wade out into the water and ruin your suit and save the child, you're something like heroic.

If the average person would throw out a flotation device, and you walk by - well, the average member of your society would do better. You're bad/evil. If you throw out a flotation device, you're not good or evil. And if you ruin your suit to save the child, you're good.

It's kind of an unholy union of virtue ethics and utilitarianism, where moral value is subjective and hard to mathematically evaluate, but relatively easy for us as humans to evaluate. Good people, and good acts, make the world a better place than the status quo; bad people, and bad acts, make the world a worse place than the status quo. It has a place for heroism and villainy, and doesn't subject people, outside of weird edge cases, to too much moral luck (insofar as it has moral luck, it's "people who naturally want to do good/bad acts seem to get an advantage", which seems kind of okay, and "people born into societies where the average person will eventually set themselves on fire to protest to save the rainforests would seem to get screwed over", which seems alien enough that I wouldn't expect the moral framework to continue to operate anyways - it's intended for humans, not bizarro-humans).

Because I think what most people really want from their ethical system is a reasonably clear answer on how to be a good person. Utilitarianism almost gets there, except you never "win" - as long as you could improve utility somehow, there's another step you need to take, and if you don't take it, you're in some sense guilty of not taking it. I don't think it actually does a good job, as a moral framework, of separating out the basic human concepts of "evil", "bad", "neutral", "good", and "heroic". Make number go up, never stop making number go up. And I've met some people for whom this works, who don't seem to have meaningful internal moral categories; the idea of creating an entity that will make everybody in the world slightly happier, but at the cost of torturing one particular person quite terribly, is purely a mathematical question to them, and they'll do the math and continue on with their lives. The question of whether it is good or evil to do so is basically beside the point. They'll make their choice in the trolley problem and move on.

The average person, however, doesn't actually operate like that, and trolley problems will haunt them for their entire lives. They don't need to know "utility went up" - they need to know whether or not they are a good person.

EA provides some kind of answer - here, tithe 10% and you're a good person. And so far so good. But then the question becomes "What am I tithing 10% towards?" And if the answer looks like utility, rather than goodness, then you stop answering the question of whether or not they are a good person.

Insofar as EA focuses on AI alignment, it may or may not be maximizing utility. What it isn't doing is answering the far-more-important-to-people question of "Am I a good person?" And insofar as EA associates with supervillains, this can seriously outweigh any good EA does, when people ask themselves whether or not they are a good person.

Maybe you don't lose the core people you really care about, if you lose all the people who are concerned with the question of whether or not they are a good person, instead of whether or not they are maximizing utility. But you do lose something important there: An opportunity to make the world a better place. Imagine if half the people currently donating 10% of their income stopped.

I don't make the argument above anymore, for a variety of reasons. But a lot of it comes down to "I don't think that argument actually cleaves reality at its joints for most people."

Expand full comment

This is a really bad job of steelmanning.

The line between "agrees with the philosophy" and "actively participates in the movement", which you use to distinguish EA from commonly held beliefs, is not particular to EA, and the reason that he calls it a "shell game" is EXACTLY that EA postures as "it's a movement" "no, it's a set of principles" as convenient ...

"Tell me what to call my movement" FdB does - UTILITARIANISM ! It's RIGHT THERE!

Maybe rename to "Utilitarians in action" or "Nerds relentlessly quantifying morality"

Expand full comment

Not all EAs are even consequentialist though, let alone utilitarian. In a 2019 survey, 70% of EAs self-identified as utilitarian, but in my recent conversations the number seems a lot lower. Well, partially because most EAs I know don't identify as moral realists. EA really is a toolkit you can plug your values into. Turns out a lot of people have values that they feel are very much helped by consequentialist style thinking, even if they aren't consequentialists.

Expand full comment

> If I’m a YIMBY [sic] despite my policy preferences and because I’m considered outside of the YIMBY kaffeeklatsch, that means that it isn’t about policy and is about being a cool shitposter.

>I agree with Freddie: it’s better to define coalitions by what people believe than by social group.

Alternate framing: Let's say that Freddie spends 360 days a year trashing the political movements whose politicians are most likely to actually pass YIMBY policies, and spends 5 days a year saying that he sure wishes the NIMBY politicians he actually helps to elect had more YIMBY policies.

In that case, does it make any sense to call him a YIMBY when his overall impact on the world is electing more NIMBY politicians?

These so-called 'social groups' are *voting blocs*. That makes them *very relevant* to the question of which policies get implemented.

If your whole deal is opposing the election of YIMBY politicians - even if you are opposing them on the basis of other issues and have no problem with their YIMBY policies - then yeah, people get to question your commitment to that movement.

Call it a revealed preference.

Expand full comment

I’m not an effective altruist, but if I wanted to briefly summarize EA to a random stranger I’d say something like: it’s a movement that helps donors systematically find effective charities and commit to generating more resources for them in an efficient manner.

The squishiest part of my description probably lies in what counts as “effective”. But while some people might not think EA staple charities like those that address AI risk or animal welfare or malaria (or pick one) are “effective” charities, I have a hard time seeing how they’re ineffective enough to warrant such numerous and energetic attempts to refute the EA movement compared to other charitable movements.

Can someone who objects to EA and believes the movement is a net negative please summarize why the world would be better off without EA? Like in just a few words that an average person like me might understand?

Thanks for your help.

Expand full comment

The other two points missed here are:

1. Not everyone is a consequentialist

2. Not everyone is altruistic

I think either one would be necessary for agreeing with EA as a philosophy.

Example of #1: I know people who donate to religious groups; the majority of them are not doing it because their soteriology makes them think that converting more people will maximize utility. Telling them that mosquito nets will save more lives is not going to move them.

Example of #2: I know people who see donating to charity as a purely transactional act. They do it solely because it makes them look or feel good. Again the mosquito net example will only move them inasmuch as people have made it attractive.

Expand full comment

To me, responding to the scandals confronting the EA movement and the corresponding critiques by pointing to the scrupulosity of EA adherents would be like the Catholic Church responding to the sexual abuse crisis by pointing to Mother Teresa.

These scandals are a test. How do you respond to the test? By covering your ears and trying to point to all the good that you do? Or by honestly taking a hard look at yourself, and considering whether these problems are revealing a real vulnerability or blind spot.

My perception is that EA's fatal flaw is hubris and arrogance that leads it to believe that it will not be subject to the same problems that have infected every other organization with a noble mission as it scales up. This response confirms it for me.

Expand full comment

Similarly, most people who criticize the Catholic Church for, say, focusing more on sexual morality than serving the poor probably would not come out of an audit of their own efforts for the poor looking significantly better than the Church.

But so what? When did that become the standard?

--

EA is a Big Boy now. Time to start acting like it.

Expand full comment

The scandals are one man's crypto scam and a few women complaining about "creepy" men asking them out. If major EA figures had tried to cover for Bankman-Fried knowing his fraud, that would be a more substantial indictment of EA.

Expand full comment

So, have you been reading EA writings and noticing them being anything but self-flagellating or leader-flagellating about the SBF stuff? That's all I've seen, tbh. EAs took and are still taking it very, very seriously. Not hubristic or arrogant at all.

Expand full comment

What do you call this essay that says SBF doesn't really matter relative to the good done? There have been many EAs (in Scott's threads, even) that just try to distance EA from SBF and explain how he doesn't really count, how it doesn't reflect on either the movement or the ideology, etc etc.

Of course, it's big tent, a movement is full of stuff, we can observe different people and both be right from those perspectives, yada yada.

What I have found somewhat darkly humorous is the uptick on the EA forum of defending "earn to give" post-SBF, when the movement (at least 80K Hours) had backed away from that years before. I sort of get it, but it comes across as obligate contrarianism more than anything else to defend the factor that ties EA to SBF through MacAskill.

Expand full comment

Hm, yeah, maybe this is a more Bay Area conclusion. I mean, I also think it is not bad enough to outweigh the good done. Isn't that just what Scott means? Saying he says "it doesn't really matter" seems to be using loaded language that I don't remember him writing in the piece. I do think that EAs should think carefully about culture and how EA culture might have caused it, and yes, a little self-flagellation and some dark nights of the soul in the process. Thing is, it's been a year and basically everyone in EA has done that by now. It was taken very seriously. Many reflections on the forum, Twitter, org meetings, leadership meetings, in EA conferences (literal talks and rooms set up for reflection on this), and dominating private conversations between EA groups of friends for months. Some people decided EA was partly at fault and are still feeling shame and trying to make it up and improve the culture (most EAs at least have an eye toward improving the culture these days), while some people decided that, no, it really wasn't EA's fault and the best thing to do is just keep trucking. I can't fault people for deciding the latter if they truly did think about it seriously and carefully. Maybe you would agree with that in their shoes. How many years should the median EA have their tail between their legs?

As for the switch back to E2G, with SBF gone it makes sense. Firstly, 80K switched tacks because they are a philosophically longtermist org and there was more funding for longtermist problems (thanks first to another brief billionaire who disappeared in 2020 or something, just in time for SBF/Alameda to assure everyone that more money was on the way). Many EAs don't have those same priorities so they never bought the whole funding overhang story anyway. It's actually really important that EAs don't defer to big orgs like CEA and 80K as gospel, but formulate their own value propositions from scratch.

Second, I don't think you even have to do entrepreneurship- or finance-style E2G if it's icky to you, or if you don't have decently-ethical products to bring to market. I think we should all hold each other to that standard. So a more approachable route is that an engineer or doctor, any EA making say 150k+, could donate 50% of their salary and easily pay for one person's salary in global health or animal welfare, both fields where a new employee makes a low salary. And keeping up on new projects that wish to start can help you stretch this even further. For example, just 2-3 people doing this would fund LEEP's first year, where they only spent ~190K as a small nonprofit, and then they theoretically would have a track record to approach larger funders for their next year. There are probably lots of little projects like that that more single donors could fund, that larger donors wouldn't take on. And once that community funding ecosystem gets solidified, then people with these new ideas could essentially crowdfund for them. It would either lead to an info-rich retrospective and closing the project, or it could be a big win, creating a new highly effective nonprofit. I like this experimenting focus. We aren't there yet, but it's an exciting possible future that a broader focus on E2G could bring about. I think the E2G framework facilitates a more risk-taking, hits-based framework *for giving*, because more disposable income always feels less precious to give away (you don't have to be a billionaire to have this mindset), whereas the giving pledge (10%) facilitates a more predictable and safe donor schedule. I think we need both.
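
As a quick sanity check on that funding math, using the rough figures from this comment (the 150k salary, the 50% donation rate, and LEEP's ~190K first year are all approximations, not audited numbers):

```python
# Back-of-envelope check of the community-funding math above.
# All inputs are rough assumptions quoted in this comment.

salary = 150_000          # assumed EA engineer/doctor salary, $/year
donation_rate = 0.50      # assumed 50% of salary donated
per_donor = salary * donation_rate   # $75,000/year per donor

leep_year_one = 190_000   # LEEP's approximate first-year spending, per above
donors_needed = leep_year_one / per_donor

print(f"${per_donor:,.0f}/year each; ~{donors_needed:.1f} donors cover ${leep_year_one:,.0f}")
# ~2.5 donors, i.e. the "2-3 people" claimed above.
```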

Expand full comment

The whole "EA's do their research" really seems more like "I have OCD about my donation choices". All sorts of "rational" people raved about FTX. Show me an EA publicly calling out FTX beforehand and I'll show you the 1-3% of EAs who are rational researchers.

And for YIMBYs, I'm really tired of these unicorns. Unless you literally own your backyard, and are willing to have yourself, family, or friends displaced for a year or so, you're literally not "Yes In MY BACK YARD". Saying you support YIMBY policies in somebody else's backyard doesn't make you a YIMBY. It makes you an astroturfer or a hypocrite. I've never met a YIMBY who owned their back yard. I suppose they exist, and I'd bet they hover in the low percentiles also. How about a poll?

Expand full comment

> And for YIMBYs, I'm really tired of these unicorns. Unless you literally own your backyard, are willing to have yourself, family or friends displaced for a year or so, you're literally not "Yes In MY BACK YARD".

Labels are not definitions, as the saying goes. You can't reason from the *name* of something; names are frequently misleading for whatever reason.

But also this completely gets wrong what YIMBY is about? YIMBY is about letting people do what they want with their land without interference from neighbors. It's "yes in your literal backyard if you want", not "yes in your literal backyard if we say so". It's as opposed to "yes in your backyard only if your neighbors agree".

Expand full comment

>YIMBY is about letting people do what they want with their land without interference from neighbors.

This makes YIMBYs sound like anti-HOA libertarians and homesteaders which would be... deeply noncentral examples. It's not, y'know, people that want chickens in city limits; it's mostly about housing reform. Back yard in the sense of my neighborhood/city at best, not in the sense of "literally this plot of land I own, pay taxes on, and mow."

YIMBYs tend to be in support of dense housing and apartment blocks, and I would be *amazed* if more than a handful of self-identified YIMBYs actually own any of the land they want developed and an even smaller fraction of those that will be developing it themselves, rather than selling it for someone else to develop the land.

Expand full comment

Yes, all of this seems basically accurate to me. So yes, if we want to be more specific about what the things I said above actually cash out to, this is important. However, none of this alters the reasons that Josaphat's characterization was wrong!

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

> YIMBY is about letting people do what they want with their land without interference from neighbors

On the contrary, the YIMBY people agitate for removing restrictions on particular kinds of interference with neighbours: to block views of the horizon, exacerbate traffic density, pollution, and load on public services; to bring in poorer (and often less law-abiding) residents. Often enough, YIMBYs admit that they would like to eventually de-normalize low-density traditional "American dream" housing. That is, to make it even less within reach of the typical citizen than it currently is.

They also show undisguised contempt for mortgage holders' entirely reasonable interest in avoiding going underwater on their loans; for property owners' interest in preserving the market value of their properties (to fund their retirement, or to even be able to move); and for the entirely reasonable desire, shared by many people, to simply not be forced to live in high-density concrete hell, and to not have to live surrounded by a swarm of noisy, annoying, and sometimes thieving and homicidal lumpens.

Expand full comment

> On the contrary, the YIMBY people agitate for removing restrictions on particular kinds of interference with neighbours:

How is that "on the contrary"? Yes, they want you to be able to do what you want with your land, without people on surrounding land who are indirectly affected being able to block it. What's the disagreement here? Like, you may not *like* YIMBY, but I don't see that we're disagreeing about what it *is*, and nothing you're saying implies that Josaphat's characterization is *correct*.

Like, you may say, aha, but they *do* want things to go up in "your backyard" outside your control, and yeah arguably that's true in the usual sense of that phrase, but then you go back and read Josaphat's comment and they are *specifically restricting* the sense of that phrase so that it's not true anymore!

Expand full comment

NIMBYs say they want low-density, but what they really want is to live in a low-density area adjacent to a high-density city. In other words, they want to benefit from the very thing they bitch and whine about. You really hate people and love nature so much? Move to a rural area.

"That is, to make it even less within reach of the typical citizen than it currently is."

It's not within reach of the typical citizen because the price of housing is too high, and you say you want the price to be high because of "property values." You can't say you want housing prices high and then whine about the result.

"to not have to live surrounded by a swarm of noisy, annoying, and sometimes thieving and homicidal lumpens."

Your issue is with anti-discrimination law, not with YIMBYism.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

> ...hate people and love nature so much? Move to a rural area

This makes approximately the same amount of sense as e.g. "love sausage? but don't want to live next door to a hog farm? hypocrite!" -- AFAIK no one objects to the mere existence of the city. But not everyone wants to live inside it. Just as it is entirely possible to like warm weather and beach without necessarily wanting to move to the tropics.

> you want the price to be high

Generally, property owners want the price to remain stable (in particular, because the things that crash the price objectively cut into quality of life) -- rather than "to the moon" (esp. in non-Californian jurisdictions, where regular property tax adjustment is a thing.)

> Your issue is with anti-discrimination law

Does a hypothetical "no lumpens" world necessarily require discrimination laws? (Why not consistent enforcement of existing/uncontroversial laws?)

Expand full comment

YIMBY as seen in practice is more about trying to destroy individual property owners' rights so that density can be increased to reduce the insane rental prices in cities, as well as to fulfill an ideal of dense, walkable, car-free cities. The term has shifted from what you mean, from "we are ok with development for economic gain".

Expand full comment

But their means is *increasing* individual property owners' rights over their land, not decreasing them. You could say it's destroying individual property owners' rights over their *neighbors'* land, but that's explicitly outside the scope of what Josaphat is saying.

Expand full comment

It's only increasing the rights of landlords who want to increase density to pack more people in. It's not done to increase rights overall; it's done to get a result.

It's not like the same people would stay YIMBY if, in response, someone constructed a racetrack next to the dense housing they want. It's more about results than rights, I think.

Expand full comment

Possibly so, but that's a different claim than you originally made or that Josaphat is making.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I guess. I think I was trying to emphasize that its focus is not on increasing rights, and that is the difference. It's just the means to attain a goal they want to impose.

Expand full comment

Perhaps related question: Do you consider Bjorn Lomborg (https://en.m.wikipedia.org/wiki/Bj%C3%B8rn_Lomborg) an Effective Altruist and/or is he in any way formally associated with your movement?

Expand full comment

When Scott was much younger (on LiveJournal, I seem to recall) he once wrote that he was glad to hear Lomborg had changed his stance on climate and took it more seriously (even though Lomborg had just adjusted - but that is laudable, sure). I do see BL as much better at EA than most EAs - but the movement is NOT the ideology, indeed. Hey, the movement of Rationality prefers to ignore even the author of "Rationality: What It Is, Why It Seems Scarce, Why It Matters" - cuz' Steven Pinker is not into AI-doom, I guess. I love them all. I really do.

Expand full comment

Feels like a lot of these debates come down to talking past one another due to a lack of agreed-upon, objective (as best as possible) principles by which to "evaluate" a social movement (EA, progressives, SJWs, Rationalists, etc.).

Would be very curious to read an attempt at an elucidation of those principles that reasonable people could agree to in advance of such an argument (not naive enough to think that this would actually stop the arguments, but would be interesting).

Things that might arise: the movement's best or worst actions, ideas, and adherents; the gap between professed beliefs and actions (at both organizational and individual levels); naming conventions (including Capitalised Organizations and the lower-case generic case of belief); etc.

Anyone know of such an attempt that has been written already?

Expand full comment

I disagree with EA because consequentialism is wrong. 9/11 was worse than 3,000 people dying of natural causes, so it makes no sense to talk about how EA did the equivalent of preventing 9/11. The damage 9/11 did was mostly not from people dying. You can't quantify morality; it's about whether something makes you feel bad or not.

I think it's also wrong to suggest that consequentialist EA thinking is unpopular. It was popular enough that we had COVID restrictions that prioritized saving people over maintaining freedom and our cultural identity. I'm worried that EA will just empower this kind of homogenized, rote morality even more.

Also it's pointless to worry about AGI, since even if someone comes up with a way to "align" it, others will develop un-aligned AI anyway. It's not a technology that can be kept under control, like nuclear weapons.

Expand full comment

Were COVID lockdowns really an example of consequentialism? It might've been different in different places, but I recall it being mostly opponents of lockdowns who were talking about weighing the risks of COVID against harms lockdowns were doing to the economy and to people's mental health, while those arguing for the heaviest restrictions were more likely to argue from the deontological premise that nothing could justify gambling with the lives of the elderly.

Expand full comment

>I think this is the role of the wider community - as a sort of Alcoholics Anonymous, giving people a structure that makes doing the right thing easier than not doing it. Lots of alcoholics want to quit in principle, but only some join AA.

This is a good analogy. Some people use 12-step groups to help them recover from substance abuse. Others recover on their own. And those who recover on their own are gonna be extremely annoyed if you sound as if you don't know they exist. It is possible to do the thing without the group. And similarly, you don't need an all-encompassing social structure to donate to charity.

Expand full comment

"And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc."

Most of the people who ended slavery were racists & white supremacists, e.g., Abraham Lincoln. They just did not believe that white supremacy justified enslaving black people.

The people who ended legal segregation believed that individuals should be treated equally by the law. They did not believe that all racial disparities were the result of white racism.

The anti-racism of Kendi, DiAngelo, etc. should not be credited with ending slavery and legal segregation.

Expand full comment
founding

"The best books, he perceived, are those that tell you what you know already." -Orwell, 1984

Expand full comment

I don't think you should feel bad about "free riding" off GiveWell. That's why they exist! Maybe if you don't trust them or their methodology or something, I don't know, but it probably isn't even a great use of your time to do the same work they're doing unless you have reason to believe you have a comparative advantage for that work that they don't (which in your case I could actually believe, but in my case I certainly don't).

Expand full comment

I think DeBoer gets it wrong when he implies there's general agreement with EA because lots of people view themselves as "shining a light on problems that are neglected". "Problems that are neglected" is often meant in the sense of things being overlooked. A lot of what EA focuses on, on the other hand, is problems that are relatively obvious but nonetheless underfunded.

The ways that EA frames charitable giving also seem far from universally accepted. The way philanthropy is framed often puts "donating to GiveDirectly", "donating to Make-a-Wish", and "donating to Harvard University" in essentially the same category. EA disagrees. I think many people's concept of the effectiveness of nonprofits still focuses predominantly on internal organizational measures of efficiency, like administrative overhead. EA would say those matter only indirectly and may be misleading. EA's approach to encouraging people to donate also differs from another approach that I think is still popular, encouraging people to find a destination for their charitable giving that they particularly care about, that fits well with their affiliations and interests. EA instead focuses on arguments from marginal utility, that someone can do more good more easily than they might have expected.

Expand full comment

I feel like it's pretty easy to agree with the effective altruism point of view but think that their calculations are wrong and that donating to e.g. AI risk is totally ineffective. The criticism of the EA movement is then just that their heart is in the right place but they're bad at estimating the future impact of their actions, and if they wanted to do real EA they should all be trying to industrialise India or something.

Expand full comment

I don't think these arguments address the criticism that DeBoer is making. (It's funny, I never heard of DeBoer, but this is the second time in two days I've seen him mentioned in blogs I like. So, since I'm not too familiar with him let me admit I may be superimposing somewhat my own views.)

The criticism is of EA is that it presents itself as an ethical system in itself rather than merely a methodology.

If EA were defined as Scott defines it in this post, I don't think it would be controversial. Other people might think that the EA-associated people have some weird priorities, but there is no fundamental problem with people spending their charitable money on things that other people think are silly.

However, when I see people discuss EA, they rarely mean it as a form of deciding how to effectively be altruistic. Rather, it is an ethical system formalizing a fundamentalist utilitarianism. This is the reason it ends up with kind of weird, but basically harmless, priorities (long-termism, animal stuff, AI). However, as DeBoer points out in the article, utilitarianism has a lot of known philosophical problems and contradictions. It is often a practical ethical system, but it fails to capture much of what most people inherently think of as moral.

This creates a danger that a conventionally amoral ethical system could supplant other more traditional ethical systems, which would have significant bad consequences for society. Writing this, it occurs to me that EA people should be more concerned about unintended side effects of too much EA!

Expand full comment

Nit: "Everyone says they want to be a good person and donate to charity and do the right thing."

Personally, I'm a counterexample. I neither have these as goals, nor say that I have these as goals.

Expand full comment

You are not alone.

EA loses me with the "A".

Expand full comment

Many Thanks!

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

I find myself questioning the wisdom of engaging so thoroughly with Freddie's posts about EA. I didn't even read the Shell Game post because by now I expect FdB's EA criticisms are not actually helpful or insightful, just combative and clickable.

https://freddiedeboer.substack.com/behavior-is-the-product-of-incentives now returns Page Not Found, but that was the post where he bravely described the perverse incentives that he and others write within. He said he gets paid a lot more to write brief, conflict-oriented diatribes than to write lengthy, researched analyses such as his post about the Nation of Islam. I took that post seriously, and decided that if I was too busy to read his books and effort posts, then I probably shouldn't read him at all.

So, real question: Aren't we getting clickbaited? Do y'all endorse reading FdB's Shell Game post? Isn't it less like his books and more like a tweetstorm, and shouldn't we treat it accordingly?

Maybe the mistake is all mine--maybe if I chose not to read FdB's clickbait, then I shouldn't have read Scott's reply to it. Well, too late. Scott, could you say something about how you think about the Shell Game post (and I guess your own reply as well) in light of https://www.astralcodexten.com/p/its-bad-on-purpose-to-make-you-click ?

I admit that I did enjoy reading Scott's reply, especially section 4. But I wonder if I should treat it as a guilty pleasure.

Expand full comment

I am confused. 1. When Scott's father went to Haiti to provide free medical aid - was that EA?

2. All here ridicule art-museum donations. But the richest family in my town donated big time to our art-museum. And I am glad we have it. Esp. Rodin's thinker. If 20 billion people live for 2 billion generations, but "no art": we might just as well all end our shallow existence NOW. And art is not always free and should not always be for sale - and not always end up in the mansions of the rich and locked up by art-investors. You make it available to the public: "Phew, donated to an art-museum! Silly!" (I hate seeing tax-dollars spent on most art.)

3. I will sound brutalski now, forgive me if you can: SSC/ACX is aware of the average IQ in parts of Africa heavily affected by curable/preventable diseases. I assume the smarter part is aware of cheap interventions - impregnated bed nets, deworming, condoms ... - and can afford it (smarter earn more). Thus the donated help gets to the less-smart population of countries with an average IQ under 80. Since when is EA dysgenic? (SE-Asia had it worse with Malaria; seems they were able to do sth. about it.)

"It is relatively easy for me to ignore the plight of the poor, as long as I assume they're dumb (and I start with the assumption that most people, foreign or otherwise, are). But smart people are for me what Americans are for those America-first people; I can't bring myself to not care about them. They matter to me, in the way that everyone should matter to me but can't because I don't have enough emotional resources. In fact, all day I found myself worrying about that bright poor kid hoping she gets to a decent school at some point, even though I haven't spared a thought all day for any of the multiple-amputees I've seen wandering around. Not sure how I feel about that." I feel that is: good. And altruism that considers each life of same value (no human does in real life), seems neither efficient nor very "altrui"stic - if one does not really care about the "other". Sad about Kissinger and Shane Macgowan. Bin Laden is dead? GREAT!

Expand full comment

How much of that low average IQ in disease-ridden areas is actually genetic in origin, though? Childhood malnutrition and chronic stress don't tend to bring out the best in people. If your threshold of "worth saving" is "as smart and hardworking as Southeast Asians," most of the world doesn't make the cut. If the priority is genetic potential, it can't all just be hill-climbing to maximize known phenotypes; that's how you get dysfunctional purebred dogs and hemophiliac aristos. Gotta take some chances on new alleles, hand out fair opportunities to thrive, and see who rises.

Expand full comment

How much? Maybe just one SD (as in the US), not the two shown in the data. MY personal threshold till I am "triggered" to actually WANT to donate my very limited resources to people I have never met is well above 100. (That one girl in Cambodia: sure.) - Other people might "save" who they want, ofc. At a TFR well above 4 in those regions, I fail to see much need to extra-boost those alleles optimising for less energy consumption above the neck. Let's see who might rise and focus on them.

Expand full comment
Nov 30, 2023·edited Dec 1, 2023

I think a better critique of EA is that they fail to demonstrate that charity makes the world better. I think there's a very strong argument that the best way to improve the world is to maximize economic growth, and all charity (at least all institutional charity) represents a misallocation of resources away from the economic engine. There are many more future humans than current humans, so they must be given greater weight in Utilitarian calculations. If redistribution harms economic growth (an empirical question!) then charity offers a fixed one-time benefit at the expense of an exponentially greater future benefit.

This problem is exacerbated by the Peter Singer-inspired notion that all lives are equally valuable. I think that's obviously false. The drowning child in front of me in the US is much more valuable than the drowning child in the Democratic Republic of Malariastan. The US child has a much higher chance of growing up to be a scientist or engineer who could discover something to benefit humanity. But failing that, it has a rational expectation of one day contributing $70k/year to the world economy. The expected contribution of the Malariastan child is essentially zero if you consider cost of living - people in places like that are living at the subsistence level. IMO that difference really really matters. Having a philosophy that systematically reallocates resources from a functional post-industrial society to a dysfunctional malthusian society seems pretty clearly suboptimal from a "what is objectively best for the world" perspective.

Whether or not this line of reasoning is actually correct isn't totally clear to me, but I think it's something that at least deserves serious consideration. I've tried to get EAs to respond to this argument on several subreddits, but I've yet to encounter a robust response. I would like to challenge someone to seriously engage with me on this.

Expand full comment
Nov 30, 2023·edited Nov 30, 2023

1. Saying "economic growth is the biggest empowerment to mankind" is a slogan and not a plan. A plan is "if you donate 5000 dollars to malaria nets, you get a statistical equivalent a life in terms of QALYs. A plan is not

invest in no specified thing

?????

Profit.

2. I don't think we live in a world where "just" donating money to investors results in above average market returns. If what you are saying is true, it seems like:

a. Venture capitalists would be mostly funding constrained, and also since it's so easy to invest in growth, they would get above average market returns consistently, since they have access to a privileged market.

b. Grants for research should be relatively efficient, or research within companies should be getting increasing returns, at a generic population level.

I think we don't live in that world since VCs actually get below average returns, most grants don't discover new things and most industries do not derive increasing returns from R&D. This is not to say that it'd be *impossible* to do better, but at this point you're asking a question along the lines of "why can't EA, by fiat, become the most successful investing fund ever".

3. If we're talking about fields outside investing, then as far as the most effective interventions to grow the economy go, I believe healthcare, education, housing, and immigration constitute some of the biggest blockers to productivity. As far as I know there's no "shovel ready" project we can just throw money into and predictably increase GDP. Or hell, if you really believed that any old child can be super productive, you'd be throwing money in some newlywed couple's face and screaming "please make a baby already, I'm altruistically begging you for non-perverse reasons".

4. Catch-up growth is just going to be faster, on several direct empirical data points (China, Singapore, South Korea) and on reasoning grounds: it's a lot easier to build skyscrapers in a country without first-world norms of safety, and a lot easier to copy the infrastructure projects of already-developed nations than to invent new technologies. I haven't precisely checked, but the comparison isn't between one American child and one African child; it's more like 1 American child to 600 African children (if we take the rough government estimate of 3 million dollars spent to save a marginal US life from traffic accidents, and 5000 dollars for the GiveWell estimate per marginal life from bed nets). I think you can argue that the American life cost is inflated, but you have to get lower than 300k per child life saved to even come close to a net GDP gain, assuming a GDP-per-capita ratio of 60 Kenyans generating 1k/year per 1 American generating 60k/year (and I'm pretty sure that if you believed that, you'd also believe that all American child mortality can be solved by adding 2.1 million dollars to the problem). So no, not even doing your original naive analysis fits. (A rough sketch of this arithmetic is at the end of this comment.)

5. A reasonable model of money-to-wellbeing is that it scales with the log of money, so any amount of money that goes to people 100 times poorer, poor to the point where they're going to die without it, is going to do a lot more good. You need a pretty low discount rate, or an actual lower valuation of their lives independent of economic effects, to conclude otherwise.

These are just arguments I've thought of since your post here.
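
For what it's worth, here's a rough sketch of the point-4 arithmetic (every input is a contestable round number quoted in this thread, not a measured figure):

```python
# Back-of-envelope for point 4, using the thread's own rough figures.
# All inputs are contestable assumptions, not authoritative data.

cost_per_us_life = 3_000_000   # rough US gov't estimate, marginal traffic-safety life
cost_per_net_life = 5_000      # rough GiveWell estimate, marginal bed-net life
us_gdp_per_capita = 60_000     # $/year, assumed
kenya_gdp_per_capita = 1_000   # $/year, assumed

# Lives saved per dollar favor nets by this factor:
lives_ratio = cost_per_us_life / cost_per_net_life       # ~600x

# Even weighting lives purely by economic output (60:1), nets still win
# unless a US life can be saved for less than this break-even cost:
output_ratio = us_gdp_per_capita / kenya_gdp_per_capita  # 60
break_even_us_cost = cost_per_net_life * output_ratio    # $300,000

print(f"Nets save ~{lives_ratio:.0f}x more lives per dollar")
print(f"Break-even US cost per life: ${break_even_us_cost:,.0f}")
```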

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

>Saying "economic growth is the biggest empowerment to mankind" is a slogan and not a plan.

The plan is: maximize your wealth by developing a skill and participating in a first-world economy. Invest your profits in a way that maximizes your ROI.

>I don't think we live in a world where "just" donating money to investors results in above average market returns

When did I say anything about beating the market? Just invest IN the market. The positive externalities of generic market-driven economic growth outstrip the benefits of charitable transfers, unless those charitable transfers result in market-beating returns. If you can demonstrate that then I'm all for charity.

>but more like 1 American child to 600 African children

Ok, I don't measure value by number of lives. I measure value by economic output. I explicitly reject the Utilitarian notion of intrinsic value. All moral value is instrumental, and that is best (or at least most easily) captured by economic output. I think I could more-or-less prove that intrinsic and instrumental value have to converge over time, since intrinsic value 500 years from now depends heavily on instrumental value today. If humanity had experienced 1% less annual economic growth over the previous millennium then there would be far less than 8 billion people today.

In any case, having 1 more marginal American who could one day invent the next internet is much better than having 600 more marginal Africans who are living a subsistence existence. The marginal American is positive for the world. The marginal African is an additional charity case that the world has to worry about supporting.

>A reasonable model of money to wellbeing is that it scales with the log of money,

I'm skeptical of all happiness/wellbeing research. People's revealed preferences don't seem to indicate that there's a logarithmic tradeoff between work and leisure. Plenty of people work 2 jobs for twice the money or work much harder (like being a lawyer at a big firm) for much more money. But fine, let's assume happiness ~ log(money). Maximizing current happiness via redistribution is a one-time boost that is outweighed by the exponential future cost of any lost economic growth. Would you push a button to switch our world to one in which all wealth in medieval Europe had been evenly distributed at the cost of 1000 years of slower economic development (thereby consigning everyone today to extreme poverty)? If not, then why do you want to make a similar tradeoff today? Consider that there are more future people than current people. Don't they have a greater weight in Utilitarian happiness calculations? Marginally larger economic growth today means much more total happiness when integrated over future generations.
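
To put a number on that compounding claim, here's a toy calculation of the "1% less annual growth over the previous millennium" scenario from my earlier comment (pure arithmetic on an assumed scenario, not a model of actual history):

```python
# Toy illustration: how much does a 1-percentage-point difference in annual
# growth compound to over a millennium? Arithmetic on an assumed scenario.

years = 1_000
growth_gap = 0.01  # 1 percentage point per year, assumed

factor = (1 + growth_gap) ** years
print(f"After {years} years the higher-growth path is ~{factor:,.0f}x larger")
# ~21,000x -- which is why, IF a transfer really does reduce the growth rate,
# even a tiny persistent reduction eventually dwarfs any one-time benefit.
```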

Expand full comment

> Ok, I don't measure value by number of lives. I measure value by economic output.

Yes, and the sentence after that says that until you get below 60 Kenyan kids saved per American kid saved, the 60 Kenyans will in fact have more economic output than the one American.

> Having 1 more marginal American who could one day invent the next internet is much better than having 600 more marginal Africans who are living a subsistence existence.

Like, it's telling that your example is "invent the next internet", when the World Wide Web was invented at CERN, a European particle-physics lab, by a British man, and the equivalent American version, Project Xanadu, is a footnote.

The above is an unfair gotcha, but still, many inventions are in fact by immigrants (see: the Covid vaccine, Elon Musk, 5G).

Anyway, your response is scope insensitive in three ways:

1. Total GDP in fact includes Africa; you cannot claim that growth matters for America but not for developing countries unless you have some scale by which to compare them.

2. Inventions often aren't "before their time", so if the timing of an invention gets missed it's usually not "no invention" but "slow invention" or "worse invention". Without a model of this that comes out decisively in favor of innovation, this is a values difference and not EA not listening to you on their terms.

3. Problems in the other parts of the world provide both reasons to innovate and sources of innovation. See: Norman Borlaug and the green revolution, the immigration point above plus children of immigrants like Steve Jobs, and the fact that the Covid vaccine only got developed when its inventor got hired at a non-American institution.

> Maximizing current happiness via redistribution is a one-time boost that is outweighed by the exponential future cost of any lost economic growth.

I'm pretty annoyed that I've been repeatedly talking about saving a life, and how the African child goes on to at least be an averagely productive member of society, and you're still modeling it as a one-time distribution. I could point to all of the failed startups, frat boys being stupid, or trust-fund kids existing, cases in which first-world funds end up doing nothing because of semi-uncommon circumstances, or circumstances we agree are within the normal variation of doing something ultimately net good; but if you think those would be unfair and invalid points to counter your claims about growth, reconsider whether saving lives has some type of cumulative benefit.

> Marginally larger economic growth today means much more total happiness when integrated over future generations.

Funnily enough, I'm a doomer, so I suppose I would take that trade. For what it's worth, I'd take the growth path if I assume we live.

Longtermism is already a fringe view in EA; I'm not a longtermist, and as far as I know no one on ACX is a longtermist either. So if you want to know an answer to "why isn't EA responding", it's because they don't share your views on the long term: they think that catch-up growth, and the sheer arbitrage of an African child to an American child even in terms of economic output, makes sense, and they don't put literally infinite value on inventions.

Expand full comment

>the 60 Kenyans will in fact have more economic output than one American.

I don't think that's right, because what I'm actually interested in is net value, so you have to subtract things like living expenses. If 100% of a person's output goes to just feeding and housing that person, then that person isn't really contributing to the world. That's why subsistence cultures never produce scientific research. Once you make that adjustment, I'm certain that the American is much more than 60 times as valuable as a Kenyan.

>The above is an unfair gotcha, but still, many inventions are in fact by immigrants

Yes, don't take my use of 'American' as specific. I'm using the US as a shorthand for all first-world countries.

>I've been repeatedly talking about saving a life, and how the African child goes on to at least be an averagely productive member of society, and you're still modeling it as a one time distribution

It IS a one-time distribution. In talking about ongoing effects, you're drawing a false equivalence between the productivity of the saved African and the productivity of investing the difference-making charitable transfer in the stock market instead. My argument is that the productivity of the African is lower than that of the investment, because people in the third world are less productive than people in the first world. Hence the charitable transfer results in a net lowering of global economic growth.

>reconsider that saving lives has some type of cumulative benefit.

It has a benefit only if those lives are productive relative to the opportunity cost of not saving them.

>Funnily enough, I'm a doomer, so I suppose I would take that trade. For what it's worth, I'd take the growth path if I assume we live.

Wait, I'm confused. Absent near-term extinction you agree with me? Then what are we arguing about?

Expand full comment

> I don't think that's right, because what I'm actually interested in is net value so you have to subtract things like living expenses.

Why not do a back-of-the-envelope calculation, then? Including the costs to raise a first-world child.

> If 100% of a person's output goes to just feeding and housing that person then that person isn't really contributing to the world.

This seems pretty disingenuous if you don't also include the average costs of raising an American child.

> My argument is that the productivity of the African is lower than that of the investment. That's because people in the third world are less productive than people in the first world. Hence the charitable transfer results in a net lowering of global economic growth.

You haven't responded to any of my points regarding catch-up growth, except by essentially declaring by fiat that all African countries are subsistence-based and will remain subsistence-based.

> Wait, I'm confused. Absent near-term extinction you agree with me? Then what are we arguing about?

Because I think you're wrong on the merits, you've provided no convincing evidence, and we do not share values on the relative benefits of innovation vs. catch-up growth.

I think I'm going to disengage until someone does the numbers, even in a very hand-wavy way.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

>Why not do a back of the envelope calculation then? Including costs to raise a first world child.

That's not relevant to the question, since the charitable donation isn't going to raise an American child. The opportunity cost of any choice is the best alternative to that choice, and the best alternative to a charitable donation in Africa isn't "raising an American child". I can put my money in the stock market and get 7% or whatever. My contention is that you cannot get 7% out of the rescued African. (A rough version of the numbers is sketched at the end of this comment.)

>You haven't responded to any of my points regarding catchup growth

That's because I didn't detect a coherent point to respond to. Possibly the problem is my understanding, but I don't see what relevance catchup growth has here. If you're arguing that all potential third world lives should be modeled as eventually reaching first-world levels of productivity then I think I disagree. If you'd like to argue for that claim go ahead, or if you mean something else then please explain.

Expand full comment

In the anti-EA worldview, Norman Borlaug is a monster. Enabling the global population to increase so rapidly while climate change remains uncontrolled is akin to OpenAI burning timeline while AGI alignment remains unsolved.

Expand full comment

At some point you have to choose where to start, and EAs start with the idea that all human lives are equal. You start with the idea that the worth of human lives varies based on their economic output or a reasonable expectation of their future output. How's an EA supposed to offer a robust response to such a disagreement, even assuming that you can engage with one who's unperturbed by reasoning that starts so far beyond the pale for most normal people?

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

I always took the philosophical starting point for EA to be to "make the world better through rational analysis", so I think it's reasonable to question the all-lives-are-equal assumption in light of that goal. I think that any careful examination of human flourishing will eventually arrive at the fundamental importance of economic output. To wit, I think there's a strong argument that intrinsic and instrumental value more-or-less have to converge since intrinsic value 500 years from now depends heavily on instrumental value today. If humanity had experienced 1% less annual economic growth over the previous millennium then there would be far fewer than 8 billion people today.

Whether or not this view is unfashionable isn't relevant. I think it's correct and I think any Rationalist-type movement should be compelled to examine unfashionable ideas if they're potentially correct.

But I think EAs have a case to answer even if you assume that all lives are equally valuable. Lives saved is a linear function of wealth. Wealth is an exponential function of time. Unless you impose a discount rate on the intrinsic value of life, I don't see how the utilitarian calculus doesn't compel you to maximize economic growth even at the expense of near-term charitable interventions.
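
To make the compounding claim concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers (cost per life saved, growth rate, time horizon) are invented for illustration, and it deliberately ignores exactly the things under dispute in this thread, such as discount rates and the saved people's own economic contribution:

```python
# Hypothetical numbers only: the point is the shape of the tradeoff,
# not the specific figures.
COST_PER_LIFE = 5_000   # assumed dollars to save one life today
GROWTH_RATE = 0.07      # assumed annual return on invested wealth
YEARS = 10              # assumed investment horizon

def lives_saved_now(budget: float) -> float:
    """Spend the whole budget on near-term interventions today."""
    return budget / COST_PER_LIFE

def lives_saved_later(budget: float) -> float:
    """Invest the budget first, then spend the compounded sum after YEARS."""
    return budget * (1 + GROWTH_RATE) ** YEARS / COST_PER_LIFE

budget = 1_000_000.0
print(lives_saved_now(budget))    # 200.0 lives today
print(lives_saved_later(budget))  # ~393.4 lives a decade from now
```

On these made-up inputs, waiting roughly doubles the count, which is the 2000-vs-4000 arithmetic that comes up again downthread.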

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

>"make the world better through rational analysis"

Your 'make the world better' phrasing is doing a lot of the work and I do not think it is correct. I take EAs as starting with 'make lives better'. This also explains the common objection to EA that it's a good thing to care more about people closer to you than further away from you - note I used 'further' instead of 'farther' for a reason. Of course your economic evaluation of life is incomprehensible under the EA definition; take, for example, the EAs who try to make animals' lives better.

Expand full comment

Is that really a distinction? "Make the world better" is really just shorthand for "make it better for the people in it". But regardless of how you value life, the best way to make lives better is through economic development, as my last paragraph explains. I don't see why straightforward economic reasoning should be incomprehensible to EAs. If it really is then I suggest that they should probably drop 'rationalist' from their branding.

Expand full comment

"Ackshually, a white American kid will make $70,000 per year or have a 0.001% of inventing a life saving drug so according to these calculations you should spend the same money saving him and let 2000 African babies die" can be reasonably said to be incomprehensible to someone who gives money to efficiently make lives better.

Expand full comment
Dec 2, 2023·edited Dec 2, 2023

Saving 2000 babies now comes at the cost of saving 4000 babies in 10 years. Are you applying a discount rate to the value of human life or do you simply not understand the concept of compounding growth?

Expand full comment

The ideology is not the movement but it took only a decade from EA's inception for a sociopath to hijack it for his own motives. EA has a philosophical underpinning that naturally leads to this outcome. It's anti-localism to the extreme, saying that helping your neighborhood is counterproductive when you could be giving money halfway around the world. However, it is much harder to make a difference on a global scale than a local one. That naturally leads to the belief that you need to increase your wealth and power as much as possible. The mild version is the guy who joins a hedge fund to increase his earnings and give more money to GiveWell. The more radical version is the guy who uses shady means to obtain power to make changes. I'm sure that most EA people are sincere but it's not surprising it would also attract the Sam Bankman-Frieds of the world.

Expand full comment

People didn't stop donating to anti-malaria charities because of SBF, so "hijack" seems like too strong a word.

I would say he "parasitized" on the movement for a while.

Expand full comment

The people who go to Libertarian Party meetings *are* weird aliens. It's kind of a catch-all category for "doesn't want other people to stop them from doing X", which is so wide a net to cast that you can't help but draw in the cantina people.

Expand full comment

EA claims to just be its name (effective altruism with lowercase letters) while in fact pushing a very specific moral ideology based on an extreme form of utilitarianism. In fact it is not clear to most people that they should care about others who are far away more than their immediate family. Charles Dickens satirized such characters in a way that makes it easy to understand why.

For many philosophers, utilitarianism faces difficult problems that deontology doesn’t (such as Bentham’s mugging). That’s assuming you can even accept moral realism, which is a major subject of debate. To someone who isn’t a utilitarian, EA encourages people to look at numbers, statistics, and books about faraway places while neglecting their immediate family and community. Point this out to an adherent and they will immediately protest, “That’s not true! It’s just effective altruism” with lowercase letters. However if you read the materials and observe the behaviors of actual EA adherents, the pattern of extreme utilitarianism is apparent. Utilitarianism also explains why there are broad-spectrum attacks on EA; since most people aren’t utilitarians, people from all areas of the political spectrum will have a bone to pick with EA, although they will express that concern in different ways.

Expand full comment

I think movement evangelism contributes to issues discussed in section 6. Any group that perceives itself as doing good on some dimension would naturally conclude that increasing group membership is also a way to achieve that same end.

EA, due to its rationalist approach, ends up being more explicit about this than most other groups. And so a defining feature of EA ideology becomes movement promotion, at least as perceived by outsiders.

As you say, it's probably just something you have to eat while you continue your work. But I think something like this is animating Freddie's post. You are perceived as slapping a self-promoting brand on a pre-existing impulse to do good charity.

Expand full comment

It's slightly frustrating to me, because I think other organizations or movements can just mix together their self-promotion funds and their actually-doing-things funds, and the general public wouldn't question this except maybe if the self-promotion is gauche or fraudulent. You might get a couple of people who insinuate some things, but if people can't point to a graph, they just don't think about it.

Expand full comment

I sympathize, but if it's any consolation, other groups also get criticized for excessive evangelism. Mormons come to mind. Americans' attitudes have improved towards them over time, probably because of positive role models. Now it seems like gentle teasing. This is where SBF stings a bit. But with so many kidney donors it seems like the sort of thing EA can bounce back from!

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Yeah, having the graph available is still way better than not.

I personally don't have optimistic outlooks on EA going mainstream, but also my previous idea of civilization would not have predicted that Bill Maher or news anchors would not only understand AI risk arguments, but start reinventing them! Hopefully my brain is just broken by cynicism, and someone can post this comment again in 10 years and laugh at me.

Expand full comment

"I find the sort of people who go to Libertarian Party meetings to be weird aliens." I feel this way. I also feel this way about rationalists.

Expand full comment

I remember, years and years ago, I read an essay (I can't remember where, maybe it was Paul Graham?) that said that a lot of influential people ended up wasting much of their later life responding to critics rather than getting actual work done. I seem to remember it saying that Newton spent the second half of his life responding to critics rather than doing any actual science. When I saw your second post in two days about a response to what I thought was, frankly, bad-faith EA criticism, I was worried the same thing was happening here. But you completely proved me wrong - this is top-notch.

I remember many years ago you expressed frustration that writing about charity got some of the lowest views, and incendiary topics went super viral. I feel like you've finally cracked that nut (perhaps unintentionally); this article was a blast to read and I wanted to send it to all my friends. It also did a lot to push me in the direction of EA. Great stuff all around.

Expand full comment

I keep picturing people sidling up to you with plates of steaming hot meat now, lol, like a game people play, 'Make Scott Sin' (working title)

I think I'm pretty much on Freddie's side. I think it's understandable to frame things in measurable good, but it's hubris to be so dogmatic or zealous that it obscures or excuses a lack of commitment to your own actual community. I know you can walk and chew gum, or both be a good citizen and an effective altruist, but the framing always seems so stark, as if these causes are the only things that matter, the effectivist altruism that can be visited.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

> It's hubris to be so dogmatic or zealous that it obscures or excuses a lack of commitment to your own actual community.

EAs don't believe this though. I really don't get where this idea comes from. Somebody just said it at some point, and then people online said, oh yeah, I bet those dumb EAs probably think that! Totally believable! In truth, EA as a culture is very overt that people should take care of their loved ones; it's well understood that people who have to care for sick relatives etc. can't donate much. In fact, EAs (or aspiring EAs, or departing EAs) in that predicament write their thoughts in private support Facebook groups and Slacks all the time, and everyone is very supportive. And 80k advisors explicitly recommend you not donate more than you can afford, and you should save for retirement etc., and keep hobbies and connections to your community. Seriously, all the high-up EAs I know live a well-rounded life.

EAs don't say that donating elsewhere absolves you of a responsibility to those close to you, rather the idea is that *if you are the type of person who believes in moral responsibility* (which it sounds like you are) then you should consider that trouble close to you does not necessarily absolve you of the responsibility to help at least some others who are not close to you, depending on your means. Alternatively, if Helping Others (TM) is something you value highly, you can probably accomplish that better by looking at opportunities that aren't close to you, just FYI.

Additionally, most EAs are pro-taxation and paying a government to create social safety nets for their neighbors. Yes, they would be happy to do this out of their own nonprofit paychecks, with the understanding that this means nonprofit budgets need to be higher than in a world without a strong government. Despite the badmouthing you read online, very few EAs are libertarians. In EA surveys, 85% are on the left while only 15% or so are on the right/conservative. I'm not even confident most of the rationalists in the bay, let alone the EAs in the bay, are libertarian. And then most EAs live outside of the bay (NY, DC, Boston, Seattle, London, Oxford, Berlin, Netherlands, Australia, India) in places that take social welfare really seriously, culturally.

Most EAs, if asked, would prefer a huge tax increase on the wealthy over marginal extra donations to AMF, for example (Bill Gates wants this too). But they think that isn't the most tractable and neglected thing for them to work on. I think you'd agree.

Expand full comment

Thanks for such a detailed reply, Ivy! I think I just naturally recoil when anyone approaches something like a ones-and-zeroes solution for a thing as complex and unquantifiable as morality. But, and I say this sincerely, the intellect of the average EA is, I'm sure, much higher than mine, and maybe the folks like SBF or the guy who says we measure future lives as equal to current lives are correct. I meant no offense anyway, and thank you for the education. I'm sure the average EA is doing their best to make the world a better place, and I can certainly respect that.

Expand full comment

Ah, thank you thank you. Sorry I'm late. Glad my comment was helpful :)

I understand the kneejerk suspicion (if not kneejerk presumptions). The average EA's intellect may be high, but it's valid to wonder if their wisdom is high. I'm just pretty confident the movement is evolving in the right direction from what I've seen. For example, people in EA 10 years ago used to be very hesitant to have kids (from their own reasoning about counterfactual donations and effort, not some top-down edict). But now most people I know in EA want kids guilt-free, and everyone I know is very wary of naive views that lead to burnout and alienation from non-EA community.

The more I think of it, I think EA really should rebrand. And not to run from PR disasters, but to be honest and clear with the public. A lot has changed since the movement's name was chosen.

Expand full comment

I think Freddie made some good points. At some level, EA does make “motherhood” statements. Sure, ending homelessness, child hunger….etc etc are undoubtedly good things….not unlike apple pie.

And to borrow loosely from Hitchens, ‘name one good deed an EA adherent can do or say that a non-EA adherent could not’. (The other half doesn’t apply….I can’t think of anything EA would espouse that is inherently bad.)

I think your point about people who identify as EA being people who actually walk the walk (and donate their 10%, or roll up their sleeves, etc) is the strongest testament for the value of EA as a movement, in-group, or life philosophy. I do wonder if there is a litmus test of EA “members” that establishes these individuals as doing more than merely paying lip service. If EA encourages or compels people to actually walk the walk and do something “good” (however defined), I would already consider that net “effective”.

I am curious how you would respond to Freddie’s characterization that EA boils down (or ought to boil down) to utilitarianism.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

"I think most of the people who do all three of these would self-identify as effective altruists"

Unless this means something like 'once you describe the label, they would agree that, yes, what they do sounds a lot like EA', then surely this is [citation needed]; does it feel intuitive that, globally, most of them have even heard the term “Effective Altruism”? Don't people who belong to churches regularly debate what charitable organisations to support, scrutinising their financial reports and performing other forms of due diligence?

Expand full comment

Scott's post clarifies a confusion I'd had about EA. I had been thinking of it mostly as a philosophy and not a "social technology" or "social cluster." As a philosophy, it is, in my opinion, wanting, and the critiques Freddie lobs at it are pretty good. Or maybe "philosophy" is the wrong word, and I mean something like "ideology" or some other term. (I realize Scott addresses that very point in item #6 of his essay. I'm just saying I'm still working through what I think about it.)

EA, as a philosophy (again, maybe not the right word), just doesn't seem that distinct from already existing approaches to charity. Someone I read online last night chided Scott for being in a Silicon Valley bubble and assuming that people who do all three things listed in item #1 see themselves as EA'ers. That person pointed out lots and lots of people follow and have followed that approach to charity and have done so long before EA ever existed. (I'd link to what I read, but I'm not sure that person wants to be linked to, so I'm not.)

Again, though, none of that is necessarily a criticism of EA as a "social technology."

Even on that front, it's not off the hook. At Freddie's blog, I accused EA of being "cult-like." I think that's probably the wrong term, and I regret using it. But I do think EA is "pre-cultish." By that I mean, it's benign now and may never become a cult, but if EA advocates aren't careful, it may start to become a cult. That, I should add, is true of many (most?) movements/organizations/philosophies/political parties.

Expand full comment

> By that I mean, it's benign now and may never become a cult, but if EA advocates aren't careful, it may start to become a cult.

In that case, you'll be relieved to know that forum users are constantly having discussions on how to improve the culture, organisations are reworking their internals and processes, and EAs and donors are watching each other like hawks for cultish behavior. Really, we don't need others to watchdog us and fingerwag at us. Maybe in the before-times. But post-SBF, almost everybody has gotten more serious and aspires to much greater professionalism. Because, excuse the language but it must be said, EAs are not actually insane morons and are therefore capable of course correction.

You can probably tell, the repetitive obvious critiques are really getting tiresome for the median EA. I wish people commenting online could note the obvious truth that they have access to significantly *less* insight about EA than EAs do, not more. What 90% of the peanut gallery thinks they see, we see too, probably sooner, and we're probably already working on it, sometimes fumblingly but as best we can. Others are welcome to help, of course.

Expand full comment

"When I talk to the average person who says “I hate how EAs focus on AI stuff and not mosquito nets”, I ask “So you’re donating to mosquito nets, right?” and they almost never are"

Good call. I just went and donated to AMF so I can own EAs now. (really, thanks for the kick in the pants)

Expand full comment

Really effective altruism would advocate for unbanning DDT or selective mosquito extinction rather than mosquito nets.

Like, the problem I have with EA is not that their intentions are wrong or bad, but that they mostly advocate for ameliorations rather than solutions to problems. And they listen to Peter Singer too much - if a kid is drowning in a pond, just take off your shoes and save the kid.

Expand full comment

There already *are* specific plans for how to make mosquitoes extinct. One of them, if I remember correctly, is to genetically engineer a mutation that will cause all of a carrier's kids to be male (and to carry the same mutation). If the mutant mosquitoes are otherwise identical to the original ones (i.e. equally good at survival and reproduction), this could drive the whole population to extinction. The mutants would have twice as many male kids as non-mutants (because all of their kids would be male), so the fraction of males with the mutation would increase in every generation... and when it reaches 100%, the next generation of mosquitoes is the last one.
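
For intuition, a toy generational model of that all-male-offspring drive is easy to write down. This is a minimal sketch in Python with invented starting numbers; it ignores fitness costs, resistance mutations, and everything else that makes real population genetics hard:

```python
# Toy model of a "daughterless" gene drive: mutant fathers sire only
# mutant sons, wild fathers sire a 50/50 mix of daughters and wild sons.
# All starting numbers are invented for illustration.

def simulate(females=500.0, wild_males=500.0, mutant_males=50.0,
             offspring_per_female=2.0, max_generations=50):
    for gen in range(1, max_generations + 1):
        total_males = wild_males + mutant_males
        if females < 1 or total_males < 1:
            print(f"Population collapses by generation {gen}")
            return
        p_mutant_father = mutant_males / total_males
        kids = females * offspring_per_female
        # Wild-fathered kids split evenly; mutant-fathered kids are all
        # mutant males.
        females      = kids * (1 - p_mutant_father) * 0.5
        wild_males   = kids * (1 - p_mutant_father) * 0.5
        mutant_males = kids * p_mutant_father
        print(f"gen {gen}: females={females:.0f}, "
              f"mutant male share={p_mutant_father:.0%}")

simulate()
```

Run it and the mutant share of males climbs every generation (each mutant father leaves twice as many sons as a wild one), the female count collapses, and the population dies out within about ten generations on these inputs.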

However, this is not a thing you could simply do. Not because the technology is difficult, but because unless done with government approval, this would count as bio-terrorism or something like that, and would probably get all participants in prison, and the entire effective altruism movement would probably be put on some suspected terrorist list, which would greatly complicate all their activities in the future.

And all journalists would freak out, of course. Plus there is a chance that this plan would fail, like maybe a new mutation would appear in nature that would counter the effects of this one. (Also there is a chance that this plan would succeed, and later some assholes would use the same technology to make other species extinct, perhaps including humans. You can't make the DNA of the mosquitoes flying out there a well-guarded secret.)

In real life, some very effective actions are illegal, and doing them would hurt effective altruism in the long term.

Expand full comment

This is one of those situations where people who are convinced of the moral rightness of their position may perceive that the risks of intervention by the law are worth the potential gains to humanity in a sort of Benthamite utilitarian calculus.

This experiment has already been run in the wild IIRC:

https://debug.com/

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

In arguing that EA is unique you write:

"1. Aim to donate some fixed and considered amount of your income (traditionally 10%) to charity, or get a job in a charitable field."

This description does not distinguish EA at all from other charitable endeavors. The 10% donation is just a completely standard Tithe. See: https://en.wikipedia.org/wiki/Tithe, and the "get a job in a charitable field" part is just a slightly more general version of "or pursue missionary work / be involved directly with the charity".

To elaborate on tithes, if you are a Christian then giving to the church is a charity, and hence the church Tithe is exactly what you describe. Tithes are such a standard procedure that, for example, in Austria they are automatically collected as a tax. See: https://en.wikipedia.org/wiki/Church_tax. I believe other religions, and likely even other (non-religious) charity movements, have similar conventions.

This kind of EA uniqueness mindset, when in fact a lot of EA is just doing pretty standard charity/religion/cult/social-movement type stuff while making a lot of noise about being special (which is also pretty standard for a charity/religion/cult/social-movement), is one of the main qualms I have with EA. To clarify though, EA is a large group of people, and there are people associated with EA doing great things I greatly respect, e.g., the writer of this blog!

Expand full comment

You were supposed to take all 3 points into consideration as the uniqueness criteria. They work together. The main differentiator is actually number 2 imo. Scott glosses over it because he is so used to seeing the difference between the ways EAs try to think about things and how regular philanthropy/do-gooders do, but let me know and I can spell it out for you if you want.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

I don't think that trying to get bang for one's buck is a unique thing. I think more or less all institutions, movements, collectives and groups attempt it with varying levels of success.

There are multiple issues preventing efficient action, e.g.:

- Corruption

- A lack of the necessary analytical training or culture

I think that EA does well on having lots of analytics, but it has been hit hard by the SBF fraud on the first point. Further, EA is not unique in having analytics.

The whole field of operational research and economics exists independently from EA and by far predates it :P You could also argue that EA is about actually doing operational research on "actually maximally improving the world." But once EA starts wading into topics like animal welfare it starts to feel like any old charity movement (that is focused on efficiently achieving its goals).

To elaborate on how all charities can pursue operational research, my understanding is that churches have for a long time been focused on efficient outreach and retention. I believe, possibly erroneously, that they actually commission studies on the topic. [Remember that to a dogmatic Christian, converting others to the faith is by far the most important thing one can do to help others, vastly more important than helping them materially in this world.]

One can vaguely say that EA might on average be better than most other institutions, movements, collectives, whatever you call them. However, that to me is not particularly clear, nor a clear-cut distinction.

Do spell it out if you think I have missed something. To be fair to you, I can see you (and Scott) might be arguing that basing decisions on analytics is a central tenet of EA in a way it is not for other movements. I still do not think that is a crystal clear distinction, e.g. Christianity has lots of parables about how one should actually try to genuinely be good, rather than act out being good (and I think other movements and religions have this too). It seems to me that this aspect of EA is just a slightly more modernized take (with sprinkles of economics thought and operational research) on very standard stuff.

To give another instance of non-EA activity that fits into the second point Scott makes: I'm not sure that the Bill and Melinda Gates Foundation has anything to do with EA. It just seems to be charity-focused and have good operational research, and to claim all instances of this combination as EA is to me a bit of a stretch. (Happy to be corrected here, re the Bill and Melinda Gates Foundation)

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

Sure, I see what you are getting at. But there is a distinction.

This will be a long reply, but it will be mostly copypasted from what I've written elsewhere. You don't have to give a similarly long response. First I'll go over the difference in reasoning, then I'll go over how EA the Bill and Melinda Gates Foundation is.

1. I agree #2 is a lot like economics. In that case what separates it from economics are numbers 1 and 3, and also commitment to reason, and considering tradeoffs, a certain mode of thought. So, it's like economics + philanthropy but still doesn't really capture it:

Scott starts off calling this mode of thought "consequentialist reasoning", and it often is, but it's also just people's value calculations taken *very seriously*.

Necessary aspects are, at minimum: (1) Considering what is most important, what your fundamental goals are or should be. (2) *Trying very hard* to do clear, consistent reasoning and truthseeking, including checking assumptions and fundamental stories about how the world works if relevant. (3) Multiple promising competing options. (4) BOTEC (back-of-the-envelope calculation) with some consideration of tradeoffs (this is just EA thought at a minimum, remember?). And (5) Doublechecking your results with others (not just anybody, not just to say you did it, but actually seeking others who you could reasonably expect to give good input, e.g. give you new considerations or, ideally, prove you wrong if you are wrong, because you care more about the truth than validation or appearing to be good, sound, and correct).

Spelling this whole process out makes it clear that plenty of things you might want to name and say "oh but in that case X is not different from EA!" are in fact different. For example: a grantmaking foundation that focuses on preventing teen pregnancy may compare multiple competing charities via complex calculations (3 and 4), and they may even doublecheck their results with other grantmakers in the space (5). But they would not be EA unless they had taken this questioning all the way to the top, also considering their fundamental goals and values, and whether preventing teen pregnancies was a highly effective way to accomplish those fundamentals (1 and 2) when compared with other causes (3, 4, and 5 again, but more thoroughly, more tradeoffs of other causes considered).

This is why moral philosophy is so foundational to EA. Because EAs take the fundamental value discovery part way more seriously than most people. A lot of EAs decide they value "lives saved" or "suffering-hours reduced" but sometimes that isn't the case. Sometimes it is "intelligence increased", "life-years added", "prosperity increased", "freedom, dignity, and liberty increased", "preferences respected" or anything else that truly is fundamental in your moral framework. Anybody can do it. It is, as Scott says, “consequentialist reasoning”, but non-consequentialists can use it too, and that's what EAs do. Non-EA grantmakers and donors almost never go deep enough, or question their assumptions high enough up, to qualify as EA.

The calculations (BOTEC at minimum) can use lots of inputs determined by your values. Plenty of reasonable inputs don't lead to AI-risk as the best cause, for example. But the point is that this whole process is pretty unique to EAs. I, like you, have friends and family who donate "thoughtfully", but it's obvious that they are doing fundamentally different thought than EAs are. These non-EA friends and family rarely do the systematized parts (BOTEC of a variety of options), and never do the rest (proactively discovering their values, questioning their assumptions, and consulting with others who could actually tell whether their choices are well-supported and if there is something even better out there). Actually it is especially hard to do that last part, doublechecking with knowledgeable others, without engaging with the EA community. I don't know where I would begin if not posting on the EA Forum.

So, it is the way of thought (which supports choosing highly effective actions) that makes EA different from others who try (in different or halfhearted ways) to put their money and actions where their mouth is.

2. I don't exactly agree with Scott calling Gates an EA if he wouldn't do it himself. However the distinction does make sense for him where it never would for churches or any other foundation/major donor I can think of. Here is a bit more about Bill Gates so you can see the connections:

-Bill Gates and the Bill and Melinda Gates Foundation (the foundation's researchers and grantmakers) do actually do the same sort of calculations and reasoning as EAs. We know this because their reasoning led to the same conclusion where hardly anyone else's had: work on reducing and solving malaria is one of the most impactful things one can work on. He also hasn't been shy about implying it's morally repugnant that more people aren't helping with Malaria, invoking how much disposable income and privilege westerners have. This is very EA morality, not every EA thinks like that but a lot do. I mean Gates even made that crazy speech where he released mosquitos on the audience. Philosophically and reasoning-wise, he matches.

-Speaking of malaria, the cause area: how much good has Gates done? More than anyone. The foundation has poured so much money toward anti-malarial work, including now some tech-forward work on malaria vaccines. So now you've got his funding habits connecting to EAs, because the number one thing EAs fund and promote as a giving opportunity is still AMF.

-Additionally Bill Gates was extremely bullish on paying for COVID vaccine production. He had that major speech where he slammed the govt for poor resource allocation, basically saying he'd spend however many millions to save however many billions that the economy was about to hemorrhage away due to COVID. And he did indeed give and facilitate a lot of good work around COVID. This relates to EA because pandemic preparedness and vaccines are a *huge* EA cause area. I am reasonably confident from what I've read that the Gates Foundation funded Alvea, an EA-founded COVID vaccine nonprofit, and he has probably funded more I just don't know about because I don't work in bio or philanthropy.

-I also know for a fact (from my own eyes) that the Gates Foundation has had at least one meeting with an EA-related donor advisory firm. I was only working in philanthropy a couple of months, in one firm, so that's a small sample size. As to whether the advice impacted their donations, I don't know, and I doubt the Gates Foundation really wanted anyone to know about the meeting; they probably don't want to cast their decisions into doubt or risk political damage. But that they met with EAs is not surprising. I also expect that they have met with major EA donors, specifically Open Philanthropy fund managers at least, to discuss replaceability of dollars and so forth. In other words, they are active in the EA ecosystem. Plenty of self-described EA orgs and donors engage that way (slightly and casually)! But checking what other informed people think is fundamental to EA.

-Gates has also signed the Center for AI Safety's Statement on AI Risk which states that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." AI Safety is EA. CAIS may even be an EA-founded org. CAIS is heavily funded by Open Phil, the largest EA Foundation.

In sum, by all EA metrics, Gates's donations have been highly effective, and he is actually in overt support of all the major EA-recommended causes, except animal welfare, I think. All this taken together (and I'm sure I'm missing a lot) paints a very solid picture of Gates as an EA. It was not a random or one-time intersection for him. It was not a case of a broken clock being right twice a day. Gates's focuses are just as planned as EA's, and the reasoning would seem to be near-identical because of the near-identical conclusions. EAs are not clones of each other, there are disagreements in the movement. But heck, he might actually agree on more EA stuff than any of the EAs who have their pet cause area?

By comparison, you can see, EAness simply doesn't make sense to ascribe to churches or most grantmakers. They sometimes think and compare carefully, I guess... but rarely consistently, and always according to their (by EA lights) rather arbitrary goals. Gates's difference here, paired with actual involvement with EAs, makes his being an EA a reasonable case to make.

Finally, I'll admit that philanthropy may be changing. Maybe EA's major win in the long term will have been to prompt other foundations and charities to become more focused on impact, to avoid public embarrassment and be able to compete for funding. But just a few years ago EA was maligned for claiming you could compare charities and causes at all. I also read pieces by old-hat grantmakers and philanthropic magazines sneering at the new effectiveness and impact-measurement focus. Now commenters in 2023 speak like the average donor outside of EA acts as reasonably as EAs try to, like that's the default. I don't buy it yet.

Expand full comment

All charities (provided they are not corrupt) will aim to get bang for their buck. E.g., a church wants to ensure the spread of its religion, and considers that charity work. Again, I fail to see how this is unique to EA.

Further, EA has shown it can be just as vulnerable to corrupt members as other movements. For example, Sam Bankman-Fried benefited from talking a big EA game. It helped him commit financial fraud, enabling him to enrich himself by, e.g., spending his money on socializing with celebrities.

Expand full comment

When I hear about charities that EA likes, it’s mostly aid for extreme poverty (mosquito nets, etc.) or X risk. What does the movement think about medical research donations? How does increasing the odds of curing lung cancer compare to reducing the risks for AI?

Expand full comment

This is just a guess, I have not read the relevant research... but I think that malaria nets are orders of magnitude more effective per additional dollar spent.

Medical research is expensive and there are already billions of dollars spent on it. Suppose you donate an extra $1000; what difference would it make? One day of salary for a research assistant? A new chair? And even if the cure for lung cancer is finally developed, it will still cost additional money to actually cure a specific human.

On the other hand, $1000 can buy a hundred anti-malaria nets, and on average one of them directly saves a human life. (I just made up the numbers, but I think they are approximately correct.)

Expand full comment

That makes sense for the charity aspect, but surely longtermists should love medical advances. Especially if you focus less on specific drug trials and more on changing policy to make it easier to run studies and approve drugs. Don’t know if it’s clearly more valuable than research into AI risk, but I think it has enough merit to consider.

Expand full comment

> Especially if you focus less on specific drug trials and more on changing policy to make it easier to run studies and approve drugs.

I agree. But please notice that it is quite difficult to evaluate the effectiveness of money spent that way. How would you estimate in advance how much money you need to spend on lobbying until the policy change happens? Even approximately?

(Also, such change would probably be even *more* criticized by the journalists. I am not suggesting that this should stop anyone, just stating the fact. Policy changes in this direction are politically coded as libertarian and thus generally right-wing, so the clickbait headlines already write themselves. "Effective Altruists talk about feeding the poor, but they actually donate to right-wing political causes.")

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

> I don’t think a movement our size is capable of rebranding.

Okay, I'm not entirely sold that we should rebrand, but I disagree it would be as hard as many say. IMO, only CEA (and their projects like the forum and conferences) and student and city groups need to rebrand. Anything that has both (1) effective altruism in the name and (2) is public-facing. Then websites like 80K and GWWC that mention EA a lot can selectively change that text out. I don't think this would be a big deal? Only a medium deal ;) Any rebrand just needs some PR note on what is changing in focus* enough to warrant a new name. Again, not totally sold, but think it's doable.

*but actually things wouldn't even have to change much! We could just clear up confusion so the untrue stuff stops being said, at least. A PR statement should just clarify what the general priorities and tactics are, and publicize the many leadership changes that have happened since 2022 that make it a good time for reaffirming goals. A transparent reasoning for rebranding just needs to be soothing enough to keep critics from spamming "Never forget that [New Name] is just Effective Altruism, evil rebranded!" every few months on Twitter. This kind of affirmation could breathe new life into the movement, or at least help many of us breathe a sigh of relief. "Ah, we are all on the same new page, with a fresh start"

Expand full comment

"When I talk to the average person who says “I hate how EAs focus on AI stuff and not mosquito nets”, I ask “So you’re donating to mosquito nets, right?” and they almost never are."

EA's critics are surprisingly similar to LLMs. They're just generating strings of text, for no reason other than that they were prompted to.

Expand full comment
Comment deleted
Expand full comment
Dec 4, 2023·edited Dec 4, 2023

I understand where you're coming from, and that's a good heuristic in general. But the simpler explanation here, for both of our feelings, is that EA's critics actually are usually talking out of their ass, and it isn't surprising to see people do that.

Expand full comment
Comment deleted
Expand full comment

Yes, I agree with all of this, except equating EA with its critics, which is generally a bad mentality, as the odds of two sides being equally bad are low (I didn't read Freddie's work, so I only have EA criticism from journalists to go off of, and didn't realize that this time was different). Also, based on human history, the world as we know it seems like the kind of place where a group of people start actually trying to do the right thing for the world and then get senselessly screeched at for decades for various convoluted justifications. My thinking is that it seems reasonable that this is what happened with EA, e.g. the people most dedicated to preventing a new era of trigger-happy biological warfare, in addition to many other rather intensely good things.

Your original comment was pretty interesting now that I go into it. I think that people have a natural error rate, and will sometimes make statements unusually smarter and unusually dumber than their average. Maybe people with wider/higher error rates are the ones most often making extraordinarily smart statements, whereas all the non-extraordinary statements are irrelevant. Or maybe my math here is too simple. One way or another, people will occasionally make mistakes and be too aggressive, and it doesn't make sense to say that means their whole worldview is as broken as a pastor's even if they sometimes make statements as flawed as a pastor's statements. That protocol could presumably have a negative impact on a group if people run it too frequently. In this case, it's ambiguous since I didn't have Freddie's article in mind at all when I wrote the LLM comment.

Expand full comment
Dec 1, 2023·edited Dec 1, 2023

I've noticed that too! Seriously like next-token predictors. The overall statements are so pattern-matchy too.

Expand full comment

I wrote a much shorter and less precise response before I read Scott's and it makes me happy that we said some of the same things. I particularly like the focus on the "doing" of doing good better. That's what EA is about, getting people who generally do think a certain way to actually act in accordance with that.

Here's what I wrote. If this is considered inappropriate self-promotion, please let me know and sorry. If you think I got anything wrong, I'd also love to hear that.

https://fourofalltrades.substack.com/p/effective-altruism-has-good-consequences

Expand full comment

I've been giving 10% of my income to GiveWell for the past 4 years, 1 year after I read your Giving What We Can post on SSC. I agree with the quoted statement "I hate how EAs focus on AI stuff and not mosquito nets". So you've got at least one person in your supposedly-empty category of non-armchair EA critics.

In fact, I kind of resent your suggestion that anyone who takes EA seriously will naturally end up "tempted" to fund x-risk mitigation. Off the top of my head I can think of three reasons that wouldn't happen:

A. You assign a high discount rate or high opportunity cost to charity with delayed impact. This could be because you care more about the present day or believe that future people will be vastly more capable of dealing with future problems.

B. You disagree that "avoiding extinction" == "saving a life" x 8 billion. Only the most basic form of positive utilitarianism would lead you to that result.

C. You are extremely skeptical about the methods used to measure x-risk, the solutions proposed, or both. You value your money too much to put more than a token amount into moonshot bets.

Expand full comment

You're assuming that "consequentialist reasoning (where eg donating to a fancy college endowment seems less good than saving the lives of starving children)" is some sort of well defined thing. It ISN'T. Not even close.

Even if we think in utilitarian terms, most people (idiots IMHO) want to maximize total utility; others (much less idiotic, IMHO, but sadly also much less common) want to maximize some sort of mean utility. Others might be concerned with median utility, others with ensuring that the utility of the lowest 10% (of humans? of living things?) is above some level.

And then of course there are completely different prime goals. For some the prime goal is "have you heard the good word of <insert name>". For some it's ensure the survival (as I understand it) of my culture (as I understand it). For some it's spread life and/or intelligence through the universe.

EA insists, and insists loudly, that IT has the one true moral goal, and that everyone else is not serious about ethics. This is deeply insulting, and it goes down about as well as every other attempt through history to insist that your religion makes you holier than me.

And the EA population, much like all these other holier than thou populations, refuses to even see the point from the other side. It's all "I hear what you are saying, but the truth is what god REALLY wants is maximum total utility, so it just doesn't matter what your crazy mean utility idol tells you because there's only one way to get into heaven and it's our way".

Like every good idea in history, EA started off as a heuristic that was reasonable in many situations. But then became weaponized as a way to condemn other people, and that's where we are today. The two crazies kinda work together – you get the fundamentalists who treat the scriptures literally (what if animals could suffer? how about plants? how about electrons?) and are OK with the craziness that results, and you get the social climbers who hear the craziness and think "that would make a fine tool for cutting down other people".

Yes yes, you plea for moderation and common sense. Normal people always plea for moderation and common sense, that's what makes them normal. BUT regardless of that we land up burning people who refuse to concede the particular craziness of the hour...

Expand full comment

"And anti-racism has even more achievements than effective altruism: it’s freed the slaves, ended segregation, etc."

So maybe the answer is to admit that the problem is solved and disband, not keep inventing new supposed problems to be outraged about?

And maybe the same is true for EA? Admit that things are pretty good compared to say 1900, let alone 2023 BCE, live your life doing charity as you wish, but don't feel a need to evangelize or explain?

Just as most of what requires statistics is probably garbage (if your social effect can only be teased out by careful manipulation of 100,000 numbers, then honestly, WTF cares), so perhaps it's just NOT THAT IMPORTANT to be funding the most maximally beneficial charity? You can fund bed nets (save lives), I can fund archaeology (make life worth living as I see it) and we're both doing fine things for the world. It's just NOT IMPORTANT (or to put it another way, it's a matter of *opinion*, not of calculation) whether I "should" be donating my money to bed nets rather than archaeology.

Expand full comment

Is it just me or do most of the critiques against EA feel like rationalizations to dislike the kind of people that run or participate in EA activities? I do not understand the amount of effort being expended to debate the philosophical underpinnings of a charity or philanthropic belief. If you object to how EA people approach charity or activities the appropriate response is to shrug and move on to the next thing right? Why get so worked up? The most dangerous thing that EA might accomplish is slow down research into AGI. Am I missing some nefarious activity?

deBoer’s critique is what, that EA is dumb? That they have the wrong approach and he can’t take them seriously from a philosophical standpoint? My initial reaction to his article was: who cares? Much to my astonishment, many people seem to care deeply. I think Scott’s main point is a good one: whatever you may think of the people involved, their motivations, and their priorities, you should at a minimum acknowledge that EA has done some good work. I’m sick of the endless effort to see the worst in everything and question people’s motives. EA is just one of many approaches to giving and trying to make things better. If you don’t like it there are plenty of other ways you can approach it. I’m sure the people being helped will be thankful regardless of how you decided to do it.

Expand full comment
Comment deleted
Expand full comment

First, I don’t feel like I’m being criticized (except by you) because I don’t consider myself to be part of the EA tribe. My question, unaddressed by your insulting comment, stands: why are people so exercised by this particular group of people doing this kind of work? There are countless organizations, movements, and groups doing charitable work. All of them have core beliefs and all of them are at risk of attracting grifters. The fact that deBoer doesn’t like or trust the foundational beliefs of some of the prominent members is neither surprising nor interesting. My key irritation was set off by this passage:

“While a lot of its specific aspects are salutary, none of them require anything like the ethical altruist framework to defend them; the framework seems to exist mostly to create a social world, enable grift, and provide the opportunity for a few people to become internet celebrities. It’s not that nothing EA produces is good. It’s that we don’t need EA to produce them.”

Who the fuck is *we*? Freddie’s subscribers? It seems pretty obvious that the people that received the mosquito nets, chickens living better lives, kidney recipients, etc. needed someone to step up. Could another group of people come up with the same interventions using a different philosophical foundation? Maybe. But did they? Scott’s whole point is that EA actually showed up and did something. If SBF is the worst thing to come out of EA, then it is an unqualified success of a movement. Even adding on the irritation that Freddie gets from some people getting attention on the internet because of it would still point to EA being a laudable mission.

“While a lot of its specific aspects are salutary, none of them require anything like the ethical framework to defend them...” The obsession with underlying ethical frameworks is one of the reasons it is so hard to accomplish things in today’s world. As Scott says, building coalitions is the only way to tackle big problems. You don’t have to agree with mainline EA to either work with them on particular things or acknowledge when they do something good. deBoer sets up the whole essay such that EA folks need to “defend” their way of thinking about helping people. The basis of the piece is just toxic IMO. Why make people defensive, why attack them just because you don’t think along the same lines?

Expand full comment

You are what in Europe we call a Liberal. The term Libertarian, when used in the American sense to mean Minarchism, is associated with some rather questionable individuals and policies.

Expand full comment

I wonder if the crux of the argument is maybe because "Effective Altruism" is a bit like a bistable Necker cube. Depending on how people encounter the message, one person might hear "effective ALTRUISM" as in "do things that are ALTRUISTIC subject to also being more effective in the world", and another might hear "EFFECTIVE altruism" as in "do things that are EFFECTIVE subject to also being more altruistic". Theoretically, the order of what's the main objective and what's the constraint shouldn't matter, but in practice I think it does. One might lead someone to consider ideas like the best way to donate money, or termite suffering. The other might lead one to consider ideas like earning to give, and potentially also selling one's granny, apparently (https://www.honest-broker.com/p/why-i-ran-away-from-philosophy-because).
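
To see how the ordering can bite in practice, here's a minimal sketch in Python of the two readings as constrained maximization. The options, scores, and floor are all invented for illustration:

```python
# Invented 0-10 scores for how "altruistic" and how "effective" each
# option is. Purely illustrative, not anyone's real ratings.
options = {
    "volunteer_soup_kitchen": {"altruism": 9, "effectiveness": 6},
    "donate_bednets":         {"altruism": 7, "effectiveness": 8},
    "earn_to_give_quant_job": {"altruism": 5, "effectiveness": 10},
}

def choose(maximize: str, subject_to: str, floor: int = 6) -> str:
    """Maximize one score among options that clear a floor on the other."""
    eligible = {k: v for k, v in options.items() if v[subject_to] >= floor}
    return max(eligible, key=lambda k: eligible[k][maximize])

print(choose("effectiveness", subject_to="altruism"))
# -> donate_bednets  ("effective ALTRUISM")
print(choose("altruism", subject_to="effectiveness"))
# -> volunteer_soup_kitchen  ("EFFECTIVE altruism"?)
```

With these made-up scores the two readings pick different actions, and in the first call the constraint actually binds: the highest-effectiveness option is excluded for not being altruistic enough.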

(Full thoughts here: https://aliceandbobinwanderland.substack.com/p/when-you-read-effective-altruism)

Expand full comment

I wanted to comment this to deBoer but I’m not a subscriber, so I couldn’t. So I did a restack, but I have 0 followers, so I will also put it here:

Quotes are from deBoer, and it’s directed at him, not at Scott:

> It’s not that nothing EA produces is good. It’s that we don’t need EA to produce them.

This is where you are making the mistake most outsider writers make when discussing EA: you’re only looking at the concepts and not reporting out the facts on the ground.

Without the EA movement, the piles of money that have been given to philanthropies that you probably have no objection to would not have been given.

Why? Because EA is more than a concept. It’s a functional community where people hang out together and push each other to give. In that way, there is accountability around giving and so people give more.

Yes, they also have weird earnest conversations about the edge cases of what it means to do good. This is their idea of fun. Who cares? I also think watching football is pretty stupid and trivial but a lot more people do that and I don’t clutch my pearls about it. If they want to debate about termites to pass the time: who gives a shit?

MEANWHILE: while having a very real in-person community in major cities all over the world, they also learn about giving and discuss giving, making them more sophisticated about it, a soft development of cultural capital that has real value for the world.

I am not an EA. But I have looked into it on the ground. There are a lot of people spending a lot of time together, committing to this movement and feeling part of something by doing so. This makes it bigger than a concept, and that’s why all this endless prognostication on EA — such as yours — on the purely theoretical level is irresponsible.

It is a concept but it is also a committed body of people, both famous and rank-and-file, which makes it also an institution. So if you want to talk about it in a way that is intelligent and doesn’t do a disservice to your readers, you need to actually have a look at how EA is instantiated in the actual world, and that requires more work than reading tweets.

You could be right that as a philosophy it’s not actually all that interesting, but that’s what strengthens my point: it’s not the ideas that matter but the actual engagement between people that EA has engendered.

It’s a global, secular accountability project that convinces young people to share their wealth, and I challenge you to find a comparable peer organization.

Addendum:

> This is why EA leads people to believe that hoarding money for interstellar colonization is more important than feeding the poor

👆This is just pure straw man garbage and you know it. STFU.

Expand full comment
Dec 3, 2023·edited Dec 3, 2023

While I think EA is generally dumb (see, for example, my comments elsewhere in this post) I think there's a better response to Freddie and that's that EA does have a unique take on charity: they reduce it to a single quantifiable measure (QALYs saved) and then optimize ruthlessly for that measure. IMO that's actually a significant innovation. Surfacing objective data allows that data to act as a price signal. That, in turn, unleashes the vast power of market forces. I have no idea if market forces have all that much power without some sort of rational selfish interest behind the price signal, but at the very least it provides a mechanism for large-scale coordination and social information processing.
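
To make that concrete, here's a minimal sketch of the "single measure, then optimize" step in Python. The charities and cost figures are invented placeholders; real evaluators (GiveWell, say) work with wide uncertainty ranges rather than single point estimates:

```python
# Hypothetical cost-effectiveness table: dollars per QALY.
# The names and numbers are made up for illustration.
cost_per_qaly = {
    "bednets": 70,
    "deworming": 120,
    "local_arts_program": 15_000,
}

def allocate(budget: float) -> tuple[str, float]:
    """Put the whole budget behind whichever option buys the most QALYs."""
    best = min(cost_per_qaly, key=cost_per_qaly.get)
    return best, budget / cost_per_qaly[best]

charity, qalys = allocate(10_000)
print(f"{charity}: ~{qalys:.0f} QALYs")  # bednets: ~143 QALYs
```

Ruthless is the right word: the minimization funnels the entire budget to one line item, which is exactly the price-signal behavior described above, and exactly what critics of single-measure optimization object to.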

Now I don't think that QALYs is a wise choice of Thing to Optimize For, but I also don't think that you can argue that it's not a legitimate innovation to the space.

Expand full comment

I thought the premise of EA was that nothing like it had existed before EA came along.

So to me it's obvious that Bill Gates was doing EA already, so I don't know what EA brought to the table that Bill Gates didn't.

Could someone explain it to me? Like didn't we have charities that tried to use data and metrics and take on some basic utilitarianism before?

What's *groundbreaking*? Because I really feel like EAs sold themselves as groundbreaking. They didn't sell themselves as "The Gates Foundation with (maybe) slightly different napkin math and (maybe) the demographics are a bit different (younger?)."

Why should I put time and energy into EA that I didn't put into The Gates Foundation or things like it? If I already rejected donating to The Gates Foundation, why would I choose EA?

Expand full comment

I'm not a fan of EA. I'm not concerned with AI risk. It's the people that use the AIs that could be troublesome, but that is more of a problem with human nature. I dislike the vegan focus in the EA movement. I know that EA is more than AI risk and veganism, but I hear so much about those topics from EA sources that it is hard to remember sometimes. Plus, there are other organizations that do good without focusing on things that I think are pointless or actively dislike.

Expand full comment

I think if you want to make the argument that EA is good because it's doing things everyone agrees are good, you need to count Bill Gates not just as an EA, but as more of an EA than say Yudkowsky.

Expand full comment

Well, I think the issue is not "oh you aren't donating to mosquito net people" and more: hey, I volunteer at my local soup kitchen and donate to a local church, and I resent the fact that a bunch of self-righteous a-holes who work in finance and tech, of all things, are now judging me for the good I do in the world.

Moreover, something I stress: doing good has a deontological component as well as a utilitarian one, and focusing on the latter to the complete detriment of the former is ultimately a negative for charity overall.

Expand full comment