
Very good post! Nice to see some engagement with this argument. I must admit, however, that I agree with the dust-specks-worse-than-torture conclusion, and I do think there are several places where your objections don't fully address the arguments, or where slightly revised versions go through.

For the first argument, I think you misinterpret replacement. The premise is not supposed to be that, since 100000T < T, it follows that 100000T(1-e) < T(1-e). Rather, the premise is just that for any level of pain P, 100000P(1-e) < P, i.e., 100000 people each suffering pain at level P(1-e) is worse than one person suffering at level P. From this, the conclusion does follow (together with transitivity).

This is just supposed to be a brute intuition. Maybe you don't share it (I do find it quite plausible), but that's at least the idea as I understand it.
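To see how the iteration works, here is a small numeric sketch (all the specific numbers, the torture intensity, the attenuation e, and the dust-speck threshold, are my own illustrative choices, not from the argument). Each application of the premise trades one sufferer for 100000 sufferers at slightly reduced intensity; after finitely many steps the intensity falls below dust-speck level, and transitivity connects the endpoints:

```python
import math

# Illustrative numbers (my assumptions, not part of the argument):
T = 1.0          # intensity of the original torture-level pain
e = 0.01         # per-step attenuation in the premise 100000P(1-e) < P
speck = 1e-6     # intensity of a dust speck
multiplier = 100_000

# Smallest n such that T * (1 - e)**n drops below dust-speck intensity.
n = math.ceil(math.log(speck / T) / math.log(1 - e))

intensity = T * (1 - e) ** n   # final per-person intensity
people = multiplier ** n       # how many people suffer it

print(n)                 # 1375 steps with these numbers
print(intensity < speck) # True: each pain is now below speck level
print(len(str(people)))  # the population has 6876 digits
```

So a chain of 1375 individually plausible "worse than" judgments, plus transitivity, ranks an astronomical number of sub-speck pains as worse than the single torture.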

In another comment you say that "the math doesn't support this assumption/premise." That's of course true in the sense that the principle is not a theorem of mathematics. But it wasn't supposed to be! It's supposed to be a substantive assumption that is nonetheless highly plausible.

When you argue that mild pains have decreasing marginal badness, you say that it "makes sense even at the individual level." It seems to me that it *only* makes sense at an individual level. If it's a new person who gets a dust speck every time, no one would get used to it, and it doesn't seem plausible that it gets less bad.

I'm also not sure I see why we should think that very bad pains become exponentially worse the more people experience them. That seems quite implausible. If anything, if you're inclined to say that people getting used to pains makes them less bad, this would seem to be the effect here as well.

Besides, it seems to me that the reasoning here slightly misreads the intent behind the argument. Sure, one can agree that the experience [novel pain] is worse than the experience [pain, and you are used to it]. But the idea behind the uniformity of badness is that two states are equally bad *when they have the same bad-making features*. And these two states don't, assuming that how used you are to a pain is relevant for its badness.

As for the model itself, it seems susceptible to Egyptology-style objections. How bad inflicting a mild pain on someone is depends on how many other people experience mild pain elsewhere in the universe. So if I can obtain some fixed benefit by inflicting a mild pain on someone, whether this is good depends on how many people are in mild pain on distant planets or whatever. And depending on your moral theory, what you should do in that situation would also depend on that. Maybe the model avoids this in some way that I missed? It seems like there might be ways of fleshing out the model where this doesn't become a problem.

(Note that this is not the objection you respond to further down. I agree that you can say that the badness depends on external factors without the badness *for each person* depending on that. This is about what decisions you should take instead.)

Also, the model seems to be sensitive to how we individuate pains. If I have chronic back pain, but it flares up twice with a little time in between, does that count as a single pain that varies in intensity over time, or as two separate pains? (As you say, the duration of a pain is also relevant for whether it meets the threshold.) So something would have to be said about this, and I suppose I have a hard time seeing what would be a principled way of doing so (though you may have something in mind that I hadn't considered).

(I can't speak to the alternative solution you sketch.)

As for the risk argument, you're right to point out that as formulated it doesn't necessarily lead to a 100% chance of torture, since running separate lotteries on different galaxies would have the chance of yielding no tortures, or several tortures.

But we can modify the example to accommodate this: There are Rayo's number tickets (numbered 1 through Rayo's number) and Rayo's number people. Each person has the opportunity to buy a ticket or not. If they don't, they stub their toe; if they do, they enter the lottery. After everyone has decided, a Rayo's-number-sided die is rolled, and the person with the ticket number corresponding to the result of the roll (if such a person exists) is tortured. If everyone buys a ticket, it is guaranteed that one and only one person will be tortured, and if everyone refrains, everyone will stub their toe (or get a dust speck in their eye, if we want to use that example instead; the point is that they experience a pain under intensity i_c).
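Reading the individual decision in expected-badness terms makes the pull of the first premise easy to see. (The numbers below are stand-ins of my own: Rayo's number itself is far too large to compute with, and the badness values are purely illustrative.)

```python
N = 10**12         # stand-in for Rayo's number
stub = 1.0         # illustrative badness of a stubbed toe
torture = 10**9    # illustrative badness of a torture; any finite value works

# For a lone decider: buying gives a 1/N chance of being the tortured one,
# refraining gives a certain stubbed toe.
expected_if_buy = torture / N
expected_if_refrain = stub

print(expected_if_buy)                        # 0.001
print(expected_if_buy < expected_if_refrain)  # True

# With N as absurdly large as Rayo's number, torture / N falls below the
# stub badness for ANY finite torture badness, so premise 1 looks very
# plausible; premise 2 then pushes everyone toward buying, which makes
# exactly one torture guaranteed.
```

The single shared die, with a guaranteed unique loser when all tickets are sold, is what closes the gap that independent per-galaxy lotteries left open.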

If we say that: 1) if the offer were made only to you, you should buy the ticket, and 2) whether or not other people have taken the offer doesn't affect what you should do (and that what people should do is not agent-relative in some very strange way), then it follows that everyone should prefer to buy the ticket.

I feel like the objection you raise sort of misses the point. The argument is not really about whether you can have causal influence on other galaxies or not. Maybe the talk about "what you should prefer" is confusing here. We can simply add the lemma "if everyone should prefer some option, and what one person picks doesn't affect the outcomes for any other person, then it's best if everyone picks that option" (note this avoids collective action problems/prisoner's-dilemma-style situations). With this, the conclusion that the many small pains are worse than a torture follows. Or, if we don't want this detour into decision-making, we can just make the original principles about axiology:

1) If taking the ticket is best for any single person considered on their own, and 2) whether or not other people have taken the offer doesn't affect the goodness of that person's decision, then it follows that it's best if everyone takes the ticket.

This got very long, lol, sorry. Again, I enjoyed your post, and I thought the model was very interesting (even though I of course have some worries about it). Sorry if I misunderstood any of your points, there were a lot of points to go through:)

The Dust-speck vs. Torture argument is bad math.
Mar 26 at 3:47 PM