Is It Morally Obligatory to Precommit Not to Benefit from Negative-Utility Events?

TL;DR: Iff the potential for people like you to benefit from such an event is a significant causal component of the possibility of that event happening.


Consider the following hypothetical situations:

  1. The murder-mystery situation: Your wealthy elderly relative plans to name you as their heir. Should you decline?
  2. You’re a geologist studying a particular active volcano. If it erupted, measurements from the eruption would provide extremely important and valuable data for your research, providing a boost to your career; however, it would also cause mass loss of life and property damage. Should you precommit not to publish any papers using data from such an eruption?
  3. You’re a worker in a high-income country, and a major issue this election season is a proposed protectionist trade reform that would raise the wages of workers like you in domestic industries, but stifle economic opportunities for workers in developing countries. (If you don’t buy the economic assumptions under which that scenario counts as bad, substitute a proposed trade liberalization that would lower the prices of consumer goods in your country, but lead to exploitation of workers in developing countries.) Should you precommit not to take a job in a protected industry (or not to buy the foreign-produced goods), or, if you end up doing so anyway, to buy an ethics offset in the form of a charitable donation?

In all three of these situations, something bad might happen in the future, but you stand to benefit if it does. You can turn down the benefit, but doing so won’t help the people who were harmed by the bad event in the first place. The question is whether, ethically speaking, you should precommit to turn it down.

It would be useful to have a general principle which serves as an answer to this question. At this point it’d be nice to dramatically reveal one, but I kind of already did that at the top of the page. So instead I’ll discuss the two most obvious alternative answers and why I don’t find them satisfactory.

The first alternative answer is an unconditional yes: if you can anticipate a future situation where you would have the chance to benefit from something bad, you should precommit not to take that opportunity. This answer is bad because it leaves free utility lying on the ground. In many cases, it will lead to your pointlessly punishing yourself for something you had no control over, to no one else’s benefit. Obviously this outcome is to be avoided if possible.

The second alternative answer is an unconditional no; it’s always okay to take whatever opportunities come your way as long as you don’t directly cause anyone else to be harmed in the process. This answer is bad because your future action is not the only causal variable in play; other people’s expectation of what you will do in the future may influence their own behavior. If whether or not you stand to benefit from something affects whether that thing will happen—possibly because someone who’s looking out for your interests has some measure of control over it—then it’s ethically obligatory for you to take this into account and, if you determine that the event is bad overall, make sure that you don’t stand to benefit from it.

Even if you personally have little control over the event in question, it’s appropriate to consider not only the precommitment that you’d make as an individual, but the precommitment that you’d make if you were deciding for everyone who is like you in relevant ways—that is, everyone whose position is close enough to yours and who uses a sufficiently similar reasoning process. Otherwise, you end up with the outcome where everyone defects because it seems individually rational for each of them. (At this point I’d normally wave my hands and say “something something timeless decision theory”, but honestly I don’t yet understand the math well enough to know if that’s at all applicable here.)
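
To make that defection dynamic concrete, here’s a toy sketch with a made-up threshold-style causal model and made-up numbers (nothing here is meant to be realistic): no single precommitment moves the probability, but the class’s collective precommitment does.

```python
# Toy model (all numbers invented): the bad event only stays on the table
# while most of the people who would benefit from it still stand to benefit.
N = 100_000          # people in your reference class
harm = 1e9           # total harm if the bad event happens
benefit_each = 1e3   # what each class member gains if it happens

def p_event(num_precommitted):
    """Probability of the event, given how many class members precommitted."""
    return 1.0 if (N - num_precommitted) > N / 2 else 0.0

# Individual reasoning (assuming everyone else defects): your lone
# precommitment doesn't move the probability at all, so defecting looks free.
print(p_event(0) - p_event(1))              # 0.0

# Group reasoning: if everyone like you precommits, the event goes away, and
# the harm avoided dwarfs the total benefit the class forgoes.
print(p_event(0) * harm, N * benefit_each)  # 1000000000.0 100000000
```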

The answer I propose is a compromise between these two positions. If the potential for people like you to benefit from a bad event is a significant causal component of the possibility of that event happening—that is, if the event would be significantly less likely to happen if that potential were gone—then you should remove that potential for yourself, by precommitting to decline the benefit. Note that it has to be a significant causal component; for instance, if society as a whole has any say in whether the event happens, then to the extent that society cares positively about you at all (which it probably does, at least a little), there’s at least that much of a causal component there. But if it’s only that tiny amount, and not a situation where the interests of people like you are the primary driver of the possibility, then feel free to pick that free utility up off the ground.
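
One crude way to cash this rule out, purely for illustration, is as an expected-utility comparison. The should_precommit helper and all the probabilities and payoffs below are hypothetical; the real criterion is the qualitative one above.

```python
# A toy expected-utility reading of the rule, with invented numbers.
# p_with:    probability of the bad event while people like you stand to benefit
# p_without: probability once that potential benefit is removed (i.e., the
#            whole reference class precommits, as in the previous paragraph)

def should_precommit(p_with, p_without, total_harm, your_benefit):
    eu_keep = p_with * (your_benefit - total_harm)   # keep the potential benefit
    eu_decline = p_without * (-total_harm)           # precommit to decline it
    return eu_decline > eu_keep

# If removing the benefit barely moves the probability, don't bother:
print(should_precommit(0.10, 0.10, total_harm=1e9, your_benefit=1e5))  # False

# If the interests of people like you are the main driver, precommit:
print(should_precommit(0.60, 0.05, total_harm=1e9, your_benefit=1e5))  # True
```

The word “significant” in the rule corresponds to the gap between those two probabilities.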

So, with this in mind, how would I resolve the situations at the top of this post?

  1. If you are, in fact, a character in a murder mystery, then definitely decline; not only is it ethically obligatory, but as an added bonus it also makes you less likely to be suspected! In real life, I think you only need to do this if there’s a significant possibility of the relative being murdered for the inheritance. This is a bit of a gray area, since you never really know, but in general I’d say accepting is fine.
  2. No need to make a precommitment here; your research and career have absolutely no causal impact whatsoever on whether the volcano erupts.
  3. Here I think you should precommit not to benefit. The bad trade policy is presumably being proposed because workers/consumers in your country want it, and you yourself fall into that class. You’re similar enough to its other members that you should make the precommitment: if they all did the same, the bad proposal would go away (since nobody would have any political incentive to back it) and the people in developing countries would benefit.

4 thoughts on “Is It Morally Obligatory to Precommit Not to Benefit from Negative-Utility Events?”

  1. “You’re similar enough to other members of it that you should make the precommitment, because if they all did the same, then the bad proposal would go away.”
    This, and the whole “people like you” aspect, seems inconsistent with the rest. If you’re a utilitarian, you’re not looking for the “right” action in some deontological sense, what everyone ought to do; you’re looking for the action that will maximize utility. But your precommitment has little to no effect on whether all the other people like you follow suit; you could easily imagine a situation (many members of the society, a high degree of self-interest on their parts, lack of information about others’ precommitments, etc.) where your individual potential effect is arbitrarily smaller than the expected effect of being made the rich relative’s heir.
    Another way to put it: what “people like you” *should* do isn’t relevant from a utilitarian perspective; only what they are likely to do matters.


