Self-serving utilitarian arguments

Tim is a utilitarian and has dedicated his life to doing good. He works on existential risk by day and runs a hedge fund by night that funnels money to the global poor. Tim calculates that he's on track to save 3 million lives during his lifetime.

One day, Tim gets sick and is rushed to hospital. The doctors inform Tim that there's a 10% chance he'll die if he doesn't drink exactly 10ml of medicine X (which makes Tim start to suspect he's in some kind of philosophical thought experiment). The hospital only has 10ml of medicine X. To make matters worse, there are ten other people in the lobby who will each die with certainty if they don't receive 1ml of medicine X. Should Tim take the medicine himself or let the ten other people have it?

Tim knows that the average person saves around 3 lives over the course of their lifetime, whereas he has 1 million times this impact. And if he dies, there isn't some Tim-like figure waiting in the wings who will do the same amount of good. So a 10% chance that he dies is a loss of 300,000 lives in expectation. If the ten others die, that's only a loss of 40 lives in expectation (the original 10 plus the 3 they will each save). So surely, according to utilitarianism, Tim ought to take the medicine and let the other ten people die.[1]
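To make the arithmetic explicit, here's the comparison using the numbers above:

$$
\underbrace{0.1 \times 3{,}000{,}000}_{\text{expected lives lost if Tim dies}} = 300{,}000
\qquad \text{vs.} \qquad
\underbrace{10 \times (1 + 3)}_{\text{expected lives lost if the ten die}} = 40
$$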

Utilitarianism values someone's continued existence and quality of life by how much value it produces. Most people's lives are of similar intrinsic value since humans have short lives and a pretty limited range of wellbeing. But the indirect value produced by each person can vary a lot because the range of impacts people can have on other lives is pretty huge. Imagine each person has a number above their head representing how much good they are expected to produce if they continue to live. For Tim this might be some large positive number, while for Stalin it might be some large negative number.

In general, this means that utilitarianism can place radically different values on extending or improving different people's lives. Not because it thinks some people are vastly more intrinsically valuable than others, but because it thinks the people Tim will save are no less intrinsically valuable than the people sitting in the lobby.

This discrepancy can be used to justify self-serving utilitarian arguments.[2] A self-serving utilitarian argument is an argument of the form "I expect to do a lot of good over the course of my life. So a small extension of my life or improvement in my productivity is extremely valuable: much more valuable than it is for the average person. Therefore I should be willing to prioritize myself above others or to violate commonsense morality if I expect it to extend my life or improve my productivity."

There's something inherently icky and objectionable about self-serving utilitarian arguments. If Tim's numbers are correct, there would have to be over 75,000 people dying in the lobby before Tim should save them rather than accept a 10% chance of death. Sounds pretty objectionable.
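That threshold falls out of the same arithmetic: each person in the lobby counts for 4 expected lives (their own plus the 3 they'd each save), so with $N$ people in the lobby, Tim should give up the medicine only when

$$
4N > 0.1 \times 3{,}000{,}000 = 300{,}000 \;\Longrightarrow\; N > 75{,}000.
$$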

Discrepancies in the amount of indirect good people do can also mandate self-sacrifice. If you have to choose between extending your own life and extending the life of someone who will do much more good than you, the same utilitarian reasoning demands that you sacrifice yourself. But self-serving arguments are more concerning than self-sacrificing arguments. A purely self-interested person could make a utilitarian self-serving argument in order to do immoral things or prioritize their own wellbeing above that of others - potentially by a significant amount. As movies like to remind us, utilitarianism is a dangerous tool in the hands of well-intentioned people who reason poorly and ill-intentioned people who reason well.

Utilitarians might object that this problem only arises because we've engaged in some pretty naive utilitarian reasoning and that, for non-naive utilitarians, self-serving arguments will rarely be justified.

There are many reasons for non-naive utilitarians to frown on self-serving arguments. Using self-serving arguments will harm your reputation and the reputation of utilitarianism as a whole. They may contribute to a bad character, which we have utilitarian reasons to avoid. They flout longstanding social norms that we have utilitarian reasons to respect and uphold. They involve using utilitarianism as a decision procedure rather than a criterion of rightness, which is often a bad idea. And they don't show sufficient deference to moral theories that object to this behavior and that we ought to give some weight to.

While I agree that non-naive utilitarians will encounter good self-serving arguments much more rarely than naive utilitarians, I'm not convinced the non-naive utilitarian can caveat them out of existence. Sometimes acting in accordance with self-serving arguments seems morally justified. In fact, failing to act in accordance with a good self-serving utilitarian argument can be a sign of bad character.

To see why, imagine you've been tasked with delivering \$10M in a sealed suitcase to the bank. The money is going to be used to buy medicine for people in developing countries. You can either cycle a dangerous and tiring route to the bank or take a comfortable, secure car for \$50, which is the only other money you have on you. If you cycle there's a 10% chance the money will be lost (let's suppose you have to cycle next to a convenient pit of fire that would completely destroy it) so you decide to take the car. At that moment a homeless person approaches asking if you can spare some money for food. What do you do?

In this scenario you have a self-interested reason to prefer to take the car since it's more comfortable than cycling. And the comfort you get from taking the car is clearly less than the benefit the homeless person will get from buying food. But if you give the homeless person \$50 you're risking a 10% chance that people in developing countries will miss out on \$10M. Wouldn't this be a horribly reckless thing to do? Shouldn't you consider yourself a steward of the good this money can do? Wouldn't it be a pure pretense of niceness to give the person \$50 in these circumstances? Wouldn't you feel an aching sense of moral shame if you gave the person \$50 only to have the \$10M tumble into the fire pit? I certainly would.
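In expected-value terms, the asymmetry is stark: handing over the \$50 trades a certain \$50 benefit to the homeless person against an expected loss of

$$
0.1 \times \$10{,}000{,}000 = \$1{,}000{,}000
$$

for the people the medicine would have reached.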

In this case, it seems like you would be justified in using your \$50 to take the car to the bank. This happens to be the thing that's in your self-interest, but that's not the primary reason you're doing it: you're doing it because you know that you have a responsibility to protect the good that will come from the money in the suitcase. So couldn't someone who is going to give away \$10M in future earnings be justified in spending money to eliminate a 10% chance of their own death, even if that money could eliminate a higher chance of death for someone else?

You might want to argue that there's a big difference between sacrificing the happiness of the homeless person to ensure you can do more good in the future and sacrificing ten people's lives to ensure you can do more good in the future. But the difference seems to be one of degree rather than kind. If we accept that you should use the money to take the comfortable car, we've accepted that it's sometimes okay for a person to take an action that's in their self-interest and that sacrifices the self-interest of others in order to protect some amount of good they will do in the future. We've accepted a self-serving utilitarian argument.

It's worth noting that the ability to make credible self-serving arguments doesn't come cheap. If someone justifies a \$50 car ride because they plan to give away \$10M of their money, the argument only works if they're actually going to give away that money. But it's hard to distinguish the people making good faith arguments for self-serving actions (buying a safer car, renting an expensive office space, etc.) from the people making bad faith arguments for the same actions. This is bad news for utilitarianism. You don't want to be the moral theory that people appeal to just so they can cut in line at the ER.

Of course, we all sometimes act in ways that are purely in our self-interest. And some of those actions are going to be unethical. When I buy a nice pair of speakers instead of giving the money to the poor, I could try to give a long-winded utilitarian justification of how the speakers make me more productive at my job, but the truth is that I wanted the speakers and so I bought them. I was mostly just being selfish.

When it comes to self-interested actions, a self-serving utilitarian argument made in bad faith is clearly worse than a self-serving utilitarian argument made in good faith. But a self-serving utilitarian argument made in bad faith is also worse than no argument at all. If we do something wrong for self-interested reasons but are honest about it, we've done something bad. If we also try to give a self-serving utilitarian argument to avoid taking responsibility for it, we've done something worse.

So it's important to be able to distinguish between good faith and bad faith self-serving arguments. If we assume that the self-serving arguments being made are equally plausible, there are still several ways that utilitarians can distinguish themselves from those making self-serving arguments in bad faith:

The first is simply through prior behavior. Has a person directed their life towards doing good? If utilitarianism demands self-sacrifice, have they done it? If so, it's more likely that they're arguing in good faith. If someone only pulls out utilitarian reasoning when it suits them, we should be skeptical.

The second is through pre-commitments. If someone has pre-committed to do good — if they've taken a public pledge, put money into a donor-advised fund, or committed to having a high-impact career — we can be more confident they are arguing in good faith. It's easy to claim that you're going to do a lot of good somewhere down the line. But the person who pre-commits is showing they're willing to pay the piper.

The third is through the use of independent adjudication. We can't really trust ourselves to evaluate self-serving arguments, since we're clearly not impartial about the outcome. But we can ask a morally upstanding friend or colleague to decide for us. Appealing to independent adjudication is a good sign that someone is arguing in good faith. (This kind of independent adjudication also reduces the likelihood that utilitarians will underinvest in their own wellbeing, e.g. because they're afraid of being seen as selfish.)

Self-serving utilitarian arguments are easy to abuse and we clearly have reason to be skeptical of them. But if they can be morally justified, even if rarely, then we might not want to reject them out of hand. We do want some way of distinguishing good faith from bad faith self-serving arguments. If a well-reasoned self-serving argument is coming from a person that has a good track record of altruistic behavior, has pre-committed to doing good in the future, and has deferred to independent adjudicators about what they ought to do, I think we can be pretty confident it's being made in good faith.


[1] Of course, those people save three lives, and the people they save also save three lives, and so on. But the same is true of the 3 million people Tim saves. I'm just going to assume that saving 3 million lives over a short period of time is better than saving far fewer lives over that period. ↩︎

[2] I wanted to call them "gross utilitarian arguments" but if there's anything the repugnant conclusion has taught me, it's that there can be downstream costs to putting a negative judgment in a name. It's kind of like naming your kid "Bad Pete". ↩︎