People ‘offset’ bad actions in various ways. The most salient example of offsetting is probably carbon offsetting, where we pay a company to reduce the carbon in the atmosphere by roughly the same amount that we put in. But there are arguably more mundane examples of acts that look a lot like offsetting (“I know I promised I’d make it to your game tonight, but I have to work late. I’ll take you out to dinner to make up for it!”). Let’s call an action intended to offset immoral behavior ‘moral offsetting’. In this post I want to ask a couple of questions: first, what is moral offsetting? Second, is it something we should be in favor of?
What is moral offsetting? Here’s one natural account: moral offsetting is making up for a harm by performing a compensatory action of equal or greater moral value. It presumably has to be an action that you wouldn’t have taken otherwise: it’s not moral offsetting if I don’t increase my carbon donations, or if I was already going to take you to dinner, because what matters is what would have happened otherwise. So the idea is that your offsetting action genuinely makes a difference: even if there’s a possible world where you do the right thing and perform the offsetting action anyway, the offsetting action isn’t something you actually will do unless you behave immorally, just as you normally won’t give to carbon offsetting organizations if you’re not going to be emitting any carbon. So we have three worlds that might be brought about:
GOOD: I don’t work late and make it to the game tonight, fulfilling my promise.
OFFSET: I work late and miss the game, but take you out to dinner.
BAD: I work late and miss the game, and don’t take you out to dinner.
Sometimes when we offset we are trying to prevent a harm from happening at all. I think some people believe this is what happens when we carbon offset, though I actually suspect that view of carbon offsetting is wrong. So to take a different example, suppose I take one of your yoghurts from the fridge, knowing that I can replace it with the same type of yoghurt before you get home. Taking the yoghurt would have harmed you if I hadn’t ‘offset’ my action by replacing it with one from the store. Here I offset to prevent a harm from happening at all.
In other cases of offsetting we are letting a harm happen, but are trying to compensate for it by giving something of equal or greater value to the person or people harmed. If you would much rather I take you out to dinner and break my promise to see your game, then you might be quite happy with my offer. I’m better off because I’d rather work late and buy you dinner than satisfy my promise, and you are better off because you’d also prefer this, even though you’d be pretty annoyed if I broke my promise without offering to take you to dinner.
But can we morally offset harms in cases where a harm has or will occur, and where we cannot compensate those who are harmed by it? Suppose, for example, I am deciding whether to eat a steak or not. I believe that eating a steak is wrong because it incentivizes people to bring into existence future cows that will have bad lives. My eating the steak doesn’t harm the cow that the steak comes from – they’re already dead – but it does, in expectation, harm some future cow (of course, given elasticity of demand, my particular steak may have no impact). But even if some cow is brought into existence as a result of my eating the steak, it will be virtually impossible for me to help that cow. How can I pick that cow out among all of the other cows brought into existence? But perhaps I can still morally offset my eating of the steak. Imagine I can choose between the following three worlds:
GOOD: I don’t eat the steak.
OFFSET: I eat the steak and donate \$50 (that I would have otherwise spent on new sneakers) to an effective animal charity.
BAD: I eat the steak and don’t give anything to charity.
Let’s suppose that the expected fraction of a cow brought into existence when I eat a steak amounts to 10 units of expected harm in the world, and that my \$50 creates 50 units of expected wellbeing in the world: more than enough to compensate for the harm. Since the overall wellbeing of the world is at least as good in OFFSET as it is in GOOD (the cosmic moral balance has been restored!), should we not conclude that if bringing about GOOD is permissible then bringing about OFFSET is also permissible? This at least seems plausible on harm-based accounts of moral permissibility.
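The tally here can be made explicit with a toy calculation. This is only a sketch using the illustrative numbers above; the variable names, and the assumption that harms and benefits can be summed as a single quantity, are simplifications introduced for the sketch rather than anything the argument depends on.

```python
# Toy tally of the three steak worlds, using the post's illustrative numbers.
STEAK_HARM = 10      # expected harm from the extra demand my steak creates
DONATION_BENEFIT = 50  # expected wellbeing created by the $50 donation

worlds = {
    "GOOD":   0,                                # no steak, no donation
    "OFFSET": -STEAK_HARM + DONATION_BENEFIT,   # steak, plus donation
    "BAD":    -STEAK_HARM,                      # steak, no donation
}

# Rank the worlds by net expected wellbeing.
for name, net in sorted(worlds.items(), key=lambda kv: -kv[1]):
    print(f"{name}: net expected wellbeing = {net:+d}")
```

On this crude harm-based tally, OFFSET comes out at +40, GOOD at 0, and BAD at −10, which is just the point in the text: OFFSET is at least as good as GOOD whenever the offsetting benefit exceeds the harm.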
Cases like this one involve us forcing a trade of harms between distinct agents: in the case above we are forcing a harm on an expected cow in order to give increased wellbeing or reduced harms to some (presumably different) set of actual/expected animals. In doing so, we make the world a better place overall. This might not be acceptable on most justice-based accounts of ethics, but it at least seems plausible that such forced trades are permissible on harm-based accounts.
But if this is why we think that it’s acceptable to morally offset, then it’s not clear that we should care about the similarity of the two agents we’re forcing to trade harms. Suppose that I could create 50 units of expected wellbeing by donating just \$30 to some charity that helps humans rather than animals. If we think that the reason morally offsetting was acceptable in the case above is that OFFSET is a better world, wellbeing-wise, than GOOD, then surely this would mean that we are permitted to donate \$30 to the human charity rather than \$50 to the animal charity. After all, why does it matter whether I force a trade between an expected cow and expected/actual animals, or between an expected cow and expected/actual humans? Superficial resemblances between those who are harmed and those who are benefited seem morally irrelevant in cases like this. As long as we make very sure that we create at least as much good in the world as we do harm, the act of eating the steak and offsetting is morally equal to or better than the act of not eating the steak, on the harm-based account.
There are, of course, a few objections that one can level against the offsetting view, even if we accept a harm-based account of moral permissibility. The main objection I foresee people raising is that in cases where we can morally offset, we also have an additional choice of world available to us – one where we perform the good action and the offsetting action. For example, in the promise case I could have brought about the following world:
BEST: I make it to the game and take you out for dinner.
And in the steak case I could have brought about the following world:
BEST: I don’t eat the steak and I donate the money to charity.
Even if we think that OFFSET is at least as good as GOOD, it’s obvious that BEST will always be better than OFFSET and so, at least according to maximizing views, I should always bring about BEST rather than OFFSET. And since bringing about BEST means not acting immorally, I’m never permitted to act immorally and then offset.
There are a couple of things that we can say in response to this. It’s worth pointing out that – on this view – we’re also never permitted to bring about GOOD. That is, we’re never permitted to just keep promises and be vegetarians, because we are obligated to do as much good as we can in addition to this. This level of demandingness is consistent with maximizing views of course, but it still means that GOOD is not more morally permissible on these views than OFFSET is (and in many cases GOOD may be much worse than OFFSET).
What’s more, it’s not clear that BEST is really what we should be comparing either GOOD or OFFSET to. As I said at the beginning, moral offsetting involves undertaking an action that you wouldn’t undertake if it weren’t for the fact that you wanted to offset your immoral action. So the fact that there’s an even better world where you offset despite having done nothing wrong seems like an irrelevant counterfactual. This essentially boils down to the debate between actualism and possibilism in ethics. Imagine you are trying to decide whether to go to the movies with your friends or not. You know that you ought to finish grading papers tonight, and that if you go to the movies then you are sure you will get back late and fall asleep without grading the papers. But your friends will be mildly disappointed if you don’t go to the movies. The actualist says: don’t go to the movies, because if you do that then you won’t grade the papers, and a ‘grading, no movies’ world is morally preferable to a ‘movies, no grading’ world. The possibilist says: but it’s at least possible for you to do both, and since a ‘movies, grading’ world is better than either of these worlds, you ought to go to the movies! If, like me, you find the possibilist’s position implausible, then you also have reason to doubt their appeal to BEST as an argument against moral offsetting.
Finally, a large worry with the moral offsetting view is that it could be used to justify any degree of immoral action. Couldn’t we use this argument to justify stealing, torture, or any other wicked act, as long as we were willing to pay a very high price in moral compensation? At first glance, there’s no obvious reason why the moral offsetting argument shouldn’t extend to highly immoral actions, and I think that those who defend harm-based views in ethics should find that troubling.
There are a few different things that the harm-based ethicist could say in response to this, however. First, they could point out that as the immorality of the action increases, it becomes far less likely that performing this action and morally offsetting is the best option available, even among those options that actualists would deem morally relevant. Second, it is very harmful to undermine social norms against behaving immorally: imagine how much worse it would be to live in a world where misbehaving and then compensating for it was considered acceptable. Third, it is – in expectation – bad to become the kind of person who offsets their moral harms. Such a person will usually have a much worse expected impact on the world than someone who strives to be as moral as they can be.
I think that these are compelling reasons to think that, in the actual world, we are – at best – morally permitted to offset trivial immoral actions, and that more serious immoral actions are almost never the sorts of things we can morally offset. But I also think that the fact that these arguments all depend on contingent features of the world should concern those who defend harm-based views in ethics. Such views seem to allow that it could, at least in principle, be better to commit a gravely immoral action and then offset it than to refrain from committing it in the first place. I imagine that many of us, if presented with a case of this sort, would be inclined to reject any moral theory that entailed such a conclusion.