Absurd Being

A place to take a moment to reflect on what it all means

A Few Problems with Utilitarianism


In this article I am going to outline a few problems with utilitarianism that occurred to me after a discussion with a friend. Of course, there is a vast philosophical literature on this subject but I am going to restrict myself to a few comments that relate directly to what my utilitarian friend had to say on the topic.

I will define “utilitarianism” as the ethical principle that one ought to act so as to maximise the happiness (or minimise the suffering) of sentient or ‘semi-sentient’ beings (i.e. including most animals).

Some of the arguments you will encounter might strike you as somewhat pedantic (calculating happiness, giving away half of your salary to those in greater need, etc.), and it is a perfectly reasonable response to claim that utilitarianism is useful as a guide but ought not to be taken to a literal extreme. Indeed, that is a position I would also endorse. I have addressed this article to the moral agent who takes utilitarianism seriously and believes happiness/suffering really is the only, or at least the best, way to assess moral problems. Note that this category also includes the person who claims to be using utilitarianism as a guide only but then turns around and advocates a suspicious ethical position on purely utilitarian grounds, such as claiming it is unethical to bring children into the world, or arguing that we ought to push one fat man in front of a trolley to save six others (why were they standing on the railway tracks in the first place?)[1].


The Problem of Calculation

One of the most obvious problems with utilitarianism is how we are supposed to actually go about calculating happiness/suffering. Clearly, we cannot always know which action will produce the most happiness and this is a significant problem for a theory which is supposed to tell us which action from a range of actions is the morally correct one based on total happiness/suffering.

But this is just a lack of information which can be overcome – imagine some future technology which is able to calculate happiness and suffering across multiple options and recommend the ‘best’ one.

Let’s take the practical problems first. I maintain that such a technology is impossible, i.e. not just a ‘temporary’ limitation imposed by our current technological capabilities which may one day be overcome. How can I be so sure of this? No technology could ever know the exact happiness of even a single individual, let alone all of the individuals affected by a moral decision. You might imagine this is just short-sightedness on my part. Happiness is, so the naturalist theory goes, nothing more than a chemical/hormonal state of the brain. Raising the serotonin levels in the brain makes one feel good. We know this. Couldn’t a sufficiently advanced culture use this to calculate how happy a person feels at any one time? No. No two brains are the same. So, even (to stick with my very simplified example) the same levels of serotonin in the brains of two people would never translate to exactly the same levels of happiness. But wouldn’t an advanced robot (let’s call it the H-2000) be able to compensate for this? No, it wouldn’t. How could it possibly know how much happiness a particular amount of serotonin in the brain translated to in person A if there is no purely objective benchmark to compare it to? It would simply have to ask them how happy they felt at any given moment. This kind of subjective reporting is notoriously flawed and hardly likely to produce reliable data even if it were practical to ask everyone affected by an ethical decision how the decision would make them feel.

And it’s not even as simple as that because happiness isn’t directly correlated with serotonin levels. The H-2000 would have to know the “happiness index” associated with every single brain state for every single person affected by the moral decision. Clearly this is impossible, but even if it weren’t, we would run up against the same problem: there is no way to judge this in a completely objective fashion. We must simply ask them how happy they are. In other words, the H-2000, the advanced robot specifically designed to calculate the happiness generated by a particular action, can do no better than a human with a clipboard can today.
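To make the point concrete, here is a minimal sketch of the collapse, assuming (purely for illustration) that the H-2000 measures something like a serotonin level and can also run a survey; the names serotonin_level and self_report are hypothetical stand-ins, not anything the argument commits to:

```python
# Illustrative sketch only: serotonin_level and self_report are hypothetical
# stand-ins for whatever physical measurement and survey the H-2000 might use.

def h2000_happiness(serotonin_level: float, self_report: float) -> float:
    """Estimate how happy a person feels right now."""
    # There is no person-independent conversion from a brain state to felt
    # happiness, so any 'calibration' has to come from the subject's own report.
    _ = serotonin_level          # measured, but uninterpretable on its own
    return self_report           # i.e. no better than a clipboard survey
```

However sophisticated the scan, the returned value is still just whatever the person told us, which is exactly the clipboard problem again.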

There is a second reason why the H-2000 is impossible in a practical sense and this is related to the fact that utilitarianism is a consequentialist theory. Consequences, or intended consequences, are the only things that matter in utilitarianism, but consequences ‘exist’ in the future. This means that in order to calculate the total happiness produced by an action, we need to predict the future. Considering that we are dealing with human beings, not billiard balls or other inanimate objects, this presents an insurmountable problem.

Materialism tells us that we are nothing more than atoms, and all atoms are bound by mechanical, physical laws. If materialism is true, we are fully determined and the future is, in theory at least, completely predictable. On the other hand, if materialism isn’t true, we aren’t determined and the future is absolutely unpredictable, no matter how sophisticated our technology is. I believe the latter is true (i.e. materialism is false and we aren’t determined) and I have written about this at length in other articles accessible on this website. This means that future happiness cannot be calculated.

Those are the problems that make the H-2000 impossible as a practical, real-world machine, but even if we grant that it is physically possible, the problems don’t stop there. Since we are calculating the happiness/suffering that results as a consequence of certain acts, we must delineate the limits of our inquiry. Where do we draw the line beyond which further consequences no longer matter? For example, imagine a teenager is debating whether or not to steal a ring for his girlfriend. If he steals it, his girlfriend will be happy. She will then be in a good mood when she goes home and may smile at all the people she passes. Do we factor in their increased happiness? But her parents might ask her where she got the ring and take it from her, suspecting it was stolen. This would increase the suffering of the girl and her family. Do we include all of that? But perhaps, a year later, she might be deciding between two boys to date: one a dodgy character, the other a stand-up young fellow. Remembering the problems she had with her thieving boyfriend of a year ago, she might then choose the nice guy. Perhaps they live happy lives together; do we credit all of the happiness they experience to our teenager’s original decision? It might sound like I’m being ridiculous, but this is a valid concern. Where do you draw the boundary beyond which the consequences of the action under consideration no longer matter?
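If it helps to see how much hangs on that boundary, here is a toy tally of the ring example; every utility number below is invented purely for illustration:

```python
# Made-up happiness deltas for the ring-theft story, in the order they unfold.
consequences = [
    ("girlfriend delighted with the ring",           +5),
    ("strangers she smiles at on the way home",      +2),
    ("parents confiscate the ring, family upset",   -10),
    ("a year later she picks the decent boyfriend",  +4),
    ("their happy life together",                   +40),
]

def net_happiness(horizon: int) -> int:
    """Total happiness counting only the first `horizon` consequences."""
    return sum(delta for _, delta in consequences[:horizon])

for horizon in range(1, len(consequences) + 1):
    verdict = "steal it" if net_happiness(horizon) > 0 else "don't steal it"
    print(f"horizon={horizon}: net {net_happiness(horizon):+d} -> {verdict}")
# The 'right' answer flips back and forth depending on where we stop counting.
```

The verdict depends entirely on where the horizon is set, and utilitarianism itself gives us no principled way to set it.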

Perhaps a more relatable ethical dilemma (which isn’t a dilemma for non-utilitarians) is forcing your daughter to go to the dentist. Such immediate suffering can only be justified by appeal to the future. If she doesn’t go, nothing may happen for years, but when she is in her thirties perhaps she will suddenly need several root canals, causing significant physical and financial suffering. If we don’t take these consequences into consideration, consequences which won’t arise for decades, forcing our children to go to the dentist seems ethically unjustifiable. The bigger problem is that any answer you give to the question of where to draw this boundary won’t be a utilitarian one (it won’t be concerned with maximising happiness or minimising suffering), meaning that you have admitted utilitarianism is insufficient as an ethical theory.


The Problem of Happiness

Another objection commonly raised against ethical theories like utilitarianism, which try to derive a formula we can follow to determine the ethically correct decision in any circumstance, is that it is trivially easy to come up with situations in which the theory gives plainly incorrect answers.

Imagine there is a fat kid being teased by a group of boys, all of whom are taking great pleasure in the act. If we were to simply tally the happiness/suffering of this situation, it is easy to see that the happiness of the teasers could quite possibly outweigh the suffering of the teased. If the happiness of the, say, three boys doing the teasing doesn’t outweigh it on its own, simply add as many delighted spectators as you need until it does. According to utilitarianism, teasing, at least in those cases where there are enough people getting pleasure from it, is ethically acceptable. Clearly this is false.
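To make the arithmetic explicit, here is a toy version of the tally; all of the numbers are invented:

```python
# Invented utility figures for the teasing example.
VICTIM_SUFFERING = -30        # the teased boy's suffering
TEASER_PLEASURE = 6           # pleasure per teasing boy
SPECTATOR_PLEASURE = 2        # pleasure per delighted onlooker

def net_utility(teasers: int, spectators: int) -> int:
    """Naive utilitarian tally: just add everyone's happiness and suffering."""
    return VICTIM_SUFFERING + teasers * TEASER_PLEASURE + spectators * SPECTATOR_PLEASURE

print(net_utility(teasers=3, spectators=0))    # -12: teasing comes out 'wrong'... for now
print(net_utility(teasers=3, spectators=10))   #  +8: enough spectators and it comes out 'right'
```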

Of course, you could try to introduce clauses to rectify this, perhaps by saying that with the addition of each spectator (and the addition of his or her happiness to the total) the fat boy’s suffering increases by the same amount. However, attempts like this are obviously arbitrary and really only serve to show that utilitarianism on its own doesn’t do the job.

Unconvinced? What if a benevolent, but utilitarian, alien race descends on Earth and announces that they need the atoms in all the human bodies on the planet to save a species on another planet, in effect destroying not only all individual humans but the human race as well. The species they want to save has created a global society whose members live in perfect harmony with each other, generating almost perfect happiness (that is, happiness with little to no suffering), certainly much more net happiness than humans are able to produce at a global level. Should we allow them to take our atoms? Utilitarianism says yes, but surely this can’t be the ethical decision.


The Onerous Problem

An ethics that is blind to anything except happiness and suffering also turns out to be an ethics that is incredibly demanding. This need not be a bad thing (perhaps ethics should be demanding), but utilitarianism is so onerous that I believe we are justified in wondering whether what it prescribes is really ethical after all.

Imagine there are three people who each need an organ transplant. One needs a liver, another a kidney, and the third a heart. Now, you are the only perfect match for all three of these individuals and have been asked to sacrifice your life for theirs. What does ethics dictate here?

Well, utilitarianism is clear. You ought to sacrifice your life in a heartbeat. The happiness of one life traded for the happiness of three. The suffering of one family rather than the suffering of three. It’s a no-brainer. If you are still wavering, imagine the three sick people are all leaders in their respective fields who will provide untold amounts of happiness to the world. One is a cancer researcher, one a concert pianist, one a… philosopher(?), well, you get the picture.

Another example of the same ilk, one you won’t have to engage your imagination to envisage because it almost certainly applies to your life right now, concerns the use of your salary. How much of your salary do you absolutely need to secure the basics of life? All of it? Probably not. To pay for food, clothing and shelter, you probably only really need 50% of your current salary (if you need more, you can almost certainly get this figure down). But why should you give the rest away? Because utilitarianism demands it of you.

How much happiness does that 50% of your salary that you are currently spending on things like a new couch to replace the one that was slightly worn, that iPad you don’t really need, a nice meal a couple of times a week, etc., actually secure you? Let’s say 100 units of happiness: 100H per week. Is that the most happiness that 50% of your salary could procure in the world? Clearly not. What if you used that money to rent a house for, and feed, ten homeless people in your city? How happy would they be to be inside and warm with food in their bellies? Without doubt, that would generate more than your 100H.
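For the strict utilitarian keeping score, the comparison looks something like this; the figures are made up, of course, and the per-person number is pure assumption:

```python
# Invented figures in the article's 'H' units per week.
your_discretionary_happiness = 100      # couch, iPad, meals out: roughly 100H
happiness_per_housed_person = 25        # warmth, shelter, food (assumed)
people_you_could_house = 10

alternative_happiness = happiness_per_housed_person * people_you_could_house   # 250H

# Strict utilitarianism says the 'ethical' use of that half of your salary is
# simply whichever option produces the bigger number -- every single week.
print(alternative_happiness > your_discretionary_happiness)   # True: give it away
```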

But they might be ungrateful, you say? No chance. You (and your H-2000) carefully screened them first and only chose people whom you knew would genuinely appreciate your offer. If you truly think utilitarianism provides accurate ethical injunctions, you need to carefully think about how much happiness every dollar you spend is currently generating.

But it doesn’t end there. What about your time? How much happiness is your watching TV generating? Wouldn’t you be able to generate much more happiness by working at the local soup kitchen or volunteering at a rest home? What about your job? How much happiness do you bring to others? Would you be able to increase this if you changed jobs? If yes (and how many of us couldn’t be doing something that brought much more happiness into the world than we currently do), then utilitarianism dictates that you ought to change jobs. Perhaps a perfect utilitarian world would be one filled with cancer researchers.


The Problem of Happiness 2

Is maximising happiness / minimising suffering the only thing that matters when weighing up ethical options? There is a strong intuition that most of us want to be happy, and Aristotle says, quite rightly I think, that happiness is the ultimate good everybody aims at. No matter what you do, you do it because you imagine it will make you happy at some point in the future (or at least happier than you are now) or reduce your suffering. As a broad-brush summary of our individual aims that is fine, but can we ground an ethical theory on it?

Any theory that reduces something as complicated and situation-specific as human morality to a single dimension is just too simplistic to be of much use. Any parent can hardly have failed to notice how, at times, the fair apportioning of something, a dessert perhaps, can produce an extreme (although unjustifiable) amount of suffering in the child who wanted the lion’s share, while the other party is too young to be aware of how their rights have been transgressed and so adds little to the total happiness of the situation. Would you follow utilitarian rules in this situation, teaching your child it is okay to screw others over as long as they don’t know and you get a certain amount of happiness from your deception? Of course not. This is because we value things other than happiness; in this case, honesty.

What if you recognised the utilitarian dilemma that your daughter would only get a little more happiness from a birthday present (you’re quite well-off and she lives a comfortable life compared to most of her classmates) whereas one of her less affluent friends from school would get much more happiness from the same gift? Should you bow to utilitarian demands and give your daughter’s gift to her friend instead (perhaps behind your daughter’s back in order to prevent her contributing her suffering to the situation)? Of course not. We would instantly recognise a parent who did this as a bad parent, even if their actions increased the happiness in the world.

We can think of innumerable examples where various virtues dictate certain actions that a strict utilitarian analysis wouldn’t be able to fathom. Happiness/suffering alone (or any other reductive criterion, for that matter) is simply incapable of providing a formula that satisfies our ethical expectations.

The Problem of Materialism

This is a problem neither unique to, nor inherent in, utilitarianism; rather, it affects all ethical theories if you also believe materialism is true. If you subscribe to materialism (the overwhelmingly popular position these days), it is very difficult to see how you can also find any moral agents in the world.

If materialism is true, then everything is matter (including humans). As we know, matter is inert, in the sense that it doesn’t think, doesn’t have opinions, and certainly doesn’t have free will. In fact, on this scientific worldview, there is absolutely no difference between humans and other animals, or at least the difference is nothing more profound than the difference between, say, a chimp and a mouse. Everything we do, like the chimp and the mouse, has been dictated by evolution and determined by natural, mechanical laws. We may think we are above nature in some way but this is an illusion for the materialist; our instincts have been refined by evolution but they are instincts nonetheless.

Now no one holds a chimpanzee morally responsible for its actions. If you go to the zoo and one chimpanzee steals a banana from another, you don’t ‘blame’ the offender or declare that he has acted immorally. Why not? Because animals aren’t moral agents. This is trivial. If materialism is true and all humans are also just animals, how can we be moral agents?

This is easy. Humans have reason and this makes us different from animals.

Okay, but what is reason? Reason is nothing more than specific kinds of thoughts. What are thoughts, then? Materialism tells us they are the firing of certain clumps of neurons in certain ways. And what are neurons? Clumps of atoms. And we all know that atoms are just nature’s billiard balls. No atom ‘thinks’, no atom has free will, and so no conglomeration of them has these things either. All that exists are mindless electrochemical signals being triggered by mindless causes in a purely physical system. Congratulations. You have just described what happens in every animal with a brain, including humans, chimpanzees, and mice. If we don’t hold mice morally responsible for their actions, there is no rational basis for holding humans responsible for theirs.


The Problem of Morality

There is an assumption that has been lurking in the background which I now want to bring to light. We have been discussing one ethical theory without even asking the more fundamental question, “Is there even such a thing as an absolute, universal morality?” Can we, in good faith, say, “You ought to do xyz” as if it is a universal law?

I suspect there isn’t and we can’t.

If you want to lift an ethical theory above mere human whim and ground it in something more… absolute, you have only two options, as I see it. First, you can believe in a transcendent deity who dictates ethics from on high; or second, you can insist the ethical principles you have discovered are intrinsic to the universe in some way.

There are significant problems with the first option, the most obvious of which is the fact that your ethics are now grounded in faith, which is hardly the absolute guarantor you were seeking. Another problem here is whether God’s injunctions are good because God says they are (in which case they are arbitrary; i.e. if God says it is right to murder apostates or stone homosexuals to death then so be it) or whether they are good for some other reason, in which case go to option two.

The second option claims ethical principles are somehow intrinsic to the universe but it is hard to see what this could possibly mean. Is the universe (all the electrons and quarks in existence?) the kind of thing that could prefer one ethical decision over another? It doesn’t seem so. And even a cursory glance at the immense amounts of suffering all living creatures in just our tiny corner of the universe undergo for purely natural reasons seems to stop this argument dead in its tracks.

It’s at this stage that we have to start wondering whether there are any absolute moral truths to discover at all.

So, if you’re not a moral realist, then you’re a moral relativist. You believe that anything goes and no ethical decision is ‘better’ or ‘worse’ than any other.

Not quite. I do deny that there are moral truths out there for us to discover but this doesn’t mean that we can’t make our own moral values. Indeed, I would argue that that is what we are doing and it’s the only way morality comes into the world.

The instinct is to recoil from this for two reasons. First, it sounds like relativism. How can any moral position be better than another if there are no human-independent criteria? Second, how can we even have morality if the “right” thing is right by mere human fiat?

Let me take the second objection first. The idea that morality cannot be decided upon by sentient creatures is both false and pernicious. It’s false because there is no one else to do it and pernicious because if we don’t take responsibility for our own moral values, we become susceptible to the moral values of others, others who have “been told” them by God, for example. In this day and age, I hardly need spell out the insanity that can follow from this. Instead of asking how we can have morality if we are making it all up, we would do better to ask how we can have morality if we refuse to make it all up.

How does this not lead to relativism? Saying we are collectively making morality up as we go is not the same as saying all codes of conduct or sets of moral values are equally valid. Nor is it saying we can’t have rational, sensible discussions with others in the community in which we put forward our ethical positions and give reasons to back them up. Agreement on moral values can be reached without us pretending that we aren’t authoring them.

How does this relate to utilitarianism? Only indirectly, really. It’s less a criticism of utilitarianism and more a criticism of anyone who thinks utilitarianism (or any other ethical theory) is striving for some absolute, human-independent moral truth.


Conclusion

In this article, I have tried to illustrate a few problems with one specific version of utilitarianism. I have not tried to be exhaustive nor have I endeavoured to be purely original; many of the arguments I have offered here are common fodder in philosophy classes the world over.

The bottom line is that utilitarianism, like most ethical theories (one notable exception may be virtue ethics), is too simplistic to account for the complexities that real human situations throw our way. Utilitarianism attempts to give an ethical formula which is tidy and easy to apply, but to the degree that it succeeds in doing this, it relinquishes the flexibility any good ethical theory needs to handle real-world problems. Utilitarianism’s goal is to provide an objective formula you can apply without having to think too much or engage your ethical muscles. Unfortunately, it delivers just that: a theory that works better the less you think about it.



[1] This refers to a famous set of ethical thought experiments designed by Philippa Foot to uncover our ethical intuitions, in which a trolley is imagined to be bearing down on one or several individuals and you are presented with various options to save them.



