I’m continuing from last time by introducing the moral theory of utilitarianism (light-hearted but well-written FAQ here). Utilitarianism is the best-known version of consequentialism, which, generally, states that the only thing that makes an action moral is the consequence of that action. This alone seems deeply troubling to a lot of people; after all, most of us don’t believe that the ends always justify the means. However, the really interesting part of utilitarianism comes later, so we’ll take consequentialism as a given for this article.
Utilitarianism is the stunningly simple idea that moral actions are actions that maximize good (sometimes referred to as “utility”) in the universe. Where utilitarianism gets interesting is in the attempts to define what the good really is.
Utilitarianism was first developed by Jeremy Bentham and John Stuart Mill. They formulated the main principle as “the greatest good for the greatest number”. The good, they argue, is pleasure and happiness and the bad is suffering.
It’s not too difficult to see why happiness was chosen as the ultimate good for a moral system (side note: if this seems obvious to you consider alternative goods such as fulfilling your duties (Kant) or maximizing your own happiness and not giving a damn about anyone else (Ayn Rand)). Happiness is something we all desire and suffering is something we all try to avoid. If our goal is to bring about the best consequences, maximizing happiness seems to be a great way to do it.
Here’s how it might work (we’ll use made-up numbers to try to “measure” the happiness in the system). Let’s say you and I are walking together and I have an ice cream cone. I’m getting 1 util (a “unit” of happiness) out of the ice cream cone. You, however, love ice cream and would get 5 utils out of it. Right now the system contains 1 util. In this case we could maximize utility by transferring my cone to you (system has 5 utils).
But wait! Maybe I’m a selfish person and I would actually suffer -5 utils by seeing you with the ice cream that I thought was mine. Now if I give you the ice cream cone I’m suffering -5 utils and you’re enjoying +5 for a net of 0 utils. The system has actually gone down in utility.
But wait! Maybe you’re something of a bully and you’d actually gain 3 utils from taking the ice cream from me. Now I’m suffering -5 utils and you’re enjoying 8 utils for a net of 3 (system has gone up).
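The three ice cream scenarios above are really just arithmetic, so here’s a tiny sketch of them in code. The numbers are the same made-up “utils” from the text; this is a toy illustration, not an actual utility calculus:

```python
# Toy utility arithmetic for the ice cream scenarios above.
# All util values are made up, exactly as in the text.

def system_utils(*utils):
    """Total utility in the system is just the sum over everyone involved."""
    return sum(utils)

# Scenario 1: I keep the cone (1 util to me, 0 to you).
keep = system_utils(1, 0)

# Scenario 2: I hand it over, but I'm selfish (-5 to me, +5 to you): net 0.
give_selfish = system_utils(-5, 5)

# Scenario 3: you take it as a bully (-5 to me, +5 from the cone
# plus +3 from the taking for you): net 3, so the system goes up.
take_bully = system_utils(-5, 5 + 3)

print(keep, give_selfish, take_bully)  # 1 0 3
```

The point the example makes concrete is that the “right” action flips back and forth as we adjust who feels what, which is exactly why measuring happiness gets complicated fast.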
As you can see, trying to “measure” happiness can get complicated fast. How could we possibly know how much enjoyment people get out of things? How can we possibly know how much people will suffer as a result of our action or inaction? Are there different kinds of happiness? Is the happiness you get out of, say, sex, better than the happiness of solving a difficult chemistry problem? How could you possibly know any of that?
The above example was for two people and it gets increasingly complicated with more. Would instituting a law that everyone has to purchase their internet from Comcast decrease happiness in the world? It’d sure anger all the customers but maybe Comcast would enjoy the profits enough to make it worthwhile.
As it is, utilitarianism seems unworkable. It requires a knowledge of other people’s subjective experience that we just don’t have, and actually figuring out the exact consequences of an action is, even for everyday actions, really hard. Someone like the President is going to have an impossible time figuring out the exact change in happiness as a result of, say, a new healthcare policy.
Can Utilitarianism do better than Jeremy Bentham’s formulation? Well, maybe. Preference Utilitarianism is a very popular formulation that states that, instead of maximizing people’s happiness, we ought to maximize the fulfillment of their preferences.
Even if it doesn’t sound like it, that’s a pretty substantial difference. We no longer have to stay awake wondering if our perfectly utilitarian society is just going to become a group of drugged-up wireheads; we can just poll people and ask “hey, would you like to be strapped to a chair with a machine stimulating the pleasure-centers of your brain for the rest of your natural life?” and listen when they say “umm, no?”.
Better still, while it is quite difficult to calculate happiness, it’s much easier to calculate preferences. The field of economics already has a whole host of tricks for doing so. We don’t have to ask people “how much, would you say, do you value the new iPhone 5?”; we can just set the price and see who buys it. People’s actions say a lot about their innate preferences.
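The economists’ trick hinted at above is usually called revealed preference: if someone buys at a given price, we learn they value the thing at least that much. Here’s a minimal sketch of that inference, with made-up names and valuations (the real technique involves much more than this):

```python
# Revealed preference, sketched: watching who buys at a given price
# puts a lower bound on each buyer's private valuation.
# The names and dollar valuations below are invented for illustration.

valuations = {"alice": 700, "bob": 450, "carol": 900}  # private, unobserved
price = 500

# All we actually observe is the purchase decision...
buyers = [name for name, value in valuations.items() if value >= price]

# ...but from it we can infer: each buyer values the phone at >= price,
# and each non-buyer at < price, without ever asking anyone anything.
print(buyers)  # ['alice', 'carol']
```

This is why preference utilitarianism looks more tractable than Bentham’s version: markets already generate this kind of data for free.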
On the other hand, what about situations where people’s preferences are harmful to them? Someone who is anorexic may prefer to starve themselves, but it doesn’t seem like fulfilling their preference is a good thing. It’s pretty foolish to suggest that people always know what’s best for them.
Utilitarianism also has problems of narrow-mindedness. In focusing on happiness or fulfilling preferences, it ignores a lot of what intuitively seems relevant to moral problems (for example, an individual’s duties and responsibilities). Some moral problems just don’t seem to be about happiness. Parents disciplining their children, for instance, often sacrifice their own happiness for the sake of the child (even when their decisions aren’t making the kid happy either).
If you find all the math and complicated scenarios frustrating, just remember that, at its core, Utilitarianism is about happiness. I think most utilitarians (and most Christians) would agree that if you live your life trying to make people happy you’re not going to screw up too badly.
Ultimately, I don’t find Utilitarianism convincing as a moral theory. The whole idea of needing to know the exact consequences of your actions seems unworkable in practice and utilitarianism conflicts with my intuitions in several different ways. On the other hand, I have great respect for utilitarians. It’s always a good thing when people seriously consider the consequences of their actions and it’s certainly a good thing when we care about what happens to others (sorry, Ayn Rand).
Speaking of Rand, next time we’ll talk about a very different moral system that starts from a very similar place: the idea that maximizing your own happiness and self-interest is the only moral principle.
Thomas Carey is a husband and father. In his spare time, he’s a Ph.D. student at the University of Colorado, Boulder and tries to relax from both by hiking, climbing, and playing board games.