Reading this post will not maximize your happiness; look at this picture of a puppy instead

Puppy link here

I’m continuing from last time by introducing the moral theory of utilitarianism (light-hearted but well written FAQ here). Utilitarianism is the most well-known version of consequentialism which, generally, states that the only thing that makes an action moral is the consequence of that action. This alone seems deeply troubling to a lot of people; after all, most of us don’t believe that the ends always justify the means. However, the really interesting part of Utilitarianism comes later, so we’ll take consequentialism as a given for this article.

Utilitarianism is the stunningly simple idea that moral actions are actions that maximize good (sometimes referred to as “utility”) in the universe. Where utilitarianism gets interesting is attempts to define what the good really is.

Utilitarianism was first developed by Jeremy Bentham and John Stuart Mill. They formulated the main principle as “the greatest good for the greatest number”. The good, they argue, is pleasure and happiness and the bad is suffering.

It’s not too difficult to see why happiness was chosen as the ultimate good for a moral system (side note: if this seems obvious to you consider alternative goods such as fulfilling your duties (Kant) or maximizing your own happiness and not giving a damn about anyone else (Ayn Rand)). Happiness is something we all desire and suffering is something we all try to avoid. If our goal is to bring about the best consequences, maximizing happiness seems to be a great way to do it.

Here’s how it might work (we’ll use made-up numbers to try to “measure” the happiness in the system). Let’s say you and I are walking together and I have an ice cream cone. I’m getting 1 util (a “unit” of happiness) out of the ice cream cone. You, however, love ice cream and would get 5 utils out of it. Right now the system contains 1 util. In this case we could maximize utility by transferring my cone to you (system has 5 utils).

But wait! Maybe I’m a selfish person and I would actually suffer -5 utils by seeing you with the ice cream that I thought was mine. Now if I give you the ice cream cone I’m suffering -5 utils and you’re enjoying +5 for a net of 0 utils. The system has actually gone down in utility.

But wait! Maybe you’re something of a bully and you’d actually gain 3 utils from taking the ice cream from me. Now I’m suffering -5 utils and you’re enjoying 8 utils for a net of 3 (system has gone up).
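The bookkeeping in these three scenarios is just addition. Here’s a minimal sketch of that "hedonic calculus" using the same made-up numbers from above (the util values are invented for illustration; nothing here is actually measurable):

```python
# Toy "hedonic calculus": total utility is just the sum of everyone's utils.
# All numbers are the made-up values from the ice cream example.

def net_utils(*utils):
    """Total happiness in the system: the sum of each person's utils."""
    return sum(utils)

keep = net_utils(1, 0)             # I keep the cone: 1 util to me, 0 to you
give = net_utils(0, 5)             # I hand it over gladly: 0 to me, 5 to you
resentful_give = net_utils(-5, 5)  # I resent giving it up: -5 to me, +5 to you
bully_take = net_utils(-5, 5 + 3)  # You take it as a bully: -5 to me, 5 + 3 to you

print(keep, give, resentful_give, bully_take)  # 1 5 0 3
```

The arithmetic is trivial; the hard part, as the next paragraph argues, is that the inputs are unknowable.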

As you can see, trying to “measure” happiness can get complicated fast. How could we possibly know how much enjoyment people get out of things? How can we possibly know how much people will suffer as a result of our action or inaction? Are there different kinds of happiness? Is the happiness you get out of, say, sex, better than the happiness of solving a difficult chemistry problem? How could you possibly know any of that?

The above example was for two people and it gets increasingly complicated with more. Would instituting a law that everyone has to purchase their internet from Comcast decrease happiness in the world? It’d sure anger all the customers but maybe Comcast would enjoy the profits enough to make it worthwhile.

As it is, utilitarianism seems unworkable. It requires a knowledge of other people’s subjective experience that we just don’t have, and actually figuring out the exact consequences of an action is, even for ordinary actions, really hard. Someone like the President is going to have an impossible time figuring out the exact change in happiness as a result of, say, a new healthcare policy.

Can Utilitarianism do better than Jeremy Bentham’s formulation? Well, maybe. Preference Utilitarianism is a very popular formulation that states that, instead of maximizing people’s happiness, we ought to maximize the fulfillment of their preferences.

Even if it doesn’t sound like it, that’s a pretty substantial difference. We no longer have to stay awake wondering if our perfectly utilitarian society is just going to become a group of drugged-up wireheads; we can just poll people and ask “hey, would you like to be strapped to a chair with a machine stimulating the pleasure-centers of your brain for the rest of your natural life?” and listen when they say “umm, no?”.

Better still, while it is quite difficult to calculate happiness, it’s much easier to calculate preferences. The field of Economics already has a whole host of tricks for doing so. We don’t have to ask people “how much, would you say, do you value the new iPhone5?”; we can just set the price and see who buys it. People’s actions say a lot about their innate preferences.
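The “set the price and see who buys” trick can be sketched as a tiny revealed-preference reading of purchase data: buying at a given price reveals a valuation of at least that price, and passing reveals a valuation below it. The names, price, and observations below are all hypothetical:

```python
# Toy revealed-preference inference: a purchase at PRICE reveals that the
# buyer values the item at least that much. All data here is hypothetical.

PRICE = 650  # hypothetical price of the phone

# (buyer, bought?) observations
observations = [("alice", True), ("bob", False), ("carol", True)]

# Buyers reveal a valuation of at least PRICE; non-buyers, below it.
value_at_least_price = [name for name, bought in observations if bought]
value_below_price = [name for name, bought in observations if not bought]

print(value_at_least_price)  # ['alice', 'carol']
print(value_below_price)     # ['bob']
```

This only gives bounds on valuations, not exact utils, but that is already more than introspection about happiness provides.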

On the other hand, what about situations where people’s preferences are harmful to them? Someone who is anorexic may prefer to starve themselves, but it doesn’t seem like fulfilling their preference is a good thing. It’s pretty foolish to suggest that people always know what’s best for them.

Utilitarianism also has problems of narrow-mindedness. In focusing on happiness or fulfilling preferences it ignores a lot of what intuitively seems to be relevant to moral problems (for example, an individual’s duties and responsibilities). Some moral problems just don’t seem to be about happiness. New parents, for instance, often sacrifice their own happiness in disciplining their children (even if their decisions aren’t making the kid happy either).

If you find all the math and complicated scenarios frustrating, just remember that, at its core, Utilitarianism is about happiness. I think most utilitarians (and most Christians) would agree that if you live your life trying to make people happy you’re not going to screw up too badly.

Ultimately, I don’t find Utilitarianism convincing as a moral theory. The whole idea of needing to know the exact consequences of your actions seems unworkable in practice and utilitarianism conflicts with my intuitions in several different ways. On the other hand, I have great respect for utilitarians. It’s always a good thing when people seriously consider the consequences of their actions and it’s certainly a good thing when we care about what happens to others (sorry, Ayn Rand).

Speaking of Rand, next time we’ll talk about a very different moral system that starts from a very similar place: the idea that maximizing your own happiness and self interest is the only moral principle.

Thomas Carey is a husband and father. In his spare time, he’s a Ph.D. student at the University of Colorado, Boulder and tries to relax from both by hiking, climbing, and playing board games.

2 thoughts on “Reading this post will not maximize your happiness; look at this picture of a puppy instead”

  1. A quick comment: Obviously the topic of utilitarianism isn’t thoroughly summed up by the above. There are a lot of good counter-arguments to most of the claims I put forth and I’ll try to describe some of them here.

    The biggest problem seems to me to be making the “great” be the enemy of the “good”. The objection that hedonistic calculus is cumbersome and inaccurate is true, but it kind of misses the point. Forcing yourself to consider all the relevant facts and factors is the important part. Making the conscious effort to consider your surroundings and the impact of your actions will make your behavior, on average, more moral, and in practice that works better than other moral systems you could try. Is your calculus with made-up numbers going to give you exactly the right answer? No, but it’ll still give you an answer that’s pretty good.

    Objections against using happiness and fulfillment of preferences also seem sort of question-begging (side note: begging the question has to do with assuming your conclusion in a premise and has nothing to do with demanding an answer). How do YOU know that fulfilling people’s preferences isn’t a good thing? What are you drawing on that makes you think you know their lives and desires better than they do? In an effort to disprove utilitarianism aren’t you sneaking in your own moral system here as an implied premise?

    One final point that was alluded to in the above but never really expanded on is the accusation that utilitarianism is going to lead to “moral” decisions that seem appalling (rape being legal if it turns out that rapists enjoy rape more than the people being raped are suffering, for instance). Utilitarianism is allowed to take a wider view of things — would a society where rape is legal actually be happier than the one we have today? Almost certainly not…and our revulsion at such an idea probably means that it wouldn’t increase net happiness. If you try to construct a scenario you find appalling it’s pretty clear that Utilitarianism wouldn’t actually endorse it.

  2. Pingback: Where Is John Galt? | The Porch
