PART FIVE: THE GREATEST GOOD FOR THE GREATEST NUMBER
5.1: What's “utilitarianism"?
Okay, first, confession time. Consequentialism isn't really a moral system.
No, this FAQ wasn't just an elaborate troll. Consequentialism is sort of like a moral system, but it could better be described as a template for generating moral systems. Consequentialism says that you should act to make the world better, but leaves the meaning of “better" undefined. Depending on how you define it, you can get any number of consequentialisms, some of which are stupid.
For example, consider the proposition that World A is better than World B if and only if World A contains more paperclips. This is a consequentialist moral system (it breaks the Principle of According Value to Other People, but we weren't expecting this to be a good moral system anyway). A moral reasoner could happily go about solving moral dilemmas by choosing the action which would result in the most paperclips.
So obviously we need to specify a definition for “better world" that fits our moral intuitions a little bit better than that.
The first strong attempt at this was made by Jeremy Bentham, who declared that world-state A is better than world-state B if it has a greater sum of pleasure and a lesser sum of suffering across everybody. This makes a bit of sense. Things like dying, being poor, and getting hurt are all the sorts of harms we want to avoid in a moral system, and they all seem classifiable as inflicting suffering or denying pleasure. “Utilitarianism” describes the systems of morality that descend from refinements of this original concept, and “utility” describes our measure of how good a particular world-state is.
5.2: What's wrong with Jeremy Bentham's idea of utilitarianism?
It suggests that drugging people on opium against their will and having them spend the rest of their lives forcibly blissed out in a tiny room would be a great thing to do, and that in fact not doing this is immoral. After all, it maximizes pleasure very effectively.
By extension, any society that truly believed in Benthamism would end up developing a superdrug, and spending all of its time high while robots did the essential maintenance work of feeding, hydrating, and drugging the populace. This seems like an ignoble end for human society. And even if on further reflection I would find it pleasant, it seems wrong to inflict it on everyone else without their consent.
5.3: Can utilitarianism do better?
Yes. Preference utilitarianism says that instead of trying to maximize pleasure per se, we should maximize a sort of happiness which we define as satisfaction of everyone's preferences. In most cases, this would be the same - being tortured would be painful and unpleasant, and I also prefer not to be tortured. In some cases, they differ: being forcibly drugged with opium would be pleasant, but I prefer it not happen.
Preference utilitarianism is completely on board with the idea that people want things other than raw animal pleasure. If what makes a certain monk happy is to deny himself worldly pleasures and pray to God, then the best state of the world is one in which that monk can keep on denying himself worldly pleasures and praying to God in the way most satisfying to himself.
A person or society following preference utilitarianism will try to satisfy the wants and values of as many people as possible as completely as possible; thus the phrase “the greatest good for the greatest number".
In theory this is difficult, since it's hard to measure the strength of different preferences; but the field of economics has several tricks for doing so, and in practice it's usually possible to figure out by common sense which choice satisfies more preferences.
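One standard trick, for example, is to treat how much someone would pay for an outcome as a rough proxy for how strongly they prefer it. Here is a minimal sketch of that idea in code; the scenario, names, and numbers are all invented for illustration:

```python
# A minimal sketch (my own illustration): use willingness to pay as a rough
# proxy for how strongly each person prefers an option, then compare totals.
# All names and numbers here are invented.

def total_preference(willingness_to_pay):
    """Sum each person's (signed) willingness to pay for an option."""
    return sum(willingness_to_pay.values())

# Hypothetical town decision: build a park or a parking lot.
park = {"alice": 40, "bob": 15, "carol": 5, "dave": -10}        # dave mildly objects
parking_lot = {"alice": -5, "bob": 10, "carol": 20, "dave": 15}

if total_preference(park) >= total_preference(parking_lot):
    print("On this crude measure, the park satisfies more preference.")
else:
    print("On this crude measure, the parking lot satisfies more preference.")
```

This is obviously much cruder than what real economists do, but it captures the basic move: find something measurable that tracks preference strength, and sum it.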
5.31: Can utilitarianism do even better than that?
Maaaaaybe. There are all sorts of different forms of utilitarianism that try to get it more exactly right.
Coherent extrapolated volition utilitarianism is especially interesting; it says that instead of using actual preferences, we should use ideal preferences - what your preferences would be if you were smarter and had achieved more reflective equilibrium - and that instead of having to calculate each person's preference individually, we should abstract them into an ideal set of preferences for all human beings. This would be an optimal moral system if it were possible, but the philosophical and computational challenges are immense.
5.4: Oh no! How do I know which of these many complicated moral systems to use?
In most practical cases, it doesn't make a whole lot of difference. Since people usually prefer to be happy and are made happy by getting what they prefer, the more commonly used utilitarianisms usually return pretty similar results outside outlandish thought experiments with mind-altering drugs or infinite amounts of torture. They're fun to debate, and there are some complicated problems where one or another system seems to fail, but pretty much any of them would blow most people's usual moral habits of unjustified heuristics and awkward signaling attempts out of the water. Even a general belief in consequentialism, without any utilitarian system or any firmer grounding than your basic intuitions, can be pretty helpful.
Or, to put it another way, you don't need a complete theory of ballistics in order to avoid shooting yourself in the foot.
I'm going to keep on using “utility" interchangeably with “happiness" most of the time for the sake of readability, even though preference utilitarian purists will probably throw a fit.
5.5: I thought utilitarianism was about everyone living in ugly concrete block-like buildings.
“Utilitarian architecture" is the name of a style of architecture that fits this description. As far as I know it has no connection with utilitarian ethics except sharing a name. Real utilitarianism says that we needn't build ugly concrete block-like buildings unless they make the world a better place.
5.6: Isn't utilitarianism hostile to music and art and nature and maybe love?
No. Some people seem to think this, but it doesn't make a whole lot of sense. If a world with music and art and nature and love is better than a world without them (and everyone seems to agree that it is) and if they make people happy (and everyone seems to agree that they do) then of course utilitarians will support these things.
There's a more comprehensive treatment of this objection in 7.8 below.
5.7: Summary of this section?
Morality should be about improving the world. There are many definitions for “improving the world", but one which doesn't seem to have too many unpleasant implications is satisfying people's preferences. This leads to utilitarianism, the moral system of trying to satisfy as many people's preferences as possible.
PART SIX: RULES AND HEURISTICS
6.1: So what about all the usual moral rules, like “don't lie" and “don't steal"?
Consequentialists accord great respect to these rules. But instead of viewing them as the base level of morality, we view them as heuristics (“heuristic” - a convenient rule of thumb which is usually, but not always, true).
For example, "don't steal" is a good heuristic, because when I steal something, I deny you the use of it, lowering your utility. A world in which theft is permissible is one where no one has any incentive to do honest labor, the economy collapses, and everyone is reduced to thievery. This is not a very good world, and its people are on average less happy than people in a world without theft. Theft usually lowers utility, and we can package that insight to remember later in the convenient form of “don't steal."
6.2: But what do you mean when you say these sorts of heuristics aren't always true?
In the example with the axe murderer in 3.5 above, we already noticed that the heuristic “don't lie" doesn't always hold true. The same can sometimes be true of “don't steal".
In Les Miserables, Jean Valjean's family is trapped in bitter poverty in 19th century France, and his nephew is slowly starving to death. Valjean steals a loaf of bread from a rich man who has more than enough, in order to save his nephew's life. Although not all of us would condone Valjean's act, it sure seems more excusable than, say, stealing a PlayStation because you like PlayStations.
The common thread here seems to be that although lying and stealing usually make the world a worse place and hurt other people, in certain rare cases they might do the opposite, in which case they are okay.
6.3: So it's okay to lie or steal or murder whenever you think lying or stealing or murdering would make the world a better place?
Not really. Having a hard-and-fast rule “never murder" is, if nothing else, painfully clear. You know where you stand with a rule like that.
There's a reason God supposedly gave Moses a big stone with "Thou shalt not steal" and not "Thou shalt not steal unless you have a really good reason." People have different definitions of "really good reason". Some people would steal to save their nephew's life. Some people would steal if it helped defend their friends from axe murderers. And some people would steal a PlayStation, and think up some bogus moral justification for it later.
We humans are very good at special pleading - the tendency to think that MY situation is COMPLETELY DIFFERENT from all those other situations other people might get into. We're very good at thinking up post hoc justifications for why whatever we want to do anyway is the right thing to do. And we're all pretty sure that if we allowed people to steal whenever they thought there was a good reason, some idiot would abuse it and we'd all be worse off. So we enshrine the heuristic “don't steal” as law, and I think it's probably a very good choice.
Nevertheless, we do have procedures in place for breaking the heuristic when we need to. When society goes through the proper decision procedures, in most cases a vote by democratically elected representatives, the government is allowed to steal some money from everyone in the form of taxes. This is how modern day nation-states solve Jean Valjean's problem without licensing random people to steal PlayStations: everyone agrees that Valjean's nephew's health is more important than a rich guy having some bread he doesn't need, so the government taxes rich people and distributes the money to pay for bread for poor families. Having these procedures in place is also probably a very good choice.
6.4: So is it ever okay to break laws?
I think civil disobedience - deliberate breaking of laws in accord with the principle of utility - is acceptable when you're exceptionally sure that your action will raise utility rather than lower it.
To be exceptionally sure, you'd need very good evidence, and you'd probably want to limit it to cases where you personally aren't the beneficiary of the law-breaking, in order to prevent your brain from thinking up spurious moral arguments for breaking laws whenever it's in your self-interest to do so.
I agree with the common opinion that people like Martin Luther King Jr. and Mahatma Gandhi who used civil disobedience for good ends were right to do so. They were certain enough of their own cause to violate moral heuristics in the name of the greater good, and as such were being good utilitarians.
6.5: What about human rights? Are these also heuristics?
Yes, and political discussion would make a lot more sense if people realized this.
Everyone disagrees on what rights people do or do not have, and these disagreements about rights mirror people's political positions, only in a more inscrutable and unsolvable way. Suppose I say people should get free government-sponsored health care, and you say they shouldn't. This disagreement is problematic, but it at least seems like we could have a reasonable discussion and perhaps change our minds. But if I assert “People should have free health care because everyone has a right to free health care,” then there's not much you can say except “No they don't!” The interesting and potentially debatable question “Should the government provide free health care?” has turned into a purely metaphysical question about which it is theoretically impossible to develop evidence either way: “Do people have a right to free health care?”
And this will only get worse if you respond “And you can't raise my taxes to fund universal health care, because I have a right to my own property!"
Whenever there's a political conflict, both parties figure out some reason why their natural rights are at stake, and the arbitrator can do whatever ey feels like. No one can prove em wrong, because our common notion of rights is an inherently fuzzy concept created mainly so that people who would otherwise say things like "I hate euthanasia, but I guess I have no justification" can now say things like "I hate euthanasia, because it violates your right to life and your right to dignity." (I actually heard someone use this argument a while ago.)
Consequentialism allows us to use rights not as a way to avoid honest discussion, but as the outcome of such a discussion. Suppose we debate whether universal health care will make our country a better place, and we decide that it will. And suppose we are so certain about this decision that we want to enshrine a philosophical principle that everyone should definitely get free health care and future governments should never be able to change their mind on this no matter how convenient it would be at the time. In this case, we can say “There is a right to free health care" - i.e. establish a heuristic that such care should always be available.
Our modern array of rights - free speech, free religion, property, and all the rest - are heuristics that have been established as beneficial over many years. Free speech is a perfect example. It's very tempting to get the government to shut up certain irritating people like racists, neo-Nazis, cultists, and the like. But we've realized that we're not very good at deciding who genuinely ought to be silenced, and that once we give anyone the power to silence people they'll probably use it for evil. So instead we enforce the heuristic “Never deny anyone their freedom of speech".
Of course, it's still a heuristic and not a universal law, which is why we're perfectly willing to prevent people from speaking freely in cases where we're very sure it would lower total utility; for example, shouting “Fire!" in a crowded theater.
6.51: So consequentialism is a higher level of morality than rights?
Yes, and it is the proper level on which to think about cases where rights conflict or in which we are not certain which rights should apply.
For example, we believe in a right to freedom of movement: people (except prisoners) should be allowed to travel freely. But we also believe in parents' rights to take care of their children. So if a five year old decides he wants to go live in the forest, should we allow the parents to tell him he can't?
Yes. Although this is a case of two rights conflicting, once we realize that the right to freedom of movement only exists to help mature, reasonable people live in the sort of places that make them happy, it becomes clear that allowing a five year old to run away to the forest would result in bad consequences like him being eaten by bears, and we see no reason to apply the right here.
But what if that child wants to run away because his parents are abusing him? Everyone has a right to dignity and to freedom from fear, but parents also have a right to take care of their children. So if a five year old is being abused, is it okay for him to run away to a foster home or somewhere?
Yes. Although two rights once again conflict, and even though “right to dignity and freedom from fear" might not be a real right and I kinda just made it up, it's more important for the child to have a safe and healthy life than for the parents to exercise their “right" to take care of him. In fact, the latter right only exists as a heuristic pointing to the insight that children will usually do better with their parents taking care of them than without; since that insight clearly doesn't apply here, we can send the child to foster care without qualms.
The proper procedure in cases like this is to change levels and go to consequentialism, not shout ever more loudly about how such-and-such a right is being violated.
6.6: Summary?
Rules that are generally pretty good at keeping utility high are called moral heuristics. It is usually a better idea to follow moral heuristics than to calculate the utility of every individual possible action, since the latter is susceptible to bias and ignorance. When forming a law code, use of moral heuristics allows the laws to be consistent and easy to follow. On a wider scale, the moral heuristics that bind the government are called rights. Although following moral heuristics is a very good idea, in certain cases when you're very certain of the results - like saving your friend from an axe murderer or preventing someone from shouting “Fire!” in a crowded theater - it may be permissible to break the heuristic.
PART SEVEN: PROBLEMS AND OBJECTIONS
7.1: Wouldn't consequentialism lead to [obviously horrible outcome]?
Probably not. After all, consequentialism says to make the world a better place. So if an outcome is obviously horrible, consequentialists wouldn't want it, would they?
It is less obvious that any specific formulation of utilitarianism wouldn't produce a horrible outcome. However, if utilitarianism really is a reflective equilibrium for our moral intuitions, it really shouldn't. So the rest of this chapter will be a discussion of why several possible horrible outcomes would not, in fact, be produced by utilitarianism.
7.2: Wouldn't utilitarianism lead to 51% of the population enslaving 49% of the population?
The argument goes: it gives 51% of the population higher utility. And it only gives 49% of the population lower utility. Therefore, the majority benefits. Therefore, by utilitarianism we should do it.
This is a fundamental misunderstanding of utilitarianism. It doesn't say “do whatever makes the majority of people happier", it says “do whatever increases the sum of happiness across people the most".
Suppose that ten people get together - nine well-fed Americans and one starving African. Each one has a candy. The well-fed Americans get +1 unit of utility from eating a candy, but the starving African gets +10 units of utility from eating a candy. The highest utility action is to give all ten candies to the starving African, for a total utility of +100.
A person who doesn't understand utilitarianism might say “Why not have all the Americans agree to take the African's candy and divide it among them? Since there are 9 of them and only one of him, that means more people benefit.” But in fact we see that that would only create +10 utility (+9 from the Americans eating their own candies, plus +1 from the shared tenth candy) - much less than the first option.
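To make the arithmetic completely explicit, here is the candy example written out as a short calculation. The +1 and +10 figures come from the example above; the assumption that utility simply adds up across candies and people is the same simplification the example is already making:

```python
# The candy example from above, written out. Utilities per candy are the
# ones given in the text: +1 for a well-fed American, +10 for the starving
# African. We assume utility just adds up across candies and people.

UTILITY_PER_CANDY = {"american": 1, "african": 10}

def total_utility(candies_eaten):
    """Sum utility over everyone, given total candies eaten by each kind of person."""
    return sum(UTILITY_PER_CANDY[kind] * n for kind, n in candies_eaten.items())

option_1 = {"african": 10, "american": 0}   # all ten candies go to the African
option_2 = {"african": 0, "american": 10}   # Americans keep nine and split the tenth

print(total_utility(option_1))  # 100
print(total_utility(option_2))  # 10 - nine own candies plus one split candy
```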
A person who thinks slavery would raise overall utility is making the same mistake. Sure, having a slave would be mildly useful to the master. But getting enslaved would be extremely unpleasant to the slave. Even though the majority of people “benefit", the action is overall a very large net loss.
(If you don't see why this is true, imagine I offered you a chance to live in either the real world, or a hypothetical world in which 51% of people are masters and 49% are slaves - with the caveat that you'll be a randomly selected person and might end up in either group. Would you prefer to go into the pro-slavery world? If not, you've admitted that that's not a “better” world to live in.)
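You can turn that test into a toy expected-utility calculation. The utility numbers below (0 for a free person, +2 for a master, -50 for a slave) are my own invented figures, not anything from the argument above; the point is just that a small gain for 51% of people cannot outweigh a huge loss for the other 49%:

```python
# A made-up numerical version of the "which world would you enter?" test.
# Utilities are invented for illustration: free person = 0, master = +2,
# slave = -50. You enter the world as a randomly selected person.

def expected_utility(world):
    """Average utility of a randomly selected person, given (share, utility) pairs."""
    return sum(share * utility for share, utility in world)

slavery_world = [(0.51, 2), (0.49, -50)]   # 51% masters, 49% slaves
current_world = [(1.00, 0)]                # everyone free, baseline utility 0

print(expected_utility(slavery_world))  # about -23.5
print(expected_utility(current_world))  # 0.0
```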
7.3: Wouldn't utilitarianism lead to gladiatorial games in which some people are forced to fight and risk death for the amusement of the masses?
Try the same test as before. If I offered you a chance to live in a world with gladiatorial blood sports or our current world, which would you choose?
There are many reasons not to choose the gladiator world. If gladiators are chosen involuntarily, you might end up as one and die. Even if you didn't, you'd have to live in fear of ending up as one, which would be distracting and unpleasant and probably take away from your enjoyment of the games. Speaking of which, do you really enjoy gladiatorial games? Do you really expect the majority of other people to do so? If so, do you expect their preference in favor of the games to be as strong, even when summed up, as an involuntary gladiator's preference against participating?
And do you really expect they would have to force people to become gladiators when people voluntarily join things like football, rugby, and boxing?
Most likely there are thousands of people around who would love to become gladiators if given the choice, and the reason our society doesn't currently hold gladiatorial games is not a lack of gladiators, but the fact that it offends our sensibilities and we would feel upset and outraged knowing that they exist. Utilitarianism can take this upset and outrage into account as well as or better than any currently existing moral system and so we would expect gladiatorial games to continue to be banned.
I know this was a weird question, but for some reason people keep using it as their go-to objection.
7.4: Wouldn't utilitarianism lead to racists' preferences being respected enough that it would support discrimination against minorities, if there are a sufficiently large number of racists and a sufficiently small number of minorities?
First, racists and minorities aren't the only two groups in society. There are also, hopefully, a number of majority group members who have strong enough preferences against racism that they overpower the preferences of the racists.
Second, racists seem unlikely to have as strong a preference in favor of discriminating as minority groups have a preference in favor of not being discriminated against.
Third, racists' preference may not be discrimination per se, but another goal which they use discrimination to accomplish. For example, if a racist thinks minorities are all criminals, and wants to avoid crime, ey may discriminate against minorities. But this racist doesn't have a preference against minorities, ey has a preference against crime. We can respect that preference by trying to lower crime while ignoring the fact that ey happens to be misinformed about whether minorities cause crime or not.
But if there is some form of racism so strong that it overcomes all of these considerations, then this may be one of the cases where a form of utilitarianism stronger than simple preference utilitarianism is needed. For example, in coherent extrapolated volition utilitarianism, instead of respecting a specific racist's current preference, we would abstract out the reflective equilibrium of that racist's preferences if ey was well-informed and in philosophical balance. Presumably, at that point ey would no longer be a racist.
7.5: Wouldn't utilitarianism lead to healthy people being killed to distribute their organs among people who needed organ transplants, since each person has a bunch of organs and so could save a bunch of lives?
We'll start with the unsatisfying, weaselish answers to this objection, which are nevertheless important. The first weaselish answer is that most people's organs aren't compatible and that most organ transplants don't take very well, so the calculation would be less obvious than "I have two kidneys, so killing me could save two people who need kidney transplants." The second weaselish answer is that a properly utilitarian society would solve the organ shortage long before this became necessary (see 8.3) and so this would never come up.
But those answers, although true, don't really address the philosophical question here, which is whether you can just go around killing people willy-nilly to save other people's lives. I think that one important consideration here is the heuristic-related one mentioned in 6.3 above: having a rule against killing people is useful, and whatever a more complicated rule gained in flexibility, it might lose in sacrosanctness, making it more likely that immoral people or an immoral government would consider murder to be an option (see David Friedman on Schelling points).
This is also the strongest argument one could make against killing the fat man in 4.5 above - but note that it still is a consequentialist argument and subject to discussion or refutation on consequentialist grounds.
7.6: Wouldn't utilitarianism mean if there was some monster or alien or something whose feelings and preferences were a gazillion times stronger than our own, that monster would have so much moral value that its mild inconveniences would be more morally important than the entire fate of humanity?
Maybe.
Imagine two ant philosophers talking to each other about the same question. “Imagine," they said, “some being with such intense consciousness, intellect, and emotion that it would be morally better to destroy an entire ant colony than to let that being suffer so much as a sprained ankle."
But I think humans are such a being! I would rather see an entire ant colony destroyed than have a human suffer so much as a sprained ankle. And this isn't just human chauvinism either - I think I could support my feelings on this issue by pointing out how much stronger feelings, preferences, and experiences humans have than ants (presumably) do.
I can't imagine a creature as far beyond us as we are beyond ants, but if such a creature existed I think it's possible that if I could imagine it, I would agree that its preferences were vastly more important than those of humans.
7.7: Wouldn't utilitarianism require us to respect every little stupid preference someone has, like if some Muslim gets offended when people draw pictures of Mohammed, or whatever, then everyone has to stop drawing Mohammed?
I asked this question on Less Wrong and got some interesting answers back. The first and most important answer was yes: if an action causes harm to a group, whether physical or psychological, without providing any benefits to any other group, stopping that action would be a nice thing to do.
However, it's also possible that the reaction we would call “offense" isn't always an expression of violation of a strong preference, but of a group demanding status. So if a Muslim gets really offended at hearing about a cartoon of Mohammed, it's not that ey experienced “psychic pain" or “preference violation" so much as that getting upset about it is a way of showing how much ey likes Islam.
Other responses went into game theory; it may sometimes be in people's interest to self-modify into a utility monster if they want to constrain the behavior of other agents, but other agents should precommit not to take this self-modification into account in order to discourage it.
Finally, there was a slippery slope argument: although not drawing Mohammed would probably have no effects other than making a couple of Muslims happier, it would set a precedent for always backing down when things were considered “offensive", and eventually this precedent would force us to stop activities that are genuinely useful.
7.8: Way back in 5.6 you addressed the question of whether utilitarianism was opposed to art and music and nature. You said it wasn't by design opposed to these things, and that makes sense. But might it not end up that art and music and nature just aren't very efficient at raising utility, and would have to be thrown out so we could redistribute those resources to feeding the hungry or something?
If you were a perfect utilitarian, then yes, if you believe that feeding the hungry is more important than having symphonies, you would stop funding symphonies in order to have more money to feed the hungry. But this is your own belief; Jeremy Bentham isn't standing behind you with a gun making you believe it. If you think feeding the hungry is more important than listening to symphonies, why would you be listening to symphonies instead of feeding the hungry in the first place?
Furthermore, utilitarianism has nothing specifically against symphonies - in fact, symphonies probably make a lot of people happy and make the world a better place. People just bring that up as a hot-button issue in order to sound scary. There are a thousand things you might want to consider devoting to feeding the hungry before you start worrying about symphonies. The money spent on plasma TVs, alcohol, and stealth bombers would all be up there.
I think if we ever got a world utilitarian enough that we genuinely had to worry about losing symphonies, we would have a world utilitarian enough that we wouldn't. By which I mean that if every government and private individual in the world who might fund a symphony was suddenly a perfect utilitarian dedicated to solving the world hunger issue among other things, their efforts in other spheres would be able to solve the world hunger issue long before any symphonies had to be touched.
Efficient charity is a big issue for utilitarians, but remember that if you're doing it right, each step you take towards consequentialism should result in greater satisfaction of your own moral goals and a better world by your own standards.
7.9: Doesn't utilitarianism sound a lot like the idea that “the end justifies the means”?
The end does justify the means. This is obvious with even a few seconds' thought, and the fact that the phrase has become a byword for evil is a historical oddity rather than a philosophical truth.
Hollywood has decided that this should be the phrase Persian-cat-stroking villains announce just before they activate their superlaser or something. But the means that these villains usually employ is killing millions of people, and the end is subjugating Earth beneath an iron-fisted dictatorship. Those are terrible means to a terrible end, so of course it doesn't end up justified.
Next time you hear that phrase, instead of thinking of a villain activating a superlaser, think of a doctor giving a vaccination to a baby. Yes, you're causing pain to a baby and making her cry, which is kinda sad. But you're also preventing that baby from one day getting a terrible disease, so the end justifies the means. If it didn't, you could never give any vaccinations.
If you have a really important end and only mildly unpleasant means, then the end justifies the means. If you have horrible means that don't even lead to any sort of good end but just make some Bond villain supreme dictator of Earth, then you're in trouble - but that's hardly the fault of the end never justifying the means.
7.10: It seems impossible to ever be a good person. Not only do I have to avoid harming others, but I also have to do everything in my power to help others. Doesn't that mean I'm immoral unless I donate 100% of my money (maybe minus living expenses) to charity?
In utilitarianism, calling people “moral” or “immoral” borders on a category error. Utilitarianism is only formally able to say that certain actions are more moral than other actions. If you want to expand that and say that people who do more moral actions are more moral people, that seems reasonable, but it's not a formal implication of utilitarian theory.
Utilitarianism can tell you that you would be acting morally if you donated 100% of your money to charity, but you already knew that. I mean, Jesus said the same thing two thousand years ago (Matthew 19:21 - “If you want to be perfect, go and sell all your possessions and give the money to the poor”).
Most people don't want to be perfect, and so they don't sell all their possessions and give the money to the poor. You'll have to live with the knowledge of being imperfect, but Jeremy Bentham's not going to climb through your window at night and kill you in your sleep or anything. And since no one else is perfect, you'll have a lot of company.
That having been said, there are people who take the idea of donating as much as possible seriously, and they are some pretty impressive people.
PART EIGHT: WHY IT MATTERS
8.1: If I promise to stay away from trolleys, then does it really make a difference what moral system I use?
Yes.
The majority of modern morality is a bunch of poorly designed attempts to look good without special consideration for whether they screw up the world. As a result, the world is pretty screwed up. Applying a consequentialist ethic to politics and to everyday life is the first step in unscrewing it.
The world has more than enough resources to provide everyone, including people in Third World countries, with food, health care, and education - not to mention to save the environment, prevent wars, and defuse existential risks. The main thing stopping us from doing all these nice things is not a lack of money, or a lack of technology, but a lack of will.
Most people mistake this lack of will for some conspiracy of evil people trying to keep the world divided and unhappy for their own personal gain, or for “human nature" being fundamentally selfish or evil. But there's no conspiracy, and people can be incredibly principled and compassionate when the opportunity arises.
The problem is twofold: first that people are wasting their moral impulses on stupid things like preventing Third World countries from getting birth control or getting outraged at some off-color comment by some politician. And second that people's moral systems are vague and flexible enough that they can quiet their better natures by saying anything inconvenient or difficult isn't really morally necessary.
To solve those problems requires a clear and reality-based moral system that directs moral impulses to the places they do the most good. That system is consequentialism.
8.2: How can utilitarianism help political debate?
In an ideal world, utilitarianism would be able to reduce politics to math, pushing through the moralizing and personal agendas to determine what policies were most likely to satisfy the most people.
In the real world, this is much harder than it sounds and would get bogged down by personal biases, unpredictability, and continuing philosophical confusions. However, there are tools by which such problems could be resolved - most notably prediction markets, which can provide a mostly-objective measure of the probability of an event.
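As a concrete illustration of the prediction-market point (the mechanics here are standard, though the example contract and numbers are my own): a contract that pays out one unit if an event happens trades at a price that can be read, roughly, as the market's probability for that event.

```python
# Rough sketch of how a prediction market yields a probability estimate.
# A binary contract pays 1 if the event happens and 0 otherwise, so its
# market price - ignoring fees, interest, and risk aversion, which are
# simplifications made here - can be read as an implied probability.

def implied_probability(price, payout=1.0):
    """Treat the market price of a binary contract as the event's probability."""
    return price / payout

# Hypothetical contract: "Policy X cuts unemployment by at least one point",
# currently trading at 0.37 per 1.00 of payout.
print(implied_probability(0.37))  # 0.37, i.e. a 37% market-implied chance
```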
There are many cases in which the consequentialist thing to do is to be very wary of consequentialist reasoning - for example, we know that centrally planned markets have bad consequences, and so even if someone provided a superficially compelling argument for why a communism-type plan might raise utility, we would have to be very skeptical. But a more developed science of consequentialist political discourse would aid us, not hinder us, in making those judgments.
For interesting examples of utilitarian political discourse, take a look at this essay on immigration or my own essay on health care policy.
8.3: You talk a big talk. Give an example of how switching to consequentialist ethics could save thousands of lives with no downside.
Okay. How about opt-out organ donations?
Right now organ donations are opt-in, which means you have to fill out some forms and carry a little card around with you if you want your organs to be used to help others if you die. Most people, when asked, approve of having their organs used to help others if they die, but haven't bothered filling out the forms and getting the little card.
At the same time, about a thousand people die each year because there aren't enough organs for everyone, and many times that number suffer poor health for years before finally getting a transplant.
A few countries, such as Spain, had a very clever idea - why not switch to opt-out organ donations? In opt-out organ donations, everyone is signed up to donate organs after death by default. If you don't want to, you can fill out some forms and carry a little card and then you don't have to. It's the opposite of our own system.
In America, this was rejected on the grounds that someone might accidentally forget to fill out the forms, and then die, and then their organs would be used to save someone else's life when they hadn't consented to that.
So on the one hand, we have the lives of a thousand people a year, plus the suffering of many more. On the other, we have the (still entirely theoretical) fear that someone might really not want their organs given away - though apparently not enough to sign a form saying so - and so would be really upset about losing their organs, if they were able to be upset about things, which they're not, because they happen to be dead at the time.
Remember back in 3.5, when I said that the more useless an option, the better signaling opportunity it provides? Well, being against opt-out organ donations makes a heckuva signaling opportunity. So it's no surprise that professional ethicists, the people who have the most incentive to prove they're more moral than everyone else, have mostly come out against it. They are so very moral that they refuse to ever violate anyone's hypothetical preference, even if they are dead and didn't care enough to sign a piece of paper and relaxing the rules this one time would save a thousand lives a year. Are they great ethicists, or what?
Well, if you've read the rest of this FAQ, hopefully you will answer “what", which makes you better than much of the academic ethicist community, the government, and the voting public.
Yes, a simple common-sense intervention to save a thousand lives a year has not been tried because people are insufficiently consequentialist. This is not nearly the end of the low-hanging fruit available by getting a saner moral system.
8.4: I am interested in learning more about utilitarianism. Where can I do so?
Less Wrong is a great community full of some very smart people where utilitarianism is often discussed. Felicifia is a community specifically about utilitarianism, although I have not been there much and cannot vouch for it. And Giving What We Can is an amazing utilitarianism-oriented group with an almost militant approach to efficient charitable giving.
Derek Parfit's Reasons and Persons and Gary Drescher's Good and Real are two excellent books about morality that consequentialists might find useful.
And game theory and decision theory are two peripheral fields that often come up in consequentialist systems of morality.
Wikipedia also contains discussion of and further links about consequentialism and utilitarianism.
8.5: I have a question or comment about, or a rebuttal to, this FAQ. Where should I send it?
scott period siskind at-symbol gmail period com should work, but be aware I am terrible about replying to email in a timely fashion/at all.