Utilitarianism is simple. Whatever action creates the most happiness for the most people is the right thing to do.
That’s it.
Jeremy Bentham came up with this in the late 1700s. He called it "the greatest happiness of the greatest number." John Stuart Mill refined it later.
The math is straightforward. Count up all the pleasure an action creates. Count up all the pain. Subtract pain from pleasure. The action with the highest score wins.
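Here's that arithmetic as a toy sketch. The actions and the pleasure/pain numbers are invented for illustration; nothing here comes from Bentham.

```python
def utility(effects):
    """Sum (pleasure - pain) over every affected person."""
    return sum(pleasure - pain for pleasure, pain in effects)

# Each action maps to a list of (pleasure, pain) pairs, one per person.
# These scenarios and scores are made up to show the bookkeeping.
actions = {
    "keep the promise":  [(8, 1), (5, 0)],           # net 7 + 5 = 12
    "break the promise": [(10, 0), (0, 7), (3, 2)],  # net 10 - 7 + 1 = 4
}

# The action with the highest score "wins."
best = max(actions, key=lambda a: utility(actions[a]))
print(best)  # → keep the promise
```

The whole theory fits in those few lines, which is exactly why the weird cases below sting: the scoring rule has nowhere to register anything except the totals.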
Sounds reasonable. And it is, mostly.
But then you get the weird cases.
Would you kill one healthy person to harvest their organs and save five dying patients? The utilitarian calculus says yes. Five lives outweigh one.
Most people say that’s horrifying. But they can’t explain why, if saving lives is what matters.
Or this: is it better to have a world with a billion people living amazing lives, or ten billion people living lives barely worth living? If each barely-tolerable life still adds a sliver of happiness, enough of them outweigh the amazing ones. The math says ten billion. Derek Parfit called this the Repugnant Conclusion.
That feels wrong too.
The problem isn’t that utilitarianism is stupid. It’s that reducing all moral decisions to a happiness calculation misses something.
Maybe some things matter beyond consequences. Maybe there are rules you shouldn’t break, even for good outcomes. Maybe the way you treat people matters as much as the results you get.
But here’s what I like about utilitarianism: it forces you to care about outcomes. It asks whether your good intentions actually help anyone.
That question alone makes you a better person.