Utilitarianism and Its Flavors (with Nick Beckstead)

May 16, 2021
Overview

Spencer Greenberg and Nick Beckstead delve into utilitarianism, dissecting its components, limitations, and practical applications. They explore decision theory, population ethics, and various "branch points" within the moral framework.

At a Glance
16 Insights
1h 30m Duration
14 Topics
15 Concepts

Deep Dive Analysis

Defining Utilitarianism and Its Core Components

Exploring Different Forms of Consequentialism

Utilitarianism's Appeal and Personal Journey

The Utilitarian Theory of Value and Aggregation

Challenges in Population Ethics and the Repugnant Conclusion

Departing from Strict Utilitarianism: Constraints, Options, Obligations

When to Apply Utilitarian Calculations vs. Heuristics

Understanding Different Theories of Well-being (Utility)

Determining Which Sentient Beings Morally Count

Meta-Ethical Interpretations of Moral Philosophy

Decision Theory and Expected Utility Maximization

Harsanyi's Aggregation Theorem Explained

Addressing Population Ethics and the Neutral Level of Well-being

Concluding Thoughts on Utilitarianism's Practical Application

Utilitarianism

A moral theory that defines good actions as those that produce the greatest amount of good for the greatest number of sentient beings. It reduces morality to maximizing overall well-being.

Consequentialism

The foundational idea within utilitarianism that the rightness of an action or rule is determined solely by its outcomes or consequences. It focuses on achieving the most beneficial results.

Act Consequentialism

A specific type of consequentialism where the morally right action is the one that produces the best overall consequences in a particular situation, regardless of general rules. It prioritizes the outcome of each individual act.

Rule Consequentialism

A specific type of consequentialism that determines rightness by whether an action adheres to a set of rules that, if universally followed, would lead to the best overall consequences. It emphasizes the long-term benefits of rule-following.

Utilitarian Theory of Value

The component of utilitarianism that states what is good ultimately reduces to what is good for individual sentient beings. It contrasts with views that prioritize abstract ideals or non-sentient entities.

Population Ethics

A subfield of moral philosophy that addresses how to evaluate outcomes involving different numbers of people, particularly future or potential people. It grapples with how to aggregate well-being across varying populations.

Repugnant Conclusion

A paradox in population ethics where total utilitarianism implies that a world with a vast number of people experiencing lives barely worth living is better than a world with fewer, very happy people. This challenges intuitive notions of a good world.
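
In rough numbers (the figures come up later in the episode; treating well-being as a single number per person is, of course, a simplification), the total-utilitarian comparison looks like:

```python
# Parfit-style comparison: totals favor the huge, barely-happy world.
world_a = {"population": 10_000_000_000, "well_being": 100.0}  # few, very happy
world_z = {"population": 10**80, "well_being": 0.01}           # vast, barely above zero

def total_well_being(world):
    # Total utilitarianism scores a world as population times per-person well-being.
    return world["population"] * world["well_being"]

# total_well_being(world_z) dwarfs total_well_being(world_a), so total
# utilitarianism ranks World Z above World A -- the repugnant conclusion.
```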

Moral Constraints

Limits on actions that are considered morally wrong, even if they might lead to the best overall consequences according to a strict utilitarian calculation. Examples include violating rights or lying.

Moral Options

The idea that individuals are not always morally obligated to perform the single most utility-maximizing action, but have a range of choices compatible with an ethical life, as long as they don't cause harm or violate constraints. It allows for personal pursuits.

Special Obligations

Moral duties that arise from specific relationships (e.g., family, professional roles) or commitments (e.g., promises). These obligations are often considered binding even if they don't maximize overall utility impartially.

Hedonic Theory of Well-being

A view that well-being consists primarily of the balance of positive feelings (pleasure) over negative feelings (suffering). It focuses on subjective experiences of enjoyment and pain.

Preference Theory of Well-being

A view that well-being is achieved when an individual's preferences or desires are satisfied. It is broader than hedonism, encompassing what people want, even if it doesn't directly lead to pleasure.

Objective List Theory of Well-being

A view that certain things are inherently good for people (e.g., knowledge, loving relationships, accomplishment) regardless of whether they are desired or produce pleasure. A good life involves some measure of these items.

Expected Utility Theory

A decision-making framework for situations with uncertainty, where one chooses the action that maximizes the sum of the utilities of all possible outcomes, weighted by their probabilities. It serves as a criterion for rational action under risk.
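
As a toy illustration (the actions and numbers below are hypothetical, not from the episode), the criterion can be sketched as:

```python
# Toy sketch of expected-utility maximization; the action names and
# numbers are made up purely for illustration.
actions = {
    "fund_malaria_nets": [(0.9, 100), (0.1, 0)],     # (probability, utility)
    "fund_risky_research": [(0.2, 600), (0.8, -10)],
}

def expected_utility(outcomes):
    # Weight each outcome's utility by its probability and sum.
    return sum(p * u for p, u in outcomes)

# Choose the action whose probability-weighted utility is highest.
best = max(actions, key=lambda name: expected_utility(actions[name]))
```

Here the risky option wins (0.2 × 600 + 0.8 × −10 = 112 versus 90), which is exactly the sense in which the criterion can favor low-probability, high-value outcomes.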

Harsanyi's Aggregation Theorem

A theorem demonstrating that if individual well-being functions satisfy certain rationality axioms, and a social welfare function satisfies Pareto optimality and impartiality for a fixed population, then the social welfare function must be a utilitarian sum of individual well-being.
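
Schematically (notation mine, simplifying the theorem's full statement), the conclusion is that the social welfare function W over outcomes x takes the form:

```latex
% With impartiality, the weights on individuals are equal, so the
% social welfare function is an (unweighted) sum of individual utilities:
W(x) = \sum_{i=1}^{n} U_i(x)
```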

What is utilitarianism?

Utilitarianism is a moral theory holding that doing good means achieving the greatest amount of good for the greatest number of people, or more broadly, sentient beings; it reduces morality to what benefits individuals.

What are the main components or 'moving parts' of utilitarianism?

Utilitarianism consists of three main parts: consequentialism (the idea that right action reduces to doing the most good), a utilitarian theory of value (goodness reduces to what's good for individual sentient beings), and a theory of well-being (what goodness for an individual consists of, e.g., hedonism, preference satisfaction, or objective list).

What are the different 'flavors' of consequentialism within utilitarianism?

There are several flavors, including act consequentialism (doing the most good in a specific instance), rule consequentialism (following rules that generally lead to the best consequences), and global consequentialism (applying consequentialist evaluation to any focal point like actions, rules, or life policies).

Why might someone move away from strict utilitarianism?

Strict utilitarianism can be incredibly demanding, requiring constant maximization of good and potentially leading to counterintuitive conclusions (like the Repugnant Conclusion) or conflicts with common-sense morality regarding rights, personal options, or special obligations.

When is it appropriate to use utilitarian calculations in decision-making?

Utilitarian calculations are most appropriate and helpful when there are very high values at stake, the thinking can be applied repeatedly (e.g., comparing funding for malaria vs. measles), and there's no temptation to violate conventional moral constraints or special obligations.

What are the different theories of well-being within utilitarianism?

Theories of well-being include hedonism (well-being as feeling good/pleasure), preference theory (well-being as getting what one wants/satisfying preferences), and objective list theories (well-being as achieving inherently good things like loving relationships, accomplishment, or knowledge).

Which beings should count in utilitarian calculations?

The question of which beings count (moral patienthood) depends on the chosen theory of well-being; if well-being is about feeling good, for instance, then beings capable of conscious feeling count, and identifying which beings those are becomes an empirical question.

How should we interpret moral theories like utilitarianism from a meta-ethical perspective?

From an anti-realist meta-ethical perspective, moral theories like utilitarianism are not about uncovering objective truths about the universe, but rather serve as proposals for how to live our lives, express commitments to norms, or provide useful frameworks for decision-making.

What is the role of decision theory in utilitarianism?

Decision theory, particularly expected utility theory, provides a foundation for what to do when one doesn't know what will have the best consequences, suggesting that one should choose the action with the highest expected value based on subjective probabilities and utilities.

Does Harsanyi's Aggregation Theorem prove that we should be utilitarians?

Harsanyi's Aggregation Theorem shows that if individual well-being rankings satisfy certain rationality axioms, and a social welfare function satisfies the Pareto principle and impartiality for a fixed population, then the social welfare function must be a sum of individual utilities. While powerful, it doesn't cover population ethics (creating new people), and it takes individual well-being, rather than arbitrary preferences, as its input.

Why is simply maximizing average utility not a good approach in population ethics?

Maximizing average utility can lead to counterintuitive results, such as suggesting that adding more people who are suffering (but less than the current average) could make the world better, even if it helps no one and increases total suffering.
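
Made-up numbers show the arithmetic (the specific values below are assumptions for illustration, not from the episode):

```python
# A world of suffering people: average well-being is -10.
existing = [-10.0] * 100
# Add 100 more people who also suffer, just less severely than the average.
newcomers = [-5.0] * 100

before = existing
after = existing + newcomers

avg_before = sum(before) / len(before)   # -10.0
avg_after = sum(after) / len(after)      # -7.5: the average "improves"
total_after = sum(after)                 # -1500.0: total suffering grew from -1000.0
# Average utilitarianism calls the second world better, despite more suffering.
```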

How can the 'neutral level' of well-being be defined in population ethics?

The neutral level, where adding a person neither makes the world better nor worse, can be argued to be zero. One intuitive way to define zero is as the value of a maximally short life where nothing happens.

1. Prioritize Reducing Suffering

Focus on reducing suffering and increasing happiness, as these are widely agreed-upon goods that can serve as a common ground for moral action. This principle provides a shared objective for improving the world.

2. Apply Utilitarianism Selectively

Use utilitarian reasoning as your primary guide when your goal is to impartially help others through actions conventionally regarded as acceptable and within your rights. Recognize that it’s a powerful framework for “doing good” but not necessarily a master theory for all moral situations.

3. Integrate Moral Constraints

Acknowledge that strict utilitarianism can conflict with common-sense morality regarding constraints (e.g., not lying, respecting rights), options (e.g., personal life choices not being optimal), and special obligations (e.g., family duties). When applying utilitarian frameworks, respect these conventional moral boundaries in practice.

4. Employ Heuristics for Complex Problems

For complex, high-stakes, or unquantifiable decisions, prioritize using effective heuristics (e.g., funding revolutionary scientists, comparing projects to near-term alternatives) over attempting precise utilitarian calculations. These heuristics can often lead to better outcomes when direct calculation is impractical or misleading.

5. Calculate for High-Stakes Decisions

Engage in detailed expected utility calculations when facing high-stakes decisions that are repeatable or involve significant, quantifiable impact, such as allocating funds between different charitable causes. This approach is most appropriate when you have a productive way to run the numbers.

6. Practice Impartial Empathy

Treat everyone else’s well-being with the same seriousness and care as your own or that of your loved ones, extending the “golden rule” to all sentient beings. This perspective fosters a noble and productive framework for doing good.

7. Aspire to Expected Utility

View expected utility theory as a criterion of successful action to aspire to, rather than a constant computational directive. Aim to make the choices that an ideally rational version of yourself, with perfect information, would judge to have the highest expected value, especially in domains where utilitarianism is applicable.

8. Sum Well-being for Fixed Populations

For situations with a fixed population size, adopt the utilitarian approach of summing up everyone’s well-being to determine the best outcome, provided you accept principles like individual expected utility maximization, Pareto efficiency, and impartiality. This provides a clear framework for evaluating actions in such contexts.

9. Adopt Total Utilitarianism

For population ethics, consider adopting a total utilitarian view where the neutral level of well-being (neither good nor bad to add a life) is set at zero, conceptualized as a maximally short life where nothing happens. This approach avoids issues with average utility and past dependence.

10. Avoid Average Utility Maximization

Do not solely aim to maximize average utility, especially in population ethics: adding more suffering beings can raise the average, as long as their suffering is less severe than the existing average, even though total suffering increases. This highlights a flaw in that aggregation method.

11. Deconstruct Moral Theories

When evaluating moral theories like utilitarianism, break them down into core components (e.g., consequentialism, theory of value, theory of well-being) to understand their full complexity and variations. This helps in grasping the nuances and different “flavors” of a theory.

12. Choose Consequentialist Approach

When making decisions, consider whether to prioritize actions that maximize immediate good (act consequentialism) or to follow rules that generally lead to the best outcomes (rule consequentialism). This choice influences how you navigate ethical dilemmas.

13. Justify Rights with Consequences

When debating or grounding moral rights (e.g., free expression), seek to justify them by explaining their positive consequences for sentient beings or society, rather than simply stating them as inherent. This approach can lead to more satisfying and compelling arguments.

14. Choose a Theory of Well-being

When applying utilitarian principles, explicitly consider and choose a theory of well-being to maximize (e.g., hedonic pleasure, preference satisfaction, or an objective list of goods like relationships and knowledge). This choice clarifies what you are ultimately trying to optimize.

15. Empirically Define Moral Patients

After establishing your theory of well-being, determine which beings count as “moral patients” (e.g., animals, insects) by empirically assessing which ones are capable of experiencing that defined well-being. This separates the ethical goal from the scientific question of consciousness.

16. View Ethics as Proposals

Consider adopting an anti-realist meta-ethical stance, viewing moral theories not as objective truths but as proposals for how to live and live together that we reflectively endorse. This reframes ethical discussions around shared commitments and meaningful ways of life.

doing good is about doing the greatest amount of good for the greatest number of people.

Nick Beckstead

let's like treat everyone else's well being with like the same seriousness and care as I would my own well being or like the well being of people I love.

Nick Beckstead

I don't think, you know, if somebody's mission in life is to do as much good as possible, I think most of the good ways of doing that don't require a lot of lying or like breaking promises or violently coercing people to do things.

Nick Beckstead

I think utilitarianism, that process always ends with, well, because if we did that, it would be worse for these sentient beings by a greater amount than this other thing.

Nick Beckstead

almost everyone agrees that suffering is bad and that it's not just bad for like themselves, but it's like bad for other people to suffer too.

Spencer Greenberg

I'm more married to them than I am to the idea of utilitarianism per se. Let's respect these things in practice. But I'm still really into the idea of [making] doing as much good as possible a big part of my life.

Nick Beckstead

I don't think that I'm like uncovering the final truth about the structure of the universe by doing it.

Nick Beckstead

The thing that comes closest to proving that we should be utilitarians is this argument called Harsanyi's aggregation theorem.

Nick Beckstead
17 or 18
Spencer's age when he was first struck by Jeremy Bentham's ideas, while reading Bentham for a class.
90 minutes
Duration of Nick's explanation of utilitarianism to Spencer Greenberg at a party.
6
Number of impossibility theorems proven by Gustaf Arrhenius regarding conditions for theories of population ethics.
10 billion
Population in Derek Parfit's 'World A' example, all with a very high quality of life.
10 to the 80th
Approximate population in Derek Parfit's 'World Z' example (or some similarly innumerable figure), with well-being just somewhat above zero.
1%
Per-person decrease in well-being in an iterated population ethics example that doubles the population at each step, leading to a 'World Z' situation.
83%
Percentage of people who found 'life-changing questions' valuable, according to Clearer Thinking's scientific studies.
78%
Percentage of people who would recommend 'life-changing questions' to others, according to Clearer Thinking's scientific studies.
88%
Percentage of people who enjoyed answering 'life-changing questions', according to Clearer Thinking's scientific studies.
1955
Year Harsanyi's aggregation theorem paper was published; a conceptually important and underrated theorem.