Estimating the long-term impact of our actions today (with Will MacAskill)

Sep 7, 2022
Overview

Spencer Greenberg speaks with Will MacAskill, author of "What We Owe the Future," about long-termism, an ethical stance prioritizing the long-term future. They discuss its implications for doing good, handling uncertainty, and the challenges of calculating expected value in altruism.

At a Glance

18 Insights · 1h 6m Duration · 12 Topics · 5 Concepts

Deep Dive Analysis

Defining Long-termism and its Moral Importance

Distinguishing Weak vs. Strong Long-termism

Arguments for Strong Long-termism: The Vastness of the Future

Sources of Uncertainty Regarding Long-termism

Handling Uncertainty: Intellectual Inquiry and Robust Strategies

Critique of Solely Maximizing Highest Expected Value

The Optimizer's Curse and Skepticism Towards Calculations

Balancing Top-Down Allocation with Bottom-Up Individual Action

Addressing the Pascal's Mugging Critique of Long-termism

Comparing Individual Impact: Seatbelts, Protests, and World War III

The Challenge of Concreteness and Feedback Loops in Long-termism

Predictable Long-Term Impacts: Extinction and Value Lock-In

Concepts

Long-termism

Long-termism is an ethical stance that emphasizes the moral importance of ensuring the long-term future goes well, considering the potential vastness of future generations and the significant impact present actions can have on them. It suggests that positively impacting the long-term future should be a key priority of our time.

Strong Long-termism

Strong long-termism is the view that positively impacting the long-term future is the single most important priority for humanity. This rests on the idea that future generations could vastly outnumber present ones and that their moral worth is equal to that of people alive today.

Optimizer's Curse

The optimizer's curse describes a phenomenon where, if one evaluates multiple courses of action with some noise or error in the estimation process and then selects the option with the highest estimated value, the chosen option is likely not as good as its estimate suggests. Selecting on noisy estimates systematically favors options whose errors happen to be positive, so the winning estimate is biased upwards.
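
One way to make the curse concrete is a small simulation, a sketch with made-up numbers (none of these figures come from the episode): give every option the same true value, add Gaussian estimation noise, and see how far the winning estimate overshoots.

```python
import random

random.seed(0)

N_OPTIONS = 20      # candidate interventions (illustrative assumption)
TRUE_VALUE = 100.0  # identical true value for every option (assumed)
NOISE_SD = 30.0     # standard deviation of estimation error (assumed)
TRIALS = 10_000

overshoot = 0.0
for _ in range(TRIALS):
    # Noisy estimates of options that all have the same true value.
    estimates = [TRUE_VALUE + random.gauss(0, NOISE_SD) for _ in range(N_OPTIONS)]
    # Pick the option with the highest estimated value, as a naive
    # EV-maximizer would, and record how far its estimate overshoots.
    overshoot += max(estimates) - TRUE_VALUE

print(f"Average overshoot of the winning estimate: {overshoot / TRIALS:.1f}")
# Expect roughly 1.87 * NOISE_SD (about 56 here): the 'best' option looks
# much better than it is, purely because we selected on noise.
```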

Robust Decision-Making

Robust decision-making involves choosing actions that are likely to yield good outcomes across a wide variety of possible worldviews or future scenarios, rather than optimizing for a single, narrow best-guess scenario. It's a practical approach to decision-making when facing significant uncertainty and computational limitations.
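
A minimal sketch of the idea, with entirely hypothetical actions, worldviews, and scores (maximin is used here purely as one simple formalization): instead of picking the action that scores highest under a single best-guess worldview, pick the one whose worst case across worldviews is best.

```python
# Hypothetical scores of two actions under two worldviews (all invented).
scores = {
    "pandemic preparedness": {"long-termism true": 80, "long-termism false": 60},
    "speculative moonshot": {"long-termism true": 100, "long-termism false": -50},
}

best_guess = "long-termism true"

# Optimizing for a single best-guess worldview picks the moonshot...
best_guess_pick = max(scores, key=lambda a: scores[a][best_guess])

# ...while a robust (here: maximin) rule picks the action whose worst
# case across worldviews is best.
robust_pick = max(scores, key=lambda a: min(scores[a].values()))

print("Best-guess pick:", best_guess_pick)  # speculative moonshot
print("Robust pick:    ", robust_pick)      # pandemic preparedness
```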

Pascal's Mugging

Pascal's Mugging is a thought experiment that highlights the unintuitive implications of expected value calculations when dealing with tiny probabilities of enormous (or infinite) amounts of value. It suggests that one might be compelled to take actions for extremely unlikely, but infinitely beneficial, outcomes, which often feels counter-intuitive or non-robust.
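
The arithmetic behind the problem fits in a few lines; all probabilities and payoffs below are invented for illustration.

```python
# All probabilities and payoffs are invented for illustration.
p_mugger, v_mugger = 1e-20, 1e30      # mugger's promise: tiny chance, vast payoff
p_donation, v_donation = 0.9, 1_000   # ordinary donation: likely, modest payoff

ev_mugger = p_mugger * v_mugger        # = 1e10
ev_donation = p_donation * v_donation  # = 900

print(f"EV of paying the mugger: {ev_mugger:.3g}")
print(f"EV of ordinary donation: {ev_donation:.3g}")
# Naive EV maximization says pay the mugger, which feels non-robust. Note
# that the long-termist risks discussed in the episode (0.5% to 20%) are
# nowhere near this tiny-probability regime.
```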

Questions & Answers

What is long-termism?

Long-termism is the idea of taking seriously the vast scale of the future and the moral importance of ensuring the long-term future goes well, focusing on actions in our lifetimes that could impact all future generations.

What is the difference between weak and strong long-termism?

Weak long-termism suggests that positively impacting the long-term future should be a key priority among many, while strong long-termism claims it should be the number one priority, considering the immense potential value in the future.

Why might strong long-termism be true?

Strong long-termism is supported by the potential vastness of the future: there could be trillions upon trillions of future people with significant moral worth, and current actions could affect their well-being for millions or billions of years.

What are the main sources of uncertainty regarding long-termism?

Uncertainty stems from how little we know morally and empirically, including whether we can predictably impact the very long term, the annual risk of civilization-ending catastrophes, and whether a consequentialist ethical framework is the correct way to think about ethics.

How should one handle uncertainty about long-termism and its implications?

Handling uncertainty involves prioritizing intellectual inquiry, building well-motivated and reflective resources (like the effective altruism community), and favoring 'robust effective altruism' by doing things that look good across a wide variety of worldviews or possible future scenarios.

Should one always pursue the highest expected value action, even with high uncertainty?

While calculating expected value is a useful exercise, blindly following the highest expected value, especially with many uncertain parameters, can be misleading due to our imperfect reasoning and the 'optimizer's curse.' It's often justified to also consider heuristics and robust decision-making.

Is long-termism susceptible to the 'Pascal's Mugging' problem?

Will MacAskill argues that long-termism does not rely on 'Pascal's Mugging' because the probabilities of the risks discussed (e.g., World War III, AI catastrophe, biorisk) are not tiny, but rather medium-sized and significant, unlike the extremely small probabilities involved in Pascal's Mugging scenarios.

Do short-term interventions provide better feedback loops than long-term ones?

If most value lies in the long-term future, then short-term interventions provide immediate feedback only on narrow goals (e.g., reducing malaria). They give no feedback on their ultimate impact on the long-term future, where most of the value lies, so in that respect they are no better placed than long-term interventions.

Insights

1. Prioritize Long-Term Future

Take seriously the moral importance and vast scale of the long-term future by identifying and acting on events in your lifetime that can steer civilization towards a better path for future generations.

2. Value Future Generations Equally

Acknowledge that future generations have significant moral worth, similar to people in the present, and that their vast potential numbers mean actions impacting them can have an enormously greater scale of positive effect.

3. Prioritize Existential Risk Reduction

Dedicate significant effort to reducing existential risks (e.g., human extinction, totalitarian lock-in), as these are highly impactful regardless of whether one adopts a long-termist or short-termist perspective.

4. Preserve Future Optionality

Actively work to give future generations as much optionality as possible, rather than passively allowing current events to constrain their potential futures, especially regarding risks like bioweapons or AI.

5. Make Long-Term Impact a Key Priority

Consider positively impacting the long-term future as a key priority among many, rather than the sole priority, given current societal underinvestment in this area.

6. Practice Robust Altruism

Engage in actions that are beneficial across a wide variety of possible future scenarios and worldviews, rather than focusing solely on interventions that only look good under a narrow set of assumptions.

7. Invest in Pandemic Preparedness

Support and invest in robust interventions like early detection systems, advanced PPE, rapid vaccine deployment, and sterilizing technologies (e.g., far-UVC light) to protect against future pandemics, as these are reliably good across a wide range of scenarios.

8. Focus on Predictable Long-Term Impacts

Identify and act on interventions with predictable long-term impacts, such as reducing existential risks like human extinction, as these effects persist indefinitely and do not wash out over time.

9. Embrace Intellectual Humility

Cultivate a nuanced perspective on complex topics, acknowledging uncertainty and being open to the possibility that your current beliefs might be false, especially when advocating for high-stakes ideas.

10. Implement Epsilon-Greedy Altruism

Allocate a significant majority of resources to what is currently believed to be most important, while reserving a smaller fraction for exploratory projects to discover new, potentially high-impact opportunities (see the allocation sketch after this list).

11. Diversify Altruistic Efforts

Adopt a diversified approach to doing good by spreading efforts across multiple important cause areas, rather than going all-in on a single ‘most important’ thing, due to inherent uncertainties and the likelihood of priorities changing.

12. Invest in Intellectual Inquiry

Prioritize intellectual inquiry and research to gain a better understanding of complex issues, such as the validity and implications of long-termism, acknowledging that current priorities might change with new knowledge.

13. Build Altruistic Resources

Focus on growing communities and resources that are careful, reflective, cooperative, and morally motivated, as this will be beneficial even if specific priorities shift in the future.

14. Combine Expected Value with Heuristics

While performing back-of-the-envelope calculations for expected value is useful, also pay attention to heuristics like learning opportunities and personal strengths, as humans are imperfect reasoners.

15. Be Skeptical of the Highest EV Estimate

Be skeptical of interventions estimated to have the highest expected value, as noise and errors in reasoning can systematically bias estimates upwards, making the chosen action seem better than it truly is.

16. Act on Medium Probabilities

Take seriously actions that address medium-sized probabilities (e.g., 0.5% to 20%) of very large negative outcomes, as these are significant risks that warrant substantial investment and effort, similar to how we approach plane safety.

17. Contribute to Collective Efforts

Engage in collective actions (like protests or organizational lobbying) where individual contributions, though having a low probability of being the sole difference-maker, can still be justified by the very large stakes involved.

18. Leverage Flow-Through Effects

Consider excelling in an area where you can have a strong positive impact, even if it’s not directly related to your ultimate priority, as positive flow-through effects can indirectly contribute more than a mediocre direct effort.
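
The epsilon-greedy split in insight 10 can be sketched in a few lines of code. The 75/25 split is MacAskill's suggested EA allocation from the episode; the budget and cause names are hypothetical.

```python
import random

EPSILON = 0.25       # exploration share, per MacAskill's suggested 75/25 split
BUDGET = 1_000_000   # hypothetical annual budget (dollars)

top_priorities = ["existential risk reduction", "pandemic preparedness"]
exploratory_pool = ["new cause research", "forecasting", "institution building"]

random.seed(1)

# Exploit: most of the budget goes to the current best guesses.
allocation = {c: (1 - EPSILON) * BUDGET / len(top_priorities) for c in top_priorities}

# Explore: the reserved fraction is spread over a random sample of
# exploratory projects, to keep discovering new opportunities.
picks = random.sample(exploratory_pool, k=2)
for c in picks:
    allocation[c] = EPSILON * BUDGET / len(picks)

for cause, dollars in allocation.items():
    print(f"{cause}: ${dollars:,.0f}")
```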

Quotes

I feel terrified that I'd be kind of on stage advocating for ideas that might be false. So I really want to figure out if they are.

Will MacAskill

It's just kind of surprisingly rare to see someone who's advocating something by being like, oh yeah, but maybe it's wrong and here are reasons it might be wrong.

Spencer Greenberg

For whatever you care about, you know, that can be well-being, but also knowledge, art, creating a just society, exploration, almost all of that is still yet to come. We're very early on in civilization. We're much more at the beginning of history rather than at the end, at least if we don't cause our own extinction.

Will MacAskill

I think that a lot of philosophy is too hard for humans, at least humans today. Maybe one day humans will be smarter.

Spencer Greenberg

If I tell you to, like, oh, you can get on a plane and it'll have, like, a one in a thousand chance of killing you, like, you're not going to get on. You're not going to take that flight.

Will MacAskill

Key Figures

700,000 to 1 million years: how long humanity would last if it survives as long as a typical mammal species.
Hundreds of millions of years: how long humanity could last if it survives until the Earth is no longer habitable.
Many billions or even trillions of years: humanity's potential lifespan if it manages to live beyond Earth.
A thousand to one: the ratio by which future people would outnumber present people, even on conservative estimates.
0.1%: a hypothetical annual risk of civilizational extinction that would limit humanity's expected lifespan to thousands of years.
20%, maybe one in three: Will MacAskill's personal estimate of the probability of World War III in his lifetime.
One in three or more: his personal estimate of the probability of a rapid technological increase from AI.
10%: his personal estimate of the probability that AI results in existential catastrophe, with particular worry about human misuse.
0.5%: his personal estimate of the probability of worst-case biorisk leading to extinction or the end of civilization.
One in a thousand: the probability of dying in a plane crash, used to illustrate a significant, non-Pascalian risk.
One in 300 million: the probability of dying while driving one mile, a very low risk where an action (wearing a seatbelt) is still rational.
75%: MacAskill's suggested share of effective altruism resources for the things currently judged most important.
25%: his suggested share of effective altruism resources for open, interesting exploration.