How can we save the world? (with Toby Ord)

Jul 15, 2021
Overview

Guest Toby Ord discusses "The Precipice," humanity's current precarious era of escalating power and existential risks like nuclear war, engineered pandemics, and AI. He argues for urgent global prioritization to reduce these risks and achieve "existential security" for humanity's vast future.

At a Glance
15 Insights
1h 6m Duration
13 Topics
8 Concepts

Deep Dive Analysis

Defining 'The Precipice' and Humanity's Precarious Moment

Natural Risks: Asteroids, Supervolcanoes, and Pandemics

Anthropic Reasoning and Observer Selection Effects

Current Technological Risks: Nuclear War and Environmental Destruction

Distinguishing Existential from Catastrophic Risks

Future Technological Risks: Engineered Pandemics

Future Technological Risks: Nanotechnology and Proliferation

Climate Change as a Potential Existential Risk

Achieving Existential Security and Global Coordination

Resource Allocation for Existential Risk Mitigation

Differential Technological Development for Protection

Understanding Long-termism and Future Generations

Balancing Near-Term Urgency with Long-Term Impact

The Precipice

The current historical period in which humanity's escalating power, particularly through technologies like nuclear weapons, has created the capacity for self-destruction, threatening our entire long-term future. It is a precarious moment, and our task is to bring this risky period safely to a close.

Existential Risk

A risk that permanently destroys humanity's long-term potential, which could include human extinction, the unrecoverable collapse of civilization, or an inescapable totalitarian regime. These events are irreversible and destroy our entire future, not just the present.

Global Catastrophic Risks

Risks that cause immense disaster, often defined as killing 10% or more of the global population. Unlike existential risks, these do not necessarily involve the permanent destruction of humanity's long-term potential or complete unrecoverability.

Anthropic Selection Effects

Observer selection effects mean we can only find ourselves in a time and place where observers could exist, which biases the historical record we see and can skew naive estimates of how likely certain events are. Handled carefully, though, our long survival can still support the inference that certain highly destructive events must be rare.

Anthropic Shadow

The possibility that an apparent lull in recent times, such as an unusual absence of large asteroid collisions, reflects not a genuinely low rate of such events but the fact that a lull was necessary for intelligent life to exist at all. It is a way of using indirect evidence to reason about selection effects.

Existential Risk Factor

Something that is not an existential risk itself but, if it occurs, increases the probability of an existential risk. An example is a great power war, which could heighten the chances of various global catastrophes or existential threats.

Differential Technological Development

The strategy of influencing the order in which technologies are developed, specifically by accelerating protective or defensive technologies while potentially slowing down or carefully managing risky ones. The goal is to gain an advantage in mitigating future threats.

Long-termism

A moral framework that emphasizes taking the long-term future of humanity seriously, recognizing that current actions have profound and lasting effects on future generations. It considers the vast potential of future lives and aims to safeguard humanity's potential over immense timescales.

What is 'the precipice'?

It is the current precarious period in human history where our escalating technological power, like nuclear weapons, has given us the ability to destroy ourselves and our entire future.

What are the biggest natural risks to humanity?

Supervolcanic eruptions are considered the largest known natural risk: asteroid impacts and supernovae or gamma-ray bursts are incredibly unlikely, and natural pandemics are hard to classify as purely natural because human activity now mediates their emergence and spread.

How do anthropic selection effects influence our understanding of risks?

We can only find ourselves in a time and place conducive to life, so the historical record we observe is biased toward survival. Handled carefully, though, our long track record can still support the inference that certain highly destructive events must be rare.
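
To make the bias concrete, here is a minimal sketch of the survivorship worry; the formalization is illustrative, not worked through in the episode. The naive inference from $n$ centuries of survival is

$P(\text{survive } n \text{ centuries} \mid p) = (1-p)^n,$

which makes a high per-century risk $p$ look implausible after a long clean record. But observers exist only in histories that survived, so

$P(\text{clean record} \mid p,\ \text{observers exist}) = 1 \quad \text{for every } p,$

meaning our survival alone cannot discriminate between high and low $p$. Evidence less subject to this filter, such as the survival records of related species, helps pin the rate down.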

What is the difference between existential and catastrophic risks?

Existential risks permanently destroy humanity's long-term potential (e.g., extinction, unrecoverable collapse), while catastrophic risks cause immense harm (e.g., killing 10% of the population) but don't necessarily end our long-term future.

How serious is climate change as an existential risk?

While a runaway greenhouse effect like Venus's is currently thought to be physically impossible for Earth, extreme warming could lead to a collapse of civilization from which humanity might not recover, thus posing an existential threat.

What are the main technological risks humanity faces?

Current risks include nuclear war and extreme climate change, while future risks on the horizon are primarily engineered pandemics and advanced artificial intelligence.

How could nanotechnology pose an existential risk?

Nanotechnology could lead to self-replicating nanomachines that outcompete natural systems (gray goo scenario) or, more plausibly, vastly empower individuals or small groups to create weapons of mass destruction, leading to global instability.

What would a world with existential security look like?

It would be a world where humanity universally prioritizes its continued survival, with institutions and norms built to keep existential risks low, making it difficult for any group or nation to threaten global destruction. Space settlement alone is not a complete solution due to correlated risks.

How should resources be allocated to mitigate existential risks?

Currently, humanity spends significantly less on existential risk reduction than on trivial things like ice cream. A substantial increase in funding, perhaps a few percent of world product, is needed, primarily for research and building up relevant communities.

What is differential technological development?

It is the strategy of accelerating the development of protective technologies (e.g., metagenomic sequencing for rapid pathogen identification) while potentially slowing down or carefully managing risky technologies, to improve humanity's overall safety.

What is long-termism and why should we care about the far future?

Long-termism is a moral perspective that takes the vast, potential future of humanity seriously, recognizing that our actions today have profound, long-lasting consequences for billions of future lives. We should care because our generation is uniquely positioned to safeguard or destroy this immense potential.

Is focusing on near-term problems or the long-term future more important?

Existential risks that could strike in the near future (e.g., the next 10-20 years) are urgent because if our generation doesn't address them, no one else will get the chance. This 'near-term long-termism' is highly impactful and feels like a duty; influencing the world thousands of years from now, by contrast, is much harder to predict.

1. Embrace Long-Term Perspective

Adopt a long-termist perspective in moral and policy decisions, seriously considering the profound and lasting impacts of current actions on future generations of humanity.

2. Prioritize Wisdom Growth

Actively work to increase collective wisdom and governance capabilities at a pace that matches or exceeds technological advancement to prevent global catastrophe.

3. Reduce Existential Risk Probabilities

Actively work to reduce the per-unit-time probability of existential catastrophes, aiming for consistently declining or extremely low rates to ensure humanity’s long-term survival.

4. Rapidly Lower Existential Risk

Prioritize and implement strategies to reduce existential risk as quickly as possible, as current levels are unsustainable for long-term survival.

5. Achieve Existential Security

Work towards a state of “existential security” by making global risk reduction a top priority, establishing robust institutions, and developing norms to maintain low levels of existential risk permanently.

6. Address Urgent Existential Risks

Recognize the immediate urgency of addressing existential risks that could materialize within the next few decades, as failure to act now means no future generation will have the opportunity to do so.

7. Navigate Dangerous Era Safely

Dedicate a substantial portion of humanity’s collective effort and focus to navigating the current dangerous period, aiming to survive and overcome present existential threats.

8. Prioritize Existential Risk Reduction

For those adopting a long-termist ethical perspective, the most effective current approach is to focus on reducing existential risks until this area is thoroughly addressed and resourced.

9. Increase Existential Risk Funding

Increase global spending on existential risk reduction to at least match the amount spent on ice cream annually, as current allocations are disproportionately low given the stakes.

10. Invest in Risk Research

Direct funding towards building research communities and enhancing the academic credibility of existential risk studies, as this area is currently under-researched and under-resourced.

11. Prevent Great Power War

Prioritize efforts to avoid great power wars, as the risk of such conflicts significantly increases overall existential risk, potentially more than many individual specific threats.

12. Accelerate Protective Technologies

Prioritize and accelerate the development of defensive or protective technologies to mitigate risks and enhance global security before dangerous technologies proliferate.

13. Identify Defensive Technologies

Apply a "reading vs. writing" or "information gathering vs. action" lens across technological fields to identify and prioritize protective and defensive technologies over those that amplify offensive capabilities.

14. Advance Metagenomic Sequencing

Invest in and accelerate the development and deployment of metagenomic sequencing technology, which can rapidly identify unknown pathogens and serve as a powerful defense against engineered pandemics and bioweapons.

15. Focus on Existential Risk

When addressing existential risk, narrow your focus specifically on threats that permanently destroy humanity’s long-term potential, rather than broadly addressing all forms of unmitigated disasters, to make effective progress.

Our wisdom needs to grow at least as fast as our technology. Otherwise, we're in deep trouble.

Carl Sagan (as quoted by Spencer Greenberg)

Humanity spends more each year on ice cream than we do on existential risk reduction.

Toby Ord

If we don't deal with the risks of the next 30 years, say, no one else is going to be able to.

Toby Ord

An existential risk is a risk to humanity's long-term potential, that it would be permanently destroyed.

Toby Ord

If you've only got power that can wreak havoc on a local scale, then you only need to be able to kind of govern at this local level. If you have power that can wreak kind of global havoc, then the kind of interconnectedness of different countries and some ability to kind of manage on this world scale becomes essential.

Toby Ord
at least 200,000 years
Humanity's existence on Earth so far (a conservative estimate for Homo sapiens)

about a billion years
Earth's remaining habitability (if humanity plays its cards right)

around 1 in 10,000
Natural existential risk per century (implying humanity could expect to last about a million years against natural risks alone)

about one in a million per century
Frequency of a 10-kilometer asteroid impact (equivalent to once every hundred million years)

about one in 150 million
Risk of a 10-kilometer asteroid strike in the next century (low because 99% of the sky has been scanned)

one in 50 million
Chance of a supernova near enough to cause trouble for Earth in the next century

one in 50 million
Chance of a gamma-ray burst near enough to cause trouble for Earth in the next century

one every 80,000 years
Frequency of a Toba-sized supervolcanic eruption (central estimate for a very large eruption)

between one-third and one-half
Kennedy's personal estimate of the odds that the Cuban Missile Crisis would turn into full-scale nuclear war

about one in six
Toby Ord's estimate of an existential catastrophe in the next 100 years (an unsustainable level of risk)

about six centuries
Average survival time at the current level of existential risk, if it remains at one in six per century (see the worked arithmetic after this list)

about 3%
Estimated death toll of the 1918 Spanish Flu, as a share of the global population

immense effort and time
Cost of sequencing a human genome in the past (the Human Genome Project)

less than an hour, or less than a thousand dollars
Cost of sequencing a human genome today, thanks to rapid technological advances

only two years
Gap between the development of CRISPR gene drives and their use by students (illustrating how quickly powerful technologies proliferate)

1% of the way
Current species loss, relative to the level that would historically classify as a mass extinction

something like a hundred pounds
Estimated cost of metagenomic sequencing (per sample, once the technology matures)

about a tenth lower
Estimated reduction in existential risk from avoiding great power war (a rough guess; e.g., from 16% to 14.5% over 100 years)
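
The "average survival time" figures above follow from treating a constant per-century risk as a geometric distribution; the derivation below is a standard calculation consistent with the episode's numbers, not one worked through on air. If an existential catastrophe strikes each century independently with probability $p$, the expected number of centuries until it strikes is

$E[T] = \sum_{t=1}^{\infty} t\,p\,(1-p)^{t-1} = \frac{1}{p},$

so $p = 1/6$ gives $E[T] = 6$ centuries, and a natural-risk level of $p = 1/10{,}000$ gives $E[T] = 10{,}000$ centuries, i.e. about a million years.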