Beyond saving lives: happiness and doing good (with Michael Plant)

Sep 17, 2025
Overview

Dr. Michael Plant discusses applying evolutionary thinking and the 'Darwinian trap' concept to global challenges like climate change and AI safety. He emphasizes the need for global governance to address these existential risks, particularly the rapid advancement of AI and its potential for misuse or gradual human disempowerment.

At a Glance
9 Insights
1h 12m Duration
16 Topics
7 Concepts

Deep Dive Analysis

Transitioning from Climate Change to AI Work

Progress and Challenges in Global Climate Governance

Optimism on Climate Solutions and Role of Technology

Defining and Illustrating the Darwinian Trap Concept

The Fragility of Life and Evolutionary Suicide

Applying Evolutionary Thinking to Society, Companies, and Academia

The AI Arms Race and its Darwinian Demon Dynamics

Why AI is a More Pressing Concern than Climate Change

AI's Potential to Accelerate Dangerous Technologies

The Risk of Gradual Human Disempowerment by AI

AI as a Unique Meta-Technology

Strategies for Global Governance of AI Technology

Challenges and Outlook on AI Safety and Governance

The Fermi Paradox and Life's Self-Destruction Tendency

Decentralized Solutions for Multipolar Traps

Reframing the Narrative on Global Problems

Darwinian Demon

A Darwinian demon is a selection pressure that incentivizes an agent to behave in ways that are detrimental to others. Examples include predators maximizing their survival at the expense of prey, cancer cells replicating aggressively to destroy the host, or corporations prioritizing short-term profit over environmental or public health.

Darwinian Angel

A Darwinian angel represents a selection pressure that favors mutually beneficial cooperation. The natural world is full of such examples, from cooperating molecules forming metabolism, to genes creating genomes, cells forming organisms, and humans building societies and economies through layers of cooperation.

Evolutionary Suicide

This concept describes situations where natural selection, which optimizes only for survival from one generation to the next, favors mutations that are highly successful in the short term but ultimately cause the extinction of the species or group. For example, a predator may become so efficient that it hunts its prey to extinction and then starves itself.

Fragility of Life Hypothesis

This hypothesis suggests that life is incredibly fragile and has been very lucky to survive for billions of years. It posits that throughout evolutionary history there have been many points where life could have gone extinct, whether through 'landmine' mutations or through aggressive replication by early life forms, such as bacteria or algae drastically altering the climate.

Goodhart's Law

Goodhart's law states that when a measure becomes a target, it ceases to be a good measure: once a metric is widely used as a proxy for success, it becomes susceptible to being 'hacked' or gamed. In academia, for instance, targets like the h-index or citation counts can push researchers toward p-hacking or even fraud instead of producing high-quality science.
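
As a toy illustration of this dynamic (the effort budget and the gaming payoff are invented for this sketch, not figures from the episode), the moment the proxy metric becomes the objective, effort flows away from genuine work:

```python
# Toy model of Goodhart's law: a researcher splits a fixed effort budget
# between genuine science and metric-gaming (hypothetical numbers).

BUDGET = 10  # units of effort per year

def true_quality(science, gaming):
    """The thing we actually care about: only honest work counts."""
    return science

def citation_metric(science, gaming):
    """The proxy: citations respond to honest work, but in this sketch they
    respond even more strongly to gaming (salami-slicing, citation rings)."""
    return science + 3 * gaming

def optimize(target):
    """Choose the effort split (science, gaming) that maximizes `target`."""
    splits = [(s, BUDGET - s) for s in range(BUDGET + 1)]
    return max(splits, key=lambda split: target(*split))

honest = optimize(true_quality)     # optimize the real goal -> (10, 0)
gamed = optimize(citation_metric)   # optimize the proxy     -> (0, 10)

# Once the metric is the target, all effort goes into gaming it, and the
# true quality of the work collapses from 10 to 0.
assert true_quality(*honest) == 10
assert true_quality(*gamed) == 0
```

The point of the sketch is that nothing dishonest is needed at the level of intent: a rational optimizer pointed at the proxy will drift away from the goal whenever the proxy is cheaper to move.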

AI as Meta-Technology

AI is considered a meta-technology because, unlike specific tools like cars or vacuum cleaners, it enhances fundamental cognition and intelligence itself. This means it can enable innovations across all possible fields, making it a powerful dual-use technology capable of solving complex problems or creating new, dangerous weapons.

Gradual Disempowerment Scenario

This AI risk scenario suggests that, instead of a sudden catastrophic event, humans could gradually lose control and agency as societies outsource ever more labor and decision-making to AI. Companies and countries, no longer dependent on human labor, would have less incentive to care about human welfare, leading to a world that is no longer controlled by humans and is alien to them.

Why did Christian transition from working on climate change to AI?

Christian started out in AI safety but felt the field was not taken seriously, so he shifted to climate change, inspired by utilitarianism and global governance. After the 'ChatGPT moment', AI became widely recognized as a serious global coordination problem, prompting his return to the field.

Has global governance made progress on climate change?

Yes, significant progress has been made, including the Paris Agreement, all countries agreeing to phase out fossil fuels, and the development of global building standards for calculating carbon emissions, which are becoming legal requirements in many jurisdictions.

What are the three main strategies for combating climate change?

The three realistic strategies are government collaboration (major countries coordinating), technology (making it self-interest not to pollute or enabling carbon capture), and pressure on corporations (since a small number do most of the polluting).

How does natural selection relate to long-term survival?

Natural selection is an optimization algorithm that cannot make long-term predictions; it primarily optimizes for survival from one generation to the next. This can lead to 'evolutionary suicide,' where mutations that are highly adaptive in the short term cause extinction in the long run.
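
The "no look-ahead" point can be sketched in a few lines (the variants and fitness numbers are hypothetical): selection compares only next-generation reproductive success, so it reliably favors a variant that is a long-run dead end.

```python
# Hypothetical hunting strategies: short-term fitness (offspring per
# generation) versus long-run sustainability.
variants = {
    "moderate hunter":  {"fitness": 1.1, "sustainable": True},
    "efficient hunter": {"fitness": 1.5, "sustainable": False},  # exhausts prey
}

def natural_selection(options):
    """Selection is greedy: it favors whichever variant out-reproduces the
    others *this* generation, with no access to long-run consequences."""
    return max(options, key=lambda name: options[name]["fitness"])

winner = natural_selection(variants)
assert winner == "efficient hunter"          # wins the short-term race...
assert not variants[winner]["sustainable"]   # ...and is an evolutionary dead end
```

The greedy `max` is the whole argument: there is no term in the objective for what happens ten generations out, which is exactly how an adaptive mutation can double as a 'landmine'.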

How does the Darwinian trap apply to the AI arms race?

The AI arms race is a Darwinian trap because financial selection pressures incentivize AI companies to develop and release models faster, even if they pose existential risks. Pausing for safety testing means losing the race for capital and talent, creating a strong disincentive to prioritize safety.
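
The incentive structure described here is a classic prisoner's dilemma. A minimal sketch, with payoffs invented only to exhibit the trap (not figures from the episode):

```python
# Two AI labs each choose to "pause" (cooperate on safety) or "race"
# (ship faster). Payoffs are illustrative.
PAYOFFS = {
    # (lab_a, lab_b): (payoff_a, payoff_b)
    ("pause", "pause"): (3, 3),  # shared, safer progress
    ("pause", "race"):  (0, 5),  # the pauser loses capital and talent
    ("race",  "pause"): (5, 0),
    ("race",  "race"):  (1, 1),  # full arms race: worst collective outcome
}

def best_response(their_choice):
    """Lab A's payoff-maximizing choice, holding Lab B's choice fixed."""
    return max(["pause", "race"],
               key=lambda mine: PAYOFFS[(mine, their_choice)][0])

# Whatever the rival does, racing is the individually rational choice...
assert best_response("pause") == "race"   # 5 > 3
assert best_response("race") == "race"    # 1 > 0
# ...even though mutual racing (1, 1) is worse for both than mutual
# pausing (3, 3). That gap is the trap.
```

The trap is structural: no lab needs to be reckless or greedy for (race, race) to be the equilibrium, which is why the episode argues for changing the game rather than blaming the players.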

Why is AI considered more important or pressing than climate change?

AI is seen as more pressing because it is a 'meta-technology' that can unlock innovations across all fields, including the creation of new and dangerous weapons, and it is developing much faster than anticipated. It can accelerate multiple arms races simultaneously, posing an existential risk.

How could AI accelerate the creation of dangerous biological weapons?

AI can make it significantly easier to understand specific gene functions and automate feedback loops for genetic combinations, allowing for rapid engineering of viruses with enhanced lethality, airborne transmission, longer incubation periods, or even ethnicity-specific targeting.

What is the 'gradual disempowerment' scenario for AI?

This scenario suggests that humans might gradually cede control and decision-making to increasingly intelligent AI systems. As corporations and countries become less reliant on human labor, their incentives to care about human welfare diminish, potentially leading to a society that is alien to humans and no longer under their control.

How can global governance be applied to AI, given it's software?

While AI is software and easily copied, its governance can focus on the highly concentrated AI chip (GPU) value chain. Mechanisms could be built into the hardware to prevent AI model training without certain safety evaluations, or export controls could track chip location and usage.

What is the speaker's overall outlook on AI safety?

The speaker's outlook fluctuates between optimism and pessimism. While he is emotionally optimistic about growing awareness and progress, he finds it objectively hard not to be pessimistic, given life's tendency to self-destruct and the difficulty of global coordination against local incentives.

How does the Fermi Paradox relate to the 'fragility of life' hypothesis?

The Fermi Paradox (why we don't see alien life) could be explained by the 'fragility of life' hypothesis: life tends to self-destruct before reaching technological advancement or before AI can spread across the galaxy. This suggests a continuous 'great filter' throughout evolution.

What are decentralized solutions to multipolar traps or Darwinian demons?

Decentralized solutions leverage the interconnectedness of global economies and supply chains. By increasing supply chain transparency and allowing essential actors in a value chain to 'vote' against dangerous uses (e.g., by refusing to provide components), collective action can be enforced through reputational markets.
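
One way to picture the mechanism (the actors and the refusal rule are hypothetical): a product that requires every link of its chain effectively gives each essential supplier a veto.

```python
# Each essential actor in a hypothetical AI-chip value chain "votes" by
# deciding whether to supply for a given end-use.
willingness = {
    "chip designer":      True,
    "lithography vendor": False,  # judges the end-use too dangerous
    "fab":                True,
    "assembly partner":   True,
}

def product_ships(chain):
    """With no substitutes available, every essential actor must
    participate, so a single refusal blocks the product: collective
    action without a central regulator."""
    return all(chain.values())

assert product_ships(willingness) is False  # one 'no' vote is enough
```

The `all(...)` is doing the conceptual work: the more concentrated and irreplaceable a link is (as with advanced lithography), the more its "vote" counts.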

1. Change Problem Narrative: “Hate the Game”

Shift your perspective on complex problems from blaming individuals (corrupt politicians, greedy CEOs) to understanding and addressing the underlying systemic “game” or incentives that drive problematic behavior. Focus on changing the system rather than just criticizing the players.

2. Invest in Technology Impact Prediction

Prioritize and advocate for significant investment in predicting the long-term implications of new technologies through advanced simulations before their widespread deployment. This helps avoid unforeseen negative consequences.

3. Advocate Symbolic AI Development

Support a shift in AI development towards symbolic approaches that allow for mathematical proofs of an AI’s likely actions, rather than building general intelligences whose behavior is impossible to fully predict. This aims for more controllable and predictable AI.

4. Advocate Supply Chain Transparency

Push for greater transparency in supply chains, especially for critical technologies like AI chips. This allows all involved actors to understand the end-use of components and leverage their influence for safer outcomes.

5. Leverage Value Chain “Votes” for Safety

In interconnected value chains, recognize that each essential actor holds a “vote.” If you are such an actor, consider leveraging your position to pull out if a product is deemed dangerous, potentially forcing a safer approach.

6. Support AI Chip Tracking

Support initiatives that track the location and access of advanced AI chips to prevent them from being acquired by dictatorial regimes or terrorist groups. This reduces the risk of misuse of powerful AI technology.

7. Apply Goodhart’s Law

When a metric becomes widely used as a target, expect Goodhart's law to apply: the metric will be gamed. Look for ways people might be hacking the metric rather than genuinely improving the underlying goal.

8. Identify System Hacking in Promotions

Observe if promotions in your workplace are driven by “hacking the system” (e.g., befriending superiors) rather than pure competency. This awareness can help you understand underlying selection pressures and how to navigate them.

9. Leverage Supply Chain for Net Zero

If your company is in a jurisdiction committed to net zero, actively push your supply chain producers to also adopt net zero practices. This is crucial because 90% of emissions are often located in supply chains, creating a top-down network effect.

So you can sort of see all of life as this tug of war between, you know, the forces of defection, the demons, and the forces of cooperation, the angels.

Christian

So I actually think that, you know, life is just incredibly fragile. And I call that the fragility of life hypothesis. And I think we might have been just very lucky for 4 billion years or so.

Christian

So it's sort of this weird situation where they all realize that the technologies that they're developing might have an existential risk, but they're not willing to pause.

Christian

I think AI is actually the most important thing, because it is, in a sense, the ultimate dual use technology, meaning that it can be used for both good and bad.

Christian

So instead of hating the players, what we need to hate the game, or perhaps even better change the game.

Christian
90%
Average percentage of corporate emissions located in supply chains. For companies committed to net zero, this means their suppliers must also go net zero.
2008
Year Christian first got into AI safety and governance; his interest predates the recent AI boom.
2013-2014
Years Christian worked at the Future of Humanity Institute, during Nick Bostrom's work on superintelligence.
100%
Lethality of the rabies virus; if contracted, it is more or less guaranteed to be fatal.
10 months
Incubation period of hepatitis B, cited as an example of a weaponized virus's potential incubation period.
$1,500
Cost to create a chicken sandwich from scratch (Stone Age technology, no trade); it took six months, illustrating economic interconnectedness.
100,000
Number of NVIDIA GPUs used to train xAI's latest model, highlighting the massive compute requirements of advanced AI.
4 billion years
Approximate time life has existed on Earth, used to contextualize the fragility of life and the Fermi Paradox.