Darwinian Demons: Climate Change and the AI Arms Race (with Kristian Rönn)

Sep 10, 2025 · Episode Page
Overview

Kristian, entrepreneur and author of The Darwinian Trap, discusses global governance challenges in climate change and AI. He argues AI is a meta-technology posing existential risks via arms races and gradual human disempowerment, advocating for global chip governance and a shift in problem-solving narratives.

At a Glance
16 Insights
1h 17m Duration
17 Topics
9 Concepts

Deep Dive Analysis

Motivation for Climate Work and Shift to AI

Progress and Challenges in Climate Global Governance

Effectiveness of Climate Agreements and Future Outlook

Three Strategies for Climate Change Mitigation

Transition from Climate to AI Safety and Governance

The Darwinian Trap: Darwinian Demons and Angels

Evolutionary Suicide and the Fragility of Life Hypothesis

Applying Evolutionary Thinking to Society and Institutions

Goodhart's Law and its Impact on Metrics

The AI Arms Race Through a Darwinian Lens

Why AI is a More Pressing Global Risk

AI as a Meta-Technology and New Arms Races

Gradual Disempowerment of Humans by AI

Governing AI: Challenges and Chip-Level Solutions

Realism of AI Governance and Need for Catastrophe

Decentralized Solutions for Global Cooperation

Changing the Narrative on Societal Problems

Global Governance

The challenge of coordinating globally on issues like climate, AI, and nuclear to reduce global risks, which Kristian sees as the most impactful area to work on.

Darwinian Demon

A selection pressure that incentivizes an agent to behave in a way that is detrimental to others, maximizing its own survival at their expense. Examples include predators, cancer cells, or corporations prioritizing short-term profit over environmental health.

Darwinian Angel

A selection pressure that promotes mutually beneficial cooperation, leading to complex, cooperative structures observed throughout nature, from molecules to societies.

Evolutionary Suicide

A biological concept where a mutation that is highly adaptive and successful in the short term inadvertently leads to the extinction of the species or group in the long term, because natural selection cannot predict future consequences.

Fragility of Life Hypothesis

The idea that life is inherently fragile and has been incredibly lucky to survive for billions of years, having navigated numerous points in evolutionary history where self-destructive mutations or environmental changes could have led to its complete extinction.

Goodhart's Law

The principle stating that when a metric becomes widely used as a target for control or evaluation, it ceases to be a good measure, as individuals or systems will find ways to optimize for the metric itself rather than the underlying goal it was intended to represent.
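As an illustration, here is a toy model (an illustrative construction, not from the episode) of an agent with a fixed effort budget who is scored on a proxy metric, paper count, while the thing we actually care about is total research quality. Once the proxy becomes the target, optimizing it directly makes the metric rise while the underlying goal collapses:

```python
# Toy illustration of Goodhart's Law. All quantities are hypothetical:
# an agent splits a fixed effort budget between writing more papers
# (the measured proxy) and making each paper deeper (unmeasured quality).

def outcomes(effort_on_quantity: float, budget: float = 10.0):
    """Return (proxy_metric, true_goal) for a given effort split."""
    papers = effort_on_quantity                       # each unit of effort yields one paper
    quality_per_paper = budget - effort_on_quantity   # remaining effort goes into depth
    proxy = papers                                    # what gets measured and rewarded
    true_goal = papers * quality_per_paper            # what we actually want
    return proxy, true_goal

# Before the metric becomes a target: balanced effort.
before = outcomes(effort_on_quantity=5.0)   # proxy = 5.0, true goal = 25.0

# After: the agent optimizes the measured proxy directly.
after = outcomes(effort_on_quantity=9.0)    # proxy = 9.0, true goal = 9.0

print(before, after)  # the proxy improves while the true goal falls
```

The crossover is the essence of the law: the proxy was a fine measure exactly until someone started optimizing it.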

Meta-Technology

A technology that, unlike specific tools with limited purposes, can enable innovations and advancements across all possible fields, fundamentally transforming capabilities across an entire system or society. AI is described as such a technology.

Multipolar Trap

A situation where individual or group incentives drive competitive behaviors that lead to collectively suboptimal or destructive outcomes, even when all parties are aware of the potential negative consequences. Kristian also refers to these as Darwinian demons.

Trusted Execution Environment

A secure area within a processor that can run small, isolated computer programs to verify specific conditions. In the context of AI, it could potentially be used to test if an AI model is being trained according to safety protocols directly within the chip hardware.
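To make the glossary entry concrete, here is a minimal sketch of the kind of authorization check such an enclave might run before releasing compute for a training job. Every name, threshold, and field here is an illustrative assumption; real TEE attestation flows are far more involved than this:

```python
# Hypothetical sketch of a chip-level training gate running inside a
# Trusted Execution Environment. All names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class TrainingRequest:
    model_hash: str           # fingerprint of the model/code to be trained
    compute_budget_flops: float
    safety_eval_passed: bool  # result of an agreed-upon evaluation suite

FLOP_THRESHOLD = 1e25  # illustrative scale above which the check applies

def authorize(request: TrainingRequest) -> bool:
    """Runs inside the isolated enclave, where the host OS cannot tamper with it."""
    if request.compute_budget_flops < FLOP_THRESHOLD:
        return True  # small training runs pass through unregulated
    return request.safety_eval_passed  # large runs require a passed safety evaluation

small = TrainingRequest("abc123", 1e20, safety_eval_passed=False)
large = TrainingRequest("def456", 1e26, safety_eval_passed=False)
print(authorize(small), authorize(large))  # True False
```

The design point is that the check lives in hardware the chip's owner cannot easily bypass, which is what makes compute governance more tractable than governing software alone.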

What inspired Kristian to work on climate change and then transition to AI?

Kristian was inspired by utilitarianism and the need for global coordination to reduce global risks, initially focusing on climate change as a domain with momentum for global governance. He transitioned to AI safety when the technology became more tangible and people started taking it seriously after the ChatGPT moment.

Has climate as a field made progress on global governance and achieving real goals?

Yes, significant progress has been made, including the Paris Agreement and all countries agreeing to phase out fossil fuels, which sends signals for capital flows into green technologies and creates network effects in value chains.

Do climate agreements, like phasing out fossil fuels, actually cause positive effects given the lack of enforcement?

Yes, they cause positive effects by sending signals for capital investment in green technologies and pushing companies in committed economies to reduce emissions throughout their supply chains, even if direct enforcement is limited.

What are the three realistic strategies for addressing climate change?

The three realistic strategies are government collaboration (political will), technology (making non-polluting options self-interested or easier), and pressure on corporations (targeting the relatively small number of companies responsible for most pollution).

How can evolutionary thinking be applied to societal structures like companies and academia?

Evolutionary thinking can be applied by identifying units of culture or strategy (memes) that are copied (e.g., business models, research methods) and selected for (e.g., companies going out of business, researchers getting tenure), leading to adaptive behaviors.

Why is AI considered a more pressing or important cause area than climate change?

AI is considered more pressing because it is a 'meta-technology' that can unlock innovations across all fields, including new and dangerous weapons, and its development is happening much faster than anticipated.

How can AI lead to new and dangerous arms races beyond traditional warfare?

AI can accelerate arms races in areas like engineered pandemics (e.g., making rabies airborne with a long incubation period and targeting specific ethnicities) or atomic precise manufacturing (e.g., creating swarms of nanobots for harmful purposes).

What is the concern about the 'gradual disempowerment of humans' by AI?

The concern is that as societies increasingly outsource labor and decision-making to AI, human welfare may become less relevant to corporations and countries, leading to a future where humans have no control over a society utterly alien to them.

How can global governance for AI be implemented, given that software is easily copied?

A tractable approach is to govern the physical AI chips (GPUs) required to train large models, as their design and manufacturing are highly concentrated. Mechanisms could be built into the hardware to reject training unless safety evaluations are met.

Is it realistic to implement chip-level governance for AI right now, given the current state of generalist AI models?

It is not realistic at this moment for generalist AI models because it's hard to know what a neural network will do. Current efforts focus on tracking chip location to prevent access by adversarial regimes. A shift to more symbolic, domain-specific AI might be needed for more direct chip-level safety controls.

How can the world be convinced to implement strong AI regulation without an imminent catastrophe?

The US can use export controls on AI chips to incentivize other countries to track chip usage and ensure safety. This step-by-step approach could eventually lead to international cooperation, potentially after a 'Cuban Missile Crisis' moment for AI.

Why don't we see evidence of advanced alien civilizations in the universe (Fermi Paradox)?

Kristian suggests that life might self-destruct too quickly, often before reaching the stage of developing advanced AI. This 'continuous filter' could be due to evolutionary suicide mechanisms, where short-term beneficial mutations lead to long-term extinction.

What are potential solutions to 'multipolar traps' or Darwinian demons in global governance?

Solutions include a strong world government (with risks of totalitarianism) or decentralized options that leverage the interconnectedness of global economies and supply chains. By increasing supply chain transparency and allowing actors to 'vote' for safety, cooperation can be incentivized.

1. Shift Problem-Solving Narrative

Change the narrative from blaming individuals (corrupt politicians, greedy CEOs) to understanding and addressing the systemic “game” or root causes that incentivize harmful behaviors, as this approach is more effective for solving core problems.

2. Adopt Utilitarian Long-Termism

Care about all beings, including those in the future, and focus on global coordination to reduce global risks like climate change, AI, and nuclear threats, as this perspective is crucial for tackling complex, interconnected issues.

3. Prioritize AI for Climate Solutions

Recognize that aligned artificial intelligence could significantly simplify solving climate change by enabling breakthroughs in areas like tree planting, fusion energy, and carbon capture, making it a highly impactful area to focus on.

4. Govern AI Through Chip Controls

Advocate for and implement governance mechanisms at the compute level (AI chips/GPUs), potentially building hardware-level checks to prevent training of dangerous AI models without safety evaluations, as this offers a tractable way to control powerful AI development.

5. Develop Specialized AI Tools

Shift AI development from creating general intelligences to building specialized tools (e.g., chess engines, protein folding algorithms) that excel in specific domains, making them easier to control and predict for safety.

6. Embrace Slow Technological Evolution

For long-term survival, civilizations should adopt a strategy of slow and careful technological innovation, investing heavily in predicting and simulating the implications of new technologies before deployment, to avoid “landmines” in the fitness landscape.

7. Increase Supply Chain Transparency for Safety

Foster greater transparency in technology supply chains (e.g., AI chips) to empower every actor to “vote” against dangerous uses, leveraging interconnectedness to enforce safety standards and prevent unilateral risky actions.

8. Utilize Export Controls for AI Safety

Governments, especially those with control over advanced AI chip production, should use export controls to enforce tracking and safety evaluations for AI models trained with these chips, incentivizing safer development globally.

9. Monitor AI Chip Locations

Implement systems to track the location and access of advanced AI chips (GPUs) to prevent their use by hostile nations or terrorist groups, as this is a more tractable problem for immediate AI governance.

10. Prioritize Safety Over Profit in AI

Recognize that financial incentives can override safety concerns in AI development, as seen with OpenAI’s board drama, and advocate for structures that prioritize safety and mission over profit to mitigate existential risks.

11. Beware of Hacking Metrics

Be aware of Goodhart’s Law: when a metric (e.g., academic citations, paper count) becomes a target, it ceases to be a good measure, as people will find ways to hack it rather than genuinely improve quality.

12. Pressure Supply Chains for Net Zero

If you are a company or consumer in an economy committed to net zero, pressure suppliers in your value chain to also adopt net zero practices, as on average 90% of emissions are located in supply chains.

13. Invest in Green Technologies

Support and invest in green technologies, as international agreements signal capital flows towards them, driving innovation and cost-effectiveness, making them more competitive than fossil fuels.

14. Answer Life-Changing Questions

Visit clearerthinking.org to answer a set of scientifically validated “life-changing questions” that 83% of people found valuable for gaining new insights and improving their self-understanding.

15. Subscribe to “One Helpful Idea” Newsletter

Sign up for the “One Helpful Idea” email newsletter at podcast.clearerthinking.org to receive a weekly valuable idea, new podcast episodes, essays, and event announcements.

16. Provide Podcast Feedback

Give feedback, ask questions, and leave comments for podcasts you listen to, as this helps the creators improve the show over time.

I never felt like climate in and of itself was the most impactful cause area. But I did feel like global coordination and global governance was the most impactful cause area.

Kristian

So you can sort of see all of life as this tug of war between, you know, the forces of defection, the demons, and the forces of cooperation, the angels.

Kristian

And I think one like mental model of this is that you could sort of imagine life on this earth as sort of a random walk throughout like a fitness landscape. And in that fitness landscape, there might be, you know, landmines, but we will never be born on a planet where we stepped on the landmine.

Kristian

What Goodhart's law states is that whenever a metric becomes widely used, there is like ways to sort of hack that metric.

Kristian

So I think AI is more special because it's essentially a meta technology in a way.

Kristian

intelligence is, I think, a completely different thing, because then we're talking about like fundamental cognition. And intelligence, or at least my favorite definition of intelligence is that it is like an agent's capability of achieving whatever their goal is in multiple different environments, which essentially means that if you're more intelligent, you're not just better in chess, or you're not just better in physics, you're like better across the board at everything.

Kristian

Instead of hating the players, we need to hate the game, or perhaps even better, change the game.

Kristian
almost five years
Ryan Kessler's tenure as editor and audio engineer before stepping into the producer role.
four years
Typical duration of political election cycles, compared with climate change's impact over several decades.
two years ago
Time since the agreement to phase out fossil fuels was signed in the United Arab Emirates, as an extension of the Paris Agreement.
2050
Common target year for countries to phase out fossil fuels as part of nationally determined contributions (NDCs).
2060 and 2070
Less ambitious fossil-fuel phase-out target years in some countries' NDCs.
90%
Average share of a company's emissions located in its supply chain, highlighting the network effect of climate commitments.
2008
Year Kristian first became interested in AI safety and governance.
2013-14
Years Kristian worked at the Future of Humanity Institute, while Nick Bostrom was working on Superintelligence.
more than 10 years in the future (2035)
Kristian's earlier prediction for when today's AI capabilities would be reached; AI advanced faster than he expected.
100%
Lethality of rabies; once contracted, it is virtually always fatal.
10 months
Incubation period of hepatitis B, used as a hypothetical for an engineered airborne virus.
$1,500
Cost to create a chicken sandwich from scratch (Stone Age technology, no trade); it took six months to complete.
a hundred thousand
Number of NVIDIA GPUs used for xAI's latest model, illustrating the compute requirements for large AI models.
4 billion years
Duration of life's existence on Earth, referenced in the context of the fragility of life hypothesis and the Fermi Paradox.