AI, US-China relations, and lessons from the OpenAI board (with Helen Toner)

Feb 26, 2025
Overview

Helen Toner reflects on her time on the OpenAI board, offering insights on power dynamics and governance. She then turns to AI policy, the US-China AI race, and the complexities of semiconductor supply chains. Finally, she shares lessons on communication drawn from working with horses, with applications to parenting and everyday human interaction.

At a Glance
28 Insights
1h 21m Duration
18 Topics
6 Concepts

Deep Dive Analysis

Lessons from the OpenAI Board Experience

Dynamics of Power, Incentives, and Narrative Control

Helen Toner's Background in AI Policy and National Security

Understanding the US-China AI Competition Framework

Defining 'Being Ahead' in AI Technology

US vs. China: Leadership in AI Frontier Models and Deployment

AI's Disruptive Role in Modern Warfare and Battlefield Drones

Challenges of Banning Autonomous Weapons and Defining 'Autonomous'

AI Misalignment and Inadvertent Escalation Concerns

Prospects for US-China Cooperation on AI Safety

Government and Public Concern Regarding AI Risks

The Geopolitical Significance of Taiwan's Semiconductor Industry

Evolution of Chip Manufacturing and Supply Chain Concentration

Impact of Export Controls on AI Chips to China

Efficiency Gains and Latent Intelligence in AI Models

Insights from Working with Horses and Animal Training

The Role of Energy, Aura, and Body Language in Communication

Recommended Resources for Staying Informed on AI Policy

Collective Action Problem (Power Dynamics)

This describes a situation where individuals, fearing repercussions, avoid criticizing a powerful person even when they hold negative views. When many stakeholders must agree before acting, this suppression of criticism can block necessary action and keep the truth from surfacing.

Technological Leader-Follower Dynamic

In technology, the first actor (leader) undertakes the difficult, exploratory work of innovation, like 'drilling a tunnel' or 'hiking through a snowy forest.' Subsequent actors (followers) find it easier to replicate or adapt the technology, as the path has been partially cleared, even if they still work hard.

Tacit Knowledge (Semiconductor Manufacturing)

Beyond theoretical understanding, manufacturing advanced semiconductors requires hands-on experience and practical know-how that is difficult to codify or transfer through textbooks. This 'tacit knowledge,' combined with high R&D costs, contributes to the concentration of expertise in a few leading companies like TSMC.

Inadvertent Escalation (Military AI)

This refers to a situation where AI systems, particularly in military or tense geopolitical contexts, might make decisions that lead to an unintended escalation of conflict. The AI's actions, while perhaps not explicitly designed to escalate, could be perceived as such by an opposing side, leading to a dangerous spiral.

Operant Conditioning (Animal Training)

A psychological principle in which behavior is shaped by rewards and punishments. Punishment can suppress behavior, but a purely punitive approach damages the relationship, whereas a reward-based approach fosters cooperation and a desire to engage in the training process.
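The reward/punishment dynamic can be sketched as a toy model (illustrative only; the update rule, the `shape_behavior` function, and its parameters are assumptions, not anything discussed in the episode):

```python
# Toy model of operant conditioning: each reinforcement nudges the
# probability that a behavior recurs toward 1 (reward) or 0 (punishment).
def shape_behavior(trials: int, reward: float,
                   p: float = 0.2, lr: float = 0.1) -> float:
    """Return the behavior's probability after `trials` reinforcements.

    `reward` is a signal in [-1, 1]: positive values reinforce the
    behavior, negative values suppress it.
    """
    for _ in range(trials):
        target = 1.0 if reward > 0 else 0.0
        p += lr * abs(reward) * (target - p)  # close a fraction of the gap
    return p
```

Fifty rewarded trials push the probability above 0.99, while fifty punished trials drive it near zero; note the sketch captures only the shaping of frequency, not the relationship costs of punishment that the episode emphasizes.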

Aura/Energy (Interpersonal Communication)

This concept suggests that a person's overall posture, demeanor, and subtle body language project an 'energy' or 'aura' that affects those around them. This can influence how others perceive and react to an individual, often subconsciously, and can be intentionally modulated to create more or less space for others.

What did Helen Toner learn from her experience on the OpenAI Board?

Helen learned how people relate to power, how criticism is suppressed in the face of powerful individuals, and how governance structures designed on paper to serve the public interest can fail under immense pressure and market forces.

How should one approach the US-China AI competition?

It's important to recognize that the US national security community views almost everything through the lens of competition with China. When assessing who is 'ahead,' it depends on whether one is looking at research leadership (where the US generally leads) or practical application and diffusion of technology (where factors like existing software ecosystems and cloud access matter).

Is China truly 'eating our lunch' in AI, or is the US still leading?

The US is generally leading in frontier AI models, with China being a fast follower, quickly replicating US advancements. While China excels in specific areas like image processing and surveillance due to government support, the US often has an advantage in broader technological diffusion and enterprise software infrastructure.

Will autonomous drones fundamentally change warfare?

Yes, autonomous drones are disrupting traditional military thinking by offering cheap, numerous assets that can potentially damage expensive, 'exquisite' equipment like aircraft carriers or fighter jets. This paradigm shift requires militaries to rethink R&D and procurement processes, as seen with Ukraine's rapid adaptation.

Should autonomous weapons be banned globally, and how would 'autonomous' be defined?

While concerns about AI with weapons are valid, a blanket ban is tricky due to definitional challenges. A UN attempt to define autonomous weapons by 'targeting' proved problematic, as human 'OK' buttons could still lead to de facto autonomous systems. Existing international humanitarian law might be a more effective framework for regulating AI use in conflict.

Are there realistic ways for the US and China to cooperate on AI safety?

Given the poor state of US-China relations, binding agreements like treaties are unlikely. However, 'softer' cooperation, such as person-to-person technical dialogues between AI experts and sharing information about responsible AI use protocols (like the US political declaration), could still be valuable.

To what extent do US government officials and the public worry about AI?

Concern varies greatly among government officials, with some senior policymakers taking worries about deepfakes, privacy, and even superintelligent misaligned systems seriously. Public surveys show significant concern about AI, but it is rarely a top political issue for voters, limiting its direct impact on federal policy.

Why is Taiwan's semiconductor industry so crucial?

Taiwan, specifically TSMC, manufactures almost all the world's most advanced computer chips, which are foundational for modern civilization and especially critical for advanced AI systems. This concentration is due to the immense capital investment required for R&D and the deep 'tacit knowledge' accumulated over decades, making it a highly strategic and geopolitically sensitive industry.

How can animal training be improved by understanding the animal's perspective?

Instead of viewing an animal as 'naughty' or 'disrespectful,' trainers can achieve better results by trying to understand the animal's internal state, such as fear or confusion. Prioritizing the relationship and addressing underlying needs, rather than solely focusing on punishment or negative reinforcement, can lead to more productive cooperation and a calmer state for the animal.

1. Understand Power Dynamics & Truth

Recognize that individuals often suppress criticism of powerful figures due to fear, creating a collective action problem that hinders truth and justice. Be aware of this dynamic in any organization to foster more honest discussions.

2. Prioritize Relationship Over Punishment

When dealing with ‘bad behavior’ in others (animals, children, adults), prioritize building a strong relationship and addressing underlying needs over immediate punishment. This approach fosters trust and can lead to more productive long-term outcomes.

3. Adopt the Other’s Perspective

Instead of labeling behavior as ‘naughty’ or ‘disrespectful,’ try to understand the underlying reasons (fear, confusion, unmet needs) from the other’s point of view. This empathetic shift enables more effective and compassionate responses.

4. Be Mindful of Your ‘Energy’

Your overall posture, demeanor, and ‘energy’ (holistic body language) subtly project information and an agenda that others, especially sensitive beings like children and animals, pick up on. Cultivate calm acceptance and presence to avoid inadvertently projecting an unwanted agenda.

5. Master Timing in Non-Verbal Training

For non-verbal communication and training (e.g., with horses or toddlers), focus on blocking, preventing, or redirecting unwanted behavior in the moment; this is more effective than delayed verbal instruction or punishment.

6. Avoid Inadvertent Behavioral Reinforcement

Consciously evaluate your reactions to others’ behaviors to avoid inadvertently reinforcing unwanted actions, such as giving attention to a demanding child or treating a pet after it performs a disliked action.

7. Leverage Status for Empowerment

If you hold a higher status, consciously ‘make yourself smaller’ energetically to create space and empower those around you. This approach is particularly effective in mentorship and fosters better collaboration.

8. Recognize Attention as Core Need

Understand that attention is a fundamental need for children and animals; be mindful of where you direct your attention, as they are acutely tracking it and may act out if needs are unmet.

9. Convey Safety to Sensitive Individuals

To build trust and security with sensitive or fearful individuals (like prey animals), actively demonstrate awareness of their environment and emotional state, showing you are ‘tracking threats.’ This allows them to relax.

10. Critically Evaluate Public Narratives

Be skeptical of public narratives that form quickly and confidently, as they often misrepresent the full situation, especially when complete information is unavailable. Understand that skilled narrative crafting can influence perception.

11. Anticipate Governance Structure Failures

Do not rely solely on theoretical corporate or governance structures to withstand immense real-world pressures (market forces, media, internal conflicts). Anticipate and plan for their potential failure when the rubber meets the road.

12. Simplify Stakeholder Buy-in for Action

For decisions requiring broad consensus, recognize that only ‘obvious or egregious’ issues may gain sufficient buy-in, potentially leading to inaction on important but less clear problems. Consider streamlining decision-making processes for critical issues.

13. Prioritize AI Implementation & Integration

Recognize that even without further fundamental AI advances, decades of productive use and significant economic shifts can be achieved by focusing on the implementation and integration of current AI capabilities.

14. Unlock Latent Intelligence in Models

Explore methods like better prompting or fine-tuning to ‘squeeze out’ the significant latent intelligence already present in existing AI models. This can yield substantial improvements without training new models.

15. Differentiate AI Leadership Metrics

When assessing who is ‘ahead’ in AI, specify the domain (e.g., surveillance, military application, frontier research) and timeframe, as different areas have different leaders and relevant metrics. Avoid oversimplification.

16. Prioritize Deployment for Military AI

For military AI, focus on practical application, efficient procurement processes, and integration into existing systems. This is more critical for battlefield advantage than just cutting-edge research.

17. Regulate AI Weapons Broadly

When considering AI weapon regulation, focus on broader appropriate use and compliance with existing international humanitarian law rather than narrow definitions like ‘targeting.’ Narrow bans can be easily circumvented.

18. Maintain High AI Reliability for Military

If advanced AI systems are integrated into military applications, ensure an extremely high bar for their reliability, interpretability, and alignment with human intent to prevent catastrophic outcomes.

19. Guard Against Inadvertent AI Escalation

Be cautious of AI’s role in crisis escalation dynamics in military settings; ensure robust human oversight to prevent unintended conflict escalation due to AI misinterpretations of tense situations.

20. Understand US-China Competition Framework

When engaging with US national security policy on AI, recognize that the dominant lens is competition with China, influencing all related discussions and policy decisions.

21. Leverage US Strengths in Tech Diffusion

Capitalize on the US’s existing advantages in technological diffusion, such as mature enterprise software and widespread cloud services, for effective AI integration across various sectors.

22. Recognize Tech Leader-Follower Dynamics

Understand that the first actor in technological innovation bears the heavy cost of exploration and risk, making it easier for fast followers to replicate advances with less effort and resources.

23. Address Semiconductor Supply Chain Vulnerability

Recognize the critical vulnerability posed by the concentrated global supply chain for advanced semiconductors, as almost all are manufactured by TSMC in Taiwan, creating a single point of failure for AI development.

24. Invest in Tacit Knowledge for Chips

Understand that replicating advanced chip manufacturing requires significant investment in transferring tacit knowledge and training skilled personnel, not just capital. This is a key bottleneck.

25. Control Semiconductor Manufacturing Equipment

Focus on controlling the export of semiconductor manufacturing equipment (SME) to hinder other nations’ ability to build their own domestic advanced chip supply chains. This is a more straightforward control point.

26. Anticipate AI Capability Proliferation

Expect advanced AI capabilities to become increasingly accessible and easier to reproduce over time, requiring less compute and expertise. This means capabilities will spread widely.

27. Pursue ‘Soft’ US-China AI Cooperation

Given current poor geopolitical relations, focus on less ambitious, non-binding cooperation methods like person-to-person technical dialogues and sharing unilateral declarations on responsible AI use.

28. Target State-Level AI Regulation

Political will for federal AI regulation in the US is currently low; direct efforts towards state-level initiatives where there is more momentum and public support for AI oversight.
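Takeaway 14's point about unlocking latent capability through prompting can be made concrete with a minimal sketch (the template wording and function names are hypothetical, not from the episode; no particular model or API is assumed):

```python
# Two ways to pose the same question: a bare prompt, and one that asks
# the model to reason step by step, which often elicits capability the
# same weights already contain.
def direct_prompt(question: str) -> str:
    return f"Q: {question}\nA:"

def reasoning_prompt(question: str) -> str:
    return (
        f"Q: {question}\n"
        "Think through the problem step by step, then give the final "
        "answer on its own line.\nA:"
    )
```

The extra instruction changes only the input, not the model: the same weights, prompted differently, can surface substantially more of their latent capability.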

I think a pretty core dynamic for what was going on in that period was that there were quite a lot of people who were, you know, had bad things to say about this powerful person, but were pretty afraid of what would happen to them if they, you know, if they expressed those views.

Helen Toner

It's sort of the most natural lens to use. And, you know, it fits well onto this topic as well. Like, you know, it's a very consequential technology. The US and China are in the middle of a, you know, big geopolitical competition, strategic competition. And the US and China also happen to be, you know, two of the leading countries in AI.

Helen Toner

I think people intuitively think of the cars on the racetrack where it means like sort of any moment of hesitation by the U.S. would immediately lead to China zipping ahead. And I think that's not really the dynamic we see here.

Helen Toner

So I think sometimes the things that we're imagining of what would be bad about an AI system on the battlefield are actually, like, it would be contravening international law. And so the solution there is maybe not no AI. The solution is, like, really take international law seriously and comply with it.

Helen Toner

I tend to think that most horses are having a pretty bad time a lot of the time when they're around humans, unfortunately.

Helen Toner

I think that's a great way to circle back to the start of the conversation about AI as well. Something I'm really interested to see over the next few years. You know, I tend to agree with what you described, which is, you know, we have these actually very, very sophisticated, you know, processing going on inside our heads of how people are behaving around us and how they're holding their bodies and how they're like modulating their voice and how they're like making eye contact or not.

Helen Toner

2.5 years
Helen Toner's tenure on the OpenAI board: served from 2021 to November 2023
9 months
GPT-4 demo before public release: Helen saw a demo in summer 2022; GPT-4 was released in March 2023
2016
Start of Helen Toner's work in AI policy, at Open Philanthropy
2018
Helen Toner's time in China studying the AI ecosystem, as a Research Affiliate of Oxford University
Early 2019
Founding of CSET (Center for Security and Emerging Technology): Helen helped co-found the think tank at Georgetown University
2014 to 2023 or 2024
Period of the UN process to negotiate a treaty banning autonomous weapons: the GGE (Group of Governmental Experts) process
More than 40
Number of countries signing the US political declaration on responsible military AI: the US issued the declaration and gathered international support
1% tops
Percentage of people ranking AI as a top political issue in surveys: indicates low political salience for most voters
October 2022
Date of the first major US export controls on chips to China: followed by amendments a year later and further changes
1000
Number of well-curated examples needed to elicit reasoning in models: referencing a recent paper suggesting small datasets suffice for reasoning capabilities