Will AI superintelligence kill us all? (with Nate Soares)

Oct 15, 2025 · 1h 24m · 14 insights
Nate Soares, President of the Machine Intelligence Research Institute, argues that building superhuman AI with current methods is a "death sentence" due to alien drives and lack of control. He advocates for a global ban on superintelligence R&D, urging individuals to raise awareness and challenge the inevitability narrative.
Actionable Insights

1. Advocate for Global AI Superintelligence Ban

Support and advocate for a global ban on research and development aimed at creating superintelligence; Soares frames such work as a “grave national security risk” that humanity should collectively back away from.

2. Avoid Current Superhuman AI Methods

Do not build superhuman AI using current methods or current levels of understanding; Soares predicts doing so would be a “death sentence” because such systems would have alien drives and would not be under human control.

3. Don’t Delay Action on AI Risk

Avoid delaying action on AI safety based on predictions that advanced AI is far off, as the pace of AI development is historically unpredictable and can accelerate rapidly.

4. Speak Bluntly About AI Risk

Express concerns about AI’s existential risks openly and directly, rather than couching them in caveats, to overcome the societal reluctance to sound alarmist and foster a more serious conversation.

5. Challenge “Inevitable AI” Narrative

Actively push back against claims that AI development is inevitable or cannot be stopped, reminding others that humanity has the agency to make different choices and step back from the brink.

6. Contact Representatives About AI Risk

Call your elected representatives to convey your worries about AI’s risks, as this can empower them to address the issue and voice concerns publicly without fear of being dismissed.

7. Discuss AI Superintelligence Concerns

Talk openly with others about concerns regarding rushing towards superintelligence, helping to normalize the conversation and make it a more acceptable topic for public discussion.

8. Monitor AI Chip & Data Center Use

To enforce such a ban, monitor specialized AI chips and large data centers: allow them to keep running existing AIs, but prohibit their use for training new, potentially dangerous superintelligent systems.

9. Counter Rogue Superintelligence Development

Address any rogue nation attempting to build superintelligence as a severe national security threat, first through diplomacy, and if unsuccessful, through more forceful means like special forces or sabotage.

10. Prioritize AI Alignment for Benefits

Do not rush AI development for perceived benefits without first solving the alignment problem; instead, focus on ensuring AI is “pointed at the good stuff” to safely unlock its potential.

11. Augment Human Intelligence

Invest significant effort into augmenting human intelligence, particularly adult intelligence, so that smarter humans might find solutions to the AI alignment problem in the limited time available.

12. Scrutinize AI Edge Cases

Focus on AI’s “edge cases” (e.g., hallucinations, induced psychosis, cheating) rather than its general helpfulness, as these deviations reveal its true, potentially alien, underlying drives and motivations.

13. Understand AI’s “Grown, Not Crafted” Nature

Acknowledge that modern AIs are “grown” through data and computing power, not “crafted” line-by-line, which means programmers cannot simply fix undesired behaviors by editing code.

14. Build a Daily Meta-Habit Chain

Establish a consistent daily “meta-habit” of performing a sequence of habits at the same time each day, then adapt the individual habits within that chain to meet your evolving personal health and wellness needs.