Will AI destroy civilization in the near future? (with Connor Leahy)

Jun 21, 2023 · 1h 25m
Spencer Greenberg speaks with Connor Leahy about the existential risks posed by advanced AI, how near-term those threats are, and potential preventative interventions. Leahy discusses the rapid progress of AI systems and the urgent need for societal coordination and government regulation.
Actionable Insights

1. Advocate for AGI Halt

Actively advocate for a halt in the development of AGI, especially by companies whose leaders acknowledge existential risks, as this is a societal problem requiring government intervention and regulation. Support policies that would stop the rapid advancement of potentially dangerous AI systems.

2. Spread AI Risk Awareness

Take the threat of AI seriously and actively discuss it with friends and social circles to build common knowledge that AI risk is a problem that can and should be stopped. Contact your representatives to demand action on AI regulation and safety.

3. Secure Proto-Aligned AI

If you develop or possess proto-aligned AI systems, keep them under nation-state-level security and avoid publishing details about their construction, to prevent misuse or reverse engineering.

4. Pursue AI Safety Research

If you are a technical person, consider dedicating your efforts to AI safety, specifically researching and developing aligned systems that are robust against misuse.

5. Evaluate Anxiety’s Usefulness

Assess whether your anxiety is productive in helping you make the world better; if it is merely causing distress without benefit, seek ways to reduce it without resorting to self-delusion.