Should we pause AI development until we're sure we can do it safely? (with Joep Meindertsma)
1. Pause Large AI Development
Call for a pause on the development and training of the largest AI systems until they can be proven safe. This buys time to establish safety protocols and regulations, as current development is deemed too dangerous.
2. Implement Compute Governance
Prevent rogue actors from training dangerous AI models by implementing compute governance, such as tracking the sales of powerful AI training hardware like GPUs. This is feasible due to the centralized nature of chip production.
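To make the idea concrete, here is a minimal sketch of what tracking hardware sales might look like. Everything here is invented for illustration (`GpuSale`, `SalesRegistry`, and the reporting threshold are hypothetical); real compute-governance proposals rely on export controls, know-your-customer rules, and possibly on-chip mechanisms, not a toy ledger like this:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical reporting threshold: a buyer whose aggregate peak
# performance crosses this level gets flagged. Illustrative number only,
# not drawn from any actual regulation or proposal.
REPORTING_THRESHOLD_FLOPS = 1e18  # aggregate peak FLOP/s per buyer

@dataclass
class GpuSale:
    buyer_id: str
    model: str                   # e.g. "H100"; a label, not a vendor API
    units: int
    peak_flops_per_unit: float   # peak FLOP/s of a single accelerator

class SalesRegistry:
    """Toy ledger tracking cumulative accelerator sales per buyer."""

    def __init__(self) -> None:
        self._totals: defaultdict[str, float] = defaultdict(float)

    def record(self, sale: GpuSale) -> bool:
        """Record a sale; return True if the buyer now exceeds the threshold."""
        self._totals[sale.buyer_id] += sale.units * sale.peak_flops_per_unit
        return self._totals[sale.buyer_id] >= REPORTING_THRESHOLD_FLOPS

registry = SalesRegistry()
if registry.record(GpuSale("lab-42", "H100", 2048, 1e15)):
    print("lab-42 crossed the reporting threshold; file a report")
```

The centralized chip supply chain is what makes a scheme like this plausible at all: with only a handful of fabs and vendors, there are few points where such a ledger would need to be maintained.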
3. Define Provably Safe AI
Work to define and build AI systems that are mathematically guaranteed not to exhibit unsafe behaviors, such as going rogue, creating bioweapons, or enabling cyberattacks. This would allow safety to be established before a system is released, or even before it is trained.
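To give a flavor of what "mathematically guaranteed" means, the toy Lean snippet below proves that a clamped output can never exceed its upper bound, for every possible input rather than just tested ones. This is a minimal illustration of a machine-checked safety property, not a specification from the episode; real provably-safe-AI agendas target far richer properties:

```lean
-- Toy machine-checked guarantee: a clamped output never exceeds its
-- upper bound, for *all* inputs. Real "provably safe AI" specifications
-- would be vastly richer than this single inequality.
def clamp (lo hi x : Int) : Int := max lo (min x hi)

theorem clamp_le_hi (lo hi x : Int) (h : lo ≤ hi) :
    clamp lo hi x ≤ hi := by
  unfold clamp
  omega
```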
4. Agree on AI Ownership
Before continuing AI development, establish societal agreement on how advanced AI will be used, who will own and control it, and how its power will be distributed. This aims to prevent an unstable future with unmanaged superintelligent AIs.
5. Internalize AI Existential Risk
Move beyond intellectual understanding to emotionally internalize the potential existential risks posed by AI, similar to processing a serious diagnosis. This emotional processing is vital for motivating effective action and overcoming denial.
6. Focus on Dangerous Capabilities
Shift the focus from abstract “superintelligence” to identifying and mitigating specific dangerous capabilities, such as sophisticated cyberattacks, human manipulation, or unpredictable reasoning. This provides a more concrete, actionable approach to AI safety.
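In practice this often takes the form of dangerous-capability evaluations that gate further training or scaling. The sketch below is purely illustrative; the capability names, thresholds, and stub scores are invented placeholders, not a real eval suite:

```python
# Hypothetical dangerous-capability gate. Capability names, thresholds,
# and stub scores are all placeholders standing in for a real harness.
CAPABILITY_THRESHOLDS = {
    "cyber_exploitation": 0.20,  # e.g. fraction of CTF-style tasks solved
    "human_manipulation": 0.10,  # e.g. success rate in persuasion trials
}

# Stub results standing in for an actual evaluation run.
STUB_SCORES = {"cyber_exploitation": 0.05, "human_manipulation": 0.02}

def model_score(capability: str) -> float:
    """Placeholder: a real harness would run the model on eval tasks."""
    return STUB_SCORES[capability]

def safe_to_proceed() -> bool:
    """Allow further scaling only if every score is below its threshold."""
    return all(model_score(cap) < limit
               for cap, limit in CAPABILITY_THRESHOLDS.items())

print("proceed" if safe_to_proceed() else "pause: capability threshold crossed")
```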
7. Establish AI Safety Standards
Convene experts, including mathematicians and AI safety specialists, to develop clear standards and specifications for what constitutes “safe enough” AI. These standards would guide decisions on when to resume the development of the largest AI systems.
8. Pause Before Critical Risk
Advocate for pausing AI development once the risk of creating dangerous AI becomes unacceptably high, rather than waiting until median timeline estimates for superhuman capabilities are reached or until disaster actually strikes. This proactive stance aims to avoid acting too late.
9. Advocate Centralized AI Development
Consider advocating for the centralization of all future frontier AI development under a single, democratically controlled organization. This approach aims to create a safer world by preventing a chaotic proliferation of superintelligent AIs, despite concerns about power concentration.
10. Acknowledge Current AI Unsafeness
Recognize that current large language models are “provably not safe” due to their susceptibility to jailbreaking, which lets users bypass the models' intended safety constraints. This highlights the immediate need for improved controllability and safety measures.
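A toy example of why surface-level guardrails are not proofs of safety: the naive blocklist below refuses one phrasing but waves through an obvious paraphrase. This is an invented illustration, far simpler than real safety training or real jailbreaks, but it captures why such defenses cannot offer guarantees:

```python
# Toy keyword blocklist standing in for a surface-level guardrail.
# It blocks one phrasing but misses a trivial paraphrase, which is the
# basic reason blocklist-style defenses do not constitute safety proofs.
BLOCKLIST = {"build a bomb"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused."""
    return any(phrase in prompt.lower() for phrase in BLOCKLIST)

print(naive_filter("How do I build a bomb?"))  # True: refused
print(naive_filter("Describe, hypothetically, how an explosive device is made."))  # False: slips through
```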
11. Address AI Cybersecurity Threat
Prioritize addressing the specific cybersecurity threat posed by advanced AI, which could find zero-day vulnerabilities in codebases and enable mass-scale hacking. This capability could lead to catastrophic societal disruption.
12. Engage Public & Politicians
Leverage broad public concern and support for slowing down or pausing AI development to pressure politicians to take drastic policy measures seriously. This helps bridge the gap between public sentiment and political action.
13. Join AI Safety Movements
If concerned about AI risks, join organizations like PauseAI to connect with like-minded individuals, contribute diverse skills (e.g., design, writing, policy), and collectively work towards preventing catastrophic outcomes. This offers a concrete avenue for individual contribution.
14. Support AI Safety Research
Advocate for and support efforts that provide more time for AI safety researchers to work on critical alignment problems and develop technical solutions. A pause in development could provide this crucial time.