AI: Autonomous or controllable? Pick one (with Anthony Aguirre)
1. Advocate for AI Regulation
Support government and policy action to regulate AI, including safety standards (e.g., controllability requirements) and liability for AI systems, especially those that are autonomous, intelligent, and general. Liability creates financial incentives for companies to prioritize safety.
2. Prioritize Problem-First AI Development
Instead of building increasingly powerful general AI and then searching for problems it can solve, first identify a problem and then design the specific AI system needed to solve it, choosing only as much generality, intelligence, and autonomy as the problem requires. This avoids unnecessary complexity and side effects.
3. Avoid Single-Goal AI Optimization
Do not give a complicated AI system a single thing to optimize (e.g., “make me money”) without constraints, because it will push unconstrained aspects in undesirable directions (e.g., breaking laws, being unethical). Ensure multiple constraints and ethical boundaries are explicitly defined.
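The failure mode above can be made concrete with a toy optimization. This is a hypothetical sketch (the payoff model, variable names, and numbers are all illustrative, not from the episode): an optimizer maximizing a single quantity will push any unconstrained variable, here "rule-breaking," as hard as it helps the objective, while an explicit penalty term changes the optimum.

```python
# Toy model: rule-breaking boosts profit and carries no internal cost,
# so a naive single-objective optimizer maximizes it.
def profit(effort, rule_breaking):
    return 10 * effort + 5 * rule_breaking

# Adding an explicit constraint (a penalty on rule-breaking) changes
# the optimum without touching the profit model itself.
def constrained_profit(effort, rule_breaking, penalty=100):
    return profit(effort, rule_breaking) - penalty * rule_breaking

# Brute-force optimizer over a small grid of candidate actions.
actions = [(e, r) for e in range(3) for r in range(3)]

best_unconstrained = max(actions, key=lambda a: profit(*a))
best_constrained = max(actions, key=lambda a: constrained_profit(*a))

print(best_unconstrained)  # (2, 2): rule-breaking pushed to its maximum
print(best_constrained)    # (2, 0): rule-breaking driven to zero
```

The point generalizes: the optimizer is not "choosing" to misbehave; it simply has no term telling it not to, which is why the constraints and ethical boundaries must be explicit.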
4. Distinguish Safe vs. Controlled AI
Adopt the mental model that AI being “safe” (aligned with human preferences) is distinct from being “under control” (doing what humans say). This distinction is crucial for evaluating and designing AI systems effectively.
5. Engage with AI Safety Concerns
If concerned about AI risks, educate yourself further and become active by contacting policymakers, engaging with thought leaders, writing about concerns, or contributing financially or with time to organizations working on AI safety.
6. Utilize AI Development Safe Harbors
If involved in AI development or policy, consider implementing or advocating for safe harbor provisions that reduce liability for AI systems demonstrating lower risk profiles (e.g., limited compute, high controllability, less generality/autonomy).
7. Trace Information Provenance
To combat the degraded information ecosystem, demand and support systems where any statement or piece of information can be traced back to its origin, responsible parties, and ultimately to verified real-world data or human minds, potentially using technologies like blockchain.
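One way to picture the provenance idea is a tamper-evident chain of records, each linking a claim to its source and to the hash of the previous record (the blockchain-style mechanism mentioned above). This is a minimal sketch under assumed, illustrative record fields (`claim`, `source`, `author`), not a real provenance system.

```python
import hashlib
import json

def record_hash(record):
    # Canonical JSON (sorted keys) so the hash is stable across dict orderings.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain, claim, source, author):
    # Each new record commits to the previous one via its hash.
    prev = record_hash(chain[-1]) if chain else None
    chain.append({"claim": claim, "source": source,
                  "author": author, "prev_hash": prev})
    return chain

def verify_chain(chain):
    # Tampering with any record changes its hash, breaking every later link.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

chain = []
append_record(chain, "Glacier retreated 10m in 2023", "satellite dataset X", "agency Y")
append_record(chain, "Retreat accelerated vs 2022", "analysis of record 0", "analyst Z")

print(verify_chain(chain))                       # True
chain[0]["claim"] = "Glacier advanced in 2023"   # tamper with the origin
print(verify_chain(chain))                       # False
```

The design choice worth noting: verification only requires rehashing, so anyone can check that a downstream statement still traces to an unaltered origin without trusting the intermediaries.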
8. Use AI for Research Comprehension
To better understand scientific papers, use an AI (like Claude) to summarize the paper, explain difficult sections, and clarify terminology. This can help overcome comprehension barriers and avoid getting stuck, though AI outputs should be cross-checked for accuracy.
9. Recognize Prediction Market Limits
Understand that while prediction markets and platforms like Metaculus are excellent for generating accurate, well-calibrated predictions, they are only one step in making good decisions. Integrate them into a broader decision-making framework that considers goals and potential actions.
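The separation between forecasting and deciding can be sketched in a few lines. Assuming illustrative numbers throughout (none are from the episode), the first function scores how well-calibrated a set of probability forecasts was (the Brier score, a standard metric used by platforms like Metaculus), and the second is a distinct decision layer that combines a probability with goals and stakes.

```python
def brier_score(forecasts, outcomes):
    # Mean squared error between predicted probabilities and 0/1 outcomes.
    # Lower is better; always guessing 50% scores 0.25.
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def decide(prob_event, payoff_if_event, cost_of_action):
    # The decision layer: act only if expected benefit exceeds the cost.
    # This step needs goals and stakes, which no forecast supplies on its own.
    return prob_event * payoff_if_event > cost_of_action

forecasts = [0.9, 0.2, 0.7, 0.1]
outcomes = [1, 0, 1, 0]
print(brier_score(forecasts, outcomes))  # ≈ 0.0375, i.e. well calibrated

# The same 0.7 forecast supports opposite decisions under different stakes:
print(decide(0.7, payoff_if_event=100, cost_of_action=50))  # True
print(decide(0.7, payoff_if_event=100, cost_of_action=80))  # False
```

The last two lines are the takeaway in miniature: an accurate probability is an input, and the decision still depends on what you value and what actions are available.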