Is AI going to ruin everything? (with Gabriel Alfour)

Jul 16, 2025 · 1h 19m · 18 insights
Guest Gabriel Alfour, co-founder of Conjecture, discusses the catastrophic risks of accelerating technology and autonomous AI agents, emphasizing the need for better institutions and a scientific approach to AI alignment. He also explores redesigning social media to foster constructive engagement and manage information flow.
Actionable Insights

1. Prioritize Value Science

Invest in developing a scientific process to understand, measure, and aggregate human values at individual, societal, and global levels. This is critical for building truly aligned AI systems and effective institutions.

2. Adopt Meta-Scientific AI View

Approach AI alignment as a pre-paradigmatic field, acknowledging that current methods are likely incomplete and avoiding overconfidence. This fosters a more humble and exploratory approach to AI safety.

3. Plan for AI Uncertainty

When facing the uncertain future of AI, adopt a “portfolio of beliefs” and develop plans for multiple potential outcomes (e.g., AI goes well, AI leads to dominance, AI leads to extinction). This ensures preparedness for various scenarios rather than waiting for consensus.

4. Establish International AI Treaties

Create international treaties and stringent regulations for AI development, mandating incremental growth and proving agent safety at smaller scales before wider deployment. This ensures responsible AI development aligned with human values.

5. Increase AI Debates

Facilitate more frequent and high-quality debates between AI company CEOs, AI safety advocates, academics, and independent experts. This helps manage disagreements constructively and explore solutions in a field lacking consensus.

6. Cultivate Institutional Improvement

Foster a societal drive and ambition specifically focused on continuously building and improving institutions. The absence of such a movement is a bottleneck preventing necessary institutional evolution.

7. Treat Regulation as Iterative

View regulation as an ongoing, iterative process rather than a fixed end state, allowing for continuous refinement and adaptation. This creates more effective and responsive regulations that can evolve with changing circumstances.

8. Experiment with Regulations

Implement and test regulations in different contexts and locations to gather data and learn what works best. This avoids the pitfalls of one-shot, universally enforced regulations and promotes adaptive governance.

9. Give Institutions Expiry Dates

Design laws, institutions, and companies with built-in expiry dates that require explicit renewal. This combats institutional decay and ensures ongoing relevance and effort in maintenance.

10. Allocate Maintenance Resources

Acknowledge and proactively allocate resources for the “maintenance tax” of institutions and systems. This is crucial because systems naturally decay without ongoing effort and resources.

11. Redesign Social Media Actions

Design social media platforms to offer more constructive actions beyond likes and shares, such as facilitating group discussions for collective action or direct contact with political representatives. This converts online emotional involvement into positive, real-world impact.

12. Differentiate Influencer Regulation

Implement different regulatory standards for social media influencers based on their audience size, treating large influencers (over a million followers) as media antennas. This ensures appropriate oversight for those with significant reach.

13. Enforce Stricter Rules for Large Influencers

Enforce more stringent regulations and deontological codes for large social media influencers (over a million followers). This ensures individuals with wide audiences adhere to shared values and standards.

14. Set a Higher Bar for Influencer Fake News

Implement higher standards and potential fines for large influencers who spread provably false information (“fake news”). Judicial oversight should determine what is provably false, rather than the executive branch of government.

15. Tighten Personal Attack Rules

Be much more stringent about personal attacks, especially concerning private individuals, when communicating to a large audience. This fosters a more respectful and less harmful information environment.

16. Utilize Tech for Polling

Leverage technology to regularly poll experts (PhDs) and citizens on important questions. This gathers broad input and informs decision-making, modernizing governance.

17. Reject “Competition is Death”

Discard the “thought-stopping cliché” that competition, especially for attention, dooms new services or ideas to failure. This mindset stifles innovation and creativity; valuable services can thrive amid competition.

18. Introspect on Personal Values

Engage in deep introspection to understand one’s own values, as this understanding is a prerequisite for developing “social tech” to align AI with human values.