Is AI going to ruin everything? (with Gabriel Alfour)

Jul 16, 2025
Overview

Guest Gabriel Alfour, co-founder of Conjecture, discusses the catastrophic risks of accelerating technology and autonomous AI agents, emphasizing the need for better institutions and a scientific approach to AI alignment. He also explores redesigning social media to foster constructive engagement and manage information flow.

At a Glance
18 Insights
1h 19m Duration
16 Topics
5 Concepts

Deep Dive Analysis

Initial Concerns About AI's Catastrophic Potential

The Danger of Rapid Technological Acceleration

Properties and Purpose of Robust Institutions

The Urgent Need for Institutional Design and Reform

Mechanisms and Solutions for Institutional Decay

Erosion of Trust in Institutions and Societal Decline

Social Media's Role in Worsening Information Environments

Designing Better Information Markets and Social Media

Rethinking Regulation as an Iterative Process

Concerns Regarding AI Agents and Alignment Challenges

The Superficiality of Current AI Alignment Progress

Why AI Companies Hinder True Alignment Iteration

The Difficulty of Instilling Human Values in AI

Naivety of Constitutional AI Approaches

The Pre-Paradigmatic State of AI Alignment Research

Strategies for Addressing AI Uncertainty and Disagreement

Technological Acceleration

The idea that technological progress, especially with AI, can become extremely sudden and sharp, leading to capabilities (like destructive weapons or bioweapons) that outpace humanity's ability to manage them, potentially destroying civilization.

Institutional Decay

The pervasive tendency for institutions (and even companies and families) to degrade over time as entropy increases, often prioritizing self-preservation over their original goals, leading to bureaucracy or value drift.

Information Market Design

The concept that the free flow of information, while desirable, does not spontaneously produce good outcomes; it needs carefully designed rules and structures (like those governing sports or drug regulation) to prevent toxicity, misinformation, and external sabotage.

AI Alignment

The challenge of ensuring artificial intelligence systems act according to human values. It is considered a complex problem requiring deep understanding of values, resilient decision theory, and the ability to predict future impacts, which current AI systems lack beyond superficial performance.

Pre-Paradigmatic Field

A stage in scientific development where a field lacks objective benchmarks, a standard vocabulary, and agreed-upon methods for comparing different approaches. In such a state, current efforts are largely exploratory, and it is unreliable to extrapolate from limited tests to broader conclusions.

What is the primary concern regarding rapid technological advancement, especially with AI?

The concern is that technology, particularly with AI acting as an accelerator, can advance so suddenly and sharply that humanity's institutions and wisdom cannot keep pace, leading to the creation of powerful destructive capabilities (like advanced weapons or bioweapons) that could destroy civilization.

What characteristics define effective and resilient institutions?

Good institutions should view disagreements as learning opportunities, welcome dual-use technologies with confidence they'll be used for good, encourage extensive experimentation (e.g., policy trials in specific cities), and facilitate broad communication (e.g., polling experts and citizens using technology).

Why are current institutions ill-equipped for the modern world?

Many existing constitutions and institutional frameworks are centuries old, implicitly designed for a pre-modern era (e.g., horse travel) and have not kept pace with rapid technological and economic growth, making them inadequate for dealing with contemporary challenges.

How does institutional decay occur, and what are potential solutions?

Institutions decay over time due to increasing entropy, often prioritizing self-preservation over original goals. Solutions include implementing built-in expiry dates for laws and institutions requiring explicit renewal, and recognizing the need for a 'maintenance tax' – continuous effort and resources for upkeep.

How has social media contributed to the decline of institutions and public discourse?

Social media has worsened political debates and allowed foreign propaganda to bypass traditional media regulations. The lack of meaningful regulation has fostered internal 'vicious circles' of negative human impulses and created external security problems due to unchecked sabotage and propaganda, effectively losing the 'information battle'.

Should the flow of information on social media be regulated, or should it be a free marketplace of ideas?

While a free flow of information is desirable, it requires careful design and rules, much like regulated sports or drug markets, to prevent negative outcomes. Unregulated information markets do not spontaneously produce the best outcomes and can lead to toxicity and misinformation.

What distinguishes 'small' from 'big' influencers in terms of social media regulation?

Small influencers (e.g., group chats under 100 people, users under 1000 followers) should have strong privacy rights and broad freedom. Big influencers (over 1 million followers) are considered media antennas and should abide by more stringent rules, including a higher bar for spreading provably false information and stricter guidelines for personal attacks on private individuals.

Why is current AI alignment considered insufficient despite visible progress in LLMs?

Current AI alignment efforts are seen as 'hill climbing on what's easily visible,' addressing superficial issues like hallucinations or overly eager responses. However, they fail to align AI with deeper human values, decision theory for future prediction, or resilience to unknown problems, which are much harder to diagnose and address.

What is the main issue with 'Constitutional AI' approaches that involve giving AIs long lists of human principles?

Such approaches are considered extremely naive because the principles themselves are often contradictory (e.g., 'do no harm' is impossible in a complex world). Humans lack a coherent specification of values, making it impossible for an AI to consistently follow such a list, leading to unpredictable and potentially harmful outcomes.

What actions should be taken given the lack of consensus on AI's future?

For those outside the debate, manage uncertainty by holding a portfolio of beliefs (good outcome, team dominance, extinction) and planning accordingly. For those in the trenches, foster more open and respectful debates between AI developers, safety advocates, and academics, and seek to build plans that work across different theories.

1. Prioritize Value Science

Invest in developing a scientific process to understand, measure, and aggregate human values at individual, societal, and global levels. This is critical for building truly aligned AI systems and effective institutions.

2. Adopt Meta-Scientific AI View

Approach AI alignment as a pre-paradigmatic field, acknowledging that current methods are likely incomplete and avoiding overconfidence. This fosters a more humble and exploratory approach to AI safety.

3. Plan for AI Uncertainty

When facing the uncertain future of AI, adopt a “portfolio of beliefs” and develop plans for multiple potential outcomes (e.g., AI goes well, AI leads to dominance, AI leads to extinction). This ensures preparedness for various scenarios rather than waiting for consensus.

4. Establish International AI Treaties

Create international treaties and stringent regulations for AI development, mandating incremental growth and proving agent safety at smaller scales before wider deployment. This ensures responsible AI development aligned with human values.

5. Increase AI Debates

Facilitate more frequent and high-quality debates between AI company CEOs, AI safety advocates, academics, and independent experts. This helps manage disagreements constructively and explore solutions in a field lacking consensus.

6. Cultivate Institutional Improvement

Foster a societal drive and ambition specifically focused on continuously building and improving institutions. A lack of such a movement is a bottleneck preventing necessary institutional evolution.

7. Treat Regulation as Iterative

View regulation as an ongoing, iterative process rather than a fixed end state, allowing for continuous refinement and adaptation. This creates more effective and responsive regulations that can evolve with changing circumstances.

8. Experiment with Regulations

Implement and test regulations in different contexts and locations to gather data and learn what works best. This avoids the pitfalls of one-shot, universally enforced regulations and promotes adaptive governance.

9. Institutions Need Expiry Dates

Design laws, institutions, and companies with built-in expiry dates that require explicit renewal. This combats institutional decay and ensures ongoing relevance and effort in maintenance.

10. Allocate Maintenance Resources

Acknowledge and proactively allocate resources for the “maintenance tax” of institutions and systems. This is crucial because things naturally require effort and resources to prevent decay.

11. Redesign Social Media Actions

Design social media platforms to offer more constructive actions beyond likes and shares, such as facilitating group discussions for collective action or direct contact with political representatives. This converts online emotional involvement into positive, real-world impact.

12. Differentiate Influencer Regulation

Implement different regulatory standards for social media influencers based on their audience size, treating large influencers (over a million followers) as media antennas. This ensures appropriate oversight for those with significant reach.

13. Stricter Rules for Large Influencers

Enforce more stringent regulations and deontological codes for large social media influencers (over a million followers). This ensures individuals with wide audiences adhere to shared values and standards.

14. Higher Bar for Influencer Fake News

Implement higher standards and potential fines for large influencers who spread provably false information (“fake news”). Judicial oversight should determine what is provably false, rather than executive government.

15. Stricter Personal Attack Rules

Be much more stringent about personal attacks, especially concerning private individuals, when communicating to a large audience. This fosters a more respectful and less harmful information environment.

16. Utilize Tech for Polling

Leverage technology to regularly poll experts (PhDs) and citizens on important questions. This gathers broad input and informs decision-making, modernizing governance.

17. Reject “Competition is Death”

Discard the “thought-stopping cliche” that competition, especially for attention, means inevitable failure for new services or ideas. This mindset stifles innovation and creativity, as valuable services can thrive amidst competition.

18. Introspect on Personal Values

Engage in deep introspection to understand one’s own values, as this understanding is a prerequisite for developing “social tech” to align AI with human values.

We need to make sure our wisdom grows as our power grows, and that we may be imbalanced, where we don't seem to necessarily be becoming that much wiser as a species, but we are becoming way more powerful.

Spencer Greenberg

I think there's a world that has institutions that are strong enough that if we find a way to build nukes for like less than $1,000, we're hyped. We are not worried about people detonating them. We're like, whoa, now we can terraform, we can change the weather, we can do some geoengineering. And it's like obvious that we're going to use it for good things. I think tomorrow, if we discover a way to build nukes for $1,000, we're all afraid. We're obviously not in that world.

Gabriel Alfour

I think the institutions were also quite bad before; I just think they were better than what came before them. On an absolute scale, I think we truly live in terrible times institutionally, but, you know, even morally, even happiness-wise and things like this. I just think that in relative terms, it's better.

Gabriel Alfour

I love competition and I think competition is great, both as an end and as a mean. I just think it should be designed.

Gabriel Alfour

Alignment is pre-paradigmatic. We're not ready yet. When it's going to be a science, it's going to be pretty obvious and we're going to see it in our societies.

Gabriel Alfour

250 years old: Age of many current constitutions, making them outdated for modern challenges and technological progress.
Less than $1,000: Hypothetical cost to build nukes, used as a metaphor for easily accessible destructive power that current institutions cannot manage.
Less than 100 people: Maximum size for social media group chats with strong privacy rights; a proposed threshold for differentiating regulation levels.
Less than 1,000 followers: Maximum follower count for 'small' influencers with significant freedom; a proposed threshold for differentiating regulation levels.
More than a million followers: Minimum follower count for 'big' influencers subject to stringent rules; the proposed threshold for influencers considered 'media antennas' requiring stricter regulation.
99.9999%: Hypothetical AI alignment percentage, used to illustrate how people might dismiss rare but catastrophic AI failures if most interactions appear benign.