What do socialism and effective altruism have in common? (with Garrison Lovely)

Jun 26, 2024
Overview

Spencer Greenberg and Garrison Lovely discuss leftism, socialism, and AI. They explore how socialist principles align with effective altruism, critique corporate profit motives, and analyze the three main factions in the AI debate, advocating for cooperation to ensure AI safety and ethics.

At a Glance
13 Insights
1h 10m Duration
17 Topics
8 Concepts

Deep Dive Analysis

Leftism, Socialism, and Effective Altruism: Core Alignments

Tensions Between Effective Altruism and Socialism

Socialist Perspectives on Capitalism and Wealth Distribution

Critique of Market-Based Solutions and Government Intervention

McKinsey Experience and Shift Towards Leftist Views

Modeling Corporate Behavior: Profit-Seeking vs. Founder-Led

The Left's Underestimation of AI's Impact

Three Main Camps in the AI Debate: X-Risk, Ethics, and Boosters

Tensions and Potential Cooperation Between AI Safety and AI Ethics

Challenges of Open-Sourcing Powerful AI Models

Shift of AI Research from Academia to For-Profit Labs

Proposal for a CERN-like Entity for AI Research

Government Incentives for Building Highly Intelligent AIs

Government's Ability to Regulate Fast-Moving AI Technology

Impact of Profit-Seeking on AI Risk and Safety

Mission-Driven AI Organizations Subsumed by For-Profit Entities

Corporate View of Existential Risk Versus Bankruptcy

Radical Egalitarianism

The belief that all people, regardless of nationality, should be weighted equally when individuals prioritize donations or policy, extending moral concern beyond national borders. Both the far left and effective altruism share this core principle.

Market Socialist Economy

An economic system that utilizes markets for exchange and price signals, but where ownership is rationalized or structured to maximize public benefit, rather than purely private profit.

Worker Co-determination

A system, as in Germany, in which worker representation on company boards is legally mandated, giving labor influence over corporate governance and the distribution of income between labor and management.

AI Safety Crowd (X-risk)

A group concerned that artificial intelligence poses an existential risk to humanity, potentially leading to human extinction or permanent disempowerment. They focus on preventing catastrophic outcomes from highly capable AI.

AI Ethics Crowd

A group focused on the immediate and existing harms perpetrated by AI systems, such as bias, hallucination, discrimination, and lack of transparency, rather than speculative future existential risks.

AI Boosters (Effective Accelerationists)

A camp that believes AI is overwhelmingly beneficial, will not kill everyone, and should be built as fast as possible, opposing regulation and advocating for the rapid creation of smarter-than-human AI.

Alignment Tax

The idea that efforts and resources spent on making an AI model safer or more aligned with human values could otherwise be directed towards improving its capabilities or speeding up its deployment, thus incurring a 'tax' on development.

Global Public Good

A good whose benefits are widely distributed and non-excludable, but whose costs are borne acutely by specific actors. Markets tend to systematically under-provide these goods because individual entities cannot capture all the benefits of their investment.

How do effective altruism and socialism align despite their differences?

Both effective altruism and socialism share a core commitment to radical egalitarianism, believing that all people globally should be treated equally, and are concerned with wealth inequality, though they propose different solutions.

What is the typical socialist view on capitalism's role in poverty reduction?

Many socialists reject or are highly skeptical of the view that capitalism has been the primary force for lifting people out of poverty, pointing to factors like predatory international arrangements and the unique economic model of China.

What are some examples of government interventions that socialists might favor to improve societal welfare?

Socialists might favor robust welfare states, universal health programs, worker cooperatives, or co-determination (worker representation on company boards), and nationalizing natural monopolies to improve public benefit and labor protections.

Why is the political left not more engaged with AI development and its future implications?

The left has historically been less focused on technology, partly due to an aesthetic association with Silicon Valley and political opponents, and skepticism about AI's capabilities or imminence, despite AI's potential to replace human labor.

What are the main points of tension between the AI safety and AI ethics communities?

The AI safety community emphasizes AI's high capabilities and existential risks, while the AI ethics community focuses on immediate harms, biases, and failures of current AI systems, leading to accusations that safety advocates hype AI or distract from present issues.

How does the shift of AI research to for-profit companies affect AI safety?

When AI research moves from academia to for-profit labs, researchers gain access to massive resources but also become financially and socially invested in the labs' success, potentially leading to increased competition and corner-cutting on safety efforts.

What are the incentives for governments to develop highly intelligent AIs compared to private firms?

Private firms have strong profit-maximizing incentives to develop AGI as a cheaper labor replacement, whereas governments optimize for diverse factors like stability, economic growth, and popular support, and AGI's radical implications might negatively impact stability.

How do profit-seeking motives influence the risks associated with AI development?

Profit-seeking firms are incentivized to create more capable and 'agentic' (autonomous) models at the expense of safety, as safety efforts often incur an 'alignment tax' that diverts resources from capabilities or faster deployment.

Why is it difficult for corporations to properly account for existential risks from AI?

From a profit-maximizing corporation's perspective, bankruptcy in 20 years looks similar to human extinction, as its downside risk is bounded at zero (bankruptcy), making it difficult to price existential risk into its decision-making, especially since existential risk mitigation is a global public good.

1. Cultivate Self-Awareness & Feedback

Regularly reflect on mistakes and seek anonymous feedback from your team to learn and improve your behavior and projects. This creates a valuable feedback loop for continuous personal and professional growth.

2. Iterate & Improve Ideas

Share your work (e.g., essays) publicly to gather comments, suggestions, and criticisms. Use this feedback to quickly update and refine your ideas and output, leading to continuous improvement.

3. Adopt a Problem-Solving Toolkit

When facing a problem, first identify the core issue, then consider which “tools” (e.g., market solutions, government regulation) are best suited to address it. This allows for more effective and tailored solutions.

4. Understand Corporate Behavior Models

Recognize that founder-led companies reflect their leader’s vision, while large organizations often prioritize profit maximization. This model helps predict corporate actions and understand when companies might help or harm.

5. Advocate for AI Whistleblower Protections

Support policies that protect whistleblowers in AI labs, as this encourages transparency and helps address concerns about AI systems doing harm. This is a concrete step to improve AI safety and ethics.

6. Push for AI Company Liability

Advocate for imposing liabilities on AI companies if their models cause harm. This increases the cost of negligence for companies, incentivizing them to build safer and more ethical AI systems.

7. Support AI Licensing Regimes

Advocate for a licensing regime where powerful AI models require government approval, testing, and information sharing. This could ensure greater accountability and safety in AI development.

8. Embrace Collective Action

Adopt a mindset of collective action, working with others to coordinate efforts and push back against concentrated power. This approach is crucial for achieving broader societal changes and empowering labor.

9. Critically Evaluate Profit-Seeking

Understand that profit-seeking does not always lead to good outcomes, as market failures are common, especially for global public goods like existential risk mitigation. This critical lens helps identify areas where market forces may be insufficient or harmful.

10. Approach Societal Change with Humility

When considering radical societal changes, adopt a “Burkean leftist” approach by having humility about changing too much too fast. Recognize that society’s current state is a result of compounded decisions, and respect this reality while still striving for improvements.

11. Engage the Left on AI

If you identify with leftist or socialist perspectives, actively focus on and engage with the implications of AI technology. This is crucial because AI’s goal of replacing human labor and its potential risks align with core leftist concerns.

12. Consider a Public Option for AI

Explore the idea of an international consortium (like CERN) for AI research and safety, pooling government resources and talent. This could reduce profit motive considerations and foster international collaboration over competition.

13. Advocate for Agile Government Regulation

Support the establishment of new regulatory agencies with flexibility and capacity to adapt to fast-evolving industries like AI. This ensures that government oversight can keep pace with technological advancements and effectively manage risks.

I think it's pretty wild that people around the world are not treated equally in how people prioritize their donations or policy. I understand that governments are going to prioritize their own citizens, but just as individuals, I don't see why I should care more about people in my own country than people overseas.

Garrison Lovely

Our society is failing at a moral level to allocate resources in a way that promotes maximum benefit to everybody.

Garrison Lovely

We just do execution, we don't do policy.

Richard Elder

AI development is largely driven by cutting corners on safety. I don't actually think that, in the presence of these intense competitive pressures, intentions particularly matter.

Dan Hendrycks (quoted by Garrison Lovely)

To a corporation, a public, shareholder-value-maximizing corporation, bankruptcy and extinction look similar.

Garrison Lovely
$5,000
Approximate cost to save a life from preventable diseases
over $100 million
Estimated cost to train a large AI model like GPT-4
10^25 FLOPs
Compute threshold (total floating-point operations) above which AI models require safety testing under the Biden executive order