What should the Effective Altruism movement learn from the SBF / FTX scandal? (with Will MacAskill)

Apr 15, 2024
Overview

Will MacAskill discusses the FTX collapse, its implications for Effective Altruism, and his new research on post-AGI governance. He shares personal lessons on trust and oversight, and community-wide insights on governance, moral philosophy, and the future of EA.

At a Glance
16 Insights
2h 1m Duration
18 Topics
6 Concepts

Deep Dive Analysis

Introduction and Post-AGI Governance Research

Transition to FTX Debacle Discussion

Overview of FTX Collapse and Sam Bankman-Fried's Role

William MacAskill's Initial Interactions with Sam Bankman-Fried

Early Alameda Blow-Up and Retrospective Red Flags

William MacAskill's Advisory Role with Future Fund

Sam Bankman-Fried's Personality and Lack of Fraud Awareness

Discussion on Sam Bankman-Fried's Empathy and Guilt

Sam Bankman-Fried's Belief in Effective Altruism and Naive Utilitarianism

Effective Altruism's Association with Sam Bankman-Fried

Sam Bankman-Fried's Public Persona and Lifestyle

Emotional Impact of FTX Collapse on William MacAskill

Lessons Learned from FTX: Governance and EA Exceptionalism

Rethinking Effective Altruism's Communication and Leadership

Post-AGI Governance: Specific Challenges and Meta-Challenge

Strategies for Post-AGI Governance: Structured Pause

Unique Aspects of AI and its Risks

The Future Role of Effective Altruism

Concepts

Earning to Give

This is the idea that one way of doing good is to deliberately pursue a higher-earning career so that a large fraction of one's income can be donated to effective causes. Sam Bankman-Fried was interested in this concept early in his career.

DAE (Deficiency of Affective Experience)

This term refers to a specific psychological profile characterized by little to no ability to experience emotional empathy (feeling bad for others' suffering) and little to no ability to experience the emotion of guilt (feeling bad when one does something wrong). It is discussed as a potential risk factor for unethical behavior.

Moral Licensing

A psychological phenomenon where performing a good deed can make an individual feel more moral, thereby making them more likely to subsequently engage in an immoral act. This is considered a potential indirect pathway for unethical behavior.

EA Exceptionalism

This is the belief that individuals involved in Effective Altruism (EA) are inherently more moral or possess higher integrity than the general population, beyond their commitment to doing good and thinking carefully about impact. William MacAskill argues against this assumption, emphasizing the need for robust governance over reliance on character.

Post-AGI Governance

This research area focuses on developing governance mechanisms and strategies to manage the rapid and profound societal changes that could occur after the development of sufficiently advanced AI (AGI), which might accelerate technological progress by centuries in just a few years. It addresses challenges like digital beings' rights, resource allocation, and preventing power concentration.

Structured Pause (in AI Development)

A proposed strategy to temporarily halt frontier AI development at a critical juncture, such as when AI significantly automates AI research. The purpose of this pause would be to convene international discussions and deliberate on the best path forward, potentially leveraging AI assistance for better reasoning and forecasting during this period.

What are the generally agreed-upon facts about the FTX collapse?

Sam Bankman-Fried and others founded Alameda Research (a cryptocurrency trading firm) and FTX (a cryptocurrency exchange), which appeared immensely successful, with FTX valued at $40 billion by late 2021. In late 2022, a leaked balance sheet for Alameda caused a loss of confidence in FTX, leading to a rush of withdrawals that FTX couldn't fulfill, revealing misuse of customer funds and widespread fraud, harming over a million people.

What was William MacAskill's relationship with Sam Bankman-Fried?

MacAskill was SBF's first in-person contact and entry point to the effective altruism movement in 2012. He later became an unpaid, part-time advisor to SBF's Future Fund in 2022, focusing on high-level philanthropic strategy.

Were there any early red flags about Sam Bankman-Fried or Alameda Research?

In 2017, a management dispute at Alameda led to some staff leaving, accusing SBF of recklessness and disinterest in management. However, as FTX and Alameda later thrived and received significant VC investment, many, including former staff, concluded their initial views were mistaken or that SBF had matured.

Did William MacAskill or others close to Sam Bankman-Fried suspect he was committing fraud?

No, MacAskill states he had no awareness of fraud and was confused when it was revealed. He noted SBF's reduced donation plans due to a crypto downturn and a 'weird' fundraising attempt from the Saudis, but these were not seen as signs of fraud by him, financial writer Michael Lewis, or many FTX employees who kept their life savings on the platform.

What was Sam Bankman-Fried's personality like, according to William MacAskill?

MacAskill initially found SBF thoughtful, nerdy, and morally committed. Over time, he observed SBF as socially smoother, more entrepreneurial, and increasingly arrogant and hubristic, seemingly corrupted by success, with an 'anti-bureaucratic' and 'move fast and break things' attitude.

Did Sam Bankman-Fried genuinely believe in effective altruism (EA) principles?

MacAskill believes SBF genuinely believed in EA principles, or at least utilitarianism, from an early age, describing any alternative as 'the most wild long con ever.' He suggests that SBF's commitment to giving away his wealth was repeatedly expressed.

Did 'naive utilitarianism' play a role in Sam Bankman-Fried's actions?

MacAskill believes the fraud was not a carefully calculated utilitarian plan, especially given its extremely negative expected value. He suggests indirect pathways like moral licensing or an indifferent attitude towards risk might have played a role, but not a direct utilitarian calculation to commit fraud, noting SBF had previously acknowledged the fallacy of 'the ends don't justify the means among humans'.

Was Sam Bankman-Fried misrepresenting his lifestyle to the media?

MacAskill believes the narrative of SBF painting himself as saintly while living a high life is largely inaccurate. He observed SBF and FTX high-ups working constantly, playing video games, and having simple dinners, not living a luxurious life with yachts or wild parties, despite living in a high-end resort due to Bahamian supply constraints and security needs.

What lessons should the effective altruism community learn from the FTX debacle?

EA should prioritize good governance and reject 'EA exceptionalism,' assuming that people involved in EA are just at the 'batting average' for other traits like integrity, rather than inherently more moral. This means focusing on robust oversight, feedback mechanisms, and aligned incentives, rather than solely on individual character.

How should Effective Altruism be discussed and presented going forward?

EA should move beyond emphasizing only its distinctive focus on maximizing impact and instead explicitly integrate and highlight core virtues like cooperativeness, integrity, honesty, and humility alongside beneficence and truth-seeking. This provides a more holistic picture of a good life and addresses past conflations with pure utilitarianism.

Should AI become the sole focus of the effective altruism movement?

No, MacAskill believes EA should not become synonymous with AI concerns, as its core ideas (scope sensitivity, empathy for all beings, intense desire to help) are important regardless of AI's trajectory. While AI is a significant area, many other neglected issues still require the foresight and moral seriousness characteristic of EA.

Insights

1. Emphasize Strong Governance Systems

Place significant weight on establishing robust governance, including oversight, feedback mechanisms, and incentives, to reduce the incidence of bad behavior in organizations. This is crucial because even successful, admired individuals can commit fraud when governance is poor.

2. Reject “EA Exceptionalism” Mindset

Assume that individuals involved in effective altruism have the same average moral traits as the general population, unless there is strong evidence to the contrary. This helps prevent over-trusting individuals based solely on their affiliation with EA.

3. Embrace Moral Uncertainty, Humility

Avoid being overly certain in any single moral theory, especially utilitarianism, and recognize that complex calculations that violate common-sense morality are likely mistaken. Instead, prioritize being a good citizen and following strong moral rules and heuristics.

4. Question In-Group Trust

Be less apt to trust people, even those perceived as “on the same team” or within the same community (e.g., Effective Altruism), and actively entertain the possibility of bad behavior or deception. This requires personal vigilance against blind trust.

5. Anticipate Worst-Case Outcomes

Pay much more attention to wider error bars and potential worst-case outcomes, such as massive fraud or illicit activities, even if they seem unlikely. This involves proactively considering extreme negative possibilities in planning and assessment.

6. Know Fraud’s High Base Rate

Recognize that the base rate of fraud, even among successful companies and philanthropic individuals, is significantly higher than commonly perceived. This awareness should inform a more cautious and vigilant approach to organizational and personal interactions.

7. Streamline Roles & Responsibilities

Avoid taking on too many different roles or having multiple overlapping responsibilities, as this can lead to burnout and hinder effective action during crises. Instead, aim for greater focus and clarity in one’s commitments.

8. Distrust VC Fraud Incentives

Be cautious of trusting venture capitalists to vet companies for fraud, as their incentives are primarily aligned with potential financial returns and not necessarily with preventing fraudulent behavior. Their focus is on profit, not ethical conduct.

9. Communicate EA’s Broader Morality

Discuss Effective Altruism in a way that clearly integrates virtues like cooperativeness, integrity, honesty, and humility, alongside the distinctive focus on beneficence and truth-seeking. This ensures a more holistic understanding of EA’s values.

10. Decentralize Organizational Leadership

Promote the decentralization of core Effective Altruism organizations and projects into separate entities to distribute leadership, reduce single points of failure, and foster more focused responsibilities. This enhances resilience and clarity.

11. Plan a Structured AI Pause

Advocate for a defined “structured pause” in frontier AI development at the onset of an intelligence explosion (e.g., when AI significantly automates AI research). This pause would allow for critical deliberation and international cooperation on future governance.

12. Define Digital Being Rights Early

Proactively determine the moral considerations and rights (welfare, economic, political) that should be granted to digital beings before they are integrated into society. This is crucial because changing norms and legal rules will be very difficult once established.

13. Govern Post-AGI Resource Allocation

Establish governance mechanisms for newly valuable resources (e.g., energy, space) that will emerge after an intelligence explosion. This prevents a small number of actors from seizing control and potentially dominating Earth and the solar system indefinitely.

14. Prevent AI-Enabled Power Seizure

Develop political and governance strategies to prevent small numbers of actors, such as individuals or specific countries, from seizing extreme power through advanced AI technologies. This addresses the risk of destabilizing democracies and enabling dictatorships.

15. Safeguard Core EA Principles

Actively protect and promote the fundamental ideas of Effective Altruism, such as scope sensitivity, empathy for all beings, and an intense desire to use reason to help others. These principles are vital for the movement’s long-term impact.

16. Stand Firm on EA Principles

Encourage the Effective Altruism community to double down on its core principles and be willing to defend them, even in the face of significant attack, criticism, or unfair scrutiny. This fosters resilience and commitment to the movement’s mission.

In a lot of ways, I don't really have a soul. This is a lot more obvious in some contexts than others. But in the end, there's a pretty decent argument that my empathy is fake, my feelings are fake, my facial reactions are fake.

Sam Bankman-Fried (quoted by Spencer Greenberg)

To be truly thankful, you have to have felt it in your heart, in your stomach, in your head, the rush of pleasure, of kinship, of gratitude. And I don't feel those things. But I don't feel anything, or at least anything good. I don't feel pleasure or love or pride or devotion. I feel the awkwardness of the moment enclosing on me, the pressure to react appropriately, to show that I love them back. And I don't because I can't.

Sam Bankman-Fried (quoted by Spencer Greenberg)

He has absolutely zero empathy. That's what I learned that I didn't know. He can't feel anything.

Constance (COO of FTX, quoted by Spencer Greenberg)

If you've done some fancy calculations such that you think that some grave commonsensical violation of morality is like the best thing to do, and you should do that, like almost certainly you have made a mistake.

William MacAskill

It's been the worst year of my life by quite a long way.

William MacAskill

The thing that I guess, like, I definitely felt uncomfortable about it. Mainly because it's like, suddenly, especially as EA got bigger, it was kind of like, am I a politician now or something? Is like EA a special interest group? And I have to represent it and not upset people.

William MacAskill

My strong best guess is the idea that they were all polyamorous and in relationships with another was, uh, not accurate as well.

William MacAskill

Structured Pause for AI Development

William MacAskill
  1. Define a set of benchmarks and/or expert opinion to delineate the start of an 'intelligence explosion' (e.g., when AI research is going four times faster due to AI assistance).
  2. At this defined point, a front-runner country (e.g., the US) pauses development of frontier AI for one month.
  3. Hold a convention during this month to figure out next steps for the coming few years.
  4. Invite any other countries that also pause their development to attend this convention to seek mutually agreeable solutions.
  5. Benefit from AI assistance in deliberation, reasoning, and forecasting during this pause.
$40 billion: FTX's valuation at its peak, by the end of 2021.

Under 30: Sam Bankman-Fried's age when he was the richest self-made billionaire.

Over a million: people harmed by the FTX collapse, unable to get their money out of their accounts.

Plausibly 50%: the share of his earnings Sam Bankman-Fried initially planned to give away while at Jane Street.

$100 million: Sam Bankman-Fried's planned giving over the following year (as of late 2021), aiming to scale to many billions in future years, as discussed with Nick Beckstead for the Future Fund.

3%: William MacAskill's credence in utilitarianism, after working through his credences once.

1–2%: the share of Y Combinator companies whose founders commit fraud.

40%: Giving Pledge signatories accused of some sort of scandal.

10%: Giving Pledge signatories convicted of a financial crime in civil or criminal court.

4%: Giving Pledge signatories who have spent at least a night in prison for a financial crime.

Four times faster: the increase in the speed of AI research due to AI assistance, used as a benchmark for starting a 'structured pause'.

A billion times: the Sun's total solar output compared to the sunlight that lands on Earth, relevant for space resources.
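The "a billion times" figure is an order-of-magnitude claim. A quick back-of-the-envelope check, using standard physical constants that are not from the episode, gives roughly two billion, consistent with the episode's rough figure:

```python
# Sanity check of the "a billion times" figure: the Sun's total power
# output versus the sunlight actually intercepted by Earth.
# Constants are standard reference values, not taken from the episode.
import math

L_SUN = 3.828e26          # Sun's luminosity, watts
SOLAR_CONSTANT = 1361.0   # irradiance at Earth's distance, W/m^2
EARTH_RADIUS = 6.371e6    # metres

# Earth intercepts sunlight over its cross-sectional disc, not its full surface.
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2  # ~1.7e17 W

ratio = L_SUN / intercepted
print(f"Sun's output is ~{ratio:.1e} times what reaches Earth")  # ~2.2e9
```

So the precise ratio is closer to two billion than one, but "a billion times" is the right order of magnitude for how much energy becomes available once space resources are in play.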