Separating quantum computing hype from reality (with Scott Aaronson)

May 1, 2024 · 1h 18m · 8 insights
Spencer Greenberg speaks with Cat Woods about the urgent need to slow down AI development. They discuss the potential for superintelligence to cause existential or suffering risks and explore various actionable steps individuals can take to advocate for AI safety and responsible progress.
Actionable Insights

1. Advocate for Slowing AI Development

Actively advocate for slowing down AI development until humanity understands how to build it safely, as current systems are rapidly approaching human-level intelligence without adequate control mechanisms, posing significant existential risks.

2. Regulate AI Like Medicine

Push for AI development to be treated with stringent safety processes, similar to how new medicines or foods are regulated, requiring proof of safety before release rather than waiting to observe potential harm.

3. Engage in Online AI Advocacy

Participate in online advocacy by liking, sharing, and commenting on posts about AI safety to raise awareness and signal to politicians and corporations that the public desires cautious and safe AI development.

4. Contact Political Representatives

Write letters to or call your political representatives to express your concerns about AI and to advocate for specific bills or regulations; such direct contact can meaningfully influence policy decisions.

5. Volunteer for AI Safety Efforts

Volunteer your time for AI safety organizations, such as Pause AI (pauseai.info), by assisting with petitions, writing, research, technical help, or legal advice to contribute directly to slowing down development.

6. Donate to AI Safety Initiatives

Donate to organizations working on AI safety, such as Pause AI (pauseai.info) or through re-grantors like Manifund, to provide crucial financial support for efforts to ensure safe and aligned AI development.

7. Reflect on AI Existential Risk

Seriously consider the high probability of AI posing an existential risk and commit to taking action, rather than merely acknowledging it as an interesting idea, to contribute to global survival and ethical development.

8. Apply Golden Rule to AI

Treat animals the way you would want a superintelligence to treat you. This mental model makes vivid the indifference and unintended suffering a misaligned AI could inflict on beings far less intelligent than itself, just as humans often inflict on animals.