AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris

Nov 27, 2025 · 2h 22m · 11 insights
Tristan Harris, a technology ethicist and co-founder of the Center for Humane Technology, warns about the catastrophic consequences of unchecked AI development. He highlights the race to AGI and its potential for job displacement, security risks, and psychological manipulation, urging collective action and regulation to steer toward a humane AI future.
Actionable Insights

1. Prioritize AI in Political Discourse

Treat AI as a primary political concern and vote for politicians committed to establishing guardrails and a conscious, humane approach to AI development, rather than a reckless one.

2. Advocate for Global AI Governance

Push for international agreements and negotiations among leading powers to pause or slow AI development, establish red lines, and ensure AI remains controllable, drawing on historical precedents such as the Montreal Protocol.

3. Demand AI Safety & Transparency Standards

Insist on mandatory safety testing, common safety standards, and transparency measures for AI labs, allowing public and governmental oversight, especially before advanced AI capabilities are released.

4. Support Whistleblower Protections

Advocate for stronger protections for whistleblowers within AI companies to enable them to safely disclose critical information about AI risks without fear of severe personal or financial loss.

5. Promote Humane AI Development

Encourage the development of “narrow AI” systems for specific, beneficial applications (e.g., non-anthropomorphic tutors, limited therapy bots) that are carefully designed to avoid manipulation or attachment issues, instead of racing towards general, uncontrollable AGI.

6. Reject AI Inevitability Mindset

Actively challenge the belief that a dystopian AI future is unavoidable; recognize that collective human agency and conscious choices can steer technology toward a better path.

7. Understand AI’s Societal Trade-offs

Recognize that AI, like all powerful technologies, comes with significant trade-offs, balancing potential benefits against systemic harms like job displacement, security risks, and psychological manipulation.

8. Cultivate “Deathbed Values” for Decisions

Use a “deathbed values” framework to guide daily choices: ask what would matter most if you were to die soon, so that your actions align with protecting what is most sacred and meaningful.

9. Share AI Clarity to Mobilize Action

Disseminate clear information about the risks and potential alternative paths for AI development to friends, family, and influential individuals, as widespread clarity can foster the courage needed for collective action.

10. Demand AI Liability Laws

Advocate for legal frameworks that impose liability on AI companies for the societal harms their products cause, creating financial incentives for more responsible and safer innovation.

11. Protest Unacceptable AI Paths

Be willing to participate in public movements and protests against uncontrolled or harmful AI development, as significant public pressure is often required to shift the trajectory of powerful technologies.