AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI

Mar 26, 2026
Overview

Karen Hao, an AI expert and investigative journalist, discusses the "empires of AI," revealing how major tech companies manipulate public perception and exploit labor and resources in their pursuit of AI. She exposes the hidden costs and anti-democratic structures driving the AI race.

At a Glance
5 Insights
2h 8m Duration
17 Topics
6 Concepts

Deep Dive Analysis

Critique of the AI Industry's Inhumane Practices

Karen Hao's Background and AI Journalism Journey

The Ambiguous Definition of Artificial General Intelligence (AGI)

Sam Altman's Early Rhetoric and Elon Musk's OpenAI Involvement

The Power Struggle Leading to Sam Altman's Ousting

The 'Imperial Agenda' Driving AI Companies

AI CEOs' Beliefs and the Role of Myth-Making

OpenAI's Stance on Karen Hao's Investigative Reporting

Detailed Account of Sam Altman's Firing and Reinstatement

The 'Summoning the Demon' Narrative as a Power Tactic

Debating AI Intelligence, Scaling, and Military Capabilities

AI's Impact on Employment and the 'Jagged Frontier'

Environmental and Social Costs of AI Infrastructure

Klarna CEO's View on AI's Efficiency and Job Changes

The Inhumane Reality of Data Annotation Labor

Strategies to Break Up AI Empires and Build Alternatives

Individual Action to Influence AI Development

Artificial General Intelligence (AGI)

AGI refers to the ambitious goal of recreating human intelligence in AI systems. However, there is no scientific consensus on what human intelligence is, allowing companies to define and redefine AGI conveniently for marketing, fundraising, or policy influence, rather than having a coherent, objective goal.

Brain as a Statistical Model

This is a hypothesis held by some AI researchers, such as Ilya Sutskever and Geoffrey Hinton, suggesting that human brains are fundamentally large statistical engines. This belief informs their conviction that continually scaling AI statistical models will eventually lead to human-level or superhuman intelligence.

Imperial Agenda of AI

This framework describes how major AI companies operate by laying claim to resources not their own (like user data and intellectual property), exploiting vast amounts of labor, monopolizing knowledge production, and using narratives (e.g., 'us vs. evil empire') to justify their consolidation of power and resources, akin to historical empires.

Jagged Frontier of AI Models

This concept highlights that despite claims of AI being 'everything machines,' their capabilities are not generally intelligent like humans. Instead, AI models are very good at specific, narrow tasks (often chosen for financial lucrativeness) but lack broad, general abilities, meaning they cannot easily transfer learning across different domains.

Data Annotation

Data annotation is the process where human workers manually label, categorize, or transcribe data to teach AI systems specific tasks. This labor is crucial for training AI models, but it often involves inhumane working conditions, low pay, and devalues the expertise of the workers involved.

Bicycles of AI

An analogy for AI systems that use small, curated datasets and far fewer computational resources to deliver enormous benefits, such as DeepMind's AlphaFold for drug discovery. This contrasts with the 'rockets of AI' (large language models), which consume vast resources and impose high costs for comparatively modest benefits.

What is the core critique of the current AI industry's practices?

The AI industry is driven by profit over progress, leading to inhumane practices, exploitation of labor, environmental harm, and suppression of inconvenient research, all while claiming to work for public benefit.

How do AI companies define Artificial General Intelligence (AGI)?

AI companies, particularly OpenAI, define AGI inconsistently depending on the audience, ranging from a system that cures cancer and solves climate change to one that generates $100 billion in revenue or outperforms humans in economically valuable work, highlighting a lack of scientific consensus.

Did Sam Altman manipulate Elon Musk into co-founding OpenAI?

From Elon Musk's perspective, he felt manipulated by Sam Altman mirroring his language about AI's existential threats to secure his involvement and funding, and later, Musk was 'muscled out' of the CEO position by Altman's persuasion of other co-founders.

Why was Sam Altman initially fired from OpenAI?

Sam Altman was fired by OpenAI's independent board due to concerns from senior executives (Ilya Sutskever and Mira Murati) about his leadership creating too much instability, pitting teams against each other, and making poor decisions, which they felt was dangerous for a technology with transformative potential.

What is the 'imperial agenda' of AI companies?

The imperial agenda involves AI companies laying claim to non-proprietary resources (data, IP), exploiting labor, monopolizing knowledge production, and using narratives about 'good vs. evil empires' to justify their consolidation of power and resources.

Do AI CEOs genuinely believe their own 'summoning the demon' rhetoric about AI?

While AI executives actively engage in myth-making to persuade the public and secure power and resources, many also lose themselves in the myth, blurring the lines between strategic messaging and genuine belief due to cognitive dissonance and constant embodiment of the narrative.

Will AI lead to mass job displacement, particularly in white-collar professions?

Yes. There will be major impacts on employment, not only because AI models automate tasks but also because executives choose to lay off workers and replace them with 'good enough' AI models, leading to a decline in entry-level and mid-tier jobs and the creation of worse 'data annotation' jobs.

What are the environmental and social costs of AI infrastructure?

AI companies build colossal supercomputer facilities that consume enormous amounts of power (e.g., over 20% of New York City's power for one facility), require vast fresh water for cooling, and often involve methane gas turbines, leading to increased utility bills, decreased grid reliability, water scarcity, and air pollution in vulnerable communities.

What can individuals do to address the negative impacts of AI?

Individuals can resist the 'flawless' adoption of AI by withholding data (like artists suing for IP infringement), protesting data center construction in their communities, and engaging in discussions about AI adoption policies in schools and workplaces, while also advocating for and building alternative, more sustainable AI development paths ('bicycles of AI').

1. Cultivate Deep Expertise & Curiosity

Develop deep domain expertise to effectively orchestrate AI agents and foster high curiosity to leverage AI tools, as these skills are crucial for adapting to the evolving job market and creating a force multiplier in business.

2. Prioritize In-Person Human Skills

Focus on developing irreplaceable human-to-human connection and real-life social skills, as these qualities will become increasingly valuable and essential for human well-being and professional relevance in an AI-driven world.

3. Focus on Human-AI Collaboration

Advocate for and implement AI applications that augment human capabilities rather than replace them, as combining AI tools with human expertise leads to the most accurate and beneficial outcomes, especially in critical fields like healthcare.

4. Critically Evaluate AI Rhetoric

Understand that predictions from AI leaders about AGI or existential risks often serve to persuade the public and consolidate power, rather than being purely objective forecasts, allowing for a more informed and critical assessment of industry claims.

5. Exercise Agency Against AI Overreach

Actively resist the unchecked expansion of AI by withholding personal data, challenging AI adoption policies in your environment, and protesting harmful infrastructure like data centers, because companies rely on flawless public adoption to achieve their goals.

So much of what's happening today in the AI industry is extremely inhumane.

Karen Hao

They profit enormously off of this myth.

Karen Hao

Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.

Sam Altman

I don't think Sam is the guy who should have the finger on the button for AGI.

Ilya Sutskever

It's not that it's going to actively hate humans or want to harm them, but it's just going to be so powerful. And I think a good analogy would be the way that humans treat animals.

Ilya Sutskever

They are purposely trying to create this feeling within the public that they are, because it is a crucial part of their power.

Karen Hao

I've become a monster and I am not even allowed to go to the bathroom or take care of my kids, let alone myself, because this industry that is absorbing more and more of the workers that are being laid off is mechanizing my life, atomizing my work, devaluing my expertise, and then harvesting it for the perpetuation of this machine that all of these AI executives are saying is then going to come for everyone else's jobs.

Data annotation worker (as quoted by Karen Hao)

I think the big breakthrough really came in November, December last year, when even the most skeptical engineers, who are very well renowned and appreciated, like the founder of Linux, basically said that coding has now been resolved and hence you don't need to code anymore.

Sebastian Siemiatkowski (Klarna CEO)
Over 250
People interviewed for the 'Empire of AI' book; total interviews conducted exceeded 300.
Over 90
Former or current OpenAI employees and executives interviewed for the 'Empire of AI' book.
1956
Year AI began as a scientific field, at Dartmouth College.
10% to 25%
Dario Amodei's estimated chance (stated in 2017, while he was an executive at OpenAI) of AI going catastrophically wrong on the scale of human civilization.
Size of Central Park
Footprint of OpenAI's Stargate data center project in Abilene, Texas, which would run a million computer chips and draw more power than 20% of New York City's demand.
Four times the size of the Abilene, Texas facility
Meta's supercomputer facility in Louisiana, roughly one-fifth the size of Manhattan, which would use half of New York City's average power demand.
Hundreds of millions
AI industry spending to influence legislation ahead of the upcoming midterms, to kill unfavorable bills or craft advantageous laws.
From 7,400 (peak) to 3,300 (at time of call)
Klarna's employee headcount reduction; the company aimed for 3,000 by the end of last summer, relying on natural attrition.
70%
Share of Klarna's customer service conversations handled by AI at the time of the CEO's call.
Doubled
Klarna's revenue during the same period in which headcount was cut by more than half.
10-15% per year
Klarna's natural attrition rate, expected to continue as the company aims for fewer employees.
40%
Reduction in entry-level jobs, according to an Anthropic report based on how people are currently using AI models.
80%
Americans who think the AI industry needs to be regulated, according to the most recent poll.