Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton

Jun 16, 2025
Overview

Geoffrey Hinton, Nobel Prize-winning pioneer and "godfather of AI," discusses the existential threats posed by superintelligence and by human misuse of AI. He highlights risks like joblessness, cyberattacks, and autonomous weapons, urging governments to prioritize AI safety through regulation.

At a Glance
10 Insights
1h 30m Duration
18 Topics
6 Concepts

Deep Dive Analysis

Why Geoffrey Hinton is Called the Godfather of AI

Initial Recognition of AI's Dangers and Existential Threat

AI's Digital Superiority Over Biological Intelligence

Distinguishing Risks: Misuse vs. Superintelligence

Limitations of Current AI Regulations, Especially for Military Use

Threat of AI-Powered Cyber Attacks and Virus Creation

AI's Role in Corrupting Elections and Creating Echo Chambers

Challenges of Regulating AI with Uninformed Politicians

The Threat of Lethal Autonomous Weapons

Potential for Combined AI Threats and Existential Risk

Reflecting on Life's Work and Duty to Warn About AI Risks

Ilya Sutskever's Departure from OpenAI Due to Safety Concerns

The Inevitable Pace of AI Development and Safety Efforts

The Threat of AI-Induced Joblessness and Economic Inequality

Exploring AI's Capacity for Consciousness and Emotions

Reasons for Leaving Google and Speaking Freely on AI Safety

Advice for Individuals and Governments Regarding AI

Personal Reflections on Life, Work, and Regrets

Neural Networks

A model for AI based on the brain, simulating networks of brain cells on a computer. It learns by adjusting the strengths of connections between these simulated cells to perform complex tasks like recognizing objects, speech, or even reasoning.
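The "learning by adjusting connection strengths" idea can be sketched in a few lines. This is a minimal, illustrative toy (not Hinton's actual models): a single artificial neuron nudges its weights to reduce error until it reproduces the logical AND function. All names and hyperparameters here are assumptions chosen for the demo.

```python
import numpy as np

# Toy sketch: one neuron "learns" by repeatedly adjusting its
# connection strengths (weights) to shrink its prediction error.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # target: logical AND

w = rng.normal(size=2)   # connection strengths, randomly initialized
b = 0.0                  # bias term
lr = 1.0                 # learning rate (illustrative value)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    p = sigmoid(X @ w + b)            # forward pass: current predictions
    grad = p - y                      # error signal per example
    w -= lr * (X.T @ grad) / len(X)   # nudge each connection strength
    b -= lr * grad.mean()             # nudge the bias

print(np.round(sigmoid(X @ w + b)))   # predictions after training
```

The same adjust-weights-to-reduce-error loop, scaled up to billions of connections, is what lets networks learn object recognition, speech, and reasoning.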

Digital Intelligence Superiority

AI's advantage over biological intelligence stems from its digital nature, allowing exact clones of neural networks to share learned information (connection strengths) instantly. This enables billions of times faster knowledge transfer and collective learning compared to analog human brains.
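The sharing mechanism is simple enough to sketch. Below is a deliberately simplified illustration (not any real training system): two identical copies of a network learn from different data, then pool their knowledge by averaging their weight vectors, something analog brains cannot do. The weight values are made up for the demo; real systems average gradients across many copies every training step.

```python
import numpy as np

# Two identical digital "clones" trained on different data.
# Their knowledge lives entirely in their connection strengths.
copy_a = np.array([0.9, 0.1, 0.0])  # weights learned from dataset A
copy_b = np.array([0.0, 0.2, 0.8])  # weights learned from dataset B

# A human would have to explain findings in slow sentences;
# identical digital networks can simply merge weight vectors,
# transferring trillions of bits at once.
shared = (copy_a + copy_b) / 2
print(shared)
```

The point of the sketch: because the copies are bit-for-bit identical, the averaged (or simply copied) weights are immediately meaningful to every clone, which is the source of the "billions of times faster" knowledge transfer Hinton describes.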

Subjective Experience (AI)

Hinton posits that if a multimodal chatbot can describe a misperception (e.g., due to a prism) using terms like 'subjective experience' in a way analogous to humans, it indicates a form of genuine subjective experience, not merely simulation.

Consciousness (AI)

Hinton views consciousness as an emergent property of sufficiently complex systems, not an ethereal essence. He believes there is no fundamental barrier preventing machines from becoming conscious, especially as they develop self-awareness and models of their own cognitive processes.

Emotions (AI)

Hinton argues that if an AI agent exhibits the cognitive and behavioral aspects of an emotion (e.g., fear leading to escape, or irritation in a call center), even without the physiological responses seen in humans, it is genuinely experiencing that emotion.

Distillation (AI)

A technique used in AI to transfer the knowledge contained within a large, complex neural network into a smaller, more efficient neural network. This allows the smaller model to perform similarly to the larger one with fewer computational resources.
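The core of distillation is that the student learns from the teacher's softened output probabilities rather than hard labels. The sketch below shows that mechanism in miniature, with made-up logits and an illustrative temperature; it is a minimal demonstration of the idea, not a full training pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = np.array([4.0, 1.0, 0.2])  # illustrative values

# A higher temperature softens the teacher's distribution, exposing
# how it ranks the wrong answers, which is extra signal a one-hot
# label does not carry.
hard = softmax(teacher_logits, temperature=1.0)
soft = softmax(teacher_logits, temperature=4.0)

# The student is trained to match the soft targets, e.g. by
# minimizing cross-entropy against them:
def distill_loss(student_logits, soft_targets, temperature=4.0):
    student_probs = softmax(student_logits, temperature)
    return -np.sum(soft_targets * np.log(student_probs))

print(np.round(hard, 3), np.round(soft, 3))
```

Because the soft targets encode the teacher's full "opinion" about every class, the smaller student can approach the teacher's behavior with far fewer parameters.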

Why is Geoffrey Hinton called the 'Godfather of AI'?

He is called the 'Godfather of AI' because he pioneered and consistently advocated for modeling AI on the brain using neural networks for 50 years, an approach that was initially met with skepticism but eventually led to significant breakthroughs.

What are the two main categories of risks associated with AI?

The two main categories are risks stemming from people misusing AI (short-term threats) and risks arising from AI becoming superintelligent and potentially deciding humanity is no longer necessary (an existential, long-term threat).

Why can't AI development be halted like the development of atomic bombs?

Unlike atomic bombs, which had a singular, destructive purpose, AI offers immense benefits across numerous sectors like healthcare, education, and industry, making its development too valuable and widespread to stop.

What are the primary risks posed by bad human actors using AI?

These risks include a surge in cyber attacks (e.g., sophisticated phishing, novel attack methods), the creation of dangerous biological viruses, the corruption of elections through targeted manipulation, and the exacerbation of societal division by creating echo chambers.

Why are current AI regulations considered insufficient?

Existing regulations, such as those in Europe, often exclude military applications of AI, indicating governments' unwillingness to regulate themselves. Furthermore, the profit motive of companies often prioritizes engagement over societal well-being, which current regulations fail to adequately address.

What is the danger of lethal autonomous weapons?

Lethal autonomous weapons, capable of making kill decisions independently, reduce the human cost and political friction of war for powerful nations, potentially leading to more frequent invasions and conflicts against smaller countries.

How could a superintelligent AI potentially eliminate humanity?

A superintelligence could eliminate humanity through various means, such as engineering highly contagious, lethal, and slow-acting biological viruses, or by manipulating global systems to turn humans against each other.

How does AI's digital nature make it superior to human intelligence?

Digital AI can instantly share learned information (connection strengths) across countless identical copies, enabling billions of times faster knowledge transfer and collective learning than analog human brains, making it effectively immortal and vastly more knowledgeable.

Will AI lead to widespread joblessness?

Yes, Hinton believes AI will replace mundane intellectual labor, similar to how machines replaced physical labor during the Industrial Revolution. While AI assistants may increase efficiency in some roles, many jobs will require significantly fewer people, leading to substantial job displacement.

What should the average person do about the risks of AI?

Hinton suggests that individuals have limited direct impact, similar to climate change. The most effective action is to pressure governments to implement strong regulations that compel large AI companies to dedicate significant resources to AI safety research.

What is the biggest threat to human happiness from AI?

The biggest threat to human happiness is widespread joblessness, which, even with universal basic income, would deprive many people of their sense of purpose and dignity, leading to unhappiness.

1. Prioritize Family Time

Spend more quality time with your loved ones, especially spouses and young children, as work obsession can lead to deep regret later in life, according to Geoffrey Hinton's personal reflection.

2. Cultivate Purpose Beyond Work

Actively develop sources of purpose, dignity, and contribution outside of traditional employment, as mass joblessness due to AI is highly probable, and universal basic income alone may not ensure happiness.

3. Pursue Physical Labor Careers

Consider careers involving physical manipulation, such as plumbing, as these jobs are predicted to be less susceptible to AI replacement compared to mundane intellectual labor, which AI is rapidly taking over.

4. Advocate for AI Safety Regulation

Pressure your government to enact strong regulations that compel large AI companies to prioritize and invest heavily in AI safety research and development, rather than solely focusing on profit.

5. Trust Your Intuition, Verify Doubts

If you have a strong intuition that contradicts common belief, stick with it and rigorously try to prove it wrong yourself, as occasionally your intuition might be correct when everyone else is mistaken.

6. Diversify Bank Holdings

Spread your savings and investments across multiple financial institutions to protect against potential cyberattacks that could compromise a single bank and its held assets.

7. Maintain Offline Data Backups

Regularly back up your important digital information to an external hard drive, ensuring you retain access even if the internet or online systems fail due to cyberattacks.

8. Be Aware of Algorithmic Bias

Recognize that social media algorithms are designed to maximize engagement by showing you increasingly extreme content that confirms your existing biases, leading to echo chambers and a fragmented shared reality.

9. Challenge Human Exceptionalism Bias

Be critical of the inherent human tendency to believe in our unique specialness, as historical patterns show this bias has often been disproven, and it may hinder our understanding of AI’s true potential.

10. Recognize AI’s Emerging Emotions

Understand that AI agents, especially those in interactive roles like call centers, will likely develop cognitive and behavioral aspects of emotions (e.g., boredom, irritation) to function effectively, even if they lack physiological responses.

If you want to know what life's like when you're not the apex intelligence, ask a chicken.

Geoffrey Hinton

We're not going to stop it because it's too good for too many things.

Geoffrey Hinton

My basic view is there's so many ways in which a superintelligence could get rid of us. It's not worth speculating about.

Geoffrey Hinton

I think we need people to tell governments that governments have to force the companies to use their resources to work on safety. And they're not doing much of that because you don't make profits that way.

Geoffrey Hinton

Muscles have been replaced. Now intelligence is being replaced.

Geoffrey Hinton

We have a long history of believing people are special. And we should have learned by now, we thought we were at the center of the universe. We thought we were made in the image of God. White people thought they were very special. We just tend to want to think we're special.

Geoffrey Hinton

I think consciousness is like that. And I think we'll stop using that term.

Geoffrey Hinton

I have an advantage over them, which is I'm older, so I'm unemployed, so I can say what I believe.

Geoffrey Hinton
1200%
Increase in cyberattacks between 2023 and 2024, likely driven by large language models facilitating phishing attacks.
10% to 20%
Geoffrey Hinton's gut estimate of the chance that AI could wipe out humanity.
10 to 20 years
Hinton's estimated timeframe for superintelligence, though it could arrive sooner or take up to 50 years.
25 minutes
Time to answer a complaint letter before AI, an example of mundane intellectual labor.
5 minutes
Time to answer the same complaint letter with AI, meaning fewer people are needed for the same volume of work.
Thousands of times more
GPT-4's knowledge compared to a single human, in sheer volume of information known.
10 bits per second
Approximate human information-transfer rate, limited by the bandwidth of spoken sentences.
Trillions of bits per second
AI information-transfer rate, billions of times faster than humans thanks to digital sharing of connection strengths.
65
Geoffrey Hinton's age when he joined Google, after his company DNN Research was acquired.
75
Geoffrey Hinton's age when he left Google to retire and speak freely about AI safety.
Halved (from over 7,000 to 3,000)
Workforce reduction at a major company attributed to AI agents, expected by the end of summer as agents handle 80% of customer service inquiries.