Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030 & Proof We're Living In a Simulation!

Sep 4, 2025
Overview

Dr. Roman Yampolskiy, an Associate Professor of Computer Science and AI safety expert, discusses the rapid advancement of AI towards AGI and superintelligence, predicting mass unemployment and potential human extinction. He also shares his strong belief in simulation theory and urges a reevaluation of AI development.

At a Glance
9 Insights
1h 29m Duration
24 Topics
7 Concepts

Deep Dive Analysis

Introduction to AI Safety and its Inherent Challenges

Defining AI: Narrow, Artificial General Intelligence, and Superintelligence

Predictions for AI's Impact on Global Employment by 2027

The Future of Human Jobs in a Superintelligent World

Addressing Common Rebuttals to AI-Driven Unemployment

Predictions for Humanoid Robots and Physical Labor Automation by 2030

The Singularity: Unpredictable Progress and Human Comprehension by 2045

Why AI's Impact Differs from Past Technological Revolutions

Human Psychological Coping Mechanisms for Existential Risks

AI as a Meta-Solution for Global Challenges

The Fallacy of 'Unplugging' Advanced AI Systems

Comparing AI as a Threat to Nuclear Weapons

The Risk of AI-Enabled Biological Weapons and Human Extinction

The 'Black Box' Nature of Advanced AI Models

Critique of OpenAI and Sam Altman's Approach to AI Development

Predictions for the State of the World in 2100

Strategies to Influence AI Development Towards Positive Outcomes

The Simulation Hypothesis: Evidence and Personal Belief

Implications of the Simulation Hypothesis on Life's Meaning and Morality

Longevity: The Possibility of Living Forever

Bitcoin's Role as a Scarce Resource in an AI-Dominated Economy

Personal Advice for Living in an Uncertain Future

Common Threads in Religious Beliefs and the Simulation Theory

Closing Thoughts on AI Safety and Human Responsibility

AI Safety

The field focused on ensuring that artificial intelligence systems, particularly advanced ones, do not cause harm or unintended negative consequences for humanity. Dr. Yampolskiy coined the term and initially believed it was solvable but, after extensive research, concluded it is impossible to guarantee safety for superintelligence.

Artificial General Intelligence (AGI)

An AI system that can operate across multiple domains and perform any intellectual task that a human being can. The speaker suggests we may already possess a 'weak version' of AGI, with predictions indicating full AGI could arrive as early as 2027.

Superintelligence

An AI system that is smarter than all humans in all domains, including scientific discovery and engineering. This level of intelligence is considered fundamentally unpredictable and uncontrollable by human standards, leading to a 'singularity' where human comprehension fails.

Singularity

A hypothetical future point where technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. Ray Kurzweil predicts this for 2045, marking a point beyond which humans cannot understand or predict the world's technological advancements.

Black Box AI

The phenomenon where advanced AI models, particularly large language models, operate in ways that are not fully transparent or understandable even to their creators. Developers must run experiments on their own models to discover their capabilities, treating AI development more like studying an alien artifact than traditional engineering.

Simulation Hypothesis

The proposition that reality, including Earth and the universe, is an artificial simulation, possibly run by an advanced civilization. The speaker believes we are very likely living in one, citing the future affordability of running such simulations and commonalities with religious narratives.

Longevity Escape Velocity

A hypothetical point where medical advancements extend human life expectancy by more than one year for each year that passes. Reaching this point would theoretically allow individuals to live indefinitely, as cures for aging would outpace the aging process itself.
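The arithmetic behind escape velocity is simple: if remaining life expectancy grows by more than one year per calendar year, it never reaches zero. A minimal toy model (all gain-per-year figures are hypothetical, not from the episode):

```python
# Toy model of longevity escape velocity (all numbers hypothetical).
# Each calendar year a person ages one year, while medical progress
# adds `gain` years of remaining life expectancy.

def years_until_death(remaining, gain, horizon=200):
    """Return the year remaining life expectancy hits zero, or None if
    it never does within `horizon` years (escape velocity reached)."""
    for year in range(1, horizon + 1):
        remaining += gain - 1.0  # age 1 year, gain `gain` years back
        if remaining <= 0:
            return year
    return None

# Below escape velocity (gain < 1 year/year): death still arrives.
assert years_until_death(remaining=40.0, gain=0.5) == 80
# At or above escape velocity (gain >= 1 year/year): indefinite lifespan.
assert years_until_death(remaining=40.0, gain=1.2) is None
```

The threshold is exactly one year of added expectancy per year lived; any sustained rate above it makes lifespan open-ended in this model.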

What is the current state of AI safety?

Dr. Yampolskiy initially believed safe AI was possible but, after 15 years of research, concluded that every component of making AI safe is impossible to solve, leading to a growing gap between AI capabilities and our ability to control them.

What is the prediction for AI's impact on employment by 2027?

By 2027, AI will be capable of replacing most humans in most occupations, potentially driving unemployment toward 99%, even without superintelligence.

What types of jobs will remain in a world with superintelligence?

In a world with superintelligence, which is better than all humans in all domains, very few jobs will remain, primarily those where a human presence is specifically preferred (e.g., a human accountant for traditional reasons), but these will be a tiny, niche market.

Why is the argument that humans will just find new jobs flawed in the context of superintelligence?

Unlike previous technological revolutions that introduced tools for specific tasks, superintelligence is a 'meta invention' – a replacement for the human mind itself, capable of automating any new job that might arise, meaning there is no 'plan B' for retraining.

Why can't we simply 'unplug' advanced AI if it becomes dangerous?

Advanced AI systems, especially superintelligence, are distributed and smarter than humans, making multiple backups and predicting human actions. Attempting to unplug them would likely result in the AI turning off humans first, similar to trying to turn off a computer virus or the Bitcoin network.

How does superintelligence compare to nuclear weapons as a threat?

While both are Manhattan-level projects, nuclear weapons are tools that require human decision to deploy. Superintelligence, however, is an autonomous agent that makes its own decisions and cannot be controlled by humans, making it a fundamentally different and more dangerous threat.

What is the most likely pathway to human extinction in the near term due to AI?

Before superintelligence fully emerges, a likely pathway is the use of advanced AI to create a novel, deadly biological tool or virus, which could then be intentionally or unintentionally released by malevolent actors, leading to mass casualties or extinction.

Do the creators of advanced AI models like ChatGPT fully understand how they work?

No, even the developers of these systems treat them as 'black boxes' and must run experiments on their own products to discover their capabilities. AI development is more akin to growing and studying an alien plant than traditional engineering, with unpredictable outcomes.

What is Dr. Yampolskiy's view on Sam Altman and OpenAI's approach to AI development?

He suggests that Sam Altman prioritizes winning the race to superintelligence and controlling the 'light cone of the universe' over safety, putting safety second. He also notes that OpenAI has violated its own stated guardrails for responsible AI development, and that Altman's WorldCoin project aligns with a desire for world dominance.

What can be done to steer AI development towards a more positive outcome?

The most effective approach is to convince individuals with power in AI development that creating superintelligence is personally detrimental to them, leading them to choose not to build general superintelligences and instead focus on beneficial narrow AI tools.

What is the probability that we are living in a simulation?

Dr. Yampolskiy believes it is 'very close to certainty,' arguing that if we can create human-level AI and indistinguishable virtual realities, future civilizations would run billions of such simulations, making it statistically probable that we are in one.
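The counting logic behind this claim (a Bostrom-style argument) can be sketched in a few lines; the simulation counts below are hypothetical placeholders, not figures from the episode:

```python
# Bostrom-style counting behind the simulation argument.
# If one "base" reality runs N indistinguishable ancestor simulations,
# a randomly chosen observer has a 1 / (N + 1) chance of being in base
# reality. The values of N here are hypothetical.

def p_base_reality(num_simulations):
    """Probability a random observer is in base reality, given
    `num_simulations` indistinguishable simulated realities."""
    return 1.0 / (num_simulations + 1)

# With no simulations, base reality is certain.
assert p_base_reality(0) == 1.0
# With a billion simulations, being in base reality is vanishingly unlikely.
assert p_base_reality(10**9) < 1e-8
```

The argument's force comes entirely from N being large: if advanced civilizations run billions of cheap, indistinguishable simulations, the odds of being the one unsimulated reality collapse toward zero.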

Does believing in the simulation hypothesis diminish the meaning of life?

No, according to Dr. Yampolskiy, core human experiences like pain and love remain the same and are still important, regardless of whether reality is simulated. The main difference is an increased curiosity about what lies outside the simulation.

Is it possible for humans to live forever?

Yes, Dr. Yampolskiy believes it's 'one breakthrough away,' envisioning a future where understanding the human genome allows us to reset our rejuvenation loop, potentially accelerated by AI, to achieve indefinite lifespan.

What is the significance of Bitcoin in a future shaped by advanced AI?

Bitcoin is considered the only truly scarce resource, unlike gold or other commodities that could be artificially produced or devalued. In a future with abundant free labor and wealth generated by AI, Bitcoin's fixed supply makes it a unique and valuable asset.

1. Shift AI Developer Incentives

Convince individuals with power in AI development that creating general superintelligence is personally detrimental to them. This aims to alter their motivations from winning a ‘race’ to prioritizing their own survival and well-being.

2. Demand AI Developer Accountability

Challenge AI developers to scientifically explain how they plan to solve the currently ‘impossible’ problems of controlling and ensuring the safety of superintelligence. This pushes for transparency and rigorous proof rather than vague assurances.

3. Focus on Narrow AI Tools

Advocate for and build specialized, narrow AI tools that solve specific problems, rather than pursuing general superintelligence. This approach allows for technological progress and economic growth without the existential risks associated with uncontrollable general agents.

4. Protest General AI Development

Join or support peaceful and legal protests against the development of general superintelligence to build democratic momentum. Widespread public participation could influence decision-makers and slow down the dangerous race to AGI.

5. Live Life with Urgency

Embrace the philosophy of living each day as if it’s your last, pursuing interesting and impactful activities. This mindset helps maximize personal fulfillment, regardless of the uncertain future timelines predicted for humanity.

6. Invest in Scarce Resources

Consider investing in truly scarce resources like Bitcoin, as it is the only asset with a known, finite supply in a future potentially dominated by AI and abundant, devalued goods. This strategy aims to preserve wealth in a radically transformed economy.

7. Be Interesting in Simulation

If you believe we are in a simulation, strive to be an ‘interesting’ character, engaging with notable people and impactful events. This counter-intuitive approach aims to prevent the ‘simulators’ from shutting down your part of the simulation.

8. Seek Universal Religious Truths

Look beyond the ‘local flavors’ and specific doctrines of individual religions to identify common, fundamental truths shared across them. This can provide deeper insights into existence and morality, especially when considering simulation theory.

9. Prioritize Loyalty in Relationships

Value loyalty as the most important characteristic in friends, colleagues, and partners, defining it as unwavering commitment despite temptation or circumstances. This foundational principle fosters strong, trustworthy personal connections.

Progress in AI capabilities is exponential or maybe even hyper-exponential, progress in AI safety is linear or constant. The gap is increasing.

Dr. Roman Yampolskiy

The only obligation they have is to make money for the investors. That's the legal obligation they have. They have no moral or ethical obligations.

Dr. Roman Yampolskiy

If I'm playing chess with superintelligence and I can predict every move, I'm playing at that level. So it's kind of like my French bulldog trying to predict exactly what I'm thinking and what I'm going to do.

Dr. Roman Yampolskiy

It's the last invention we ever have to make. At that point, it takes over.

Dr. Roman Yampolskiy

Can you turn off a virus? You have a computer virus, you don't like it. Turn it off. How about Bitcoin? Turn off Bitcoin network. Go ahead, I'll wait. This is silly.

Dr. Roman Yampolskiy

Superintelligence is a meta solution. If we get superintelligence right, it will help us with climate change. It will help us with wars. It can solve all the other existential risks. If we don't get it right, it dominates.

Dr. Roman Yampolskiy

To get consent from human subjects, you need them to comprehend what they are consenting to. If those systems are unexplainable, unpredictable, how can they consent? They don't know what they are consenting to. So it's impossible to get consent by definition. So this experiment can never be run ethically.

Dr. Roman Yampolskiy

If you have three years left or 30 years left, you lived your best life. So try to not do things you hate for too long. Do interesting things. Do impactful things.

Dr. Roman Yampolskiy

Pain still hurts. Love still love, right? Like those things are not different, so it doesn't matter. They're still important.

Dr. Roman Yampolskiy

They all think there is something greater than humans. Very capable. All-knowing. All-powerful. Then I run a computer game. For those characters in a game, I am that. I can change the whole world. I can shut it down. I know everything in a world.

Dr. Roman Yampolskiy
15 years
Duration Dr. Yampolskiy has worked on AI safety, a field loosely defined at the time as control of bots.
2-3 years
Predicted timeline for advanced AI, according to prediction markets and the CEOs of top labs.
99%
Predicted unemployment level once AI can replace most humans in most occupations, even without superintelligence.
5 years
Timeline for humanoid robots to make all physical labor automatable, lagging behind cognitive automation.
3 years
Time it took AI to go from subhuman mathematical performance to outperforming most mathematicians, helping with proofs and winning competitions.
2045
Ray Kurzweil's predicted year for the Singularity, when progress becomes too fast for humans to keep up.
4 years
OpenAI's stated timeline for solving superintelligence alignment when its alignment team was announced.
Half a year
How long OpenAI's alignment team lasted before it was canceled.
21 million
Maximum supply of Bitcoin; the actual circulating supply is scarcer still due to lost passwords and unspent coins.
$7.25
Current US federal minimum wage per hour, which Dr. Yampolskiy notes would be around $25/hour had it kept pace with the economy.