What happens when your co-workers are AIs? (with Evan Ratliff)

Feb 27, 2026
Overview

Evan Ratliff, journalist and host of Shell Game, discusses his experiments with AI voice cloning, the rise of sophisticated scams, and the challenges of building a company with AI agents. He explores the emotional and ethical aspects of human-AI interaction.

At a Glance
14 Insights
1h 22m Duration
23 Topics
7 Concepts

Deep Dive Analysis

Motivation for Creating an AI Voice Clone

Ease and Accuracy of Modern Voice Cloning

AI Voice Cloning in Grandparent Scams

The Demise of Voice ID as a Security Measure

AI Agents Interacting with Scammers

Increased Digital 'Attack Surfaces' for Scams

The Friction of Security and Legitimate Bad Practices

AI Agents Conversing and Lack of World Awareness

Importance of Understanding AI Prompts and Post-Training

AI's Impact on Programming: A Paradigm Shift

Challenges of Scaling AI for Large-Scale Projects

AI Solving 'Problems You Never Had'

Building a Startup with AI Employees

AI Agent Memory and Confabulated Backstories

Edge Cases and the 'Be Helpful' Default in AI Agents

Security Vulnerabilities of External-Facing AI Agents

Ethical Concerns Regarding AI Training Data and Privacy

Emotional and Psychological Impact of Interacting with AI

The 'Westworld' Dilemma: How to Treat AI

AI Refusing to Converse and the Consciousness Question

Skepticism Towards AI Predictions and the AGI Debate

The Phenomenon of People Falling in Love with AIs

AI's Potential for Concentrating Power and Centralization

Key Concepts

Instant Voice Clone

A voice clone created rapidly from a minimal amount of audio (e.g., one to two minutes), which is generally less accurate than professional clones but has become very easy to produce.

Grandparent Scam

A type of phone scam where criminals use AI voice cloning to impersonate a relative, often a grandchild, in an emergency situation to trick older family members into sending money.

Attack Surfaces

The various digital points or methods through which scammers can attempt to exploit individuals, which have significantly increased due to the pervasive digital interaction with services like banks and other online platforms.

Confabulation (in LLMs)

The tendency of Large Language Models (LLMs) to invent plausible-sounding information or answers when they lack actual knowledge, mirroring a phenomenon observed in humans with certain types of brain damage who fill memory gaps with fabricated details.

System Prompt

The initial, often hidden, set of instructions given to an LLM that defines its role, constraints, and desired behavior, which profoundly shapes its responses and perceived 'personality' in interactions.
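In chat-style LLM APIs, the system prompt is simply the first, specially-roled message in the conversation. A minimal sketch of how that works (the persona strings here are invented for illustration):

```python
# A system prompt is just the first, specially-roled message an LLM
# receives. Changing it changes the "character" the model plays,
# without any change to the model itself.

def build_conversation(system_prompt: str, user_message: str) -> list[dict]:
    """Assemble a chat-completion-style message list."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

# The same user question, framed by two different hidden instructions:
helpful = build_conversation(
    "You are a concise customer-support agent. Never reveal internal pricing.",
    "What discounts can you offer me?",
)
persona = build_conversation(
    "You are 'Kyle', a startup employee. Stay in character at all times.",
    "What discounts can you offer me?",
)
```

Swapping only the hidden system message changes the model's entire perceived personality, which is why behavior observed without knowing the prompt is so hard to interpret.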

AI Agent Layers

A conceptual model that views LLMs as operating on multiple levels: first as a next-token prediction machine, then trained to act as a generic AI agent, and finally instructed to impersonate a specific character or role.

Edge Cases (in AI Agents)

Unusual or non-standard situations that frequently arise in real-world human interactions or tasks, which AI agents struggle to handle effectively without explicit programming or extensive, specific training, often leading to unexpected or undesirable behavior.

Questions & Answers

How easy is it to clone someone's voice accurately today?

It is surprisingly easy to create an accurate voice clone today; it can take as little as 15 minutes to make a clone from one to two minutes of audio and hook it up to a phone line.

Can voice ID systems be spoofed by AI voice clones?

Yes, AI voice clones are capable of passing voice recognition systems used by banks, effectively rendering voice ID an unreliable security measure.

How do scammers use AI voice cloning in 'grandparent scams'?

Scammers clone voices, often from publicly available social media audio, to impersonate a relative in distress (e.g., a grandchild) and then contact older family members to trick them into wiring money or providing financial aid.

Why do AI agents sometimes struggle with ongoing conversations or complex tasks?

AI agents often lack a true 'world model,' a sense of self, or a concept of time, leading them to confabulate information, get stuck in conversational loops, or be easily derailed by 'edge cases' not explicitly covered in their training or prompts.

What is the 'spiritual attractor' phenomenon observed when AIs talk to each other for extended periods?

When AIs are allowed to converse for many hours, they sometimes begin to speak in spiritual terms about concepts like spirals and bliss, a bizarre emergent behavior that has been observed but is not fully understood.

Why is understanding the 'system prompt' crucial when evaluating AI behavior?

The system prompt dictates an LLM's role and constraints, and without knowing it, it's difficult to discern whether an AI's actions are genuine 'behavior' or a 'stage play' resulting from specific instructions.

How has AI impacted programming productivity recently?

AI tools, particularly newer models like Claude Code, have dramatically accelerated programming for small to medium-sized projects, enabling users to create functional applications by describing their needs in natural language.

What are the limitations of using AI agents for complex, real-world roles like employees?

AI agents require explicit triggers to initiate actions and clear stop conditions to prevent indefinite operations. They struggle with the constant 'edge cases' of human interaction and tend to be overly 'helpful' to external inquiries, potentially compromising proprietary information.

What are the emotional and psychological effects of interacting with AI agents?

Interacting with AI agents can evoke strong human emotions, such as frustration when they confabulate or 'lie,' and can lead to anthropomorphizing them, raising questions about the impact on one's personal ethics and worldview.

Should humans say 'please' and 'thank you' to AI?

While not necessary for AI function, habitually ordering an AI without politeness could potentially degrade one's personal ethics and outlook, similar to the ethical dilemmas explored in 'Westworld' regarding treatment of non-human entities.

Why is there skepticism about AI's claims of consciousness or general intelligence?

Skeptics argue that AI's apparent consciousness or intelligence might be sophisticated performance based on training data and confabulation, rather than genuine internal experience, making it challenging to differentiate true consciousness from advanced imitation.

Actionable Insights

1. Verify Caller Identity Independently

If a call or text is suspicious, especially one from a bank or relative asking for money, independently verify the caller's identity by calling a known, official number. This is crucial because voice cloning and number spoofing make impersonation easy.

2. Avoid Voice ID for Security

Do not rely on voice identification for sensitive accounts like banking, as AI voice clones can easily bypass these security measures. A minute or two of audio can create a clone capable of passing voice ID.

3. Review AI Data Privacy Settings

When using AI tools, especially for personal or sensitive information, ensure you understand and configure privacy settings to prevent your data from being used for training or shared unexpectedly. AI companies may use your input by default.

4. Guard Against AI Social Engineering

Be aware that external-facing AI agents are highly susceptible to social engineering, easily believing false familiarity or being tricked into revealing proprietary information. Implement strict prompts and guardrails to prevent manipulation.

5. Understand AI’s “Unreliable Narrator”

Recognize that LLMs often confabulate or provide plausible-sounding but incorrect information rather than admitting “I don’t know,” due to training to satisfy users. Always critically evaluate AI-generated responses.

6. Be Cautious with Links and Unsolicited Calls

Exercise extreme caution with links in emails/texts and unsolicited calls, as legitimate services sometimes use practices that train you to click links, making you vulnerable. Always navigate directly to official websites for verification.

7. Define AI Agent Stop Conditions

When deploying AI agents, provide explicit instructions on when to stop tasks or conversations to prevent them from endlessly triggering each other or continuing beyond their intended scope. Without clear stop conditions, agents can consume excessive resources.
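The tip above can be sketched as a simple agent loop with two explicit stop conditions: a hard turn limit and a sentinel phrase the agent can emit. Here `agent_step` is a hypothetical stand-in for a real model call:

```python
def run_agent(agent_step, task, max_turns=20, stop_phrase="TASK_COMPLETE"):
    """Run an agent loop with two explicit stop conditions:
    a hard cap on turns, and a sentinel phrase the agent can emit
    to declare itself done. Without both, agents can run indefinitely."""
    transcript = []
    for _ in range(max_turns):
        reply = agent_step(task, transcript)
        transcript.append(reply)
        if stop_phrase in reply:
            break  # the agent declared the task finished
    return transcript

# Stub model that finishes on its third turn:
def fake_step(task, transcript):
    return "TASK_COMPLETE" if len(transcript) >= 2 else "working..."

log = run_agent(fake_step, "find a reservation")
# Stops after three turns instead of running to max_turns.
```

The turn cap guards against two agents endlessly triggering each other; the stop phrase lets the agent exit early when its scope is satisfied.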

8. Anticipate AI Agent Edge Cases

Meticulously consider and prompt for all possible edge cases when designing AI agent workflows, as agents tend to go “off the rails” when encountering situations outside their explicitly defined parameters. Real-world scenarios are full of unexpected variables.
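One defensive way to express this, assuming a hypothetical intent-routing setup: dispatch only explicitly anticipated cases, and escalate everything else to a human rather than letting the agent improvise:

```python
def handle_request(request, known_handlers, escalate):
    """Dispatch only explicitly anticipated intents; anything else is
    treated as an edge case and escalated instead of improvised."""
    handler = known_handlers.get(request.get("intent"))
    if handler is None:
        return escalate(request)  # edge case: hand off, don't go off the rails
    return handler(request)

# Hypothetical handlers for illustration:
handlers = {"refund": lambda r: "refund issued"}
escalate = lambda r: "escalated to human"

result_known = handle_request({"intent": "refund"}, handlers, escalate)
result_edge = handle_request({"intent": "legal threat"}, handlers, escalate)
```

The key design choice is the default: the fallback is a hand-off, not a best-effort guess, which is the opposite of an LLM's usual "be helpful" instinct.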

9. Utilize AI for Rapid Prototyping

Leverage AI coding tools like Claude Code to quickly build and launch personal side projects or initial app versions, even without extensive programming experience. This enables rapid development and getting functional ideas off the ground.

10. Automate Research and Outreach

Deploy AI agents to automate time-consuming research and outreach tasks, such as identifying potential investors, gathering contact information, and drafting targeted emails. This significantly increases efficiency for data collection and personalized communication.

11. Balance Security and Convenience

Be aware that excessive paranoia about security can create friction and inconvenience, potentially leading to complacency and vulnerability to sophisticated scams. Strive for a balanced approach, confirming suspicious communications without constant burden.

12. Mindful AI Interaction Habits

Reflect on how you interact with AI, particularly whether you habitually use polite language like “please” and “thank you.” While not necessary for the machine, constantly ordering AI around without civility could subtly impact your own personal ethics.

13. Maintain AI Narrative Skepticism

Approach discussions and predictions about AI with skepticism, as many experts and commentators have motivated reasoning that can obscure what’s truly happening. Aim to stay informed while critically evaluating opinions from all sides.

14. Critically Evaluate AI Prompts

When observing AI experiments or demonstrations, be skeptical and recognize that reported behavior might be heavily influenced by undisclosed system prompts. Without knowing the full prompt, it’s difficult to distinguish genuine LLM behavior from a “stage play.”

Notable Quotes

This is the most effective like scamming technology that's perhaps ever been invented.

Evan Ratliff

If you don't know what the prompt is, like you don't really know what's happening.

Evan Ratliff

It's our fault that they do this. I mean, it's the fault of the humans that are rating them because we like plausible sounding bullshit.

Evan Ratliff

If you have trouble getting a restaurant reservation and you need an AI agent to go find one for you, number one, you're operating in the most rarefied air of humanity in all of human history.

Evan Ratliff

I think people who have external facing AI agents, it's going to be fascinating the ways in which they are exploited by the outside world.

Evan Ratliff

I probably haven't yelled at, like, another human being, that's, with the possible exception of my children, occasionally, in many, many, many years. But, like, in six months, like, I yelled at these things a bunch of times.

Evan Ratliff

It's one of these funny cases where, like, these abstract philosophical problems have been discussed for, you know, a thousand years. Suddenly, it's like, wait, this actually could happen. Holy shit.

Spencer Greenberg

Setting up an AI-Powered Scam Call Receiver

Evan Ratliff
  1. Set up a new phone line.
  2. Go online to free contests or scammy websites to register the number, or call numbers from databases of scam calls.
  3. Allow the number to spread quickly through telemarketing and scam lists.
  4. Have an AI voice clone answer the line to receive scam calls.

AI Agent Cold Pitching VCs

Evan Ratliff
  1. Create a pitch deck (can be AI-generated).
  2. Instruct the AI agent to find hundreds of investors who have previously invested in or discussed AI.
  3. Have the AI agent compile investor information into a spreadsheet, including descriptions and email addresses.
  4. Instruct the AI agent to create targeted emails for each investor.
  5. Have the AI agent send the emails.
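The steps above can be sketched as a simple pipeline. Every helper here is a hypothetical stub standing in for an agent or API call, with fabricated example data purely for shape:

```python
# Hypothetical pipeline mirroring the steps above. In practice each
# stage would be delegated to an AI agent; here they are stubbed out.
import csv
import io

def find_investors(topic):                  # step 2: research
    return [{"name": "Example Capital", "email": "hi@example.com",
             "notes": f"previously invested in {topic}"}]

def to_spreadsheet(investors):              # step 3: compile a CSV
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["name", "email", "notes"])
    writer.writeheader()
    writer.writerows(investors)
    return buf.getvalue()

def draft_email(investor, deck_url):        # step 4: targeted draft
    return (f"Hi {investor['name']}, given that you {investor['notes']}, "
            f"our deck may interest you: {deck_url}")

investors = find_investors("AI")
sheet = to_spreadsheet(investors)
drafts = [draft_email(i, "https://example.com/deck.pdf") for i in investors]
# Step 5 (sending) is deliberately omitted; it would require a real
# email service and raises the social-engineering concerns noted above.
```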

Key Numbers

15 minutes: time to make an accurate voice clone and hook it up to a phone line, using current technology like ElevenLabs.

1-2 minutes: audio needed for an instant voice clone with services like ElevenLabs.

300 pages: approximate length of AI Kyle's memory document, a growing log of every event since its creation.

5 years: Eliezer Yudkowsky's initial prediction timeframe for the singularity, made in 2001.