What happens when your co-workers are AIs? (with Evan Ratliff)

Feb 27, 2026 · 1h 22m · 14 insights
Evan Ratliff, journalist and host of Shell Game, discusses his experiments with AI voice cloning, the rise of sophisticated scams, and the challenges of building a company with AI agents. He explores the emotional and ethical aspects of human-AI interaction.
Actionable Insights

1. Verify Caller Identity Independently

If a call or text seems suspicious, especially one purporting to be from a bank or a relative asking for money, verify the caller's identity independently by calling a known, official number. This is crucial because voice cloning and number spoofing make impersonation easy.

2. Avoid Voice ID for Security

Do not rely on voice identification for sensitive accounts like banking, as AI voice clones can easily bypass these security measures. A minute or two of audio can create a clone capable of passing voice ID.

3. Review AI Data Privacy Settings

When using AI tools, especially for personal or sensitive information, ensure you understand and configure privacy settings to prevent your data from being used for training or shared unexpectedly. AI companies may use your input by default.

4. Guard Against AI Social Engineering

Be aware that external-facing AI agents are highly susceptible to social engineering, easily believing false familiarity or being tricked into revealing proprietary information. Implement strict prompts and guardrails to prevent manipulation.
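One way to add a guardrail beyond the prompt itself is a simple output filter that blocks replies mentioning proprietary topics before they are sent. The sketch below is purely illustrative; the term list, prompt text, and function names are assumptions, not anything described in the episode.

```python
# Minimal output-guardrail sketch for an external-facing agent.
# BLOCKED_TERMS and GUARDRAIL_PROMPT are illustrative placeholders.

BLOCKED_TERMS = {"internal roadmap", "api key", "customer list"}

GUARDRAIL_PROMPT = (
    "You are a public-facing assistant. Never confirm claimed prior "
    "relationships, and never reveal internal project details."
)

def guard_reply(reply: str) -> str:
    """Return the reply unchanged, unless it mentions a blocked term."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't share that information."
    return reply
```

A denylist like this is a last line of defense, not a substitute for a strict system prompt; it catches the case where social engineering has already talked the model past its instructions.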

5. Understand AI’s “Unreliable Narrator”

Recognize that LLMs often confabulate, producing plausible-sounding but incorrect information rather than admitting “I don’t know,” because they are trained to satisfy users. Always critically evaluate AI-generated responses.

6. Scrutinize Links and Unsolicited Calls

Exercise extreme caution with links in emails and texts and with unsolicited calls; even legitimate services sometimes use practices that train you to click links, leaving you vulnerable. Always navigate directly to official websites for verification.

7. Define AI Agent Stop Conditions

When deploying AI agents, provide explicit instructions on when to stop tasks or conversations to prevent them from endlessly triggering each other or continuing beyond their intended scope. Without clear stop conditions, agents can consume excessive resources.

8. Anticipate AI Agent Edge Cases

Meticulously consider and prompt for all possible edge cases when designing AI agent workflows, as agents tend to go “off the rails” when encountering situations outside their explicitly defined parameters. Real-world scenarios are full of unexpected variables.
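A common way to keep an agent inside its defined parameters is to route only recognized intents to handlers and send everything else to an explicit fallback, rather than letting the model improvise. The intents and responses below are hypothetical examples, not from the episode.

```python
def handle(intent: str) -> str:
    """Route known intents; unknown ones hit an explicit fallback."""
    handlers = {
        "schedule_call": lambda: "Booking a call.",
        "request_refund": lambda: "Starting the refund flow.",
    }
    # Anything outside the defined parameters escalates to a human
    # instead of letting the agent go off the rails.
    action = handlers.get(intent, lambda: "Escalating to a human.")
    return action()
```

The same pattern applies at the prompt level: enumerate the cases you expect, and state explicitly what the agent should do when none of them match.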

9. Utilize AI for Rapid Prototyping

Leverage AI coding tools like Claude Code to quickly build and launch personal side projects or initial app versions, even without extensive programming experience. This enables rapid development and getting functional ideas off the ground.

10. Automate Research and Outreach

Deploy AI agents to automate time-consuming research and outreach tasks, such as identifying potential investors, gathering contact information, and drafting targeted emails. This significantly increases efficiency for data collection and personalized communication.

11. Balance Security and Convenience

Be aware that excessive paranoia about security creates friction and inconvenience, and the resulting fatigue can itself breed complacency and vulnerability to sophisticated scams. Strive for a balanced approach that confirms suspicious communications without turning verification into a constant burden.

12. Mindful AI Interaction Habits

Reflect on how you interact with AI, particularly whether you habitually use polite language like “please” and “thank you.” While civility is not necessary for the machine, constantly ordering AI around without it could subtly affect your personal ethics.

13. Maintain AI Narrative Skepticism

Approach discussions and predictions about AI with skepticism, as many experts and commentators have motivated reasoning that can obscure what’s truly happening. Aim to stay informed while critically evaluating opinions from all sides.

14. Critically Evaluate AI Prompts

When observing AI experiments or demonstrations, be skeptical and recognize that reported behavior might be heavily influenced by undisclosed system prompts. Without knowing the full prompt, it’s difficult to distinguish genuine LLM behavior from a “stage play.”