The non-technical PM’s guide to building with Cursor | Zevi Arnovitz (Meta)
Zevi Arnovitz, a non-technical PM at Meta, shares his unique AI-powered workflow for building real products from scratch. He details how he uses tools like Cursor and Claude Code, along with custom slash commands, to manage the entire product development lifecycle.
Deep Dive Analysis
17 Topic Outline
Zevi's Non-Technical Background and AI Journey
Overview of Zevi's AI Workflow and Core Tools
Compartmentalizing AI with Projects vs. Memory
Graduating from Simpler AI Tools to Cursor
Live Demonstration: Building a StudyMate Feature
Creating Linear Issues with AI Slash Commands
AI-Assisted Exploration and Planning for Features
Executing Code with AI Models
Limitations of Simpler AI Building Tools
Multi-Model AI Code Review Strategy (Peer Review)
Personifying AI Models to Understand Their Strengths
Importance of Post-Mortems and Updating AI Documentation
Integrating AI Workflows in Larger Companies
Impact of AI on the PM Role and Skill Development
Using AI for Job Interview Preparation
Learning from Career Failures
The Current Opportunity for Junior Builders
6 Key Concepts
AI Projects (GPT/Claude)
These are compartmentalized AI chats that maintain custom instructions and a shared knowledge base, preventing the AI's memory from mixing up different contexts or topics. This allows users to create specialized AI 'personas' like a CTO for specific tasks.
Slash Commands
Reusable prompts saved within a codebase that can be invoked by typing '/' followed by the command name. They automate repetitive tasks by injecting pre-defined instructions into the AI's context, streamlining workflows like issue creation or planning.
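Mechanically, a slash command is just a saved prompt template prepended to whatever the user types. A minimal Python sketch of that expansion step (the command names and template wording are illustrative, not Cursor's or Claude Code's actual storage format):

```python
# Minimal sketch of slash-command expansion: saved prompt templates
# keyed by command name, injected ahead of the user's message.

COMMANDS = {
    "create-issue": (
        "You are mid-development. Capture the following as a concise "
        "Linear issue (title + description), then return to the task:\n"
    ),
    "create-plan": (
        "Write a markdown plan with a TLDR, critical decisions, and "
        "broken-down tasks with status trackers for:\n"
    ),
}

def expand(user_input: str) -> str:
    """If the input starts with /<command>, prepend the saved template."""
    if user_input.startswith("/"):
        name, _, rest = user_input[1:].partition(" ")
        template = COMMANDS.get(name)
        if template:
            return template + rest
    return user_input

print(expand("/create-issue Export button is missing on mobile"))
```

Unknown commands fall through unchanged, so typing a plain `/` in conversation stays harmless.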
AI Harness
Refers to the layers and interfaces built around core AI models (like Claude or GPT) that determine the level of control a user has and how opinionated the tool is in its suggestions or actions. A 'tighter' harness offers more guidance but less control, while a 'looser' harness provides more flexibility.
AI Peer Review
A technique where multiple AI models are used to review the same code independently. The primary AI model then assesses the feedback from the other models, either explaining why the identified issues are not valid or fixing them, effectively having AI models 'fight it out' over code quality.
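The orchestration behind peer review is simple: gather independent reviews, then hand them all to the primary model with a fix-or-rebut instruction. A sketch, where `call_model` is a stub standing in for real provider API calls and the model names are just labels:

```python
# Sketch of AI peer review: several models review the same code
# independently; the primary model must then fix or rebut each finding.

def call_model(model: str, prompt: str) -> str:
    # Placeholder: in practice this would call the provider's API.
    return f"[{model}] response to: {prompt[:40]}..."

def peer_review(code: str, primary: str, reviewers: list[str]) -> str:
    # Step 1: independent reviews from each secondary model.
    findings = [
        call_model(m, f"Review this code for bugs:\n{code}")
        for m in reviewers
    ]
    # Step 2: the primary model must justify or fix each finding.
    verdict_prompt = (
        "Other reviewers raised these issues:\n"
        + "\n".join(findings)
        + "\nFor each issue, either explain why it is not real or fix it."
    )
    return call_model(primary, verdict_prompt)

result = peer_review("def add(a, b): return a - b", "claude", ["codex", "composer"])
print(result)
```

The key design choice is that the primary model sees the reviews only after writing the code, so it cannot pre-emptively agree with itself.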
Learning Opportunity Slash Command
A specific prompt designed to prime an AI model to explain complex technical concepts using the 80-20 rule, tailored for someone with mid-level engineering knowledge. This helps non-technical users quickly grasp the essence of difficult topics.
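The exact prompt isn't shown in the talk; a hedged reconstruction of its spirit as a reusable template:

```python
# Illustrative "learning opportunity" prompt template. The wording is a
# guess at the intent described above, not Zevi's actual prompt.

LEARNING_PROMPT = """\
Explain {concept} using the 80-20 rule: the 20% of the idea that gives
80% of the understanding. Assume I am a technical PM in the making with
mid-level engineering knowledge. Keep it short, then give one concrete
example tied to our codebase."""

print(LEARNING_PROMPT.format(concept="database migrations"))
```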
Post-Mortem for AI Workflows
The practice of analyzing instances where an AI model fails or produces suboptimal output. The user asks the AI to reflect on the root cause of the mistake, then updates system prompts, documentation, or tooling to prevent similar errors in the future, continuously improving the AI's performance.
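The post-mortem loop can be made concrete as an append-only lessons file that the agent reads on every run. A sketch, where the file name and helper are illustrative rather than any tool's real convention:

```python
# Sketch of the post-mortem loop: after asking the model for a root
# cause, append the lesson to a persistent instructions file so future
# sessions avoid the same mistake.

from pathlib import Path

def record_lesson(docs_path: Path, failure: str, root_cause: str) -> None:
    """Append a lesson entry to the AI's standing documentation."""
    entry = f"\n## Lesson\nFailure: {failure}\nRoot cause: {root_cause}\n"
    with docs_path.open("a", encoding="utf-8") as f:
        f.write(entry)

docs = Path("AGENT_NOTES.md")
docs.write_text("# Standing instructions\n", encoding="utf-8")
record_lesson(docs, "Ran a destructive migration",
              "The plan skipped a dry-run step")
print(docs.read_text(encoding="utf-8"))
```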
7 Questions Answered
How can non-technical individuals build sophisticated products with AI?
Non-technical individuals can build sophisticated products by adopting a structured AI workflow that includes planning, execution, and review, often starting with simpler AI chat projects and gradually moving to powerful coding environments like Cursor with specific AI models.
What is the biggest challenge for non-technical builders?
The primary challenge is effectively reviewing the code generated by AI, especially for non-technical users who may lack the expertise to identify errors or suboptimal solutions.
How do different AI models compare for development tasks?
Different AI models possess distinct strengths—for example, Claude excels at planning and communication, Gemini is strong in UI design, and GPT is adept at fixing complex bugs. Leveraging these specialized capabilities across various project stages optimizes the development workflow.
How can users improve the quality of AI output?
To enhance AI output quality, users should provide clear context, specific instructions, and style guidelines. Additionally, conducting post-mortems on AI failures and updating prompts or documentation based on these learnings continuously refines the AI's understanding and performance.
How can AI help with job interview preparation?
AI can serve as a personal coach for interview preparation by conducting mock interviews, analyzing frequently asked questions, and providing critical feedback on responses. It can also generate ideal answers for learning and understanding best practices.
Does relying on AI make PMs worse at their jobs?
No, if used intentionally and correctly, AI can significantly enhance a PM's abilities by allowing them to gain experience at a higher strategic level, focus on core product strategy and user experience, and serve as a non-judgmental thought partner.
Why is now a good time for junior builders?
AI empowers junior individuals to build their own startups and significant products, providing an unfair advantage to those who are curious, optimistic, hardworking, kind, and effective communicators, enabling them to create value independently.
22 Actionable Insights
1. Adopt a Structured AI Workflow
Implement a systematic workflow for building with AI, encompassing issue creation, exploration, plan development, execution, multi-model review, and documentation updates, to ensure comprehensive and efficient product development.
2. Create an AI ‘CTO’
For non-technical users, establish an AI project (e.g., in GPT or Cursor) with a custom prompt to act as a CTO. This AI should own technical implementation, challenge your ideas, and avoid being a ‘people pleaser’ to ensure robust technical decisions.
3. Implement Multi-Model Peer Review
After AI generates code, have multiple AI models (e.g., Claude, Codex/GPT, Composer) independently review the code. Then, use a ‘peer review’ command to present these findings to your primary AI agent, prompting it to justify or fix identified issues, which significantly improves code quality.
4. Continuously Update AI Tooling
When AI makes mistakes or fails to execute correctly, ask it to reflect on the root cause within its system prompt or tooling. Use this insight to update documentation and tools, preventing recurrence and continuously improving AI performance.
5. Leverage AI for Learning
When encountering difficult technical concepts, use an AI ‘learning opportunity’ command to have the AI explain the concept using the 80-20 rule. This approach, tailored for a ‘technical PM in the making,’ helps build engineering knowledge efficiently.
6. Start AI Building Gradually
If code is intimidating, begin your AI building journey slowly with a simple chatbot UI (like a GPT project), then progress to low-code platforms (like Bolt or Lovable), and finally transition to more advanced environments like Cursor in light mode, gradually easing into full development.
7. Use Slash Commands for Efficiency
Save frequently used prompts as reusable slash commands within your AI development environment (e.g., Cursor/Claude) to quickly invoke specific actions like creating issues, exploring ideas, or generating plans, streamlining your workflow.
8. Capture Issues with AI
When an idea or bug arises mid-development, use an AI slash command (e.g., /create issue) to quickly capture it and create a Linear issue. This allows you to maintain focus on your current task while ensuring new ideas are recorded.
9. Explore Ideas with AI
After an issue is captured, use an ‘exploration phase’ AI command to have the AI analyze the problem, understand the existing codebase, and ask clarifying questions. This ensures a deep understanding of the problem and guides the best technical implementation.
10. Create Detailed Plans with AI
Utilize an AI command (e.g., /create plan) to generate a structured markdown file plan, including a TLDR, critical decisions, and broken-down tasks. This plan serves as a clear roadmap for execution and can be used by different AI models.
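The plan file's shape, as described, can be rendered programmatically. A sketch whose section structure is inferred from the description above, not Zevi's exact format:

```python
# Sketch of the plan markdown described above: TLDR, critical
# decisions, and tasks with status trackers (checkboxes).

def render_plan(tldr: str, decisions: list[str], tasks: list[str]) -> str:
    lines = ["# Plan", "", "## TLDR", tldr, "", "## Critical decisions"]
    lines += [f"- {d}" for d in decisions]
    lines += ["", "## Tasks"]
    lines += [f"- [ ] {t}" for t in tasks]  # unchecked status trackers
    return "\n".join(lines)

plan = render_plan(
    "Add streak tracking to StudyMate.",
    ["Store streaks server-side, not in local storage."],
    ["Add streak field to the user model", "Update dashboard UI"],
)
print(plan)
```

Because the plan is a plain markdown file, it can be split and handed to different models, as the insight above suggests.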
11. Specialize AI Model Usage
Allocate tasks to different AI models based on their strengths (e.g., Cursor’s Composer for speed, Gemini for UI/design, Claude for communicative technical leadership, GPT for complex bug fixing). This optimizes efficiency and output quality.
12. Guide AI for Quality Output
To minimize ‘AI slop’ and ensure high-quality results, provide AI with clear guidelines and extensive context about your writing style, problem-solving approach, and specific requirements. This helps the AI produce more relevant and useful outputs.
13. Own All AI Outputs
Take full personal responsibility for any content or code generated by AI that you present or release. Blaming AI for mistakes is unacceptable; you are accountable for the quality and accuracy of your final deliverables.
14. Compartmentalize AI Contexts
Use AI ‘projects’ or similar features to separate different areas of your life or work (e.g., running, product management, personal projects). This prevents the AI’s memory from mixing up irrelevant information and ensures context-specific responses.
15. Be a ‘10x Learner’
Especially for junior professionals, prioritize being an exceptional learner over being a ‘10x doer’ or having all the answers. Actively seek out mentors, assess their strengths, and consult them for specific areas of expertise to accelerate your growth.
16. Embrace an ‘AI-First’ Mindset
When faced with any new challenge or problem, immediately consider how AI can assist in solving it. This proactive approach leverages AI’s capabilities for preparation, building, or learning, making it a default problem-solving tool.
17. PMs Can Code (Cautiously)
As a PM, you can use AI to build contained UI projects or create pull requests with AI-generated code for developers to finalize. This is particularly feasible in codebases with robust documentation for AI agents, but avoid complex database migrations.
18. Prioritize Human Mock Interviews
While AI can aid in interview preparation (e.g., mock interviews, question analysis), conducting human mock interviews is crucial, especially for competitive roles. This provides invaluable real-world practice and feedback that AI alone cannot fully replicate.
19. Use AI for Interview Feedback
Record your human interviews and feed them to an AI coach (e.g., a Claude project) to receive objective feedback on your performance. This helps identify areas for improvement, such as missed points or better phrasing, addressing the common lack of detailed human feedback.
20. Learn from AI’s ‘Perfect’ Answers
For interview preparation, ask AI to role-play as the ideal candidate and provide exemplary answers to questions. Studying these ‘perfect’ responses can offer valuable insights and improve your own articulation and content.
21. Build Your Own Startup
Recognize that the current AI era makes it the ‘best time to be a junior’ because individuals can now build and launch their own startups with minimal technical background and resources. This encourages entrepreneurship and independent creation.
22. Cultivate Key Traits
Develop curiosity, optimism, and a strong work ethic. These qualities, when combined with effective AI utilization, provide an ‘unfair advantage’ and enable individuals to deliver significant value, often surpassing those with more traditional experience.
7 Key Quotes
It's not that you will be replaced by AI. You'll be replaced by someone who's better at using AI than you.
Zevi
If people walk away thinking how amazing you are, you failed. And if people walk away and open their computer and start building, you've succeeded.
Claude (quoted by Zevi)
If regular ChatGPT was a CTO, that would be the CTO who like goes along with your dumbest ideas.
Zevi
Code is just words at the end of the day. So it's just files on your computer.
Tal Raviv (quoted by Zevi)
I always think about this where, for some reason, the easiest way for me to think about AI is to imagine it as people.
Zevi
The only way that AI makes you worse at your job is if you're using it wrong.
Zevi
Nobody knows what they're doing.
Zevi
1 Protocol
Zevi's AI Product Building Workflow for Non-Technical PMs
- Create Issue: Use a slash command (`/create issue`) to quickly capture a bug or feature idea, telling the AI you're mid-development and need quick capture, then create a Linear issue.
- Exploration Phase: Use a slash command (`/exploration phase`) referencing the Linear ticket to have the AI deeply understand the problem, analyze the codebase, and ask clarifying questions about scope, data model, UX/UI, validation, and AI system prompt changes.
- Create Plan: Use a slash command (`/create plan`) to generate a detailed markdown plan based on the exploration, including a TLDR, critical decisions, and broken-down tasks with status trackers. This plan can be split for different models (e.g., Gemini for UI).
- Execute Plan: Use a command (e.g., `execute`) to tag the plan file and have an AI model (e.g., Cursor's Composer) write the code based on the plan.
- Review: Manually QA the built feature, then use a slash command (`/review`) to have the primary AI model (e.g., Claude) review its own code for bugs.
- Peer Review: Have other AI models (e.g., Codex, Composer) also review the code. Then, use a slash command (`/peer review`) to present these external reviews to the primary AI model, challenging it to either explain why the issues are not real or fix them.
- Update Documentation: After successful execution and review, update documentation and tooling based on any failures or learnings to prevent future mistakes and improve AI responses.
- Testing: Conduct additional testing, including user testing, before releasing the feature.