The original growth hacker reveals his secrets | Sean Ellis (author of “Hacking Growth”)

Sep 5, 2024
Overview

Sean Ellis, a pioneer of the growth discipline, discusses his famous "Sean Ellis test" for product-market fit, explaining its origins, how to run it, and how to act on its results. He also shares his framework for sustainable growth: prioritizing activation, engagement, and referrals before scalable acquisition.

At a Glance
50 Insights
1h 44m Duration
21 Topics
7 Concepts

Deep Dive Analysis

Introduction to Sean Ellis and the Sean Ellis Test

Explaining the Sean Ellis Test and the 40% Rule

Case Study: Improving Product-Market Fit at Lookout

Deeply Understanding Must-Have Users and Their Benefits

Nuances of the 40% Threshold and PMF Definition

When and When Not to Use the Sean Ellis Test

Origin Story of the Sean Ellis Test Question

Impact of Switching Costs on PMF Scores

Operationalizing the PMF Survey for Product Development

Superhuman's Strategy for Increasing Product-Market Fit

Coining the Term 'Growth Hacking' and Its Original Intent

Sean's Growth Strategy Approach: Prioritizing Activation

Case Study: LogMeIn's Activation Improvement

Strategies for Improving Activation and Onboarding

Identifying Effective Growth Channels and Demand Types

Developing the Dropbox Referral Program

The Importance of Word-of-Mouth for Freemium Models

Picking a North Star Metric for Business Growth

Evolution of Growth Strategies Over Time

ICE vs. RICE Prioritization Frameworks

AI's Role in Growth and Experimentation

Key Concepts

Sean Ellis Test

A simple survey question ('How would you feel if you could no longer use this product?') with choices like 'very disappointed' or 'somewhat disappointed,' used to determine if a product is a 'must-have' for its users, indicating product-market fit.

Product-Market Fit (PMF)

The state where a product is considered a 'must-have' by a significant percentage of its users. The Sean Ellis Test uses a 40% 'very disappointed' response rate as a leading indicator, with actual user retention being the ultimate lagging indicator.
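The scoring behind the test is simple arithmetic: the share of respondents choosing 'very disappointed', compared against the 40% threshold. A minimal sketch with made-up responses:

```python
from collections import Counter

def pmf_score(responses):
    """Share of respondents who answered 'very disappointed'."""
    counts = Counter(responses)
    return counts["very disappointed"] / len(responses)

# Hypothetical survey results (40 responses total)
responses = (
    ["very disappointed"] * 18
    + ["somewhat disappointed"] * 14
    + ["not disappointed"] * 8
)
score = pmf_score(responses)
print(f"{score:.0%}")                  # 45%
print("PMF signal:", score >= 0.40)    # PMF signal: True
```

The threshold check is only a leading indicator; as the definition above notes, retention remains the lagging confirmation.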

Activation

The process of guiding new users to their 'aha moment' or first valuable experience with a product. Optimizing activation is crucial for retention, as users are at the highest risk of churn before experiencing the product's core value.

North Star Metric

A single, quantifiable metric that reflects the core value a product delivers to its customers and aligns the entire team's efforts towards sustainable growth. It should be a measure of value delivered, not just revenue, and be 'up and to the right' over time.

Growth Hacking (Original Intent)

A disciplined approach to scrutinizing every action's impact on business growth, moving beyond traditional marketing to focus on acquiring customers efficiently and ensuring they retain and monetize. It emphasizes sustainable growth rather than one-off tactics.

Demand Generation vs. Demand Harvesting

Demand generation involves creating new demand for a product (e.g., through awareness campaigns or contextual ads), while demand harvesting involves capturing existing demand (e.g., through paid search for specific keywords).

ICE Prioritization Framework

A method for prioritizing growth experiments based on Impact (best-case scenario impact), Confidence (how confident you are in the impact), and Ease (how easy it is to implement). It helps teams systematically compare and select ideas for testing.

Frequently Asked Questions

What is the Sean Ellis Test?

It's a survey question asking users how they would feel if they could no longer use a product, with options like 'very disappointed,' 'somewhat disappointed,' or 'not disappointed,' to gauge if the product is a 'must-have'.

What is the 40% rule in the Sean Ellis Test?

A score of 40% or more of users saying they'd be 'very disappointed' if they could no longer use the product is considered a leading indicator of achieving product-market fit.

How can a low Sean Ellis Test score be quickly improved?

By repositioning the product to highlight the most valued feature identified by 'very disappointed' users and streamlining onboarding to quickly deliver that specific value.

How should companies use the Sean Ellis Test beyond the initial score?

The most effective use is to deeply understand the 'very disappointed' users, identify their primary benefits, and use these insights to refine product roadmap, onboarding, messaging, and acquisition strategies.

When is the best time to ask the Sean Ellis Test question?

Ask it of a random sample of users who have activated (used the product multiple times), used it recently (e.g., within the last week or two), and haven't churned.
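As a rough illustration of that eligibility filter before sampling (the field names and the two-session, fourteen-day thresholds here are hypothetical, not from the episode):

```python
import random
from datetime import datetime, timedelta

now = datetime(2024, 9, 1)  # fixed "today" for reproducibility

# Hypothetical user records
users = [
    {"id": 1, "sessions": 5, "last_seen": now - timedelta(days=3)},
    {"id": 2, "sessions": 1, "last_seen": now - timedelta(days=2)},   # not activated
    {"id": 3, "sessions": 9, "last_seen": now - timedelta(days=40)},  # churned / stale
    {"id": 4, "sessions": 3, "last_seen": now - timedelta(days=10)},
]

def eligible(u):
    # Activated (multiple sessions) and used recently (last two weeks)
    return u["sessions"] >= 2 and (now - u["last_seen"]) <= timedelta(days=14)

pool = [u for u in users if eligible(u)]
sample = random.sample(pool, k=min(len(pool), 2))
print([u["id"] for u in pool])  # [1, 4]
```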

When should the Sean Ellis Test not be used?

It's not suitable for 'one-off' products or experiences (e.g., a single movie or workshop) where the concept of ongoing usage and disappointment if it disappeared doesn't apply.

How can a company increase its Sean Ellis Test score from a low percentage (e.g., 10-15%)?

Focus on the core benefit valued by 'must-have' users and identify what 'somewhat disappointed' users, who also value that core benefit, need to make the product a 'must-have' for them, without diluting the experience for core users.

What is the recommended sequence for focusing on growth areas?

Sean Ellis recommends starting with activation, then engagement and referral loops, then optimizing the revenue model, and only then obsessing over scalable customer acquisition channels.

What are effective strategies for improving product activation?

Deeply understand the specific problems preventing users from getting value, ask users why they bounced, look for inspiration from successful products, and focus on increasing desire and reducing friction in the onboarding flow.

How should a company choose its North Star Metric?

Start by identifying the 'must-have' value uncovered by the Sean Ellis Test, then select a metric that reflects the delivery of that value, is 'up and to the right' over time, and correlates to revenue growth (but isn't revenue itself).

How has growth strategy changed over the past decade?

Initially, being data-driven on customer acquisition was enough to win, but now, competitiveness requires efficiency across all parts of the business (conversion, retention, monetization), demanding cross-functional collaboration.

What is the primary difference between the ICE and RICE prioritization frameworks?

Sean Ellis views 'Reach' (from RICE) as already factored into the 'Impact' component of ICE, suggesting RICE is an unnecessary complication of the original, simpler framework.

How will AI impact growth efforts?

AI can help generate experiment ideas, identify opportunities, model potential outcomes, and streamline the analysis of experiments, potentially reducing the impact of ego in cross-functional team recommendations.

Key Takeaways

1. Gauge Product-Market Fit

Ask users, “How would you feel if you could no longer use this product?” with “very disappointed,” “somewhat disappointed,” or “not disappointed” options to identify if your product is a must-have.

2. Focus on “Very Disappointed” Users

Concentrate on feedback from users who would be “very disappointed” if your product disappeared, as they represent your must-have users and core value.

3. Optimize Onboarding for Retention

Enhance retention by focusing on guiding users to the right initial experience during onboarding, rather than solely relying on tactical retention efforts.

4. Identify Core Value from Enthusiasts

Analyze the “very disappointed” user segment to pinpoint the specific functionality or benefit they value most, as this represents your product’s core must-have.

5. Streamline Onboarding for Speed

Streamline the onboarding process to quickly guide new users to the product's core value, ensuring they reach an "aha moment" and feel the core benefit (e.g., protection, for a security product) early on.

6. Deeply Understand Must-Have Users

Conduct follow-up qualitative research with your “very disappointed” users to deeply understand their context, problems, and why the core benefit is important to them.

7. Align Product Roadmap with Core

Ensure your product roadmap prioritizes features and improvements most important to your must-have customers, as identified through their feedback.

8. Define Value-Driven North Star

Collaboratively define a North Star metric that quantifies the delivery of your product's core must-have value, reflecting how many users experience product-market fit.

9. Prioritize Asking Right Questions

Focus on asking the right and often obvious questions at the right time to deeply understand problems, rather than prematurely jumping to solutions.

10. Ignore “Somewhat Disappointed” Feedback

Disregard feedback from users who would be “somewhat disappointed,” as acting on their “nice-to-have” suggestions may dilute the product’s appeal for your must-have users.

11. Convert On-the-Fence Users

Identify the core benefit valued by “must-have” users, then ask “somewhat disappointed” users seeking that benefit what improvements would make it a “must-have” for them.

12. Set Growth Readiness Target

Define a specific target percentage for the Sean Ellis Test (e.g., 40%) and commit as a team not to aggressively pursue growth until this target is consistently met.

13. Follow Growth Optimization Sequence

After validating product-market fit, optimize in this sequence: speed to value, engagement loops, referral mechanisms, revenue model, and then scalable customer acquisition.

14. Achieve Full Growth Funnel Efficiency

To remain competitive, focus on being highly efficient across all parts of your business, including conversion, retention, and monetization, not just customer acquisition.

15. Foster Cross-Functional Collaboration

Actively work to drive cross-functional collaboration between marketing, product, sales, and customer success teams for an effective, integrated testing program.

16. Systematize Idea Prioritization

Implement a systematic method (like ICE) for prioritizing experiment ideas, allowing clear explanations for choices, which builds trust and improves future submissions.

17. Diagnose Activation Bottlenecks

For low activation, deeply understand why users are dropping off by directly asking them (e.g., via email) about their experience and reasons for not completing activation steps.

18. Optimize Conversion Levers

Drive higher conversion rates by simultaneously increasing user desire for the product and reducing any friction points in the user journey.

19. Qualitatively Define Activation Moment

Qualitatively determine when a user has had a sufficiently good experience to understand its value; ideally, this “aha moment” should occur within the first session or day.

20. Leverage Test for Early PMF Signal

Utilize the Sean Ellis Test as a leading indicator to quickly assess product-market fit, even before robust retention data is available.

21. Survey Activated, Recent Users

Administer the Sean Ellis Test to a random sample of users who have activated (used the product at least twice) and used it recently (e.g., within the last week or two).

22. Aim for 30+ Survey Responses

Target a minimum of 30 responses for your Sean Ellis Test so that the sample is large enough to act on with reasonable confidence.
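The episode doesn't derive the 30-response floor, but a standard binomial margin-of-error check (not from the episode, just the usual normal approximation at 95% confidence) shows why much smaller samples make the 40% threshold unreliable:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% normal-approximation margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 30, 100):
    moe = margin_of_error(0.40, n)
    print(f"n={n}: 40% +/- {moe:.0%}")
```

With a true score near 40%, the margin is roughly plus or minus 30 points at n = 10, 18 at n = 30, and 10 at n = 100, so below about 30 responses the result barely constrains whether you're above or below the threshold.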

23. Reposition Product with Core Value

Reposition your product and messaging to clearly highlight the core must-have functionality identified from your most enthusiastic users, setting the right expectations upfront.

24. Uncover User Motivation

Ask “very disappointed” users, “What is the primary benefit you get?” (open-ended, then multiple-choice) and “Why is that benefit important to you?” to uncover deep user motivations and context.

25. North Star Metric Should Reflect Value

Choose a North Star metric that directly reflects the value delivered to customers (e.g., “nights booked,” “monthly purchases”) rather than a direct revenue metric.

26. Map Growth Flywheel

Diagram all components of your “value delivery engine,” including onboarding, activation, engagement loops, and referral mechanisms, to identify current state and opportunities for improvement.

27. Develop Early Acquisition Hypotheses

Before fully committing to a growth role or strategy, develop two to three viable hypotheses for how to profitably acquire customers.

28. Acquire for Learning, Not Scale

In early stages, conduct acquisition efforts to generate sufficient user flow for testing and optimization, rather than obsessing over scalability.

29. Ask Users How They Found Product

Regularly ask existing customers how they discovered your product and how they typically find similar products to gain insights into potential acquisition channels.

30. Integrate Research Methods

Combine qualitative insights from customer conversations with quantitative data from analytics and testing to formulate much better experiments and deeply understand user behavior.

31. Build Credibility for Freemium

If offering a freemium product, enhance its credibility by clearly presenting a paid version or business model alongside the free option, especially for new users.

32. Learn from Competitors’ Funnels

Study the download and installation processes of successful competitors or similar products to gain inspiration and identify potential improvements for your own activation funnel.

33. Reinforce Value During Onboarding

Throughout the onboarding process, periodically remind users of the benefits they will gain from using the product to maintain their desire and motivation.

34. Correlate Activation to Retention

Once an activation moment is defined, verify its effectiveness by checking if users who reach this moment have a strong correlation with long-term retention.
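A minimal way to run that check, using made-up cohort data: compare retention between users who hit the candidate activation moment and those who didn't. A large gap suggests the moment is well chosen.

```python
# Hypothetical event log: did each user hit the candidate activation
# moment, and were they still active after the retention window?
users = [
    {"activated": True,  "retained": True},
    {"activated": True,  "retained": True},
    {"activated": True,  "retained": False},
    {"activated": False, "retained": False},
    {"activated": False, "retained": True},
    {"activated": False, "retained": False},
    {"activated": False, "retained": False},
]

def retention_rate(group):
    return sum(u["retained"] for u in group) / len(group)

activated = [u for u in users if u["activated"]]
others = [u for u in users if not u["activated"]]
print(f"activated: {retention_rate(activated):.0%}")   # activated: 67%
print(f"not activated: {retention_rate(others):.0%}")  # not activated: 25%
```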

35. Use Product as Acquisition Channel

Consider making your product itself an advertisement or entry point, allowing users to experience value (e.g., gameplay) before requiring registration.

36. Amplify Existing Word-of-Mouth

Implement referral programs with incentives as an accelerant for growth only when your product already generates strong organic word-of-mouth.

37. Design Compelling Free Tier

For a freemium model to succeed, ensure your free product is so valuable and effective that it naturally generates strong word-of-mouth.

38. Maximize Natural Engagement

Understand and design your engagement strategies to maximize user interaction within the natural usage cycle of your product.

39. Choose Growth-Oriented Metric

Select a North Star metric that can consistently trend “up and to the right” over time, providing a clear, positive indicator of sustained growth and value delivery.

40. Time-Cap Metric Selection

When collaboratively defining a North Star metric, time-cap the discussion (e.g., 30 minutes) to encourage efficient decision-making.

41. Crowdsource Experiment Ideas

Encourage and facilitate the submission of experiment ideas from across the entire company to foster a high-velocity testing program and leverage diverse perspectives.

42. Leverage AI for Communication

Utilize AI tools like ChatGPT to generate initial drafts for common questions or advice, allowing for quick tweaks and significantly increasing communication efficiency.

43. Adopt AI for Growth Strategy

Leverage AI systems to identify underperforming business areas and suggest experiments, as dispassionate recommendations can help overcome ego and drive effective cross-functional growth initiatives.

44. Prioritize Long-Term Value

Prioritize building a strong reputation and continuous learning over short-term earnings, as this approach can lead to greater opportunities and long-term success.

45. Manage Customer Dissatisfaction

If a customer is genuinely unhappy with your product and requests a refund, provide it without hesitation to protect your reputation and avoid prolonged dissatisfaction.

46. Tailor PMF Threshold

Consider cultural nuances (e.g., optimism or pessimism of the user base) when setting your specific product-market fit threshold, as 40% "very disappointed" may not mean the same thing across different user bases.

47. Avoid Test for One-Off Products

Do not use the Sean Ellis Test for one-off products or experiences (e.g., a single movie or workshop), where users wouldn't naturally express disappointment about being unable to use it again.

48. Prioritize Core Product Issues First

If users who overcome onboarding challenges still dislike the product, focus on resolving core product issues or targeting the right users, rather than solely optimizing onboarding.

49. Strive for Valuable and Unique Product

Ensure your product is both highly valuable and offers a unique solution to be considered a “must-have,” as commodity use cases often have easy alternatives.

50. Adapt Survey for Experiments

When testing changes such as onboarding updates, survey only users who have experienced the new changes to accurately gauge their impact on product-market fit.

Notable Quotes

How would you feel if you could no longer use this product?

Sean Ellis

If you start paying attention to what your somewhat disappointed users are telling you, and then you start tweaking onboarding and product based on their feedback, maybe you're going to dilute it for your must-have users.

Sean Ellis

Moving retention often is really hard, but it's usually much more a function of onboarding to the right user experience than it is about the tactical things that people try to do to improve retention.

Sean Ellis

A problem well stated is a problem half solved.

Kettering (quoted by Sean Ellis)

I don't care what they say, I care what they do.

Sean Ellis

Before the referral program, Dropbox had an amazing referral rate... it's a great accelerant when it's already working, but it can't fix it if people don't want to talk about your product.

Sean Ellis

Focus on reputation and learning over earnings.

Sean Ellis

Frameworks and Playbooks

Improving Product-Market Fit (Lookout Case Study)

Sean Ellis
  1. Run the Sean Ellis Test to identify 'very disappointed' users.
  2. Dig into the feedback from these users to understand what functionality they value most (e.g., antivirus).
  3. Reposition the product's messaging to focus on this highly valued functionality, attracting users with the right expectations.
  4. Streamline onboarding to ensure new users immediately experience the core, valued functionality (e.g., setting up antivirus and confirming protection).
  5. Continuously survey subsequent cohorts to track improvement in the 'very disappointed' percentage.

Deeply Understanding Must-Have Users

Sean Ellis
  1. Filter users based on the Sean Ellis Test to identify those who would be 'very disappointed' without the product.
  2. Ask these users an open-ended question: 'What is the primary benefit that you get?'
  3. Collect and categorize the open-ended responses to crowdsource different benefits.
  4. Run a follow-up survey with a different group of 'very disappointed' users, turning the identified benefits into a multiple-choice question, forcing them to pick one.
  5. Follow up the multiple-choice question with 'Why is that benefit important to you?' to gain deeper context.

Superhuman's Approach to Increasing PMF Score

Sean Ellis
  1. Identify the core benefit that 'must-have' users (those 'very disappointed') are focused on.
  2. Among 'somewhat disappointed' users, identify those who are also focused on that same core benefit.
  3. Determine what specific features or improvements these 'somewhat disappointed' users need to make the product a 'must-have' for them.
  4. Implement these changes, ensuring they do not dilute the experience for the existing 'must-have' users.

Sean Ellis's Growth Strategy Sequence

Sean Ellis
  1. Understand the product's must-have value for its users.
  2. Define a North Star metric that captures units of this value being delivered.
  3. Diagram all ways to grow the North Star metric (onboarding, activation, engagement, referral, revenue model, acquisition).
  4. Prioritize and optimize activation first, ensuring users reach their 'aha moment' quickly.
  5. Focus on engagement and referral loops.
  6. Optimize the revenue model for profitability.
  7. Finally, obsess on scalable customer acquisition channels.

Prioritizing Growth Experiments (ICE Framework)

Sean Ellis
  1. Source experiment ideas from across the company.
  2. For each idea, estimate its potential Impact (best-case scenario).
  3. Estimate your Confidence in achieving that impact.
  4. Estimate the Ease of implementing the experiment.
  5. Use these three scores to systematically compare and prioritize ideas for testing.
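The scoring step is commonly a simple average of the three 1-to-10 scores. A sketch with hypothetical example ideas:

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: int      # best-case impact, 1-10
    confidence: int  # confidence the impact materializes, 1-10
    ease: int        # ease of implementation, 1-10

    @property
    def ice(self) -> float:
        # ICE is commonly computed as a simple average of the three scores
        return (self.impact + self.confidence + self.ease) / 3

# Made-up experiment backlog
ideas = [
    Idea("Streamline signup to 2 steps", impact=8, confidence=6, ease=7),
    Idea("Add referral incentive", impact=9, confidence=4, ease=3),
    Idea("Reword onboarding email", impact=4, confidence=7, ease=9),
]

# Highest ICE score first
for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.ice:.1f}  {idea.name}")
```

Keeping the formula this simple is deliberate: the point is fast, explainable ranking of a large idea backlog, not precise forecasting.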
Key Stats

7%
Initial share of Lookout users 'very disappointed' without the product (before repositioning and onboarding changes)
40%
'Very disappointed' share Lookout reached two weeks after repositioning and streamlining onboarding
60%
'Very disappointed' share Lookout reached six months later, after continued iteration
$4 billion
LogMeIn's eventual acquisition value (Sean Ellis was on the founding team)
750,000
Copies sold of Sean Ellis's book 'Hacking Growth'
90%
Share of Webs.com users 'very disappointed' without the product (attributed to high user investment and switching costs)
30
Minimum recommended responses for a reliable Sean Ellis Test sample
$10,000 per month
Limit on profitable customer acquisition spending at LogMeIn before activation improvements
95%
Share of LogMeIn sign-ups who never performed a remote-control session before activation improvements
10x
Improvement in LogMeIn's signup-to-usage rate (from 5% to 50%) after three months of focused activation work
$1 million per month
Scale of LogMeIn's acquisition channels after activation improvements, with a three-month payback on marketing spend
80%
Share of new LogMeIn users arriving through word-of-mouth after activation improvements
90%
Drop-off rate at the download step for a LogMeIn demand-generation channel before addressing user skepticism
300%
Improvement in LogMeIn's download rate after offering a choice between free and paid versions to establish credibility
40,000
Websites Sean Ellis's first game company syndicated games to as an advertising strategy
1 billion
Facebook's monthly active users when it shifted its North Star metric toward daily active users
100,000 miles
Sean Ellis's approximate travel distance this year (personal anecdote)
$5 million
Sean Ellis's self-estimated value of his reputation (used in a decision-making example)
$20,000
Project fee Sean Ellis offered to refund, an example of prioritizing reputation over short-term earnings