Becoming evidence-guided | Itamar Gilad (Gmail, YouTube, Microsoft)
Itamar Gilad, a product coach and former Google PM, discusses shifting from opinion-based to evidence-guided product development. He introduces frameworks like the GIST model, Confidence Meter, and Metrics Trees to help organizations make data-driven decisions and build high-impact products.
Deep Dive Analysis
17-Topic Outline
Gmail's Tabbed Inbox: Early Testing and Validation
Google Plus: Lessons from Opinion-Based Development
Gmail's Tabbed Inbox: Success through Evidence-Guided Approach
Introduction to the Evidence-Guided Book
Balancing Founder Vision with Evidence
Identifying Signs of Non-Evidence-Guided Teams
Overview of the GIST Model
Defining Overarching Goals with the Value Exchange Loop
Distinguishing North Star Metrics from Business KPIs
Prioritizing Ideas using Impact, Confidence, Ease (ICE)
The Confidence Meter: Quantifying Evidence for Ideas
Balancing Speed of Delivery and Speed of Discovery
Adapting Frameworks for Company Stage and Type
Initial Steps for Becoming More Evidence-Guided
The Steps Layer: Comprehensive Idea Validation Methods
The Task Layer: Managing Work with the GIST Board
Integrating OKRs and Outcome-Based Roadmapping
10 Key Concepts
Opinion-Based Development
A product development approach where ideas are pursued primarily based on strong beliefs, intuition, or the opinions of senior leaders, often leading to significant resource waste if the core assumptions are incorrect.
Evidence-Guided Approach
A product development philosophy that balances human judgment and creativity with systematic collection and analysis of evidence, allowing for early learning and adaptation to build high-impact products.
Value Exchange Loop
A model where an organization continuously delivers value to the market (measured by a North Star metric) and captures value back (measured by a top business KPI), creating a feedback loop for rapid growth.
North Star Metric (Itamar's Definition)
A key metric that specifically measures how much core value a product or service creates for its users or the market, distinct from business-centric metrics like revenue.
Metrics Trees
A hierarchical breakdown of a North Star metric and a top business KPI into their constituent sub-metrics, helping to visualize dependencies, assess impact, align teams, and assign ownership.
ICE Framework
A prioritization model that evaluates ideas based on their estimated Impact on goals, Ease of implementation (opposite of effort), and Confidence in those estimates, helping to objectively compare and select ideas.
Confidence Meter
A visual tool (thermometer-like) that quantifies the strength of evidence supporting an idea's impact and ease, ranging from low confidence (opinions, themes) to high confidence (rigorous tests and experiments).
GIST Model
A meta-framework for product development that breaks down the process into four interconnected layers: Goals (what to achieve), Ideas (hypothetical solutions), Steps (build-measure-learn loops for validation), and Tasks (development work), designed to help organizations become more evidence-guided.
Steps Layer (GIST Model)
A component of the GIST model that outlines a spectrum of validation methods, from low-cost assessments and fact-finding to various types of tests and experiments, enabling teams to learn and build simultaneously.
GIST Board
A dynamic project management tool, typically per team, that visualizes the team's key results (goals), current ideas, and the next validation steps, fostering context, ownership, and continuous learning within the team.
14 Questions Answered
Google Plus's failure, despite massive investment and belief, highlighted the pitfalls of 'opinion-based development' where ideas are pursued without sufficient evidence, leading to significant waste and missed opportunities.
The success of the tabbed inbox, which started as a small, doubted idea and was rigorously tested and validated with evidence, demonstrated the power of an 'evidence-guided' approach in delivering high-impact features.
While founders are crucial for generating important ideas, it's essential to critically examine these ideas with evidence rather than blindly building them. Even visionary leaders like Steve Jobs relied on evidence to refine and pursue ideas.
Presenting hard data and evidence from experiments, even small ones, can be very empowering. While some may react negatively, many leaders, including Steve Jobs, are willing to change their minds when presented with compelling evidence.
Telltale signs include unclear or output-focused goals, missing user-facing metrics, excessive time spent on roadmapping, lack of experimentation or learning from experiments, and disengaged teams focused solely on output.
The GIST model is a meta-framework that structures product development into four layers: Goals (what to achieve), Ideas (hypothetical solutions), Steps (build-measure-learn loops for validation), and Tasks (development work), designed to help organizations become more evidence-guided.
The North Star metric measures the core value delivered to the market or users (e.g., messages sent for WhatsApp), while the top KPI measures the value captured back by the business (e.g., revenue or profit).
ICE (Impact, Confidence, Ease) helps evaluate ideas by estimating their potential impact on goals, the ease of implementation, and the confidence in those estimates, providing a more objective and consistent way to compare ideas.
The Confidence Meter is a visual tool that assigns a score (0-10) to an idea's confidence based on the type and strength of evidence available, moving from low confidence (opinions, themes) to high confidence (rigorous tests, experiments).
Instead of viewing them as trade-offs, companies should focus on 'time to outcomes.' An evidence-guided approach, which integrates learning (discovery) throughout the building process, is ultimately faster and more resource-efficient in achieving the right outcomes.
Early-stage companies focus on finding product-market fit and iterating on their value creation. As they scale, they need to build business models and then establish systematic ways to evaluate ideas and create order, with specific frameworks like metrics trees becoming more relevant.
Validation methods range from low-cost assessments (goal alignment, business modeling, assumption mapping) and fact-finding (data analysis, user interviews) to various tests (fake door, smoke, Wizard of Oz, usability, early adopter programs, alphas, dogfooding) and controlled experiments (A/B tests).
The GIST board, used by individual teams, displays their key results (goals), current ideas, and planned validation steps. It serves as a dynamic tool for regular team discussions, providing context, fostering ownership, and reducing the need for top-down directives.
OKRs (Objectives and Key Results) provide the overarching goals for teams. The metrics trees, combined with the company's mission and team-specific missions, help populate the key results that are then tracked and managed within the GIST framework.
20 Actionable Insights
1. Shift to Evidence-Guided Development
Move your team and organization from opinion-based decision-making to an evidence-guided approach. This involves balancing human judgment with data to supercharge decisions, leading to more successful products and avoiding wasted resources on unvalidated ideas.
2. Challenge Exec Opinions with Data
When founders or senior leaders propose ideas, ask for the evidence supporting them. If necessary, run secret experiments or gather data to objectively challenge opinions, as even influential leaders like Steve Jobs were willing to change their minds with sufficient evidence.
3. Define User-Centric Goals
Establish clear, overarching, user-centric goals for the entire organization, not siloed by department. Use a “North Star metric” to measure value delivered to users and a “top KPI” for business value, then break these down into “metrics trees” to align teams and assess impact.
4. Adopt the GIST Model
Implement the GIST (Goals, Ideas, Steps, Tasks) model as a meta-framework to structure product development. This breaks down the process into manageable parts, ensuring alignment, systematic idea evaluation, simultaneous learning and building, and team engagement.
5. Evaluate Ideas with ICE
Use the ICE (Impact, Confidence, Ease) framework to objectively and transparently evaluate ideas. Estimate Impact on goals, Ease of implementation, and critically, Confidence in those estimates, to avoid opinion battles and prioritize effectively.
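ICE scoring is simple enough to sketch in a few lines. Each factor is rated 0-10; multiplying the three is one common convention for combining them (averaging is another), so treat the formula and the example ideas and scores below as illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    impact: float      # 0-10: estimated effect on the goal metric
    confidence: float  # 0-10: strength of evidence behind the estimates
    ease: float        # 0-10: inverse of implementation effort

    @property
    def ice(self) -> float:
        # Multiplying the factors is one common convention; averaging is another.
        return self.impact * self.confidence * self.ease

# Hypothetical ideas with hypothetical scores.
ideas = [
    Idea("tabbed inbox", impact=8, confidence=6, ease=5),
    Idea("dark mode", impact=3, confidence=7, ease=8),
]

for idea in sorted(ideas, key=lambda i: i.ice, reverse=True):
    print(f"{idea.name}: ICE = {idea.ice:.0f}")
```

The point is not the arithmetic but the transparency: the per-factor estimates can be debated and revised as evidence comes in, instead of arguing over whole ideas.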
6. Quantify Confidence with Meter
Employ the Confidence Meter tool to quantify the strength of evidence supporting your ideas, ranging from low confidence (opinions, themes) to high confidence (various forms of testing and experiments). This helps teams understand the reliability of their impact and ease estimates and guides investment.
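The Confidence Meter's core mechanic, mapping the type of evidence to a 0-10 confidence score, can be sketched as a lookup table. The categories below follow the weak-to-strong ladder described above, but the exact names and numeric weights are assumptions, not Itamar's published calibration.

```python
# Illustrative evidence ladder: weaker evidence earns a lower score,
# stronger evidence a higher one. Categories and weights are assumptions.
EVIDENCE_SCORES = {
    "self conviction": 0.1,
    "thematic support": 0.5,
    "others' opinions": 1.0,
    "estimates and plans": 2.0,
    "anecdotal evidence": 3.0,
    "market data": 4.0,
    "user evidence": 5.0,
    "test results": 7.0,
    "early launch results": 10.0,
}

def confidence(evidence: list[str]) -> float:
    """Confidence is driven by the strongest evidence collected, capped at 10."""
    if not evidence:
        return 0.0
    return min(10.0, max(EVIDENCE_SCORES[e] for e in evidence))

print(confidence(["self conviction"]))                 # opinions alone score low
print(confidence(["user evidence", "test results"]))   # tests raise confidence
```

The resulting score plugs directly into the C of an ICE estimate, which is how the two tools reinforce each other.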
7. Invest Proportionally to Confidence
Tie your investment in an idea to its level of confidence. Start with cheap, low-effort methods to gain confidence, then invest more as positive evidence accumulates, and know when to stop testing if the risk is low.
8. Combine Learning and Building
Integrate learning and building simultaneously, rather than treating them as separate phases. Use a spectrum of validation methods, from inexpensive “fake it” tests to more elaborate experiments and staged releases, to validate assumptions early and efficiently.
9. Prioritize Time to Outcomes
When facing uncertainty, shift the focus from how fast you can get features into production to how fast you can achieve desired outcomes. Evidence-guided methods, by prioritizing learning and validating the “right bits,” are ultimately more resource-efficient and faster to impact.
10. Empower Teams with GIST Board
Replace traditional roadmaps with a GIST board for each team, displaying their key results (goals), current ideas, and planned validation steps. This fosters context, ownership, and dynamic decision-making, allowing teams to autonomously pursue outcomes and learn continuously.
11. Use Fake Door Tests
Before building a full product, create a facade (e.g., HTML mock-up) to test user interest and gather evidence. This allows for early validation without significant development cost, as demonstrated with the Gmail tabbed inbox.
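A fake door test ultimately produces a click-through rate with uncertainty attached. One way to read such a result is with a Wilson score interval; the traffic numbers below are hypothetical, and the choice of interval is this sketch's assumption, not a method from the episode.

```python
import math

def wilson_interval(clicks: int, impressions: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a fake-door click-through rate."""
    if impressions == 0:
        return (0.0, 0.0)
    p = clicks / impressions
    denom = 1 + z**2 / impressions
    center = (p + z**2 / (2 * impressions)) / denom
    margin = (z / denom) * math.sqrt(
        p * (1 - p) / impressions + z**2 / (4 * impressions**2)
    )
    return (center - margin, center + margin)

# Hypothetical result: 1,000 users saw the fake entry point, 80 clicked it.
low, high = wilson_interval(clicks=80, impressions=1000)
print(f"click-through rate: 8.0% (95% CI {low:.1%} to {high:.1%})")
```

An interval keeps a small fake-door sample from being over-read as precise demand evidence.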
12. Continuously Conduct User Research
Don’t wait until you have an idea to start user research; maintain ongoing research efforts. This ensures you have a continuous stream of data from surveys, interviews, and field observations to inform and validate new ideas efficiently.
13. Involve Developers in Discovery
Break down the traditional divide between planning and execution by inviting developers into the discovery process. This allows them to contribute to idea selection and validation steps, leading to greater engagement and more effective outcomes.
14. Start with Biggest Problem
When adopting evidence-guided practices, identify your organization’s most pressing problem (e.g., unclear goals, constant debates, disengaged teams) and start implementing the corresponding GIST layer first. Avoid trying to transform everything at once to prevent fatigue.
15. Use Outcome Roadmaps
Replace traditional release roadmaps (focused on launching specific features by certain dates) with outcome roadmaps. Define what outcomes you want to achieve by when, allowing flexibility in how those outcomes are met and encouraging continuous learning and adaptation.
16. Critically Evaluate Competitor Features
Do not assume a competitor’s feature is a good idea just because they launched it. Critically evaluate it with your own data and testing, as assuming they know what they’re doing can lead to implementing ineffective ideas.
17. Fish Food for Rough Versions
When you have a rough, incomplete, or unpolished version of a product, test it internally with your own team, a practice referred to as “fish food.” This provides early feedback before broader internal or external testing.
18. Dog Food Product Internally
For more complete versions of your product, implement “dog fooding” by having your internal team use the next version of the product. This helps identify bugs and gather practical feedback before external release.
19. Utilize Stage Releases
Even after a product is built, use staged releases, percentage launches, and holdbacks as final opportunities to learn and validate assumptions. This allows for controlled rollout and adjustments based on real-world usage.
20. Strive to Be of Value
Adopt the motto “Strive not to be a success, but to be of value.” This principle encourages focusing on creating genuine worth for others, which can guide both personal and professional endeavors.
6 Key Quotes
Behind every terrible idea that ever was, someone thought it was great.
Itamar Gilad
Evidence is very empowering for us smaller people in the organization or mid-level managers to be empowered to challenge the opinions.
Itamar Gilad
It's not about shutting them down. It's about looking at them critically.
Itamar Gilad
The metric is not how fast can we get the bits into production. When there's a lot of uncertainty... it's about getting the right bits to production.
Itamar Gilad
Strive not to be a success, but to be of value.
Itamar Gilad (quoting Albert Einstein)
You should not assume that your competitor actually knows what they're doing any more than you do.
Itamar Gilad
2 Protocols
Idea Validation Progression (Steps Layer)
Itamar Gilad
- Step 1: Assessment: Check goal alignment, conduct business modeling, ISO analysis, assumption mapping, and stakeholder discussions to identify risks and initial insights.
- Step 2: Fact Finding: Dig into existing data through analysis, surveys, competitive analysis, user interviews, and field research to gather more information.
- Step 3: Fake Tests: Simulate the product experience without building code (e.g., fake door tests, smoke tests, Wizard of Oz, concierge, usability tests with mock-ups/facades).
- Step 4: Rough Version Tests: Build incomplete, unpolished, or non-scalable versions for early user feedback (e.g., early adopter programs, alphas, longitudinal user studies, fish food).
- Step 5: Complete Version Tests: Develop a more complete version for internal or external testing (e.g., dog fooding, previews, betas, labs).
- Step 6: Experiments: Conduct controlled tests with a control element (e.g., A/B tests, multivariate tests).
- Step 7: Release Results: Use staged releases, percentage launches, or holdbacks to further validate assumptions and learn from real-world usage.
GIST Board Team Management Routine
Itamar Gilad
- Step 1: Define Team Goals: At the beginning of the quarter, the team leads define up to four key results and one or two objectives, reviewed with the team, managers, and stakeholders.
- Step 2: Populate Ideas: Generate or select promising ideas from an idea bank that could help achieve the defined key results.
- Step 3: Team Idea Selection: The team, ideally using the ICE process, chooses which ideas to test first.
- Step 4: Develop Validation Steps: The team collaboratively determines the sequence of steps (learning milestones) needed to validate the chosen ideas.
- Step 5: Regular Review Meetings: The team meets at least once every other week to discuss progress, update the board, assess goal achievement, and identify blockers for important steps.
- Step 6: Dynamic Adaptation: Actively remove ideas that prove ineffective, add new promising ideas, or shift focus if goals are achieved or changed.
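The routine above implies a small data model: goals at the top, ideas in the middle, validation steps per idea. The sketch below is a minimal illustration; the class and field names are assumptions, and only the up-to-four-key-results limit comes from the protocol itself.

```python
from dataclasses import dataclass, field

@dataclass
class GistBoard:
    """Minimal sketch of a per-team GIST board (field names are assumptions)."""
    objectives: list[str]
    key_results: list[str]
    ideas: list[str] = field(default_factory=list)
    steps: dict[str, list[str]] = field(default_factory=dict)  # idea -> planned steps

    def __post_init__(self) -> None:
        # The routine above suggests up to four key results per quarter.
        if len(self.key_results) > 4:
            raise ValueError("limit key results to four per quarter")

    def drop_idea(self, idea: str) -> None:
        """Dynamic adaptation: remove an idea that proved ineffective."""
        self.ideas.remove(idea)
        self.steps.pop(idea, None)

# Hypothetical quarter for a hypothetical team.
board = GistBoard(
    objectives=["grow weekly active senders"],
    key_results=["+10% new senders", "+5% retained senders"],
    ideas=["tabbed inbox", "smart reminders"],
    steps={"tabbed inbox": ["fake door test", "dogfood"]},
)
board.drop_idea("smart reminders")
print(board.ideas)
```

Because the board is one small living object rather than a fixed roadmap, the biweekly review in Step 5 amounts to editing it in place.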