#6 Philip Tetlock: How to See the Future
This episode features Professor Philip Tetlock, co-leader of The Good Judgment Project and author of "Superforecasting," discussing how to improve prediction skills. He shares insights from forecasting tournaments, emphasizing explicit judgments, debiasing, and Fermi-style thinking for better accuracy.
Deep Dive Analysis
15 Topic Outline
Introduction to Philip Tetlock and Superforecasting
The Good Judgment Project and Defining Forecasters
Identifying Superforecasters and Their Attributes
Skill-Luck Ratio in Forecasting Performance
The Value of Granular Uncertainty Assessments
Impact of Debiasing Training on Forecast Accuracy
Organizational Resistance to Keeping Score in Forecasting
Applying the Outside View in Probability Estimation
Enrico Fermi's Method for Estimating Intractable Problems
Structuring Forecasting Teams in Organizations
Aggregating Forecasts: Wisdom of Crowds and Extremizing
Predicting Short-Term vs. Long-Term Events
Cultivating Active Open-Mindedness
Intuition's Role in Real-World Forecasting
Influential Books and Recommended Guests
7 Key Concepts
Implicit vs. Explicit Probabilities
When making decisions, people form implicit expectations about consequences. Making these expectations explicit and self-conscious allows individuals to learn and improve their forecasting ability, rather than operating with unrecognized probabilities.
Good Judgment Project
A research program co-led by Philip Tetlock and Barbara Mellers, supported by IARPA, which ran forecasting tournaments from 2011 to 2015. Its goal was to generate accurate probability estimates of national security relevance, with university teams competing to develop the best methods.
Superforecasters
The top 2% of performers identified in the Good Judgment Project each year, who consistently generated exceptionally accurate probability estimates. They are distinguished by their belief that probability estimation is a cultivable skill and their commitment to making the effort to improve it.
Skill-Luck Ratio in Forecasting
An observed ratio of approximately 70% skill to 30% luck in predicting geopolitical and geoeconomic outcomes. This indicates that superforecasters' superior performance is not merely due to chance but reflects genuine, cultivable skill.
Outside View vs. Inside View
The 'outside view' involves starting an estimation process by consulting statistical or base rate information (e.g., national divorce rates) before considering specific details. The 'inside view' focuses solely on the idiosyncratic factors of a particular case. Starting with the outside view and then adjusting inward is a mantra for more accurate predictions.
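The outside-in mantra can be sketched in a few lines. This is a minimal illustration, not from the episode: the base rate, the size of the adjustment, and the cap on how far inside-view evidence may move you are all assumed numbers.

```python
# Outside-in estimation sketch: anchor on a statistical base rate, then
# make a bounded adjustment for case-specific ("inside view") evidence.
# All numbers and the cap are illustrative assumptions, not from the episode.

def outside_in_estimate(base_rate, inside_adjustment, max_shift=0.15):
    """Start from the base rate, then shift by a capped amount to
    reflect idiosyncratic evidence about this particular case."""
    shift = max(-max_shift, min(max_shift, inside_adjustment))
    return max(0.0, min(1.0, base_rate + shift))

# e.g. a ~0.40 national divorce base rate; this couple looks more stable
print(outside_in_estimate(0.40, -0.10))
```

Capping the adjustment encodes the advice that the inside view should refine, not replace, the base rate.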
Fermi-style Thinking
A method, named after physicist Enrico Fermi, for estimating seemingly unestimatable problems by decomposing them into many tractable components. This process involves flushing out ignorance at each step, making initial guesstimates, and allowing for transparent refinement by others, making complex problems more manageable.
Extremizing Algorithm
An advanced aggregation technique used in forecasting where, if multiple independent forecasters converge on a similar non-0.5 probability estimate (e.g., all say 0.7), the algorithm pushes the aggregated probability to a more extreme value (e.g., 0.85 or 0.9). This is based on the assumption that independent agreement from diverse sources suggests a stronger signal.
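A common way to implement this idea is to scale the aggregate's log-odds; the Good Judgment Project's actual algorithm is more sophisticated, so treat this as a simplified sketch with an assumed scaling factor `alpha`.

```python
import math

def extremize(p, alpha=2.0):
    """Push a probability away from 0.5 by scaling its log-odds.
    alpha > 1 extremizes; alpha = 1 leaves p unchanged.
    alpha = 2.0 is an illustrative choice, not from the episode."""
    if p in (0.0, 1.0):
        return p
    log_odds = math.log(p / (1 - p))
    return 1 / (1 + math.exp(-alpha * log_odds))

print(round(extremize(0.7), 2))  # 0.84 -- close to the 0.85 in the text
```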
9 Questions Answered
Philip Tetlock sees no essential difference between forecasting and predicting, considering them virtual synonyms.
No, the skill-luck ratio in predicting geopolitical and geoeconomic outcomes is estimated to be about 70% skill and 30% luck, indicating that superforecasters possess genuine skill beyond mere chance.
Superforecasters tend to score higher on fluid intelligence and active open-mindedness, but their most distinguishing attribute is their belief that probability estimation is a cultivable skill and their commitment to cultivating it.
Yes, average forecasters who received Kahneman-style debiasing exercises showed an improvement in accuracy of approximately 10% over a year.
Organizations resist due to a mix of psychological and political forces, including high-status individuals not wanting their judgment demystified, and a blame-game culture that incentivizes vague language over precise, falsifiable predictions.
It improves forecasting by breaking down seemingly intractable problems into smaller, manageable components, making areas of ignorance explicit, and allowing for systematic estimation and refinement.
Executives can consider dedicating a portion of their analytical capacity to 'pure accuracy games' by incentivizing a small group to make forecasts focused solely on accuracy, which can then inform senior-level decision-making.
Diverse views can be synthesized by first averaging forecasts, then weighting forecasts from better performers, and finally applying 'extremizing' algorithms that push probabilities further from 0.5 when independent forecasters converge on a similar non-0.5 estimate.
While intuition can have successes, the research emphasizes deliberate 'thinking' over 'blinking' for complex real-world problems, as history's patterns are subtle and conditional, requiring more self-skepticism than rapid pattern recognition.
18 Actionable Insights
1. Explicit Probability Judgments
When making any decision, explicitly state the probabilities of potential outcomes rather than relying on implicit expectations, as this self-conscious process allows for better learning and improvement.
2. Outside-In View for Predictions
When making predictions, always start by considering statistical base rates or the ‘outside view’ before adjusting your estimate based on the specific, idiosyncratic details or the ‘inside view’ of the situation.
3. Decompose Problems (Fermi Method)
Apply Fermi-style thinking by breaking down seemingly intractable problems into as many tractable, smaller components as possible, which helps to flush out ignorance and make the problem more manageable.
4. Granular Uncertainty Assessments
Make highly granular assessments of uncertainty, distinguishing even small differences (e.g., 55-45 bets from 45-55 bets), as this precision pays off in real-world judgments.
5. Practice Debiasing Exercises
Participate in Kahneman-style debiasing exercises (e.g., a 50-minute module) to improve forecasting accuracy by approximately 10% over a year.
6. Practice Bayesian Belief Updating
Engage in simulated problems (e.g., medical, economic, military diagnoses) with simulated data to practice and improve your ability to update beliefs appropriately in response to new evidence, following normative models like Bayes’ theorem.
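The normative model named above, Bayes' theorem, can be stated directly in code. The diagnostic numbers below are invented for illustration.

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after seeing evidence E,
    via Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Simulated diagnosis: prior of 0.10 that the condition is present; the
# test fires 80% of the time when present, 20% of the time when absent.
posterior = bayes_update(0.10, 0.80, 0.20)
print(round(posterior, 3))  # 0.308
```

Comparing your gut update against the computed posterior on simulated problems like this is the practice the insight describes.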
7. Aggregate Diverse Forecasts
When seeking predictions, aggregate diverse views by taking the average of forecasts from a group, as this ‘wisdom of the crowd’ often yields more accurate results than individual predictions.
8. Weight Forecasts by Expertise
Improve aggregated forecasts by giving more weight to individuals with better track records or specific attributes (e.g., intelligence, frequent updaters), creating weighted averages that outperform simple averages.
9. Extremize Independent Agreements
If multiple independent forecasters arrive at the same probability estimate (e.g., 0.7), and they have diverse, non-overlapping information, extremize that probability (e.g., to 0.85 or 0.9), as this suggests a stronger likelihood.
10. Critique Forecasting Successes
Beyond analyzing failures, critically examine forecasting successes by asking if luck played a role, if outcomes could have been different, or if you were ‘almost wrong,’ to avoid overlearning from potentially spurious correlations.
11. Embrace Scorekeeping
Be open-minded and willing to keep score of your predictions, as this allows for tracking accuracy and learning, despite the psychological resistance to being proven wrong.
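One standard way to keep score (widely used in forecasting tournaments, though not named in this summary) is the Brier score, the mean squared error between your probabilities and what actually happened.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always answering 0.5 scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three resolved questions: forecast vs. outcome (1 = event occurred)
print(round(brier_score([0.8, 0.3, 0.9], [1, 0, 1]), 3))  # 0.047
```

Logging forecasts and recomputing this score as questions resolve is the simplest form of the scorekeeping advocated here.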
12. Overcome Fear of Error
Be willing to make and share estimates, even if they might appear ‘stupid’ to others, as this transparency allows for feedback and refinement of the decomposed problem.
13. Cultivate Challenging Team Dynamics
When working in teams, foster an environment where members have mutual respect but are also willing to push each other hard, which is optimal for problem decomposition and forecasting.
14. Establish Pure Accuracy Teams
Consider establishing small, incentivized groups within an organization to engage in ‘pure accuracy’ forecasting tournaments, with their probability estimates feeding up to senior executives to guide decision-making.
15. Prioritize Deliberate Thought
When making predictions for complex real-world problems, prioritize deliberate, analytical ‘thinking’ over relying solely on rapid ‘blink’ intuition, as the latter is less reliable in messy, ill-defined domains.
16. Discern Subtle Historical Patterns
Understand that history ‘rhymes’ with subtle and conditional patterns rather than repeating exactly, and be cautious not to overlearn or overgeneralize from past events.
17. Avoid Pure Randomness Problems
To improve forecasting ability, avoid spending time trying to predict or model purely random events (like roulette wheel spins), as this is not a productive use of effort for becoming a superforecaster.
18. Focus on Shorter-Term Predictions
Generally, prioritize making predictions for shorter time ranges, as these are usually easier to foresee accurately than longer-term outcomes, though exceptions exist.
5 Key Quotes
When people make explicit judgments and they're fully self-conscious about what they're doing, they can learn to do it better.
Philip Tetlock
The greatest players tend to be extremely granular in their assessments of uncertainty.
Philip Tetlock
Start with the outside and work inside. That's one of our mantras.
Philip Tetlock
History doesn't repeat itself, but it does rhyme.
Philip Tetlock
Superforecaster skepticism even extends to their forecasting successes.
Philip Tetlock
4 Protocols
Improving Probability Estimates
Philip Tetlock
- Make explicit judgments about probabilities.
- Be fully self-conscious about the process.
- Continuously learn and refine the approach.
Debiasing Training (Kahneman-style)
Philip Tetlock
- Learn basic ideas about heuristics and biases and how to check them.
- Understand that people often don't give enough weight to statistical or base rate information.
- Practice starting with the 'outside view' (base rates) and then adjusting based on 'inside view' (idiosyncratic factors).
Fermi Method for Estimating Intractable Problems
Philip Tetlock
- Flush out your ignorance by identifying what you don't know.
- Decompose the problem into as many tractable components as possible.
- Make initial guesstimates for each component, even if uncertain.
- Allow others to review and refine these guesstimates, making the points of ignorance clear and transparent.
- Combine the estimates to arrive at a range of probabilities for the overall problem.
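The steps above can be sketched with the classic "piano tuners in Chicago" estimate (a textbook Fermi example, not from the episode; every number is an explicit guesstimate meant to be challenged and refined).

```python
# Fermi decomposition sketch: each component is a stated guesstimate,
# so reviewers can see exactly where the ignorance lives and refine it.
# All values are illustrative assumptions.

components = {
    "population":            3_000_000,  # people in the city
    "people_per_household":  2.5,
    "households_with_piano": 1 / 20,     # fraction owning a piano
    "tunings_per_year":      1,          # per piano
    "tunings_per_tuner":     1000,       # jobs one tuner does yearly
}

households = components["population"] / components["people_per_household"]
pianos = households * components["households_with_piano"]
tunings_needed = pianos * components["tunings_per_year"]
tuners = tunings_needed / components["tunings_per_tuner"]
print(round(tuners))  # 60
```

Varying each guesstimate over a plausible range turns the point estimate into the range of probabilities the final step calls for.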
Synthesizing Diverse Views for Aggregate Forecasts
Philip Tetlock
- Calculate the simple average of forecasts from a group (wisdom of the crowd).
- Give more weight to forecasters with better track records, higher intelligence, or more frequent belief updates to create weighted averages.
- Apply 'extremizing' algorithms: if independent forecasters converge on a non-0.5 probability (e.g., 0.7), push the aggregated probability further towards the extreme (e.g., 0.85 or 0.9), assuming true diversity of information.
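The three-step protocol can be sketched as a single pipeline. This is a minimal illustration: the weights, the log-odds form of extremizing, and the factor `alpha` are assumed choices, not the Good Judgment Project's actual algorithm.

```python
import math

def aggregate(forecasts, weights=None, alpha=1.5):
    """Aggregation sketch: (1) simple or track-record-weighted mean,
    (2) extremize the mean by scaling its log-odds.
    weights and alpha are illustrative assumptions."""
    if weights is None:
        weights = [1.0] * len(forecasts)  # simple wisdom-of-the-crowd average
    mean = sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)
    if mean in (0.0, 1.0):
        return mean
    log_odds = math.log(mean / (1 - mean))
    return 1 / (1 + math.exp(-alpha * log_odds))

# Three independent forecasters near 0.7; the best track record gets
# double weight, and their agreement is then extremized.
print(round(aggregate([0.7, 0.68, 0.72], weights=[2, 1, 1]), 2))  # 0.78
```

Extremizing only pays off when the forecasters' information is genuinely diverse; applying it to redundant forecasts overstates the signal.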