#6 Philip Tetlock: How to See the Future

Dec 8, 2015
Overview

This episode features Professor Philip Tetlock, co-leader of The Good Judgment Project and author of "Superforecasting," discussing how to improve prediction skills. He shares insights from forecasting tournaments, emphasizing explicit judgments, debiasing, and Fermi-style thinking for better accuracy.

At a Glance
18 Insights
47m 40s Duration
15 Topics
7 Concepts

Deep Dive Analysis

Introduction to Philip Tetlock and Superforecasting

The Good Judgment Project and Defining Forecasters

Identifying Superforecasters and Their Attributes

Skill-Luck Ratio in Forecasting Performance

The Value of Granular Uncertainty Assessments

Impact of Debiasing Training on Forecast Accuracy

Organizational Resistance to Keeping Score in Forecasting

Applying the Outside View in Probability Estimation

Enrico Fermi's Method for Estimating Intractable Problems

Structuring Forecasting Teams in Organizations

Aggregating Forecasts: Wisdom of Crowds and Extremizing

Predicting Short-Term vs. Long-Term Events

Cultivating Active Open-Mindedness

Intuition's Role in Real-World Forecasting

Influential Books and Recommended Guests

Implicit vs. Explicit Probabilities

When making decisions, people form implicit expectations about consequences. Making these expectations explicit and self-conscious allows individuals to learn and improve their forecasting ability, rather than operating with unrecognized probabilities.

Good Judgment Project

A research program co-led by Philip Tetlock and Barbara Mellers that competed in IARPA-sponsored forecasting tournaments from 2011 to 2015. The tournaments' goal was to generate accurate probability estimates of national security relevance, with university-based teams competing to develop the best methods.

Superforecasters

The top 2% of performers identified in the Good Judgment Project each year, who consistently generated exceptionally accurate probability estimates. They are distinguished by their belief that probability estimation is a cultivable skill and their commitment to making the effort to improve it.

Skill-Luck Ratio in Forecasting

An observed ratio of approximately 70% skill to 30% luck in predicting geopolitical and geoeconomic outcomes. This indicates that superforecasters' superior performance is not merely due to chance but reflects genuine, cultivable skill.

Outside View vs. Inside View

The 'outside view' involves starting an estimation process by consulting statistical or base rate information (e.g., national divorce rates) before considering specific details. The 'inside view' focuses solely on the idiosyncratic factors of a particular case. Starting with the outside view and then adjusting inward is a mantra for more accurate predictions.
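
The outside-in sequence can be made concrete with a tiny helper: anchor on the base rate, then make a bounded inside-view adjustment. This is a hypothetical illustration, not a method from the episode; `outside_in_estimate`, the adjustment cap, and the example numbers are assumptions.

```python
# Hypothetical sketch of "start outside, then adjust inward": anchor on a
# statistical base rate, then apply a bounded case-specific adjustment.
# The cap and the example numbers are illustrative assumptions.

def outside_in_estimate(base_rate, inside_adjustment, max_shift=0.15):
    """Anchor on the base rate; let inside-view details shift it by at most
    max_shift in either direction, clamped to a valid probability."""
    shift = max(-max_shift, min(max_shift, inside_adjustment))
    return min(1.0, max(0.0, base_rate + shift))

# A ~40% base rate (e.g., a national divorce rate), with case-specific
# factors that look favorable, nudged down by 0.10:
print(round(outside_in_estimate(0.40, -0.10), 2))  # 0.3
```

The cap is one way to keep the inside view from swamping the base rate, which is the failure mode the outside-view mantra guards against.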

Fermi-style Thinking

A method, named after physicist Enrico Fermi, for estimating seemingly unestimatable problems by decomposing them into many tractable components. This process involves flushing out ignorance at each step, making initial guesstimates, and allowing for transparent refinement by others, making complex problems more manageable.
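
The decomposition can be sketched in code: carry a low/high guesstimate for each component and multiply the bounds through. The star count echoes the ~100 billion figure from the episode's Fermi example; every other component name and range below is an invented placeholder.

```python
# Sketch of Fermi-style decomposition: break the question into components,
# keep a (low, high) guesstimate per component, and multiply the bounds.
# Apart from the star count, the components and ranges are invented.

components = {
    "stars_in_milky_way": (1e11, 4e11),       # episode cites ~100 billion
    "fraction_with_planets": (0.2, 0.8),
    "fraction_habitable": (0.01, 0.2),
    "fraction_where_life_arises": (1e-6, 0.1),
}

low, high = 1.0, 1.0
for lo, hi in components.values():
    low *= lo
    high *= hi

print(f"range: {low:.3g} to {high:.3g} candidate worlds")
```

Writing each component down is what "flushes out ignorance": anyone can see exactly which guesstimate they disagree with and refine it.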

Extremizing Algorithm

An advanced aggregation technique used in forecasting where, if multiple independent forecasters converge on a similar non-0.5 probability estimate (e.g., all say 0.7), the algorithm pushes the aggregated probability to a more extreme value (e.g., 0.85 or 0.9). This is based on the assumption that independent agreement from diverse sources suggests a stronger signal.
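
A common way to implement this idea (one of several in the forecasting literature) is a power transform on the aggregated probability. The sketch below is a generic version, not the Good Judgment Project's exact algorithm; the exponent `a` is an assumed tuning parameter.

```python
import statistics

# Generic extremizing sketch, not GJP's exact algorithm: transform the mean
# probability with p^a / (p^a + (1-p)^a). a > 1 pushes estimates away from
# 0.5; a = 1 leaves the mean unchanged. The exponent is a tuning assumption.

def extremize(probs, a=2.0):
    p = statistics.mean(probs)
    return p**a / (p**a + (1 - p)**a)

# Three independent forecasters all say 0.7:
print(round(extremize([0.7, 0.7, 0.7]), 2))  # 0.84
```

Note the transform is symmetric: a consensus at 0.3 gets pushed toward 0, just as one at 0.7 gets pushed toward 1, while 0.5 stays put.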

What is the difference between forecasting and predicting?

Philip Tetlock sees no essential difference between forecasting and predicting, considering them virtual synonyms.

Are superforecasters merely lucky?

No, the skill-luck ratio in predicting geopolitical and geoeconomic outcomes is estimated to be about 70% skill and 30% luck, indicating that superforecasters possess genuine skill beyond mere chance.

What attributes distinguish superforecasters from average forecasters?

Superforecasters tend to score higher on fluid intelligence and active open-mindedness, but their most distinguishing attribute is their belief that probability estimation is a cultivable skill and their commitment to cultivating it.

Can debiasing training improve forecasting accuracy?

Yes, average forecasters who received Kahneman-style debiasing exercises showed an improvement in accuracy of approximately 10% over a year.

Why do organizations often resist keeping score on forecasts?

Organizations resist due to a mix of psychological and political forces, including high-status individuals not wanting their judgment demystified, and a blame-game culture that incentivizes vague language over precise, falsifiable predictions.

How does Fermi-style thinking improve forecasting?

It improves forecasting by breaking down seemingly intractable problems into smaller, manageable components, making areas of ignorance explicit, and allowing for systematic estimation and refinement.

How can organizations apply forecasting research to make better decisions?

Executives can consider dedicating a portion of their analytical capacity to 'pure accuracy games' by incentivizing a small group to make forecasts focused solely on accuracy, which can then inform senior-level decision-making.

How can diverse views be synthesized for better aggregate forecasts?

Diverse views can be synthesized by first averaging forecasts, then weighting forecasts from better performers, and finally applying 'extremizing' algorithms that push probabilities further from 0.5 when independent forecasters converge on a similar non-0.5 estimate.

What is the role of intuition in real-world forecasting?

While intuition can have successes, the research emphasizes deliberate 'thinking' over 'blinking' for complex real-world problems, as history's patterns are subtle and conditional, requiring more self-skepticism than rapid pattern recognition.

1. Explicit Probability Judgments

When making any decision, explicitly state the probabilities of potential outcomes rather than relying on implicit expectations, as this self-conscious process allows for better learning and improvement.

2. Outside-In View for Predictions

When making predictions, always start by considering statistical base rates or the ‘outside view’ before adjusting your estimate based on the specific, idiosyncratic details or the ‘inside view’ of the situation.

3. Decompose Problems (Fermi Method)

Apply Fermi-style thinking by breaking down seemingly intractable problems into as many tractable, smaller components as possible, which helps to flush out ignorance and make the problem more manageable.

4. Granular Uncertainty Assessments

Make highly granular assessments of uncertainty, distinguishing even small differences (e.g., a 55-45 bet from a 45-55 bet), as this precision pays off in real-world judgments.

5. Practice Debiasing Exercises

Participate in Kahneman-style debiasing exercises (e.g., a 50-minute module) to improve forecasting accuracy by approximately 10% over a year.

6. Practice Bayesian Belief Updating

Engage in simulated problems (e.g., medical, economic, military diagnoses) with simulated data to practice and improve your ability to update beliefs appropriately in response to new evidence, following normative models like Bayes’ theorem.
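
The normative target of such exercises is Bayes' theorem. A minimal simulated-diagnosis sketch; the condition's base rate and the test's hit and false-alarm rates are hypothetical numbers.

```python
# Minimal Bayesian updating sketch for a binary hypothesis. The condition's
# base rate and the test's hit/false-alarm rates are hypothetical numbers.

def bayes_update(prior, hit_rate, false_alarm_rate):
    """P(hypothesis | positive evidence) via Bayes' theorem."""
    p_evidence = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_evidence

# Condition base rate 1%; test detects 90% of true cases but also fires
# on 9% of healthy cases:
posterior = bayes_update(prior=0.01, hit_rate=0.90, false_alarm_rate=0.09)
print(round(posterior, 3))  # 0.092: one positive test lifts 1% to ~9%
```

The counterintuitive smallness of the posterior is exactly the base-rate neglect that the debiasing training targets.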

7. Aggregate Diverse Forecasts

When seeking predictions, aggregate diverse views by taking the average of forecasts from a group, as this ‘wisdom of the crowd’ often yields more accurate results than individual predictions.

8. Weight Forecasts by Expertise

Improve aggregated forecasts by giving more weight to individuals with better track records or specific attributes (e.g., intelligence, frequent updaters), creating weighted averages that outperform simple averages.

9. Extremize Independent Agreements

If multiple independent forecasters arrive at the same probability estimate (e.g., 0.7), and they have diverse, non-overlapping information, extremize that probability (e.g., to 0.85 or 0.9), as this suggests a stronger likelihood.

10. Critique Forecasting Successes

Beyond analyzing failures, critically examine forecasting successes by asking if luck played a role, if outcomes could have been different, or if you were ‘almost wrong,’ to avoid overlearning from potentially spurious correlations.

11. Embrace Scorekeeping

Be open-minded and willing to keep score of your predictions, as this allows for tracking accuracy and learning, despite the psychological resistance to being proven wrong.

12. Overcome Fear of Error

Be willing to make and share estimates, even if they might appear ‘stupid’ to others, as this transparency allows for feedback and refinement of the decomposed problem.

13. Cultivate Challenging Team Dynamics

When working in teams, foster an environment where members have mutual respect but are also willing to push each other hard, which is optimal for problem decomposition and forecasting.

14. Establish Pure Accuracy Teams

Consider establishing small, incentivized groups within an organization to engage in ‘pure accuracy’ forecasting tournaments, with their probability estimates feeding up to senior executives to guide decision-making.

15. Prioritize Deliberate Thought

When making predictions for complex real-world problems, prioritize deliberate, analytical 'thinking' over relying solely on rapid 'blink' intuition, as the latter is less reliable in messy, ill-defined domains.

16. Discern Subtle Historical Patterns

Understand that history ‘rhymes’ with subtle and conditional patterns rather than repeating exactly, and be cautious not to overlearn or overgeneralize from past events.

17. Avoid Pure Randomness Problems

To improve forecasting ability, avoid spending time trying to predict or model purely random events (like roulette wheel spins), as this is not a productive use of effort for becoming a superforecaster.

18. Focus on Shorter-Term Predictions

Generally, prioritize making predictions for shorter time ranges, as these are usually easier to foresee accurately than longer-term outcomes, though exceptions exist.

When people make explicit judgments and they're fully self-conscious about what they're doing, they can learn to do it better.

Philip Tetlock

The greatest players tend to be extremely granular in their assessments of uncertainty.

Philip Tetlock

Start with the outside and work inside. That's one of our mantras.

Philip Tetlock

History doesn't repeat itself, but it does rhyme.

Philip Tetlock

Superforecaster skepticism even extends to their forecasting successes.

Philip Tetlock

Improving Probability Estimates

Philip Tetlock
  1. Make explicit judgments about probabilities.
  2. Be fully self-conscious about the process.
  3. Continuously learn and refine the approach.

Debiasing Training (Kahneman-style)

Philip Tetlock
  1. Learn basic ideas about heuristics and biases and how to check them.
  2. Understand that people often don't give enough weight to statistical or base rate information.
  3. Practice starting with the 'outside view' (base rates) and then adjusting based on 'inside view' (idiosyncratic factors).

Fermi Method for Estimating Intractable Problems

Philip Tetlock
  1. Flush out your ignorance by identifying what you don't know.
  2. Decompose the problem into as many tractable components as possible.
  3. Make initial guesstimates for each component, even if uncertain.
  4. Allow others to review and refine these guesstimates, making the points of ignorance clear and transparent.
  5. Combine the estimates to arrive at a range of probabilities for the overall problem.

Synthesizing Diverse Views for Aggregate Forecasts

Philip Tetlock
  1. Calculate the simple average of forecasts from a group (wisdom of the crowd).
  2. Give more weight to forecasters with better track records, higher intelligence, or more frequent belief updates to create weighted averages.
  3. Apply 'extremizing' algorithms: if independent forecasters converge on a non-0.5 probability (e.g., 0.7), push the aggregated probability further towards the extreme (e.g., 0.85 or 0.9), assuming true diversity of information.
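
The three steps above can be combined into one small pipeline. The `aggregate` helper, the weights, and the exponent `a` are illustrative assumptions, not the project's fitted values.

```python
# Sketch of the three-step aggregation: simple average when no weights are
# given, performance-weighted average otherwise, then extremizing with an
# assumed exponent a > 1. Weights and a are illustrative, not GJP's values.

def aggregate(probs, weights=None, a=2.0):
    if weights is None:
        weights = [1.0] * len(probs)                      # step 1: simple mean
    p = sum(w * x for w, x in zip(weights, probs)) / sum(weights)  # step 2
    return p**a / (p**a + (1 - p)**a)                     # step 3: extremize

# Four forecasters; the last two have the stronger track records:
print(round(aggregate([0.6, 0.65, 0.7, 0.75], weights=[1, 1, 2, 2]), 2))  # 0.83
```

Extremizing assumes the forecasters drew on genuinely diverse information; applying it to correlated forecasts overstates the signal.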

70% skill, 30% luck
Skill-luck ratio in forecasting geopolitical and geoeconomic outcomes, observed in the IARPA tournament.

2011 to 2015
Duration of the IARPA forecasting tournaments; the Good Judgment Project was a winner.

Top 2%
Percentage of top performers identified as superforecasters each year and creamed off into elite teams.

Approximately 10%
Improvement in forecasting accuracy for average forecasters receiving 50 minutes of Kahneman-style debiasing exercises.

75%
Prediction market probability for Obamacare being overturned (2012), referenced by journalist David Leonhardt; the law was upheld 5-4.

Roughly 100 billion
Estimated number of stars in the Milky Way, used as an example component in a Fermi problem.

2% to 50%
Range of Tetlock's guesstimate for the probability of another advanced extraterrestrial civilization in the Milky Way, derived from a Fermi-style decomposition.

20% to 40%
IARPA's initial expectation for improvement over unweighted average forecasts; the Good Judgment Project's superforecasters substantially exceeded this benchmark.

About 30 years old
Philip Tetlock's age when he started forecasting tournaments (1984); he is now 61.