Measuring everything that matters (with Doug Hubbard)

May 22, 2024 Episode Page ↗
Overview

Spencer Greenberg speaks with Doug Hubbard about quantitative methods, biases in risk tolerance, and techniques for better prediction, emphasizing that everything that matters can be measured indirectly through its observable consequences.

At a Glance
18 Insights
1h 12m Duration
14 Topics
9 Concepts

Deep Dive Analysis

Defining Measurement and Observable Consequences

Reasons for Resistance to Quantitative Measurement

Value of Measurement in Decision-Making

Quantifying Personal Decisions and Relationships

Connection Between Measurement and Probabilities

Human Intuition Versus Statistical Models

Impact of Inconsistency in Human Judgment

Effective Calibration Training for Probability Assessment

Techniques to Improve Judgment Calibration

Understanding the Value of Information (VOI)

The Measurement Inversion Phenomenon

Applying Value of Information in Personal Life

The Rule of Five for Quick Sampling

Top Three Ways to Improve Decision-Making

Measurement (scientific definition)

A quantitatively expressed reduction in uncertainty based on observation. Under this definition, measuring something means learning more than you knew before and reducing uncertainty probabilistically, even through indirect observations, which is consistent with how the term is used in science and in practical decision-making.

Innumeracy

A cultural phenomenon, particularly observed in American and European societies, where individuals exhibit resistance to quantitative methods and may feel that measuring certain things dehumanizes them. This perspective contrasts with other cultures, such as Indian or Chinese, where such beliefs are less prevalent.

Lens Method

Developed by Egon Brunswik, this method involves building a statistical model based solely on a human expert's subjective judgments across many scenarios. Strikingly, the resulting model often outperforms the expert it was built from, because it removes the inconsistency with which humans apply their own learned experience.
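
To make the mechanism concrete, here is a minimal sketch in Python (an illustration, not Brunswik's or Hubbard's own implementation; the cue weights and data are invented): a linear model is fit to an expert's past judgments, and the fitted model then applies the expert's implicit policy with perfect consistency.

```python
import numpy as np

# Lens-model sketch: regress an expert's past judgments on the cues
# (features) the expert saw, then judge new cases with the fitted
# model. The model applies the expert's own implicit weights with
# perfect consistency, which is where the improvement comes from.

rng = np.random.default_rng(0)
n_cases, n_cues = 200, 3

cues = rng.normal(size=(n_cases, n_cues))             # what the expert saw
implicit_policy = np.array([0.6, 0.3, 0.1])           # expert's learned weights
inconsistency = rng.normal(scale=0.5, size=n_cases)   # mood, fatigue, etc.
judgments = cues @ implicit_policy + inconsistency    # expert's actual calls

# Fit the lens model: least-squares weights from cues to judgments.
X = np.column_stack([np.ones(n_cases), cues])
coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)

def model_judgment(new_cues):
    """Judge a new case with the expert's averaged-out policy."""
    return coef[0] + new_cues @ coef[1:]

print("recovered cue weights:", np.round(coef[1:], 2))
print("model call on a new case:", round(model_judgment(np.array([1.0, 0.0, -1.0])), 2))
```

The model cannot know more than the expert; it wins only by never having an off day.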

Calibrated Probability Assessment

This refers to the process of evaluating how accurately an individual's stated confidence levels align with the actual frequency of being correct. For instance, if someone consistently states 90% confidence, they should ideally be correct about 90% of the time over a large number of trials.
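
A toy illustration of such a check (the forecast data are invented): group past forecasts by stated confidence and compare each group's hit rate to the stated level. In practice this needs many trials per level before the comparison means much.

```python
from collections import defaultdict

# Calibration check on a (made-up) forecast log: for each stated
# confidence level, how often was the forecaster actually right?
forecasts = [  # (stated confidence, claim turned out correct?)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.7, True), (0.7, False), (0.7, True), (0.5, False), (0.5, True),
]

by_level = defaultdict(list)
for confidence, correct in forecasts:
    by_level[confidence].append(correct)

for confidence in sorted(by_level):
    outcomes = by_level[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%} -> right {hit_rate:.0%} (n={len(outcomes)})")
```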

Equivalent Bet Method

A technique used in calibration training where an individual compares their confidence in a statement to a known probability bet, such as spinning a dial with a certain chance of winning. If a person is truly X% confident in their claim, they should be indifferent between betting on the claim and betting on the dial with an X% chance of success.
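
A small sketch of the elicitation logic in Python (the function names and the simulated respondent are illustrative, not from the episode): binary-search the dial's win probability until the person is indifferent between spinning the dial and betting on their claim; the crossover point is their implied confidence.

```python
def elicit_confidence(prefers_dial, lo=0.0, hi=1.0, tol=0.01):
    """Binary-search the dial's win chance until indifference.

    prefers_dial(q) answers: with a dial that wins with probability q,
    would you rather spin the dial than bet on your claim?
    """
    while hi - lo > tol:
        q = (lo + hi) / 2
        if prefers_dial(q):
            hi = q   # dial too attractive: confidence in claim is below q
        else:
            lo = q   # claim preferred: confidence in claim is above q
    return (lo + hi) / 2

# Simulated respondent whose true (unstated) confidence is 0.8.
true_confidence = 0.8
print(round(elicit_confidence(lambda q: q > true_confidence), 2))  # ~0.8
```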

Klein's Pre-mortem

A decision-making technique developed by Gary Klein to identify potential risks and failures by reframing a question. Instead of asking 'What could go wrong?', one assumes a project has already failed in the future and asks 'Explain what went wrong,' which encourages a backward-looking narrative and makes people more forthcoming about potential issues.

Value of Information (VOI)

An economic concept that quantifies the monetary worth of a measurement. In its simplest form, it is calculated as the chance of being wrong multiplied by the cost of being wrong, helping to prioritize which uncertain variables in a decision model are most valuable to measure.
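
A toy calculation of the simple form described here, with invented numbers for a hypothetical IT project decision. Ranking by chance-of-being-wrong times cost-of-being-wrong surfaces the rarely measured variables:

```python
# Toy value-of-information calculation (all numbers invented).
# Simplest form: VOI = chance of being wrong * cost of being wrong.
variables = {
    "project benefits": {"p_wrong": 0.40, "cost_wrong": 2_000_000},
    "adoption rate":    {"p_wrong": 0.30, "cost_wrong": 1_500_000},
    "project cost":     {"p_wrong": 0.10, "cost_wrong":   200_000},
}

ranked = sorted(variables.items(),
                key=lambda kv: kv[1]["p_wrong"] * kv[1]["cost_wrong"],
                reverse=True)
for name, v in ranked:
    voi = v["p_wrong"] * v["cost_wrong"]
    print(f"{name:16s} VOI = ${voi:,.0f}")
```

The full treatment computes expected losses over an entire decision model; the one-line product above is the simplest special case described in the episode.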

Measurement Inversion

This describes the common tendency for individuals and organizations to measure things that are easy or habitually measured (e.g., IT project costs) rather than focusing on variables that are most uncertain and consequential (e.g., project benefits, cancellation risk, or adoption rates). This often leads to measuring almost exactly the wrong things.

Rule of Five

A statistical principle stating that if you randomly select five items from any population, there is a 93.75% probability that the median of the entire population will fall between the smallest and largest values observed in that sample of five. The figure comes from the chance of not drawing all five samples from the same side of the median: each draw has a 1/2 chance of landing above the median, so the probability that all five land on one side is 2 × (1/2)^5 = 1/16, leaving 1 − 1/16 = 93.75%.
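
A quick Monte Carlo sanity check of that figure (illustrative; the synthetic population is invented):

```python
import random

# Monte Carlo check of the Rule of Five on a synthetic population.
random.seed(1)
population = [random.lognormvariate(0, 1) for _ in range(100_001)]
true_median = sorted(population)[len(population) // 2]

trials = 100_000
hits = sum(
    min(s) <= true_median <= max(s)
    for s in (random.sample(population, 5) for _ in range(trials))
)
print(f"empirical: {hits / trials:.4f}  (theory: 1 - 2*(1/2)**5 = 0.9375)")
```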

Can everything that matters be measured?

Yes, because anything that matters has observable consequences, and once those consequences are identified, measurement becomes a process of data collection and math, even if indirect.

Why are people resistant to measuring certain things?

Resistance stems from cultural beliefs that measurement dehumanizes things, a lack of understanding of quantitative methods, and a defense mechanism where people declare unfamiliar concepts irrelevant.

What are some things people should be measuring but typically don't?

People should measure their own skill at decision-making, the viability of personal investments like starting a business or home renovations, and in corporate settings, often the benefits, cancellation risk, and adoption rates of projects rather than just costs.

What is the connection between measurement and assigning probabilities?

Measurement, in a practical sense, is a quantitatively expressed reduction in uncertainty based on observation. Probabilities are the way we quantify that uncertainty, making them inherently linked.

How well does human intuition perform against 'doing the math'?

Extensive research, including the studies collected by Paul Meehl and Philip Tetlock's two-decade study of expert forecasters (published as Expert Political Judgment), consistently shows that human intuition performs worse than statistical models, and even worse than crude extrapolation algorithms, across a wide variety of fields.

How inconsistent are humans at applying their own principles in decision-making?

Humans are highly inconsistent, with about 20% of the variation in expert judgments being attributable to personal inconsistency, influenced by factors like mood or sleep. Removing this inconsistency through models can make judgments better.

What kinds of calibration training are effective?

The most effective calibration training involves iterative practice where individuals assign probabilities to problems, receive immediate feedback on their accuracy, and then repeat the process, often incorporating techniques like the equivalent bet method and Klein's pre-mortem.

What is the best way to estimate confidence intervals?

Instead of starting with a narrow range and widening it, it is more effective to start with a very wide range that you are almost certain contains the answer, and then gradually narrow it down by 'chipping off the tails' using techniques like the equivalent bet.

What are the top three things people can do to improve their decision-making?

The top three things are: 1) Cultivate skepticism about one's own confidence and performance, seeking objective feedback. 2) Be curious about what methods actually work, rather than just adopting processes that feel structured or formal. 3) Document and track one's predictions and actual outcomes over time to get concrete feedback.

1. Doubt Your Gut Feel

Actively doubt your intuition and gut feelings when making consequential decisions, as quantitative assistance consistently leads to better outcomes than relying solely on subjective judgment.

2. Measure Decision-Making Skill

Regularly measure your own skill at decision-making to identify conditions and methods that improve your choices, as this is a meta-measurement that informs all other decisions.

3. Calibrate Probability Assessments

Improve your probabilistic prediction accuracy by undergoing calibration training, which involves making numerous estimates, receiving immediate feedback on their correctness, and iteratively refining your confidence levels.

4. Track Decision Performance

Systematically document and track your predictions and decisions, then compare them against actual outcomes to receive concrete feedback on your performance and improve decision-making over time.

5. Prioritize Skepticism

Cultivate personal skepticism about your own beliefs and confidence, asking ‘Why do I think I’m as good as I am?’ to avoid self-delusion and foster improvement.

6. Quantify Uncertainty with Probabilities

For important decisions, explicitly quantify your uncertainty by thinking in terms of probabilities, as this allows you to make better ‘bets’ and update your understanding based on new observations.

7. Calculate Value of Information

For significant decisions, compute the ‘value of information’ for each uncertain variable by multiplying the chance of being wrong by the cost of being wrong, to identify which measurements are most economically valuable to pursue.

8. Reverse Measurement Inversion

Counter the ‘measurement inversion’ by consciously prioritizing the measurement of highly uncertain variables that have significant impact, rather than focusing on easily measurable but less impactful factors.

9. Measure High Uncertainty First

Focus measurement efforts on variables with the highest uncertainty, as even a few observations in these areas can significantly reduce overall uncertainty more efficiently than measuring already well-known factors.

10. Quantify Personal Decisions

For significant personal decisions like home renovations, starting a business, or major purchases, conduct quantitative analysis, forecasting, and data gathering to reduce failure rates and improve outcomes.

11. Identify Observable Consequences

When trying to measure something seemingly abstract, identify its observable consequences; this helps define what you mean by it and is the first step towards measurement.

12. Apply Equivalent Bet Method

When assessing probabilities, use the ‘equivalent bet’ method by comparing your confidence in a claim to a chance-based gamble; if you prefer the gamble, your stated confidence is likely too high.

13. Utilize Klein’s Pre-Mortem

To identify potential risks and failures, use Klein’s pre-mortem technique by imagining the project has already failed in the future and then explaining what went wrong, which encourages more candid risk assessment.

14. Assume Your Answer’s Wrong

After forming an answer or estimate, deliberately assume it is wrong and articulate reasons why, which can help uncover overlooked factors and lead to a more accurate final assessment.

15. Start Wide, Chip Tails

When creating confidence intervals for continuous values, begin with an extremely wide range you are highly certain contains the true answer, then progressively narrow it by using the equivalent bet method to remove the least likely extremes.

16. Assess Partner Financial Viability

Evaluate a potential long-term partner’s financial responsibility, such as their credit score and attitudes towards saving, to prevent future problems when mixing assets in marriage.

17. Apply the Rule of Five

To quickly estimate the median of any large population, randomly select five samples; there is a 93.75% probability that the true population median lies between the minimum and maximum values of your five-item sample.

18. Verify Method Effectiveness

Actively question whether your decision-making methods actually improve outcomes, rather than merely increasing confidence, and seek evidence-based approaches that are proven to be effective.

Anything that counts has observable consequences. And once you figure out how it's observable, you're halfway to measuring it. The rest is trivial math.

Doug Hubbard

I think measurement and modeling our reality mathematically is as uniquely human as music and language.

Doug Hubbard

It is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones.

Doug Hubbard

It turned out that not only were the models as good as the humans, they were better than the humans. That's a weird thing.

Doug Hubbard

If you ever say you're 100% confident and you turn out to be wrong, you are overconfident.

Doug Hubbard

The objective of a methodology is not to feel more confident in your decision. It's to make better decisions.

Doug Hubbard

Calibration Training Method

Doug Hubbard
  1. Practice making estimates on a series of problems where the answer is already known.
  2. Receive immediate feedback on the accuracy of your estimates, comparing your stated confidence to the actual results.
  3. Iterate this process multiple times, observing how often your predictions align with reality.
  4. Apply techniques like the Equivalent Bet Method to refine your probability assessments by comparing your confidence to a known probability bet.
  5. Utilize Klein's Pre-mortem by assuming your initial answer is wrong and explaining why, then adjusting your estimate based on these new insights.
  6. When estimating confidence intervals, start with a very wide range that you are almost certain contains the answer, and then gradually narrow it down by 'chipping off the tails,' rather than starting narrow and widening.
  7. Focus on skepticism and challenging your assumptions rather than solely building arguments to support your initial position.

Top 3 Ways to Improve Decision-Making

Doug Hubbard
  1. Cultivate personal skepticism: Question your own confidence and the basis for it, seeking objective feedback on your performance rather than relying on memory, which tends to recall only successes.
  2. Be curious about what works: Investigate and learn about methodologies that are empirically proven to improve decision-making, rather than just adopting processes that merely feel structured or formal.
  3. Document and track your performance: For consequential decisions, explicitly document your predictions and then track what actually happened over time to get concrete, unbiased feedback on your forecasting accuracy (a minimal scoring sketch follows this list).
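
One lightweight way to implement the third item is a forecast log scored with a proper scoring rule. A minimal sketch (the log structure and entries are invented), using the Brier score, where lower is better:

```python
# Minimal forecast log scored with the Brier score (lower is better).
# Each record: (claim, stated probability it is true, what happened).
forecast_log = [
    ("Project ships by Q3",      0.80, True),
    ("Vendor misses the SLA",    0.30, False),
    ("Candidate accepts offer",  0.90, False),
    ("Feature adopted by >50%",  0.60, True),
]

brier = sum((p - outcome) ** 2 for _, p, outcome in forecast_log) / len(forecast_log)
print(f"Brier score over {len(forecast_log)} forecasts: {brier:.3f}")
# 0.0 is perfect; always guessing 50% scores 0.25.
```

Always guessing 50% scores 0.25, so a tracked score persistently above that is a strong signal to revisit how you form confidence.
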
Over 150
Number of studies comparing human experts to statistical models, collected by Paul Meehl from the 1950s to the early 2000s.
About 6
Number of studies, out of the 150+ collected by Paul Meehl, in which humans performed as well as or slightly better than models; results that could be due to chance.
Over 20 years
Duration of Philip Tetlock's long-term study of expert forecasters (published as Expert Political Judgment), which tracked expert forecasts in various fields.
284
Number of experts tracked in Tetlock's study, covering political, military, economic, and technology affairs.
Over 82,000
Number of individual forecasts tracked in Tetlock's study, each assigning a probability to an event.
About 20%
Share of the variation in expert judgments attributable to personal inconsistency; experts often gave a different answer when asked the same question again.
About 80%
Success rate of calibration training: the share of trainees who, by the end of training, become statistically indistinguishable from a bookie at putting odds on things.
Almost an extra $100,000/year
Extra annual income a single person would need in order to self-report being as happy as a married person, based on survey research by economist Andrew Oswald.
Less than 30
Sample-size threshold below which the Student's t-statistic is recommended for estimating a population mean from a sample.
93.75%
Probability underlying the Rule of Five: the chance that the median of an entire population falls between the smallest and largest values of a random sample of five.