Should science stop worshiping statistical significance? (with Andrew Gelman)

Mar 5, 2026 · 1h 19m · 21 insights
Andrew Gelman, Ph.D., discusses the replication crisis in science, highlighting common statistical flaws and calling for greater transparency and franker acknowledgment of uncertainty in research. He advocates for valuing criticism, measuring mechanisms directly, and incorporating prior knowledge through Bayesian methods.
Actionable Insights

1. Embrace Uncertainty in Science

Recognize that evidence often points in different directions and that it’s a relief to be uncertain. This mindset helps avoid the false certainty often derived from misinterpreting statistical results.

2. Value and Seek Criticism

Actively welcome criticism of your work, as it can reveal flaws and lead to significant improvements. Treat criticism as a gift that helps refine and strengthen your understanding or research.

3. Avoid Presumption of Correctness

Do not assume a finding is correct simply because you cannot immediately identify a ‘smoking gun’ error. Maintain skepticism and be aware that flaws may become obvious later.

4. Be Open About Uncertainty

Communicate openly about the uncertainty in your findings, rather than presenting results as definitive. This fosters a more realistic understanding of scientific evidence.

5. Prioritize Research Design

Focus on robust study design, as it is more critical than the analysis itself. Use your intuitions and prior knowledge to inform and strengthen your experimental setup.

6. Measure Mechanisms Directly

When studying complex phenomena, measure the underlying mechanisms as directly as possible, rather than relying solely on reduced-form outcome analyses. This provides deeper insights into how effects occur.

7. Avoid One-Sided Thinking

Be aware of the ‘one-way street fallacy,’ the assumption that an intervention can only be positive or neutral. Consider that effects can be negative in some settings or for certain individuals.

8. Plot Data for Initial Insights

Before formal statistical analysis, create graphs of your data to visually inspect patterns and understand what the data look like. This can provide intuitive insights that complement formal methods.
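As a minimal sketch of this habit, here is a quick text histogram in pure Python that lets you eyeball a variable's shape before fitting any model. The data are simulated stand-ins, not anything from the episode.

```python
import random

def text_histogram(values, bins=8, width=40):
    """Print a quick ASCII histogram so you can eyeball the data
    before running any formal analysis."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / step), bins - 1)  # clamp max value into last bin
        counts[i] += 1
    peak = max(counts)
    for i, c in enumerate(counts):
        bar = "#" * round(width * c / peak)
        print(f"{lo + i * step:8.2f} | {bar} ({c})")
    return counts

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(500)]  # stand-in data
counts = text_histogram(sample)
```

Even this crude picture reveals skew, outliers, or bimodality that a single summary statistic would hide.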

9. Focus on Real-World Measures

Report and interpret findings using real-world scales (e.g., percentage shifts, death rate reductions) rather than solely relying on p-values. This makes results more tangible and understandable.
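To illustrate, this short sketch translates a treatment effect into real-world quantities (absolute risk reduction, number needed to treat) instead of a bare p-value. The rates are hypothetical.

```python
# Hypothetical rates: translate a treatment effect into real-world terms
# instead of reporting only a p-value.
control_rate = 0.040    # 4.0% mortality in control group (assumed)
treated_rate = 0.032    # 3.2% mortality in treated group (assumed)

arr = control_rate - treated_rate          # absolute risk reduction
rrr = arr / control_rate                   # relative risk reduction
nnt = 1 / arr                              # number needed to treat

print(f"Absolute reduction: {arr * 100:.1f} deaths per 100 patients")
print(f"Relative reduction: {rrr:.0%}")
print(f"Number needed to treat: {nnt:.0f}")
```

"0.8 fewer deaths per 100 patients" or "treat 125 people to save one" communicates far more than "p < 0.05."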

10. Be Realistic About Effect Sizes

When designing new studies, set realistic expectations for potential effect sizes based on prior knowledge and the inherent noise in measurements. This helps avoid designing underpowered studies.
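A rough power calculation makes this concrete. The sketch below uses the standard normal approximation for a two-sample comparison of means; the effect sizes are illustrative assumptions.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate sample size per arm to detect a standardized
    effect size d in a two-sample test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# An optimistic large effect suggests a small study suffices;
# a realistic small effect demands a far bigger one.
optimistic = n_per_group(0.8)
realistic = n_per_group(0.2)
print(optimistic, realistic)
```

Assuming a small, realistic effect multiplies the required sample size by an order of magnitude, which is exactly why studies planned around optimistic effect sizes end up underpowered.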

11. Assess Mechanism Reasonableness

When evaluating health or other claims, assess the reasonableness of the proposed mechanism, especially for observational studies. This helps to gauge the plausibility of reported correlations.

12. Make Decisions with Imperfect Information

Recognize that you often have to make decisions with imperfect data and that waiting for perfect evidence is not always feasible. Not making a decision is itself a decision.

13. Conduct ‘One-Study Meta-Analysis’

Even with a single study, consider the potential variation of the effect across different populations and conditions. This ‘one-study meta-analysis’ helps account for broader uncertainty beyond the immediate sample.
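One simple way to operationalize this (a sketch, with invented numbers) is to widen the study's standard error by an assumed between-setting standard deviation tau, reflecting how much the effect might vary across populations.

```python
from math import sqrt
from statistics import NormalDist

# Single study: estimated effect and its standard error (hypothetical).
estimate, se = 0.30, 0.10

# "One-study meta-analysis": add an assumed between-setting standard
# deviation tau, guessed from prior knowledge -- an assumption, not data.
tau = 0.15

total_se = sqrt(se**2 + tau**2)  # combine within- and between-setting spread
z = NormalDist().inv_cdf(0.975)
naive_ci = (estimate - z * se, estimate + z * se)
wider_ci = (estimate - z * total_se, estimate + z * total_se)
print(naive_ci, wider_ci)
```

The widened interval is an honest statement about what the effect might be somewhere else, not just in this sample.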

14. Include Study-Level Predictors

In meta-analyses, incorporate characteristics of individual studies as predictors to understand where and for whom an intervention works or doesn’t work. This helps explain variation in effects.
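This is the idea behind meta-regression. The toy sketch below regresses invented study effects on one study-level predictor (average participant age), weighting each study by inverse variance; all numbers are illustrative.

```python
# Toy meta-regression: effect estimates from several studies, each with a
# standard error and one study-level predictor. All numbers are invented.
effects = [0.10, 0.25, 0.05, 0.30, 0.15]
ses = [0.05, 0.08, 0.06, 0.10, 0.07]
ages = [30.0, 50.0, 25.0, 55.0, 40.0]   # study-level predictor

w = [1 / s**2 for s in ses]             # inverse-variance weights
xbar = sum(wi * x for wi, x in zip(w, ages)) / sum(w)
ybar = sum(wi * y for wi, y in zip(w, effects)) / sum(w)

# Weighted least-squares slope and intercept for a single predictor.
slope = (sum(wi * (x - xbar) * (y - ybar) for wi, x, y in zip(w, ages, effects))
         / sum(wi * (x - xbar) ** 2 for wi, x in zip(w, ages)))
intercept = ybar - slope * xbar
print(f"effect ~ {intercept:.3f} + {slope:.4f} * age")
```

A nonzero slope is evidence that the intervention works differently for different populations, which is often more useful than a single pooled average.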

15. Use Training and Test Sets

When working with large datasets, split them into training and test sets. Analyze the training set extensively, then validate findings on the unseen test set to prevent overfitting and spurious results.
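A minimal version of this workflow, using only the standard library: shuffle once with a fixed seed, hold out a test set, and never look at it while exploring.

```python
import random

def train_test_split(rows, test_frac=0.3, seed=0):
    """Shuffle indices once (seeded, so the split is reproducible) and
    hold out a test set that is never touched during exploration."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(rows) * (1 - test_frac))
    train = [rows[i] for i in idx[:cut]]
    test = [rows[i] for i in idx[cut:]]
    return train, test

data = list(range(100))          # stand-in dataset
train, test = train_test_split(data)
print(len(train), len(test))     # explore on train; validate once on test
```

The discipline matters more than the mechanics: any pattern found by dredging the training set must survive a single, pre-committed check on the held-out test set.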

16. Publish Study Designs (Registered Reports)

Consider publishing the design of your study (e.g., as a registered report) before data collection. This commits you to publishing results regardless of outcome, increasing transparency and reducing publication bias.

17. Publish Raw Data Separately

If your data are interesting in their own right, publish them as a standalone data paper so other researchers can analyze them independently. This promotes open science and diverse interpretations.

18. Show Raw and Adjusted Results

When performing complex analyses, present both the raw results and the results after adjustments. This transparency helps readers understand the impact of your analytical choices.

19. Respect Implementers in Policy

When designing and implementing policies or interventions, actively involve and respect the people on the ground (e.g., doctors, teachers, police). Their commitment and involvement are crucial for success.

20. Limit Email Checking

Consider limiting email checking to later in the day (e.g., after 4 p.m.) to protect focus and productivity. This habit helps manage interruptions and maintain concentration on core tasks.

21. Spend Time with Family

Prioritize spending time with family, as it can be a significant source of happiness. This direct approach to well-being can be more effective than indirect or subliminal methods.