Highs and lows on the road out of the replication crisis (with Brian Nosek)

Nov 8, 2024 · 1h 38m · 18 insights
Spencer Greenberg speaks with Brian Nosek about progress in open science and psychology research. They discuss the value of replication, the importance of communicating scientific uncertainty, and how to improve research practices and incentives, including the "registered reports" model and the innovative "Lifecycle Journal Project."
Actionable Insights

1. Embrace Perspectivism in Science

Adopt the philosophy that every claim is true under some conditions, and progress involves identifying those conditions. This helps navigate complex fields where causation is difficult to pinpoint.

2. Take Replication Failures Seriously

When a study fails to replicate and there is no strong theoretical reason to expect the failure, treat it as a serious challenge to the general claim. Use the failure to narrow the conditions under which the claim applies, or to question whether the claim is productive at all.

3. Test Hypotheses About Replication Failures

Instead of retrospectively explaining away replication failures with post-hoc reasons, formulate specific hypotheses about potential influencing factors and empirically test them. This moves beyond speculation to scientific inquiry.

4. Communicate Scientific Uncertainty Clearly

Always include the inherent uncertainty when communicating scientific findings to the public. This helps recipients understand the provisional nature of science and make informed decisions based on the current evidence.

5. Prioritize Uncertainty for Surprising Findings

When encountering surprising or newsworthy scientific results, give as much weight to the uncertainty surrounding the finding as to the finding itself: highly surprising results are often less robust, as the sketch below illustrates.
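
A standard way to see why surprise and robustness trade off is Bayesian: a surprising claim has low prior odds, so even a statistically significant result leaves only modest posterior confidence that the effect is real. A minimal sketch in Python, with illustrative values (not from the episode) for prior probability, power, and false-positive rate:

```python
def posterior_prob_true(prior_prob, power=0.8, alpha=0.05):
    """Probability that a significant finding reflects a true effect,
    given the prior probability that the tested hypothesis is true."""
    true_positives = prior_prob * power          # true effects detected
    false_positives = (1 - prior_prob) * alpha   # null effects flagged anyway
    return true_positives / (true_positives + false_positives)

# Same study design, two hypotheses of different prior plausibility:
print(posterior_prob_true(0.50))  # ~0.94 for an unsurprising claim
print(posterior_prob_true(0.05))  # ~0.46 for a surprising one: a coin flip
```

Under these assumptions, the identical "p < 0.05" result warrants very different confidence depending on how surprising the claim was before the study was run.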

6. Seek Long-Form & Systematic Reviews

For a deeper understanding of a scientific field, consult long-form pieces and systematic reviews that synthesize decades of research. These give a fuller picture than any single newsworthy finding.

7. Adopt Open Science Practices

Implement practices like sharing data, code, materials, and protocols openly. This transparency enhances the evaluability of research, facilitates self-correction, and allows others to build on work more effectively.

8. Utilize Registered Reports for Publishing

Researchers should use the “registered reports” publishing model, where peer review occurs before results are known. This aligns incentives with rigorous methodology and important research questions, improving research quality.

9. Reward “Done Well” Over “Figured Out”

The scientific community and journals should shift the reward system to prioritize the quality of research execution over the completeness or “figured out” nature of findings. This encourages publishing well-conducted exploratory or messy work.

10. Surface and Reward Discovery Phase

Create mechanisms to make visible and reward the “messy” exploratory and discovery phases of scientific research. This acknowledges their crucial role in pushing the boundaries of knowledge.

11. Publish Unexpected Phenomena

Be willing to publish research that uncovers unexpected phenomena or raises more questions than answers, even without a complete explanation. Such findings are valuable for understanding the conditions under which phenomena occur.

12. Embrace Openness for Error Detection

Researchers should embrace maximum openness and transparency, recognizing that errors are inevitable. Open sharing allows dedicated critics to scrutinize work, identify errors, and collectively reduce uncertainty, leading to progress.

13. Engage with Lifecycle Journal Models

Researchers and the scholarly community should engage with or develop “Lifecycle Journal” models that integrate diverse evaluation services across the entire research process. This treats all research outputs (data, code, plans) as first-class contributions.

14. Utilize Diverse Evaluation Services

Explore and use a variety of evaluation services beyond traditional peer review, such as prediction markets, data-sharing quality assessments, and independent reproducibility checks. Together, these modular services give a more complete picture of research quality; one way to score forecast-based services is sketched below.
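
As one concrete example of a non-traditional evaluation service, replication prediction markets produce probability forecasts that can later be scored against realized replication outcomes, for instance with a Brier score (lower is better, 0 is perfect). A minimal sketch with made-up forecasts and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts that findings will
    replicate and the realized outcomes (1 = replicated, 0 = did not)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three hypothetical market prices, then the observed replication results:
print(brier_score([0.85, 0.30, 0.60], [1, 0, 1]))  # ~0.09
```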

15. Shift from Volume to Quality

Researchers and institutions should re-align incentives to prioritize the quality of research and the evaluations it receives, rather than solely focusing on the volume of publications. This can lead to more rigorous and impactful work.

16. Guard Against Importance Hacking

Researchers and reviewers must be vigilant against “importance hacking” by critically comparing stated research claims with the actual statistical evidence. Ensure that reported significance, novelty, and importance are truly justified by the data.

17. Develop Claim-Evidence Matching Services

For evaluation services, develop tools or protocols specifically designed to assess how well research claims align with the supporting evidence; a simple consistency check of this kind is sketched below. This helps identify and mitigate “importance hacking” in published work.
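
One narrow but well-established form of claim-evidence matching is statistical consistency checking in the spirit of tools like statcheck: recompute a p-value from the reported test statistic and degrees of freedom, and flag it if it disagrees with the p-value the paper states. A minimal sketch, with made-up numbers and an arbitrary tolerance:

```python
from scipy import stats

def check_reported_p(t_stat, df, reported_p, tol=0.005):
    """Recompute a two-tailed p-value from a reported t statistic and
    flag a mismatch with the p-value stated in the paper."""
    recomputed = 2 * stats.t.sf(abs(t_stat), df)
    return recomputed, abs(recomputed - reported_p) <= tol

# A paper reports t(28) = 2.05, p = .01; the statistic actually implies p ~ .05.
recomputed, ok = check_reported_p(t_stat=2.05, df=28, reported_p=0.01)
print(f"recomputed p = {recomputed:.3f}, consistent with report: {ok}")
```

A full claim-evidence matching service would go further and ask whether the verbal claims are warranted by the evidence at all, but mechanical checks like this are a cheap first line of defense.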

18. Conduct Cross-Disciplinary Replicability Evaluations

Actively conduct evaluations of replicability, reproducibility, and generalizability across different scientific disciplines. This helps identify common challenges and areas of strength, informing targeted improvements.