Should we trust papers published in top social science journals? (with Daniel Lakens)

Jul 24, 2024 · 1h 41m · 28 insights
In this episode, Spencer Greenberg and Daniel Lakens discuss the craft of science, issues with peer review and research incentives, and the concept of red-teaming scientific research. They also delve into the trustworthiness of social science findings and the nuances of various psychological effects.
Actionable Insights

1. Cultivate a Culture of Criticism

Actively seek and welcome criticism on your ideas from others, and foster an environment where people feel comfortable providing constructive feedback. This helps identify flaws and improve research, as self-criticism is often insufficient.

2. Implement Early-Stage Red Teaming

Shift the criticism process, like red teaming or peer review, to the early stages of research (e.g., via registered reports) before data collection. This allows researchers to fix fatal flaws in proposals and methods when it still matters, reducing defensiveness and waste.

3. Reduce Publication Pressure for Tenure

Implement tenure systems that grant early tenure (e.g., after one year) to assistant professors, significantly reducing the pressure to publish frequently. This frees researchers to pursue more difficult and important, rather than merely numerous, projects.

4. Increase Scientific Coordination and Collaboration

Actively foster coordination and collaboration among scientists to collectively address research challenges and improve practices. This is crucial for tackling large, complex problems and building cumulative knowledge.

5. Coordinate Replications, Measures, and Challenges

Social scientists should coordinate to identify studies needing replication, standardize measurement tools, and commit to long-term, difficult research questions that require collective effort. This ensures a robust knowledge base and addresses critical, complex problems.

6. Promote Interdisciplinary Collaboration

Engage in more cross-field and interdisciplinary research, as integrating diverse expertise (e.g., sociologists, economists) is crucial for addressing complex, large-scale problems and building overarching theories. The academic system should reward this labor-intensive process.

7. Train Scientists to Admit Mistakes

Actively train scientists, especially early in their careers, on how to deal with criticism and admit when they are wrong or have made mistakes. This is a crucial skill that is often overlooked in academic training.

8. Broaden PhD Training to Practical Skills

Expand PhD programs to include training on practical skills essential for a scientific career, such as dealing with research roadblocks, developing new ideas, and managing criticism. This addresses common challenges students face but are not formally taught.

9. Discuss Principles vs. Rewards

Openly discuss the conflict between scientific principles and career reward structures with junior researchers. Encourage them to prioritize doing the right thing, even if it means a lower publication rate, as this aligns with long-term integrity and personal satisfaction.

10. Appoint a Chief Criticizer

For each research project, assign a dedicated “chief criticizer” who is responsible for identifying flaws and takes the blame if errors are found post-publication. This creates a strong incentive for thorough criticism and overcomes social biases.

11. Implement Red Teaming

Form a “red team” specifically tasked with actively trying to break down or criticize the work of another group (the “blue team”) in a collaborative environment. This method, borrowed from programming, helps identify weaknesses and improve outcomes.

12. Evaluate Research Trustworthiness

When assessing a new paper, check for falsifiable hypotheses, a clear data analysis plan, and sufficient data for accurate estimates, alongside a solid theoretical framework. This helps determine the reliability and validity of the findings.

13. Strengthen Theory to Prevent P-Hacking

Develop stronger, more constrained theoretical predictions, as these limit flexibility in data analysis and make p-hacking more difficult. Strong theory thus serves as a methodological safeguard, not only an explanatory framework.

14. Increase Transparency by Sharing Code

Share research code publicly to make mistakes visible and normalize the process of finding and fixing errors. This transparency can benefit the entire field by fostering a more open and accountable environment.

15. Sign Peer Reviews for Accountability

Voluntarily sign your peer reviews to foster a sense of personal responsibility for the quality of the critique. This can motivate reviewers to be more thorough and improve the overall peer review system.

16. Diversify Reviewer Expertise

Select peer reviewers with varied and diverse expertise, including those from outside the immediate sub-field, to provide more comprehensive and useful input. This can help catch mistakes that specialized reviewers might miss.

17. Prioritize Direct Dialogue for Disagreement

Instead of formal commentary articles in journals, resolve scientific disagreements through direct, in-person conversations. This informal setting can foster more productive and less defensive conflict resolution.

18. Engage in Adversarial Collaborations

When strong disagreements exist, scientists should engage in adversarial collaborations where they jointly design and execute studies to resolve conflicts. This method aims to reach a shared conclusion or clearly delineate remaining disagreements.

19. Utilize Flexible Pre-registration Methods

Learn and apply advanced pre-registration methods that allow for flexibility in data analysis while maintaining rigor. This addresses the common issue of unforeseen data outcomes and reduces the need to deviate from pre-specified plans.

20. Combine Exploratory and Confirmatory Analysis

Conduct exploratory analysis on a subset of data, then test the most promising hypotheses on a separate, “held-in-vault” confirmatory dataset. This allows for broad exploration while maintaining statistical rigor.
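The exploratory/confirmatory split above can be sketched in a few lines. This is a minimal illustration, not a method prescribed in the episode; the function name, split fraction, and use of participant IDs are all hypothetical:

```python
import random

def split_for_confirmation(records, explore_frac=0.3, seed=42):
    """Split a dataset into an exploratory subset and a held-back 'vault' subset.

    Hypothetical helper: explore freely on the first part, then test only the
    most promising, pre-specified hypotheses on the untouched vault part.
    """
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = records[:]       # copy so the caller's list is left intact
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * explore_frac)
    return shuffled[:cut], shuffled[cut:]

# Example: 100 participant IDs; explore on 30, confirm on the remaining 70.
explore, confirm = split_for_confirmation(list(range(100)))
print(len(explore), len(confirm))  # 30 70
```

The key discipline is procedural, not computational: the vault subset must stay unanalyzed until the hypotheses to test on it have been fixed.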

21. Measure Multiple Outcomes with Caution

Collect data on multiple outcome measures to gain a comprehensive understanding, but interpret findings cautiously, especially when only some measures show significance. Use such discrepancies as opportunities for deeper exploration rather than selective reporting.
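One standard way to make this caution concrete is a family-wise error correction such as Bonferroni, a simplified sketch (the episode does not prescribe a specific correction method):

```python
def bonferroni(p_values, alpha=0.05):
    """Flag which of several outcome measures survive a Bonferroni correction.

    Illustrative only: testing m outcomes each at alpha inflates the
    family-wise error rate, so each test is instead held to alpha / m.
    """
    m = len(p_values)
    threshold = alpha / m
    return [p <= threshold for p in p_values]

# Five outcome measures: with the corrected threshold of 0.05 / 5 = 0.01,
# only the first p-value still counts as significant.
print(bonferroni([0.001, 0.04, 0.03, 0.20, 0.60]))
# → [True, False, False, False, False]
```

Measures that are nominally significant but fail the correction are exactly the discrepancies the insight suggests treating as leads for follow-up study rather than reportable findings.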

22. Understand Underlying Principles of Methods

Focus on understanding the fundamental principles behind research methods and ethical practices, rather than mindlessly following rules or avoiding practices that merely “look like” problematic ones. This enables informed decision-making and appropriate flexibility.

23. Share Null Results to Prevent Bias

Actively share null results to combat the “file drawer problem,” which otherwise leads to a literature filled with false positives and flukes. This transparency is crucial for an accurate scientific record.

24. Guard Against Importance Hacking

Be critical of research findings that replicate but whose significance or value is overstated or misinterpreted. Ensure that the claimed meaning and importance of results are genuinely supported by the data, not just statistical significance.

25. Adopt a Growth Mindset

View your own performance and abilities as something that can improve over time with effort and learning, rather than fixed traits. This perspective is crucial for continuous development and resilience, especially in challenging fields like science.

26. Continuously Teach Growth Mindset

Integrate the teaching of a growth mindset into education and training programs, reinforcing it consistently rather than as a one-off intervention. This sustained approach can lead to more significant and lasting positive effects.

27. Encourage Specialization in Science

Promote greater specialization within scientific fields, acknowledging that expertise in areas like programming, measurement, or statistics requires dedicated training and time. This can lead to higher quality work and fewer mistakes.

28. Interpret IAT with Caution

Be highly cautious and critical when interpreting results from the Implicit Association Test (IAT), acknowledging its methodological complexities and potential confounds. Do not assume it directly measures deep-seated implicit biases like racism without clear communication of its limitations.