Career science, open science, and inspired science (with Alexa Tullett)
1. Acknowledge Self-Deception
Understand that as individuals, we are highly prone to self-deception and bias, making external accountability and peer persuasion critical for producing results that are trustworthy, even to ourselves.
2. Embrace Rapid Iterative Exploration
To truly understand a phenomenon, engage in rapid, iterative exploration by testing it from many different angles, viewing this as the core process of figuring out the truth before conducting confirmatory studies.
3. Use Confirmatory Studies to Verify
After extensive exploration, conduct pre-registered confirmatory studies to ensure you haven’t bullshitted yourself and to provide strong, unimpeachable evidence to others about your findings.
4. Distinguish Exploration from Confirmation
When conducting research, clearly differentiate between exploratory analyses, which generate new hypotheses, and confirmatory analyses, which test pre-specified hypotheses, to maintain scientific integrity.
5. Beware “Importance Laundering”
Be vigilant against “importance laundering,” a practice where research findings that are replicable but not genuinely interesting or important are presented in a way that makes them seem important.
6. Identify Conclusion Hacking
Watch out for “conclusion hacking,” where a study presents a specific finding (X) but subtly implies a more interesting, unproven finding (X prime), often by using vague language or overclaiming.
7. Recognize Novelty Hacking
Be aware of “novelty hacking,” which involves presenting a result as novel even if it’s common sense or an already established fact, often by renaming or repackaging existing constructs.
8. Detect Usefulness Hacking
Look for “usefulness hacking,” where researchers highlight the statistical significance of a finding with a very small effect size, while downplaying its lack of practical or clinical importance.
9. Critique Beauty Hacking
Critically evaluate “beauty hacking,” a practice where messy or contradictory research results are simplified and presented as a clean, elegant, and exciting story, often by omitting inconvenient findings.
10. Prioritize Generalizability
Beyond replicability, prioritize generalizability in research, as findings that are reliable but lack broader applicability may not contribute to understanding truths about human behavior that truly matter.
11. Be Skeptical of Small Effects
Maintain general skepticism towards small effects, as they are more likely to be accidental findings due to subtle experimental mistakes and tend to be less reliable than larger effects.
12. Contextualize Small Effects
When encountering small effects, assess their importance based on context; they can be significant if they are paradigm-shifting, relate to life-or-death outcomes, or have extremely low implementation costs.
13. Prioritize Registered Reports
When seeking scientific answers, prioritize registered reports over meta-analyses because they have more safeguards against bias, ensuring that results are published regardless of outcome and that authors stick to their original plan.
14. Understand Meta-Analysis Limitations
Be aware that meta-analyses can suffer from publication bias, where only studies with significant effects are published, and heterogeneity, where diverse study designs are combined, potentially obscuring true effects.
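The distortion caused by publication bias can be made concrete with a toy simulation (a sketch of my own, not from the episode; all names and numbers are illustrative): generate many small studies of a true-zero effect, “publish” only the positive, statistically significant ones, and compare the pooled estimates.

```python
import random
from statistics import NormalDist, mean

def simulate_publication_bias(n_per_group=20, n_studies=500, seed=1):
    """Simulate studies of a true-zero effect; 'publish' only positive
    results with two-sided p < .05 (normal approximation)."""
    random.seed(seed)
    z = NormalDist()
    se = (2 / n_per_group) ** 0.5   # approx. SE of a standardized mean difference
    estimates = [random.gauss(0.0, se) for _ in range(n_studies)]
    published = [d for d in estimates
                 if d > 0 and 2 * (1 - z.cdf(d / se)) < 0.05]
    # Pooled estimate across all studies vs. across published ones only
    return mean(estimates), mean(published)

all_mean, published_mean = simulate_publication_bias()
```

Every “published” estimate must clear the significance cutoff (about 0.62 with these settings), so the published-only average is badly inflated even though the true effect is exactly zero.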
15. Expect Inconclusive Meta-Analyses
Be prepared for most meta-analyses to conclude that “more evidence is needed”; this is a common outcome, and it means definitive answers are often still elusive.
16. View P-Values as Continuous Evidence
When seeking the truth, avoid dichotomizing p-values into “significant” or “not significant” thresholds; instead, interpret them as continuous evidence against the null hypothesis, as this is the correct mathematical interpretation.
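As an illustration of why the threshold is arbitrary (my own sketch, not from the episode), two test statistics that straddle the .05 cutoff carry almost identical evidence:

```python
from statistics import NormalDist

def p_two_sided(z_score):
    """Two-sided p-value for a z statistic (normal approximation)."""
    z = NormalDist()
    return 2 * (1 - z.cdf(abs(z_score)))

# Nearly identical test statistics yield nearly identical p-values,
# even though one lands on each side of the .05 line.
p_a = p_two_sided(1.95)   # just above .05: "not significant"
p_b = p_two_sided(1.97)   # just below .05: "significant"
```

Treating `p_a` and `p_b` as categorically different conclusions discards the fact that the underlying evidence is essentially the same.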
17. Justify Alpha Based on Costs
When setting a p-value cutoff (alpha), justify it by considering the relative costs of making a false positive (Type 1 error) versus a false negative (Type 2 error) in your specific research context.
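One way to make that trade-off concrete is a small decision-theoretic sketch (my own construction under assumed costs, prior odds, and a normal approximation; none of these numbers come from the episode): pick the alpha on a grid that minimizes expected error cost.

```python
from statistics import NormalDist

def expected_cost(alpha, d, n_per_group, cost_fp, cost_fn, p_null=0.5):
    """Expected error cost of a two-sided, two-sample z-test at threshold
    alpha (normal approximation; ignores the lower rejection tail under H1)."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2)
    beta = z.cdf(crit - d * (n_per_group / 2) ** 0.5)   # Type 2 error rate
    return p_null * alpha * cost_fp + (1 - p_null) * beta * cost_fn

def best_alpha(d, n_per_group, cost_fp, cost_fn,
               grid=(0.001, 0.005, 0.01, 0.05, 0.10)):
    """Grid-search the alpha that minimizes expected error cost."""
    return min(grid, key=lambda a: expected_cost(a, d, n_per_group,
                                                 cost_fp, cost_fn))

# When false negatives are 10x costlier, a looser alpha wins;
# when false positives are 10x costlier, a stricter alpha wins.
loose = best_alpha(d=0.5, n_per_group=30, cost_fp=1, cost_fn=10)
strict = best_alpha(d=0.5, n_per_group=30, cost_fp=10, cost_fn=1)
```

The point is not these particular numbers but the direction of the result: the defensible alpha shifts with the relative costs of the two errors, rather than sitting at .05 by default.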
18. Perform Power Calculations
Before conducting a study, perform a power calculation to determine the necessary sample size, ensuring the study is adequately powered to detect the effects you are looking for.
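A back-of-the-envelope version of such a calculation for a two-sided, two-sample t-test (a sketch using the normal approximation; a real study plan should use a proper power tool):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample
    test of standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for alpha/2
    z_power = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * (z_alpha + z_power) ** 2 / d ** 2)

medium = n_per_group(0.5)   # Cohen's "medium" effect
small = n_per_group(0.2)    # Cohen's "small" effect
```

Note how a small effect (d = 0.2) needs roughly six times the sample of a medium one, which is part of why underpowered studies so often miss, or exaggerate, small effects.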
19. Pre-Register Studies
Pre-register your study’s introduction and methods section before data collection to prevent p-hacking and demonstrate that your research plan was established prior to knowing the results.
20. Show All Studies, Even Imperfect Ones
Instead of hiding initial or “crappy” studies, make them available (e.g., online or in supplementary materials) so that others can see the full research process and evaluate the sum of all evidence, preventing capitalization on chance.
21. Consult Cochrane for Health Questions
When researching medical or health questions, begin by checking the Cochrane Collaboration, which provides in-depth meta-analyses on various topics, offering a trustworthy starting point for evidence.
22. Use Google Scholar for Research
If the Cochrane Collaboration doesn’t cover your topic, use Google Scholar and include phrases like “randomized controlled trial” or “systematic review” in your search to find empirical evidence.
23. Abandon Retribution in Justice
Eliminate retribution as a goal in the justice system, as it relies on an impossible assessment of blameworthiness, and the high cost of error necessitates focusing on consequentialist goals like public safety instead.
24. Be Skeptical of Judicial Objectivity
Maintain skepticism about the objectivity of Supreme Court justices, recognizing that individuals are inherently biased and that the combination of immense power with an expectation of impartiality is problematic.
25. Define Educational Purpose First
Before designing a college admissions process, clearly define the fundamental purpose of education and the type of individuals you aim to select, as this will shape the entire selection strategy.
26. Admit for Benefit, Not Just Success
Explore college admissions models that prioritize identifying individuals who will benefit most from education, rather than solely selecting those predicted to achieve high grades or test scores.
27. Randomize Selection Past a Threshold
Implement a college admissions system where candidates must meet a basic threshold of readiness or potential benefit, after which selection is randomized to address the inherent difficulties of precise evaluation.
28. Listen Actively in Interviews
In interviews, practice active listening to identify truly important or interesting points made by the speaker, and be prepared to ask follow-up questions to delve deeper into those moments.
29. Record & Review Conversations
To enhance listening skills and self-awareness, record your conversations and listen back to them, noting points you missed or differences between your internal thoughts and what you actually verbalized.
30. Let Conversations Flow Organically
When facilitating discussions or interviews, resist the urge to heavily structure or control the conversation; instead, allow it to flow organically, as this often leads to more interesting and insightful exchanges.