1. Restructure Math Education for Real-World Use
Advocate for high school math curricula that prioritize basic statistics, data analysis, probability, and direct logic instruction over traditional geometry or advanced calculus. This helps students understand the world, interpret news and medical information, and make better decisions.
2. Teach Logic Directly
Instead of hoping students learn logic through indirect methods like geometric proofs, teach actual logic directly. This is a more effective way to train logical thinking skills.
3. Understand Correlation vs. Causation
Learn to differentiate between correlation and causation. This is crucial for interpreting information, understanding public discourse, and avoiding false conclusions about why events occur.
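As a concrete illustration (not from the episode, and with made-up numbers), a short Python simulation shows how a hidden common cause can produce a large correlation between two variables that have no causal link to each other:

```python
# Minimal simulation: a hidden common cause produces a large correlation
# between two variables that do not cause one another. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

temperature = rng.normal(25, 5, n)                        # hidden common cause
ice_cream_sales = 2.0 * temperature + rng.normal(0, 2, n)
drownings = 0.5 * temperature + rng.normal(0, 2, n)

r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation(ice cream sales, drownings) = {r:.2f}")
# The correlation is large, yet neither variable causes the other: both track
# temperature. Intervening on ice-cream sales would not change drownings.
```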
4. Grasp Bayesian Probability & Base Rates
Develop an intuitive understanding of Bayesian probability, particularly how to incorporate base rates into decision-making. This helps avoid common errors in interpreting probabilities, such as in medical test results.
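A toy calculation (illustrative numbers only, not figures from the episode) shows why a positive result from an accurate test can still leave the chance of having the condition low when the base rate is low:

```python
# Why a "99% accurate" test does not mean a 99% chance of disease after a
# positive result when the condition is rare. Hypothetical numbers throughout.
base_rate = 0.01            # 1% of people have the condition
sensitivity = 0.99          # P(test positive | disease)
false_positive_rate = 0.05  # P(test positive | no disease)

p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_disease_given_positive = sensitivity * base_rate / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")  # about 17%
```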
5. Understand Core Statistical Concepts
Learn the concepts of mean and median, and the basics of probability distributions, including measures of spread such as the standard deviation. These are fundamental for making sense of data, the world, and scientific findings.
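A small example (hypothetical figures) of why the mean and the median can tell very different stories about the same skewed data:

```python
# Mean vs. median and spread on skewed data (hypothetical incomes, in thousands).
import statistics

incomes = [30, 32, 35, 38, 40, 42, 45, 50, 55, 900]  # one extreme value

print("mean:  ", statistics.mean(incomes))            # pulled up to 126.7 by the outlier
print("median:", statistics.median(incomes))          # 41.0, the 'typical' value
print("stdev: ", round(statistics.stdev(incomes), 1))  # huge, because one outlier dominates the spread
```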
6. Participate in Scientific Reasoning
Recognize that scientific reasoning is not exclusive to ‘scientists’; everyone can participate in better reasoning by understanding principles of mathematical probability and logic. This democratizes knowledge acquisition.
7. Interpret P-values Correctly
Understand that a p-value is the probability of observing data as extreme as, or more extreme than, what was actually observed if there were no real effect; it is not the probability that an effect exists given the data. This prevents misinterpretation of scientific results.
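One way to make this concrete is a quick simulation (hypothetical numbers, not an example from the episode): the p-value is just the share of "no-effect worlds" that produce data at least as extreme as what was observed:

```python
# What a p-value actually measures: the probability of data at least this
# extreme ASSUMING there is no real effect. Here the null is a fair coin.
import numpy as np

rng = np.random.default_rng(1)
n_flips, observed_heads = 100, 61        # suppose we saw 61 heads in 100 flips
expected = n_flips // 2

# Simulate many "no-effect worlds": a perfectly fair coin flipped 100 times.
sims = rng.binomial(n_flips, 0.5, size=100_000)
p_value = np.mean(np.abs(sims - expected) >= abs(observed_heads - expected))

print(f"two-sided p-value ≈ {p_value:.3f}")
# Reads as: IF the coin were fair, results this lopsided would occur only about
# this often by chance. It is NOT the probability that the coin is fair.
```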
8. Use P-values to Rule Out Sampling Error
Employ p-values as a tool for judging whether a result can reasonably be attributed to random sampling error or noise. A very low p-value means that data this extreme would rarely arise from sampling noise alone.
9. Be Wary of Small Sample Size + Large Effect
Be suspicious of findings that combine a small p-value, a very small sample size (e.g., N = 20), and a large effect. Such results are often flukes and less likely to replicate.
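A rough simulation (assumed parameters, purely illustrative) of why this happens: among tiny studies of a small true effect, the ones that clear p < 0.05 are exactly the ones that overestimated the effect, so follow-up studies usually cannot reproduce them:

```python
# Why a "significant" result from a tiny study is suspect: with N = 20 per group
# and a small true effect, the studies that cross p < 0.05 wildly overestimate
# the effect (the "winner's curse"). All parameters are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n, n_studies = 0.2, 20, 10_000    # true effect of 0.2 SD, N = 20 per group

significant_effects = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(true_effect, 1.0, n)
    result = stats.ttest_ind(treated, control)
    if result.pvalue < 0.05:
        significant_effects.append(treated.mean() - control.mean())

print(f"true effect: {true_effect} SD")
print(f"studies reaching p < 0.05: {len(significant_effects) / n_studies:.1%}")
print(f"average effect reported by those studies: {np.mean(significant_effects):.2f} SD")
# The 'significant' small studies report effects several times the true size,
# so same-sized follow-up studies will usually fail to replicate them.
```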
10. Avoid Dichotomous Thinking with P-values
Do not treat the 0.05 p-value cutoff as a strict dichotomy for ‘statistical significance.’ Evidence exists on a continuum, and results just above or below this arbitrary line are essentially equivalent in terms of actual evidence.
11. Publish Theoretically Interesting Null Results
Prioritize publishing null results for questions that are theoretically interesting and would advance the field, regardless of whether the answer is positive or negative. This informs researchers about dead ends or what doesn’t work.
12. Publish Applied Null Results for Common Beliefs
Publish null results for applied interventions that are widely used or believed to be effective but are found not to work. This provides valuable information to practitioners and the public.
13. Publish Null Results for Failed Methods/Tools
Disclose when a scientific method or tool is discovered not to perform as claimed. This helps improve future scientific practices and tool development.
14. Implement Rigorous Coding Practices in Academia
Academics should adopt industry best practices for programming, including testing code for bugs and conducting independent code reviews. This helps catch inevitable errors in data analysis.
15. Double-Check High-Impact Work
For research with significant implications (e.g., policy, public health), rigorously double-check all calculations and analyses. Increased impact demands increased responsibility and verification.
16. Question Successes as Much as Failures
Perform ‘post-mortems’ on successful experiments or projects, not just failures, to evaluate whether good decisions were made or luck played a significant role. This improves decision-making processes.
17. Adopt Unit Tests in Scientific Programming
Integrate unit tests into scientific code development: small pieces of code written specifically to verify that other code behaves as expected. This is an indispensable practice for catching bugs.
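A minimal sketch of what this looks like in practice, using a hypothetical analysis helper and pytest-style tests (the function name and test cases are invented for illustration):

```python
# A tiny analysis helper plus unit tests that pin down behavior in cases where
# a silent bug would be easy to miss (e.g., missing values). Run with pytest.
import math

def clean_mean(values):
    """Mean of the values, ignoring NaNs; None if nothing is left."""
    kept = [v for v in values if not math.isnan(v)]
    return sum(kept) / len(kept) if kept else None

def test_clean_mean_ignores_nans():
    assert clean_mean([1.0, 2.0, float("nan"), 3.0]) == 2.0

def test_clean_mean_handles_all_missing():
    assert clean_mean([float("nan")]) is None
```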
18. Fully Disclose Experimental Methods
Provide comprehensive and detailed descriptions of experimental methods in scientific publications. This is crucial for other researchers to accurately reproduce studies.
19. Be Less Grandiose in Scientific Claims
Avoid overgeneralizing findings from specific populations (e.g., university undergraduates) to broader humanity. Acknowledge the limits of generalizability and the specific context of the study.
20. Design Interventions for Specific Populations/Formats
When developing interventions, study them on populations very similar to the intended real users and in the exact format in which they will be deployed. This helps ensure effectiveness without needing broad generalizability.
21. Integrate Lightweight RCTs into Intervention Deployment
Weave lightweight randomized controlled trials (RCTs) directly into the ongoing deployment of interventions. This allows for continuous, high-quality data collection on the target population and iterative improvement.
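A rough sketch of how this could look in code (function names, outcome counts, and the choice of test are all hypothetical): each user is randomly assigned to the current or candidate version at the point of delivery, and accumulated outcomes are compared with a standard test once enough data has come in:

```python
# Lightweight RCT folded into normal deployment: stable random assignment per
# user, then a comparison of outcomes across arms. Everything here is a sketch.
import hashlib
from scipy import stats

def assign_variant(user_id: str) -> str:
    """Stable 50/50 assignment: the same user always gets the same arm."""
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return "candidate" if int(digest, 16) % 2 else "current"

def compare_variants(successes_a, n_a, successes_b, n_b):
    """Two-sided test of whether the two arms' success rates differ."""
    table = [[successes_a, n_a - successes_a],
             [successes_b, n_b - successes_b]]
    chi2, p_value, dof, expected = stats.chi2_contingency(table)
    return p_value

# Made-up outcome counts accumulated during normal deployment:
print(assign_variant("user-123"))
print(f"p-value for a difference between arms: "
      f"{compare_variants(312, 1000, 355, 1000):.3f}")
```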
22. Conduct Baseline RCTs in New Contexts
When introducing an intervention or policy in a new geographical or cultural context, always start with a baseline RCT to verify its effectiveness in that specific environment. Do not assume generalizability.
23. Continuously Test and Improve Interventions
Treat interventions as dynamic entities that can be continually improved through ongoing A/B testing and data collection, rather than static, one-time deployments.
24. Recognize Impact of Implementation Quality
Understand that even a theoretically sound intervention will fail if poorly implemented. Quality of execution is a critical factor for success.
25. Account for Dosage Differences
Be aware that variations in intervention dosage (e.g., duration, frequency) can significantly alter outcomes and affect generalizability.
26. Consider Cultural Factors in Interventions
When deploying interventions in new areas, carefully consider local cultural factors and potential moral opposition, as these can profoundly impact effectiveness.