Making science trustworthy (with Stuart Ritchie)

Jan 25, 2023
Overview

In this episode, Spencer Greenberg speaks with Stuart Ritchie about making science trustworthy, navigating research controversies such as those over COVID treatments and IQ, and applying meta-scientific principles to critically evaluate scientific claims and data.

At a Glance
19 Insights
55m 56s Duration
17 Topics
7 Concepts

Deep Dive Analysis

Trusting Science: Navigating Good and Bad Research

COVID-19 Vaccine and Treatment Trial Quality

The Surgisphere Scandal and Journal Retractions

Ivermectin and Vitamin D: Early Hype vs. Rigorous Trials

Challenges of Controlling for Confounding Variables

The Gold Standard of Randomized Controlled Trials

Natural Experiments: Education's Effect on Intelligence

Ivermectin Consensus and Science's Hype Cycle

Power Posing and Mindset: Small Effects, Big Claims

IQ Research: Unjustified Skepticism and Overstatement

Defining and Measuring General Intelligence (IQ)

IQ's Role in Skill Acquisition and Peak Performance

Philosophical Interpretations of General Intelligence (G Factor)

Limitations of Isolating Brain Functions in Research

Beyond IQ: Specific Cognitive Domains and Trainability

Identifying Additional Cognitive Predictors Beyond IQ

Addressing Misinformation in the Media Ecosystem

Concepts

Replication Crisis

A phenomenon in science, particularly psychology, where many published research findings are difficult or impossible to reproduce when studies are repeated. This highlights issues with research methods, statistical practices, and publication bias.

Decline Effect

The observation that the effect sizes reported in initial scientific studies of a phenomenon tend to be larger than those reported in later studies, often due to methodological tightening or publication bias. This can lead to an initial overestimation of a treatment's or effect's impact.

Randomized Controlled Trial (RCT)

A research design considered the 'gold standard' for determining causality, where participants are randomly assigned to either a treatment group or a control group. Randomization helps ensure that, on average, there are no systematic differences between groups, allowing for strong causal inferences.
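
A toy simulation (illustrative only, not from the episode) of why randomization is so powerful: with a reasonably large sample, random assignment balances a potential confounder like age across groups without anyone having to measure or model it. All numbers here are hypothetical.

```python
import random

random.seed(0)

# A hypothetical cohort where age (a potential confounder) varies widely.
cohort = [{"age": random.randint(20, 80)} for _ in range(10_000)]

# Randomly assign each participant to treatment or control.
for person in cohort:
    person["group"] = random.choice(["treatment", "control"])

def mean_age(group):
    ages = [p["age"] for p in cohort if p["group"] == group]
    return sum(ages) / len(ages)

# With a large sample, randomization balances age across the groups,
# so any outcome difference can't be driven by age.
gap = abs(mean_age("treatment") - mean_age("control"))
print(f"age gap between groups: {gap:.2f} years")
```

The same logic applies simultaneously to every confounder, measured or not, which is what makes the design the "gold standard" for causal inference.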

Confounding Factors

Variables that influence both the independent and dependent variables, creating a spurious association. In observational studies, researchers try to 'control' for these factors statistically, but it's challenging due to measurement error, non-linear effects, and unmeasured variables.

General Intelligence (G Factor)

A theoretical construct representing a common underlying cognitive ability that influences performance across various intellectual tasks. IQ tests are designed to measure this general factor, which is thought to predict a wide range of life outcomes.

Indifference of the Indicator

A concept proposed by Charles Spearman, suggesting that general intelligence can be measured with almost any variety of cognitive tests, as long as a wide range of tests is used to extract the common underlying factor. The specific tests in the battery don't significantly alter the resulting general-intelligence score.

Bonds Theory of Intelligence

A theory proposed by Godfrey Thomson, suggesting that mental abilities consist of many distinct, uncorrelated "bonds" in the brain that appear correlated when measured by intelligence tasks, because each task draws on a mix of these bonds. This contrasts with the 'G factor' theory, which posits a single underlying general intelligence.

Questions

How can one distinguish between trustworthy and untrustworthy science?

Trustworthy science often follows principles like openness and transparency, pre-registering study plans, and using rigorous methods like randomized controlled trials, while untrustworthy science may involve low-quality trials, small sample sizes, and lack of transparency.

Why do initial scientific studies often show stronger effects than later ones?

This 'decline effect' occurs because early studies may be lower quality, underpowered, or subject to publication bias where only statistically significant (and often larger) effects are published, leading to an overestimation of the true effect size.
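
Publication bias alone is enough to produce a decline effect. A toy simulation (hypothetical numbers): many small studies estimate a true effect of 0.2, but only "statistically significant" results get published, so the published literature overstates the truth severalfold.

```python
import random
import statistics
from math import sqrt

random.seed(3)
TRUE_EFFECT, N_PER_ARM = 0.2, 20
se = sqrt(2 / N_PER_ARM)  # std. error of a two-group mean difference

published = []
for _ in range(2000):
    # Each small study yields a noisy estimate of the true effect.
    estimate = random.gauss(TRUE_EFFECT, se)
    # Journals tend to publish only "significant" (z > 1.96) results.
    if estimate / se > 1.96:
        published.append(estimate)

inflation = statistics.mean(published) / TRUE_EFFECT
print(f"published effects overstate the truth by {inflation:.1f}x")
```

Later, larger studies have smaller standard errors, need less luck to reach significance, and so report estimates closer to the true effect, producing the apparent "decline."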

Why are randomized controlled trials (RCTs) considered the 'gold standard' in scientific research?

RCTs are powerful because randomly assigning participants to groups ensures that, on average, there are no systematic differences between the groups, allowing researchers to confidently attribute any observed outcome differences to the intervention being studied.

Can observational studies reliably establish cause-and-effect relationships?

Observational studies are prone to confounding factors, where other variables might be causing the observed correlation, making it very difficult to reliably establish causation even with statistical controls, which can suffer from measurement error or missing factors.

What are the main criticisms and misunderstandings surrounding IQ research?

Critics often claim IQ tests only measure test-taking ability or are pseudoscience, but evidence shows IQ correlates with important life outcomes. Misunderstandings stem from cultural biases, overstatements by some proponents, and a lack of exposure to the full range of intelligence.

Does IQ set an absolute limit on an individual's potential for skill acquisition or peak performance?

While higher IQ can accelerate skill acquisition and may set a cap for peak performance in highly cognitively demanding tasks like chess, other factors like conscientiousness, charisma, and effective learning strategies can compensate and help individuals achieve high levels of success.

Can specific cognitive abilities or domains be improved through training?

Yes, specific cognitive domains like working memory can be improved through targeted training, but this 'near transfer' often doesn't extend to improving general intelligence ('far transfer'), suggesting that general intelligence is not solely bottlenecked by these specific abilities.

What is a potential intervention to combat misinformation in media?

One idea is to flag statements that are highly controversial, rather than trying to determine their objective truth or falsehood, to raise awareness for readers that there are significant disagreements about the claim.

Insights

1. Prioritize Randomized Controlled Trials

When evaluating scientific claims, especially for treatments or interventions, always prioritize evidence from randomized controlled trials (RCTs) because randomization is a powerful tool for establishing causality and eliminating confounding factors.

2. Assume People Make Things Up

Adopt a critical thinking mindset by acknowledging that people, even those you like or who are on your ‘side,’ are constantly making things up, whether through deliberate deception or self-deception, necessitating careful verification of information.

3. Be Aware of Controversial Claims

When encountering information, especially online, seek indicators that a claim is controversial or widely disagreed upon, as this awareness encourages greater skepticism and a more nuanced understanding before accepting it as objective fact.

4. Apply Skepticism to Controversial Claims

If a claim is flagged as controversial or you notice significant disagreement, give it a second thought and apply more skepticism before immediately adopting it, even if it comes from a media source you generally trust.

5. Look for Transparent Research Plans

When evaluating research, especially clinical trials, look for studies that were open and transparent about their plans, including published registrations, before the study even began, as this indicates higher quality and trustworthiness.

6. Question Research Plans Before Trials

Actively question the registrations and stated outcomes of trials before they happen, and be wary of changes or multiple registrations, as this scrutiny helps ensure methodological integrity.

7. Be Skeptical of Low-Quality Research

Be wary of scientific claims based on low-quality trials, especially those with non-justified methods, small sample sizes, or a lack of transparency regarding procedures, as these are hallmarks of untrustworthy science.

8. Don’t Rely Solely on Journal Reputation

Do not implicitly trust scientific findings based solely on the reputation of the journal they are published in, as even top-tier journals can publish flawed or retracted studies.

9. Scrutinize Data Verification

Be cautious of research where the scientists publishing the work did not actually look at or check the underlying data themselves, as this can lead to the publication of incorrect or fraudulent findings.

10. Trace Studies to Original Source

When encountering claims, especially those that seem sensational or politically charged, trace the studies and data back to their original source to verify their provenance and avoid spreading made-up stories.

11. Be Skeptical of Simple Statistical Controls

When evaluating observational studies, be aware that merely ‘controlling for something’ by throwing variables into a linear regression is often insufficient due to unmeasured factors, nonlinear effects, and measurement error in the controls themselves.

12. Map Out Causal Assumptions

When interpreting research, especially observational studies, explicitly map out the assumed causal relationships between variables (e.g., using directed acyclic graphs) to reveal hidden assumptions and be more circumspect about conclusions.

13. Expect Decline Effect in Initial Studies

Bear in mind that initial studies on a topic often report larger effect sizes than subsequent, more rigorous studies; expect effect sizes to generally get smaller as more research comes in.

14. Take Vitamin D if Deficient

If you have genuinely low vitamin D levels, you should take vitamin D supplements, as this can be beneficial for health, particularly bone health in older women.

15. Cultivate Conscientiousness

Recognize that high conscientiousness can compensate for lower cognitive abilities, enabling individuals to achieve academic and professional success by being highly organized and productive.

16. Adopt Organized Work Habits

Implement structured habits such as setting up specific times for tasks, maintaining a detailed calendar, and taking copious notes to maximize productivity and make up for potential cognitive differences.

17. Focus on Learning How to Learn

Prioritize developing meta-learning skills, understanding how to effectively acquire new knowledge and abilities, as this can significantly accelerate your personal and professional development.

18. Teach Efficient Learning Strategies

Teachers should equip students with effective learning strategies derived from cognitive psychology to help them learn more efficiently, regardless of their innate intelligence levels.

19. Understand Test Rules for Better Performance

When taking tests, especially those involving patterns or logic, understanding the underlying rules or meta-level instructions on how the test works can significantly boost performance by allowing you to apply your general abilities more effectively.

Quotes

If people say trust the science, well, which science, right? There's good science and there's bad science.

Stuart Ritchie

People make stuff up, even people you like. That's my, that's like my, like, people always are constantly making stuff up, even people you like. Like, that's like the bottom line for critical thinking for me.

Stuart Ritchie

It's easy to forget how powerful randomization is and how important it is, because it's just kind of drummed into you. Oh, yeah, randomization, that's what you want to do. But it's a really remarkable thing that you can use.

Stuart Ritchie

The predictive ability of IQ tests exists regardless of what the interpretation of the general factor is, right?

Stuart Ritchie

The combination of having that skill learning and high IQ is going to take you places a lot quicker. But it doesn't mean that you can't learn those skills if you have a lower IQ. It's just it'll take you longer.

Stuart Ritchie

We know that these tests correlate positively together. Like, that is as replicable as you could ever get. Like, you will not fail to replicate that. I would bet large amounts of money on you not failing to replicate that.

Stuart Ritchie

50%
Effect size reduction in replication studies
Effect sizes in replication studies are often 50% smaller, even if the replication is successful.

0.3 to 0.4
Correlation among different IQ measures
Individual cognitive measures might correlate at this level, contributing to the general factor.

40-50%
Variance explained by the general factor of intelligence
The general factor of intelligence explains this percentage of the overall variation among different cognitive tasks.

800 or 1,000 people
Participants in the largest power posing replication study
The largest replication study of power posing involved roughly this many participants.

42 people
Participants in the original power posing study
The original, highly influential power posing experiment had only this many participants.

0.04
P-values often found in early growth mindset studies
Early studies on growth mindset often reported p-values around this level: statistically significant, but close to the 0.05 threshold.

2012
Year of the Norwegian education study's publication
A study from Norway published in this year showed that an extra year of compulsory schooling increased IQ scores.

18, 19, or 20
Age of military service in Norway
Men in Norway take IQ tests during military service at this age, which the education study exploited.