When and How to Trust Science (with Stuart Ritchie)

Dec 14, 2022 Episode Page ↗
Overview

Spencer Greenberg speaks with Stuart Ritchie about making science trustworthy, examining controversies like ivermectin and vitamin D, and models of intelligence. They emphasize the importance of high-quality evidence, critical thinking, and the nuances of IQ research.

At a Glance
21 Insights
1h 16m Duration
18 Topics
8 Concepts

Deep Dive Analysis

When and How to Trust Science

Good vs. Bad Science During the COVID-19 Pandemic

The Surgisphere Data Fraud and Journal Retractions

The Ivermectin Controversy: Evidence and Advocacy

Vitamin D and the Challenge of Correlational Studies

Difficulties and Pitfalls of Statistical Control

The Gold Standard: Randomized Controlled Trials

Natural Experiments: Education's Impact on IQ in Norway

The Decline Effect: Why Initial Studies Overstate Effects

Power Posing: Hype, Backlash, and Small Effects

Growth Mindset: Exaggerated Claims vs. Actual Impact

The Controversy and Misconceptions Around IQ Research

IQ, Skill Acquisition, and Peak Performance

Compensating for Cognitive Ability with Other Traits

Philosophical Interpretations of General Intelligence

Challenges in Testing Theories of Brain Function and Intelligence

Beyond the Single IQ Score: Other Cognitive Variables

Improving the Media Ecosystem: Flagging Controversial Claims

Replication Crisis

A phenomenon in science, particularly psychology, where initial strong findings or effect sizes are not consistently reproduced in subsequent studies, often due to methodological issues or publication bias. This leads to a re-evaluation of the trustworthiness of certain research areas.

Statistical Control

A method used in observational studies to account for confounding factors by including them as variables in statistical analyses (e.g., linear regression) or by balancing groups. However, it's challenging due to unmeasured factors, non-linear effects, and measurement error.
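The pitfalls above can be seen in a small simulation (a sketch with illustrative variable names and effect sizes, not any study's actual data): controlling for a perfectly measured confounder removes a spurious effect, but controlling for a noisy measurement of that same confounder leaves residual bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounder C drives both the "treatment" X and the outcome Y;
# X has NO true effect on Y.
c = rng.normal(size=n)
x = c + rng.normal(size=n)
y = 2 * c + rng.normal(size=n)

def slope(y, *predictors):
    """OLS coefficient on the first predictor, controlling for the rest."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]

naive = slope(y, x)          # biased upward: picks up C's effect on Y
controlled = slope(y, x, c)  # near zero: confounder fully measured

# Measure the confounder with error and the bias only partly disappears.
noisy_c = c + rng.normal(size=n)
partial = slope(y, x, noisy_c)
```

This is why "we controlled for it" is weaker than it sounds: the control only works to the extent that the confounder was measured completely and reliably.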

Randomized Controlled Trial (RCT)

Considered the 'gold standard' in research, where participants are randomly assigned to different groups (e.g., treatment vs. control). Randomization ensures, on average, no systematic differences between groups, allowing for strong causal inferences.
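A minimal simulation (illustrative numbers) of why randomization licenses causal claims: a pre-existing trait that researchers never measured ends up balanced across groups anyway, simply because assignment was random.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# A pre-existing trait (e.g., baseline health) that could confound results.
baseline = rng.normal(loc=50, scale=10, size=n)

# Random assignment: each person has a 50/50 chance of treatment.
treated = rng.random(n) < 0.5

# Randomization balances the trait across groups on average,
# even though no one measured or controlled for it.
diff = baseline[treated].mean() - baseline[~treated].mean()
```

Because this holds for every confounder at once, measured or not, any remaining outcome difference between groups can be attributed to the intervention.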

Natural Experiment

A research design where a naturally occurring event or policy change (not controlled by researchers) creates conditions that mimic a randomized experiment. Researchers can then analyze existing data to infer causal relationships, though with more complications than a true RCT.

Decline Effect

The observation that initial studies on a topic often report larger effect sizes than later studies, which tend to show smaller or no effects. This can be attributed to publication bias, underpowered early studies, and increased methodological rigor in subsequent research.

General Intelligence (G-factor)

A theoretical construct in psychometrics representing a common underlying factor that influences performance on all cognitive tasks. It suggests that people who are better at one cognitive task tend to be better at others, and a single number can be extracted to predict various life outcomes.
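How a g-factor is extracted can be sketched numerically (a simplified illustration, not a real test battery): simulate scores driven by one latent ability plus test-specific noise, then take the first eigenvalue of the correlation matrix as the share of variance the common factor explains.

```python
import numpy as np

rng = np.random.default_rng(7)
n_people, n_tests = 5_000, 8

# One latent ability ("g") plus test-specific noise generates all scores;
# the loadings here are illustrative choices.
g = rng.normal(size=(n_people, 1))
loadings = rng.uniform(0.6, 0.9, size=(1, n_tests))
scores = g @ loadings + rng.normal(size=(n_people, n_tests))

# Largest eigenvalue of the correlation matrix, divided by the number of
# tests, approximates the share of variance the common factor explains.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)
share = eigvals[-1] / n_tests
```

With loadings in this range the common factor accounts for roughly the 40-50% of variance cited later in this summary; the rest is test-specific ability and noise.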

Bonds Theory of Intelligence

An alternative theory to the G-factor, proposed by Godfrey Thomson, suggesting that mental abilities (e.g., memory, verbal skills) are distinct and uncorrelated in the brain. Performance on intelligence tests appears correlated because each task involves a mix of these underlying abilities, rather than a single general intelligence.

Process Overlap Theory

A more modern restatement of the Bonds theory, suggesting that the observed correlations between different cognitive tests arise because the tests themselves draw upon overlapping cognitive processes, rather than a single underlying general intelligence factor.
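The bonds/process-overlap idea can be demonstrated directly (a toy model with illustrative numbers): give each person many *independent* elementary processes, let each test sample an overlapping random subset of them, and positive correlations between tests emerge even though no general factor exists in the simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_people, n_bonds, n_tests = 5_000, 200, 6

# Many independent elementary processes ("bonds"); no g-factor exists here.
bonds = rng.normal(size=(n_people, n_bonds))

# Each test draws on a random half of the bonds, so any two tests overlap.
scores = np.empty((n_people, n_tests))
for t in range(n_tests):
    sample = rng.choice(n_bonds, size=n_bonds // 2, replace=False)
    scores[:, t] = bonds[:, sample].sum(axis=1)

# Despite no underlying g, overlapping bonds produce positive correlations,
# from which a "general factor" could still be statistically extracted.
corr = np.corrcoef(scores, rowvar=False)
off_diag = corr[~np.eye(n_tests, dtype=bool)]
```

This is why, as discussed in the episode, the predictive validity of IQ tests is a separate question from which interpretation of the general factor is correct: both models reproduce the observed correlations.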

?
How can we determine which scientific findings to trust, especially during rapidly evolving situations like a pandemic?

Trustworthy science often follows principles like open transparency, pre-registration of study plans, and rigorous methods, as seen in successful vaccine trials, while lower quality science lacks these, leading to unreliable results.

?
What was the 'Surgisphere' scandal and its impact on medical journals?

In May/June 2020, two top medical journals, The Lancet and New England Journal of Medicine, published papers based on likely fraudulent data from a company called Surgisphere, which had to be retracted, highlighting a failure in data verification by researchers and journals.

?
Why is it difficult to draw causal conclusions from observational studies, even with statistical controls?

Statistical controls face challenges such as not measuring all relevant confounding factors, potential non-linear effects, measurement error in the control variables themselves, and the risk of 'over-controlling' by removing variance that is actually of interest.

?
How does randomization in controlled trials help establish causality?

Randomization ensures that, on average, there are no systematic differences between study groups (e.g., treatment vs. control), allowing researchers to confidently attribute any observed differences in outcomes to the intervention being studied.

?
What is the 'decline effect' in scientific research?

The decline effect describes the tendency for initial studies on a topic to report stronger effects than later, often more rigorous, studies. This can be due to publication bias, underpowered early studies, and the 'winner's curse' phenomenon.

?
Why is IQ research often controversial, and what are common misconceptions about it?

IQ research is controversial due to historical biases, political implications, and the perception that it contradicts ideas of equality. A common misconception is that 'IQ tests only tell you how good you are at doing IQ tests,' which is contradicted by evidence showing IQ's correlation with many life outcomes.

?
Does a single IQ score capture all aspects of a person's intelligence?

While a general factor of intelligence (G-factor) can be extracted from various cognitive tests and predicts many outcomes, it only explains 40-50% of the variance. Other factors, such as specific cognitive domains (e.g., verbal, mathematical, spatial abilities) and individual differences in skill acquisition, also play significant roles.

?
Can other personality traits compensate for lower cognitive abilities?

Yes, traits like conscientiousness can significantly compensate for lower cognitive horsepower, as highly conscientious individuals often organize their lives productively, learn diligently, and manage their time effectively, allowing them to achieve high levels of success.

?
What is a potential intervention to improve the media ecosystem's truthfulness?

One idea is for platforms to flag statements as 'controversial' when there is significant disagreement about them, rather than attempting to definitively label them as true or false. This would raise user awareness that a statement is not universally accepted as an objective fact.

1. Prioritize Randomized Controlled Trials

Recognize randomized controlled trials (RCTs) as the gold standard for establishing causality, especially when evaluating treatments or interventions, and prioritize them over observational studies.

2. Cultivate Critical Thinking

Adopt a critical thinking mindset by remembering that ‘people make stuff up, even people you like,’ and always trace claims back to their original source to verify validity.

3. Cultivate High Conscientiousness

Organize your life productively, set structured work/non-work times, take copious notes, and avoid frivolous tasks, as this can compensate for lower cognitive abilities and enhance productivity.

4. Develop Learning Strategies

Utilize effective learning strategies from cognitive psychology and strive to understand the underlying rules or meta-level strategies of new tasks to learn more efficiently and improve performance.

5. Be Wary of Initial Study Hype

Be aware of the ‘decline effect,’ where initial studies often show stronger effects than later, more rigorous ones; expect effect sizes to generally decrease over time.

6. Question ‘Controlled For’ Claims

When encountering claims of controlling for variables in observational studies, inquire about the specific methods used and the reliability of the measures, as full control is often difficult.

7. Seek Controversy Flags

Actively seek awareness of when a claim is controversial or has significant disagreement, rather than assuming it’s an objective, universally accepted fact, to apply more skepticism.

8. Understand IQ’s Learning Role

Recognize that higher IQ primarily indicates a faster speed of skill acquisition and learning new things, rather than an absolute barrier to learning for those with lower IQ.

9. Practice Desired Skills

Identify specific skills or domains you want to improve and practice them consistently to achieve better results, rather than solely focusing on general intelligence.

10. Use Job Task Simulations

For job selection, use specific job task simulations as the most effective predictor of performance; general mental ability (IQ) is a good overall predictor when specific task tests are impractical.

11. Take Vitamin D If Deficient

If you have medically diagnosed low vitamin D levels, take vitamin D supplements; otherwise, a healthy person with a balanced diet may not need extra supplements.

12. Consider Vitamin D for Groups

If you are vegan or have dark skin and live in cold climates, be particularly mindful of potential low vitamin D levels and consider testing.

13. Be Transparent in Research

When evaluating research, look for studies that are open and transparent about their plans before execution, including published registrations, as this indicates higher quality science.

14. Avoid Low-Quality Research

Be skeptical of research characterized by low-quality trials, non-justified methods, small sample sizes, and lack of transparency, as these are indicators of untrustworthy science.

15. Don’t Rely on Journal Reputation

When evaluating scientific findings, do not rely solely on the reputation of the journal, as even top journals can publish flawed or retracted studies.

16. Ensure Data Expertise in Review

When consuming scientific information, consider if the research has been reviewed by individuals with expertise in data science, as this helps catch issues like incorrect or fraudulent data.

17. Map Causal Assumptions

Map out your assumptions about causal relationships between variables (e.g., using directed acyclic graphs) to clarify understanding and identify potential biases in your models.

18. Train Specific Cognitive Domains

Train specific cognitive domains like working memory through repeated difficult tasks to improve performance in that domain and related tasks (near transfer), though far transfer to general intelligence is less certain.

19. General Intelligence for Versatility

Recognize that general intelligence provides potential for specialization across many different areas, allowing for adaptability in career paths and the ability to apply skills in diverse contexts.

20. Use Diverse Tests for IQ

To obtain a comprehensive measure of general intelligence, use a wide variety of cognitive tests rather than just one, as this captures the common factor across different abilities.

21. IQ Tests Measure More

Disregard the common misconception that IQ tests only measure test-taking ability, as evidence shows they correlate with important life outcomes and cognitive potential.

If people say "trust the science," well, which science? There's good science and there's bad science, and it's not so clearly delineated: it's not like all bad science is published in bad journals and all good science in good journals. It really takes some nuance to tell what is what.

Stuart Ritchie

People make stuff up, even people you like. People are constantly making stuff up, even people you like. That's the bottom line for critical thinking for me.

Spencer Greenberg

The predictive ability of IQ tests exists regardless of the interpretation of the general factor. The predictive validity is there: you have these tests, and you see what predictions you can make. Trying to understand why that is the case is a different question.

Stuart Ritchie

It's often said that it's the soft, weak areas of psychology, like social psychology, that don't replicate, but that areas like IQ research definitely do replicate. Well, let's test it.

Spencer Greenberg

The general factor of intelligence explains somewhere between 40 and 50% of the overall variation among all the different tasks that you give people.

Stuart Ritchie

How to Boost IQ on Raven's Matrices

Stuart Ritchie
  1. Sit people down for the test.
  2. Tell them that there is a rule to follow for solving the puzzles.
  3. Show them an example of what such a rule might look like (e.g., 'the little dots, if they're there twice, then they disappear in the third panel. But if they're only there once, then they remain in the third panel').
This approach helps people apply their general abilities to perform better on the test.

Dual N-Back Task

Stuart Ritchie
  1. Look at a screen showing numbers or letters appearing in sequence.
  2. Identify if the current number or letter is the same as the one that appeared 'N' positions back in the sequence (e.g., N=2, N=6, N=7).
  3. Simultaneously, wear headphones and listen to spoken numbers.
  4. Identify if the number just heard is the same as the one 'N' positions back in the auditory sequence.
  5. Performing this task for long periods can improve working memory, though its impact on general intelligence or daily life is debated.
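The scoring logic behind the steps above can be sketched in a few lines (a minimal illustration; the function name and example streams are invented for this sketch): for each new item, check whether it matches the item N positions earlier in its stream.

```python
# Minimal sketch of dual n-back scoring: the player should respond at
# every position where the current item equals the item n steps back.
def n_back_matches(stream, n):
    """Return the positions where stream[i] equals stream[i - n]."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

visual = ["A", "B", "A", "C", "A", "D"]   # e.g., grid positions on screen
audio = [3, 1, 4, 1, 5, 1]                # e.g., spoken digits in headphones

# With n=2, the correct response positions in each stream:
print(n_back_matches(visual, 2))  # [2, 4]
print(n_back_matches(audio, 2))   # [3, 5]
```

The "dual" difficulty comes from tracking both streams at once; raising N lengthens the window that must be held in working memory.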
40-50%
Variance explained by the general factor of intelligence, out of the overall variation among different cognitive tasks given to people.