The most important century (with Holden Karnofsky)

Aug 27, 2025 · 1h 48m · 20 insights
Holden Karnofsky, a Member of Technical Staff at Anthropic and co-founder of GiveWell and Open Philanthropy, discusses whether society is at "peak progress," the dynamics of innovation, and the radical uncertainty of an AI-driven future. He explores the implications of exponential growth, the "low-hanging fruit" theory, and the critical need for careful AI development and pandemic preparedness.
Actionable Insights

1. Embrace Epistemic Humility

Practice admitting when you are wrong to build resilience and improve long-term well-being and epistemic accuracy. Cultivate a preference for discovering and correcting errors, even if initially embarrassing, over maintaining false beliefs.

2. Learn by Writing

Adopt a “learning by writing” approach by articulating your current opinion first, then using it to identify research questions and guide your learning. Externalize and rigorously test your ideas by writing them down, outlining evidence, and actively seeking counterarguments before fully committing to a stance.

3. Read Non-Fiction Strategically

Read non-fiction strategically by skimming, focusing on introductions, and engaging with criticisms to guide deeper dives into relevant sections. This approach helps retain more information relevant to your specific interests.

4. Form Flexible Opinions

Form opinions early, even with limited knowledge, but hold them with varying strengths and be open to easy revision. Distinguish between forming an opinion (acceptable with limited knowledge) and acting forcefully or making high-stakes decisions based on it (requires more expertise).

5. Vet Expert Knowledge

Develop a strategy for identifying trustworthy experts by doing foundational research yourself, then deferring to them on specific topics. Adjust your learning investment based on the field’s complexity and your need for informed judgment.

6. Acknowledge Future Uncertainty

Acknowledge the radical uncertainty of the future by understanding historical growth patterns, and avoid extrapolating future trends solely from recent history. Recognize the current era as historically unique due to rapid technological and quality-of-life changes.

7. Plan for Stagnation

Prepare for potential stagnation or collapse rather than assuming continuous exponential growth, as indefinite multi-percent growth is unsustainable. Understand that innovation per researcher tends to decrease over time due to “low-hanging fruit” being picked.

8. Accelerate Innovation via Minds

To accelerate innovation, significantly increase the number of minds working on problems, potentially through AI, to outweigh low-hanging fruit dynamics.

9. View Progress with Nuance

Maintain a nuanced view on progress, acknowledging that while growth has generally been good, new technologies like AGI could have unforeseen negative consequences. Reframe the perception of historical “geniuses” by understanding that modern intellectual talent is abundant but faces different innovation landscapes.

10. Mitigate New Addictions

Be aware that technological progress can create new forms of addiction and “traps” that exploit human psychology, such as social media and processed foods.

11. Mitigate Tech Harms Collectively

Recognize that societal problems caused by technology can often be mitigated through collective action and regulation, as seen with air pollution.

12. Focus Safety on AI and Bio

Focus safety efforts on specific, foreseeably high-risk technologies like bioweapons and AI, rather than broadly restricting all innovation. Pursue both technical solutions and regulatory frameworks for AI safety, as they are complementary and mutually reinforcing.

13. Learn from Pandemics

Do not assume society will automatically learn from and effectively respond to major crises like pandemics without deliberate effort. Advocate for low-cost, high-impact interventions (e.g., improved air circulation) in public health responses.

14. Identify AI Research Threshold

Identify AI’s ability to autonomously conduct high-level AI research as a critical threshold for exponential progress and heightened risk, signaling a need for extreme caution.

15. Separate Responsibility from Outcome

Differentiate between responsible action and achieving a good outcome, recognizing that positive results can sometimes occur despite irresponsible approaches.

16. Prioritize Career Fulfillment

When giving career advice, prioritize what individuals genuinely have energy for and can excel at, rather than solely focusing on external impact metrics. Approach complex, high-stakes work like AI development with positive energy and high integrity, as personal well-being correlates with a reduced risk of inadvertently causing harm.

17. Strategic Impact Allocation

When evaluating impact, quantify and maximize benefits that are directly comparable (apples-to-apples). When benefits are incommensurable, balance quantitative comparison with intuitive diversification to avoid excessive noise and negative consequences.

18. Skepticism of Pure Philosophy

Use philosophical thought experiments to challenge intuitions, but prioritize your core heuristics and values over purely theoretical philosophical arguments in high-stakes decisions. The methodology of philosophy is often not robust enough to outweigh practical judgment.

19. Support Neglected High-Impact Areas

Direct attention and effort towards highly neglected but impactful areas such as AI safety, pandemic preparedness, animal welfare, and global poverty. These fields offer immense opportunities for doing good that are currently under-resourced.

20. Value Modern Social Science

Prioritize learning from contemporary social science, which has grown steadily more rigorous; modern work, particularly in causal inference, is significantly more reliable than that of past decades.