AI apocalypticism vs. AI optimism (with Adam Russell)

Aug 1, 2024
Overview

In this episode, Spencer speaks with Adam Russell, Director of the AI division of the Information Sciences Institute at USC, about 'apocaloptimism' regarding AI's future. They discuss the critical need to integrate AI with social science, foster cognitive diversity, and solve coordination problems so that society can thrive alongside AI.

At a Glance
30 Insights
1h 4m Duration
12 Topics
5 Concepts

Deep Dive Analysis

Defining Apocaloptimism and AI's Profound Impact

Why a Neutral Stance on AI's Future is Unlikely

Categorizing Different Camps in AI Debates

Distinction: Making AI Safe vs. Building Safe AI

AI's Role in Solving Human Coordination Problems

Introducing Qualintative Research for AI Understanding

Social Science Concept: 'Matter Out of Place' (Dirt)

Applying 'Matter Out of Place' to AI and Cultural Categories

Shifting Focus from Individual to Collective Intelligence

Introducing Quorum Intelligence (QI) as a New Metric

The Importance and Cultivation of Cognitive Diversity

ISI's AI Division Strategy: AI Now, AI Next, AI in the Wild

Apocaloptimism

A neologism coined to capture a rational reaction to the current era, especially concerning AI, that acknowledges both the apocalyptic potential and the techno-optimistic possibilities. It suggests that AI will lead to a dramatic outcome, either very good or very bad, rather than a neutral middle ground.

Matter Out of Place (Dirt)

A social science concept, derived from anthropologist Mary Douglas, referring to things that fall between established cultural categories. These 'out of place' items can cause discomfort or be attributed powerful, dangerous, or even supernatural capabilities because they challenge our ingrained classification systems. AI is considered 'matter out of place' because it blurs categories like human/machine and self/other and unsettles our notions of creativity.

Qualintative Research

A term combining quantitative and qualitative research approaches, aiming to use AI to understand macro-level patterns and trends (quantitative) while simultaneously capturing the nuances and context of lived experience (qualitative). This approach seeks to provide a more comprehensive understanding of complex social phenomena.

Quorum Intelligence (QI)

A proposed concept for measuring intelligence at a network level, as an alternative to individual IQ. QI assesses an individual's access to wider networks of intelligence and their ability to contribute to making that network smarter, reflecting the collective nature of human success and innovation.

Cognitive Diversity

The variety of ways people think, process information, and make associations, often stemming from different experiences, backgrounds, and networks. When harnessed effectively, cognitive diversity leads to more robust insights and better forecasting, as it introduces unique perspectives that improve collective understanding and decision-making.

What is 'apocaloptimism' in the context of AI?

Apocaloptimism is a term coined to describe a rational reaction to the current era, acknowledging both the potential for AI to lead to an apocalyptic future and the possibilities for a techno-utopian future, suggesting that a dramatic outcome (either very good or very bad) is more likely than a neutral middle ground.

Why is a neutral or 'middle ground' perspective on AI's impact considered less plausible?

The nature of AI technology, particularly its ability to learn, change itself, and make decisions, together with the rapid pace at which it is being engineered, suggests that its impacts will be profound rather than merely incremental. The lack of a sufficiently mature AI safety science further reduces the likelihood of a neutral outcome.

What is the distinction between 'making AI safe' and 'building safe AI'?

'Making AI safe' implies retrofitting safety measures onto existing AI systems, while 'building safe AI' means incorporating alignment, ethics, transparency, explainability, and potentially different architectures from the ground up, which are also solutions to near-term problems like bias and inequity.

Can AI help solve human coordination problems?

Yes, AI is seen as critical for solving coordination problems, such as aligning AI with diverse human values, by eliciting and aggregating opinions from various communities to find common ground, and potentially reversing social media algorithms that divide people.

What is 'qualintative' research?

Qualintative research is a blend of quantitative and qualitative approaches, leveraging AI to identify macro-level patterns and trends from data (quantitative) while simultaneously capturing the rich context and nuances of lived experience (qualitative), traditionally the domain of anthropology.

How does the social science concept of 'matter out of place' (or 'dirt') apply to AI?

AI acts as 'matter out of place' because it blurs fundamental cultural categories like human/machine, self/other, and creativity, leading to discomfort, strong reactions, and the attribution of powerful or dangerous qualities to AI, similar to how cultures react to things that don't fit neatly into their established classifications.

Why might focusing on individual intelligence (IQ) be misleading for future innovation and problem-solving?

Focusing solely on individual intelligence might be misleading because the success of the human species is largely a function of social learning and collective intelligence, not just individual capabilities. An alternative, 'Quorum Intelligence' (QI), suggests measuring access to and contribution to wider networks of intelligence.

What is 'cognitive diversity' and why is it important?

Cognitive diversity refers to the variety of ways people think, process information, and make associations, often stemming from different experiences, backgrounds, and networks. It's crucial for improving forecasting and decision-making because diverse perspectives lead to more robust insights that cannot be easily replicated by homogenous groups.

What are the three buckets for AI strategy at the Information Sciences Institute (ISI)?

The three buckets are: 1) 'AI Now' (tools that exist or are near-term for social scientists to solve current problems), 2) 'AI Next' (AI that needs to be built to advance social science, like social AI, causal inference AI, or metacognitive AI), and 3) 'AI in the Wild' (measuring and understanding the meaning and nuance of AI's real-world impact).

1. Build Safe AI from Ground Up

Focus on building “safe AI” from the ground up, incorporating alignment, ethics, causality understanding, metacognition, transparency, and explainability. These solutions address both near-term issues like bias and long-term existential risks.

2. Increase AI Safety & Alignment Resources

Invest significantly more resources into AI safety and alignment research and development. The current proportion compared to system engineering is insufficient to address both present and future risks.

3. Integrate AI and Social Science

Develop strategies that meaningfully integrate AI and social science to understand how human socio-technical systems operate, behave, and can be improved. This is crucial for navigating AI’s impact.

4. Solve AI Coordination Problems

Recognize that addressing AI’s challenges, both short-term and existential, fundamentally requires solving complex coordination problems at unprecedented scales. This is a critical challenge for humanity.

5. Use AI to Solve Coordination Problems

Leverage AI to help solve coordination problems, including future governance and understanding diverse values, especially from marginalized communities. This ensures AI alignment and societal thriving.

6. Acknowledge Diverse AI Perspectives

When considering AI’s future, acknowledge different camps (apocalyptic, techno-optimist, tool-focused, skeptical) rather than dismissing them. This approach helps steer towards optimistic outcomes.

7. Shift from Individualism to Collective

Move away from an overemphasis on individualism, especially in interconnected systems and AI innovation. This shift better addresses coordination problems and accelerates progress through collective intelligence.

8. Consider Quorum Intelligence (QI) over IQ

Explore shifting the focus from individual IQ to “Quorum Intelligence” (QI), which measures an individual’s access to and contribution to wider networks of intelligence. This reflects a more collective view of capability.

9. Cultivate Diverse Networks for QI

Actively seek to extend the diversity of your personal and professional networks. This is recognized as a key factor in improving your “Quorum Intelligence” (QI) and overall effectiveness.

10. Harness Cognitive Diversity for Decisions

Actively harness cognitive diversity in forecasting and decision-making processes. A greater variety of thinking, when effectively managed, consistently leads to stronger insights and better outcomes.

11. Seek Cognitive Diversity via Experience

Understand that socioeconomic and demographic diversity are valuable because they often lead to cognitive diversity. This means different ways of thinking shaped by varied experiences and networks.

12. Prioritize Collective Knowledge in Science

Reorient scientific incentives away from individual publication and tenure towards contributions that genuinely advance collective knowledge and reproducibility. This moves beyond a zero-sum game of individual rewards.

13. Incentivize Scientific Replication

Promote and incentivize scientific replication and open science practices, potentially using AI-enabled tools to improve reproducibility and assign credit scores to research. This moves beyond the current publication-focused reward system.

14. Adopt Open Science Methodologies

Implement strong scientific methodologies, including replication, data sharing, and an open science approach. These practices significantly improve the reproducibility and reliability of research findings.

15. Use AI for Process Transparency

Employ AI to document and capture the research process, not just the results, to enhance transparency and provide valuable insights. This advances collective knowledge by understanding why experiments succeed or fail.

16. Request Rationales in Forecasting

When engaging in forecasting or decision-making, ask for rationales behind predictions. These explanations provide valuable signals and insights into how people perceive the world, even if the prediction itself is inaccurate.

17. Cultivate Cognitive Diversity Long-Term

Cultivate cognitive diversity over the long term, recognizing that different perspectives may be more or less accurate in varying contexts. A track record is needed to understand their utility.

18. Improve QI via Track Record Weighting

Improve collective “Quorum Intelligence” (QI) by incorporating diverse individuals and weighting their forecasts based on their historical track record. This is effective even if individual forecasting ability isn’t exceptional.
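The track-record weighting described above can be sketched in code. This is a minimal illustration, not a method specified in the episode: the particular scheme (weighting each forecaster by the inverse of their historical Brier score) and all names and numbers below are assumptions chosen for clarity.

```python
# Illustrative sketch: aggregate probability forecasts from a group,
# weighting each forecaster by historical accuracy (inverse Brier score).
# The weighting scheme is an assumption, not from the episode.

def brier_score(forecasts, outcomes):
    """Mean squared error between past probability forecasts and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def weighted_forecast(current, history):
    """Combine current forecasts, weighting each person by past track record.

    current: {name: probability forecast for the new question}
    history: {name: (past_forecasts, past_outcomes)}
    """
    weights = {}
    for name, (past, outcomes) in history.items():
        # Lower Brier score (better past accuracy) -> higher weight.
        weights[name] = 1.0 / (brier_score(past, outcomes) + 1e-6)
    total = sum(weights[n] for n in current)
    return sum(current[n] * weights[n] / total for n in current)

# Hypothetical data: "ana" has a well-calibrated track record,
# "bo" has an uninformative one, so ana's forecast dominates.
history = {
    "ana": ([0.9, 0.2, 0.8], [1, 0, 1]),
    "bo":  ([0.5, 0.5, 0.5], [1, 0, 1]),
}
current = {"ana": 0.8, "bo": 0.4}
print(round(weighted_forecast(current, history), 3))
```

The point of the sketch is the episode's claim that a group can be made collectively smarter even when no individual forecaster is exceptional: the aggregate is pulled toward whoever has demonstrated accuracy, while still incorporating everyone.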

19. Promote QI by Expanding Networks

Actively promote “Quorum Intelligence” (QI) by identifying and connecting individuals to new networks or areas of innovation. This expands their access to diverse knowledge and perspectives.

20. Utilize Current AI Tools for Social Science

Empower social scientists and others to use existing and near-term AI tools (like ChatGPT, Elicit, and Claude) as assistants for brainstorming hypotheses and tackling current problems. This can generate profound insights and solutions.

21. Develop Advanced AI for Social Science

Focus on building “AI next” that can genuinely advance social science, including AI capable of collective-level thinking, social learning, causal understanding, inference, and metacognition. This helps discern right from wrong.

22. Measure & Understand AI in Wild

Implement methods to quantitatively measure and qualitatively understand the real-world impact and nuances of AI as it is deployed. This moves beyond a purely engineering-focused approach.

23. Integrate Qualintative AI Research

Advance “qualintative” research by using AI to capture the nuances of lived experience and context, complementing quantitative data. This integration helps explain macro trends and inform localized interventions.

24. Prioritize Qualitative Data for Understanding

Incorporate qualitative data collection in research to understand the meaning behind quantitative measurements, generate new hypotheses, and gain insights into how people interpret questions and experiences.

25. Develop Constitutional AI Principles

Draw inspiration from “constitutional AI” to establish foundational principles that enable diverse individuals to coordinate and agree on AI’s purpose and governance. This is similar to how human constitutions function.

26. Employ AI for Value Elicitation

Utilize AI-powered systems (like “violet teaming”) to effectively elicit and aggregate diverse human opinions and values. This helps identify common ground and inform AI development.

27. Reflect on Personal Categorical Resistance

Practice self-reflection to identify and be mindful of one’s own resistance to “category-breaking ideas,” especially concerning AI. This resistance can be deeply cultural and visceral rather than rational.

28. Seek All Information for Decisions

Cultivate epistemic humility and actively seek all available information from diverse sources when making decisions. This is more effective than relying solely on individual judgment.

29. Recognize Cultural Resistance to Prediction Markets

Be aware that using prediction markets for “sacred” or intrinsic values will likely face strong, non-rational, cultural, and visceral pushback. This differs from their use for instrumental predictions.

30. Overcome Decision-Maker Bias for Forecasting

Challenge the “I am the decision maker” cultural category in governance by embracing collective intelligence and crowdsourced forecasting. Recognize that incorporating diverse opinions, even if more accurate, can be met with non-rational resistance.

We need to stop trying to make AI safe and start making safe AI.

Stuart Russell (as paraphrased by Adam Russell)

It's the first technology we've created that can actually not just learn, but can change itself. It's the first technology we've created that can make decisions essentially on its own. And it's the first technology we've created that is warning us in the process.

Yuval Noah Harari (as paraphrased by Adam Russell)

Imagine how much harder physics would be if electrons could think.

Murray Gell-Mann (quoted by Adam Russell)

The nuances of lived experience.

Teddy Collins (quoted by Adam Russell)

Dirt is matter that is out of place.

Adam Russell (explaining Mary Douglas's concept)
96%
Replication success rate in a multi-site, multi-team study. Achieved by Brian Nosek's group after adopting strong scientific methodology, replication, and open science practices, demonstrating high reproducibility of the original effects.
2020
Year of global pandemic and surprising AI advancements. AI continued to surprise even experts that year, highlighting its rapid evolution.
1966
Year Mary Douglas published 'Purity and Danger'. The book introduced the concept of 'dirt' as 'matter out of place'.