How quickly is AI advancing? And should you be working in the field? (with Danny Hernandez)

Aug 23, 2023 · Episode Page
Overview

Spencer Greenberg and Danny Hernandez discuss the future of AI, highlighting its predictable exponential progress driven by hardware, spending, and algorithmic improvements. They explore AI's profound implications for labor, job displacement, and the concentration of power, urging individuals to consider careers in AI safety for deep meaning and impact.

At a Glance
10 Insights · 1h 8m Duration · 19 Topics · 6 Concepts

Deep Dive Analysis

Predictability and Exponential Trends in AI Progress

Three Exponential Trends Driving AI Improvement

Algorithmic Progress Examples: Transformer and Chinchilla

Combined Impact of Exponential Trends on AI Capabilities

Sustainability of AI Exponential Trends and Potential Bottlenecks

Mapping Effective Compute to Qualitative Model Improvements

Why People Are Surprised by AI Progress

AI Progress on Benchmarks and Generalization Capabilities

Short-Term Implications of AI: Job Disruption and Power Shifts

Job Characteristics Amenable to AI Automation

Amplification vs. Replacement in Job Markets

Long-Term Implications of Advanced AI

Personal Motivation for Working on AI Safety

Reframing Motivation for AI Safety Work: Meaning Over Obligation

Specific Areas and Roles within AI Safety

Taking Ownership of Friendships

Meditation Insights from an AI Perspective

Approach to Making Difficult Decisions

Scrutinizing Assumptions About Happiness

Moore's Law

A long-standing exponential trend where the efficiency of hardware, specifically the number of transistors on a microchip, doubles approximately every two years, driving increased computation for AI.

Algorithmic Progress

Improvements in the underlying algorithms used to train AI models, leading to greater efficiency or capability. Examples include the Transformer architecture, which provided a 50x efficiency boost for translation, and optimizing training parameters like those identified by Chinchilla.

Scaling Laws

Predictable relationships showing that performance gains in AI models can be achieved by scaling up computation, data, and model size. These laws allow for forecasting how model capabilities will improve with increased resources.
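Scaling laws are typically power laws, which appear as straight lines on a log-log plot, so forecasting amounts to a linear fit in log space. The sketch below illustrates the idea with made-up coefficients (the functional form L(C) = a · C^(-b) is standard, but the specific numbers here are synthetic, not from any real model):

```python
import math

def fit_power_law(compute, loss):
    """Fit loss = a * compute**(-b) by least squares in log-log space.

    Because a power law is linear on a log-log plot, an ordinary linear
    fit of log(loss) against log(compute) recovers a and b.
    """
    xs = [math.log(c) for c in compute]
    ys = [math.log(v) for v in loss]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - slope * mx)
    return a, -slope  # loss = a * C**(-b), so b is the negated slope

# Synthetic, noiseless data following L(C) = 10 * C**(-0.05):
compute = [1e18, 1e20, 1e22, 1e24]
loss = [10 * c ** -0.05 for c in compute]
a, b = fit_power_law(compute, loss)

# Extrapolate the fitted curve to a larger compute budget:
predicted_loss = a * (1e26) ** -b
```

With real training-run data the fit is noisy, but the same extrapolation step is what makes capability forecasting from scaling laws possible.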

Few-shot Learning

The ability of large language models to perform a task effectively after being shown only a small number of examples (e.g., one, two, or five). This demonstrates improved generalization capabilities as models scale up in size and effective compute.
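In practice, few-shot learning just means packing the examples into the prompt itself; the model infers the task with no weight updates. A minimal sketch (the sentiment examples and the prompt format are hypothetical, chosen only for illustration):

```python
def build_few_shot_prompt(examples, query):
    """Format a handful of (input, output) pairs plus a new query into a
    single prompt string; the model must infer the task from the examples."""
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# A 3-shot sentiment-classification prompt (made-up examples):
examples = [
    ("I loved this movie", "positive"),
    ("Terrible service", "negative"),
    ("Best meal in years", "positive"),
]
prompt = build_few_shot_prompt(examples, "The plot dragged on forever")
```

The resulting string ends with a bare `Output:` so that the model's completion is the answer; larger models need fewer such examples to pick up the pattern.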

Alignment Problem

A core challenge in AI safety focused on ensuring that advanced AI systems develop and operate with values and goals that are aligned with human values and intentions, essentially making them 'good eggs' with positive character.

Mechanistic Interpretability

A research area within AI safety that aims to understand the exact internal mechanisms and computations within AI models. The goal is to gain a microscope-level understanding of why a model behaves a certain way, reducing the risk of unexpected or undesirable outcomes.

How predictable is AI progress?

AI progress has underlying exponential trends in hardware efficiency (Moore's Law), increased spending on training, and algorithmic improvements, making its growth somewhat predictable in terms of effective compute.

What are the main drivers of AI improvement?

AI improvement is driven by three main exponential trends: Moore's Law (hardware efficiency), increased financial investment in training large models, and algorithmic progress (e.g., the Transformer architecture and optimized training schedules).
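Because the three trends are multiplicative, their combined effect on effective compute compounds quickly. A rough sketch using approximate figures from the episode (the exact doubling times and the 3x spending midpoint are illustrative assumptions, not precise measurements):

```python
import math

def growth_per_year(doubling_time_years):
    """Annual multiplier implied by a given doubling time."""
    return 2 ** (1 / doubling_time_years)

# Approximate figures discussed in the episode:
hardware = growth_per_year(2.0)    # Moore's Law: 2x every 2 years -> ~1.41x/yr
algorithms = growth_per_year(1.0)  # ~2x less compute needed per year -> 2x/yr
spending = 3.0                     # recent spending growth of 2-4x/yr (midpoint)

# The trends multiply, so effective compute grows far faster than any one trend:
effective_compute_per_year = hardware * algorithms * spending

# Years until effective compute has grown 1000-fold at this combined rate:
years_to_1000x = math.log(1000) / math.log(effective_compute_per_year)
```

Under these assumptions the combined multiplier is roughly 8.5x per year, so a 1000x jump in effective compute takes only a few years, which is why projecting the trends forward matters more than fixating on current capabilities.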

Are the exponential trends in AI progress slowing down?

The rate of increased spending on the largest models has slowed from 8x to 2-4x per year, but Moore's Law and algorithmic progress show no signs of slowing down significantly and are expected to continue for 10-20 years.

Why are people often surprised by AI capabilities despite predictable exponential growth?

Many people are disoriented and surprised because they don't track the exponential view of AI progress, often fixating on current model limitations rather than projecting forward the rapid, continuous improvements.

Could AI progress be limited by a lack of data?

While running out of high-quality internet data is a plausible bottleneck, it's considered less likely than other factors like Moore's Law petering out or the exponential investment in scientific and engineering talent becoming unsustainable.

What types of jobs are most vulnerable to AI automation?

Jobs with high economic value, well-defined tasks, and abundant, well-structured data for training models (e.g., radiology, certain legal or medical recall tasks) are more amenable to automation than jobs requiring physical interaction or emotional intelligence (e.g., nursing).

How does AI impact job markets: amplification or replacement?

Initially, AI often amplifies human capabilities, making workers more efficient. Whether this leads to job replacement or increased demand depends on the elasticity of demand for that job's output; if demand is finite, fewer people are needed, but if demand is high and unmet, more people might be hired.
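The amplification-versus-replacement distinction can be reduced to simple arithmetic: headcount is demand divided by per-worker output, so the outcome hinges entirely on whether demand grows when AI raises productivity. A toy model with made-up numbers:

```python
def workers_needed(total_demand, output_per_worker):
    """Headcount required to meet a given level of demand."""
    return total_demand / output_per_worker

# Suppose AI triples each worker's output (all numbers illustrative):
baseline = workers_needed(total_demand=900, output_per_worker=10)

# Inelastic demand: the market still wants 900 units, so headcount falls.
inelastic = workers_needed(total_demand=900, output_per_worker=30)

# Elastic demand: cheaper output unlocks 4x the demand, so headcount rises
# even though each worker is 3x more productive.
elastic = workers_needed(total_demand=3600, output_per_worker=30)
```

Here the same 3x productivity gain cuts headcount from 90 to 30 under fixed demand, but raises it to 120 when demand expands faster than productivity, mirroring the elasticity argument above.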

What does it mean to 'take ownership' of friendships?

Taking ownership of friendships means actively pursuing a deeper connection with someone you're interested in, similar to how one might pursue a romantic or business relationship, by initiating contact, planning activities, and being willing to risk rejection.

How can thinking in AI terms inform meditation practice?

AI provides mental models for meditation. For example, removing 'noise' (distractions) allows quieter rewards like joy and peace to become salient, and turning down 'preferences' (goal orientation) can make things feel perfect, which is mathematically simpler than constructing a perfect scenario.

What is an effective approach to making difficult decisions?

An effective approach involves identifying one to three top considerations that encompass all relevant factors, rather than listing many pros and cons. This often requires abstracting or 'rolling up' smaller concerns into broader categories (e.g., 'day-to-day well-being') to simplify the decision-making process.

1. Pursue Meaningful AI Safety Work

Consider a career in AI safety (e.g., alignment, interpretability, policy, security) for deep personal meaning and high impact, as it’s presented as the highest expected value work, even if it involves a modest pay cut or perceived demotion. Competent engineers and security professionals can contribute without prior AI expertise.

2. Bet on AI Career Growth

Consider making a significant career move into AI, as its underlying exponential progress trends in hardware, spending, and algorithms are expected to continue for 10-20 years, offering a safe bet for long-term growth and impact.

3. Anticipate Job Automation & Reskill

Avoid training for jobs highly susceptible to AI automation (e.g., radiology, certain legal/medical tasks involving recall or well-defined data analysis) and proactively prepare to reskill, as AI systems are expected to increasingly replace or amplify human labor.

4. Simplify Difficult Decisions

Focus on 1-3 top considerations when making difficult decisions, rather than a long list of pros and cons, because the importance of considerations is unevenly distributed: a few factors dominate. Consolidate smaller concerns into higher-level abstractions (e.g., overall well-being, impact) to clarify the most relevant factors and simplify the choice.

5. Scrutinize Happiness Assumptions

Critically examine your assumptions about what brings happiness by reflecting on past experiences of sustained joy, rather than relying on societal narratives. This helps to better understand your personal path to a good life and avoid unscrutinized beliefs.

6. Explore Diverse AI Safety Roles

Investigate specific areas within AI safety like alignment (ensuring models share human values), alignment science (measuring safety), mechanistic interpretability (understanding AI internals), AI policy, or lab security, as these offer various ways to contribute to making AI beneficial.

7. Assess Job Automation Risk

Evaluate job roles based on their amenability to AI: tasks with high economic value, clear definitions, and abundant training data are at higher risk of automation, while those requiring physical interaction or emotional intelligence (e.g., nursing) are currently safer.

8. Develop AI Management Skills

Focus on developing skills in managing AI systems, such as prompt engineering or building AI systems, as these will be crucial for leveraging AI capabilities and influencing the world in the future workforce.

9. Take Ownership in Friendships

Actively pursue potential friendships by getting contact information, initiating follow-up invitations (one-on-one or events), and being prepared for potential rejection. This proactive approach can accelerate friendship formation compared to passively waiting for unplanned interactions.

10. Enhance Meditation with AI Models

Apply AI-inspired mental models to meditation: recognize that retreats improve signal-to-noise ratio by removing external distractions, and understand that reducing preferences or goal orientation can lead to a state of perceived perfection.

I think for exponentials in general, like COVID, a lot of people were like, "Oh my God, this is a thing we all have to pay attention to," and a lot of other people just weren't paying attention until the exponential hit and it had already grown to the point where it obviously mattered, rather than projecting it forward and seeing that it would obviously be really important.

Danny Hernandez

You should trust an old scientist if they tell you what's possible, but not if they tell you what's not possible.

Spencer Greenberg

I think that AI systems are able to turn capital into labor, and I think that puts capital in an overall better position and reduces labor's bargaining power.

Danny Hernandez

I think that the meaning is just, I don't know, worth 10 or a hundred times any kind of sacrifice that I'd have had to make, for me personally.

Danny Hernandez

I think almost everybody who thinks they're living in line with their values thinks that they would respond to a call to heroism or something like that. And I think they just don't recognize what actual calls to that look like.

Danny Hernandez

Taking Ownership of Friendships

Danny Hernandez
  1. When meeting someone you are interested in becoming closer to, get their contact information.
  2. Figure out the next concrete step you should take to advance the relationship.
  3. Invite that person to do something, either an event or a one-on-one activity.
  4. Be prepared to feel a bit nervous and accept the risk of rejection, similar to other situations where you are pushing a relationship forward.

Making Difficult Decisions

Danny Hernandez
  1. Identify one to three core considerations that are most important for the decision, holding only these in your head.
  2. Roll up or redistribute smaller, related concerns into these top considerations (e.g., combine interest, people, and proximity into 'day-to-day well-being').
  3. Make the decision based on these few dominant factors, aiming for a choice that, if explained to someone else, would seem obviously correct given your priorities.

2x every 2 years: hardware efficiency increase (Moore's Law), since the 1960s.
10x per year: compute growth for the largest AI training runs, 2012 to 2017.
2 to 4x per year: spending growth on the largest AI models, more recently.
2x less computation every 16 months: algorithmic efficiency improvement (AlexNet equivalent), initial estimate.
2x less computation every 12 or 9 months: algorithmic efficiency improvement (AlexNet equivalent), per more recent work.
50x: Transformer architecture efficiency increase for translation, in a single year.
40,000x: contribution of increased spending to total effective compute growth, 2012 to 2018.
8x: contribution of Moore's Law to total effective compute growth, 2012 to 2018.
44x: contribution of algorithmic efficiency to total effective compute growth, 2012 to 2018.
300 to 600x: approximate effective compute difference between GPT-2 and GPT-3.
2 years: time for a capability jump equivalent to GPT-3 to ChatGPT/Claude, based on current exponential trends.
95%: voice recognition accuracy threshold for broad usefulness, achieved around the time of Alexa's release.
At least 10%: probability of human-level general intelligence within 10 years, a view held at Anthropic.
$20 to $100 million: estimated cost to train the most expensive current AI models (internet estimates).
$20 to $40 billion: estimated cost of a brand-new TSMC fabrication plant, an example of very large capital expenditure.
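
If the spending, hardware, and algorithmic contributions are treated as independent and multiplicative, as the episode's framing suggests, the 2012-2018 figures can be combined directly. A quick arithmetic check (the multiplicative-independence assumption is a simplification):

```python
# Decomposition of effective-compute growth, 2012-2018 (figures from the episode):
spending = 40_000  # growth in dollars spent on the largest training runs
hardware = 8       # Moore's Law: 2x every 2 years over ~6 years = 2**3
algorithms = 44    # the same results achievable with 44x less compute

# Spending and hardware multiply to give growth in raw (physical) compute:
physical_compute = spending * hardware

# Algorithmic efficiency multiplies on top, giving growth in effective compute:
effective_compute = physical_compute * algorithms
```

Under these assumptions, raw compute for the largest runs grew about 320,000x over the period, and effective compute grew about 14 million x, which is why a 300 to 600x jump like GPT-2 to GPT-3 can recur every couple of years.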