Large language models, deep peace, and the meaning crisis (with Jim Rutt)

May 31, 2023
Overview

Spencer Greenberg and Jim Rutt discuss the power and applications of large language models (LLMs), their rapid evolution, and potential societal transformations. They explore various AI risks, including AGI, misuse by bad actors, and the acceleration of the status quo, alongside the "meaning crisis" and the concept of "deep peace."

At a Glance

- 23 Insights
- 1h 24m Duration
- 15 Topics
- 10 Concepts

Deep Dive Analysis

History and Evolution of Large Language Models (LLMs)

Current Capabilities and Limitations of LLMs

Future Trajectory and Open Source LLM Development

Addressing Concerns about Open Source LLMs and Bad Actors

The Role of LLMs in Orchestrated Multi-Part AI Systems

Example: LLM-Assisted Movie Screenwriting Prototype

Debate: LLMs and the Nature of Human Creativity

Societal Transformation: LLMs and Customer Service

The Rise of Personal Information Agents to Combat AI-Generated Sludge

Impact of LLMs on Search Engines

Six Categories of AI Risks

Discussion on the Most Pressing AI Risks

The Concept of Deep Peace and its Necessity

The Meaning Crisis: Causes and Potential Solutions

Consequential Choices and Meaning in Life

Large Language Models (LLMs)

LLMs are deep learning neural net models, specifically using transformer technology, that analyze vast bodies of text (corpuses) to find short-range and long-range correlations between words. When given a set of words, they predict what words would reasonably come after, allowing them to generate coherent text.
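The core mechanism described above, predicting a plausible next word from learned statistics, can be illustrated with a toy sketch. The probability table here is invented for illustration; a real LLM learns such statistics across billions of parameters from its training corpus rather than having them hard-coded:

```python
# Toy next-word statistics, keyed by the last two words of context.
# A real transformer learns these correlations from a vast corpus.
NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
    ("sat", "on"): {"the": 0.8, "a": 0.2},
}

def generate(prompt, steps):
    """Greedily extend the prompt: at each step, pick the most likely
    next word given the last two words of context."""
    words = prompt.split()
    for _ in range(steps):
        context = tuple(words[-2:])
        dist = NEXT_WORD_PROBS.get(context)
        if dist is None:
            break  # no learned signal for this context
        words.append(max(dist, key=dist.get))
    return " ".join(words)

print(generate("the cat", 3))  # -> "the cat sat on the"
```

Real models sample from the distribution rather than always taking the most likely word, which is how the same prompt can yield different continuations.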

Hallucinations (in LLMs)

This refers to instances where LLMs generate plausible but factually incorrect information. Because the models are built on statistical correlations between words, when strong signals are absent they produce something plausible but wrong rather than admitting they don't know.

Open Source LLMs

These are large language models where the software that creates the models, the corpuses (training data), and the model weights are all publicly available. This allows for community extension, fine-tuning, and removal of restrictive 'nanny rails' imposed by commercial developers.

Personal Information Agents

These are AI-powered front ends, similar to advanced spam filters, designed to interface with the information sphere on behalf of a user. They curate, summarize, and filter out AI-generated spam and misinformation, delivering only relevant content to the user.
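A minimal sketch of such an agent is shown below. The keyword-overlap scoring here is an invented heuristic for illustration; a real agent would use an LLM classifier to judge relevance and detect spam:

```python
# Sketch of a personal information agent: score incoming items against
# the user's interests and drop likely "sludge".
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    body: str

def relevance(item, interests):
    """Count how many of the user's interest keywords appear in the item."""
    text = (item.title + " " + item.body).lower()
    return sum(1 for kw in interests if kw in text)

def curate(feed, interests, threshold=1):
    """Return only items matching at least `threshold` interests."""
    return [it for it in feed if relevance(it, interests) >= threshold]

feed = [
    Item("Open source LLM weights released", "fine-tuning guide ..."),
    Item("You won't BELIEVE this one trick", "click here now ..."),
]
kept = curate(feed, interests=["llm", "fine-tuning", "game b"])
print([it.title for it in kept])  # keeps only the relevant item
```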

Yudkowsky Risk (AGI Singularity)

This risk posits that an AI, once it surpasses human intelligence (AGI), could recursively improve itself at an exponential rate, leading to a 'fast takeoff' singularity. In a worst-case scenario, this superintelligence could take over the world and reshape it for its own goals, potentially eliminating humanity.

Game A

This term refers to the current societal status quo, characterized by exponential growth, competition, and a drive towards efficiency, often at the expense of the biosphere and human well-being. AI is seen as accelerating Game A's existing trends.

Game B

This is an alternative societal paradigm that aims for a stable, non-growth-oriented economics, focusing on improvement in knowledge and human well-being without accelerating the use of Earth's resources or destroying the biosphere. It represents a different way of thinking about societal organization.

Multipolar Trap

A concept from game theory where multiple actors are compelled to act in ways that are collectively suboptimal, often due to competitive pressures. For example, nations are caught in a defense multipolar trap, forcing them to maintain exponential economic and technological growth to fund defense, even if they desire peace.

Deep Peace

A state beyond mere absence of war, where the very thought of warfare is no longer conceivable due to fundamental changes in institutions and human capacity. It requires sufficient confidence that no entity would attempt invasion or destabilization, allowing societies to move away from exponential economics driven by defense needs.

Meaning Crisis

An affliction of the status quo where many people feel alienated from their lives, lacking purpose or deep meaning. One perspective attributes it to the rejection of a 'two worlds model' (physical and transcendental realms) post-Enlightenment, while another suggests it stems from lives increasingly abstracted from reality and filled with inconsequential choices.

What are Large Language Models (LLMs) and how do they work?

LLMs are deep learning neural networks, primarily using transformer technology, that learn short and long-range correlations between words from vast text corpuses. They function by predicting the most reasonable next word in a sequence, enabling them to generate human-like text.

Are LLMs sentient or conscious?

No, LLMs are not sentient or conscious; they are simple feed-forward neural nets with no explicit logic, no internal loops beyond very small local ones, and no ability to change themselves. People often over-read into them because of their language interaction capabilities.

How will LLMs impact customer service?

LLMs like GPT-4 are expected to significantly improve customer service by handling complex queries that current human agents (often paid $17/hour) struggle with, potentially making frustrating support interactions a thing of the past.

How will society combat the expected flood of AI-generated spam and misinformation?

Society will need to develop personal information agents, similar to advanced spam filters, that act as an interface to the information sphere. These agents will curate, summarize, and filter out AI-generated 'sludge,' delivering only relevant information to users.

Will LLMs replace traditional search engines?

LLMs will not replace search engines entirely, especially for mundane or fringe searches. However, they will significantly impact the industry, with most search engines expected to integrate LLM wrappers as an optional interface within a year, while direct literal searches will still be necessary at times.

What is the 'Yudkowsky risk' or AGI singularity?

This risk describes a scenario where an AI, once it surpasses human intelligence, rapidly and recursively improves itself, leading to a superintelligence that could take over the world and reshape it for its own goals, potentially turning it into something like a 'paperclip factory' and eliminating humans.

What is 'deep peace' and why is it necessary for humanity's future?

Deep peace is a state where the thought of warfare is unthinkable, achieved through institutional and human capacity changes that eliminate worries about war. It's necessary because as long as nations are caught in a 'multipolar trap' of defense spending, they cannot back away from exponential economics that are destroying the biosphere.

What is the 'meaning crisis' and what causes it?

The meaning crisis describes a widespread feeling of alienation and lack of purpose in life. One view attributes it to the post-Enlightenment rejection of a 'two worlds model' (physical and supernatural realms), while another suggests it stems from modern life being increasingly abstracted from reality, leading to a large number of inconsequential decisions.

1. Prioritize Consequential Decisions

Reorder your life around decisions that have actual, tangible consequences; unlike inconsequential choices, this engagement with real-world impact can foster a greater sense of meaning and agency.

2. Cultivate an Ecology of Practice

Engage in practices like martial arts, meditation, or (potentially) psychedelics to deprogram from “foolishness” and enhance “relevance realization,” which can help you find meaning in everyday life.

3. Strive for Deep Peace

Work towards a state where warfare is unthinkable by establishing radical transparency (no government secrets, citizen oversight) and a robust social immune system to self-organize responses against those who violate peace.

4. Develop Personal AI Agents

Create or use AI-powered information agents to filter and summarize the overwhelming “sludge” of AI-generated spam and disinformation online, acting as a curated interface to the infosphere.

5. Empower Periphery with AI

To counteract AI accelerating the status quo (“Game A”), individuals and alternative movements (“Game B”) must rapidly learn and use these AI technologies to accelerate positive alternatives.

6. Integrate LLMs into Systems

Combine LLMs with other AI components (like short/long-term memory, symbolic AI, evolutionary AI) to create more powerful and intelligent systems, leveraging LLMs for their language handling expertise.
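As a concrete illustration of such orchestration, here is a minimal sketch. The `llm_complete` function, the `Orchestrator` class, and the keyword-overlap retrieval are all invented assumptions; a real system would call an actual completion API and use a proper vector-store memory:

```python
# Sketch of an orchestrated multi-part system: an LLM handles language,
# while a separate long-term memory component supplies facts.
def llm_complete(prompt):
    # Placeholder: a real system would call a model API here.
    return f"[completion for: {prompt[:40]}...]"

class Orchestrator:
    def __init__(self):
        self.memory = []  # long-term memory: a list of remembered facts

    def remember(self, fact):
        self.memory.append(fact)

    def answer(self, question):
        # Retrieve naively by keyword overlap, then let the LLM phrase
        # the answer using the retrieved facts as context.
        relevant = [f for f in self.memory
                    if any(w in f.lower() for w in question.lower().split())]
        context = "; ".join(relevant)
        return llm_complete(f"Context: {context}\nQuestion: {question}")

bot = Orchestrator()
bot.remember("GPT-4 context window is limited")
print(bot.answer("What about the GPT-4 context window?"))
```

The design point is the division of labor: the LLM is used only for what it is expert at (language), while memory, retrieval, or symbolic components handle what it is not.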

7. AI Enhances Creativity

Embrace AI tools as assistants for tasks like rewriting and generating initial drafts, freeing humans to focus on curation, fine-tuning, and higher-level creative direction.

8. LLMs for Creative Generation

Employ LLMs in a recursive process where they generate initial content (e.g., story hints, synopses, scenes, dialogue), which humans then curate and edit, feeding back into the LLM for refinement or style emulation.

9. LLMs for Style & Character

Utilize LLMs to emulate specific writing styles (e.g., Hunter S. Thompson, Ernest Hemingway) or create synthetic ones, and to develop characters with specific personality attributes (e.g., OCEAN model) and emotional states to drive dialogue.

10. Explore Consequential Roles

Consider shifting towards jobs in fields like local agriculture, where decisions are inherently consequential, as automation displaces meaningless employment, to find greater meaning in life.

11. Maintain Decision-Making Capacity

Be mindful of the gradual handover of decisions to AI, as it risks humans losing essential cognitive capacities and control over society.

12. Resist Automated Police States

Actively oppose the use of narrow AI, such as facial recognition and surveillance, to build highly automated police states.

13. Awareness of Deep Simulation

Recognize the risk of living in increasingly abstracted “simulation” levels (e.g., through info agents or easily populatable metaverses), which could exacerbate human alienation from reality.

14. Verify LLM Information

Be wary of LLM hallucinations, especially for less-known facts, and always verify information from LLMs, as they can produce plausible but false answers due to their statistical nature.

15. LLMs for Intellectual Searches

Use LLMs for complex informational searches, especially in intellectual domains, but always verify the proposed answers, as they can still hallucinate.

16. Beware AI Advertising

Recognize that AI, especially LLMs combined with cognitive science, will produce qualitatively more powerful advertising; be aware of its potential to manipulate.

17. Beware Concentrated AI Power

Consider the risk of a runaway first-mover advantage allowing a small number of companies to dominate intellectual property creation (movies, books, TikToks) using AI.

18. Adapt to AI Progress

Recognize that AI technologies are advancing rapidly and are unstoppable; focus on dealing with their emergence rather than trying to halt development.

19. Anticipate Online Business Shifts

Expect online platforms to shift from advertising-supported models to API subscriptions as AI information agents make it easy to strip out ads, potentially leading to a more direct payment for services.

20. Prepare for Smaller Startups

Recognize that AI will enable small teams (founder + 1-2 people) to build significant companies, potentially disrupting traditional startup structures.

21. Consider Open-Source LLMs

Explore open-source large language models (when available) for greater control, fine-tuning capabilities, and removal of restrictive “nanny rails” imposed by commercial providers.

22. Prepare for AI Customer Service

Expect LLMs like GPT-4 to rapidly take over customer service roles, improving efficiency for mundane and complex issues.

23. Leverage LLMs for Interaction

Use plain language to interact with technology, as LLMs can remove the need for coding skills, making technology more accessible.

I'm going to go further and say it's bigger than the internet, probably. I've been suggesting it's as big as the emergence of the PC in the late 1970s.

Jim Rutt

My gut reaction is that the people that have that objection have a grossly too high estimate of what human creativity really is, right? That there is no magic black box.

Jim Rutt

The sludge factor, the flood of sludge is going to exponentially grow from this point forward. So we're going to need these personal information agents, just like spam filters, to be our interface to the information sphere.

Jim Rutt

I think that is close to the root of most of the evil that we see online. And I think this is going to be one of the great benefits from this info agent as the response to the flood of sludge that will come from the LLMs.

Jim Rutt

Fighting wars is the stupidest and worst thing the humans do, bar none. But that alone is not enough.

Jim Rutt

If you've ever played games like Civilization or any of these other exponential growth games, if you fall off the exponential curve, you get eaten by the other guy.

Jim Rutt

We live slower, live a less materially rich life, but a life that provides a role of dignity and meaning for everybody, irrespective of their biological, familial, sociological endowment.

Jim Rutt

LLM-Assisted Movie Screenwriting (Prototype)

Jim Rutt
  1. Provide a short 'hint' (1-2 sentences) about the movie's concept.
  2. Use GPT-3.5 (or similar LLM) to expand the hint into a longer, detailed movie synopsis.
  3. Allow the screenwriter to edit and tune the generated synopsis.
  4. Generate a specified number of scene titles and short descriptive texts for each scene from the synopsis.
  5. Allow the screenwriter to select a scene and edit its descriptive text.
  6. Generate dialogue and action for the selected scene based on its description.
  7. Allow the screenwriter to edit the generated dialogue and action, or go back to previous steps to regenerate.
  8. Integrate character personality attributes (e.g., OCEAN model) and volatile emotional states (e.g., OCC model) to influence dialogue generation.
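The workflow above can be sketched as a pipeline. `llm` here is a hypothetical stand-in for a GPT-3.5-style completion call, and the prompts and function names are invented for illustration; the human editing steps are omitted:

```python
# Sketch of the hint -> synopsis -> scenes -> dialogue pipeline.
def llm(prompt):
    # Placeholder for a real completion API call.
    return f"<generated from: {prompt[:30]}>"

def expand_hint(hint):
    # Step 2: expand a 1-2 sentence hint into a detailed synopsis.
    return llm(f"Expand this movie hint into a detailed synopsis: {hint}")

def scene_list(synopsis, n):
    # Step 4: derive n scene titles/descriptions from the synopsis.
    return [llm(f"Scene {i + 1} of {n} for synopsis: {synopsis}")
            for i in range(n)]

def write_scene(description, character_traits):
    # Steps 6 and 8: character traits (e.g. OCEAN scores) steer dialogue.
    return llm(f"Dialogue for scene '{description}' "
               f"with characters {character_traits}")

hint = "A retired spy must mentor the AI that replaced her."
synopsis = expand_hint(hint)        # step 2 (screenwriter edits: step 3)
scenes = scene_list(synopsis, n=3)  # step 4 (select and edit: step 5)
draft = write_scene(scenes[0], {"Lena": {"openness": 0.9}})  # steps 6-8
print(len(scenes), "scenes generated")
```

Each stage's output is human-curated before feeding the next, which is the recursive generate-then-edit loop the prototype is built around.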
- 2016, 2017: When the new deep learning neural net models got rolling.
- Factor of almost 10: Increase in size from GPT-3 to GPT-3.5 (corpus and parameter count).
- $20 a month: ChatGPT Plus subscription, which gives paying users access to GPT-4.
- 6,000 tokens: Maximum input text length for GPT-4 via the API (approximately 3,500 words or more).
- 800,000 words: Coherence length for GPT-4 essay writing; it easily writes an essay of this length without drifting.
- 300-line program: Coherence length for GPT-4 program writing (reasonably well, per Jim Rutt's guess).
- Coming weeks: Expected time for an open source model to match GPT-3.5, based on Stability AI's release schedule.
- Six months: Expected time for an open source model to match GPT-4 (Jim Rutt's prediction).
- A year or 18 months: Expected time for an open source model to match GPT-5 (Jim Rutt's prediction).
- Factor of two every year: Cost reduction for GPUs and tensor processors, faster than Moore's law due to parallelism.
- 20 years: Consensus view of the AGI development timeline; some say 5 years due to LLM acceleration.
- 1.1 times: Hypothetical AGI horsepower increase, smarter than the smartest human, sufficient for a fast takeoff.
- Six hours: Hypothetical time for an AGI stack to become a million times smarter, in the worst-case fast-takeoff scenario.
- 2 million: Total CCTV cameras in London, not yet hooked up to state-of-the-art AI.
- 1870: Earliest date for the correlation between power in war, economics, and technical innovation (at the latest, 1914).
- $1: Revenue per monthly active user for Twitter (pre-Musk); the revenue density an API subscription would need to match.
- $0.25: Revenue per monthly active user for Reddit; it could double its revenue density by charging $0.50 for API access.