Spencer Greenberg speaks with Adam Marblestone about focused research organizations (FROs) as a new model for scientific progress, the importance of field building to overcome sociological barriers in science, and beneficial applications of AI, including large language models, for research and societal challenges.
Deep Dive Analysis
15 Topic Outline
Introduction to Focused Research Organizations (FROs)
Contrasting FROs with Traditional Scientific Research
Examples of Focused Research Organizations in Biology
Applying the FRO Model Beyond Biology
FRO Funding, Scale, and Comparison to Mega-Projects
How FROs Shift Scientific Incentive Structures
The Rationale Behind Time-Limited Focused Research Organizations
Organizational Structure and Leadership in FROs
Understanding Field Building and Stagnation in Science
Examples of Fields Held Back by Sociological Factors
Strategies for Overcoming Field Stagnation
The Role of Different Actors in Building Scientific Fields
Beneficial Applications of Large Language Models (LLMs) in Science
Automated Theorem Proving as a Key AI Application
Using AI for Better Recommender Systems and Contextualization
6 Key Concepts
Focused Research Organization (FRO)
A special-purpose, non-profit organization with a fixed, specific mission to create a scientific tool, system, or data set. It operates like a startup with a finite duration (e.g., five to seven years) and a dedicated team, distinct from traditional university labs or for-profit companies.
Field Building in Science
The process of counteracting non-scientific, often sociological, factors that can block or stall scientific progress. It involves increasing the probability that a good idea or breakthrough will blossom into a new, established area of research with critical mass, resources, and standards.
System One Recommender
A type of recommendation algorithm, often seen in social media, that optimizes for immediate user engagement, attention, or base reactions, potentially leading to clickbait or polarizing content.
System Two Recommender
A theoretical type of recommendation system that optimizes for a user's reflective preferences, longer-term goals, or higher values, aiming to provide content that the user would genuinely find valuable upon deeper consideration.
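The contrast between the two recommender types can be sketched in a few lines of code. Everything below is an illustrative assumption, not any real platform's API: the field names (`predicted_clicks`, `reflective_value`) and the blending weight are hypothetical stand-ins for an engagement model and a model of the user's reflective preferences.

```python
# Hypothetical sketch of "System One" vs. "System Two" ranking.
# Field names and weights are illustrative assumptions only.

def system_one_score(item):
    # Optimizes purely for predicted immediate engagement.
    return item["predicted_clicks"]

def system_two_score(item, reflection_weight=0.8):
    # Blends predicted engagement with a model of what the user
    # would endorse on reflection (their longer-term values).
    return ((1 - reflection_weight) * item["predicted_clicks"]
            + reflection_weight * item["reflective_value"])

def rank(items, score_fn):
    # Order candidate items by the chosen scoring function.
    return sorted(items, key=score_fn, reverse=True)
```

Note that `reflection_weight` is exposed as a parameter: making such knobs user-adjustable is one way a recommender could let individuals tune how strongly their reflective preferences override raw engagement.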
Contextualization Engine
A proposed AI system that, when queried, provides comprehensive, multifaceted, and relatively unbiased context about a topic, similar to a Wikipedia article, instead of optimizing for relevance to a specific user or for clicks.
Automated Theorem Proving
The use of computational systems to formally verify or generate mathematical proofs. This involves expressing mathematical concepts in formalized programming languages, allowing AI to potentially reduce search space and assist in proving theorems.
10 Questions Answered
What is a focused research organization (FRO)?
An FRO is a special-purpose, non-profit organization designed like a startup with a finite duration (5-7 years) and a specific mission to create a tool, system, or data set that benefits the scientific process.
How do FROs differ from traditional academic labs?
Unlike traditional labs, where individual professors secure grants for their students to pursue distinct research and publish papers, FROs are larger, goal-driven teams (20-30 people) focused on a specific engineering-like objective, often integrating multiple technologies.
Why aren't FRO-style projects funded by venture capital?
FROs are ideal for projects that aim to create public goods for science, where the value is broadly catalytic rather than easily capturable as venture capital returns, allowing for sustained engineering and capital investment without market constraints.
What kinds of projects are FROs best suited for?
FROs are well-suited for capital-intensive, engineering-intensive projects that build tools, systems, or data sets, such as mapping brain circuits, developing methods for culturing diverse microorganisms, or creating fusion-prototypic neutron sources for materials science.
How do FROs address academic publishing and career pressures?
FROs mitigate the pressure for individual publications by offering a professionalized team environment that can lead to entrepreneurship (spin-off companies), roles in other institutes, or industry, providing alternative career paths outside traditional academic tenure tracks.
Why are FROs deliberately time-limited?
The finite duration (5-7 years) acts as a forcing function for urgency and clarity of purpose, preventing mission dilution and reducing the long-term cost for funders, while still allowing for potential spin-offs into permanent institutes or open-source maintenance.
What is field building in science?
Field building involves actively counteracting sociological or non-scientific factors that can impede scientific progress, ensuring that promising ideas gain the critical mass, resources, and legitimacy to develop into established research areas.
Why do some scientific fields stagnate?
Fields can stagnate if they become associated with premature commercialization, overly large claims, or ethical controversies, leading mainstream scientists to reject them and preventing the accumulation of legitimate scientific evidence.
How can large language models accelerate science?
LLMs can boost science by automating 'Type 1' skills (tasks with clear success metrics, like automated theorem proving) and 'Type 2' skills (tasks with ample human-generated demonstration data, like generating scientific hypotheses or papers), reducing search spaces and predicting next steps.
What other beneficial applications of AI are discussed?
AI can be applied to create 'System Two Recommenders' that optimize for users' reflective, long-term values rather than immediate attention, and 'Contextualization Engines' that provide comprehensive, unbiased context for information, fostering epistemically positive outcomes.
14 Actionable Insights
1. Create Focused Research Organizations
Establish special-purpose, non-profit organizations with a fixed, 5-7 year mission to build scientific tools or datasets, acting like startups to address gaps not served by traditional grants or venture capital.
2. Diversify Science Funding Models
Advocate for and implement a more diversified approach to funding and organizing scientific research, moving beyond the single-professor grant model to include structures like FROs and ARPA-like programs.
3. Counteract Sociological Barriers in Science
Actively work to counteract non-scientific factors (e.g., premature commercialization, bad reputations) that can stall scientific progress, ensuring legitimate research areas are not rejected for sociological reasons.
4. Leverage DARPA Program Management
Implement an empowered program manager model, similar to DARPA, where technically strong managers actively guide research programs with clear milestones to bootstrap and develop new scientific fields.
5. Employ Field Strategists
Utilize field strategists to identify bottlenecks and systemic obstacles within scientific fields, creating roadmaps for how philanthropists or other entities can effectively support progress.
6. Pursue Hypothesis-Free Data Generation
Prioritize and fund the development of technologies for generating fundamental measurements and data sets in a hypothesis-free manner to broaden understanding and avoid getting stuck on single hypotheses.
7. Formalize Math for AI Proofs
Formalize mathematical proofs into programming languages that allow for automated verification, creating a structured environment for AI systems to generate and validate new mathematical insights efficiently.
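As a toy illustration of what "formalized" means here, a proof assistant such as Lean can mechanically verify a proof term with no human review. The theorem name below is made up for the example; `Nat.add_comm` is a Lean standard-library lemma:

```lean
-- A formalized statement, discharged by the library lemma
-- `Nat.add_comm`. The compiler checks the proof term itself,
-- which is what lets AI-generated proofs be verified at scale.
theorem add_two_comm (n : Nat) : n + 2 = 2 + n :=
  Nat.add_comm n 2
```

Because verification is automatic, a system that proposes candidate proofs can, in principle, generate its own verified training corpus, which is the self-bootstrapping possibility raised in the conversation.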
8. Build System Two Recommenders
Develop recommender systems that optimize for users’ reflective, long-term preferences and higher values, rather than immediate engagement, to promote epistemically positive AI applications.
9. Build AI Contextualization Engines
Create AI systems that provide comprehensive, unbiased, and multifaceted context for queried topics, optimizing for a global understanding rather than personalized relevance or clicks.
10. Enable Customizable Recommendations
Design recommender systems with user-adjustable controls, allowing individuals to actively reduce biases or tailor recommendations to better align with their evolving interests and thinking.
11. Implement Startup Project Management
Apply startup-inspired techniques for project management, hiring, meeting structures, and people management to scientific endeavors to enhance efficiency and execution.
12. Set Finite Project Durations
Implement time-limited missions (e.g., 5-7 years) for scientific projects to create a forcing function for urgency, maintain clarity of purpose, and prevent mission dilution.
13. Plan Career Transitions for Scientists
Deliberately design personnel strategies and transition plans for non-traditional scientific roles, such as FROs, to support diverse career paths including entrepreneurship, industry, or a return to academia.
14. Empower Entrepreneurial Scientists
Create structures and provide support (e.g., coaching on operational aspects) to enable entrepreneurial-minded scientists to lead and execute projects effectively, potentially leading to spin-off companies.
6 Key Quotes
A focused research organization is basically a special purpose, non-profit organization that has a very fixed and specific mission to create some kind of tool, system, data set, or other advancement that benefits the scientific process.
Adam Marblestone
It's kind of driven by novelty, rather than driven by the sort of working backwards from a functional purpose in the same way that, you know, an industrial effort will be working backwards.
Adam Marblestone
The broader question we're trying to get at here is creating a conversation essentially about, are we in kind of too much of an almost monotheistic world? There's only one way of organizing research. What if we have a much more kind of polytheistic, structurally diversified approach to funding and organizing research?
Adam Marblestone
If every time you do it, you're actually requiring that you endow a new institute or do something that's permanent, I mean, first of all, that creates some significant multiple on the actual cost of the project for the funder. But also, it can kind of dilute the mission in some sense.
Adam Marblestone
It's not to say that AI has to do everything that mathematicians do. It's to say, can you get automated theorem proving to be pretty efficient? And then would it actually grow up to a certain level? And could it actually make its own corpus of training data or so on, because it can verify itself?
Adam Marblestone
I think there's just tremendous potential if things are steered in the right directions for AI to kind of be epistemically positive for us, as opposed to kind of something that we're not controlling.
Adam Marblestone