What effects does guaranteed income have on U.S. citizens? (with Eva Vivalt)

Dec 11, 2024
Overview

In this episode, Spencer Greenberg speaks with Eva Vivalt about her groundbreaking UBI study, its findings on consumption, leisure, and employment, and the importance of rigorous research design. They also discuss using expert predictions to improve study design and the challenges of evidence-based policymaking.

At a Glance
25 Insights
1h 11m Duration
19 Topics
9 Concepts

Deep Dive Analysis

Introduction to UBI and Eva Vivalt's Study

Study Design and Minimizing Differential Attrition

Importance of Randomization and Study Costs

Context of the US UBI Study vs. Other Countries

Distinction Between Universal Basic Income and Guaranteed Income

How Recipients Spent Their Guaranteed Income

Impacts on Entrepreneurship and Labor Supply

Effects on Employment Quality and Net Worth

Tentative Educational Benefits for Younger Participants

Addressing False Positives in Large-Scale Studies

Well-being and Mental Health Outcomes: Hedonic Adaptation

Health Outcomes and Healthcare Expenditure Concerns

Surprising Findings and Expert Forecasts of Study Results

Major Takeaways for UBI Policy from the Study

The Value of Causal Forecasting in Social Science

Accuracy of Experts in Making Causal Predictions

Policymakers' Use of Evidence and Behavioral Biases

Driving Real-World Impact from Academic Research

Advice for Aspiring Academics and Assessing Research

Differential Attrition

When participants drop out of a study at different rates between the treatment and control groups, potentially biasing the results. For example, if the control group receiving no benefits is more likely to leave, their absence could distort the observed effects.
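The bias described above can be illustrated with a toy simulation (all numbers here are invented for illustration, not taken from the study): if control participants who are doing worse are more likely to stop responding, the surviving control group looks better than it really is, and a naive difference in means understates the true effect.

```python
import random

random.seed(0)

# Hypothetical setup: true treatment effect of 2.0 (in SD units of the outcome).
n = 10_000
true_effect = 2.0
treat = [i < n // 2 for i in range(n)]
outcome = [random.gauss(0, 1) + (true_effect if t else 0.0) for t in treat]

# Differential attrition: treated participants all stay, but control
# participants with below-average outcomes mostly stop responding
# (only 30% of them remain in the data).
observed = [(t, y) for t, y in zip(treat, outcome)
            if t or y > -0.5 or random.random() < 0.3]

def group_mean(data, flag):
    vals = [y for t, y in data if t == flag]
    return sum(vals) / len(vals)

naive_estimate = group_mean(observed, True) - group_mean(observed, False)
print(f"true effect: {true_effect}, estimate under attrition: {naive_estimate:.2f}")
```

Because the weakest control observations are missing, the estimate comes out noticeably below the true effect of 2.0, which is exactly why the study worked so hard to keep control-group response rates high.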

Randomization

A critical research method where participants are randomly assigned to either a treatment or control group. This helps establish a causal effect by ensuring that, on average, the groups are comparable in all aspects except for the intervention being studied.
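A minimal sketch of such an assignment (the 1:2 treatment-to-control ratio mirrors the study's roughly 1,000 vs. 2,000 split; the function name and seed are illustrative):

```python
import random

def randomize(participant_ids, treat_share=1/3, seed=42):
    """Randomly assign participants to treatment or control.

    Because assignment is random, the two groups are comparable on
    average in every baseline characteristic, so later differences
    in outcomes can be attributed to the intervention.
    """
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    n_treat = round(len(ids) * treat_share)
    return set(ids[:n_treat]), set(ids[n_treat:])

treatment, control = randomize(range(3000))
print(len(treatment), len(control))  # 1000 2000
```

Fixing the seed makes the assignment reproducible, which is useful for auditing a trial's randomization after the fact.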

Guaranteed Income

A cash transfer program that provides a regular, unconditional income to a specific group of people for a defined period. It differs from Universal Basic Income (UBI) because it is not universal (not everyone gets it) and is typically not permanent.

Hedonic Adaptation

The psychological process where people tend to return to a relatively stable level of happiness despite major positive or negative life changes. In the context of UBI, initial happiness boosts may fade over time as recipients adapt to the new income level.

False Discovery Rate (FDR) Adjustments

A statistical method used in studies with multiple hypotheses to control the expected proportion of false positives. It helps prioritize the most important findings by applying stricter criteria to less central hypotheses.
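The most common FDR procedure is Benjamini-Hochberg, sketched below (the p-values are invented for illustration, not results from the study): sort the p-values, compare each to a rank-scaled threshold, and reject every hypothesis up to the largest rank that clears its threshold.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the indices of hypotheses that can be rejected while
    controlling the expected false discovery rate at level q.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its threshold q * rank / m
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank
    return sorted(order[:k])

# Hypothetical p-values for seven study outcomes:
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.74]
print(benjamini_hochberg(p, q=0.05))  # [0, 1]
```

Note that several p-values below 0.05 survive a naive threshold but not the FDR-adjusted one, which is the point: with many outcomes, some "significant" results are expected by chance.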

Causal Forecasts

Predictions about the direct effect of an intervention or change, distinct from state forecasts (predicting a future state of the world) or conditional forecasts (predicting a state given an event). These forecasts aim to estimate the 'what if' impact of a specific action.

Asymmetric Optimism

A behavioral bias where people are more influenced by positive evidence than by negative evidence, even when the two deviate equally from their prior beliefs. This can lead policymakers to gravitate toward more favorable study results.

Variance Neglect

A behavioral bias where people fail to adequately consider the uncertainty or confidence intervals associated with data. This can lead to overconfidence in point estimates and a disregard for the potential range of outcomes.

Internal Validity

The extent to which a study establishes a trustworthy cause-and-effect relationship between its variables. Key considerations include response rates, attrition, and whether the study was adequately powered to detect an effect.

What is the difference between Universal Basic Income (UBI) and Guaranteed Income?

Universal Basic Income is typically universal (everyone gets it) and lasts forever, whereas guaranteed income, as studied, is provided to a specific group for a limited duration, like three years.

How do people tend to spend money received from guaranteed income programs?

Recipients tend to spend a lot on consumption (food, housing, etc.) and leisure (by reducing work hours), with very limited savings and no significant increase in spending on 'vice goods' like lottery or alcohol.

Does guaranteed income lead to more entrepreneurship?

While guaranteed income significantly increases people's willingness to take financial risks and their intention to start a business, the study did not find significant impacts on actual entrepreneurial activity within the three-year timeframe.

Does guaranteed income make people work less?

Yes, the study found moderate effects on labor supply, with people working slightly less overall and reducing their hours if employed. This effect also spilled over to other members of the household.

Does guaranteed income improve the quality of employment?

The study found no overall effect on the quality of employment, including hourly wage, adequacy of hours, training, benefits, opportunities for advancement, or daily work-life quality.

How does guaranteed income affect people's net worth and debt?

The study found a decrease in net worth, though imprecisely estimated, and an increase in debt, particularly educational debt and car loans. Excluding the transfers themselves, overall earnings fell because recipients worked less.

Does guaranteed income improve people's well-being and mental health long-term?

The study observed significant improvements in well-being and mental health in the first year, but these effects tended to return to baseline levels in years two and three, consistent with hedonic adaptation.

How accurate are subject-matter experts at forecasting the results of social science studies?

Experts tend to be overly optimistic, often expecting more positive effects than actually occur in practice. They were 'pretty wildly off' on several outcomes in the UBI study, such as hourly wage, work search behavior, and post-secondary enrollment.

How can researchers avoid false positives when collecting many outcomes in a large study?

Researchers can group outcomes into hierarchical indices (families, components, items) and apply False Discovery Rate (FDR) adjustments, which prioritize the most important findings and reduce the likelihood of false positives for core hypotheses.

Why do policymakers sometimes disregard confidence intervals in research findings?

Policymakers often prefer clear, definitive answers and may view confidence intervals as 'clutter' or unnecessary nuance, leading to their removal from policy reports. This can be influenced by a need to 'sell their work' and a bias towards positive results.

1. Embrace Ambiguity and Uncertainty

Cultivate the ability to embrace ambiguity and uncertainty, rather than needing a definitive answer for everything. This prevents forcing artificial conclusions in a complex world and promotes a more accurate understanding of reality.

2. Employ Randomization for Causality

Utilize randomization in study designs to establish causal effects rather than mere correlations. This rigorous approach provides much stronger evidence for whether an intervention truly causes a specific outcome.

3. Collect Many Outcomes Carefully

Collect a wide range of outcomes in studies to gain comprehensive understanding of a phenomenon, but implement rigorous statistical methods like false discovery rate adjustments and outcome indices to manage false positives. This allows for extensive learning without compromising data reliability.

4. Engage Policymakers Early

Engage with policymakers and stakeholders early in the research design process to understand their specific interests and ‘move the needle’ questions. This ensures the study addresses relevant concerns and increases the likelihood of its findings being utilized.

5. Forecast Study Outcomes Ex-Ante

Collect ex-ante forecasts from experts about expected study outcomes to identify surprising results and maximize the learning from large-scale research. This helps understand pre-existing assumptions and highlights areas where actual findings diverge from expectations.

6. Don’t Over-trust Single Studies

Avoid placing too much trust in the results of a single study; seek out multiple studies on the same topic to gain greater confidence in the findings. This practice aligns with the scientific principle of replication.

7. Question Averages for Uncertainty

Develop the habit of questioning averages by asking ‘plus or minus what?’ to acknowledge the inherent uncertainty in statistical estimates. This helps in understanding the true range and reliability of reported numbers.
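One concrete way to answer "plus or minus what?" is to report a normal-approximation confidence interval alongside the mean; a small sketch using Python's standard library (the data values are invented for illustration):

```python
from statistics import NormalDist, mean, stdev
from math import sqrt

def mean_with_ci(xs, level=0.95):
    """Return the sample mean and its 'plus or minus' half-width
    (normal-approximation confidence interval for the mean)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for 95%
    m = mean(xs)
    half = z * stdev(xs) / sqrt(len(xs))
    return m, half

# Hypothetical survey scores: the average alone hides the uncertainty.
data = [3.1, 2.7, 3.4, 2.9, 3.8, 3.0, 2.5, 3.3]
m, half = mean_with_ci(data)
print(f"{m:.2f} ± {half:.2f}")  # 3.09 ± 0.29
```

With only eight observations the interval is wide relative to the mean, which is exactly the information a bare point estimate throws away.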

8. Demand Confidence Interval Transparency

Be critical of policy reports or presentations that omit confidence intervals, p-values, or standard errors, as this hides the inherent uncertainty and precision of the reported results. Demand transparency regarding statistical reliability.

9. Recognize Policymaker Behavioral Biases

Recognize that policymakers, like all humans, are subject to behavioral biases such as ‘asymmetric optimism,’ where they may gravitate towards more positive results. This awareness helps in critically evaluating policy decisions and their underlying evidence.

10. Beware Expert Optimism Bias

Be aware that experts often tend to be overly optimistic and overestimate the effects of interventions. Maintain a healthy skepticism (’team nothing works’) when evaluating predicted outcomes.

11. Minimize Differential Attrition

To minimize differential attrition in long-term studies, provide a baseline benefit to both control and treatment groups, and initially keep participants unaware of potential larger benefits. This ensures continued engagement from the control group, leading to higher response rates and more reliable data.

12. Pre-specify Attrition Handling

To ensure data robustness, implement a long baseline period before randomization and pre-specify strategies to handle future attrition, such as restricting analysis to consistently responsive participants. This helps maintain balance and prevents bias in estimates.

13. Ensure Sufficient Sample Size

Ensure a sufficiently large sample size in studies to be ‘well powered’ for the intended outcomes. This increases the likelihood of detecting true effects and reduces the risk of false negatives.
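A back-of-the-envelope power calculation shows why "well powered" forces sample sizes up quickly; this sketch uses the standard normal-approximation formula for a two-sample comparison of means (effect size in standard-deviation units; the specific numbers are illustrative, not from the study):

```python
from statistics import NormalDist
from math import ceil

def n_per_arm(effect_sd, power=0.80, alpha=0.05):
    """Sample size per arm needed to detect a given effect size
    (in SD units) with the stated power, two-sided test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = nd.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_sd ** 2)

# A small effect (0.1 SD) needs far more participants than a medium one (0.5 SD):
print(n_per_arm(0.5), n_per_arm(0.1))  # 63 1570
```

Required sample size grows with the inverse square of the effect size, which is why studies targeting modest real-world effects, like this one, need thousands of participants.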

14. Hierarchically Structure Outcome Analysis

Structure study outcomes hierarchically into families, components, and items, then combine them into indices for analysis. Apply false discovery rate adjustments to these indices to prioritize and reduce false positives for the most critical outcomes.

15. Conduct Longer Studies for Well-being

When studying well-being or mental health, conduct longer-term studies (beyond one year) to account for hedonic adaptation. Initial positive effects may diminish over time, leading to different conclusions if only short-term data is considered.

16. Prioritize Causal Forecasts

When using forecasting platforms for research, prioritize collecting ‘causal forecasts’ to predict the direct effects of interventions. This differs from state or conditional forecasts and is crucial for understanding true impact.

17. Inform Study Design with Forecasts

Utilize expert forecasts, even if imperfect, to inform study design by identifying interventions or outcomes with the highest ‘value of information.’ This helps prioritize research efforts to maximize learning and policy relevance.

18. Calibrate Forecasting Algorithms

When developing forecasting models, focus on calibrating algorithms to ensure their probability estimates accurately reflect reality (e.g., an 80% chance truly means an 80% chance). This improves reliability, a common struggle for human forecasters.
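A basic calibration check bins forecasts by stated probability and compares each bin's average stated probability with the observed frequency; a sketch with invented forecasts (the function and data are illustrative, not from any real platform):

```python
def calibration_table(forecasts, bins=((0.0, 0.5), (0.5, 1.0))):
    """Compare stated probabilities with observed frequencies per bin.

    A well-calibrated forecaster's ~80% predictions come true ~80%
    of the time. `forecasts` is a list of
    (stated_probability, outcome_happened) pairs.
    """
    rows = []
    for lo, hi in bins:
        hits = [(p, o) for p, o in forecasts
                if lo <= p < hi or (hi == 1.0 and p == 1.0)]
        if hits:
            avg_p = sum(p for p, _ in hits) / len(hits)
            freq = sum(o for _, o in hits) / len(hits)
            rows.append((avg_p, freq, len(hits)))
    return rows

# Hypothetical forecasts: (stated probability, did it happen?)
fc = [(0.9, True), (0.8, True), (0.8, False), (0.7, True),
      (0.3, False), (0.2, False), (0.2, True), (0.1, False)]
for avg_p, freq, n in calibration_table(fc):
    print(f"stated ~{avg_p:.2f} -> observed {freq:.2f} over {n} forecasts")
```

With real forecasting data one would use finer bins and far more forecasts per bin; the comparison of stated probability against observed frequency is the core of any calibration audit.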

19. Contextualize Intervention Design

When designing interventions, carefully consider the specific context (e.g., country, income level) as the same intervention may yield vastly different impacts in different settings. What works well in low-income countries might have smaller effects in higher-income ones.

20. Distinguish UBI from Guaranteed Income

Differentiate between ‘Universal Basic Income’ (UBI) and ‘guaranteed income’ for policy discussions, as UBI implies universality and permanence, while guaranteed income can be targeted and time-limited. This distinction helps in framing policy proposals more accurately and realistically.

21. Challenge Vice Goods Assumptions

Do not assume that providing unconditional cash transfers will lead to significant increases in spending on ‘vice goods’ like lottery, smoking, or drinking, as evidence suggests this is often not the case. This can help counter common misconceptions about such programs.

22. Long-term Follow-up for Entrepreneurship

For outcomes like entrepreneurship, plan for longer-term follow-up studies, as significant impacts may take more time to materialize than the initial study duration. Early trends might suggest future effects that require extended observation.

23. Prioritize Local Evidence & Expertise

When presenting research to policymakers, emphasize local evidence (studies from their own country) and recommendations from trusted local experts, as these factors significantly influence their valuation of information. This can increase the perceived relevance and impact of research.

24. Assess Internal Validity in Papers

When reading academic papers, critically evaluate the study’s internal validity by checking factors like response rates, attrition, and whether the study was adequately powered to detect an effect. These elements are crucial for assessing reliability.

25. Cultivate Persistence for Academia

To succeed in academia, cultivate persistence and the ability to work independently, as it functions much like an entrepreneurial environment with less direct oversight. This self-driven approach is crucial for original research and idea generation.

I mean, it's very common where a long term study might lose 20%, 30%, maybe even 50% of participants. And if it's completely random, who drops out, that may not be such a big deal. But if the people that drop out are not random, for example, if those getting the treatment are less likely to drop out, but those not getting the treatment, maybe a different group of them drop out, then you could really distort your results.

Spencer Greenberg

So we saw people spending a lot on consumption and a lot on leisure, essentially, from the reduction of work hours that we observed.

Eva Vivalt

I think that while the accuracy is off, like you say, it could still have uses in figuring out your study design.

Eva Vivalt

So I think it's, you know, not going to solve all the problems that people thought. You know, it's not saying don't do it, because, you know, but just do it for the right reasons, okay, if you're going to do it.

Eva Vivalt

I think when people learn about the replication crisis, and they become really concerned about it, they sometimes come to this view that, oh, you should collect fewer outcomes, right? Because then you're going to avoid all these issues of, like, you know, false positives and adjusting or hypothesizing after the fact and so on. And I think actually that's, like, an understandable lesson, but it's completely the wrong lesson.

Spencer Greenberg

So yes, regularly, you will not have confidence intervals on any kind of policy report.

Eva Vivalt

I think it's an interesting trait because the reality is if we really have accurate views of the world, a great deal of things are uncertain.

Spencer Greenberg

Study Design to Minimize Differential Attrition

Eva Vivalt
  1. Recruit participants to a program offering $50 a month or more, without revealing the potential for higher payments.
  2. Enroll all participants to initially receive $50 a month.
  3. Conduct a long baseline period with regular surveys before randomization to establish consistent responders and balance attrition.
  4. After a few months, randomize participants into treatment and control groups.
  5. Inform the treatment group they will receive $1,000 a month for three years.
  6. Inform the control group they will continue receiving $50 a month for three years, without revealing they are the control group.
  7. Pre-specify that if attrition occurs, analysis can be restricted to those who responded regularly to early surveys, as this group is more likely to continue responding.

Designing Research for Policy Impact

Eva Vivalt
  1. Engage with policymakers or stakeholders early in the research process.
  2. Learn about their specific interests and what considerations would 'move the needle' for them.
  3. Design the study appropriately to address the exact questions and information needs of the potential users.
  4. Ensure the research provides answers that policymakers care about, increasing the likelihood of program renewal or adoption.

Key Figures

$1,000 per month: unconditional cash transfer to the treatment group
3 years: duration of the cash transfers
1,000 people: treatment group (received $1,000/month)
2,000 people: control group (received $50/month)
97%: survey response rate at midline (roughly 1.5 years into the study)
96%: survey response rate at endline (after 3 years)
Over $60 million: total cost of the UBI study, including transfers, incentives, and research
Less than $30,000 per year: average household income of the targeted low-income participants before the study (2019)
40%: approximate income increase from the transfers for the average treatment household
Around $10 per month: increase in 'vice goods' spending, not considered significant
2010 to 2016: World Development Reports (WDRs) analyzed, the World Bank's flagship publications
8 times: number of WDR citations that included precision information (confidence intervals or p-values), out of thousands of studies cited