What effects does guaranteed income have on U.S. citizens? (with Eva Vivalt)

Dec 11, 2024 · 1h 11m · 25 insights
In this episode, Spencer Greenberg speaks with Eva Vivalt about her groundbreaking UBI study, its findings on consumption, leisure, and employment, and the importance of rigorous research design. They also discuss using expert predictions to improve study design and the challenges of evidence-based policymaking.
Actionable Insights

1. Embrace Ambiguity and Uncertainty

Cultivate the ability to embrace ambiguity and uncertainty, rather than needing a definitive answer for everything. This prevents forcing artificial conclusions in a complex world and promotes a more accurate understanding of reality.

2. Employ Randomization for Causality

Utilize randomization in study designs to establish causal effects rather than mere correlations. This rigorous approach provides much stronger evidence for whether an intervention truly causes a specific outcome.

3. Collect Many Outcomes Carefully

Collect a wide range of outcomes in studies to gain a comprehensive understanding of a phenomenon, but implement rigorous statistical methods like false discovery rate adjustments and outcome indices to manage false positives. This allows for extensive learning without compromising data reliability.
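To make one such adjustment concrete, here is a minimal sketch of the Benjamini–Hochberg false discovery rate procedure, a standard choice for this purpose (the episode does not specify which method the study used, and the p-values below are made up for illustration):

```python
# Benjamini-Hochberg FDR adjustment: a minimal sketch with made-up p-values.

def bh_adjust(pvals):
    """Return BH-adjusted p-values (q-values) in the original order."""
    m = len(pvals)
    # Sort p-values, remembering each one's original position.
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvals[i] * m / rank)
        adjusted[i] = q
        prev = q
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.60]
print(bh_adjust(pvals))  # adjusted values are larger than the raw p-values
```

With many outcomes, comparing these adjusted values (rather than raw p-values) to the significance threshold keeps the expected share of false discoveries under control.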

4. Engage Policymakers Early

Engage with policymakers and stakeholders early in the research design process to understand their specific interests and ‘move the needle’ questions. This ensures the study addresses relevant concerns and increases the likelihood of its findings being utilized.

5. Forecast Study Outcomes Ex-Ante

Collect ex-ante forecasts from experts about expected study outcomes to identify surprising results and maximize the learning from large-scale research. This helps understand pre-existing assumptions and highlights areas where actual findings diverge from expectations.

6. Don’t Over-trust Single Studies

Avoid placing too much trust in the results of a single study; seek out multiple studies on the same topic to gain greater confidence in the findings. This practice aligns with the scientific principle of replication.

7. Question Averages for Uncertainty

Develop the habit of questioning averages by asking ‘plus or minus what?’ to acknowledge the inherent uncertainty in statistical estimates. This helps in understanding the true range and reliability of reported numbers.
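To make "plus or minus what?" concrete, here is a minimal sketch that computes a 95% confidence interval for a mean; the survey data are made up for illustration:

```python
# "Plus or minus what?" - a 95% confidence interval for a mean,
# using made-up survey responses.
import math

data = [2.4, 3.1, 2.8, 3.5, 2.9, 3.2, 2.7, 3.0]
n = len(data)
mean = sum(data) / n
# Sample standard deviation (Bessel's correction).
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
se = sd / math.sqrt(n)
# 1.96 is the normal approximation; with n = 8 a t critical value (~2.36)
# would give a slightly wider, more accurate interval.
lo, hi = mean - 1.96 * se, mean + 1.96 * se
print(f"mean = {mean:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Reporting the interval alongside the mean shows how much the estimate could plausibly move with a different sample.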

8. Demand Confidence Interval Transparency

Be critical of policy reports or presentations that omit confidence intervals, p-values, or standard errors, as this hides the inherent uncertainty and precision of the reported results. Demand transparency regarding statistical reliability.

9. Recognize Policymaker Behavioral Biases

Recognize that policymakers, like all humans, are subject to behavioral biases such as ‘asymmetric optimism,’ where they may gravitate towards more positive results. This awareness helps in critically evaluating policy decisions and their underlying evidence.

10. Beware Expert Optimism Bias

Be aware that experts often tend to be overly optimistic and overestimate the effects of interventions. Maintain a healthy skepticism (‘team nothing works’) when evaluating predicted outcomes.

11. Minimize Differential Attrition

To minimize differential attrition in long-term studies, provide a baseline benefit to both control and treatment groups, and initially keep participants unaware of potential larger benefits. This ensures continued engagement from the control group, leading to higher response rates and more reliable data.

12. Pre-specify Attrition Handling

To ensure data robustness, implement a long baseline period before randomization and pre-specify strategies to handle future attrition, such as restricting analysis to consistently responsive participants. This helps maintain balance and prevents bias in estimates.

13. Ensure Sufficient Sample Size

Ensure a sufficiently large sample size in studies to be ‘well powered’ for the intended outcomes. This increases the likelihood of detecting true effects and reduces the risk of false negatives.
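As a rough back-of-the-envelope version of this check, the standard normal-approximation formula gives the sample size per arm needed to detect a standardized effect; the inputs below are illustrative, not from the study:

```python
# Rough sample size per arm for a two-arm trial (normal approximation).
import math
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.80):
    """Minimum n per arm to detect a standardized effect (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # power requirement
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_arm(0.2))  # small effects require large samples
```

The inverse relationship with the squared effect size is why studies powered for small effects need samples in the hundreds per arm.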

14. Hierarchically Structure Outcome Analysis

Structure study outcomes hierarchically into families, components, and items, then combine them into indices for analysis. Apply false discovery rate adjustments to these indices to prioritize and reduce false positives for the most critical outcomes.
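One common way to build such an index (the episode does not specify the study's exact construction) is the Kling–Liebman-style summary index: standardize each item against the control group, then average the z-scores. The data below are made up for illustration:

```python
# Kling-Liebman-style summary index: standardize each outcome item
# against the control group, then average the z-scores. Made-up data.
import math

def summary_index(items_by_person, control_ids):
    """items_by_person: {person_id: [item1, item2, ...]} -> {person_id: index}."""
    n_items = len(next(iter(items_by_person.values())))
    # Control-group mean and sd for each item.
    stats = []
    for j in range(n_items):
        vals = [items_by_person[i][j] for i in control_ids]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / (len(vals) - 1))
        stats.append((mu, sd))
    # Average the per-item z-scores for each person.
    return {
        pid: sum((x - mu) / sd for x, (mu, sd) in zip(items, stats)) / n_items
        for pid, items in items_by_person.items()
    }

# Illustrative data: persons 1-3 are controls, person 4 is treated.
data = {1: [10, 5], 2: [12, 7], 3: [14, 9], 4: [20, 3]}
print(summary_index(data, [1, 2, 3]))
```

Testing one index per family, instead of every item separately, is what keeps the number of hypothesis tests (and hence false positives) manageable.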

15. Conduct Longer Studies for Well-being

When studying well-being or mental health, conduct longer-term studies (beyond one year) to account for hedonic adaptation. Initial positive effects may diminish over time, leading to different conclusions if only short-term data is considered.

16. Prioritize Causal Forecasts

When using forecasting platforms for research, prioritize collecting ‘causal forecasts’ to predict the direct effects of interventions. This differs from state or conditional forecasts and is crucial for understanding true impact.

17. Inform Study Design with Forecasts

Utilize expert forecasts, even if imperfect, to inform study design by identifying interventions or outcomes with the highest ‘value of information.’ This helps prioritize research efforts to maximize learning and policy relevance.

18. Calibrate Forecasting Algorithms

When developing forecasting models, focus on calibrating algorithms to ensure their probability estimates accurately reflect reality (e.g., an 80% chance truly means an 80% chance). This improves reliability, a common struggle for human forecasters.
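A minimal sketch of such a calibration check: bucket forecasts by predicted probability and compare each bucket's average prediction to the observed frequency (the forecasts and outcomes below are made up for illustration):

```python
# Calibration check: do events forecast at ~80% happen ~80% of the time?
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_bins=5):
    """Bucket forecasts; return {bin: (mean forecast, observed freq, count)}."""
    bins = defaultdict(list)
    for p, y in zip(forecasts, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    table = {}
    for b, pairs in sorted(bins.items()):
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(y for _, y in pairs) / len(pairs)
        table[b] = (round(mean_p, 2), round(freq, 2), len(pairs))
    return table

forecasts = [0.9, 0.8, 0.85, 0.3, 0.2, 0.7, 0.95, 0.1]
outcomes = [1, 1, 0, 0, 0, 1, 1, 0]
print(calibration_table(forecasts, outcomes))
```

A well-calibrated forecaster shows mean forecasts close to observed frequencies in every bucket; systematic gaps reveal over- or under-confidence.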

19. Contextualize Intervention Design

When designing interventions, carefully consider the specific context (e.g., country, income level) as the same intervention may yield vastly different impacts in different settings. What works well in low-income countries might have smaller effects in higher-income ones.

20. Distinguish UBI from Guaranteed Income

Differentiate between ‘Universal Basic Income’ (UBI) and ‘guaranteed income’ for policy discussions, as UBI implies universality and permanence, while guaranteed income can be targeted and time-limited. This distinction helps in framing policy proposals more accurately and realistically.

21. Challenge Vice Goods Assumptions

Do not assume that providing unconditional cash transfers will lead to significant increases in spending on ‘vice goods’ like lottery tickets, cigarettes, or alcohol, as evidence suggests this is often not the case. This can help counter common misconceptions about such programs.

22. Long-term Follow-up for Entrepreneurship

For outcomes like entrepreneurship, plan for longer-term follow-up studies, as significant impacts may take more time to materialize than the initial study duration. Early trends might suggest future effects that require extended observation.

23. Prioritize Local Evidence & Expertise

When presenting research to policymakers, emphasize local evidence (studies from their own country) and recommendations from trusted local experts, as these factors significantly influence their valuation of information. This can increase the perceived relevance and impact of research.

24. Assess Internal Validity in Papers

When reading academic papers, critically evaluate the study’s internal validity by checking factors like response rates, attrition, and whether the study was adequately powered to detect an effect. These elements are crucial for assessing reliability.

25. Cultivate Persistence for Academia

To succeed in academia, cultivate persistence and the ability to work independently, as it functions much like an entrepreneurial environment with less direct oversight. This self-driven approach is crucial for original research and idea generation.