
The AI people have been right a lot

Dylan Matthews

Effective Altruism Forum, April 15, 2026

Abstract

Early skepticism toward artificial intelligence (AI) risk within the effective altruism movement often characterized such concerns as speculative distractions from established philanthropic priorities like global health and animal welfare. This skepticism was frequently reinforced by heuristics that prioritized institutional consensus and empirical precedents over a priori reasoning and long-range extrapolation. However, the subsequent decade of technological acceleration, particularly in deep learning and large-scale infrastructure investment, has demonstrated the limitations of mainstream predictive capacities. Traditional institutions in academia, finance, and government often fail to anticipate nonlinear societal shifts, whereas smaller, intellectually flexible communities may correctly identify transformative trends before they become mainstream. Reevaluating past predictive failures suggests that a bias toward narrow empiricism and the dismissal of "futurist" scenarios can lead to significant missed opportunities for impact. Intellectual openness to seemingly radical projections, including rapid labor dislocation and unprecedented economic growth, is essential for navigating an environment where technological progress consistently outpaces traditional forecasts. Maintaining a rigorous but receptive stance toward speculative risks allows for more effective adaptation to societal reconfigurations that may initially appear implausible. – AI-generated abstract.
