Update to Samotsvety AGI timelines

Misha Yagudin, Jonathan Mann, and Nuno Sempere

Effective Altruism Forum, January 23, 2023

Abstract

Previously: Samotsvety’s AI risk forecasts. Our colleagues at Epoch recently asked us to update our AI timelines estimate for their upcoming literature review on TAI timelines. We met on 2023-01-21 to discuss our predictions about when advanced AI systems will arrive. We used the following definition to determine the “moment at which AGI is considered to have arrived,” building on this Metaculus question: the moment that a system is developed that is capable of passing the adversarial Turing test against a top-5%[1] human who has access to experts on various topics. More concretely: a Turing test is said to be “adversarial” if the human judges make a good-faith attempt to unmask the AI as an impostor, and the human confederates make a good-faith attempt to demonstrate that they are humans. An AI is said to “pass” a Turing test if at least half of the judges rated the AI as more human than at least a third of the human confederates. This definition of AGI is not unproblematic; for example, it’s possible that AGI could be unmasked long after its economic value and capabilities are very high. We chose to use an imperfect definition and indicated to forecasters that they should interpret it not “as is” but “in spirit,” to avoid annoying edge cases.
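As an illustration only (not part of the post), the pass criterion above can be sketched as a small check: given, for each judge, how many confederates they rated the AI as more human than, the AI passes if at least half of the judges cleared the one-third-of-confederates bar. The function and data layout below are hypothetical.

```python
# Hypothetical sketch of the "pass" criterion described in the abstract.
# ratings[j] = number of human confederates that judge j rated the AI
# as more human than; num_confederates = total confederates in the test.

def ai_passes(ratings, num_confederates):
    """AI passes if at least half of the judges rated it as more human
    than at least a third of the human confederates."""
    threshold = num_confederates / 3
    judges_convinced = sum(1 for r in ratings if r >= threshold)
    return judges_convinced >= len(ratings) / 2

# Example: 4 judges, 6 confederates; judges ranked the AI above
# 3, 1, 2, and 0 confederates respectively.
print(ai_passes([3, 1, 2, 0], num_confederates=6))  # True: 2 of 4 judges meet the bar
```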
