
Semi-informative priors over AI timelines

Tom Davidson

Open Philanthropy, March 25, 2021

Abstract

This report forecasts the likely timing of the development of artificial general intelligence (AGI). AGI is defined as a computer program that can perform virtually any cognitive task as well as any human for no more money than it would cost for a human to do it. The report uses a simple Bayesian framework and inputs chosen using commonsense intuitions and reference classes from historical technological developments. The author identifies severe problems with applying Laplace’s rule of succession to AGI timelines and introduces a family of update rules. Each update rule is specified by four inputs: a first-trial probability, a number of virtual successes, a regime start-time, and a trial definition. The author argues that the first-trial probability, which gives the probability of success on the first trial, is the most important input for determining the probability of developing AGI. The report considers multiple reference classes to constrain the first-trial probability, and the author concludes that a first-trial probability in the range of [1/1000, 1/100] is most reasonable, with a central estimate of 1/300. The report also investigates the importance of other inputs, including the number of virtual successes and the regime start-time, and considers other trial definitions, such as researcher-years and compute used to develop AI. The author further explores model extensions, including modeling AGI as conjunctive, updating a hyperprior, and assigning some probability that AGI is impossible. Overall, the author’s central estimate for the probability of developing AGI by 2036 is about 8%, but other plausible parameter choices yield results anywhere from 1% to 18%. – AI-generated abstract.
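To make the update-rule family concrete, the following is a minimal sketch (my reconstruction, not the report's own code) of a generalized Laplace rule with the four inputs named above, using calendar years as the trial definition. It assumes the standard Beta-style generalization of Laplace's rule: with first-trial probability f and N_v virtual successes, the probability that the next trial succeeds after n failures is N_v / (N_v/f + n). The regime start of 1956 and the last observed failure year of 2020 are illustrative choices; the report's central ~8% estimate combines several trial definitions, so this single-trial-definition toy yields a different number.

```python
# Sketch of one update rule from the family described in the abstract.
# Assumptions (not from the source): the rule has the generalized-Laplace
# form below; one trial per calendar year; regime starts in 1956; failures
# are observed every year through 2020.

def p_next_success(f, n_virtual, n_failures):
    """P(next trial succeeds | n_failures failed trials so far).

    With first-trial probability f and n_virtual virtual successes,
    the implied virtual observation count is n_virtual / f.
    """
    return n_virtual / (n_virtual / f + n_failures)

def p_agi_by(year, f=1/300, n_virtual=1, regime_start=1956, last_observed=2020):
    """P(at least one success in the trials from last_observed+1 to year)."""
    n_failures = last_observed - regime_start + 1  # observed failed trials
    p_no_success = 1.0
    for extra in range(year - last_observed):
        p_no_success *= 1 - p_next_success(f, n_virtual, n_failures + extra)
    return 1 - p_no_success

print(round(p_agi_by(2036), 4))  # a few percent under these toy inputs
```

Note the design convenience: with one virtual success the survival product telescopes, so the result equals (years remaining) / (N_v/f + total trials by the end year), which makes the answer easy to sanity-check by hand.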
