AI races
Unpublished.
Abstract
The article presents a simple model of AI races, with minimal assumptions, arguing that companies are unlikely to engage in a race to the bottom on AI safety. If labs cut safety work to raise their chance of being first to develop human-level machine intelligence (HLMI), they risk either causing a disaster or ending up in a situation where the other lab develops and controls AI. The model shows that a Nash equilibrium involving very low levels of safety is unlikely to be reached: all parties have better alternatives, such as probabilistic mixtures of outcomes, randomizing who drops out of the race, or enforcing agreements on some component, such as publishing safety research. – AI-generated abstract.
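The abstract's argument can be illustrated with a toy two-lab race game. Everything below is a hypothetical sketch, not the article's actual model: the disaster probabilities, the speed function, and the loser's payoff are all assumed parameters chosen for illustration.

```python
# Toy two-lab race game, a hypothetical illustration only (the article's
# actual model and parameters are not reproduced here).
# Each lab picks a safety level s in {0: cut safety, 1: full safety}.
DISASTER = {0: 0.5, 1: 0.1}  # assumed chance the winner's AI causes a disaster
LOSER_SHARE = 0.2            # assumed payoff when the other lab wins safely

def win_prob(s_i, s_j):
    """Chance lab i finishes first; cutting safety speeds a lab up."""
    speed_i, speed_j = 2 - s_i, 2 - s_j
    return speed_i / (speed_i + speed_j)

def payoff(s_i, s_j):
    """Expected payoff to lab i: winning safely pays 1, a disaster pays 0."""
    p = win_prob(s_i, s_j)
    return p * (1 - DISASTER[s_i]) + (1 - p) * (1 - DISASTER[s_j]) * LOSER_SHARE

cut_cut = payoff(0, 0)   # both labs race to the bottom
deviate = payoff(1, 0)   # one lab unilaterally keeps full safety
# Alternative from the abstract: randomize who drops out of the race, so the
# remaining lab develops HLMI at full safety.
coin_flip = 0.5 * (1 - DISASTER[1]) + 0.5 * (1 - DISASTER[1]) * LOSER_SHARE

print(f"both cut safety:            {cut_cut:.3f}")    # 0.300
print(f"unilateral full safety:     {deviate:.3f}")    # 0.367
print(f"coin flip on who drops out: {coin_flip:.3f}")  # 0.540
```

With these illustrative numbers, mutual safety-cutting is not even a Nash equilibrium (a unilateral switch to full safety pays 0.367 versus 0.300), and a coin flip on who drops out (0.540) Pareto-dominates the race to the bottom, consistent with the abstract's claim that better alternatives exist for all parties.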
