The argument from philosophical difficulty
LessWrong, February 9, 2019
Abstract
Deciding how to direct humanity's future requires solving many philosophical problems beyond AI safety. The author envisions humanity ending up in one of a variety of scenarios: at one end, gaining vast amounts of philosophical knowledge before proceeding with further technological development; at the other, solving metaphilosophy and programming it into an AI so that the AI can solve philosophical problems on its own; with several intermediate scenarios in between. The author argues that each scenario has its own advantages and disadvantages, and that none offers an easy solution to AI safety. – AI-generated abstract.
