Abstract
This document outlines a reading list focused on Evan Hubinger’s views on AI safety. The list is structured around an overview, a video version, and a set of readings. It also suggests expanding the list to include the BoMAI paper and the FDT paper; the rationale for including the FDT paper is that it introduces logical control, a concept that may be relevant to the discussion of AI safety. – AI-generated abstract.
