Red teaming papers as an EA training exercise?
Effective Altruism Forum, June 22, 2021
Abstract
The author explores the idea of ‘red teaming’ EA research, arguing that such a process would benefit the movement as a whole. By identifying inaccuracies and half-truths within EA research, red teaming can improve research quality, increase trust in past work, and reduce research debt. A dedicated red team could start with a single trusted person and expand outwards, eventually becoming a security consultancy for EA orgs or an external impact consultancy for the broader community. Specific candidates for red teaming include The Precipice, AI/ML safety papers, Tetlock’s research on forecasting, The Sequences, reports from Open Phil and Rethink Priorities, GFI research, 80k podcasts, biosecurity papers, high-karma EA Forum posts, early posts by Carl Shulman and Paul Christiano, and impact assessments of core EA orgs. The author compares red teaming in EA to the Jepsen project, which rigorously tests the safety and consistency claims of distributed database systems, highlighting the value of independent checks in ensuring accuracy and reliability. – AI-generated abstract.
