DeepMind is hiring for the Scalable Alignment and Alignment Teams
Effective Altruism Forum, May 13, 2022
Abstract
DeepMind is currently hiring for several roles on its Scalable Alignment and Alignment Teams, both of which focus on strategies to ensure that artificial general intelligence systems operate as intended. The Alignment Team primarily investigates failures of intent, where an AI system knowingly acts against its developers’ wishes, while the Scalable Alignment Team works to make AI agents carry out human intentions even in complex situations. Both teams maintain a variety of projects exploring different facets of AI intent and alignment. Much of the research involves large language models (LLMs), which may potentially cause both short- and long-term harm. DeepMind is hiring for a diverse range of roles, including Research Scientists, Research Engineers, and Software Engineers, and welcomes applicants from different backgrounds such as machine learning or cognitive science. International applicants are considered equally, provided they are willing to relocate to London. – AI-generated abstract.
