AI risk, again
Overcoming Bias, March 3, 2023
Large language models like ChatGPT have recently spooked a great many, and my Twitter feed is full of worriers saying how irresponsible orgs have been to make and release such models. Because, they say, such a system might have killed us all. And as some researchers say they are working on how to better control such things, worriers say we must regulate to slow or stop AI progress until those researchers achieve their goals. While I’ve written on
