Clearer Thinking with Spencer Greenberg
Why should we consider slowing AI development? Could we slow down AI development even if we wanted to? What is a "minimum viable x-risk"? What are some of the more plausible, less Hollywood-esque risks from AI? Even if an AI could destroy us all, why would it want to do so? What are some analogous cases where we slowed the development of a specific technology? And how did they turn out? What are some reasonable, feasible regulations that could be implemented to slow AI development? If an AI becomes smarter than humans...