One of the most prominent narratives about AGI, or artificial general intelligence, in the popular media these days is the “AI doomer” narrative. This claims that we’re in the midst of an arms race to build AGI, propelled by a relatively small number of extremely powerful AI companies like DeepMind, OpenAI, Anthropic, and Elon Musk’s xAI (which aims to design an AGI that uncovers truths about the universe by eschewing political correctness). All are backed by billions of dollars: Microsoft is reportedly set to invest over $100 billion in AI, OpenAI has thus far received $13 billion from Microsoft, Anthropic has $4 billion in investments from Amazon, and Musk just raised $6 billion for xAI.
Many doomers argue that the AGI race is catapulting humanity toward the precipice of annihilation: if we create an AGI in the near future, without knowing how to properly “align” the AGI’s value system, then the default outcome will be total human extinction. That is, literally everyone on Earth will die. And since it appears that we’re on the verge of creating AGI — or so they say — this means that you and I and everyone we care about could be murdered by a “misaligned” AGI within the next few years.
These doomers thus contend, with apocalyptic urgency, that we must “pause” or completely “ban” all research aimed at building AGI.
