Shut It Down Now!
The existential risks of Artificial Superintelligence are both imminent and immense

Hi Folks,
What follows is not my writing, but a report on the existential risk of near-term Artificial Superintelligence, generated by Gemini through my prompting and guidance. Watching Gemini do weeks of research and writing in a matter of minutes — particularly, ironically, on this topic! — and put it together almost flawlessly was shocking. My sense is that all of the major AI platforms keep improving rapidly. My suspicion is that AGI — to be followed by ASI, potentially in a matter of minutes — is approaching quickly, and we are woefully unprepared. If you want to learn more about the threats and risks posed by AI — including massive unemployment, deep fakes that break democracy, sophisticated manipulation, ecological decimation, as well as the existential risk of human extinction discussed above — we are holding a free Zoom session next Sunday at 1 pm EST. We will also explore what we can do together to address the AI threat. Please register here:
Of course, there are also many possible benefits of Artificial Intelligence, which are explored in this paper, “The Dawn of Abundance: How AI, AGI, and ASI Can Usher in an Era of Unprecedented Human Flourishing”, also generated by Gemini (thanks, Guy James!). However, I have to say that the negatives seem to outweigh the positives, at least in the short term. Even relatively optimistic tech leaders believe that AGI has a 20% chance of wiping out humanity. Those are worse odds than Russian roulette! Imagine putting a gun to your head, spinning the chamber, and pulling the trigger. That is basically what we are doing with the race toward artificial superintelligence.