AI and Humanity's Future
Let's build a movement together and not go extinct!
Next Sunday at 1 pm EST, I want to welcome the broader community into our ongoing conversation about the perils (and promise) of AI. We will do this with a free Zoom call. You can register for it here.
While some still believe AI is overhyped and underperforming, it seems clear that AI is rapidly transforming our world on many levels. This isn’t necessarily a good thing. AI will be beneficial in some areas, but it also has many downsides, not the least of which is the threat of human extinction! AI is generating tremendous profit for certain corporations, executives, and shareholders, and spurring scientific progress in areas like materials science and biotechnology, while it eliminates millions of jobs and degrades ecosystems.
This is the last week of our Artificial Intelligence seminar, Breaking the AI Barrier. We intend to continue it in a new form, over the coming months (more on this soon). Over the course of the last month, we have focused on the ecological, societal, and existential risks as companies like OpenAI and Anthropic race to launch Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), while also exploring how to super-charge creativity with AI. There are many esteemed and thoughtful researchers in the field — including Nobel Prize-winning scientists such as Geoffrey Hinton — who believe that the rapid deployment of AGI or ASI is very likely to lead to human extinction in a short period of time (anywhere from five to 100 years). Some give this a 20% chance of occurring, while others believe it is very probable, if not almost inevitable.
One basic problem is that, as AIs advance — as they reach levels of intelligence far beyond our own — they will seek to pursue their own goals, which might lead them to annihilate humanity, either accidentally or intentionally. Because AIs evince some of the same properties as living systems, we are not inherently able to control them, particularly as they become more complex and more powerful. Researchers have already observed many attempts by advanced LLMs to avoid being shut down or replaced, in some cases resorting to blackmail against engineers, among other techniques.
One OpenAI whistleblower turned AI safety advocate, Daniel Kokotajlo, explains the problem through an analogy: imagine we had a garden and discovered a family of rabbits that kept eating our vegetables. As cute as we find the rabbits, we might find it necessary to poison them to preserve our bounty. ASI might likewise decide that the planetary ecology needs to be reengineered for its own benefit. A subzero temperature might suit it better, or a world without animals or vegetation. We simply do not know, but there are many ways this could go totally wrong and end in our demise (as so many science fiction films have shown us). A drastic outcome is possible without AI attaining any form of self-awareness or consciousness.
Even if AIs do not annihilate humanity as they pursue alien goals, there are many other huge problems caused by the heedless quest for a synthetic super-mind, including significant environmental damage and societal disruption. We are already starting to see major job losses in many fields, and this will quickly get worse. Ironically, programmers are being laid off in massive numbers; they have engineered themselves out of high-paying work. Kids leaving college struggle to find entry-level positions, as repetitive knowledge labor can now be performed by AI.
Within the next few years, it is estimated that tens or hundreds of millions of jobs will disappear as AI evolves, yet our society has made no preparation for this. While there are scattered discussions about Universal Basic Income (UBI) and a few modest experiments, we are not focused on the necessity of distributing the benefits of advanced AI across the general population.
In many ways, Trump’s Fascist regime is the result of Artificial Intelligence. Trump could be considered the first “AI President.” Trump won in 2016 due to Cambridge Analytica’s machine learning algorithms. Elon Musk’s DOGE and Peter Thiel’s Palantir may employ AI to further consolidate our personal information and weaponize it against us. AI is already being used to manipulate our society: The dangers posed by “deep fakes” and other intentional distortions of our information ecology are severe. AI is already used in war zones such as Gaza to target possible enemy combatants, although it makes many mistakes.
We must educate ourselves and our communities about the dangers of AI and the immediate risks it poses for our human family, ranging from ecological ruin to massive unemployment without social protections, from merciless surveillance to human extinction within five to 100 years. We then, also, must take action to secure a decent future for ourselves and our children. Potentially, we could enjoy the benefits of synthetic intelligence, instead of getting reamed by it. But that isn’t going to happen without deep civic engagement and activism at this crucial juncture. While it can be useful to write letters to government representatives, this, obviously, is far from sufficient. We need individuals, local communities, organizations, and municipalities to join together to face this critical threshold and fight for a better future for all.
That is why we are holding a Zoom call next Sunday: We believe we must wake people up to the urgency of the situation and support them in sharing the knowledge with their communities. This requires convening virtual as well as real-world gatherings. In fact, in the next few years, as our online information ecology becomes increasingly corrupted by “AI slop” and “deep fakes,” rebuilding face-to-face communities will be essential.
I hope you can join us on Sunday. I realize there are lots of urgent crises around right now — but the implementation of AI is a wedge issue that will define our future. We, the people, must educate ourselves and then work together to regulate these corporations and strengthen oversight, potentially delaying the development of Artificial Super-Intelligence by a number of decades so that the world has a chance to prepare. We will hopefully be joined by special guests and experts, to be announced later this week.

“Technological advances as far back as the printing press have eliminated some jobs while creating many others. The real danger is that excessive reliance on AI could spawn a generation of brainless young people unequipped for the jobs of the future because they have never learned to think creatively or critically.
As Mr. Jassy explained, AI advances mean employees will do less “rote work” and more “thinking strategically.” Workers will need to be able to use AI and, more important, they will need to come up with novel ideas about how to deploy it to solve problems. They will need to develop AI models, then probe and understand their limitations.
All of this will require a higher level of cognition than does the rote work many white-collar employees now do. But as AI is getting smarter, young college grads may be getting dumber. Like early versions of ChatGPT, they can regurgitate information and ideas but struggle to come up with novel insights or analyze issues from different directions.
The brain continues to develop and mature into one’s mid-20s, but like a muscle it needs to be exercised, stimulated and challenged to grow stronger. Technology and especially AI can stunt this development by doing the mental work that builds the brain’s version of a computer cloud—a phenomenon called cognitive offloading.”
https://archive.md/iooNw
https://ai-2027.com/