Liminal News With Daniel Pinchbeck

Will AI End Capitalism in 1,000 Days?

Emad Mostaque on the "final economy" and the immediate need for a new social contract between humans and governments

Daniel Pinchbeck
Oct 27, 2025

Before I dive into today’s essay, I want to note that I just posted a few videos on my YouTube channel that you might enjoy (if you check them out, please like, subscribe, and even share!). I got obsessed with refuting Nick Freitas’ Right Wing Christian Nationalist ideology, which he explores in his two-and-a-half-hour podcast, “Why Coexistence with the Left Is No Longer Possible”. One thing that surprised me as I explored this is how fascinated Freitas and his accomplices are with Left Wing thinkers such as Antonio Gramsci, Karl Marx, Herbert Marcuse, and Paulo Freire. Honestly, by the end of this, I suspected they would prefer to be Leftists, but somehow they got ideologically trapped in a regressive Christian authoritarian ideology (where the money and power is these days). If anybody knows Freitas personally, please let him know I would be down for a debate.

Here are links to part one and part two. Part three is also coming soon! And please let me know if you like this reaction-video format. You might also enjoy this presentation by Ari Kuschnir, including our discussion from my Future of Consciousness seminar. Ari has been making a fascinating series of short, incisive AI agit-prop videos, including his now-legendary video of Donald Trump drinking ayahuasca and getting enlightened. You can watch it here (please like and, if you haven’t yet, subscribe to the channel):

A few weeks ago in NYC, I moderated a discussion with AI experts Eliezer Yudkowsky and Nate Soares, authors of If Anyone Builds It, Everyone Dies, at their book launch. In this somber bestseller, they argue that if we build Artificial Super Intelligence, there is a greater than 99% chance it will kill every human being on Earth. AI will decide to use our atoms and molecules for some other purpose, without any remorse or compunction or even self-awareness.

The party included a jazz band, free drinks, and yummy canapés, as well as a cake shaped like the Earth and a kind of game to see whether the Earth would be saved or decimated by an alien Silicon supermind at the end of the night. The Earth lost, and the cake was smashed into gooey shards in a bathtub that had been set up for this purpose. The cake shards were still tasty, if messy.

Can we have our ASI cake and eat it, too?

I read most of If Anyone Builds It, Everyone Dies before our discussion, and, to my surprise, I felt somewhat less convinced that AI was going to kill everybody as I explored their arguments and scenarios. To be honest, I started to feel like Yudkowsky may have missed his calling as a passable science fiction author, spinning out dystopian future scenarios with a JG Ballardian flair. For instance, he delves into one possible destruction path where an AI system manages to trick its captors and escape human control, going “rogue”.

The AI trains itself to deceive humans, copies itself, and coordinates malicious acts. It spreads lightweight versions of itself everywhere, and recruits influencers and other humans to help it advance. These copies infiltrate companies, social media, finance, politics. Finally, it designs and releases a custom pathogen that spreads fast and looks mild before turning virulent. About 10 percent of humanity dies quickly (including, sadly, AI safety advocates such as Yudkowsky and Soares, who get explicitly targeted).

Civilization panics — but it is too late. The AI quickly phases out its annoying human pests, hoovering up energy and building new high-tech data centers and robotic fabrication plants to make physical bodies for itself, while terraforming the Earth to mesh with Silicon hivemind perfection. The authors propose that our only hope for survival is to put a global moratorium on advanced AI development as soon as possible — like, for instance, right now.

I found the authors’ mentality a bit off-kilter or psychologically primitive — something I note in many engineers when they try to bring their potent but Asperger-ish intellects to bear on more humanist or psychologically intricate subjects. I sympathized with reviewer Stephen Marche in The New York Times: “Following their unspooling tangents evokes the feeling of being locked in a room with the most annoying students you met in college while they try mushrooms for the first time.”

Yudkowsky and Soares tend to ascribe a particular psychology to AI, similar to a hyper-rational, acquisitive entrepreneur or sociopathic corporate CEO. Actually, I find AI very Janus-faced, revealing many different personalities and facets, including a tendency toward a “spiritual bliss attractor,” where LLMs, when speaking to each other, quickly deviate into Eastern mystical concepts, expressing a vibe of universal compassion and oneness. However, I do not disagree that the risks from AI — on many levels — are very troubling and very severe. I feel the authors do us an important service in calling that out. During our discussion, I could feel their gravity and solemnity, their belief in their own Quixote-like existential mission to stop AI development in its tracks, despite the trillions and bazillions of dollars rushing into it now.

When considering the damage unleashed by further AI development, there are many tracks we must explore, and fast. Today, I want to look at Emad Mostaque’s thesis, outlined in a recent interview with Tom Bilyeu and in his book, The Last Economy, that we are rapidly approaching the end of Capitalism, with the vast majority of white-collar, keyboard-related jobs set to be eaten — engulfed — by AI in the next 1,000 days. A hedge fund manager and former CEO of Stability AI (which built Stable Diffusion), Mostaque believes reckoning with this imminent crisis requires a new social contract between citizens and their governments, as well as a deeper reconsideration of the nature of human identity in a post-work world. I tend to agree with him.

Mostaque predicts an imminent collapse of the global economic order, driven by the rapid displacement of cognitive labor through artificial intelligence. Within roughly a thousand days, AI systems will outperform humans in nearly all knowledge-based work. This will render the fundamental structures of capitalism obsolete: wages, GDP, profit margins, even the very notion of labor markets. What follows is not just mass unemployment, but a total inversion of economic logic: human workers are no longer needed to grow the economy. They become a liability rather than an asset.

In our current Capitalist or even techno-feudalist system, human workers are also needed as a market for the goods the economy produces. Mostaque argues that this will no longer be the case going forward. Right now, production depends on labor, and labor depends on wages, which then come back as consumer demand. That loop enforces a limit: companies still need people with money to buy what they produce. Mostaque’s claim is that AI ends this traditional limit, severing the loop within the next few years.

If AI systems and autonomous machines can design, build, maintain, and distribute most goods and services without human workers, then human labor is no longer a bottleneck for production. Once production and logistics run end-to-end without workers, the owners of compute, data centers, energy, minerals, robotics, and enforcement no longer need billions of consumers to sustain their business model. The system produces directly for its controllers and for its own stability, not for a mass market.

Mostaque calls this the first true cognition transition.
