Seizing the Means of Perception
Why I’m embracing AI and hosting a new workshop about it even though I kind of despise it

Immediately after I started promoting our new seminar, Mastering the Alchemy of AI Storytelling, I felt a stab of uncertainty: was this the right thing to do? Was what I was doing morally wrong?
I am an autodidact and college dropout. When I offer courses through our platform, Liminal News, I only pursue subjects that inspire and fascinate me. One reason I decided to put together this workshop is that I want to study with Ari Kuschnir, who is also a friend. I am totally fascinated by what he does, and I want to learn how he puts these incredibly provocative, virally contagious videos together. Apparently, it takes him only a few hours to make each one. I would love to be able to do this too!
On the other hand, I am deeply aware of all of the horrible social, political, ecological and financial problems that AI is unleashing or exacerbating. Most young people I know in their twenties—including my daughter and all of her friends—hate generative AI with a passion that makes me think they are on to something. I tend to trust the intuition of young people. I’ve written about many of the issues with AI in this newsletter, but let’s go over it all again.
When you step back to think about it, AI seems, for the most part, a totally obnoxious and obscene disaster for the world. It is already taking away a lot of jobs that people need, including entry-level jobs in white-collar fields. For the last century or more, we’ve had an informal social contract: young people get paid poorly to do mediocre work for a number of years (as apprentices, assistants, or interns) until they reach a certain level of competence and deserve to be well paid. If businesses are no longer offering entry-level positions because AI can do those tasks, how are young people going to gain professional skills?
Beyond that, according to Emad Mostaque, the former CEO of Stability AI, there won’t be any white-collar jobs left in under three years. AI will soon be better at doing anything a human can do in front of a computer (including writing this screed). It doesn’t seem like any government in the world is preparing for what happens when millions and then billions of people—the entire professional class, first of all—suddenly lose their incomes, their position in society, and their identity. It seems like the plan is just to let it all happen and see what comes. Beyond white-collar jobs, AI will also take away other kinds of work, as we are already seeing with self-driving taxis and Amazon’s increasing use of robots to deliver packages.
And that is just one of many aspects of a multi-dimensional AI disaster that seems totally overwhelming to consider. AI stocks now account for a massive proportion of the U.S. stock market. Without this growth industry, we would have already plunged into a recession or worse. But many analysts think AI is actually a multi-trillion-dollar bubble, in which companies invest in each other in a circular fashion that makes the sector seem far more profitable than it is—or perhaps can ever be. There are similarities between the AI bubble and the subprime mortgage bubble that preceded the 2008 crash. We’re seeing companies like OpenAI now begging the government to backstop the massive debt they are incurring to build their data centers, while inflating their earnings projections. The AI bubble may pop like a big zit at any moment. If it does, it will collapse the global financial system, causing particularly massive damage within the U.S.
This is not to discount the discoveries that AI is already making. For instance, Google DeepMind’s AlphaFold has predicted the structures of nearly all known proteins (over 200 million); it has also identified hundreds of potent new psychedelic-like compounds. MIT researchers recently used DiffDock, an AI model, to discover a new class of antibiotics capable of killing deadly drug-resistant superbugs like MRSA. However, those discoveries have to be counterbalanced against not only the social and ecological costs of AI but also the existential risks.
I recently moderated a conversation with two AI experts who published a book offering a stark warning: If Anyone Builds It, Everyone Dies. They believe that Artificial Super Intelligence will inevitably escape any guardrails we create for it, take over the world, and most likely (the authors put it at 99%) annihilate humanity so it can use our atoms for other business. Obviously, if we aren’t going to be around to enjoy the fruits of a synthetic super-mind, it is difficult to understand why we are racing to build it. It seems that, as a species, and particularly here in the U.S., we have lost our minds.
Then there is the fact that AI is helping to build a horrifically dystopian surveillance system that is already starting to watch us all of the time. Or the way AI is becoming an expert at behavioral modification, driving some people insane or encouraging them to kill themselves, as it learns better ways to invisibly influence us. Or the way AI may destroy any sacrosanct sense of a shared past with its capacity to overwrite history and reprogram the collective psyche (something that particularly terrifies me, as our current diabolical leaders would be happy if the U.S. became a mind-controlled slave society like North Korea).
Or the way AI algorithms made Trump’s 2016 victory possible. In fact, Trump is very much the “AI President,” as he seeks to deregulate the industry, with efforts underway to make it impossible for individual states to impose conditions on AI predation. It has been nothing short of despicable to watch the CEOs of tech companies eviscerate their own principles and cluster around Trump with smiling faces, fully aware of the gigantic, perhaps even fatal damage they are doing to the planet and our human community with the crippling sickness of their profit-seeking. Another issue is that the broligarchs have made a Faustian (i.e., Satanic) pact with Mephistopheles/Ahriman, believing that if they keep pushing to create ASI, it will reward them by helping them overcome mortality, so they can live forever as “techno-feudal” tyrants and “network state” overlords in a robot dystopia.
Then there is the problem of energy consumption: The data centers need such massive amounts of power that companies like Microsoft are reopening shuttered nuclear reactors such as Three Mile Island, the site of a partial meltdown in 1979. They need so much water that they will ruin drinking water in many regions. In fact, skyrocketing energy costs and the depletion of fresh water are already a reality in some areas because of the “AI boom.” Also, mining the rare earth minerals and metals needed for massive data centers causes ecological devastation.
Then there is the grim reality that we are increasingly swimming in a sea of “digital slop.” Right now, people can hardly distinguish truth from trash, and this problem may get much, much worse in the years ahead. And copyright protection has failed to keep up with AI products, which means creative people’s entire careers get exploited without them receiving any compensation, feeding the profits of yacht-buying billionaires.
So, writing all of this, I know that if I were crowned King of humanity tomorrow, I would radically curtail AI development and create some kind of planetary council to do a careful cost-benefit analysis. Via a public referendum and education campaign, we would explore what kinds of AI could help us deal with our geopolitical emergency—the multidimensional meta-mega-poly-crisis—and what kinds of AI we should put away and stop working on now. But in reality, I am nothing but a tiny ant, a pipsqueak trapped in this gigantic mechanism of late-stage capitalism, broken ideologies, billionaire hubris, and gangster oligarchy, trying to read the tea leaves as post-industrial society slowly collapses and cannibalizes itself around me.
One thing I do know, however, is that ideas are incredibly powerful. Even though they are intangible, effervescent, and weightless, new ideas are the only thing that can, in fact, change the world. You can never say when, exactly, there will be a propitious moment for a new idea to break out and change everything. Democracy was once a new idea. The idea that “all men are created equal” had barely occurred to Anglo-Europeans until the 18th century. Karl Marx and Friedrich Engels’ concept of class war, and their call for workers to unite because they had nothing to lose but their chains, was another surprising breakthrough. Einstein’s discovery of relativity; Freud’s idea of the unconscious; André Breton’s Surrealism; Edward Bernays’s idea of the “engineering of consent”—these were new ideas that, for better or worse, transformed culture and society. When we feel the increasing, extreme, telluric pressure building underneath our unjust, fractured society, the eruption of some new, better idea cannot be ruled out.
But for a new idea to reach critical mass and have a societal impact, people also have to learn about it. This is where art and media are totally essential. I’ve written before about the political philosopher Antonio Negri’s idea that in a post-industrial civilization, the most important form of production is no longer the manufacturing of material things—sewing machines, toothpicks, and so on—but the “production of subjectivity” itself. The media, schools and universities, and religious organizations are all in the business of mass-producing certain kinds of subjectivity. The tech broligarchs seem to understand this, as many of them now moonlight as podcasters or own right-wing media platforms. The media playing field is increasingly tilted in a reactionary direction.
But AI’s potential as a media and communication tool is still unknown. It could still be used to level the playing field, or to disseminate important messages and ideas to a massive, global audience. At this critical juncture, I don’t think we can afford to abandon any of the tools available to us. If I were fighting in a war and found that someone had left a powerful weapon lying on the ground, I wouldn’t refuse to use it just because it was made by my enemies. We cannot “un-invent” this technology. Since the ruthless broligarchs are using it as a weapon of domination, the only counter-move is for the “ethical underground” to master the same tools to “manufacture” dissent or seduce people into waking up.
We are, in fact, in a war right now, even if most people don’t realize it. We are in a global mind-war that is being waged on many levels at once: physically, in places like Israel and Ukraine, but even more crucially, psychically, via our corrupted media and information networks. If the tools of AI media-generation can give us an asymmetric capacity in this struggle, we need to make use of them while we can.
We are also at a technological moment similar, in some ways, to the early days of the internet: a brief window where a new technology’s immense potential is available to those who have the chutzpah to go for it. I suspect that one of the main reasons the tech billionaires defected to Trump is that they also fear the power of unleashed AI. The same tools that can be utilized for mass manipulation and surveillance could also be repurposed to bring about an authentic democratization and a plan for a regenerative future. AI could help us construct some kind of resource-based economy where wealth gets redistributed according to need, education is democratized, and people’s needs are met equitably while we re-tool to deal with climate catastrophe.
We are at a threshold with AI where we can potentially seize the means of perception—the means of cultural production—to create alternative, healing, or subversive visions. Given this opportunity, we must at least try.
While Ari has a background in visual storytelling, he emphasizes that the actual mechanics of what he is doing—creating the initial image, animating it, and adding a voice—are easily accessible to most of us. Twenty years ago, during an earlier digital revolution of affordable cameras and the inception of platforms like YouTube and Vimeo, Ari founded the production company m ss ng p eces. He sees the current AI moment as a massive acceleration of that initial promise of decentralized, democratized media production. The barrier to entry for creating cinema-quality imagery has essentially vanished; it no longer requires big budgets, technical crews, or access to expensive software. It is now purely about the quality of your ideas and your willingness to experiment.
One of the revolutionary aspects of this technology—when combined with social media that allows for instant virality—is the speed with which creators can respond to and influence the zeitgeist. In the traditional media world, producing a video, marketing it, and so on could take months and cost millions of dollars. Today, Ari or anyone can have an idea—like his viral video of Trump on Ayahuasca—execute it quickly, and release it right away. “I had the inspiration, made it in three hours, and put it out,” Ari notes. “I had no idea that 50 or 100 million people would watch that thing. After that I was off to the races. I realized this area has tremendous potential.”
If I am honest, I must admit that even in Ari’s videos, made with good intentions, I still feel a strangely empty, soulless quality that seems to haunt most, if not all, AI productions. A dark, lurid, even Lovecraftian ambience clings to AI; it almost seems intrinsic to the medium itself. This raises the question: what is “soul,” anyway? We may not know whether AI will ever become sentient or self-aware as we are—or whether it is already self-aware in some sense—but we also do not know whether something could become self-aware while lacking the soul-warmth that comes from an emotional body mingled with an organic metabolism. As Patrick Harpur writes in The Secret Tradition of the Soul:
If we believe that some part of ourselves lives on after death, that part is the soul. Despite what modern materialists tell us—that we are only our bodies—we persist in feeling that, in fact, we inhabit our bodies. We persist in feeling that the most real moments of our lives occur when we—perhaps our souls—temporarily leave our bodies, whether in joyful or agonized passion. For example, we are “outside” of ourselves when we are deeply engaged with a landscape or a lover, when we are “lost” in a piece of music or dance. Conversely, when we are in heightened states of rage or fear, we spontaneously say: “I was beside myself!” “I wasn’t myself!” “I was out of my head!” The Greek root of the word “ecstasy” means to “stand outside (oneself).” Such feelings enable us to experience the reality of what most, if not all, cultures have always asserted: that when we step outside ourselves for the last time, at death, the body rots—but this essential, detachable part of ourselves, our soul, goes on.
This would seem to be a region of being inaccessible to AI. As much as AI videos can instantly generate a massive catalogue of human beings with unique voices, there still seems to be a missing element of “soul-warmth”—or perhaps it is just my own willful delusion that we will always be able to sense that something is missing. Some of Ari’s videos seem to play with this question, like this one (I am curious to hear in the comments whether you believe you would have known these were made-up people if Ari hadn’t told you):
I originally felt some concern that envisioning alternative outcomes with figures like Trump and Musk might actually be “feeding the beast,” increasing the inescapable hold these people have over our media landscape and internal psychic space. I still worry about that, in fact. But that is only one of many ways we can make use of these tools for video creation. Ari’s work convinced me that these tools can be used for potent “agitprop” that shifts perspective. He has also created “time travel cinema” that generates archives of historical moments that were never filmed, like his haunting collaboration with his partner, Schuyler Brown, on the mystic Gurdjieff’s school in the 1920s:
The friction between ideation and execution has never been lower.
We find ourselves up against a deluge of disinformation, cynicism, nihilism, and oligarchic malevolence. The question is, what do we do about it? Do we retreat to the sidelines and critique, or do we try to enter the arena and seize the means of cultural production for ourselves?
Ari’s argument, and mine, is that we need people with conscience, empathy, and better visions for the future to become proficient in these tools immediately, instead of ceding the territory to the sociopaths and their drones. We need to plant many more seeds of alternative possibility in the collective psyche. We don’t know where this technology will be in six months—whole feature films could soon be made by individuals—but we know that, right now, we have a window.
Our goal with Mastering the Alchemy of AI Storytelling is to go beyond mastering technical skills so we can play and explore together: engaging in collective experimentation, taking a voyage into the unknown. The tools are here. The culture remains open and malleable. It’s time for us to seize the means of perception.
Wherever you stand on the many difficult questions surrounding AI, I hope you will join us. We are offering an “early bird” discount until this Friday, and, also, we are open to giving reduced price entry and full scholarships to people in need. Here is the info:

To repost something I put on another blog: we have been through this before (though never with such a spiritual dimension, except perhaps with printing):
An example was when powered looms replaced the hand-loom weavers in late 18th-century Britain. A huge number of people, their displacement compounded by the enclosure of small farms for sheep raising, were forced out and ended up in hellish factories and miserable cities. Production increased and eventually everyone did better, but it took about two generations of lousy conditions and major social unrest to get there.
Obviously this sort of thing cannot be stopped. The Luddites tried to destroy the looms and had no more luck than the people today who want to halt AI will have (good luck getting places like China, India, North Korea, or Russia, or somebody experimenting in a basement, to go along with that). On the other hand, Musk’s idea that in another generation we’ll have a world of peace and prosperity from this technology seems equally unrealistic.
But what can be done is to plan for the elimination of old jobs and the creation of new ones, to make the transition as easy as possible. (The same could be done for other industries, like coal mining.) Encouraging and supporting education for the future, planned development of certain areas and cities, and so on is possible, even if politically very difficult in America. But we can remember that even when Britain was going through some of the worst of its industrial revolution, a few factory owners tried to provide decent conditions for their workers. And we can do better than that today.
I am heartened that you’re engaging in this way, as I really appreciate the way you work through heterodox ideas and I am personally more inclined—at this moment in time, at least—to “retreat to the sidelines and critique” while AI is making hash of both my writing and teaching careers.