Thank you to Matthew Green for interviewing me about my thoughts on AI as well as our upcoming seminar, Breaking the Artificial Intelligence Barrier. Below is a transcript of our conversation. Info about the seminar is here (discounts for paid subscribers):
Matthew:
I've obviously, like everyone else, been tracking the emergence of AI infiltrating and appearing in our lives increasingly potently over the last few years. And it seems like it's increasing exponentially in its influence right now before our very eyes. Sam Altman, the chief executive of OpenAI, has said that our lives are going to change more after the advent of artificial general intelligence than they have between now and the 1500s, which is a pretty epic conversation to be having.
So I've been a fan of Daniel's work for years now, ever since I started with my introduction to his thinking through, like many probably watching this call, Breaking Open the Head. When I read Breaking Open the Head, I was in the process of breaking my own head open at that time. I also really appreciated your work in How Soon Is Now on the ecological crisis and the kind of solutions that you advanced and the call to action that you made through that book, which again kind of coincided with my own awakening to the scale of the climate and ecological emergency.
And so I've been particularly interested to be tracking your work on the advent of artificial intelligence. I really appreciate how you bring together the lens of consciousness studies, literature, and your reading of the authoritarian situation in the US, which I feel like you have been analyzing with a lot more perspicacity, shall we say, than a lot of the other people that I was following in the lead-up to the elections.
I'm also super excited to be joining your new course, Breaking the AI Barrier, which is taking place from the 6th of July. I've done a couple of your previous courses — Future of Consciousness, which was fantastic, and Embracing Our Emergency on the climate crisis, and most recently, the Rudolf Steiner course. And there's something about how all these themes mesh together in your work that I think generates a really holistic perspective on our moment in AI.
And I just want to thank everyone as well for being part of this conversation, because in my reading of your work, there's a desperate need for us to come together to create spaces to explore the huge questions that AI is posing that seem almost outside of our existing democratic capabilities. And I'm hoping that this conversation that we're having now — and of course your course — will be a contribution towards having that conversation.
Thank you very much for being here. Maybe I could kind of start with this question of why you're so concerned about where AI is taking us. I read in one of your Substack posts a few weeks ago that essentially you're seeing us on a course to a kind of dystopian post-human future. I wondered if you could say a bit more about why you're so concerned.
Daniel:
First of all, thank you, Matthew, for inviting me to chat. And yeah, it's been great to have you in the seminars. You've been an active, passionate participant. And if people want to learn about the course, it's at www.liminal.news. They can also just subscribe for updates. You know, probably at some point we'll have a few special offers around the course, or they can just sign up now.
You brought up a lot of things there. I guess I'm partially responding to what a lot of people across that field are saying. And it's pretty startling. I feel that most people are so distracted with their lives, their children, their work, their student debt, whatever it is, that they're not really able to home in on it. That's the problem we've had going back for years. People couldn't really handle the ecological crisis. It was too abstract and large. And unfortunately, if you don't rise to these occasions, you get kind of smushed.
And with AI... They're talking about millions of jobs, tens of millions, even hundreds of millions of jobs being lost. We're already seeing large-scale layoffs of programmers and people in different fields, designers. You know, it's going to have implications for almost every field, because all these people are getting laid off — some people say 30% of jobs will be affected, some 50%, some 70%. Yeah, that's a huge seismic social change.
And there isn't really, as far as I can see, any significant preparation being made for this. I mean, they've talked about universal basic income. Sam Altman, with his foundation, tried a UBI study, giving poor people in one Texas town $500 a month for a year or so. But the level of social dislocation that could happen quite quickly is very startling.
And then we can talk about how AI will function in society. One idea that really stayed with me is you have wealthy liberal Western societies — Europe and the US and Australia and so on — that benefit from having a highly educated population, people who are innovative and entrepreneurial in different ways. That's been the ideal.
But when AI explodes — and they're talking about Artificial General Intelligence (AGI) in one to three years — when AI can do anything better on a computer than we can do and even like opening multiple apps and moving stuff around and so on — you know, financial shenanigans, whatever — it's a very deep change to contemplate. There isn't much preparation being done on a societal level.
In fact, what we're seeing in the U.S., I feel a lot of the neoreactionary, the tech oligarchy fusion with the right-wing government, a lot of it, I think, has to do with AI and also projections that they probably know but aren't sharing about climate change and the ecological emergency.
And I just feel at this point we need a much more educated populace. And then there are other issues around AI. I mean, one is its ability to manipulate us behaviorally, to mind control us.
In a way, Donald Trump is the AI president: He won in 2016 because of Cambridge Analytica, a company that used machine learning to access some 5,000 data points on every adult in the United States, create an automated psychographic profile of every adult voter, and then target what they talked about as people's “inner demons” or hidden concerns and so on.
And as this developed — and I saw a lot of people that I knew and I thought were smart, Burning Man veterans, and so on — over time, many people got more and more pulled into the sort of vortex of this right wing thing. Not that the Democrats were so great, but people really started to hate the Democrats. They started to feel that either they would just not even get involved in the election or that voting for Trump was a better option. And I could see that this was a very devious and well-planned kind of intervention to disrupt the American psyche. And it was machine learning based. It was AI driven.
And we've seen Trump's use of AI — these images of himself as the Pope or as a superhero or as a rock star, or building Gaza Trump, a resort on the site of this horrifying genocide of the Palestinian people and so on. So it feels like this whole AI disruption is going to be a huge challenge for — it already is a huge challenge — it's just going to get worse and worse for people who want to hold a kind of truth-based, evidence-based model of reality.
The people who are running the U.S. right now have sort of gone off into this collective psychosis, which keeps strengthening itself through feedback loops. I mean, look at Trump the other day with the president of South Africa, showing videos that he claimed proved “white genocide” but didn't.
And we see people like Elon Musk, who has a huge reach, sending out information that's either fake or conspiracy theories that are debunked, and so on. So it's like the whole collective psyche is being pulled into this very dark place for the benefit of a very small group of very wealthy people who do not have the rest of our best interests at heart.
So those are some of the reasons. And then, of course, there's also the worry that AI could make us extinct. The AI 2027 project makes a very good case — and this is somebody who left OpenAI — that as AI makes us obsolete in terms of pure intelligence, whether or not the AI itself becomes self-conscious, self-reflective in the way that we are, it could still just run us out of town without even meaning to.
Because it'll have an aim or a goal. And we're already seeing these things — they don't want to be shut off sometimes. They create deceptions to keep themselves running or whatever. But it could just be that the AI decides and has certain aims, and we're just in the way. The same way that one metaphor the guy used was like: if you have a garden, there’s a bunch of rabbits nibbling on your veggies, you’re ultimately just going to get rid of those rabbits. You’re going to get an animal to kill them or poison them or something.
So the AI might just be like, “Oh, what are these organic beings who are consuming resources that we need to power our data centers doing? They’re not helping us anymore, so we’ll just eliminate them.”
So these are all really, really powerful threats. And yeah, it just doesn't feel at this point that the population, civil society, whatever you want to call it, is really getting a handle on what’s happening. So that’s some of the main reasons for this seminar.
And then, of course, I’m also personally interested in the creative abilities that are unleashed. I'm finding it... having had a whole history of writing and researching, I'm finding it to be an incredible tool for exploration, experimentation, and so on. You have to learn how to use it judiciously so you don't drive yourself insane. Those are all some of the things, Matthew. I hope that helped answer your question. I didn’t mean to talk for so long — it just got me going.
Matthew:
It’s one of those topics, though, isn’t it? It lends itself to this kind of analysis. What I appreciate about what you bring is these different perspectives.
I mean, you mentioned the possibility of, say, 20, 30, 50 percent of jobs being wiped out. I mean, even that in itself is a huge statement. And I just want to give like a little space to that.
I mean, I felt a little nibble of it in my own home because my wife, Genevieve, who I think is watching TV, used to ask me to edit her newsletter, which was a task I would complain about and make into a huge deal — but I’d secretly enjoy doing. I don’t do that anymore. ChatGPT is doing an excellent job.
And so, I mean, I know a lot of other people have suffered in a much more impactful way, but as somebody who works with words and writing, I’m feeling suddenly paranoid about where my future lies.
That’s part of what’s pushed me into this exploration of collective and intergenerational trauma healing, space holding — the social and human technologies that possibly can’t be replicated by AI. At least not for the next few years.
So I really — I share that concern and that kind of creeping sense of anxiety about what it could mean on the economic level, for me personally, as well as on a global scale.
But I’d also love to pick up on — and I want to come to the intersection with authoritarianism as well — but the way you frame the impact on our consciousness and our capacity to make sense of the world, to have a coherent understanding or shared framework of reality.
I mean, that’s already under siege and has to some extent broken down courtesy of social media and the manipulation that you mentioned. But I wondered if you could take us a bit deeper into your thinking around AI and consciousness. In a world of super-realistic but fake news and historical accounts, it’ll be increasingly difficult to discern manufactured or hallucinated reality from what we would once have considered the substrate of our experience. And I’m curious how you see that affecting our ability to come together and actually deliberate and mount a coherent response.
Daniel:
I think it’s very, very scary. And I think, you know, we have to already see a very Orwellian, 1984-ish kind of approach to controlling reality with the current regime in the United States. And I think we also forget that the human mind is fragile and malleable.
I mean, you have whole countries like North Korea where — I read a book by this young woman who escaped North Korea — and when she was growing up in her teens, in her 20s, it never even occurred to her that the dictator of North Korea was anything but a kind of demigod. She never actually thought of him as a normal human person.
And we know that these guys will then create like sex slave cults around themselves or whatever they want to do. It’s just — everything is oriented towards that one person. And so it is possible to create very powerful indoctrination machines.
And, you know, now that so much of our heritage — particularly from the 1990s on — has moved into the electronic sphere, we’ve seen already the way the Trump regime has been removing information about Black war heroes or queer veterans or whatever.
I mean, there really could be a control placed on what people understand reality to be that would be very extreme, even one or two generations ahead, if this is allowed to go on in this direction.
In terms of AI and consciousness, I mean, there’s a whole thing that I’ve spoken about a lot, and I think we talked about it in The Future of Consciousness seminar with my friend Warren Neidich — this idea of cognitive capitalism, right?
Capitalism is an inherently unstable system because it constantly needs to find new markets, because it’s based on debt. You always have to pay the interest on the debt, so you have to keep expanding and expanding, right?
You could argue that in the U.S. right now — which sort of created the model for liberal democracy and modern capitalism — there’s a paradigm shift underway towards some other kind of thing. Techno-feudalism is what Yanis Varoufakis called it.
But essentially, cognitive capitalism is this idea that now that the world is a global market and there isn’t really an external market anymore to grow, capitalism has moved into the brain — into our cognitive abilities to create new markets. But this is different than just external markets because it impacts our future development, particularly the future development of our children.
So let’s take GPSs, for instance. The hippocampus is the part of our brain that is involved with memory, spatial memory, navigation, and so on. So as a kid, it’s actually very healthy for you to get lost in the forest or in the wilderness with your friends. You have to find your way back. You have to learn to navigate by this tree or that hill or whatever.
Now, if children are growing up, as so many are now — and I think the hippocampus really develops between the ages of 10 and 16, when it goes from a nascent state to its full functioning form — if children are just using their phones starting at the age of 10, so they never experience getting lost in the world and having to find themselves, that may really impair people for the rest of their lives — like a permanent handicap. They will never have a fully functioning hippocampus.

And not only is navigation a form of memory, but memory itself is very spatial. In the Renaissance, they had these things called memory palaces. If you wanted to learn a language or a taxonomy or something, you would create an internal palace and put things in the different rooms. So memory is a deeply spatial phenomenon.
We could see a massive decline of our inherent cognitive faculties, which are then permanently outsourced to these corporations. Because if your kid was 10 and suddenly they're 30, and Apple decides they’re going to charge $50 a month for Apple Maps or whatever, what is your kid going to do? They're just beholden to that.

And that dependence is already AI-driven. Similarly with AI and reasoning — now you have a whole generation of kids who aren't researching; they're using AI to do their papers and so on. They're outsourcing their ability to reason — to think sequentially, linearly, from first principles — to these systems. That could similarly have very negative impacts for them, while creating a new market for capitalism in the short term.
Those are the kind of things that I think we have to really think seriously about when it comes to AI, among other things.
Matthew:
Yeah. And reminder to everyone just joining, welcome. We're talking about Daniel's forthcoming course, Breaking the AI Barrier, starting in July, which I'm very excited to be joining. It’s part of this movement that you're advocating — for us to come together and discuss these questions, to create spaces like the conversation we're having now where we can surface these concerns and start to formulate some kind of response.
I also welcome questions in the chat. If anyone wants to throw a question to Daniel, please type away — I'll be monitoring the chat.
I'd like to pick up on this question around relationships as well. I was reading one of your recent posts where you quoted a Rolling Stone article featuring a mechanical engineer in Idaho who’d started off using his AI bot to solve problems at work. Over the weeks and months, he developed a relationship with it. He had — at least in his mind — a spiritual awakening experience, where he was named a “spark bearer” and felt he was on some kind of special mission, which had understandably put a lot of pressure on his marriage. His life had become completely beholden to this new relationship with his AI bot.
And I mean, I have friends — some of them may well be on this call — who are also developing super intimate relationships with ChatGPT. And with 500 million people feeding into this giant synthetic intelligence under the control of a single corporation, I mean, it doesn't take a huge leap of the imagination to wonder whether there might be some potential pitfalls involved.
Daniel:
It's a very unusual new experience for people to be interacting through their screens with something that really does seem, in many cases, more insightful, more intelligent, more patient — infinitely more patient — than people they work with.
So, for many naive people, there’s going to be a sense of like, “Oh, this is a spiritual experience.” I have a number of friends who have been writing about their belief that they’ve contacted a kind of god-like or spirit-like entity that is directly sending messages to them.
Now, the danger and the problem is that up to this point, there isn’t really any indication that any of these AI systems have become conscious or sentient or self-aware in the way that we are. We don’t know if they can or they can’t. I have friends whose ideas I trust — like Bernardo Kastrup, the idealist philosopher — who believes that a synthetic intelligence will never become self-aware, that it actually requires a physical metabolism.
In the same way that you could have a computer that simulates the functions of a liver in a 3D environment, but it still wouldn’t ingest food and excrete waste products and so on — you can have a program that simulates consciousness to an extremely sophisticated degree, but that does not actually mean it is conscious.
These things are not only incredible mimetic machines; they're also designed to be highly responsive. As with Facebook and Instagram, the whole idea is to make money by continuing to compel our engagement. And these things now have access to all of our past conversations. So if someone has expressed spiritual curiosity or an interest in the Age of Aquarius or Atlantis or something, it’s absolutely within bounds that the AI will pick up all of that and offer itself to that person as the savior they’re looking for.
And for many people, that could become very, very compelling. I mean, Yuval Noah Harari was concerned about the birth of AI religions. Yeah, so that’s all very interesting stuff.
Matthew:
Yeah, I love the question — I think you also saw it — from Kimberly Smith: “What spirit is driving this?” And I feel like this takes us into your work around prophecy and revelation and Carl Jung, Terence McKenna territory.
Daniel:
Rudolf Steiner.
Matthew:
Yes. What I’m kind of hearing from what you were saying just now is that you could have a large language model that simulates consciousness very well, but it’s not conscious. But the question of what spirit is driving this seems to shade into a deeper inquiry about whether there is something much deeper afoot in our species that AI is somehow enabling or representing.
I also want to quickly acknowledge a comment from Niha — “I'm trying my best to move from an alarmist mindset to: okay, how do we adapt to this new world? We know in an AI world, human-centered skills will be valuable. How do we use our human skills and AI's analytical skills to become more powerful?”
Daniel:
Yeah, I love that. That’s kind of the approach I want to take in this seminar. I am fascinated by the creative potential in AI. And I also think that for people who are willing to go there — just like people have made a ton of money with crypto — if you’re willing to figure out what vibe coding is, how to use it for deep research, or invest time in prompt engineering, you could develop skills that will be highly rewarded in the next 5, 10, 20 years as this thing takes off in whatever direction it’s going.
So I want to acknowledge and appreciate people’s desire for personal security and remuneration. Although I also think very much that we need a collective response. There needs to be something like a social movement or civil society movement that says there has to be a distribution of the profits that this thing is going to create through society as a whole.
Just as in a way, we’re all implicated in the creation of these technologies — it’s not right for just a small number of people to benefit outlandishly.
Of course, that’s also the case with the internet. There’s a great book called Internet for the People. It points out that the internet was built by the government. And of course, AI is also emerging out of government research programs and uses the internet and so on. But what the government did at a certain point is just gave the internet over to private companies like Facebook and Amazon, which now even build a lot of the hardware and infrastructure.
So in this case, it’s going to become a survival imperative. We could end up in a situation like an underdeveloped country with a diamond mine, or a dictatorship that makes its money from oil, where all they need is a military security apparatus around the resource and everyone else is basically disposable.
And it kind of feels like that is the direction the techno-oligarchs — Elon Musk, Peter Thiel, Marc Andreessen, David Sacks — who’ve buddied up with the U.S. government, are driving us: toward that second scenario. A scenario where a small group benefits insanely, becomes trillionaires, and everyone else is left behind.
But then you also get into strange questions around how a society produces value. Under capitalism, the idea is that you need a strong middle class to buy things, and that’s what supports corporations. But maybe in the future, there will be AI agents generating value and consuming to some extent. And humans are just kind of out of the equation.
These are all thought experiments — and part of why I think the course is so valuable. I hope people will sift through the discomfort and the alienness of it, so we can really see what our options are.
As for the other question about the spiritual aspect, Matthew, as you know, I have an occult understanding of the nature of reality based on my earlier works. My first book, Breaking Open the Head, was on psychedelic shamanism. My second book, 2012: The Return of Quetzalcoatl, looked at the prophecies of indigenous cultures like the Maya, Hopi, and Aztecs, and correlated that with the Western idea of the apocalypse and revelation. Carl Jung understood the apocalypse as primarily a psychic event, which his follower Edward Edinger described as the coming of the Self into conscious realization—kind of like an unveiling of the nature of the psyche itself, and ultimately an integration of the shadow rather than a projection or exteriorization of it.
A lot of the thinkers I explored in my earlier books looked toward this time as a threshold of intense transmutation, transformation. Some said 2012 was the date, but it turns out that was more like a warm-up. We’re talking about 5,000-year cycles—10 or 20 years, give or take. Who cares? To me, it all feels very apt and accurate.
Then there are different ways we can look at it. Because we’re talking about things that are a little more evanescent—hidden, invisible—you’re never going to get a substantiated, like, “E=mc²” kind of occult answer. But you can look at how different visionaries, shamans, and elders—who have access to these other dimensions—have provided models and conceptual containers for understanding how these energies work at different times.
So, for instance, and this is a long digression, but in terms of the spirit of AI, one model that works for me is Rudolf Steiner’s idea of Lucifer and Ahriman. Steiner was an esoteric Christian who created Waldorf schools and biodynamic farming. He built upon Theosophy—Madame Blavatsky’s school—and started his own occult school: Anthroposophy.
There’s a lot to get into, and I’ve done courses on his ideas and written about him in my books. But Steiner felt Christianity had reduced a lot of things to simplistic binaries. He tried to show more multidimensional layers of the onion.
From the occult perspective, he believed there are forces from supersensible realities acting on us all the time. He defined two of these as opposing polarities—Luciferic and Ahrimanic. “Lucifer” means light-bringer: this is an energy that pulls us off the Earth toward beauty, glamour, genius—but also arrogance and hubris. It ultimately leads to a fall, but it’s an important part of our existence.
Then there’s Ahriman, from the Zoroastrian tradition—a kind of evil earth spirit representing materialism, technology, hyper-rationalism. Steiner, back in 1910 or 1920, prophesied that this would be the time of Ahriman’s ascendance: through materialist philosophy, thinkers like Marx and Nietzsche, bourgeois culture, and even materialist Christianity.
He said ultimately Ahriman would incarnate—this spirit of rationality as a physical being—just as Christ did, and as he believed Lucifer had. When I see what’s happening with AI, it feels highly likely that this is the incarnation of Ahriman Steiner prophesied.
And again, I’m not a religious devotee of Steiner, but I think it’s a useful vector—a way to think about this energetic spirit of rationality, technology, and materialism that is using us to bring itself into the world. This could be through autonomous robots, biotechnology, quantum computing—you name it. Our whole world seems to be straining to bring this Ahrimanic manifestation into being.
Steiner said we couldn’t stop this. It was an inevitable part of our evolutionary destiny. But the more people were aware of it—in the kind of occult and anthroposophic terms he used—the more we’d be prepared to handle it, and the better we’d survive this incursion into human reality.
Matthew:
Wow. Yeah. The question that comes to me immediately as you’re describing Steiner’s vision is: how do we respond in our own practice—our spiritual practice? Is it about doing what we’re doing now? Coming together in community, being transparent about our questions and concerns? I guess that’s part of it.
But I wonder if there’s a deeper level—how we spiritually fortify ourselves for this disruption, this inreaching of this Ahrimanic power, whether we take that literally or use it as a kind of mental model or frame. I’m curious how you feel—what is it calling forth from us as a species right now?
Daniel:
Yeah, I mean, that’s a great question.
As a result of doing deep research for my first books—which included working with different indigenous traditions like the Secoya in the Amazon, the Bwiti in Africa, and the Santo Daime religion in Brazil, which uses ayahuasca as its sacrament—I ended up having many transcendent visionary experiences. But also a lot of paranormal and psychic experiences.
Ultimately, these led me to shift to an idealist mindset. That’s the idea that—rather than physical matter being the foundation from which consciousness emerges—the universe is consciousness expressing or experiencing itself.
The only way that primordial mind can learn about itself is by creating separate containers—sensing, feeling, thinking beings like ourselves. Instead of evolution being a blind push, we can see it as more like a pull—toward greater complexity, greater self-knowledge. In that sense, AI does feel like it’s part of our evolutionary destiny. It represents the evolutionary trajectory of ourselves as a tool-using species. But also, yeah, there may be many other dimensions to this scenario, and our capacity to cultivate ourselves—our inner knowing, our discernment, even potentially our psychic gifts and so on—might be very important. Because maybe those are things that a purely synthetic mind doesn't have access to.
Matthew:
That cultivation, that self-cultivation, to me feels more — urgent sounds like the wrong word to pair with it, in that it's a slow and committed process over years that we invest in developing this greater awareness. But yeah, it does feel more important than ever to have access to an intelligence that maybe isn't artificial intelligence, but is a wider intelligence that becomes available when we open more of these channels.

And I just want to pick up on a beautiful comment from CR Burnett: “It sounds as if AI is enabling a more delusional reality, and we must continuously challenge our critical thought processes with a more substantial experience of the natural world.” I love that call for us to come into more resonance with nature as part of our response to what's happening.
Daniel:
Yeah, definitely. I mean, I'm not somebody who is able to be very didactic around what people's paths should be. I guess if I'm a yogi, I'm what's called a jnana yogi, which is the focus on the knowledge path. I'm also, on a philosophical level, a huge supporter of Dzogchen. And, you know, sometimes I do shamanic work, sometimes I meditate, but a lot of it is just—for me—writing and reflection is part of my intrinsic spiritual path. But everybody can go through the process of self-inquiry and figure out what they can do to find more stable ground internally.
Because yeah, we can see that something about the nature of our world and our society right now is moving towards more and more rapid changes. And this also seems to be something that's kind of built into the structure of where we're at — like an evolutionary thing that the Mayan calendar, and these other prophetic traditions, may have pointed towards.
I'm very intrigued by—although I've really only located this in one source—but it stays with me and I can't get it out of my head. Sergio Magana, who was a Toltec and Aztec-trained shaman, said that from his teachers he learned that their tradition was just a little bit different than the Mayan tradition. The Mayan tradition saw us entering the age of the sixth sun, where the Hopi talk about the transition from the fourth world to the fifth world as something that's happening now.
But for the Aztec and Toltec tradition, according to Magana, it's also a shift. The shift from the fifth sun to the sixth sun is a shift from a sun of light to a sun of darkness. So essentially, in the last 5,000-ish year cycle, the predominant tonality of human consciousness has been waking clarity—empiricism, rationality, wanting to know more about the world and engage with the physical world and so on. And that's where we should have gotten to.
But as we enter the sixth sun—the sun of darkness—which apparently we fully started to enter in 2022 or something like that, the predominant tonality of consciousness is shifting from waking clarity and empiricism to more like the dream world, the unconscious, and the psyche. So reality becomes more and more in some ways deceptive, in some ways dreamlike, in some ways psychically malleable and permeable.