
I recently proposed a thesis, based on monistic idealism, on how and why we experience an increasing incursion of the demonic. The essay is here, no longer pay-walled. Integrating ideas from William Irwin Thompson, Rudolf Steiner, Carl Jung, Bernardo Kastrup, and others, I propose that the “demonic” is neither a medieval superstition nor a delusion. It is a particular quality of consciousness, often dissociative: a kind of possession trance. Many other beings or forms of consciousness in the multiverse seek to piggyback on human minds, leading us astray. Without proper cultural safeguards, people easily open themselves to “influences” with malevolent or ambiguous intentions toward them and toward the human world as a whole.
Reasons for this incursion of the demonic include the collapse of a “sacred” dimension of shared myths and narratives; the overwhelming power of electronic media, which functions as a collective rite of indoctrination/initiation; and the resurgence of psychedelic use (and other esoteric practices) without an initiatory context, an over-arching model of a “vertical dimension,” or a shared framework for understanding how individuals and communities can reach higher levels of self-actualization. For someone like Steiner, accessing our evolutionary potential to attain “higher” levels of consciousness requires not only deepening our intellectual discernment but also developing our ethics.
I just watched New York Times columnist Ross Douthat’s troubling podcast, “The Forecast for 2027? Total A.I. Domination”, where he interviews Daniel Kokotajlo, formerly at OpenAI and now one of the founders of the AI Futures Project. Kokotajlo’s team recently released a report, AI 2027, which makes a compelling argument that we are just a few years away from engendering AI-based “super-intelligence” that will be beyond human capacity to control and could potentially – if not probably or even inevitably – cause humanity’s extinction in a short span of time.
The report proposes, Douthat notes, that, within a few years, “some machine god may be with us, ushering in a weird post-scarcity utopia or threatening to kill us all.” They discuss the idea that a rapidly evolving super-intelligence will develop its own goals and agenda, which might include rapid expansion into space. It would most likely see humans as an annoying drag on its quest and would soon decide to treat us as a farmer might treat a pesky group of rabbits nibbling at his vegetables (i.e., get rid of us). Kokotajlo also lays out what he considers a very likely scenario in which governments and corporations merge their efforts to accelerate the development of synthetic super-intelligence. We are already seeing our government removing AI safety guardrails and deregulating the technology, despite the warnings of many technologists.
We’ve reached a fascinating if frightening threshold. From this precipice, we can see the rapid evolution of synthetic intelligence as an outcome implicit in Western man’s dogmatic pursuit of rational intelligence and evidence-based knowledge, separate from any other noetic qualities of being or experience. From an esoteric Buddhist or monistic idealist perspective, we are co-creating this reality through our personal consciousness, which is a differentiated aspect of the underlying field of primordial awareness. “As perceived, so appears,” Buddha noted. We are in danger of reifying the materialist construct of a soulless universe of contingent life and barren matter, which then proceeds to revenge itself on us by annihilating us.
Now, of course, nobody knows what will happen in the future. I am deeply concerned by Kokotajlo’s thesis. In an online chat, someone also posted the following comment, which summed up my spidey-sense about the current direction of AI integration, particularly in the U.S., where figures like Elon Musk and Larry Ellison are working closely with the Executive Branch:
I do believe that there will be a significant tension between planet straddling corporate interests that will seek to automate everything, build defensible moats around all resources and all technology – and that will leave us out to dry. I do think that this AI revolution will contribute to that automation. In natural systems we see it all the time, ecologies bloom and crash, specific trophic niches have new competitors. What makes us, the great mass of humanity, privileged or outside of this? AI, wielded by an alien species called the super-rich, is a competitor.
I believe we are already seeing the super-wealthy teaming up with AI against the people. This is a very nefarious development. By the way, we will be exploring this – among many issues – in my upcoming seminar Breaking The AI Barrier, which you can learn more about and join here.
Now, part of what I personally believe can help us defend against the worst outcome is to embrace what is most unique and precious in our human experience. This includes domains of feeling, sensing, and sacralizing that AI cannot colonize. In “Will the Humanities Survive Artificial Intelligence?”, published in The New Yorker, D. Graham Burnett considers the profound impact of AI on higher education and the humanities. It is an interesting piece, combining terror at the destructive impacts of AI with a glimmer of hope that AI can force a new reckoning in which we reclaim those parts of ourselves — our personal feelings, our unique subjectivity — that can’t be assimilated into the AI mega-machine:
What it is like to be us, in our full humanity—this isn’t out there in the interwebs. It isn’t stored in any archive, and the neural networks cannot be inward with what it feels like to be you, right now, looking at these words, looking away from these words to think about your life and our lives, turning from all this to your day and to what you will do in it, with others or alone. That can only be lived.
This remains to us. The machines can only ever approach it secondhand. But secondhand is precisely what being here isn’t. The work of being here—of living, sensing, choosing—still awaits us. And there is plenty of it.
Burnett explores how AI tools can provoke deep philosophical and existential reflections among students, challenging them to reconsider the essence of human consciousness and creativity. AI cannot replicate the lived experiences and moral complexities that define the humanities. We can use AI to reevaluate what it means to be distinctly human, even in an age of super-intelligent machines.
While I appreciate Burnett’s essay, it shows no knowledge or appreciation of the initiatory and transcendent realms explored by occult visionaries like Steiner, Gurdjieff, Thompson, and others. (Please check out my lecture on the origins of the Western hermetic tradition, when you have time!) The mainstream liberal establishment continues to hold to the secular, materialist paradigm, out of fear of submersion under what Carl Jung called the “dark waters” of the unconscious. I propose — along with Kastrup, Neil Theise, Donald Hoffman, and others — that we now have a way to swim into those psychic waters without drowning.
Idealism doesn’t deny scientific reason or logic, and its basic model fits much better with the discoveries of quantum physics than reductive materialism or physicalism. I believe we need to pick up on Steiner’s idea of a “spiritual science” and develop it further. We need to build something like an “Imaginal Academy,” open for all humanity to join at whatever level suits them.
In our time, it is not enough to dismiss supersensible entities — angels, daimons, or demons — as primitive superstition, or to write off all encounters with the numinous as pathology or delusion. We can define a new approach to esoteric and occult contacts, fusing depth psychology, metaphysics, and ritual practice, to excavate lost continents of soul and spirit. Without this grounding, we remain trapped — whether in reductive materialism, obsolete religious models, or ungrounded forms of intuitive spirituality. Without a coherent ontology that meshes with both science and the more noetic elements of reality, we are lost, broadcasting and receiving signals we can’t understand. This allows the demonic to thrive, feeding on our confusion. The antidote is a coherent system of thought that encompasses metaphysical purpose and meaning. In a world where our received maps of meaning (such as physicalism) no longer work, we must build new symbolic structures for our time — as Jung, Steiner, and Gurdjieff did in the last century.
As Thompson, Steiner, and others suggest, the postmodern worldview is incomplete and broken due to the loss of the sacred and numinous. We can rediscover and rebuild a living symbolic architecture capable of mediating the numinous, integrating the unconscious and the psychic, and preparing the human psyche for healthy contact with transpersonal forces. We can call these forces divinities, angels, daimons, and demons, or, in more modern terms, “psychic attractors,” morphic resonances, or configurations of consciousness.
Such a system, or “Imaginal Academy,” needs to be experientially based, mythopoetically resonant, and ethically developmental. It can’t tilt too far toward either the purely intellectual or the purely intuitive. We need to reclaim our connection to the vertical dimension — the central axis traditionally encompassing psyche, cosmos, and transcendence — and define an ethical sensibility that anchors freedom in greater responsibility, as Steiner proposed.
Part of the foundation for a new esoteric container in the West is, I believe, Steiner’s Anthroposophy, although we will need to translate, develop, and reinvent this system for today’s post-secular world.