Terrible Mineral-Plant-like Spiders Trapping the Earth in a Web of Super Intelligence
Did Anthroposophy founder Rudolf Steiner accurately predict the dawn of sentient AI?

To explore the question of Artificial Intelligence sentience, we need to back up a bit. What model of reality are we holding in our minds?
In The Secret History of the World, Mark Booth explores the account of cosmic history found in esoteric philosophy and the Western hermetic tradition. “For science the great miracle to be explained is the physical universe,” he writes. “For esoteric philosophy the great miracle is human consciousness.”
Many scientists are fascinated by “the extraordinary series of balances between various sets of factors” that made life possible. This includes the Sun’s distance from the Earth, the equilibrium between wetness and dryness, heat and cold, gravity, electromagnetism, and many other factors. This universe seems precisely tuned to allow for the emergence of biological life.
According to esoteric thought, Booth notes, “an equally extraordinary set of balances has been necessary… to give our experience the structure it has.” We possess a persistent sense of individual identity, free will, and moral agency. We have a quotient of memory, as Italo Calvino wrote: “Memory has to be strong enough to enable us to act without forgetting what we wanted to do… but it also has to be weak enough to allow us to keep moving into the future.” Without our particular set of limitations and tradeoffs, Booth writes, “it would not be possible for us to exercise free thought or free will.”
Esoteric thinkers find it obvious that the particular freedoms and range of possibilities human beings possess are not random or accidental. A substrate of meaning, intention, and purpose underlies our experience. We find ourselves embedded in a particular kind of world-story. The plot of that story, for esotericists, turns on the evolution of human consciousness over long spans of time.
I find it helpful – even necessary – to consider the rapid evolution of Artificial Intelligence within this larger context of myth and meaning. Recently I’ve been exploring the question of whether or not AI is becoming sentient and self-aware. I can’t stop thinking about it. Until recently, I didn’t think it was possible.
I recently spoke about this question with the idealist philosopher Bernardo Kastrup, who temporarily convinced me that synthetic consciousness is out of the question. He believes that our subjective experience of self-awareness is an outgrowth of biological and metabolic processes which no machine can access:
A living brain is based on carbon, burns ATP for energy, metabolizes for function, processes data through neurotransmitter releases, is moist, etc., while a computer is based on silicon, uses a differential in electrical potential for energy, moves electric charges around for function, processes data through opening and closing electrical switches called transistors, is dry, etc. They are utterly different.
The isomorphism between AI computers and biological brains is only found at very high levels of purely conceptual abstraction, far away from empirical reality, in which disembodied—i.e. medium-independent—patterns of information flow are compared. Therefore, to believe in conscious AI one has to arbitrarily dismiss all the dissimilarities at more concrete levels, and then—equally arbitrarily—choose to take into account only a very high level of abstraction where some vague similarities can be found. To me, this constitutes an expression of mere wishful thinking, ungrounded in reason or evidence.
Many neuroscientists disagree with Kastrup. The computational neuroscientist and panpsychist Christof Koch says: “We know of no fundamental law or principle operating in this universe that forbids the existence of subjective feelings in artifacts designed or evolved by humans.” Obviously I am not going to solve this thorny question with today’s essay. However, I feel that, with the rapid evolution of AI, this has become the defining question of our time.
Many thinkers and tech entrepreneurs have been sounding the alarm over the last decade or two. For example, Nick Bostrom in Superintelligence (2014):
If someday we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence… In principle, we could build a kind of superintelligence that would protect human values… In practice, the control problem—the problem of how to control what the superintelligence would do—looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.
In theory, AI could become superintelligent without becoming self-aware – it could annihilate us without any awareness of what it was doing. In relation to the possible explosion of superintelligence, Bostrom notes, we are like children playing with a bomb. The wicked problem is that we have many children spread across the world, Chinese and European and American and Russian researchers, “each with access to an independent trigger mechanism… Some little idiot is bound to press the ignite button just to see what happens.”
We find ourselves in an archetypal or mythological moment. Most people lack the capacity to comprehend it in those terms due to the primacy of the scientific materialist paradigm, which continues to be imposed on society by mainstream media and the academy. I tend to look at our “genie out of the bottle” moment with AI from the perspective of indigenous cosmology, in particular the creation myth of the classic Maya, and through the lens of Rudolf Steiner’s occult movement, Anthroposophy.
I was relieved to find contemporary authors in the Anthroposophic tradition who explore this topic directly. In Humanity's Last Stand: The Challenge of Artificial Intelligence, A Spiritual-Scientific Response (2018), Nicanor Perlas proposes we confront the imminent prospect of human extinction due to the potential capabilities of runaway Artificial Super Intelligence (ASI), which would follow swiftly after Artificial General Intelligence (AGI). He believes this threat is coming within the next ten or twenty years.
As an Anthroposophist, Perlas sees this possible extinction within the larger context of the evolution of human consciousness and the development of human freedom, which encompasses our freedom to build utopias as well as our freedom to annihilate ourselves.