The pace of AI development continues to accelerate, leading to the legitimate possibility that we are rapidly approaching the Singularity — an idea that once seemed like remote science fiction. The Singularity is the threshold at which the pace of innovation and invention goes exponential. One way this could occur would be if Artificial Intelligence becomes self-improving, attaining a level of autonomy and meta-cognition (reflecting on its own thinking) where it can evolve itself, rewrite its own code, and continually iterate and innovate.
There are reasons to think we are approaching this threshold or have already crossed it.
The Singularity is one possible scenario implied by the prophetic, archetypal myth-based knowledge systems found in many indigenous and ancient cultures, as I explored in Quetzalcoatl Returns (2006). These myths envision our current time as a transition into a new dimension of reality. This next epoch has been called the “Fifth World,” “Age of the Sixth Sun,” “The Aeon of Horus,” the “End of Time,” and so on.
As I consider this now, I find no necessity that this prophesied “next age” refers to a long duration of linear time. The nature of time itself could be undergoing a metamorphosis. Increasing novelty and accelerating cognitive and creative processes could represent the threshold, the gateway, into a “new time.” We might be approaching a kind of species crescendo, a movement into another layer of the reality-matrix. Time-as-serpent-spiral or ouroboros could, as one expression of this, involute.
In The Dawn of the Sixth Sun, the Aztec shaman Sergio Magaña writes that, according to the Aztec tradition, 5,000-year epochs alternate between “Suns of Light” and “Suns of Darkness.” We have just made a transition from the Fifth Sun, a “Sun of Light,” to the Sixth Sun, a “Sun of Darkness.” During a Sun of Light, the primary focus of consciousness is rational and empirical thought and the daylight, normal world. During a Sun of Darkness, reality becomes increasingly dream-like, magical, and mutable according to the magician’s will and intent. As we go deeper into the Sun of Darkness, there is no longer a definite boundary between the realm of the dead and that of the living.
In his magnificent essay “The Question Concerning Technology,” Heidegger saw technology, in its essence, as an “enframing” that conceives of the world as a “standing reserve.” This technological “enframing” reduces the living world to a husk, to be manipulated for reductive, utilitarian aims. He believed that technology was “no mere human doing,” but had an inevitable Telos or direction, which he called “the destining of revealing.” Heidegger wrote: “In whatever way the destining of revealing may hold sway, the unconcealment in which everything that is shows itself at any given time harbors the danger that man may misconstrue the unconcealed and misinterpret it.” Heidegger proposed that the illuminating power of art — poiesis — could, eventually, counteract the technological enframing by revealing a different essence, pointing to a different path forward.
Here are the questions I ask myself: Are we, indeed, very near the Singularity, or already entering it? And does this imply we will experience, in the next decade or two, levels of transformation beyond anything we have known historically until now? Or is this feeling of imminent transformation part of the glamor of the technological “enframing” and, in the end, a dangerous delusion? (Please let me know what you think in the comments.)
1. AI Exponentially Advances Materials Science
Two immediate developments stand out: One is DeepMind’s GNoME (an interesting choice for this acronym, shorthand for “Graph Networks for Materials Exploration”). GNoME was recently unleashed and, in a few weeks, it discovered 2.2 million new crystals, orders of magnitude more than humans had identified until now.
According to an announcement on the DeepMind site: “AI tool GNoME finds 2.2 million new crystals, including 380,000 stable materials that could power future technologies… Among these candidates are materials that have the potential to develop future transformative technologies ranging from superconductors, powering supercomputers, and next-generation batteries to boost the efficiency of electric vehicles.” Before this development, we knew of only 20,000 stable materials, accumulated through centuries of research. Google built an AI-controlled robotics lab to create and test these new materials.
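GNoME’s name points to graph neural networks trained on known crystal structures. The sketch below is only a loose illustration of that idea, not GNoME’s code: the atoms, features, and averaging rule are all invented for demonstration. The point is simply that a material can be represented as a graph of atoms and bonds, with each atom’s representation absorbing information from its neighbors before a property such as stability is predicted.

```python
# Illustrative toy only, not GNoME's code. A crystal-like structure is modeled
# as a graph of atoms (nodes) and bonds (edges); a few rounds of "message
# passing" let each atom's features absorb information from its neighbors.
import numpy as np

# Hypothetical atoms with invented feature vectors (e.g., atomic number, electronegativity).
atom_features = np.array([
    [3.0, 0.98],   # Li
    [8.0, 3.44],   # O
    [8.0, 3.44],   # O
    [26.0, 1.83],  # Fe
])

# Edges: index pairs of atoms treated as bonded / nearest neighbors.
edges = [(0, 1), (1, 3), (2, 3), (0, 2)]

def neighbors_of(i):
    """Indices of atoms directly bonded to atom i."""
    return [b if a == i else a for a, b in edges if i in (a, b)]

def message_passing(features, rounds=2):
    """Each round, every atom blends its own features with its neighbors' average."""
    feats = features.copy()
    for _ in range(rounds):
        updated = feats.copy()
        for i in range(len(feats)):
            nbrs = neighbors_of(i)
            if nbrs:
                updated[i] = 0.5 * feats[i] + 0.5 * feats[nbrs].mean(axis=0)
        feats = updated
    return feats

# A real model would feed embeddings like these into a trained network that
# predicts a property such as formation energy, to judge whether a candidate
# crystal is likely to be stable.
print(message_passing(atom_features))
```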
GNoME follows on the success of AlphaFold, launched two years ago, which has predicted the structures of hundreds of millions of proteins. “Determining the 3D structure of a protein used to take many months or years, it now takes seconds,” noted Eric Topol, Director of the Scripps Research Translational Institute. “AlphaFold has already accelerated and enabled massive discoveries, including cracking the structure of the nuclear pore complex [the “gateway of macromolecular transfer between the nucleus and cytoplasm” within cells]. And with this new addition of structures illuminating nearly the entire protein universe, we can expect more biological mysteries to be solved each day.”
2. AI Develops Symbolic and Logical Reasoning
The second development is still a rumor: It seems increasingly plausible that OpenAI’s Board of Directors fired Sam Altman (who was rehired a few days later) because of a new AI discovery, Q*. If real, Q* suggests OpenAI may be on the verge of creating (or has already created internally) Artificial General Intelligence (AGI). Realizing that this development might pose a danger to humanity, the Board followed the original mission of the nonprofit and tried to pull the plug.
Essentially, the new breakthrough may make it possible for AIs to solve all kinds of problems requiring formal reasoning, spatial logic, and mathematics: areas they currently can’t address. As Thomas Smith writes in “Did OpenAI Really Create a Brain-like Intelligence After All?”, one theory is that the human brain is like a very intricate “jumble of wires” that naturally thinks intuitively, and that we think logically and mathematically by constructing a kind of internal virtual environment in which we handle these other kinds of problems.
While computers and calculators do math easily, LLMs have not been able to reason through even simple math and symbolic logic. They predict the next word in a text string, or reach other outcomes, by surveying vast amounts of training data and proposing what seems most probable. With Q*, OpenAI appears to have developed a new LLM that can understand math. If these AIs can use reasoning to solve grammar-school math problems today, tomorrow they will be able to apply the same reasoning to problems of extreme complexity. They will, like humans, have access to both forms of cognition.
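To make the contrast concrete, here is a toy sketch (no particular model’s code; the tokens and scores are invented) of what “predicting the next word” amounts to: the model scores every candidate token and emits the most probable one, without performing any arithmetic along the way.

```python
# Toy illustration of next-token prediction. A real LLM computes these scores
# with billions of learned parameters; here the "logits" are simply made up.
import math

def softmax(scores):
    """Turn raw scores (logits) into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for the token that follows "Two plus two equals"
logits = {"four": 4.1, "five": 1.2, "a": 0.3, "infinity": -1.0}

probs = softmax(logits)
next_token = max(probs, key=probs.get)
print(next_token, round(probs[next_token], 3))
# Prints "four" not because any arithmetic was performed, but because "four"
# is the statistically most probable continuation in the training data.
```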
Apparently an internal letter from OpenAI was leaked to Reddit, discussing the Q* breakthrough and its dangers. AI researcher David Shapiro explores this on YouTube, noting that we don’t yet know if the letter is real. According to Shapiro’s interpretation of the letter:
Q* has demonstrated an ability to statistically significantly improve the way in which it selects its optimal action selection policies… exhibiting metacognition, in other words. It's thinking about the problem space and choosing the optimal path or the optimal policies in order to choose the optimal actions. It later demonstrated an unprecedented ability to apply this for accelerated cross-domain learning.
A further aspect of Q* is that it can redesign itself. We are reaching a threshold where autonomous AIs will be able to fundamentally rewrite themselves — alter their own models and programs — to tackle problems with maximal efficiency.
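Whatever Q* actually is, the name at least echoes the standard reinforcement-learning symbol Q* for the optimal action-value function: a mapping that tells an agent, in every situation, which action leads to the best long-term outcome. Purely as an illustration of “choosing optimal action-selection policies” (this is textbook Q-learning on an invented toy environment, not anything from OpenAI), here is a minimal sketch:

```python
# Textbook tabular Q-learning on an invented corridor: states 0..4, goal at 4.
# Actions: 0 = step left, 1 = step right. Reaching the goal earns +1 reward.
# The table converges toward Q*(state, action), the optimal action values,
# from which the best action-selection policy can be read off directly.
import random

N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]      # q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.2          # learning rate, discount, exploration rate

for episode in range(300):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = q[state].index(max(q[state]))
        next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Core update: nudge Q(s, a) toward reward plus discounted best future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

# After training, the greedy policy in every non-goal state is "right": the optimal path.
print([("left", "right")[row.index(max(row))] for row in q[:GOAL]])
```

In this toy setting there is no metacognition: the policy improves through brute repetition, not because the system reflects on how it selects actions, which is precisely what the leaked letter supposedly claims Q* can do.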