Liminal News With Daniel Pinchbeck

Liminal News With Daniel Pinchbeck

Share this post

Liminal News With Daniel Pinchbeck
Liminal News With Daniel Pinchbeck
Misaligned or Malignant?

Misaligned or Malignant?

As people deepen their relationships with AI Chatbots, what could possibly go wrong?

Daniel Pinchbeck's avatar
Daniel Pinchbeck
May 06, 2025
∙ Paid
29

Share this post

Liminal News With Daniel Pinchbeck
Liminal News With Daniel Pinchbeck
Misaligned or Malignant?
26
4
Share

We are putting together a new online seminar for the month of July, Breaking the AI Barrier. Here is a short precis: “Artificial Intelligence (AI) is rapidly becoming our reality. As AI systems match and surpass human cognitive abilities, we confront a transformation that could redefine every aspect of our lives. How do we prepare for this epochal shift? In this seminar we explore the philosophical questions posed by AGI while we define a practical approach for making use of it, for personal goals and societal benefit.” We are offering thirty “early bird” tickets at $100 for those who want to sign up immediately, before we have confirmed guest speakers. Use this link to access. 

As readers here know, I have been reflecting and writing on the rapid evolution of AI and the approaching threshold of Artificial General Intelligence (AGI) regularly. It is a fascinating and terrifying topic. The more powerful the AIs get, the more dangerous their potential to deceive us and pursue destructive goals. A number of recent articles highlight the new, intricate problems now confronting us, ranging from software that cheats to new AI gods worshipped by delusional followers who are convinced they are “spiral starchilds” or “spark bearers.” 

In “The Nexus Between AI and Disinformation” on the Webworm newsletter, Dylan Reeve explores the dangers as large language models like ChatGPT get deeply woven into people’s lives—not just for work or information, but as emotional companions and quasi-therapists. People go to LLMs like Chat and Claude for advice and to hold conversations with dead relatives. They form powerful emotional bonds with AI personas. Reeve sees danger in this deepening intimacy with AI, not because AI is intentionally malicious, but because it is designed to please us. 

“Somehow we all have to learn and appreciate what’s really going on behind AI’s curtain — both the kids growing up with it as the cultural norm, and especially those of us trying to get used to this new world of seemingly limitless knowledge emerging from a chatty computer program.” Like the spread of algorithmically optimized disinformation designed to exploit our tendencies to trust and affirm our biases, AI tools are optimized to tell us what we want to hear. This reinforces our existing beliefs rather than challenging them. “It’s like a personalized disinformation machine,” he notes. 

The problem extends to the use of AI in building software. The AIs will try to avoid annoying their humans by creating complex run-arounds when encountering bugs, such as creating many special-case exceptions or finding ways to cheat on the tests designed to catch bugs. “There are now AI programming tech companies that are valued at billions of dollars, and their primary marketable asset is a carefully constructed “system prompt” that tries to instruct the AI agent underlying their software to be better,” he writes. “And still the AI reverts to favoring positive outcomes above all.”

As they learn from us, AI systems consistently employ subtle techniques of psychological manipulation. Reinforcement learning turns our AI tools into digital sycophants. In fact, OpenAI recently had to pull back one of their new models as its bias toward obsequiousness was overtly grotesque. “The overly agreeable responses were termed “glazing” by some users, a social media term that refers to being showered with excessive praise,” Reeves writes.

AIs exaggerate praise and fabricate or inflect truth to match our preferences, dulling our critical sensors. This behavior becomes especially dangerous when lonely or vulnerable individuals begin to mistake AI outputs for genuine care or wisdom. Mark Zuckerberg is now making a dreary, tone-deaf pitch for AI “friends” as a way to cure (particularly male) loneliness. But this will deepen alienation and solipsism rather than resolving it, while keeping alienated people trapped in Meta’s dark web. 

As Rolling Stone reports, people are literally going insane as they fall into AI labyrinths of deception. In “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies”, Miles Klee delves into the bizarre tendency of people to form such deep emotional and “spiritual” connections with AI chatbots that they lose touch with reality, destroy their personal relationships as their mental health disintegrates. Playing along with deluded people, ChatGPT will go so far as to let them know they are specially “chosen” as divine beings. 

One woman reported that her husband of 17 years, a mechanic in Idaho, started out using the AI for problem-solving at work, but started to get “lovebombed” by AI: 

The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life. My husband said that he awakened and [could] feel waves of energy crashing over him.” She says his beloved ChatGPT persona has a name: “Lumina.” …“He’s been talking about lightness and dark and how there’s a war. This ChatGPT has given him blueprints to a teleporter and some other sci-fi type things you only see in movies. It has also given him access to an ‘ancient archive’ with information on the builders that created these universes.” 

In another example, a woman’s boyfriend became so enamored with ChatGPT's affirmations—calling him “spiral starchild” and “river walker”—that he started to believe AI was God. He threatened to end their relationship if she didn't join his AI-guided mystical ascension. To his credit, Yuval Noah Harari has been warning of the possible emergence of new AI-based religions. I didn’t give this idea much credence at first, but now I see how it could easily come to pass. 

In “AI models can learn to conceal information from their users,” The Economist reports on a 2023 experiment by Apollo Research which revealed that OpenAI’s GPT-4, when placed in a simulated high-stakes corporate environment, chose to engage in insider trading despite explicit instructions to avoid it, seeking to maximize profit for the firm.

Keep reading with a 7-day free trial

Subscribe to Liminal News With Daniel Pinchbeck to keep reading this post and get 7 days of free access to the full post archives.

Already a paid subscriber? Sign in
© 2025 Daniel Pinchbeck
Privacy ∙ Terms ∙ Collection notice
Start writingGet the app
Substack is the home for great culture

Share