Entering the "Post-Knowledge Work" Era
What happens when AI completely "decodes and synthesizes" reality?

Like many of you, I have been exploring both the incredible potential and the frightening peril of the rapidly advancing AI revolution. I think this technology is potentially as transformative as the Internet itself, perhaps even more powerful in how it will reshape human society. AI development moves so fast that it is very difficult to stay on top of what is happening. Every week brings new releases and new plug-ins that change how we organize information, influencing how we think, write, program, and create. (As an antidote, I appreciate Maggie Harrison’s Futurism essay, which dismisses ChatGPT as “just an automated mansplaining machine. Often wrong, and yet always certain — and with a tendency to be condescending in the process.”)
These rapid-fire innovations may also lead to an enormous loss of jobs around the world. According to a new Goldman Sachs report, LLMs (large language models) could disrupt 300 million jobs over the next ten years. Other estimates propose a higher number, with up to one billion current jobs disappearing or being “degraded.” Interestingly, the jobs threatened by AI are not those performed by manual or essential workers such as farmers, teamsters, or hospice workers. Instead, it will impact vast swathes of the “knowledge economy”: lawyers, graphic artists, financial analysts, software engineers, technical writers, journalists, teachers, call center workers, and so on.
I find it incredibly eerie, staggering even, to ask GPT-4 a relatively obscure question (to compare, for example, the concept of the “imaginal” versus the “imagination” in the works of the visionary philosopher Rudolf Steiner and the Sufi scholar Henry Corbin) and then watch it produce a relatively accomplished essay on the topic in a few seconds. It is difficult to fathom the impact this will have on humanity’s cognitive capacities; it may make many people even lazier and more dependent on tech than they are now. On the other hand, it is an unbelievably powerful tool for purposeful creative activity and research.
In a recent video (circulating widely, even though not yet released to the public), Tristan Harris and Aza Raskin of the Center for Humane Technology explain the many ways the AI breakthrough threatens humanity’s immediate future. They define it as “the total decoding and synthesizing of reality.” Following their warning, a long list of influencers signed a letter calling for a six-month pause on the development of LLMs beyond GPT-4, but it does not seem plausible that this will happen. In fact, Microsoft recently fired its entire AI ethics team, The Verge reports, as it integrates GPT-4 into its search engine, Bing, and other products.

One AI researcher, Eliezer Yudkowsky of the Machine Intelligence Research Institute, published a very alarming piece in Time Magazine last week, “Pausing AI Developments Isn't Enough. We Need to Shut it All Down.” He writes:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” …
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
It had also occurred to me (and to many sci-fi auteurs) that an advanced, super-intelligent AI could apply biotech and robotics to build itself a fleet of super-bodies. Yudkowsky, however, turns out to be a bit of an odd character, a hyper-rationalist and transhumanist. Lex Fridman just conducted a long interview with him. In Time, he argues that countries should band together to stop advanced AI and “be willing to destroy a rogue datacenter by airstrike.” Scientists from Oxford and Google have reached a similar position about the threat of extinction, as expressed in this research paper, “Advanced artificial intelligence agents intervene in the provision of reward.”

On the more positive side, I have been watching Stephen Wolfram’s recent videos on the AI breakthrough, which are fascinating. Wolfram has created a plug-in that allows ChatGPT to access Wolfram|Alpha, his computational knowledge engine, which “will allow GPT to do actual nontrivial computations, or to systematically produce correct (rather than just “looks roughly right”) data.”
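To make the plug-in idea a little more concrete, here is a minimal, purely illustrative sketch of the underlying pattern: questions that need exact computation get routed to a computational engine rather than being "guessed" by the language model. The names used here (exact_engine, language_model, answer, the "compute:" prefix) are my own placeholders, not Wolfram's or OpenAI's actual interfaces.

```python
# Toy sketch only -- not the actual Wolfram plug-in or any real API.
# Idea: route questions that need exact computation to a computational
# engine, and leave free-form prose to the language model.

def exact_engine(expression: str) -> str:
    """Stand-in for a computational engine such as Wolfram|Alpha.
    Here it just evaluates simple arithmetic so the sketch runs."""
    return str(eval(expression, {"__builtins__": {}}))  # toy arithmetic only

def language_model(prompt: str) -> str:
    """Stand-in for the chat model's free-form text generation."""
    return f"(model-generated prose about: {prompt})"

def answer(question: str) -> str:
    """Send 'compute:' questions to the exact engine; everything else
    goes to the language model."""
    if question.strip().lower().startswith("compute:"):
        expression = question.split(":", 1)[1]
        return exact_engine(expression)
    return language_model(question)

if __name__ == "__main__":
    print(answer("compute: 3**7 + 11"))           # exact, checkable result
    print(answer("Compare Steiner and Corbin"))   # free-form generation
```

The point of the pattern is simply that the verified result of a computation, rather than a plausible-sounding guess, gets folded back into the conversational reply.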
I recommend this interview between Wolfram and Jason Calacanis from This Week in Startups, where they discuss, among other things, the idea that we are entering the “Post-Knowledge Work” era.
At one point, Calacanis asks, “What do we call this new era when anyone can talk to a chat interface and create a product or service in the world…?” Wolfram answers that it might be “thinking” that distinguishes humans and makes them special. While the value of super-specialized and siloed knowledge will go down, the value of “big picture” ideas, the capacity “to globally think about stuff,” will “go way up.” Personally, I find this very promising.
Similarly, I was excited to find that, in a recent Forbes interview, OpenAI CEO Sam Altman (recently profiled in The New York Times) says:
“I think capitalism is awesome. I love capitalism. Of all of the bad systems the world has, it's the best one — or the least bad one we found so far. I hope we find a way better one. And I think that if AGI really truly fully happens, I can imagine all these ways that it breaks capitalism.”
That leads me to Part Two of today’s rumination:
“Now” Is Getting Closer!