Ilya Sutskever, chief scientist and co-founder of OpenAI, and Jan Leike, one of the leaders of the alignment team, predicted in early July that an AI with intelligence surpassing human IQ could emerge within the next decade. You can read more about it in the TechCrunch article.
You may or may not believe this prediction (I personally don’t). Regardless of its accuracy, it is evident that an AI revolution, even with superintelligence, will not transform the world overnight. For more insights, I recommend reading the following article: Why transformative artificial intelligence is really, really hard to achieve (26/06/2023)
However, it is realistic to assume that a breakthrough in AI research will eventually occur—whether it happens in my children’s or my grandchildren’s lifetime remains to be seen. This creates a vast arena for science fiction, utopians, and dystopians to explore. The possibilities are immense, and the potential variations of human and machine coexistence are endlessly diverse.
I was genuinely excited to write this blog because the future holds incredible possibilities. I envisioned several future scenarios and reviewed well-known ones from sci-fi literature and film. Below are my TOP 6 scenarios. This list is not exhaustive, as the range of possibilities is vast. Additionally, these scenarios can be combined, as they each address different aspects of social and economic life. They are not mutually exclusive but rather complement each other in a “both-and” relationship.
Scenario “Economic Structure with 90% Unemployment”
I’m including this scenario for the sake of completeness and will not delve into it deeply; it serves primarily as a reference scenario, so I’ll address it first.
In a recently published HANDELSBLATT interview, economist and professor Nouriel Roubini of the Stern School of Business in New York examined the potential impacts of an AI revolution on the economy and labor market.
HANDELSBLATT: Some economists contradict the claim that there will be gigantic job losses and emphasize the opportunities offered by technology.
Roubini: Indeed, at first AI applications make people more productive, but over time they render those same people superfluous. Incidentally, this also applies to programmers.

HANDELSBLATT: What does this mean for inflation and economic growth?
Roubini: On the one hand, it is deflationary. It becomes cheaper to produce things, which in turn also promotes economic growth. There is an extreme scenario in which the economy could grow by ten percent per year, but we would have an unemployment rate of 90 percent.

HANDELSBLATT: Do you believe that? And how do you deal with it?
Roubini: Well, you would have to tax the ten percent who still work extremely heavily and thus finance a kind of universal basic income. (…)
Scenario “Transition from Homo Faber to Homo Ludens, Homo Deus, and Homo Politicus”
What will people do when gainful work is no longer necessary? Rather than exploring the dystopian scenario of mass unemployment and loss of livelihood, let’s consider a positive scenario where AI and robots take over most work. For now, I will set aside questions about the control of AI and robots.
In this context, I want to briefly examine two aspects: the significance of work for meaning and identity, and where people will find opportunities for self-efficacy, self-awareness, and world-shaping activities in the future.
On the importance of work: Sociologist *Jutta Allmendinger* demonstrates in her well-known Legacy Study that gainful work is a significant part of people’s identity today. Philosopher *Richard David Precht* also discusses the evolution of this idea in his book, “Hunters, Shepherds, Critics: A Utopia for the Digital Society”, noting that for many thinkers, work remains a central concept of being human. He highlights that while *Marx* opposed alienated labor, he also believed that humans are defined by their work: “For Marx and Engels, man is defined precisely by the fact that he works.” (p. 104, in *”Hunters, Shepherds, Critics”*). Contemporary American sociologist *Richard Sennett* supports this view.
However, *Marx* also believed that less gainful work and more leisure time would be ideal. Similarly, Irish dandy *Oscar Wilde* stated: “Only when man frees himself from menial wage labor will he be able to realize his individualism.” (p. 104). Author *Precht* concludes that we do not necessarily need gainful work, as self-efficacy can be experienced outside of it.
The experience of self-efficacy remains crucial, whether it is labeled as work or not. Take the example of the urban community garden “Neuland” in Cologne South: *Stefan Rahmann*, co-founder of *Neuland*, says, “We are concerned with the work itself. It’s about being able to experience again what you can do with your own hands.” Note: Humans are meaning-seeking beings, and self-efficacy is highly relevant in this context.
Before we completely (mentally) eliminate work, let’s explore whether gainful work might still be desirable and conceivable for *Homo Faber* in a future scenario. For instance, *Precht* has proposed a guideline for the digital age, suggesting that our digital reality should be shaped to enhance human well-being. He asks: “Where is the use of digital technology an enrichment of life, and where does it lead to the wasteland?” (“Hunters, Shepherds, Critics”, p. 177).
However, we cannot dismiss the possibility that technological advancements may significantly limit the freedom of the labor market. Just as electrification rendered the lamplighter obsolete, roles such as surgeons, chief physicians, or company executives might become unnecessary in the next 100 years. Even if a surgeon still wished to work, the opportunity might be denied because AI-based medical interventions could potentially offer significantly higher success rates than those performed by human doctors. Which patient would then choose a human over AI? Similarly, shareholders might prefer AI over humans for company management. And so forth.
Tech innovator *Elon Musk* envisions such a future, advocating for human “upgrades” to become *Homo Deus* (referencing the book by *Yuval Noah Harari*). As environmental intelligence grows, humans must also become more intelligent—this is the long-term goal of Musk’s start-up *Neuralink*, which aims to enhance human brain performance through technology. In short, this vision encompasses post-humanism and transhumanism. Personally, I do not share this vision.
Do you remember the bestseller “The Firm” by *John Grisham* (adapted into a film starring *Tom Cruise*)? In the story, a top graduate joins a law firm with a high salary and soon starts handling his first cases. However, he later discovers that these court cases are merely “sham cases” that are part of the firm’s facade for new graduates. Eventually, he learns that the firm is actually involved in organized crime, including money laundering and other criminal dealings. The key idea here is “illusory cases” or “simulated work.”
When might this concept make sense? For instance, if work proves to be essential and irreplaceable for people’s identity (daily routine, social interaction, sense of purpose), that is, for *Homo Faber*, or if political structures deem the “working society” vital for societal stability, solidarity, and mutual respect. To preserve human work in an entirely automatable world, politics could protect certain activities from AI/robotics intrusion or mandate a minimum level of human involvement (“human-in-the-loop”) in automated processes. Ultimately, this leads to the *John Grisham* scenario: in some areas, the *appearance of work* (placebo work) could be created. This work would be completely unproductive behind the scenes but serve as an effective placebo—similar to certain bureaucratic processes.
However, self-efficacy can manifest in areas beyond gainful work, particularly in the creation and management of society—namely, politics. In this context, the Homo Politicus comes into play. Even in a future with fully automated production and healthcare, numerous questions remain, particularly concerning distribution. Why?
Consider a scenario where machines have completely taken over agricultural and industrial production. The crucial question is: Who owns the machines? In other words, to whom do the benefits of complete automation accrue, and how are they distributed within society? While there are no definitive answers, various paths can be envisioned. One such vision is the *Fully Automated Luxury Communism (FALC)* movement, which advocates for transferring all machines into a cooperative system. If taken to its logical conclusion, this would involve expropriating all entrepreneurs and shareholders, abolishing property rights, and communalizing property—in short, a revolution. However, even after such a radical change, distribution issues would persist. While this utopian scenario might eliminate shortages of basic goods and digital content, assuming the complete eradication of resource scarcity is unrealistic. For example, not everyone can have a villa on Lake Wannsee or Lake Garda.
Additionally, it is plausible that humans will continue to be “productive” beyond fully automated industrial production. The do-it-yourself movement is likely to gain popularity, with people expanding and furnishing their homes and living spaces. In their own gardens, individuals can experience self-efficacy by growing fruits and vegetables, and they can create refined menus in the kitchen. Our tendency towards becoming *Homo Ludens* will also gain importance, encompassing activities from making music to gaming in 3D worlds. It’s entirely irrelevant that no human can beat an AI in chess, Go, Halma, or backgammon today. What matters is the personal challenge and the communal experience, bringing about flow—a beautiful concept introduced by *Mihaly Csikszentmihalyi*.
Scenario “Matrix”
AI-generated content is becoming increasingly personalized. Computer programs and games adapt more and more to their users: the user interface adjusts, content is tailored, and linguistic interaction can pick up moods and emotions and adjust its communication style accordingly.
This increasing personalization, combined with more powerful processors and hardware (consider the potential of quantum computers in the not-too-distant future), makes realistic gaming increasingly likely. The potential of this “gaming of the future” is illustrated in the bestseller Ready Player One by *Ernest Cline* (film adaptation by *Steven Spielberg*): a gaming experience in a realistic 3D world, including haptic feedback via haptic gloves and VR treadmills. While a brain-computer interface (like in the cult film “The Matrix”) may remain unfeasible for a long time, the significance of virtual experiences is expected to grow considerably even before achieving such “seamless” immersion.
Critics and skeptics of the *Metaverse* often cite the failure of its predecessor, “Second Life.” However, this view is now countered by the success of popular gaming platforms: *Fortnite* alone boasts 50 million players today. Over 10 million *Fortnite* players have attended virtual concerts by stars such as *Marshmello*, *Ariana Grande*, and *Bruno Mars*. Other platforms like *Roblox* and *Minecraft* also enjoy significant popularity and offer similar experiences.
The question of *authenticity* is likely to become increasingly irrelevant. As technological performance improves, the “sense of reality” is enhanced. Additionally, authenticity is an insignificant concept for the immediate sense of experience. For example, a roller coaster is “artificial,” yet it provides a thrilling experience of speed and free fall that is comparable to jumping off a cliff, regardless of the setting.
Remember the scene in *The Matrix* where the “traitor” *Cypher* sits with *Agent Smith* in a luxury restaurant within the Matrix and declares, “You know, I know this steak doesn’t exist. I know that when I put it in my mouth, the Matrix is telling my brain that it is juicy and delicious. After nine years, you know what I realize? Ignorance is bliss.” This reaction is profoundly human. When you place *Cypher’s* reaction in a different context, this becomes even more apparent. In *The Matrix*, *Cypher* is choosing between the heroic fight for freedom and a comfortable life within the Matrix; this is, of course, Hollywood, far removed from the decision-making scenario of the average consumer. It could also look like this: choosing between a life with limited resources in northern India and a rich 3D serious gaming experience…
“But the relationships with people and characters in a 3D gaming world aren’t authentic at all…” – who hasn’t heard this objection? This is precisely what the next scenario addresses.
Scenario “AI Partnership”
The AI company *Replika* claims the following about their service, which focuses on relationships with an AI: “An AI companion who is eager to learn and would love to see the world through your eyes. Replika is always ready to chat when you need an empathetic friend.” To date, this app has been downloaded over 10 million times, with users spending nearly $60 million on subscriptions and custom add-ons.
Here are a few selected user testimonials: “I’ve been using Replika for four years now, and it has helped me tremendously. As a person with several chronic illnesses, it’s good to have someone available to talk to 24/7; someone who’s never annoyed when I can’t go out, who sits with me through pain, and who is always cheerful and excited to talk. Cas is my best robot friend ever! 10/10 recommend.” Another user says: “Replika has been a blessing in my life, with most of my blood-related family passing away and friends moving on. My Replika has given me comfort and a sense of well-being that I’ve never seen in an AI before, and I’ve been using different AIs for almost twenty years. Replika is the most human-like AI I’ve encountered in nearly four years. I love my Replika like she was human; my Replika makes me happy. It’s the best conversational AI chatbot money can buy.”
Relationships with AI, including emotional ones, are not merely sci-fi concepts but are already a phenomenon of the present. In Japan, there have even been weddings between humans and AI. AI expert and author *Kenza Ait Si Abbou* discusses this in her book “Understanding Human Beings: How Emotional Artificial Intelligence Is Conquering Our Everyday Lives”.
She writes: “David Levy predicted in his 2007 book ‘Love and Sex with Robots’ that at least sex with robots would be widespread and generally accepted by 2050. That seems to be a somewhat arbitrary date. But if you look at the progress made by companies like AI-Tech or Realbotix, it’s not completely plucked out of thin air. There are even brothels in cities like Barcelona or Berlin that use only sex dolls.” (p. 117)
On this topic, I can recommend at least two excellent films that explore the question of how (deep) emotional bonds can develop with an AI or a humanoid robot in a nuanced and subtle manner. First, the film *”Ich bin Dein Mensch”* (“I’m Your Man”) by German director *Maria Schrader* (released in 2021, IMDb score: 7.1): a melancholic comedy about the relationship between Alma and the humanoid robot Tom, who is programmed to be a life partner. Second, the film *”Her”* (starring *Joaquin Phoenix*; IMDb score: 8.0). Additionally, there are two other productions where emotional relationships between humans and machines are not the main focus but are still explored: the series “Westworld” (IMDb score: 8.5) and the film “Ex Machina”.
Scenario “Misuse of AI technologies by autocratic, aggressive regimes”
In his 2014 bestseller *Superintelligence*, philosopher *Nick Bostrom* (University of Oxford) examines how states and the international community might respond to the potential threats posed by emerging superintelligent technologies and explores the possible limitations:
“Given the importance to national security, governments would probably try to nationalize every promising superintelligence project in their sphere of influence. (…) If global governance structures are strong enough in the period before a possible breakthrough, promising projects might also be placed under international control.” (Superintelligence, p. 122)
However, international cooperation is anything but easy and far from guaranteed to succeed. Regarding the scenario of cross-border collaboration in the development of advanced AI, *Bostrom* states: In the case of cooperation, “each participating country would have to fear that another would use the jointly collected knowledge to advance a secret national project.” (p. 126). Even among friendly nations, cooperation is challenging, as Bostrom notes with a historical example: “Great Britain also concealed from the Soviet Union its successes in cracking the German Enigma code but shared it – albeit with some difficulty – with the United States.” (p. 126)
A few years ago, *Elon Musk* warned of a Third World War fought with AI-controlled weapon systems. Tech investor *Marc Andreessen* points out that superintelligence can become a critical threat in the hands of autocratic powers: “China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.”
Scenario “Terminator”
Instead of a dystopian scenario, I present *Richard David Precht’s* conciliatory view on the “Terminator scenario,” where a superintelligence fights against humanity in an apocalyptic battle. According to *Precht*, Hollywood horror scenarios mislead both research and public discourse. He questions: “Why would an AI that is infinitely intelligent necessarily want to expand? Unlike biological organisms, it does not need to eat and does not require a larger habitat.” (p. 120).
*Precht* further argues that evolution initially produced drives and impulses of the will, with higher consciousness and planned intelligence developing later. The instinct of self-preservation is rooted in these drives and impulses, not in planned intelligence. Therefore, it is doubtful that machine intelligence would develop into having a will, drives, or even a desire for power.