“More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity”, by Adam Becker, Basic Books, 2025, 30 EUR (hardcover)
“If you don’t sign up your kids for cryonics then you are a lousy parent” (Eliezer Yudkowsky, 2010). Before I could sink into deep self-doubt as a father of two seven-year-olds, I read on the following page: “’What’s being done now in the commercial cryonics industry is garbage. They’re making puddles of pink mush in a liquid nitrogen tank. It’s nothing that could ever be used for anything’, says Michael Hendricks, a neurobiologist at McGill University.” And it is currently not foreseeable whether, or when, the scientific foundations for this kind of preservation (or, more broadly, immortality) will ever exist.
Cryonics is one of the ambitions of Silicon Valley that author Adam Becker takes a sober look at. Without a doubt, we are currently in a time of AI hype and an almost boundless fascination with technology (not to say faith in technology). I myself am, of course, also part of the fan base of current AI capabilities, even if I believe I have a reasonably good understanding of the limits. In short, Adam Becker’s book helps to keep a sober and cool head amidst this hype. The book has been widely praised by critics as a smart and highly readable dismantling of unquestioned technology utopias. Which of his arguments readers accept is for each of them to decide, even if some voices note that Becker’s own skepticism toward future technological breakthroughs is perhaps a bit too categorical in places.
Adam Becker is a science journalist with a PhD in astrophysics. He has written for the New York Times, the BBC, NPR, Scientific American, New Scientist, and Quanta, among others. His first book, What Is Real?, was an Editor’s Choice of the New York Times Book Review and was on the longlist for the PEN Literary Science Writing Award. He was a Science Journalism Fellow at the Santa Fe Institute and Science Communicator in Residence at the Simons Institute for the Theory of Computing. He lives in California.
Limits of Technology vs. Boundless Technological Fantasies
Adam Becker familiarizes readers with ambitions and visions from Silicon Valley that often read like science fiction. The evangelists of technological progress, however, by no means want them understood as science fiction, and regularly attach concrete timelines to their visions. Futurists like Ray Kurzweil see the breakthrough to superintelligence within reach, along with complete brain scans and the upload of our consciousness into cyberspace; in short, immortality (with complete disembodiment). Nanorobots, directed by this superintelligence, will ultimately transform the entire universe into a gigantic data center.
The hope and the promise of salvation: the solution to all of humanity’s problems. Becker aptly exposes this attitude as an attempt to ask a non-existent AGI, almost like a genie in a bottle, for three wishes to save the world. But unfortunately, as Becker points out: “Technology doesn’t solve social and political problems, any more than it causes them. The prospect of nuclear war was made possible through technology, but it’s a live concern because of geopolitics. Humans could come together and choose to rid the world of nuclear weapons, just as we could come together to end global warming. Applying more intelligence and technology to these problems won’t solve them; they’re fundamentally political.” (p. 175)
What makes this narrative so attractive to Silicon Valley (however realistic it may be): “First, these ideas are reductive, in that they make all problems into problems about technology. (…) Second, these ideas are profitable, aligning nicely with the bottom line of the tech industry via the promise of perpetual growth. (…) Third, and perhaps most importantly, these ideas offer transcendence (…) you can ignore scarcity of resources, not to mention legal restrictions. (…) the ideology of technological salvation.” (p. 28f) As Becker aptly analyzes, this ideology results not least from a deep contempt for genuine subject matter expertise: The financial elite falsely equates their immense wealth with all-encompassing intelligence and infallibility.
Superintelligence
At the center of Silicon Valley’s hopes (and also fears) for the future stands, of course, the technological breakthrough to superintelligence, also affectionately called the “Singularity”. Some time ago I posted an assessment of expectations for this milestone from the AGI expert community (see: What experts say about when to expect AGI …)
The forecast of a near breakthrough rests, among other things, on the hypothesis that technological progress has developed exponentially over human history. Becker clearly questions this. Here his physics background shines: he reminds us that every exponential growth in nature – be it in bacterial colonies or in computing power – inevitably reaches physical limits: “I think maybe Kurzweil’s law is a little too simplistic, because it doesn’t take into account the fact that complexity also increases exponentially with advancements in technology. (…) ‘Put differently’, they wrote, ‘it is around 18 times harder today to generate the exponential growth behind Moore’s law than it was in 1971’” (p. 62)
Becker cites scientific analyses on the dynamics of research results: “The 2020 Stanford-MIT research on Moore’s law was part of a broader study, which came to a similar conclusion about the entire economy. The authors of that study provide compelling evidence that research productivity today – imperfectly measured as the translation of industrial R&D into economic growth – is over forty times lower than it was in the 1930s.” (p. 67).
Becker also points out that the concept of intelligence is extremely fuzzy. What exactly characterizes “Artificial General Intelligence” (AGI)? We have neither a deep understanding of how the human brain works, nor is it true that the “artificial neural brain” works like the human one, even if the terminology suggests as much. Adam Becker lets various voices from science speak, such as: “A recurring flaw in AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that this capacity is distributed across our civilization and culture. (…) Most cognitive scientists would agree that intelligence is not a quantity that can be measured on a single scale and arbitrarily dialed up and down but rather a complex integration of general and specialized capabilities that are, for the most part, adaptive in a specific evolutionary niche.” (p. 112)
And Becker also questions the so-called “orthogonality thesis”, according to which intelligence and goals are independent of one another, so that even a superintelligent AI could pursue aims that end in the extinction of humanity. “It just doesn’t seem to be the case that motivations are totally or even mostly divorced from intelligence. Intelligence requires reflection, self-examination, critically evaluating one’s own actions and drives. Without that capacity, there would be a great deal of other intelligent behaviour that an AI wouldn’t be able to engage in, such as modifying its behaviour in response to changing circumstances or even undertaking many forms of learning. We grow and change with increased experience and wisdom. Why would an AI not do that? ‘Complex minds are likely to have complex motivations,’ says tech entrepreneur and software developer Maciej Ceglowski. ‘That may be part of what it even means to be intelligent’” (p. 110)
Ceglowski continues: “When we look at where AI is actually succeeding, it’s not in complex, recursively self-improving algorithms. It’s the result of pouring absolutely massive amounts of data into relatively simple neural networks”, he says. “(…) Ceglowski, who was born in Poland, says the idea that a superintelligent being would inevitably want to improve itself is ‘unabashedly American’” (p. 111)
Anyone following the current discussion around LLMs also knows the limits of this algorithmic approach: all an LLM knows are tokens and the statistical connections between them. ChatGPT and other LLMs are text-prediction generators. (p. 115)
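To make the “text prediction” point concrete, here is a minimal toy sketch (my own illustration, not from the book, and many orders of magnitude simpler than a real LLM): a bigram model that learns nothing but which token tends to follow which, and then “generates” by picking the most frequent successor.

```python
from collections import Counter, defaultdict

# Train on a tiny corpus: count, for each token, which token follows it.
corpus = "the cat sat on the mat the cat ate".split()
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def predict(token):
    """Return the most frequent successor of `token` in the training text."""
    return following[token].most_common(1)[0][0]

print(predict("the"))  # -> 'cat' ('cat' follows 'the' twice, 'mat' only once)
```

An LLM replaces the frequency table with billions of learned parameters and conditions on a long context rather than one token, but the task is the same: predict the next token from patterns in the training data.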
EFFECTIVE ALTRUISM & LONGTERMISM
The discourse around so-called Effective Altruism (EA) has taken some astonishing turns. In the beginning, “Effective Altruism” was what the name suggests: an approach to finding the most efficient way to combat the great scourges of humanity, such as poverty. (Some critics of the book fairly note that Becker gives somewhat short shrift to these positive, pragmatic achievements of the EA movement in his at times very polemical reckoning.)
EA has since partly detached itself from these roots. Starting from a utilitarian ethic (in short: maximizing the common good as the sum of the utility of a maximum number of individuals), the utility of future generations now flows into the calculation of this “common good”. We know this line of ethical reasoning from the debate about climate justice. EA, however, pushes such thought experiments to mathematical extremes, into dimensions downright absurd from an ordinary observer’s perspective. Namely:
Based on the ideas outlined at the beginning, humanity’s development is extrapolated into a very distant future (= longtermism). Humanity colonizes the entire universe, on billions of planets, and for an almost unimaginably long period of time. Ethical considerations then take into account an unimaginably large number of future people: 1,000,000,000,000,000,000,000,000. Even if the probability of such a human population across the universe is vanishingly small, multiplying these enormous numbers by tiny probabilities still yields absurdly high expected utilities, which completely cloud our view of the real problems of the present:
“MacAskill and Greaves arrive at a stunning conclusion. ‘Every $100 spent [on AI safety] has, on average, an impact as valuable as saving one trillion [lives] … far more than the near-future benefits of [malaria] bednet distribution.’ For a strong longtermist, investing in a Silicon Valley AI safety company is a more worthwhile humanitarian endeavor than saving lives in the tropics.” (p. 167)
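The style of reasoning boils down to a one-line expected-value calculation. The numbers below are illustrative assumptions of my own (only the 10^24 population figure comes from the passage above; neither the probability nor the bednet figure is from MacAskill and Greaves):

```python
future_lives = 1e24      # hypothetical far-future population (figure quoted above)
p_success = 1e-10        # assumed tiny probability that $100 of AI-safety work averts extinction
expected_lives_ai = future_lives * p_success  # expected lives "saved" per $100

expected_lives_bednets = 0.02  # rough assumption: malaria bednets save ~1 life per $5,000

print(expected_lives_ai)                           # 1e14 expected lives per $100
print(expected_lives_ai > expected_lives_bednets)  # True, by many orders of magnitude
```

As long as the assumed far-future population is large enough, almost any nonzero probability makes the product dominate every present-day intervention, which is precisely the move Becker criticizes.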
MISCELLANEOUS
Finally, an interesting calculation that Adam Becker presents regarding the “limits to growth” (limits, by the way, that Silicon Valley’s evangelists want to overcome by colonizing the universe), namely regarding available energy:
Becker points out that our energy consumption has increased by about 3 percent annually in recent decades. “If humanity’s energy usage continues to grow by a more modest 2.3 percent per year, then in about four hundred years, we’d reach Earth’s limit – we’d be using as much energy as the Sun provides to the entire surface of the Earth annually.” (p. 24)
And even regarding our solar system there is a limit, even if that is hard to imagine when looking up at the sky: “(…) the energy available in space is just as finite, and just as subject to limits on growth. If growth in humanity’s energy usage were to continue at the same rate past the four-hundred-year mark, in 1350 years we’d be using all the energy produced by the sun.” (p. 24)
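Becker’s back-of-the-envelope numbers can be checked with simple compound-growth arithmetic. The wattage figures below are rough round numbers I am assuming, not taken from the book:

```python
import math

def years_to_reach(target_ratio, growth_rate=0.023):
    """Years of compound growth at `growth_rate` per year to multiply usage by `target_ratio`."""
    return math.log(target_ratio) / math.log(1 + growth_rate)

current_usage = 2e13   # W, rough current human primary power consumption (assumption)
earth_sunlight = 1e17  # W, rough sunlight reaching Earth's surface (assumption)
solar_output = 3.8e26  # W, total power output of the Sun (assumption)

# Earth's limit: ~375 years, close to the book's "about four hundred years"
print(round(years_to_reach(earth_sunlight / current_usage)))
# The Sun's entire output: ~1345 years, matching the book's order of magnitude
print(round(years_to_reach(solar_output / current_usage)))
```

At 2.3 percent per year, usage multiplies by roughly ten every century, so no plausible efficiency gain or revised starting figure changes the conclusion by more than a few decades.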
Enjoy reading!