The futurist Amy Webb, founder of the Future Today Institute (FTI), has a remarkable overview of diverse (technological) trends – ranging from extreme weather events and space tourism to advances in artificial intelligence (AI). Her “Tech Trend Report” (free download) gives an excellent overview; its predictions and trend analyses are based on quantitative methods.
Amy Webb has a close-knit network in the tech industry: she regularly attends the World Economic Forum in Davos, she speaks at various tech events (such as SXSW), and she has a good understanding of the industry’s dynamics. And she seems to be concerned – that is why she decided to write the book at hand. Webb sees great potential in artificial intelligence; however, she is also aware of major risks and challenges. Not all of her concerns are new: the control problem related to superintelligence (the singularity) has been known to the greater public since the bestseller “Superintelligence: Paths, Dangers, Strategies” (published 2014) by the Oxford professor Nick Bostrom. But Webb goes beyond that: she provides an overview of today’s (global) governance structures for AI, and she presents ideas to ensure responsible use of this technology.
“The Big Nine: How the Tech Titans & Their Thinking Machines Could Warp Humanity” by Amy Webb, published by PublicAffairs, 265 pages, 2019
Future Scenarios with the Key Technology AI: Challenges
In China, the digital industry is subject to the strategic (and systematic) plans of the Chinese government, which aims to become the leading technology nation in AI and to establish an efficient surveillance regime. The US, by contrast, has not come up with an AI strategy matching the scope and funding of the Chinese one; the development of AI in the USA is essentially left to the market, driven by consumerism and market demands. Amy Webb considers this problematic: “[Their] financial interests do not always align with what’s best for our individual liberties, our communities, and our democratic ideals” (p. 6).
The author has an ambivalent relationship with the digital industry (the Big Nine: Alibaba, Amazon, Apple, Baidu, Facebook, Google, IBM, Microsoft, Tencent). On the one hand she insists: “I firmly believe that the leaders of these nine companies are driven by a profound sense of altruism and a desire to serve the greater good: they clearly see the potential of AI to improve health care and longevity” (p. 4). On the other hand, the companies are exposed to the pressure of expectations from “Wall Street” (as the author likes to put it), and this financial pressure can push humanistic and altruistic goals to the sidelines; an aggravating factor is that the companies’ values are by no means deeply rooted in their corporate cultures, let alone formulated in sufficient detail to be effective. For the Chinese companies, the pressure from Beijing comes on top of the pressure for commercial success. We don’t know whether Webb really believes in the altruistic mission of the Big Nine; but by making “Wall Street” the scapegoat in this scenario, Webb can maintain her good relationships with the tech industry (required to stay close to its dynamics) while voicing her concerns.
The Chinese strategy for technological leadership (defined in 2017) comes with side effects, especially security risks. Amy Webb notes with dismay about her observations in China: “Bridges and buildings routinely collapse, roads and sidewalks buckle, and there have been too many instances of food contaminations to list here. (That isn’t hyperbole. There have been more than 500,000 food health scandals involving everything from baby formula and rice in just the past few years.) One of the primary causes for these problems? Chinese workplaces that incentivize cutting corners. It is absolutely chilling to imagine advanced AI systems built by teams that cut corners.”
This is – in a nutshell – the status quo with regard to artificial intelligence (AI). In addition, there are well-known challenges: the control problem of a superintelligence, the lack of transparency about how AI algorithms work (the “black box problem”), and the poor quality of training data (so-called “corpora”), which may contain discriminatory biases against minorities or along gender lines. The author provides a wealth of illustrative, very catchy examples.
Finally, Webb develops three scenarios for the future, which illustrate both the great potential of AI and her concerns. She spans a wide time horizon, from 2019 (the book’s year of publication) to 2069. Of the three scenarios, the first is the “Optimistic Scenario”; the other two must clearly be classified as dystopias. This distribution (2:1) underlines that the futurist Webb is worried. And it is difficult to dismiss her concerns as mere prophecies of doom: she does not look into the proverbial crystal ball, but rather extrapolates known trends into the future. Let’s have a look.
“The Optimistic Scenario” (p. 155 ff.): The central characteristic of this scenario is an International Coalition for the Enforcement of Ethical Standards in AI Development, together with close monitoring and support of AI research and development. There is, for example, generous funding for research, studies and surveys on the use of AI in various fields; there are diversity programs; and there are large investments in education to offset the disruption in the labor market (“making America’s public education great again”). The tech industry has managed to establish common technology standards, protocols, and data formats.
As for China, economic policy measures (e.g. sanctions) are used to enforce conformity; countries committed to “AI governance” also decide to stop admitting Chinese students to their universities. Eventually, China joins the “Coalition of Good AI Governance”. Incidentally, this coalition later decides (in 2069) against taking the possible step towards superintelligence.
The scenario contains (like the other scenarios) some science-fiction-like components: “Toothbrushes, which come with tiny oral fluid sensors, use your saliva as a mirror reflecting your overall health. With each routine brushing, AIs are monitoring your hormones, electrolytes, and antibodies, checking for changes over time.” And in 2049 (when AI has reached the general intelligence of humans): “Every family has a butler because every household has an AGI [Artificial General Intelligence].” One more sci-fi detail before we move on to the next scenario: “Microscopic computers, the size of a grain of sand, would gently rest on top of the brain and detect electric signals. Special AGI systems, capable of reading and interpreting those signals, could also transmit data between people.”
“The Pragmatic Scenario”: The name is quite euphemistic, because this scenario is – by any standard – a real dystopia. An international coalition is missing, and in the development of AI, quick results in the competition for market share are prioritized over security. Thanks to its technological dominance, China can impose its autocratic political system on partner countries: suppression of religion, suppression of a free press, discrimination against ethnic minorities, and so on. The social divide deepens; elites live in gated communities.
In Western countries, AI-based apps follow the nudging principle: they recommend the healthier option on the menu, reward the “right” decision, and so on. De facto, freedom of choice has long since ceased to exist, because this nudging algorithm cannot be switched off and – worse still – health insurance and life insurance premiums are directly linked to behaviour.
The disruption of the labor market is badly managed, and a “digital caste system” emerges in society. Safety robots maintain public order in this socially divided society; trained on data biased against African Americans, the robots discriminate against them. The attitude to life is depressing: “You, like all Americans, are learning to live with constant, low-grade anxiety. (…) Your home has been turned into a big container for marketing, which is constant and intrusive.”
Finally, China wages cyber war against America, which ends with the Digitally Occupied States of America.
“The Catastrophic Scenario” hardly needs to be read after that; the “Pragmatic Scenario” is warning shot enough.
Future Scenarios with the Key Technology AI: Impulses for Global Governance
The author Amy Webb does not see herself as a pessimist; she finds the potential of AI fascinating (and rightly so). However, she believes it is essential to set a strategic and systematic agenda so that the development of AI can run in favour of mankind. She has already anticipated some of the answers (measures) in the “Optimistic Scenario”: namely a Global Alliance on Intelligence Augmentation (GAIA), which she envisions as a multilateral institution in the tradition of Bretton Woods (the international agreement on a global financial system that laid the foundations for global prosperity after the Second World War).
In addition, she suggests a plethora of other measures, such as the compilation of an atlas of human values by cultural anthropologists, sociologists, psychologists, and so on. This atlas should serve – according to Webb – as a basis for the value-based development of AI.
A comprehensive set of rules is to be added, such as: the principle of safety before speed; AI must be explainable; the well-being of mankind must be irrevocably at the centre of AI development; the transparency of AI systems must be verifiable by an independent party, a trustee; and GAIA members should allow inspections (comparable to IAEA inspections) at any time to ensure that the rules are followed.
Webb also intends to use AI technology itself to enforce compliance: “Sentinel AI would formally prove that AI systems are performing as intended, and as the AI ecosystem matures towards AGI, any changes made autonomously that might alter a system’s existing goals would be reported before any self-improvement” (p. 244).
From the US, Amy Webb expects a consistent AI strategy across all authorities and research institutes; for systematic coordination spanning all authorities she strongly recommends a dedicated agency (she calls it the “Strategic Foresight Office”, SFO). In principle, countries should not leave AI exclusively to market forces, that is, to profit-oriented companies. That is why state authorities or GAIA institutions need their own (generous) research budget to fund research on security and transparency – a budget that is not subject to the logic of a profit centre.
As for the “corpora” used to train AI algorithms: they must be adjusted for bias and their data quality improved. Here the author suggests that the digital companies could share the costs of creating such an improved database. In addition, Webb would oblige the digital companies to spend part of their R&D budget on risk analysis and on evaluating possible negative consequences through pilot testing.
Webb also demands that Chinese digital industry executives demonstrate courageous leadership and oppose inhumane projects in Beijing – which seems somewhat helpless. Even the ideas mentioned above (e.g. GAIA) require a huge political effort on a global scale; still, there is a lot at stake, and Webb is right to raise awareness. To close on a positive note: Bretton Woods and the IAEA are historical examples where international cooperation clearly worked … though at a time when mankind enjoyed the “Pax Americana”.