Download the Tech Trend Report 2023 here

The Future Today Institute (FTI) has published its 16th Tech Trend Report, which provides a comprehensive overview of technological trends that will significantly change our everyday lives and work. The report is downloaded by more than 1 million users annually. The entire report is over 800 pages long and offers a well-structured overview of technological trends in a wide range of areas – artificial intelligence (AI), climate, biotechnology, finance, medicine, and more.

The trend report is based on the evaluation of extensive data on consumer behavior, as well as on insights into the activities and successes of numerous research institutions. This quantitative foundation gives the report a reliable short- to medium-term perspective. In some places, the report breaks out of this tight medium-term time frame and looks further into the future in both optimistic and pessimistic scenarios.

In the following paragraphs I’ll present some selected information bites from this Tech Trend Report 2023. I highly recommend reading the free report itself; it is structured in small reading bites, so you can browse through it from time to time.

The dynamics of change are – needless to say – enormous, and the Tech Trend Report not only provides a very comprehensive compilation of these trends and changes; it also identifies which industries and which areas of life are primarily affected by new trends, and within what time horizon. Above all, it is a trend report that is “actionable” for the management of companies. Below is the “Impact Matrix”, which provides a good overview:

In the following, I highlight some insights that I found particularly interesting – first and foremost, of course, the topic of AI.

Artificial Intelligence

82 pages (out of a total of 820) are devoted to the topic of artificial intelligence (AI). It is a very compact overview of a variety of topics. The table of contents of this chapter makes this clear:

Basically, the authors of the report classify the current developments in artificial intelligence, from a bird’s-eye view, as follows: While in the second decade of the 21st century AI development mainly revolved around perception and image recognition (recognizing traffic signs, recognizing dogs and cats, etc.), development in the third decade (2020 onwards) focuses on generative AI: these systems not only perceive and understand the world, but can also generate new content, concepts, and ideas as they communicate with us. And there’s more: AI systems of the future will not only try to find content that consumers like, but will generate personalized content tailored to their specific interests.

Generative AI applications such as ChatGPT (a milestone in AI development, especially in public perception) will increasingly be integrated into apps over the next 18-24 months. For example, Microsoft has already integrated OpenAI’s image generation model DALL-E 2 into its Microsoft Designer and Image Creator applications. As a result, these AI applications are having a broad impact.
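To give an idea of what such an integration looks like under the hood, here is a minimal sketch of generating an image with DALL-E 2 via the OpenAI Python library (v0.x, as available in early 2023); the prompt, image size, and environment-variable setup are my own illustrative assumptions, not taken from the report.

```python
# Minimal sketch: generating an image with DALL-E 2 via the OpenAI Python
# library (v0.x API, early 2023). Requires `pip install openai` and an
# API key; prompt and size below are purely illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

response = openai.Image.create(
    prompt="a watercolor illustration of a futuristic city skyline",
    n=1,              # number of images to generate
    size="512x512",   # supported sizes: 256x256, 512x512, 1024x1024
)

# The API returns a URL to the generated image.
print(response["data"][0]["url"])
```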

And in software development, AI coding assistants will gain popularity. OpenAI’s Codex, launched in 2021, evolved from research to open commercialization by the middle of last year. GitHub Copilot is now available as a subscription (10 USD per month). As of January, Amazon’s CodeWhisperer has been in preview. Internally, Google uses a machine-learning-powered tool to complete code – it could be made available to regular users at some point, possibly in 2023.
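As a rough idea of what these assistants do, here is a minimal sketch of a code-completion request against a Codex-style model using the same v0.x OpenAI Python library; the model name, prompt, and parameters are illustrative assumptions (Codex access was in limited preview at the time).

```python
# Minimal sketch: asking a Codex-style model to complete a function from a
# docstring via the OpenAI completions endpoint (v0.x library, early 2023).
# Model name and prompt are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = '''def is_prime(n: int) -> bool:
    """Return True if n is a prime number."""
'''

completion = openai.Completion.create(
    model="code-davinci-002",  # Codex model offered in preview at the time
    prompt=prompt,
    max_tokens=128,
    temperature=0,             # deterministic output is preferable for code
)

print(prompt + completion["choices"][0]["text"])
```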

The Trend Report also looks at how the market will develop in the long term and which market structures are emerging. Providers who benefit from network effects, i.e. collect enough user data, have a long-term chance of success. In the longer term, the Future Today Institute expects niche LLMs (Large Language Models) to be used by a few players, while general-purpose LLMs become a commodity.

It is already becoming apparent that oligopolization will occur, not least because of the highly capital-intensive infrastructure required for training and operating (large) AI models. As a guideline, training an AI model costs $1 per 1,000 parameters; GPT-3 from OpenAI therefore cost around 10 million dollars – something that research institutes with smaller budgets cannot afford. Clear statement in the report: “Just a handful of big companies dominate the AI landscape: Google, Amazon, Microsoft, IBM, Meta, and Apple in the US, and Baidu, Alibaba, and Tencent in China.”
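Taking the report’s rule of thumb literally, a back-of-the-envelope cost estimate looks like this; the helper function and the example model sizes are my own illustration, not figures from the report.

```python
# Back-of-the-envelope training-cost estimate using the report's rule of thumb
# of roughly $1 per 1,000 parameters. Function and example sizes are
# illustrative only; real costs depend on hardware, training duration, and data.
def estimate_training_cost(num_parameters: int, dollars_per_1k_params: float = 1.0) -> float:
    """Rough training cost in USD for a model of the given size."""
    return num_parameters / 1_000 * dollars_per_1k_params

for params in (1_000_000_000, 10_000_000_000, 100_000_000_000):  # 1B, 10B, 100B parameters
    print(f"{params:>16,} parameters -> ~${estimate_training_cost(params):,.0f}")
```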

I found an illustration at Momentum Works that shows this very well: there are only a few big players who are really driving the development of AI:

And the following infographic also underlines this finding very clearly (found in a LinkedIn post by Christian Schloegel, CDO & Member of the Executive Board at Körber AG):

Number of registered patents by company

In fact, we are already witnessing a race between some of the larger players, including OpenAI, Google, Microsoft, Meta, and IBM. What’s more, the three major LLM service providers work with hyperscalers: AI21 is on AWS, Cohere is on GCP, and OpenAI is on Azure. Adept.AI, an ML research and product lab, has entered into an agreement with Oracle Cloud Infrastructure. It is therefore logical that the Trend Report presents the major AI models around use cases such as text-to-video, text-to-speech, translation, image creation, 3D model generation, etc.

PaLM: An LLM from Google with 540 billion parameters. PaLM shows groundbreaking skills in numerous very difficult tasks. It is one of the largest LLMs and was trained on 6144 TPU chips (TPU = Tensor Processing Unit).
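To get a feel for why a model of this size needs thousands of accelerator chips, here is a rough calculation of the memory footprint of the weights alone; the 2-bytes-per-parameter figure (16-bit precision) is my own assumption for illustration.

```python
# Rough memory footprint of PaLM's weights, assuming 2 bytes per parameter
# (16-bit precision). This ignores optimizer state, activations, and gradients,
# which add several times more memory during training.
params = 540_000_000_000          # 540 billion parameters
bytes_per_param = 2               # bfloat16 / float16 assumption
weight_bytes = params * bytes_per_param

print(f"Weights alone: ~{weight_bytes / 1e12:.2f} TB")                 # ~1.08 TB
print(f"Per chip across 6144 TPUs: ~{weight_bytes / 6144 / 1e9:.2f} GB")
```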

By the way: The new version, GPT-4, is rumored to have one hundred trillion (100,000,000,000,000) parameters.

NLLB: Developed by Meta AI, this open-source model with 55 billion parameters is capable of delivering high-quality translations directly between 200 languages – including low-resource languages such as Asturian, Luganda, Urdu and more.

RETRO: The acronym stands for Retrieval-Enhanced Transformer, developed in February 2022 by DeepMind (part of the Alphabet group). Traditionally, the knowledge base of a transformer model consists only of the data on which it was trained. RETRO tackles this limitation by retrieving “facts” from an external database, giving the model an additional, updatable knowledge base. RETRO helps LLMs stay up to date without the need to retrain the models.
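The basic retrieval idea can be sketched without any ML machinery: look up the passages most relevant to a query in an external store and hand them to the language model as extra context. The toy similarity measure and fact store below are my own illustration, not DeepMind’s implementation.

```python
# Toy sketch of retrieval-augmented generation in the spirit of RETRO:
# fetch the most relevant "facts" from an external store and prepend them
# to the prompt, so knowledge can be updated without retraining the model.
# The word-overlap similarity and the fact store are illustrative only;
# real systems use learned embeddings and approximate nearest-neighbor search.

FACT_STORE = [
    "RETRO was introduced by DeepMind.",
    "Retrieval lets a language model use an external, updatable knowledge base.",
    "Transformers traditionally rely only on knowledge stored in their weights.",
]

def similarity(query: str, passage: str) -> float:
    """Crude relevance score: fraction of query words appearing in the passage."""
    q_words = set(query.lower().split())
    p_words = set(passage.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k most relevant passages from the external store."""
    return sorted(FACT_STORE, key=lambda p: similarity(query, p), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved passages to the user query before calling the LLM."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("Why does retrieval help a language model stay up to date?"))
```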

Overall, efforts in AI development are also geared towards ensuring that an AI interacting with humans gains an understanding of the relevant facts (e.g. the subject of conversation) as well as of the context the human user is in. For example, so-called vokenization aims to link words to images, and ActivityNet is a video dataset for understanding human activity that includes 700 hours of videos of people performing 200 different activities (long jumping, walking the dog, vacuuming, etc.). The goal is for an AI system to be able to recognize activities and also reliably determine their (temporal) beginning and end. In particular, the temporal delimitation of an activity (e.g. walking a dog) is one of the most complex and difficult tasks in computer vision.
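A common way to score whether a model has found the start and end of an activity is temporal intersection-over-union between the predicted and the annotated time interval; the small helper below is my own illustration of that metric, not code from ActivityNet.

```python
# Temporal intersection-over-union (IoU): a standard way to judge how well a
# predicted activity segment (start, end in seconds) matches the annotated
# ground-truth segment. Values range from 0 (no overlap) to 1 (perfect match).
def temporal_iou(pred, truth):
    intersection = max(0.0, min(pred[1], truth[1]) - max(pred[0], truth[0]))
    union = max(pred[1], truth[1]) - min(pred[0], truth[0])
    return intersection / union if union > 0 else 0.0

# Example: the model says "walking the dog" happens from second 12 to 30,
# while the annotation says second 15 to 32.
print(f"IoU = {temporal_iou((12.0, 30.0), (15.0, 32.0)):.2f}")  # 15 / 20 = 0.75
```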

And one more thing: deep neural networks are good at identifying objects in photos and videos and at processing natural language, but until recently the models had to be trained separately. Could there be a single model that combines all of these capabilities? That was Google’s proposal in 2017. Since then, DeepMind’s Gato has evolved into a transformer with 1.2 billion parameters, capable of performing hundreds of tasks in robotics, simulated environments, vision, and speech.

The Trend Report also poses the important question: Could 2023 be the beginning of the end for human radiologists? The Lithuania-based start-up Oxipit analyzes chest X-rays. The technology is so good that it received government certification to work independently without a radiologist in the loop.

And finally, let’s take a look at the hardware behind AI models. Remember how many transistors Intel’s first microchip had in 1971? 2,300 transistors. The next-gen chip Wafer Scale Engine 2 (WSE-2) from Cerebras has the following specs: 2.6 trillion (!) transistors, 850,000 cores, 40 gigabytes of on-chip memory, and 20 petabytes per second of memory bandwidth.
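The jump is easier to appreciate as a ratio; the snippet below simply divides the two transistor counts quoted above.

```python
# How far chip density has come: Cerebras WSE-2 transistor count divided by
# that of Intel's first microprocessor (the 4004 from 1971, ~2,300 transistors).
wse2_transistors = 2_600_000_000_000   # 2.6 trillion
intel_4004_transistors = 2_300

print(f"Factor: ~{wse2_transistors / intel_4004_transistors:,.0f}x")  # roughly 1.1 billion times
```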

Web 3.0, Autonomous Driving and Elon Musk

Beyond artificial intelligence, I also find the following bites of information from the Tech Trend Report 2023 worth mentioning:

New concept for computer chips: A new generation of computer chips is modelled on the human brain, with artificial nerve cells (neurons) and synapses. “Neuromorphic computing” is the name of this approach. It is supposed to save a lot of energy while offering maximum performance for special applications such as AI. In addition, several start-ups have announced clinical trials in 2023 for a computer chip implanted in the brain.

Autonomous Driving: The founder of the FTI, Amy Webb, dampens expectations of a rapid breakthrough in autonomous driving: “We’ve made a lot of progress in technology, but there’s still a lot of work to be done.” In principle, it is questionable in which environments autonomous driving is actually possible. It is conceivable that fully autonomous vehicles will drive between cities on highways. But use in metropolises such as New York, with many pedestrians and cyclists, is still very difficult for robo-taxis to master. Moreover, for autonomous vehicles to take off, it will require collaboration among a complex range of stakeholders, including regulators at various levels of government, technology developers, consumers, and those responsible for building the infrastructure – a task that will be difficult and time-consuming.

Amy Webb on Elon Musk: One cannot see “that his predictions are based on any data or evidence.” “I don’t understand him. And I don’t understand the fascination others have with Musk.”

Web 3.0: Digital identity is a key area in the evolution of the Web 3.0 landscape. For Web 3.0 to work, individuals must be given ways to identify themselves and to validate transactions, but the tools are not yet mature.
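One building block behind such identity schemes is plain public-key cryptography: a user signs a transaction with a private key, and anyone can validate it against the corresponding public key. The sketch below uses the widely available `cryptography` Python package purely for illustration and is not tied to any specific Web 3.0 standard or wallet.

```python
# Minimal sketch of key-based identity: a user signs a transaction with a
# private key, and anyone holding the public key can validate the signature.
# Requires `pip install cryptography`; not tied to any specific Web 3.0 stack.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # stays with the user ("self-sovereign" identity)
public_key = private_key.public_key()        # shared openly, acts as the user's identifier

transaction = b'{"from": "alice", "to": "bob", "amount": 10}'
signature = private_key.sign(transaction)

try:
    public_key.verify(signature, transaction)   # raises InvalidSignature if tampered with
    print("Transaction is valid and was authorized by the key holder.")
except InvalidSignature:
    print("Signature check failed; transaction rejected.")
```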

Author

The author is a manager in the software industry with international expertise: authorized officer at one of the large consulting firms; responsible for setting up an IT development center at the Bangalore offshore location; Director M&A at a software company in Berlin.