In autumn 2018, the magazine Wired announced: "AI has a hallucination problem". The headline referred to the fact that AI-based image recognition can be misled by small stickers carrying certain patterns that are meaningless to humans: stop signs are no longer recognized, and simple objects are misclassified.

This confusion is caused by stickers with psychedelic patterns (hence the Wired title) or stickers with specific geometric patterns. AI experts describe this phenomenon by saying that such a neural network ("deep learning") is not yet "robust". It should also be pointed out that neural networks for image analysis (convolutional neural networks) are a black box: which characteristics of an image a network detects and how it interprets them is neither specified during the design of the network, nor can it be determined by its developers.
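
To illustrate how small, targeted changes can derail a classifier, the following sketch shows the well-known Fast Gradient Sign Method (FGSM) in PyTorch. It is a minimal example, not the specific sticker attack described above; the model, the input image and the epsilon value are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge each pixel slightly in the direction
    that increases the classifier's loss, producing an adversarial image.
    Assumes pixel values are normalized to the range [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb the image by epsilon in the direction of the loss gradient's sign
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A perturbation of this kind is usually imperceptible to humans, yet it can flip the model's prediction, which is exactly what "not yet robust" means in practice.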

Incidentally, Adversarial AI should not be confused with Generative Adversarial Networks (GANs). A GAN is a structure used to train an AI algorithm, for example to generate deceptively real portraits. It consists of two components, the "generator" and the "discriminator": the generator tries to produce portraits, while the discriminator evaluates these results and either rejects or accepts them. This approach creates a feedback loop between the two instances that trains the generator.
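
The following sketch outlines this feedback loop in PyTorch. It is a minimal illustration of the generator/discriminator interplay; the network sizes, learning rates and data dimensions are placeholder assumptions, not a production architecture.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # placeholder dimensions

# Generator maps random noise to a synthetic sample; discriminator scores realness.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator: accept real samples, reject generated ones
    fake_batch = generator(torch.randn(batch_size, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), real_labels)
              + loss_fn(discriminator(fake_batch), fake_labels))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator accept its output
    g_loss = loss_fn(discriminator(generator(torch.randn(batch_size, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each call to `training_step` lets both networks learn from each other: the discriminator's feedback is the training signal that gradually improves the generator.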

Author

The author is a manager in the software industry with international experience: authorized officer at one of the large consulting firms; responsible for setting up an IT development center at the offshore location Bangalore; Director M&A at a software company in Berlin.