In autumn 2018, Wired magazine announced: "AI has a hallucination problem". The headline referred to the fact that AI-based image recognition can be misled by small stickers bearing particular patterns, patterns that carry no meaning for humans: stop signs go unrecognized, and even simple objects are misclassified.

This confusion is triggered by stickers with psychedelic patterns (hence the Wired headline) or with specific geometric patterns. AI experts describe the phenomenon by saying that such neural networks ("deep learning") are not yet "robust". It should also be noted that the neural networks used for image analysis (convolutional neural networks) are a black box: which characteristics of an image a network detects, and how it interprets them, is neither specified during the design of the network nor can it be determined by its developers afterwards.
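In the digital domain, this lack of robustness is typically demonstrated with adversarial perturbations; the stickers are a physical variant of the same idea, known as "adversarial patches". The sketch below is a hedged illustration of the underlying principle rather than the attack described in the article: it uses the well-known Fast Gradient Sign Method (FGSM) in PyTorch, and the toy classifier, random "image", and label are all assumptions made for the demo.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return a copy of `image` perturbed to increase the model's loss (FGSM)."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, scaled by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Tiny demo with a toy classifier on a random "image" (hypothetical setup).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)   # batch of one RGB 32x32 image
label = torch.tensor([0])          # assumed ground-truth class
adv = fgsm_perturb(model, image, label)
print((adv - image).abs().max())   # perturbation stays within epsilon
```

The point of the example is that a perturbation bounded by a tiny epsilon, invisible or irrelevant to a human, can be constructed systematically from the model's own gradients, which is precisely why such attacks are hard to design against.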

Incidentally, adversarial AI should not be confused with Generative Adversarial Networks (GANs). A GAN is an architecture for training a generative model, for example one that produces deceptively realistic portraits. It consists of two components, a "generator" and a "discriminator": the generator tries to produce portraits, while the discriminator evaluates these results and either rejects or accepts them. This creates a feedback loop between the two components that trains the generator.
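The following sketch shows that generator/discriminator feedback loop in miniature. It is a minimal illustration in PyTorch, assuming toy two-dimensional data instead of portraits; all layer sizes, learning rates, and the synthetic "real" distribution are illustrative assumptions, not a production recipe.

```python
import torch
import torch.nn as nn

# Generator maps random noise to samples; discriminator scores samples as
# real (high logit) or fake (low logit). Sizes are illustrative.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + torch.tensor([4.0, 4.0])  # stand-in "real" data
    fake = generator(torch.randn(64, 8))

    # Discriminator: accept real samples, reject generated ones.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: produce samples the discriminator accepts as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Note the asymmetry in the loop: the discriminator is trained to tell real from generated, while the generator is trained only through the discriminator's judgment, which is exactly the feedback mechanism described above.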

Author

Sebastian Zang has cultivated a distinguished career in the IT industry, leading a wide range of software initiatives with a strong emphasis on automation and corporate growth. In his current role as Vice President Partners & Alliances at Beta Systems Software AG, he draws on his extensive expertise to spearhead global technological innovation. A graduate of Universität Passau, Sebastian brings a wealth of international experience, having worked across diverse markets and industries. In addition to his technical acumen, he is widely recognized for his thought leadership in areas such as automation, artificial intelligence, and business strategy.