When discussing Artificial Intelligence (AI) today, names like OpenAI, Google, or Anthropic inevitably come to mind. Yet one company has significantly shaped the development of AI over decades and repeatedly set milestones, long before the current hype around generative models began: IBM. From the first chess computers to complex enterprise solutions, IBM has continuously reinvented itself and adapted its AI strategy to changing technological and economic conditions. This blog post traces IBM’s journey—from its beginnings as a pioneer through valuable but also costly lessons to its current positioning as a leading provider of AI for the B2B sector.

Historical Overview: Pioneering Work and Valuable Lessons

IBM’s history in the field of Artificial Intelligence is a fascinating journey marked by spectacular successes and ambitious yet challenging projects. Two events in particular have etched themselves into collective memory and had a lasting influence on the public perception of AI.

Deep Blue: The Symbolic Victory Over Human Intellect

In May 1997, something happened in New York that many had considered impossible: IBM’s supercomputer Deep Blue defeated reigning world chess champion Garry Kasparov in a six-game match under tournament conditions. This victory was far more than just a sporting triumph; it was a symbolic milestone demonstrating the immense computing power of modern computers. Deep Blue was capable of evaluating 200 million chess positions per second, a feat enabled by the use of 32 processors working in parallel. Kasparov himself was deeply impressed and declared: “For the first time in the history of mankind, I saw something similar to an artificial intellect.”

Although Deep Blue was primarily based on raw computing power (“brute force”) rather than human-like understanding of the game, the underlying technology laid the foundation for future developments. The capability for massively parallel data processing later found applications in fields as diverse as financial modeling, data mining, and pharmaceutical research.
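The "brute force" principle behind Deep Blue can be illustrated with a toy sketch, assuming a hand-written tree of position scores in place of real move generation: minimax search with alpha-beta pruning, the classic technique for exploring a game tree without visiting every branch. This is an educational illustration only, not Deep Blue's actual (largely hardware-based) implementation.

```python
# Toy illustration of brute-force game-tree search: minimax with
# alpha-beta pruning. The "game" is a hand-written tree of position
# scores; a real engine would generate legal moves and evaluate
# positions with a hardware or software evaluation function.

def alphabeta(node, depth, alpha, beta, maximizing):
    """Return the best achievable score from `node`, searching `depth` plies."""
    if depth == 0 or not isinstance(node, list):  # leaf: a static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:  # opponent would avoid this branch: prune it
                break
        return best
    else:
        best = float("inf")
        for child in node:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

# Leaves are static evaluations of positions two plies ahead.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))  # -> 6
```

The pruning step is what made deep search feasible at all; Deep Blue's contribution was scaling this idea with massively parallel custom hardware rather than changing the underlying algorithm.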

Watson: The Master of Natural Language Understanding

About 14 years later, in February 2011, IBM repeated the feat of pitting a machine against the best human minds—this time, however, in a far more complex discipline: understanding natural language. The computer, named after IBM’s first CEO Thomas J. Watson Sr., competed in the popular US quiz show Jeopardy! against the show’s two most successful champions, Ken Jennings and Brad Rutter, and won decisively.

Watson’s victory was a technological sensation because, unlike Deep Blue, the system could not rely solely on computing power. Jeopardy! requires understanding irony, wordplay, and complex semantic relationships. Watson had to analyze questions in natural language, generate hypotheses, find evidence in its vast knowledge database (fed from encyclopedias, books, and other sources), and evaluate the probability of the correct answer—all in a few seconds, without being connected to the internet. This breakthrough in Natural Language Processing (NLP) demonstrated that machines were capable of processing unstructured information and developing human-like conversational capabilities.
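The hypothesis-and-evidence pattern described above can be sketched in a few lines. This is emphatically not IBM's actual DeepQA pipeline, which combined hundreds of scoring components; the knowledge base, scoring function, and confidence measure here are simplified stand-ins.

```python
# Minimal sketch of the hypothesis-and-evidence pattern: generate
# candidate answers, gather supporting evidence for each, and rank
# them by a confidence score. All data and scoring are toy stand-ins.

KNOWLEDGE_BASE = {
    # candidate answer -> "evidence" passages (stand-ins for encyclopedia text)
    "Garry Kasparov": ["world chess champion defeated by deep blue in 1997"],
    "Ken Jennings":   ["jeopardy champion who won 74 consecutive games"],
    "Brad Rutter":    ["highest-earning jeopardy champion before 2011"],
}

def score(clue, passages):
    """Crude evidence score: fraction of clue words found in the passages."""
    clue_words = set(clue.lower().split())
    passage_words = set(" ".join(passages).lower().split())
    return len(clue_words & passage_words) / len(clue_words)

def answer(clue):
    """Rank every candidate hypothesis by its evidence score."""
    ranked = sorted(
        ((score(clue, ev), cand) for cand, ev in KNOWLEDGE_BASE.items()),
        reverse=True,
    )
    return ranked[0]  # (confidence, best answer)

conf, best = answer("this world chess champion was defeated by deep blue")
print(best, round(conf, 2))  # -> Garry Kasparov 0.78
```

The real system's achievement was doing this reliably over unstructured text, under time pressure, with calibrated confidence good enough to decide whether to buzz in at all.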

The Practical Test in Medicine: Ambition, Disillusionment, and Valuable Tuition

After the triumphant victory at Jeopardy!, expectations for Watson were immense. IBM invested heavily in the healthcare sector and founded the Watson Health division. The vision was to support doctors in diagnosing and treating diseases, particularly cancer. However, reality proved far more complex than a quiz show.

A prominent example of the challenges was the collaboration with the University Hospital Giessen and Marburg, which began in 2016. The goal was to use Watson in diagnosing rare diseases. However, the project was terminated before it could be applied to patients. The then-CEO of Rhön-Klinikum AG, to which the university hospital belongs, put it diplomatically but clearly: “The performance was unacceptable—the medical understanding at IBM simply wasn’t there.” The system had difficulties with complex medical terminology, negations in doctors’ letters, and the interpretation of abbreviations.

Renowned institutions such as the MD Anderson Cancer Center in Texas and the Memorial Sloan Kettering Cancer Center in New York had similar experiences in oncology. Doctors reported that Watson’s treatment recommendations sometimes fell short of those of a good resident physician. The projects were often discontinued despite heavy investment.

This phase was undoubtedly a setback for IBM and led to public criticism. IBM had paid the “tuition” that often comes with entering new territory. However, these experiences were invaluable. They revealed the limitations of the AI systems of that time and made it clear that transferring AI from a controlled environment like a game to the complex, unstructured, and highly regulated world of medicine is an immense challenge. IBM learned that processing vast amounts of data is not enough: context, data quality, and close collaboration with domain experts are crucial for success. These lessons would form the foundation for the next, far more successful phase of IBM’s AI strategy.

IBM’s Realignment: watsonx as an Enterprise AI Platform

The experiences with Watson Health were a turning point. IBM recognized that the future of AI in the enterprise context does not lie in a monolithic, omniscient super-AI, but in a flexible, modular, and above all open platform that enables companies to use AI on their own data and in their own processes securely and scalably. The result of this strategic realignment is watsonx, a comprehensive AI and data platform specifically developed for the needs of enterprises.

The core of the watsonx strategy can be summarized in three principles:

  1. Openness: IBM relies on open-source technologies and offers customers the flexibility to use various models—whether IBM’s own, open-source models from platforms like Hugging Face, or models from third-party providers—and to operate them across any cloud infrastructure.
  2. Trustworthiness: With a strong focus on AI Governance, watsonx enables companies to design AI workflows responsibly, manage risks such as bias and drift, and meet regulatory requirements. Transparency and traceability are at the center.
  3. Data Sovereignty: The platform is designed so that companies retain control over their own data. AI models can be trained and adapted with a company’s specific, trusted data to achieve more precise and relevant results.

The Pillars of the watsonx Portfolio

watsonx is not a single product but an integrated portfolio of tools covering the entire AI lifecycle in the enterprise. The central components are:

watsonx.ai (AI Developer Studio): A comprehensive development environment where AI builders can train, validate, tune, and deploy models. It provides access to a library of foundation models and tools for complete AI lifecycle management.

watsonx.data (Data Management): An open data store architecture that enables data from various sources to be unified, prepared, and made usable for AI applications. It is optimized for governance and scaling of AI workloads.

watsonx.governance (AI Governance): A toolkit for automating AI governance. It helps proactively manage risks, simplify compliance, and create responsible, explainable AI workflows.

Granite: The Foundation for Enterprise AI

A crucial building block of the watsonx platform is IBM’s own Granite Foundation Models. These models were specifically developed for enterprise use and released under the Apache 2.0 open-source license. They are trained to handle enterprise-relevant data from the areas of finance, law, code, and science.

In contrast to the huge general-purpose models, the Granite models are deliberately designed to be smaller and more efficient. This enables companies to operate them on more cost-effective hardware and optimize them for specific tasks. The Granite family includes various sizes, from “Nano” for edge applications to “Small” for complex enterprise workflows, and offers specialized models for tasks such as code generation or document conversion (Granite-Docling). Through transparency in training data and built-in “guardrails,” IBM directly addresses the concerns of many companies regarding the “black box” nature of AI and the protection of intellectual property.

Where IBM Excels Today: AI in B2B Deployment

With a clear focus on the B2B market, IBM has successfully anchored its AI offerings in the core processes of companies. Instead of spectacular showcases, the focus is now on measurable business results. The use cases are diverse and range from automating internal processes to improving customer experience.

Two additional products from the watsonx portfolio illustrate this pragmatic approach:

  • watsonx Orchestrate: This tool enables the creation and management of AI assistants and agents that automate repetitive tasks. It’s about increasing employee productivity by eliminating “busywork.”
  • watsonx Code Assistant: This tool supports developers throughout the software lifecycle by generating, explaining, and automating code. IBM reports time savings of over 40% in creating Red Hat Ansible Playbooks in internal use.

Success stories from customers like Vodafone, which was able to reduce processing time in journey testing by 99%, or Dun & Bradstreet, whose customers were able to reduce the time required to assess supplier risks by over 10%, demonstrate the effectiveness of this strategy. IBM’s AI is now deployed in 70% of global financial institutions and at 13 of the 14 leading systems integrators.

Concrete Application Areas: Where watsonx Makes a Difference Today

The strength of watsonx lies in its versatility and ability to integrate seamlessly into existing enterprise architectures. The most important application areas include:

Retrieval Augmented Generation (RAG) for Knowledge Management: Companies use watsonx to build question-and-answer systems based on their own documents, manuals, and knowledge databases. This accelerates decision-making and enables employees to quickly access contextual information.
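The RAG pattern described here can be sketched in plain Python. Assumptions worth flagging: the retrieval step below is a naive word-overlap score (production systems use vector embeddings and a vector store), and the final model call is left as a placeholder rather than a real watsonx.ai API invocation.

```python
# Minimal RAG sketch: retrieve the most relevant internal documents
# for a question, then ground the language model's prompt in them.
# Retrieval here is naive word overlap; real systems use embeddings.

DOCUMENTS = [
    "Expense reports must be submitted within 30 days of travel.",
    "VPN access requires two-factor authentication via the IT portal.",
    "New employees receive their laptop on the first working day.",
]

def retrieve(question, docs, k=2):
    """Return the k documents sharing the most words with the question."""
    q = set(question.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, docs):
    """Assemble a grounded prompt from the retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(question, docs))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_prompt("When must expense reports be submitted?", DOCUMENTS)
# `prompt` would now be sent to a foundation model, e.g. one hosted on watsonx.ai.
print(prompt)
```

The key property, and the reason RAG dominates enterprise knowledge management, is that answers are grounded in the company's own documents rather than in whatever the model memorized during training.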

Conversational AI and Chatbots: With watsonx Assistant, companies can quickly deploy voice agents and chatbots that provide automated self-service support across all channels. Integration with large language models enables more natural and context-sensitive interactions.

Code Generation and Developer Productivity: watsonx Code Assistant supports developers in creating code based on natural language, explains existing code, and automates repetitive tasks. This reduces complexity and enables teams to focus on value-adding activities.

Data Analysis and Pattern Recognition: Companies use watsonx.ai to extract insights from structured and unstructured data to identify trends, make predictions, and accelerate data-driven decisions.

Agentic AI: The Next Evolutionary Step

A particularly forward-looking aspect of IBM’s current strategy is the focus on Agentic AI—AI systems that not only respond to requests but proactively take on tasks and control processes. IBM describes this as the shift from “AI that chats” to “AI that acts.” With watsonx Orchestrate, companies can create autonomous AI agents that automate complex workflows, from processing support tickets to orchestrating business processes. This development is enabled by the strong instruction-following performance of the Granite models, which have been specifically optimized for such applications.
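The shift from “AI that chats” to “AI that acts” boils down to a loop in which a model chooses a tool and the system executes it. The sketch below is a hypothetical illustration of that pattern: the tool names and the `pick_action` stand-in are invented for this example, and in watsonx Orchestrate the routing, tool catalog, and underlying model are provided by the platform.

```python
# Hedged sketch of an agent loop: the "model" picks a registered tool
# and the agent executes it. `pick_action` is a keyword-based stand-in
# for a real LLM tool-selection call; all names are hypothetical.

def lookup_order(order_id: str) -> str:
    return f"Order {order_id} shipped yesterday."

def open_ticket(summary: str) -> str:
    return f"Ticket created: {summary}"

TOOLS = {"lookup_order": lookup_order, "open_ticket": open_ticket}

def pick_action(request: str):
    """Stand-in for the model's tool choice; a real agent asks an LLM."""
    if "order" in request.lower():
        return "lookup_order", request.split()[-1]
    return "open_ticket", request

def run_agent(request: str) -> str:
    tool_name, arg = pick_action(request)
    return TOOLS[tool_name](arg)  # act on the request, don't just chat

print(run_agent("Where is order A-1234"))  # -> Order A-1234 shipped yesterday.
```

The design point is the separation of concerns: tools are ordinary, auditable functions, while the model's role is confined to deciding which one to invoke, which keeps agent behavior governable.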

Future Perspective: AI as a Strategic Growth Driver

The importance of AI for companies will continue to increase in the coming years. A recent IBM study predicts that AI investments will rise by approximately 150% between 2025 and 2030 (measured as a percentage of revenue), while global productivity gains in the AI sector are expected to increase by 42%. With the new Enterprise Advantage Service, IBM offers companies a secure platform, common standards, and reusable AI assets to deploy agent-based AI at scale.

Conclusion: From Pioneer to Pragmatic Partner of Industry

IBM’s journey in the field of Artificial Intelligence is a lesson in technological evolution, strategic adaptability, and the importance of resilience. The early, media-effective successes of Deep Blue and Watson pushed the boundaries of what was possible and captured the imagination of an entire generation. The subsequent challenges, particularly in healthcare, were painful but necessary to develop a deeper understanding of the complexity of the real world.

Today, IBM no longer presents itself as the creator of an omnipotent superintelligence, but as a pragmatic and reliable partner for companies on their own AI journey. With the open, trusted, and data-centric platform watsonx and the Granite models built upon it, IBM has created an ecosystem that enables companies to harness the transformative potential of AI—securely, scalably, and with measurable business value. The pioneering work of the past has transformed into a mature strategy tailored to the needs of industry, securing IBM a strong position in the fiercely competitive AI market of the future.

Author

Sebastian Zang has cultivated a distinguished career in the IT industry, leading a wide range of software initiatives with a strong emphasis on automation and corporate growth. In his current role as Vice President Partners & Alliances at Beta Systems Software AG, he draws on his extensive expertise to spearhead global technological innovation. A graduate of Universität Passau, Sebastian brings a wealth of international experience, having worked across diverse markets and industries. In addition to his technical acumen, he is widely recognized for his thought leadership in areas such as automation, artificial intelligence, and business strategy.