In an interview with the German business newspaper Handelsblatt in July 2019, Andy Jassy, then CEO of AWS and now CEO of Amazon, stated: “In ten or twenty years, most companies will no longer have their own data centers. Only tasks that require proximity—such as those in a factory—will still be carried out on-site in the future.”

The era Jassy so boldly sketched will not arrive in the foreseeable future. In the five years since the interview, the motto “cloud-first” (or even “cloud-only”) has given way to a trend toward hybrid approaches. See also my blog post Cloud Repatriation: Why Companies Are Bringing Workloads Back to Their Own Data Centers; high costs are one of the key factors behind this shift.

Nonetheless, the growth of hyperscaler data centers continues unabated, fueled in particular by the increasing demand for AI applications. Amazon alone plans to invest USD 75 billion this year, and that figure is likely to rise. According to an analysis by the news agency Bloomberg, the industry as a whole will spend over USD 200 billion this year on servers, chips, and data centers.

The growth of the data center sector as a whole is impressive: from 2015 to 2022, the number of servers increased from 54 million to 86 million. That is an increase of roughly 59 percent ((86 − 54) / 54 ≈ 0.59) over seven years.

While Andy Jassy may have been overly bold in his predictions for the hyperscaler market, his analysis of the underlying trends and drivers of change was accurate: data center operators face challenges from the increasing complexity of IT landscapes and from growing demands for data analysis. The latter requires the non-trivial integration of diverse data streams into analyzable data lakes, as well as the generation of business-relevant insights through data science. Jassy speculated that public cloud providers, the hyperscalers, are best positioned to manage this growing complexity.

Let’s take a closer look at some of the core challenges data center operators face today. While the challenges for hyperscalers and enterprise data centers may differ in certain aspects, there are also shared challenges.

Increasing Heterogeneity of IT Ecosystems in Data Centers

Data centers today are heterogeneous ecosystems: alongside the mainframe (which still processes 68% of all production IT workloads), distributed server technology has long been established, with operating systems such as Linux and Windows running in addition to z/OS. On-premises applications communicate with cloud-based systems, container technology is becoming widespread, and some critical systems are operated redundantly both in the cloud and on-premises.

In addition to this growing heterogeneity, operators must meet rising expectations around service-level agreements (SLAs), DevOps practices, and similar requirements.

There are several approaches to addressing these challenges:

First, there are next-generation applications designed specifically for these hybrid system landscapes. These include workload automation solutions such as ANOW!, which seamlessly integrate and manage hybrid cloud instances, on-premises applications, and mainframe ecosystems. For more details, refer to the blog post How Workload Automation Software Adapts to Current Trends and New Requirements.

Second, to ensure the interoperability of cloud and on-premises systems, companies increasingly rely on proven standards such as RESTful APIs and message brokering (a minimal sketch of this pattern follows after the third point).

And third, the parallel operation of on-premises and cloud environments often requires standardizing processes and technologies. While this initially involves effort, it frequently leads to long-term efficiency gains.
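
To make the second approach concrete, here is a minimal Python sketch of the pattern: a small integration job reads data from an on-premises system via its RESTful API and forwards it to a cloud service over the same open standard. All endpoints and payload fields are hypothetical.

    # Minimal sketch: bridging an on-premises system and a cloud service
    # via plain RESTful HTTP. All endpoints and fields are hypothetical.
    import json
    import urllib.request

    ON_PREM_URL = "https://erp.intra.example.com/api/v1/orders"  # assumed on-prem API
    CLOUD_URL = "https://api.cloud.example.com/v1/ingest"        # assumed cloud API

    def sync_orders() -> None:
        # 1. Read the current orders from the on-premises system.
        with urllib.request.urlopen(ON_PREM_URL, timeout=10) as resp:
            orders = json.load(resp)

        # 2. Forward them to the cloud service via the same open standard.
        request = urllib.request.Request(
            CLOUD_URL,
            data=json.dumps(orders).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=10) as resp:
            print("Cloud service responded with HTTP", resp.status)

    if __name__ == "__main__":
        sync_orders()

In practice, a message broker (for example Apache Kafka or RabbitMQ) would decouple the two sides further, but the principle of open, well-documented interfaces on both ends stays the same.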

The Era of Big Data in Data Centers

IDC predicts that the global volume of data will grow to 221 zettabytes by 2026. That is 221,000,000,000,000,000,000,000 bytes, a 221 followed by 21 zeros.

Data has always been the foundation of data-driven decisions and, in the context of artificial intelligence, is often referred to as the “new gold”: the more data available, the better the AI model trained on it can become (see the section “The Era of AI” below).

The value of this “new gold” depends on several prerequisites, which in turn define the challenges. Take, for example, a global manufacturer pursuing a hybrid cloud strategy. Significant challenges arise in harmonizing and standardizing data from diverse origins, such as legacy systems, external partner APIs, and cloud-based SaaS solutions, into a usable data lake. The diversity of formats and structures requires careful transformation to ensure that the data can be interpreted and processed correctly. Without precise alignment, analysis errors can occur, which in the worst case can even have legal consequences.
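
To illustrate the harmonization problem, here is a minimal Python sketch: records from a legacy system and a partner API arrive in different shapes and are mapped onto one common target schema before they land in the data lake. All field names, formats, and sample values are invented for this example.

    # Minimal sketch of data harmonization for a data lake: records from
    # a legacy system and a partner API arrive in different shapes and are
    # mapped onto one common schema. All names and values are invented.
    from datetime import datetime, timezone

    def from_legacy(rec: dict) -> dict:
        # Legacy export: German date format, amounts as comma-decimal strings.
        return {
            "order_id": rec["AUFTRAGSNR"],
            "amount_eur": float(rec["BETRAG"].replace(",", ".")),
            "ts": datetime.strptime(rec["DATUM"], "%d.%m.%Y").replace(tzinfo=timezone.utc),
        }

    def from_partner_api(rec: dict) -> dict:
        # Partner API: ISO 8601 timestamps, amounts in cents.
        return {
            "order_id": rec["orderId"],
            "amount_eur": rec["amountCents"] / 100,
            "ts": datetime.fromisoformat(rec["createdAt"]),
        }

    # One normalized row per source system, ready for the data lake.
    harmonized = [
        from_legacy({"AUFTRAGSNR": "A-4711", "BETRAG": "199,90", "DATUM": "24.12.2024"}),
        from_partner_api({"orderId": "B-0815", "amountCents": 4999,
                          "createdAt": "2024-12-24T10:30:00+00:00"}),
    ]
    print(harmonized)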

According to various sources, around 60% of corporate data is currently stored in the cloud (see Statista or Forbes). At the same time, a large portion of global corporate data is reported to be stored on, or to originate from, mainframes; a blog post by Rocket Software even puts this at 80% of worldwide corporate data. These figures are difficult to reconcile with the cloud storage share cited above, and verifying either claim would require considerable effort.

The conclusion stands: the volume of data stored or generated on mainframes is substantial. Unsurprisingly, making this data usable is a critical topic. Model9 (acquired by BMC in 2023), for example, pursued exactly this goal: making mainframe data accessible to cloud-based applications.
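
One small but telling example of the hurdle: mainframe datasets are typically EBCDIC-encoded and must be converted before cloud-based tools can read them. The following Python sketch shows only this character-set step (Python ships the EBCDIC code page cp037); products such as the former Model9 of course handle far more, from record formats to VSAM datasets. The file name here is hypothetical.

    # Mainframe data is typically EBCDIC-encoded; Python ships the EBCDIC
    # code page cp037. Simulate an EBCDIC record and convert it to UTF-8
    # so a cloud-based application can process it. File name is hypothetical.
    ebcdic_record = "HELLO FROM THE MAINFRAME".encode("cp037")

    text = ebcdic_record.decode("cp037")
    with open("record_utf8.txt", "w", encoding="utf-8") as f:
        f.write(text)
    print(text)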

Lastly, for completeness: data centers must also prioritize backup and recovery plans to ensure data integrity and enable quick restoration in the event of a system failure or disaster.
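
As a minimal illustration of that point, the following Python sketch archives a directory and records a SHA-256 checksum that is verified before any restore. The paths are hypothetical, and a real recovery plan would add retention policies, offsite copies, and regular restore drills.

    # Minimal sketch of a backup-and-verify step: archive a directory,
    # record a SHA-256 checksum, and check it before any restore.
    # Assumes a local directory ./data exists; paths are hypothetical.
    import hashlib
    import tarfile
    from pathlib import Path

    def back_up(src: str, archive: str) -> None:
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(src)
        # Store the checksum alongside the archive to detect silent corruption.
        digest = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
        Path(archive + ".sha256").write_text(digest)

    def verify(archive: str) -> bool:
        expected = Path(archive + ".sha256").read_text().strip()
        actual = hashlib.sha256(Path(archive).read_bytes()).hexdigest()
        return expected == actual  # restore only if this holds

    back_up("data", "backup.tar.gz")
    print("backup verified:", verify("backup.tar.gz"))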

The Era of AI

The rush for AI, which began with the release of ChatGPT in November 2022, continues unabated despite occasional pessimistic predictions. For more on this, see my blog post Where Does AI Stand in the Gartner Hype Cycle?

The high demand for GPU chips and the associated massive investments are well known, and they create particular challenges for hyperscalers. As Handelsblatt notes:

“The high investments represent a massive bet on a future where Amazon [and other hyperscalers] and its cloud customers will earn dream margins with new AI services. (…) As a result of the boom, there could indeed be excesses, Hamilton said. An industry ‘overbuild’—meaning more server or energy infrastructure than is needed to meet demand—is fundamentally possible. (…) The parallel structures that inevitably emerge could quickly lose value due to a technological leap; an unexpected innovation could turn the market upside down. Some analysts are watching the tech companies’ race with concern for this reason. Amazon’s stock price fell by more than 20 percent in August after the company reported significantly higher investments.”

And What Else? – Location Choice, Energy Consumption, Cybersecurity, …

The energy demand of data centers continues to rise, not least due to the high demand for AI applications. While semiconductor manufacturers (above all Nvidia and AMD) keep improving the energy efficiency of their hardware, the overall growth in data production and computational requirements still outpaces these gains.
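
A back-of-the-envelope Python sketch shows why: when demand grows faster than efficiency improves, net energy consumption rises every year regardless. The two rates below are assumptions chosen purely for illustration, not measured industry figures.

    # Back-of-the-envelope model: assumed annual rates, for illustration only.
    efficiency_gain = 0.20  # assumption: 20% more work per kWh each year
    demand_growth = 0.35    # assumption: compute demand grows 35% per year

    energy = 1.0  # relative energy consumption, year 0 = 1.0
    for year in range(1, 6):
        energy *= (1 + demand_growth) / (1 + efficiency_gain)
        print(f"year {year}: relative energy demand = {energy:.2f}")

With these assumed rates, energy demand still grows about 12.5 percent per year, roughly 80 percent over five years, despite substantial efficiency gains.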

Location strategies for AI training sites are shifting toward regions with robust fiber optic infrastructure and readily available energy, such as along the 41st parallel in the U.S. These regions are now regarded as “prime locations” for data centers, and hyperscalers are focusing on “power-first” sites such as Council Bluffs, Iowa, and Lancaster, Texas. In addition, AI workloads that are less latency-sensitive are driving demand in regions with lower energy costs and tax incentives.

In Germany, the majority of modern data centers are located in the Frankfurt area—thanks in no small part to DE-CIX Frankfurt, the world’s largest internet exchange point by data throughput.

Securing the power supply is a strategic priority. Hyperscalers increasingly focus not only on procuring energy but also on generating it; Amazon, for example, is already investing in small nuclear reactors, which are being discussed as a potential long-term solution. Furthermore, as societal acceptance of the hyperscalers’ business model becomes more important, companies and AI providers are relying more heavily on renewable energy.

And last but not least: cybersecurity has become an even more critical issue in recent years. The rise in cyber threats and the potential for data breaches have prompted data center operators to strengthen their defenses, for example through improved encryption technologies, real-time threat monitoring, and regular employee training programs.

Explore more on related topics …

  • The Future of the Mainframe
  • The Future of the Mainframe (Part IV): Key Selling Points to Secure Mainframe Technology’s Future
  • Cloud Repatriation: Why Companies Are Bringing Workloads Back to Their Own Data Centers
  • Highlights from HOT 2024 – the largest conference for WLA in Europe
  • What is our current position in the Gartner hype cycle for AI?
Author

The author is a manager in the software industry with international experience: authorized officer at one of the large consulting firms, responsible for setting up an IT development center at the Bangalore offshore location, and Director M&A at a software company in Berlin.