As artificial intelligence (AI) continues to transform how businesses operate, one area often overlooked in the rush to deploy models and applications is the network. Enterprise networks are the circulatory system that supports AI: from the massive bandwidth requirements of training models to the real-time inference demands at the edge. But are today’s networks ready?
Who’s Preparing for AI, and How?
A study conducted by Enterprise Management Associates (EMA) surveyed a broad mix of IT professionals who are actively adapting their networks for AI workloads. All participating companies had AI strategies underway, and the vast majority had already deployed at least some AI applications in production.
The Big Picture: Where AI Workloads Will Live
One of the first takeaways from the research is how distributed AI has become. Companies are deploying AI workloads across public clouds, private and co-located data centers, and increasingly, edge computing environments.
Most companies expect to use proprietary large language models (LLMs) and machine learning as part of their production workloads. Meanwhile, a significant portion are experimenting with open-source LLMs and more advanced techniques like agentic AI. Fewer organizations are currently leveraging retrieval-augmented generation (RAG), which suggests lower awareness or maturity in that area.
AI workloads are expected to span hyperscaler clouds, private infrastructure, emerging GPU-as-a-service platforms, and enterprise edge environments. This highly distributed architecture has enormous implications for both data center networks and wide area networks (WANs).
Top Concerns: Security, Cost, and Skills
When asked about concerns around networking for AI, several recurring themes emerged.
- Security risks ranked highest across the board. Organizations are rightly concerned about data protection, regulatory compliance, and the complexity of integrating with third-party AI services.
- Cost pressures were also widely reported. Making a network AI-ready is rarely a cheap exercise; it may require new switching infrastructure, upgraded WAN links, or changes to both overlay and underlay networks.
- Rapid technological change has left many teams feeling like they’re in a race to keep up. The pace of innovation, with the industry moving from ML to LLMs to agentic AI in a short span, makes long-term planning difficult.
- Skills gaps are another concern, as new protocols, tools, and vendors enter the enterprise landscape.
Readiness Gaps in Data Center and WAN Infrastructure
Although many of these companies are already running AI workloads, fewer than half believe their data center networks are fully prepared to support AI. A similar share say the same about their WAN infrastructure.
Inside the Data Center:
- A large majority are investing in high-speed Ethernet switching, including cutting-edge 800 GbE gear.
- Many are adopting hyperconverged infrastructure (HCI) stacks, often built in partnership with GPU vendors like NVIDIA.
- A notable portion are implementing smart NICs and DPUs to offload encryption and networking tasks from CPUs and GPUs.
- Most respondents say they’re relying on Ethernet, often enhanced with RoCE (RDMA over Converged Ethernet), to ensure low-latency performance for AI workloads.
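To put those speeds in context, consider a rough back-of-envelope calculation for distributed training: in a ring all-reduce, each of N workers transfers roughly 2(N-1)/N times the gradient size on every synchronization step, so link rate directly bounds how often a cluster can sync. The Python sketch below walks through the arithmetic; the model size, worker count, and link speeds are illustrative assumptions, not figures from the EMA study.

```python
# Back-of-envelope: time to all-reduce one set of gradients over a ring,
# ignoring protocol overhead and assuming the network is the bottleneck.
# All parameters below are illustrative assumptions.

def ring_allreduce_seconds(model_gbytes: float, workers: int, link_gbps: float) -> float:
    """Each worker transfers ~2*(N-1)/N times the gradient size per step."""
    transfer_gbytes = 2 * (workers - 1) / workers * model_gbytes
    return transfer_gbytes * 8 / link_gbps  # gigabytes -> gigabits, over line rate

if __name__ == "__main__":
    grads = 28.0   # ~7B fp32 gradients, in gigabytes (hypothetical model)
    nodes = 16
    for gbps in (100, 400, 800):
        t = ring_allreduce_seconds(grads, nodes, gbps)
        print(f"{gbps:>3} GbE: ~{t:.2f} s per gradient sync")
```

At these numbers, the jump from 100 GbE to 800 GbE cuts a multi-second sync to well under a second per step, which is why switching upgrades and RDMA offload rank so highly.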
Across the WAN:
- A strong majority are pursuing high-performance cloud interconnects to bridge data centers and cloud environments.
- Many are investing in dedicated AI backbone networks, leveraging third-party providers for low-latency, high-bandwidth connectivity.
- A majority also consider SD-WAN and SASE overlays essential to orchestrating and securing AI-related traffic across hybrid environments.
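The core mechanic behind those overlays is application-aware path selection: continuously measure the candidate WAN paths and steer sensitive AI traffic onto the best one. The sketch below simulates that loop in miniature; the path names and RTT figures are invented for illustration, and real SD-WAN controllers also weigh loss, jitter, and policy constraints.

```python
# Toy SD-WAN-style path selection: steer AI API traffic onto whichever WAN
# path currently shows the lowest round-trip time. Paths and probe results
# are simulated; a real overlay probes continuously and applies richer policy.
import random
import time

PATHS = ["mpls-dc-east", "inet-broadband", "cloud-interconnect"]  # hypothetical paths

def probe_rtt_ms(path: str) -> float:
    """Stand-in for an active probe; returns a simulated RTT in milliseconds."""
    base = {"mpls-dc-east": 18.0, "inet-broadband": 35.0, "cloud-interconnect": 9.0}[path]
    return base + random.uniform(0.0, 10.0)  # simulated jitter

def pick_path_for_ai_traffic() -> str:
    rtts = {p: probe_rtt_ms(p) for p in PATHS}
    best = min(rtts, key=rtts.get)
    print({p: round(r, 1) for p, r in rtts.items()}, "->", best)
    return best

if __name__ == "__main__":
    for _ in range(3):   # a few selection rounds
        pick_path_for_ai_traffic()
        time.sleep(0.1)
```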
Optimizing Traffic for AI
It’s clear that WAN optimization needs to evolve to meet the demands of AI. Enterprises are no longer looking for general-purpose bandwidth enhancements – they want tools and techniques that are AI-aware:
- Many are prioritizing compression and deduplication technologies optimized for AI datasets.
- A similar number are adopting traffic shaping and prioritization policies that distinguish sanctioned AI from general or “shadow” workloads.
- Others are implementing forward error correction (FEC) and other techniques to reduce dropped packets and improve data reliability; a minimal FEC sketch follows this list.
- A growing portion are deploying edge computing to move inference closer to where data lives, reducing latency and WAN congestion.
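Forward error correction trades a little extra bandwidth for resilience: redundant packets let the receiver reconstruct a loss without waiting for a retransmission round trip across the WAN. The sketch below is a deliberately minimal illustration using one XOR parity packet per group; production FEC schemes use more capable codes such as Reed-Solomon, and the packet contents here are toy values.

```python
# Toy packet-level FEC: for every group of k equal-length data packets, send one
# XOR parity packet. If exactly one packet in the group is lost, the receiver
# rebuilds it from the survivors instead of requesting a retransmission.
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets: list) -> bytes:
    """Parity packet = XOR of all data packets in the group."""
    return reduce(xor_bytes, packets)

def recover(received: list, parity: bytes) -> list:
    """Rebuild at most one missing packet (marked None) from the rest plus parity."""
    missing = [i for i, p in enumerate(received) if p is None]
    if len(missing) > 1:
        raise ValueError("a single XOR parity can only recover one lost packet")
    repaired = list(received)
    if missing:
        survivors = [p for p in received if p is not None]
        repaired[missing[0]] = reduce(xor_bytes, survivors, parity)
    return repaired

if __name__ == "__main__":
    group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]     # toy equal-length packets
    parity = make_parity(group)
    damaged = [group[0], None, group[2], group[3]]   # packet 1 lost in transit
    assert recover(damaged, parity) == group
```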
Securing AI-Driven Networks
Security remains a central concern in every phase of the AI journey.
- A clear majority of companies are encrypting AI data to minimize compliance and leakage risks; a minimal encryption sketch follows this list.
- Many are using AI-powered threat detection tools, such as next-gen NDR and XDR platforms that can adapt to AI-specific attack patterns.
- A strong portion are applying zero trust principles to AI workloads in both the data center and across WAN traffic.
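As a concrete illustration of the first point (an assumed approach, not one prescribed by the study), the sketch below encrypts a dataset shard with the Fernet recipe from Python’s widely used cryptography package before it crosses the WAN. Key handling is deliberately simplified; in practice the key would come from a KMS or HSM rather than being generated inline.

```python
# A minimal sketch of encrypting an AI dataset shard before it leaves the
# data center, using authenticated symmetric encryption (Fernet) from the
# "cryptography" package. Key handling here is illustrative only.
from cryptography.fernet import Fernet

def encrypt_shard(plaintext: bytes, key: bytes) -> bytes:
    """Return an authenticated ciphertext token for one dataset shard."""
    return Fernet(key).encrypt(plaintext)

def decrypt_shard(token: bytes, key: bytes) -> bytes:
    """Verify integrity and recover the original bytes at the destination."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()          # in practice: fetch from a KMS/HSM
    shard = b"feature vectors destined for a remote GPU cluster"
    token = encrypt_shard(shard, key)
    assert decrypt_shard(token, key) == shard
```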
The more forward-looking organizations are also incorporating AI-specific threat intelligence, locking down API integrations, and preparing for targeted attacks on AI models themselves.
Visibility and Control: Managing AI Workloads at Scale
While many companies are making strides, fewer than half feel their observability tools are fully ready to manage AI-centric networks. However, almost all believe that AI-driven network management tools will be critical to success.
What changes are these teams making?
- Most are moving toward real-time monitoring of network metrics and flow data.
- Many are expanding visibility across the entire network to detect performance and security issues earlier.
- A significant number are adding packet capture capabilities for forensic analysis and troubleshooting.
- Many want observability tools that can identify and classify AI traffic, enabling them to spot issues with both sanctioned and shadow AI; a simple classification sketch follows this list.
- Other desired features include predictive congestion alerts, anomaly detection, and GPU cluster traffic analysis, all critical for understanding how bursty AI training and inference traffic affects the network.
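To make the classification idea concrete, here is a minimal sketch that tags flow records as sanctioned AI, shadow AI, or general traffic by matching destinations against an allow-list. The domains and flow-record fields are hypothetical placeholders; a real deployment would draw on NetFlow/IPFIX exports, DNS telemetry, and vendor-maintained signature sets rather than a hard-coded set of hostnames.

```python
# Minimal sketch: classify flow records as sanctioned AI, shadow AI, or general
# traffic, then total bytes per class for dashboards and alerting.
# All hostnames and record fields are hypothetical placeholders.
from collections import Counter

SANCTIONED_AI = {"api.approved-llm.example", "inference.internal.example"}
KNOWN_AI = SANCTIONED_AI | {"api.unapproved-llm.example"}  # broader AI signature set

def classify(flow: dict) -> str:
    dst = flow["dst_host"]
    if dst in SANCTIONED_AI:
        return "sanctioned-ai"
    if dst in KNOWN_AI:
        return "shadow-ai"   # AI service traffic outside the approved list
    return "general"

if __name__ == "__main__":
    flows = [
        {"dst_host": "api.approved-llm.example", "bytes": 120_000},
        {"dst_host": "api.unapproved-llm.example", "bytes": 45_000},
        {"dst_host": "intranet.example.com", "bytes": 9_000},
    ]
    volume = Counter()
    for f in flows:
        volume[classify(f)] += f["bytes"]
    print(dict(volume))  # per-class byte counts, e.g. for spotting shadow AI growth
```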
What High Performers Do Differently
The research also revealed patterns among organizations that report higher confidence in their AI readiness:
- They are more likely to hire AI specialists rather than rely solely on upskilling existing staff.
- They place strong emphasis on managing third-party connectivity, especially to LLM providers and GPU cloud platforms.
- They automate traffic prioritization and make use of AI-aware WAN optimization techniques.
- They are also more aggressive about using edge computing to minimize latency and data movement across WAN links.
Final Thoughts: Optimism With a Note of Caution
While many organizations expect to eventually succeed in readying their networks for AI, there’s still a long road ahead. IT executives are often more optimistic than middle managers and engineers, who have a clearer view of the day-to-day constraints.
The key takeaway from the EMA research? Hope alone won’t deliver AI success. The most prepared organizations are investing strategically, partnering with the right vendors, automating where possible, and pushing observability to new levels. In the age of AI, the winners won’t just be those with the smartest models; they’ll be the ones with the smartest networks to support them.
Check out the EMA research paper for more insights!