Industry Perspectives

Insight and analysis on the information technology space from industry thought leaders.

Agentic AI Starts with Infrastructure That Can Act

Outdated network architectures are becoming the critical limitation that prevents AI from delivering on its promise.

Image: AI in a data center (credit: Alamy)

By Darren Wolner, GTT

Most enterprise conversations around agentic AI still revolve around models and data — e.g., how to structure them, govern them, and get them into production. And rightly so: Agentic AI depends on a strong foundation of real-time, trustworthy data and flexible model delivery.

But those layers don't operate in isolation. Behind every action an AI system takes, whether it's rerouting a delivery, flagging a fraud attempt or scheduling maintenance, there's infrastructure moving that data, executing the decision, and enforcing access policies in real time. Success ultimately depends on the network's ability to keep up.

As NVIDIA's CEO recently underscored, "the network is the computer." Agentic AI pushes decisions to the edge, stitching together data from cloud platforms, on-prem data centers, edge devices, and IoT systems. These components must stay continuously connected and synchronized, ensuring decisions can be executed instantly wherever they're needed. Without a high-performance network to sustain that level of coordination, autonomy grinds to a halt.

GTT's latest research shows private cloud spending is growing more than twice as fast as public cloud among the largest enterprise segments. This shift reflects the rising pressure in the advanced AI era to control performance, cost, and security. Yet the network and security architectures responsible for delivering that control often lag behind. Inference workloads stall. Latency increases. Access gaps emerge. The result: the very systems built to act autonomously end up waiting on infrastructure.


Agentic AI introduces new demands that traditional architectures were never built for. It pushes decisions to the edge, initiates real-time inference across hybrid environments, and increases machine-to-machine communication — often without human involvement. If your network can't keep up, the AI can't perform.

Where Traditional Network Architectures Break

Agentic AI isn't like traditional enterprise workloads. It distributes intelligence across clouds, data centers, and edges. And that changes everything for current infrastructure.

First, agentic AI workloads are highly distributed. They could feature a combination of public cloud, private cloud, and edge environments, depending on where the data and decision-making are located. This brings much greater complexity to maintaining consistent, secure performance across locations.

Second, these workloads are latency-sensitive. When an agent is powering fraud detection, routing logistics, or triggering a manufacturing control system, even small delays can cause costly errors, timeouts, or incorrect decisions. Traditional networks, built for batch processes or static traffic flows, aren't optimized for this kind of responsiveness.


Third, agentic AI introduces a massive increase in machine-to-machine traffic. Rather than only serving users directly, agents call APIs, access tools, and delegate tasks to other agents or services. Without a network that can dynamically route and prioritize this traffic based on context and criticality, things break down fast.

Finally, there's security, where the stakes are highest. With agentic systems, more endpoints mean more automated access to sensitive systems. A single misconfigured policy or compromised credential can have a far wider impact. And when the data behind those decisions is outdated, incomplete, or inaccurate, the result is guesswork executed at speed and scale.

5 Ways to Strengthen Your Network for AI's Demands

To function at scale, agentic AI requires a network and security stack that's built for autonomy. Here are five key areas to focus on:

1. Modernize SD-WAN for Real-Time Inference

AI workloads are unpredictable, latency-sensitive, and often originate from non-traditional endpoints like sensors, robots, or edge devices, making dynamic pathing, intelligent traffic prioritization, and application-aware routing essential. A static SD-WAN policy designed for email or file-sharing traffic won't cut it. IT teams need to optimize paths not just for performance, but for real-time responsiveness, with the ability to route latency-sensitive inference traffic ahead of bulk or non-urgent workloads.
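As a rough illustration of application-aware routing, the sketch below scores candidate paths by latency for inference traffic and by loss for bulk traffic. The traffic classes, thresholds, and path names are hypothetical — real SD-WAN products use their own policy languages — but the decision logic is the point:

```python
from dataclasses import dataclass

# Hypothetical traffic classes; real SD-WAN taxonomies differ.
PRIORITY = {"inference": 0, "control": 1, "bulk": 2}

@dataclass
class Path:
    name: str
    latency_ms: float
    loss_pct: float

def pick_path(traffic_class: str, paths: list[Path]) -> Path:
    """Application-aware routing: latency-sensitive inference traffic
    takes the lowest-latency healthy path; bulk traffic optimizes for
    loss instead. The 1% loss cutoff is an illustrative threshold."""
    healthy = [p for p in paths if p.loss_pct < 1.0] or paths
    if PRIORITY.get(traffic_class, 2) == 0:          # inference class
        return min(healthy, key=lambda p: p.latency_ms)
    return min(healthy, key=lambda p: (p.loss_pct, p.latency_ms))

links = [Path("mpls", 18.0, 0.4), Path("broadband", 45.0, 0.1)]
print(pick_path("inference", links).name)   # -> mpls
print(pick_path("bulk", links).name)        # -> broadband
```

The key contrast with a static policy is that the same two links serve different classes differently, and the choice is re-evaluated as measured latency and loss change.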


2. Extend Zero Trust and SASE to AI Workloads

Agentic AI systems initiate actions on their own, interacting with other tools and APIs, and even chain decisions across environments. That expands your attack surface fast, rendering identity-based access and continuous verification non-negotiable. Applying Zero Trust principles to users isn't enough. You need to extend them to services, agents, and models while applying granular policies that can adapt to context. SASE frameworks can help unify this enforcement across environments, but only if they account for the dynamic, decentralized nature of modern AI workloads.

3. Deploy Full-Stack Observability

Inference spikes can be triggered by user behavior, environmental data, or autonomous agent activity. When they hit, they can strain bandwidth, overwhelm under-provisioned edges, or degrade performance in adjacent services. Full-stack observability gives IT teams the visibility needed to respond. But more importantly, it enables proactive tuning and root cause correlation between model behavior, network performance, and user experience before the AI's decision stream becomes a support ticket surge.
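One building block of that kind of observability is anomaly detection on inference request rates. The sketch below flags a spike against a rolling baseline using a z-score; the window size and threshold are illustrative, not tuned values, and a production stack would feed such signals into its monitoring pipeline rather than a standalone class:

```python
from collections import deque
from statistics import mean, pstdev

class SpikeDetector:
    """Flag inference-request spikes against a rolling baseline so they
    can be correlated with link saturation before users feel it.
    Window and z-threshold are illustrative defaults."""
    def __init__(self, window: int = 30, z_threshold: float = 3.0):
        self.samples: deque[float] = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, requests_per_sec: float) -> bool:
        spike = False
        if len(self.samples) >= 10:       # need a baseline first
            mu, sigma = mean(self.samples), pstdev(self.samples)
            spike = sigma > 0 and (requests_per_sec - mu) / sigma > self.z_threshold
        self.samples.append(requests_per_sec)
        return spike

det = SpikeDetector()
baseline = [100.0, 102.0, 98.0, 101.0, 99.0,
            100.0, 103.0, 97.0, 100.0, 101.0]
for r in baseline:
    det.observe(r)
print(det.observe(160.0))   # sudden burst well above baseline -> True
```

The same pattern applies to any metric worth correlating: link utilization, API error rates, or per-agent call volume.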

4. Rethink Edge Architecture for Localized AI

As AI shifts toward edge inference, organizations need to put compute and connectivity closer to where data is generated and actions are taken. That means rearchitecting edge sites to support not just local processing, but secure, reliable networking that integrates with central infrastructure without introducing new gaps or silos. It's not just about bandwidth anymore. Successful agentic AI requires proximity, policy enforcement, and fault tolerance.
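The placement trade-off can be made concrete with a latency budget. The sketch below prefers a central site when the round trip still fits the application's budget (central capacity is usually better provisioned) and falls back to local edge compute when it doesn't; all numbers are hypothetical:

```python
# Hypothetical placement check for a single inference call.
def place_inference(budget_ms: float, edge_compute_ms: float,
                    central_rtt_ms: float, central_compute_ms: float) -> str:
    """Use central capacity when network round trip plus compute fits the
    latency budget; otherwise run locally at the edge. If neither fits,
    the caller must shed or queue work."""
    if central_rtt_ms + central_compute_ms <= budget_ms:
        return "central"
    if edge_compute_ms <= budget_ms:
        return "edge"
    return "degrade"

# A 50 ms budget can't absorb a 40 ms WAN round trip plus 25 ms of
# compute, so the call stays at the edge.
print(place_inference(budget_ms=50, edge_compute_ms=30,
                      central_rtt_ms=40, central_compute_ms=25))  # -> edge
```

The point of the sketch: proximity buys back the WAN round trip, which is why edge sites need real compute and not just bandwidth.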

5. Use Managed Services to Simplify and Scale Securely

The shift to agentic AI introduces technical demands most enterprises aren't resourced to handle alone. From traffic engineering to policy enforcement and observability integration, the operational complexity can grow faster than the AI stack itself. Partnering with managed service providers can reduce that load. When done right, enterprises can focus on AI outcomes instead of what's under the hood, while still maintaining full visibility and control across the network.

Agentic AI Will Only Move as Fast as Your Infrastructure

If your infrastructure isn't built for real-time action, your AI won't be either. Forward-looking IT teams are already treating network and security modernization not as a side project, but as the foundation for scalable, autonomous systems. The rest won't just fall behind. They'll be left waiting on the very infrastructure they forgot to update.

About the author:

Darren Wolner, GTT VP Product Management, leads GTT Managed Services, Professional Services, SD-WAN, SASE, cybersecurity product development and lifecycle management. A visionary, he sets the company's performance goals to innovate and align with market developments, while meeting the demands of the company's enterprise customers. He champions his team to help customers navigate their business transformation journey with fully managed, all-digital, on-demand experiences while helping them to protect their environments from today's cyber threats.
