Key Points

  • Enterprises can build AI agents easily, but coordinating dozens across platforms breaks down at scale, creating latency, blind spots, and governance risk once pilots move into production.

  • Pranav Kumar, Sr. Director at Capgemini, explained that the real challenge shifts from agent creation to orchestration, visibility, and enterprise-wide control.

  • The solution is a centralized orchestration layer that registers, governs, and monitors all agents in real time, with shared guardrails, performance metrics, and lifecycle management.

Enterprises can build agents. That problem is solved. What they cannot reliably do is make dozens of agents, built on different platforms by different teams, work together at scale without latency, downtime, or compliance gaps. As organizations push past pilots into production, the operational challenge has shifted from creation to coordination.

Pranav Kumar is a Senior Director at Capgemini who leads the firm's Digital, Data, and AI initiatives. Drawing on over two decades of experience advising Fortune 500 clients, holding leadership positions at firms like Adobe and PwC, and serving as a mentor for India's flagship innovation program, Kumar said that scaling AI safely begins with reframing the problem.

"Anybody can create an army of agents. That's pretty much common these days. Now, the differentiator is how we take care of the orchestration," said Kumar. The fragmentation is immediate and structural. Google has its Agent Development Kit. Microsoft has its own approach. AWS has Bedrock. Customers build custom agents internally. System integrators build their own.

  • The event-driven gap: The result is a patchwork of agent ecosystems that need to interoperate in real time across event-driven environments where a failed handshake between agents can mean missed transactions, misrouted requests, or compliance violations. Regulators are already raising concerns about the operational risks this creates. "We live in an event-driven world," Kumar noted. "How do we ensure there's no latency, no downtime, the right rerouting, and that guardrails are applied not just to one-off use cases, but at the enterprise level?"

  • Guardrails that don't scale: "Traditional hyperscalers come with two, three, maybe ten guardrails. When you move beyond pilots to enterprise scale, that's not enough. Governance has to work across all agents, all use cases," Kumar said. The guardrail problem compounds quickly. A contact center deployment might need agents for greeting, knowledge management, analytics, and routing. That's one business line. A bank with wealth management, retail banking, and insurance lines needs the same treatment across each, with shared enterprise controls such as bias prevention, brand consistency, and regulatory compliance layered on top (see the sketch after this list for where those shared controls sit).
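
Both pressures point to the same architectural move. Purely as an illustration, and not drawn from Capgemini's platform, the sketch below shows a minimal orchestration layer that applies shared guardrails to every agent call and reroutes to a fallback when the primary agent fails, so a missed handshake does not become a dropped transaction.

```python
# Hypothetical sketch: shared guardrails and rerouting enforced at the
# orchestration layer rather than inside any single agent.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    handle: Callable[[dict], dict]  # the agent's own logic, built on any platform


# Enterprise-wide guardrails, applied to every call rather than per use case.
def no_pii(event: dict) -> bool:
    return "ssn" not in event.get("payload", {})


def on_brand(response: dict) -> bool:
    return "competitor_name" not in response.get("text", "")


INPUT_GUARDRAILS = [no_pii]
OUTPUT_GUARDRAILS = [on_brand]


@dataclass
class Orchestrator:
    primary: Agent
    fallback: Agent

    def route(self, event: dict) -> dict:
        # Input guardrails run before any agent sees the event.
        if not all(check(event) for check in INPUT_GUARDRAILS):
            return {"status": "blocked", "reason": "input guardrail"}
        # Try the primary agent; on failure, reroute instead of dropping the event.
        for agent in (self.primary, self.fallback):
            try:
                response = agent.handle(event)
            except Exception:
                continue
            if all(check(response) for check in OUTPUT_GUARDRAILS):
                return {"status": "ok", "agent": agent.name, **response}
        return {"status": "failed", "reason": "no compliant agent available"}


bot = Orchestrator(
    primary=Agent("vendor-router", lambda e: {"text": "routed to billing"}),
    fallback=Agent("in-house-router", lambda e: {"text": "routed to general queue"}),
)
print(bot.route({"payload": {"question": "Where is my statement?"}}))
```

The checks themselves are placeholders; the design point is their placement. Agents built on Google's kit, Bedrock, or in-house frameworks all pass through the same controls, and failover is a routing decision rather than something each agent solves for itself.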

Kumar's team built a modular platform to address this, offering guardrails as a configurable service that can enforce 70 or more controls across use cases, compared to the handful that come natively from cloud providers. Kumar described the platform as an enterprise foundation for registering, monitoring, and governing agents regardless of where they originate. Gartner predicts that 40% of enterprise applications will feature task-specific AI agents by 2026, up from less than 5% in 2025. At that density, centralized visibility becomes non-negotiable.
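
Kumar did not detail the platform's internals, so the following is only a sketch of the registration idea under stated assumptions: every agent, wherever it was built, is recorded with its platform of origin and business line, and enterprise-wide controls from a shared catalog are attached at registration time. The catalog, class names, and control names are invented for illustration.

```python
# Hypothetical sketch of an agent registry with guardrails as a configurable
# service; names, controls, and structure are invented for illustration.
from dataclasses import dataclass, field

# Shared catalog of named controls (in practice this could hold 70 or more).
CONTROL_CATALOG = {
    "pii_redaction": lambda text: "[REDACTED]" if "ssn" in text else text,
    "bias_screen": lambda text: text,        # placeholder check
    "brand_consistency": lambda text: text,  # placeholder check
}


@dataclass
class RegisteredAgent:
    name: str
    platform: str       # e.g. "google-adk", "bedrock", "in-house"
    business_line: str  # e.g. "wealth", "retail-banking", "insurance"
    controls: list[str] = field(default_factory=list)


class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, RegisteredAgent] = {}

    def register(self, agent: RegisteredAgent, enterprise_controls: list[str]) -> None:
        # Enterprise-wide controls are merged with the agent's own list, so
        # bias, brand, and compliance checks apply across every business line.
        merged = set(agent.controls) | set(enterprise_controls)
        unknown = merged - CONTROL_CATALOG.keys()
        if unknown:
            raise ValueError(f"unknown controls: {unknown}")
        agent.controls = sorted(merged)
        self._agents[agent.name] = agent

    def by_business_line(self, line: str) -> list[RegisteredAgent]:
        return [a for a in self._agents.values() if a.business_line == line]


registry = AgentRegistry()
registry.register(
    RegisteredAgent("contact-center-routing", "bedrock", "retail-banking"),
    enterprise_controls=["pii_redaction", "bias_screen", "brand_consistency"],
)
print(registry.by_business_line("retail-banking"))
```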

  • The cockpit view: Kumar and his team created a centralized dashboard that monitors agent performance, compliance status, and lifecycle state across every deployed agent. "We can manage the entire agent lifecycle, monitor performance, and ensure all compliance is happening from a single cockpit," he explained.

  • Measuring what matters: ROI measurement proved to be one of the hardest problems. Kumar's approach ties metrics to specific use cases rather than broad efficiency claims. For contact center agents, that means tracking first-call resolution and help desk ticket reduction. Across the platform, a reusability index tracks how modular components get adopted across business lines; one way to read that metric is sketched after this list. "We defined KPIs use case by use case and tracked them at a modular level on the reusability index as well."
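
The article does not give a formula for the reusability index, so the sketch below uses one plausible reading purely for illustration: the average share of business lines that have adopted each modular component, tracked alongside use-case-level KPIs such as first-call resolution. All figures are invented.

```python
# Hypothetical sketch: use-case-level KPIs plus one possible reusability index,
# read here as the average share of business lines adopting each shared component.

# KPIs are defined per use case rather than as broad efficiency claims.
contact_center_kpis = {
    "first_call_resolution_rate": 0.78,  # share of issues resolved on first contact
    "help_desk_ticket_reduction": 0.22,  # reduction versus the pre-agent baseline
}

# Which business lines have adopted which modular components (illustrative data).
adoption = {
    "greeting_agent": {"retail-banking", "insurance"},
    "knowledge_retrieval": {"retail-banking", "wealth", "insurance"},
    "analytics_agent": {"wealth"},
}
business_lines = {"retail-banking", "wealth", "insurance"}


def reusability_index(adoption: dict[str, set[str]], lines: set[str]) -> float:
    """Average fraction of business lines using each modular component."""
    if not adoption:
        return 0.0
    return sum(len(users) / len(lines) for users in adoption.values()) / len(adoption)


print(f"Reusability index: {reusability_index(adoption, business_lines):.2f}")  # ~0.67
```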

The platform was pressure-tested with a large European bank that had tried to scale agentic AI internally and hit the same wall. The bank needed agents across multiple lines of business but could not manage interoperability, governance, or performance monitoring across them. Kumar's team deployed a fail-fast strategy early, building toward a modular system where CX, supply chain, and finance each operate as distinct pillars on a shared foundation. The platform supports deployment across any major hyperscaler or on-premises, with interoperability built in.

Kumar also pointed to a next phase that few enterprises have reached: an event mesh where agents are registered in a common ecosystem, discoverable by users across the organization, and consumable on demand. The concept extends agent orchestration beyond coordination into something closer to an internal marketplace.
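
The mesh is a direction rather than a product Kumar described in detail, but the discoverability piece has a simple shape. As a hypothetical sketch only, agents could publish a plain-language capability description to a shared catalog that any team can query and invoke on demand; the listing fields and endpoints below are invented for illustration.

```python
# Hypothetical sketch of agent discovery in a shared catalog; the fields,
# names, and endpoints are invented for illustration.
from dataclasses import dataclass


@dataclass
class AgentListing:
    name: str
    capability: str  # plain-language description used for discovery
    owner_team: str
    endpoint: str    # where to invoke the agent on demand


CATALOG = [
    AgentListing("kyc-checker", "verify customer identity documents",
                 "compliance", "https://agents.example.internal/kyc"),
    AgentListing("invoice-matcher", "match supplier invoices to purchase orders",
                 "finance", "https://agents.example.internal/invoices"),
]


def discover(query: str) -> list[AgentListing]:
    """Naive keyword match; a real mesh would use richer metadata or embeddings."""
    terms = query.lower().split()
    return [a for a in CATALOG if any(t in a.capability.lower() for t in terms)]


for listing in discover("invoices"):
    print(listing.name, "->", listing.endpoint)
```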

The trajectory is clear. Agent creation is a commodity. The competitive advantage now belongs to organizations that can govern, monitor, and orchestrate agents as a unified operational layer. "Democratization has already happened," Kumar concluded. "People are using AI in their day-to-day work. The focus now has to be on agentic governance. That is where the real work begins."

The views and opinions expressed are those of Pranav Kumar and do not represent the official policy or position of any organization.