• CIOs face pressure to prove AI value, but adoption is stalled: many organizations are stuck in pilot phases, with 75% of CIOs regretting recent AI purchases and struggling to show meaningful ROI.
    • Model Context Protocol (MCP) enables powerful autonomous agents. But challenges like inconsistent outputs, weak security, and lack of governance are limiting real-world impact and eroding trust.
    • MCP on its own is too limited for enterprise use. Without deep integrations, scalability, and guardrails, companies end up with fragmented projects that improve productivity but not business outcomes.
    • To unlock value, organizations must treat AI agents like employees—giving them controlled access, clear workflows, performance metrics, and secure, enterprise-grade platforms to ensure reliable, scalable results.
  • “Real-world integrations have got to be secure, they've got to be scalable, they have to be production-grade, there has to be guardrails.”

    Karen Bolda, Chief Product & Technology Officer, B2B, Expedia

    Across industries, CIOs face a common AI dilemma: They’re under pressure to keep up with competitors, stay ahead of AI mandates, and show that early investments in the technology are paying off. But many are not yet comfortable deploying AI broadly across operations. The result? Lots of pilots, but real-world impact is elusive, underwhelming, or isolated.

    According to an industry study, 75% of CIOs regret the AI buying decisions they’ve made in the past year. And if they don’t show value soon, AI budgets are at risk. Luckily, the opportunity is waiting for CIOs willing to take on the challenge.

    AI has advanced far beyond simple chatbots. With Model Context Protocol, or MCP, companies can connect powerful underlying LLMs to core business systems, instantly creating AI agents that can autonomously execute commands. But while they can operate independently, the question becomes: are companies ready to let them?
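    The core pattern is simple to sketch. Below is a minimal illustration in plain Python, not the actual MCP SDK; the names ToolServer and lookup_order are hypothetical. What MCP standardizes is this shape: a server exposes business-system actions as named tools that an LLM-driven agent can discover and invoke on its own.

```python
# Minimal sketch of the MCP idea: a server exposes business-system
# actions as named "tools" that an LLM-driven agent can invoke.
# ToolServer and lookup_order are illustrative, not the real MCP SDK.

class ToolServer:
    """Registers callable tools and dispatches agent requests to them."""

    def __init__(self):
        self._tools = {}

    def tool(self, name):
        def register(fn):
            self._tools[name] = fn
            return fn
        return register

    def call(self, name, **kwargs):
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)


server = ToolServer()

@server.tool("lookup_order")
def lookup_order(order_id: str) -> dict:
    # In production this would query the order-management system.
    return {"order_id": order_id, "status": "shipped"}

# An agent, having decided it needs order context, invokes the tool:
print(server.call("lookup_order", order_id="A-1001"))
```

    The agent, not a human, decides when and with what arguments to call each tool, which is exactly why the readiness question in the paragraph above matters.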

    Today, the main hurdle is consistency. AI agents may be technically accurate, but they can deliver ten different responses to the same prompt. Some have even described these basic agentic AI systems as “eager interns”: excited to help, but lacking the professional knowledge needed to do the work properly. And security is a major concern. In fact, a recent study found that organizations lacked visibility into 95% of MCP deployments.

    “Right now it's an unsolved problem because it's the wild, wild West,” Jon Aniano, SVP of product and CRM applications at Zendesk, told VentureBeat. “We don't even have a defined technical agent-to-agent protocol that all companies agree on. How do you balance user expectations versus what keeps your platform safe?”

    Weak security, coupled with subpar results, is eroding confidence in AI at a time when organizations are rapidly trying to expand their use of the technology. In fact, by 2027, over 40% of AI investments will fail because of governance and infrastructure problems, according to Gartner.

    Trust in AI is now an MCP problem

    On its own, through open source libraries or lightweight frameworks, MCP is too shallow and fragile to support enterprise AI workloads. It operates like a simple API call rather than the fortified foundation businesses need to support a growing fleet of AI agents, which limits what those agents can ultimately do.

    “Real-world integrations have got to be secure, they've got to be scalable, they have to be production-grade, there has to be guardrails,” Karen Bolda, Chief Product & Technology Officer, B2B, previously told CIO News.

    For example, teams may be able to use MCP on its own to quickly connect to a CRM and build a prototype sales agent. But that setup lacks the rich customer data held in other repositories, like financial systems, payment rails, and customer support software, that the AI agent would need to orchestrate whole workflows in a safe, predictable, and scalable way.
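    To make that gap concrete, here is a hypothetical refund workflow sketched in plain Python. Every function name here (crm_lookup, payments_refund, and the rest) is an illustrative stub, not a real integration: the point is that the business value comes from several systems coordinated in a fixed, checkable order, which a lone CRM connection cannot provide.

```python
# Sketch of why one CRM connection isn't enough: a refund workflow
# has to coordinate several systems in a fixed, checkable order.
# All system clients below are illustrative stubs.

def crm_lookup(customer_id):
    # Stub: would query the CRM for customer context.
    return {"customer_id": customer_id, "tier": "gold"}

def billing_charge_exists(customer_id, charge_id):
    # Stub: would verify the charge against the financial system.
    return True

def payments_refund(charge_id, amount):
    # Stub: would issue the refund over the payment rails.
    return {"charge_id": charge_id, "refunded": amount}

def support_log(customer_id, note):
    # Stub: would record the action in the support system.
    return {"customer_id": customer_id, "note": note}

def refund_workflow(customer_id, charge_id, amount):
    """Each step gates the next, so the agent can't skip validation."""
    customer = crm_lookup(customer_id)
    if not billing_charge_exists(customer_id, charge_id):
        raise ValueError("charge not found; refusing to refund")
    receipt = payments_refund(charge_id, amount)
    support_log(customer_id, f"refunded {amount} on {charge_id}")
    return receipt

print(refund_workflow("C-7", "ch_42", 25.0))
```

    A prototype agent wired only to crm_lookup could answer questions about the customer, but it could never safely complete this workflow, because the validation and logging steps live in systems it cannot reach.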

    While MCP on its own may improve productivity, it often doesn't improve trust, the key ingredient enterprises need to scale. As a result, companies end up with lots of standalone MCP projects, and a gap opens between results like faster information delivery and impact on the bottom line.

    A human-like approach to agent management

    Until these systems have access to the full context they need, and operate in an orchestrated, observable, and governed manner, companies will continue to struggle to achieve the results they want. That can’t happen until CIOs are confident the AI agents won’t misuse data, access unauthorized systems, or expose the business to new risks.

    Organizations need to start treating AI agents like employees. The right MCP platform delivers the breadth of ingredients an AI agent needs: context, skills, identity, and observability. It imposes the same access and security controls on AI agents as on the users behind them, meaning an employee-benefits agent can't access Social Security numbers and an expense agent can't tap payroll information.
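    One way to picture that control is a scope check in front of every tool call. The sketch below is a hypothetical policy layer in plain Python (the scope strings and tool names are invented for illustration): each agent carries the scopes of the user it acts for, and the platform refuses any tool the scopes don't cover.

```python
# Sketch of per-agent access control: an agent inherits its user's
# scopes, and every tool call is checked against a policy table.
# Scope strings and tool names below are illustrative assumptions.

class AccessDenied(Exception):
    pass

# Which scope each tool requires (assumed policy table).
TOOL_SCOPES = {
    "read_benefits_summary": "benefits:read",
    "read_ssn": "pii:read",
    "submit_expense": "expenses:write",
    "read_payroll": "payroll:read",
}

def call_tool(agent_scopes: set, tool: str, run):
    """Run `run()` only if the agent's scopes cover the tool."""
    required = TOOL_SCOPES[tool]
    if required not in agent_scopes:
        raise AccessDenied(f"{tool} requires scope {required}")
    return run()

benefits_agent = {"benefits:read"}  # a benefits agent cannot see SSNs

call_tool(benefits_agent, "read_benefits_summary", lambda: "ok")  # allowed
try:
    call_tool(benefits_agent, "read_ssn", lambda: "123-45-6789")
except AccessDenied as e:
    print("blocked:", e)
```

    Because the check sits in the platform rather than in the agent's prompt, a misbehaving or manipulated agent still cannot reach data outside its scopes.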

    And importantly, proper guardrails mean organizations can focus on where they have the biggest advantage: their workflows. Internal processes and business logic define how competitive an organization is in the marketplace. Too slow and complex? Organizations fall behind. Too fast and unwieldy? Businesses run into regulatory and compliance challenges. Too rigid? The tools are out of date in six months.

    Similar to human employees, AI agents must have their own set of specific performance metrics to operate against, whether it’s faster response times to customer inquiries, improvements to future products, or fewer supply chain disruptions. This is where companies often run into challenges.

    It’s when companies build defined workflows, and expose them to AI through enterprise-grade MCP platforms, that agents become aligned to the same performance metrics as the rest of the business. An IT agent, for example, could immediately be judged on how many tickets it resolves in a given time frame. How? Because the underlying MCP platform keeps the AI agent restricted to only the workflows directly linked to its own performance indicators.
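    The IT-ticket example can be sketched as follows. This is a hypothetical illustration in plain Python (TicketWorkflow and its methods are invented names): the agent's only surface is the workflow object, and because every action flows through it, the KPI is computed from the same records the agent writes.

```python
# Sketch: an agent confined to one workflow, whose KPI (tickets
# resolved per time window) is computed from the workflow's own log.
# TicketWorkflow and its methods are illustrative assumptions.

from datetime import datetime, timedelta

class TicketWorkflow:
    """The only surface the IT agent can touch; logs each resolution."""

    def __init__(self):
        self.resolutions = []  # list of (ticket_id, timestamp)

    def resolve(self, ticket_id: str, when: datetime) -> None:
        self.resolutions.append((ticket_id, when))

    def resolved_in_window(self, start: datetime, end: datetime) -> int:
        return sum(1 for _, t in self.resolutions if start <= t < end)

wf = TicketWorkflow()
t0 = datetime(2025, 1, 6, 9, 0)
wf.resolve("T-1", t0 + timedelta(minutes=5))
wf.resolve("T-2", t0 + timedelta(minutes=40))
wf.resolve("T-3", t0 + timedelta(hours=2))

# KPI: tickets resolved in the first hour of the shift.
print(wf.resolved_in_window(t0, t0 + timedelta(hours=1)))  # 2
```

    Since the agent cannot act outside the workflow, there is no gap between what it does and what the metric measures.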

    For example, at engagement ring maker Brilliant Earth, buyers can pick out diamonds, resize rings, and handle other support requests through AI agents. The result? Faster service for customers. While human agents took an average of 57 seconds to respond to customers, AI agents typically got back to them in 11 seconds, according to a company executive. Of course, employees are likely handling more complex cases that take longer to resolve. But Brilliant Earth exemplifies how companies can start to rethink human-centric performance quotas when AI systems handle rote processes much faster. With AI agents, “we want to build by measuring them up against key KPIs for enterprise work streams,” Steve Giles, senior director of IT at Brilliant Earth, said at WoW Austin in January.

    Real AI transformation can’t start until CIOs trust the technology enough to let it work across systems. And that starts with an enterprise MCP platform built for reliable, observable, and production-ready AI agents.