• As AI moves from assistive tooling to operational authority, expectations around governance, leadership, and accountability change dramatically for CIOs and boards.

  • Deven Verma, Director of Technology Strategy at Deloitte, said the CIO role is evolving from transformation sponsor to accountable owner of AI-driven enterprise operations.

  • He outlined a three-pillar framework of governance, operational integration, and economic accountability, stressing that defensibility and measurable ROI are now non-negotiable.

"The CIO’s role is evolving from a transformation sponsor to the accountable owner of AI-driven enterprise operations and applications."

Deven Verma
Director, Technology Strategy
Deloitte

Deploying AI is no longer the constraint. Governing the decisions AI agents make is the real challenge. For the past several years, organizations have treated AI as assistive technology, layering tools into existing workflows to accelerate decisions and reduce friction. Now those tools are making the decisions that once required human judgment. As intelligent agents take over, the pressure on CIOs has shifted from adoption to accountability. As that transition accelerates, AI governance has become a board-level concern, reshaping what boards expect and what CIOs must own.

Deven Verma is Director of Technology Strategy at Deloitte, where he advises Fortune 200 and PE-backed organizations moving from AI experimentation to accountable operations. Also SVP and CIO of a stealth-mode startup, Verma is a CIO-level executive who has managed $250M+ IT portfolios, unlocked more than $300M in revenue, and advised boards and CEOs through large-scale AI, cloud, and enterprise modernization programs. He believes the current moment demands a fundamental rethink of what the CIO role is actually for.

"The CIO is no longer enabling digital transformation. They are accountable for how the enterprise runs," said Verma. "The CIO's role is therefore evolving from a transformation sponsor to the accountable owner of AI-driven enterprise operations and applications." That shift is already redefining what boards expect from their technology leaders, raising the stakes for every decision AI makes on the organization's behalf.

The change is already underway. Across industries, intelligent agents are embedded in core operational workflows, handling everything from service tickets and pricing decisions to resource allocation and transaction approvals. As these tools gain more power, the nature of risk changes with them. Verma said navigating that shift requires a disciplined approach built on three pillars: governance of autonomous systems, operational integration, and economic accountability. Getting governance right is where most organizations are still finding their footing.

  • Beyond the demo: For Verma, the first pillar demands that AI systems be built to withstand scrutiny from regulators, auditors, and boards, not scramble to satisfy it after the fact. As decisions become algorithmically influenced, operational risk becomes model risk. "Demonstrations create excitement, but it's the defensibility that creates trust," he said. "To be defensible, you need traceable decisions, auditable models, defined governance, and measurable ROI." That requires building in guardrails and ensuring a human remains in the loop for critical decisions.

  • The hard way: Verma's work with a federal agency handling one million calls a month illustrates what governance at scale actually demands. Moving that workload to an LLM-based system required mapping 80 distinct call workflows, implementing FedRAMP and PII guardrails, and submitting to audits every three months. "Our first rollout was a huge failure," Verma said. "But since then we have improved a lot. We are looking at anywhere from 38% to 42% efficiency improvement, equipment downtime has gone down 30% to 42%, and the mean time to resolution has improved significantly."

The second and third pillars, operational integration and economic accountability, are where governance meets execution. Together they address a pattern Verma sees repeatedly: organizations that have built defensible AI systems still struggle to embed them into the business as a unified operating system that generates measurable returns. The question shifts from whether AI can be trusted to whether it is actually solving the right problems and delivering value that boards want to see.

  • The two booths fallacy: "There's an analogy I use about two booths," Verma said. "One is filled with the latest AI tools, while the other is focused on defining the business problem. Too often, I see clients flock to the booth offering tools, rather than the one that can guide them to the right solution." When done correctly, the results can be powerful. Walmart's AI-powered supply chain shows what that looks like at scale, with intelligent systems reading local demand patterns and supply constraints to drive decisions across a global retail operation.

  • The buck stops here: To avoid this trap, Verma advocated for a problem-first approach anchored in the third pillar: economic accountability. "When an AI system causes financial loss, or there's reputational damage, or regulatory exposure isn't addressed, who is accountable?" he said. "It cannot be a tool. It has to be a human within the organization." Every AI initiative must also demonstrate clear economic impact, delivering measurable improvements to productivity or margins to justify its costs and graduate from the experimentation phase.

With many CFOs and boards demanding clear returns, the pressure is on to prove that AI is more than an expensive experiment. The high failure rate of AI initiatives only raises the stakes. "I've seen figures suggesting as many as 74% to 78% of AI initiatives fail to deliver on their promised impact," Verma said. That gap is closing, but only for organizations that adopt an ROI-first approach anchored in concrete business goals, with the human context required to solve actual problems kept firmly in view. A well-governed AI strategy is a powerful lever for business objectives.

For Verma, this moment marks a genuine inflection point. "The first wave of AI proved what’s possible. The next will define who is accountable, embedding these systems not just in technology, but into the operating fabric of the enterprise," he said. That shift will reshape the C-suite itself, collapsing data, technology, and information leadership into a single accountable function. "A role like Chief Data & AI Officer is a natural next step," Verma predicted. It will also require a far tighter partnership with the CISO than most enterprises have today. "In an agentic world, security isn’t a control, it’s a prerequisite. Without strong guardrails, scale becomes risk," he concluded.