AI experimentation is everywhere, but results are not. As boards push for progress, weak foundations are quietly doing the damage, with analysts forecasting that 60 percent of AI projects will be abandoned by 2026. That failure rate stems not from the technology itself, but from a lack of AI-ready data and a missing board-level understanding of governance and risk. Trapped in a cycle of pilots that never scale, companies are discovering that moving from proof-of-concept to production requires stepping back to basics, not racing ahead.

This is the reality described by Shaukat Ali Khan, a global technology executive with over two decades of experience in digital transformation. As the Executive Chief Digital and Information Officer for the NHS West Yorkshire Integrated Care Board, he leads a directorate supporting 2.7 million residents with a nearly $10 billion budget and chairs the organization's AI steering group. Khan's perspective is forged from leading large-scale IT operations in the healthcare and education sectors across Asia, Africa, and Europe, including his role as Global CIO for Aga Khan University. From large systems to frontline teams, he has seen why AI succeeds for some and fails for most.

"Companies with the right governance model, infrastructure, and skill set are realizing a 25% return on investment in their day-to-day operations," said Khan. The disconnect between AI's promise and its messy reality, Khan explained, begins with a foundational challenge: despite widespread pressure to implement AI, many organizations are simply not ready. Successful AI initiatives hinge on a disciplined, ROI-first approach that becomes a board-level imperative. Khan breaks the problem down into three parts:

  • Data-rich, insight-poor: Khan was blunt about the scale of the problem. "60 to 70 percent of institutions are lacking AI-ready data," he said. "Organizations have data everywhere, but most of it is not data that can be used for this purpose." Without high-quality, well-governed data, even the most advanced AI models struggle to produce reliable or actionable results, reducing experimentation to little more than noise.

  • Boardroom blindspot: The data readiness gap, from Khan's perspective, is exacerbated by a lack of foresight at the top. "Everybody wanted to implement AI without the actual understanding of what it means from a data governance point of view, from a responsibility and vulnerability point of view, and from an infrastructure point of view." When boards lack that foundational understanding, AI initiatives move forward without clear guardrails, ownership, or accountability.

  • Rules of the road: That boardroom blindspot often leads to a policy vacuum at the operational level. "How are we educating our staff?" Khan asked. "How are we educating our digital, data, and technology teams to support these mechanisms? And at the same time, how do we make sure that with the use of any form of AI, we are still putting a lot of focus on cybersecurity?"