
Key Points
AI experimentation is accelerating across enterprises, but weak data foundations and poor board-level governance cause most projects to stall or die before reaching production.
Shaukat Ali Khan, Executive Chief Digital and Information Officer at NHS West Yorkshire Integrated Care Board, described how organizations fall into an experimentation trap without clear ROI, readiness, or trust.
Real progress comes from starting with the problem, building governance and infrastructure first, and investing equally in tools, skills, and human oversight.
AI experimentation is everywhere, but results are not. As boards push for progress, weak foundations are quietly doing the damage, with analysts forecasting that 60 percent of AI projects will be abandoned by 2026. That failure rate stems not from the technology itself, but from a lack of AI-ready data and a missing board-level understanding of governance and risk. Trapped in a cycle of pilots that never scale, companies are discovering that moving from proof-of-concept to production requires stepping back to basics, not racing ahead.
This is the reality described by Shaukat Ali Khan, a global technology executive with over two decades of experience in digital transformation. As the Executive Chief Digital and Information Officer for the NHS West Yorkshire Integrated Care Board, he leads a directorate supporting 2.7 million residents with a nearly $10 billion budget and chairs the organization's AI steering group. Khan's perspective is forged from leading large-scale IT operations in the healthcare and education sectors across Asia, Africa, and Europe, including his role as Global CIO for Aga Khan University. From large systems to frontline teams, he has seen why AI succeeds for some and fails for most.
"Companies with the right governance model, infrastructure, and skill set are realizing a 25% return on investment in their day-to-day operations," said Khan. The disconnect between AI's promise and its messy reality, Khan explained, begins with a foundational challenge. Despite widespread pressure to implement AI, many organizations are simply not ready. Successful AI initiatives often hinge on a disciplined, ROI-first approach becoming a board-level imperative, but Khan breaks the problem down into three parts:
Data-rich, insight-poor: Khan was blunt about the scale of the problem. "60 to 70 percent of institutions are lacking AI-ready data," he said. "Organizations have data everywhere, but most of it is not data that can be used for this purpose." Without high-quality, well-governed data, even the most advanced AI models struggle to produce reliable or actionable results, turning experimentation into little more than noise.
Boardroom blindspot: The data readiness gap, from Khan's perspective, is exacerbated by a lack of foresight at the top. "Everybody wanted to implement AI without the actual understanding of what it means from a data governance point of view, from a responsibility and vulnerability point of view, and from an infrastructure point of view." When boards lack that foundational understanding, AI initiatives move forward without clear guardrails, ownership, or accountability.
Rules of the road: That boardroom blindspot often leads to a policy vacuum at the operational level. "How are we educating our staff?" Khan asked. "How are we educating our digital, data, and technology teams to support these mechanisms? And at the same time, how do we make sure that with the use of any form of AI, we are still putting a lot of focus on cybersecurity?"
To escape this cycle, Khan advised inverting the typical technology-first approach. The small group of AI "survivors" achieving real impact anchor their work in a clearly defined problem. High-performing organizations use AI to pursue growth and innovation by fundamentally redesigning core workflows, a goal that moves well beyond simple cost savings. That kind of redesign demands a holistic framework, one that addresses the structure and preparedness layer first so the organization builds on a solid foundation. His framework is straightforward:
Problem before product: "The most important question is: what is the problem we are solving?" Khan said. "Are we just implementing AI for the sake of implementation, or do we have a genuine problem or use case? When we have the problem in front of us, then we do the reverse engineering on how to get the solution."
The transformation trifecta: He argued that real progress requires balance, not shortcuts. "You need to focus on three layers in parallel. The structure and preparedness layer, the tool and licensing part, and the actual human capital: the skill set to work, absorb, and get the required benefit," explained Khan. The human layer, he warned, is already under strain, noting that "40 percent or even more of core skills will change," while most employees already see skill gaps as their biggest challenge.
At NHS West Yorkshire, Khan puts this three-layer framework into practice. Effective AI often depends on a unified orchestration platform that can draw context from across the enterprise, and at NHS West Yorkshire, that means bringing a diverse group of stakeholders to the table from the very beginning. The process helps build the trust and ethical oversight that form the structure layer in several key ways:
All hands on deck: "We engage the right people, including digital, clinical, non-digital, non-clinical, and also our patients," he said. "We brainstorm the problem together, define what we want to do, and then test it in a small pilot to assess not only the solution but also its consequences."
Human in charge: Khan was clear that AI does not replace clinical judgment. "In radiology, AI gives a lot of value for early diagnosis by providing an initial indication that there is a problem," he said. "But the radiologist remains the ultimate authority, and AI is used as an assistive tool, not a decision-maker."
Ethics by design: "Ethics are essential in healthcare," he noted. "Does the patient know what we are doing with their data? What is the mechanism for getting their consent? How are we using the data, and has it been agreed upon by our ethical committees? All of these factors come into the picture when we design these projects."
Ultimately, Khan insisted that navigating the AI era demands a change in leadership. In this new environment, the CIO's role is expanding from a purely technological focus to one that requires cultivating a culture of curiosity, clarity, and human-centricity. His final message for leaders was simple and direct: "It's important to lead with empathy, act with clarity, and never stop learning," he concluded. "Empathy, clarity, and continuous learning are very important for us."