
Key Points
Most enterprise AI initiatives fail because their strategies start with technology instead of business outcomes.
We spoke with Hardi Gokani, Director of Product Management for AI/ML at Grainger, who explained how leaders can reverse this approach by defining the desired outcome before choosing a tool.
Gokani shared her "metrics laddering" framework for connecting a model's technical performance with top-line business value to ensure alignment with C-suite priorities.
She also described a "human as copilot" design process that embeds users as co-creators, turning governance into a trust-building engine that drives organic adoption.
Enterprise AI projects are failing to deliver value at an astonishing rate. But the root cause is often strategic, not technical. According to Boston Consulting Group, 74% of companies fail to capture meaningful value from their AI investments. The reason is simple: they start with the technology instead of the outcome. The organizations that succeed begin by defining the business result they want to achieve, then design the process and technology to make it happen.
As Director of Product Management for AI/ML at industrial supplies and equipment provider Grainger, Hardi Gokani centers her work on a single question: what job are we trying to get done? Before her time at Grainger, Gokani led a cloud transformation at CVS Health that saved the company over $75M. Today, she is a leading voice on the gap between AI experimentation and enterprise ROI, including as a guest on "The CAIO Connect" podcast.
"The mistake many companies make is starting with the technology. They jump into AI without clearly defining the outcome they’re trying to achieve," Gokani said. But her approach flips that sequence. "First, identify the outcome you want to enable, the specific 'job to be done' for your user. Next, design the ideal process that would deliver that outcome. Only after those steps should you choose and embed the right technology. If you start with the tech, you end up optimizing for the tech. If you start with the outcome, you optimize for the business."
According to Gokani, the disconnect between technical execution and business value is a key contributor to the "pilot graveyard." The problem is rarely the algorithm, she noted; 90% of AI failures stem from challenges with people and processes. To address this, she proposed a framework that connects technical metrics to C-suite priorities.
From model to money: "Many AI projects fail because they cannot clearly connect a technical pilot to a top-line business goal. A 'metrics laddering' framework bridges that gap. The first rung is the AI Metric, which measures the model’s direct output, such as prediction accuracy. The next is the Process Metric, which tracks operational improvements like faster cycle times or higher efficiency. At the top is the Business Metric that matters most to leadership, whether that is revenue growth, cost savings, or customer satisfaction," Gokani explained.
Right metric, right goal: Next, she illustrated how different objectives require different optimization strategies. "If the goal is to improve customer satisfaction, the recommendation model should prioritize accuracy to ensure each suggestion truly fits the user’s needs. But if the objective is to expand market share, volume becomes more important, and the focus shifts to the number of recommendations displayed."
Even the strongest strategy fails without adoption. This is where the focus shifts to the enduring challenge of people and processes. Rather than viewing governance as a constraint, Gokani reframed it as a trust-building mechanism, positioning employees as essential partners from the very beginning.
Human as copilot: "My design principle is that the human cannot be an afterthought. They must be a copilot from the start. This happens through three phases: co-creation, where users help design the system; progressive automation, where AI assists and trust grows; and a built-in skill transition plan that retrains teams to supervise AI and take on more strategic roles."
It isn't just theory. Gokani shared how her team launched a zero-to-one computer vision product to create digital representations of storage units. They chose to forgo a mass rollout, instead launching with a hand-picked group of five "early adopters" treated as co-creators. These users provided feedback on a product that was still a work in progress, but because their input directly shaped the roadmap, they became evangelists. Eventually, their advocacy created a pull effect that drove organic adoption to 50 users and created a feedback loop that simultaneously improved the AI model.
Gokani's human-centric philosophy extends directly to risk management. According to her, the rise of "shadow AI" isn't a failure of security, but a symptom of unmet employee needs. When people lack sanctioned tools to innovate, they find their own. The solution, therefore, isn't to lock things down, but to lean in with proactive enablement. By providing a "sandbox" with company-approved tools, leaders can satisfy employees' drive to experiment while keeping data safe.
Two sides, one mistake: However, this focus on proactive enablement raises the challenge of tooling. Here, she identified two common, opposing errors. "Leaders often fall into one of two traps when it comes to tooling. Some try to standardize on a single technology. A strategy that worked for data warehousing may fail in the complexity of AI. Others go to the opposite extreme, chasing every new tool that appears." For Gokani, the right approach goes back to the "outcome-first" principle: carefully curate and provide access to the specific technologies needed to solve your specific business goals.
In conclusion, the difference between experimentation and enterprise impact is architectural intent, Gokani said. AI only creates value when every tool fits into a cohesive system built for scale. "Architecture cannot be an afterthought, especially in the fast-moving world of AI. You need a team that is constantly thinking about what the next tool will be and, more importantly, how it will integrate with your existing ecosystem. If this is neglected, you will end up with a collection of fragmented applications that don't talk to each other, setting your company back 20 years."