
Most enterprises treat AI transformation as a technology decision, investing in models and platforms while leaving roles, decision rights, and incentives unchanged.
Hardi Gokani, Director of Product Management for AI/ML at Grainger, said the real work of AI transformation is deeply human, requiring leaders to redesign the organization itself, rather than simply deploying better technology.
She outlined a four-part framework covering talent, leadership, incentives, and governance to move AI from pilot mode into genuine business transformation.
The tech industry's intense focus on AI models and platforms is missing the point. Real transformation succeeds or fails long before deployment, determined by how organizations restructure roles, decision rights, and incentives. Without that organizational blueprint, even the most advanced AI strategy is little more than an expensive experiment.
Hardi Gokani, Director of Product Management for AI/ML at Grainger, is a Fortune 500 growth catalyst whose work sits at the intersection of AI product strategy and organizational transformation. At Grainger, she enabled over $398M in revenue through AI-driven product initiatives and scaled a computer vision application from 5 to 1,000 users in four months. Earlier, at CVS Health, she protected $35M in annual revenue during an enterprise-wide move to the cloud. She believes the industry's focus on technology is misplaced, and the real work of AI transformation is deeply human.
"AI transformation doesn't succeed or fail at the point of deployment. It succeeds or fails much earlier, when companies decide who owns AI-driven decisions, how roles change, and what leaders are accountable for," said Gokani. That stance cuts against the instinct to treat AI adoption as a procurement decision. The organizational blueprint has to come first. "If your roadmap doesn't explicitly address roles, decision rights, and incentives, it's not a strategy, it's just a science experiment," she added. Without that foundation, even well-resourced AI initiatives tend to stall before they mature.
Gokani identified two organizational pitfalls that consistently keep AI stuck in pilot mode. The first is a talent imbalance that most companies create without realizing it. The second is a leadership fluency gap that turns even well-designed AI initiatives into sources of frustration.
The demo trap: The first pattern of failure is often rooted in how organizations staff their AI initiatives. Companies unintentionally over-invest in "builders" (data scientists and engineers) while underfunding "translators" (the business leaders, product leaders, and domain experts who connect AI to real problems). The resulting imbalance has a predictable outcome. "If 90% of your AI talent sits in model development, you're optimizing for demos, not for business decisions. You have to make sure you have a good balance when you structure your talent for an AI initiative," said Gokani.
Mind the human gap: The fluency problem lives at the top of the organization. Executives routinely sponsor AI but struggle to evaluate its probabilistic nature, which leads to inflated expectations and team burnout. At the same time, AI actively reshapes power and identity at work, stoking fear among employees that their experience will be made irrelevant. She noted that ignoring that fear will guarantee resistance rather than adoption. The companies seeing real impact address that anxiety directly, by being explicit about where AI will and will not be used and by rewarding good human judgment alongside automation. Measurement is where that commitment becomes visible. "In addition to model accuracy, you need to measure things like: How many decisions did you improve? What time did you save for each user? What adoption persisted after the initial launch? If humans are not in your metrics, value won't be either," she said. Netflix offered her clearest example of what a genuine AI-native operating model looks like: completely and intentionally invisible, with recommendation and personalization algorithms integrated into every product, content, and marketing decision the business makes.
Getting to that point, Gokani said, requires reimagining the talent structure around three core traits: adaptability, as machines may learn to take on decisions humans make today; technological fluency, now an essential skill even for business leaders; and deep business acumen, because without a clear grasp of top-line metrics, the risk of a poor technology decision increases. With that foundation in place, she outlined two levers for redefining ownership, both of which require leaders to go deeper than superficial goal-setting.
Follow the money: The first lever is incentive redesign. When organizations set goals that do not account for how AI will contribute to outcomes, teams have no practical reason to integrate it. Major consulting firms are already tying AI adoption to leadership incentives and promotion criteria, a signal that this alignment is becoming a baseline expectation rather than a differentiator. "If leaders tell a team their goal is to reduce average handle time, that's a surface-level goal. They need to go a level deeper and define how much of that reduction will be driven by humans versus how much will be driven by technology and AI," Gokani said.
Mapping the machine: The second lever is process flow redesign. The step that most organizations skip is explicitly documenting who makes which decision at which point in a workflow, distinguishing between what is handled by technology and what remains with a human. This is a key step in building change fitness at an organizational level. "When we talk about process flows now, we are explicitly listing down who is making the decision at what point. This part is technology, while that part is human," she said.
Redesigning incentives and process flows creates a new problem at scale: without a structure for managing distributed AI experimentation, grassroots initiatives accumulate in ways that leadership cannot see. Gokani proposed a federated governance model as the solution.
Governing the grassroots: Gokani's preferred model distributes accountability rather than centralizing it. "The onus for governance shouldn't be on one central body; it should be on 'champions' from each business unit. It is their responsibility to bring use cases forward and present them to the governance board," she said. That structure gives leadership visibility while empowering "translators" to manage and harvest value from grassroots shadow AI experiments. This allows local productivity gains to contribute to full enterprise transformation, rather than staying siloed as one-off initiatives.
For Gokani, the organizational redesign work underway today is also a signal of what comes next. The companies getting it right are not just building better AI strategies. They are building the management cultures that will define the next era of AI-driven work. "Over the next few years, we will see far fewer pure AI roles and more AI-augmented ones where judgment, context, and ethics command a premium. The next phase of AI advantage won't be won by better models alone. It will be won by leaders willing to redesign roles, decisions, and accountability before technology forces them to," Gokani said.
