Key Points

  • JPMorgan Chase's new Global Head of Risk and Regulatory Compliance, Hardik Mehta, discusses the importance of simplifying communication around AI systems to build stakeholder trust.
  • Organizations are rushing to deploy tools without understanding them, leading to confusion and eroded confidence.
  • Mehta advocates for a three-part philosophy of verification, vigilance, and radical simplicity in AI deployment.

As AI grows more powerful, the pursuit of quick wins becomes a temptation that's hard to ignore. But leadership's quest for immediate ROI and fear of missing out have many organizations rushing to deploy advanced tools without the ability to explain to internal stakeholders how they work. The result is an erosion of confidence not through failure, but through confusion.

We spoke with Hardik Mehta, the new Global Head of Risk and Regulatory Compliance at JPMorgan Chase, about the importance of building clarity into AI systems from day one. Mehta previously held governance and risk roles at companies including Uber, Microsoft, and PwC, and has seen firsthand how trust breaks down when complexity goes unchecked.

  • The grandma test: "Simplification is an undervalued asset, especially in the age of complexity we live in," said Mehta. "I was in a board meeting a few months back when a member said, 'Explain this to me as if you are explaining it to your grandma.'" The moment became a surprising unlock: a mental framework for gathering stakeholder support. Any AI enablement leader, at an organization of any size, can go deeper into technical detail when the audience calls for it. Simplicity is often more elusive. "The more simple and nimble we get with data sets and all these AI workflow complexities, the easier it will be for our leaders to make sound, risk-aware decisions."

For Mehta, building trust in AI isn't a single action, but a three-part philosophy grounded in verification, vigilance, and radical simplicity. It starts with a "trust but verify" principle for pressure-testing data, reinforced by a Zero Trust mindset. But the final layer relies on a skill Mehta said is often underestimated in the age of AI.

  • When good tools go bad: "People think the human skill of explaining complex mechanics is going away. In fact, it's coming back at a higher rate," he said. Earlier in his career, Mehta saw an internal AI experiment go off the rails in just hours due to a lack of oversight and visibility into how the underlying model would handle complex data. "At a previous company, we unexpectedly saw sensitive financial data getting uploaded in a workflow where the results being returned were less than 50% accurate," he recalled. For Mehta, the incident exposed a core truth about the technology's limits: when unvetted data meets a powerful model, the result can be hallucinations and jailbreaks that can't be reverse engineered by throwing extra layers of technology at a challenge only humans can truly understand.

Such AI growing pains are best experienced early in a product's lifecycle, while lessons can still be learned cheaply. According to Mehta, many vendors are rushing to sell AI features, with security often an afterthought.

"People think the human skill of explaining complex mechanics is going away. In fact, it's coming back at a higher rate."

Hardik Mehta

Global Head of Risk and Regulatory Compliance

JPMorgan Chase

With downside risk nearly unbounded, the line between great AI vendors and negligent ones matters more than ever. But even the most sophisticated providers of enterprise-grade AI require the end-user businesses themselves to own the last mile of safety.

  • Ready, vet, go: "The old model of doing a checkbox exercise and looking at your risk register after a year is completely out the window. Now we do a monthly calibration exercise with the board of directors," Mehta said. Projects move at a deliberate pace, sometimes taking two or even three years to complete "based on the sensitivity of the data involved." First, new tools are tested in sandboxes by architects and engineers, then applied to internal, low-risk use cases. Finally, the plan is communicated at every level, from senior leadership down, inviting everyone to experiment in a controlled way.

Ultimately, Mehta believes a disciplined approach is what prepares companies to compete in the emerging era he calls "Machine vs. Machine," where both defenders and nation-state actors will wield sophisticated AI. He pointed to a Justice League-like cohort of veterans, including CIOs and CISOs, who are rising above pedestrian corporate competition to solve governance challenges together on a global scale. "Top leaders are coming together to write rules that cut across industries and government," he said. "It's not about competition; it's about protecting the world together."