As AI grows more powerful, the pursuit of quick wins becomes a temptation that's hard to resist. But leadership's quest for immediate ROI and fear of missing out have many organizations rushing to deploy advanced tools without being able to explain to internal stakeholders how they work. The result is an erosion of confidence not through failure, but through confusion.
We spoke with Hardik Mehta, the new Global Head of Risk and Regulatory Compliance at JPMorgan Chase, about the importance of building clarity into AI systems from day one. Mehta previously held governance and risk roles at Uber, Microsoft, and PwC, and has seen firsthand how trust breaks down when complexity goes unchecked.
For Mehta, building trust in AI isn't a single action but a three-part philosophy grounded in verification, vigilance, and radical simplicity. It starts with a "trust but verify" principle for pressure-testing data, reinforced by a Zero Trust mindset. But the final layer relies on a skill Mehta said is often underestimated in the age of AI.
AI trauma is best experienced early in a product's lifecycle, while the lessons are still cheap to learn. According to Mehta, many vendors are rushing to sell AI features, but security is often an afterthought.
"People think the human skill of explaining complex mechanics is going away. In fact, it's coming back at a higher rate."
With downside risk nearly unlimited, the distinction between great AI vendors and negligent ones matters more than ever. But even the most sophisticated providers of enterprise-grade AI still require the businesses using their tools to own the last mile of safety.
Ready, vet, go: "The old model of doing a checkbox exercise and looking at your risk register after a year is completely out the window. Now we do a monthly calibration exercise with the board of directors," Mehta said. Projects move at a deliberate pace, sometimes taking two or even three years to complete "based on the sensitivity of the data involved." First, new tools are tested in sandboxes by architects and engineers, then applied to internal, low-risk use cases. Finally, the plan is communicated at every level, from senior leadership down, inviting everyone to experiment in a controlled way.
Ultimately, Mehta believes a disciplined approach is what prepares companies to compete in the emerging era he calls "Machine vs. Machine," in which both defenders and nation-state actors will wield sophisticated AI. He described a Justice League-like cohort of veteran CIOs and CISOs rising above pedestrian corporate competition to solve governance challenges together on a global scale. "Top leaders are coming together to write rules that cut across industries and government," he said. "It's not about competition; it's about protecting the world together."