
Enterprise AI adoption is moving faster than ever, but it's now approaching a crossroads. On one side, business teams are more empowered than ever, using accessible AI tools to drive productivity and surface new ideas with the promise of generous ROI. On the other, this decentralized enthusiasm is creating a massive orchestration headache, with fragmented tools, shadow AI, axed pilots, and failed vendor partnerships littering the path to progress. This isn't just an anecdotal problem; reports from MIT, IBM, and Gartner have consistently shown the vast majority of AI initiatives failing to deliver tangible value or revenue.
Navigating enterprise AI requires more than just better technology. It demands a disciplined regimen for data governance, a ruthless focus on ROI, and a new philosophy for evaluating the vendor partners meant to be allies. The success of AI, it turns out, depends as much on people, process, and partnerships as it does on the platform itself.
We spoke with Vaishali Gandhi, Sr. Director of Data Engineering & AI at Samsara, who has seen both the wins and pitfalls firsthand. Gandhi has drawn on more than two decades of data leadership experience at companies like BlackLine, StubHub, and Hewlett Packard to develop and implement a framework for cutting through the AI hype. Central to her philosophy is a rather simple concept: Data is Dirty.
Dirty jobs: For Gandhi, the work begins and ends with the unglamorous but essential task of establishing data integrity. "Enterprise data is the dirtiest thing on the face of this earth. Even dirtier than money. Data is dirty," she said. "It's people like us who are there to clean it and provide it so that our organizations can grow faster and get your AI results out." Gandhi argued that while it sounds straightforward, it's really a foundational principle that many organizations are still struggling to master. "If you really want AI to do what you need to, make sure your data is clean," Gandhi stated, "because without clean data, you will not be successful."
Ready, Fire, Aim: Dirty data, and poor data quality generally, makes it hard to do much of anything, from marketing to RevOps to HR. Gandhi cautions that AI models will absolutely hallucinate or misfire when trained on flawed data, and that cleaning and curating datasets upfront saves far more time than trying to patch errors later.
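In practice, the "clean upfront" discipline Gandhi describes often starts as a simple quality gate that rejects bad records before they ever reach a model pipeline. A minimal sketch of what that might look like (the field names "account_id" and "revenue" are hypothetical, purely for illustration):

```python
# Illustrative data-quality gate: split incoming records into clean rows
# and rejects with reasons, before anything downstream sees them.
# Field names here are hypothetical examples, not any specific company's schema.

def validate_records(records):
    """Return (clean_rows, rejects) where rejects pair each bad row with a reason."""
    clean, rejects, seen_ids = [], [], set()
    for row in records:
        account = row.get("account_id")
        revenue = row.get("revenue")
        if not account:
            rejects.append((row, "missing account_id"))
        elif account in seen_ids:
            rejects.append((row, "duplicate account_id"))
        elif not isinstance(revenue, (int, float)) or revenue < 0:
            rejects.append((row, "invalid revenue"))
        else:
            seen_ids.add(account)
            clean.append(row)
    return clean, rejects
```

Keeping the rejects (with reasons) rather than silently dropping them is the point: it turns "data is dirty" from a complaint into a measurable backlog the data team can actually work through.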
For Gandhi, the biggest early win she can point to in the AI era was a cultural one. Business leaders are finally coming to the table with their own use cases and, more importantly, a newfound appreciation for the data that powers them.
A welcome change: "Business is stepping up. They actually think about use cases, and people are doing things on their own, then coming to us saying, 'Hey, this is what I did. Can you put this in production for me?' And they understand data now; they understand when data is dirty that they can't really get value out of it."
But this enthusiasm is a double-edged sword. When left unchecked, it leads to a chaotic landscape of unvetted tools and wasted effort, making central orchestration nearly impossible.
Adding insult to injury: "We had an internal team that used a product and they spent almost nine months migrating all the workflows into it," Gandhi shared. "Then the vendor pulled out saying, 'We're not developing anymore, our roadmap has changed.' So now they have to yank all that out of production. Imagine the double work, the amount of effort, the resources we spent." These challenges underscore the critical need for structured governance and enterprise-wide alignment when rolling out AI initiatives.




