Vaishali Gandhi, Sr. Director of Data & AI at Samsara, spoke with CIO News about how enterprise AI initiatives often fail due to poor data integrity, not technology limitations.
Business-led AI adoption and enthusiasm can lead to fragmented tools and wasted effort without structured governance and vendor scrutiny.
Gandhi's framework uses 'hard gates' and 'soft influence' to guide teams and vendors toward value-driven AI solutions, favoring agile, collaborative partnerships that maintain competitive advantage.
Enterprise AI adoption is moving faster than ever, but it's now approaching a crossroads. On one side, business teams are more empowered than ever, using accessible AI tools to drive productivity and surface new ideas with the promise of generous ROI. On the other, this decentralized enthusiasm is creating a massive orchestration headache, with fragmented tools, shadow AI, axed pilots, and failed vendor partnerships littering the path to progress. This isn't just an anecdotal problem; reports from MIT, IBM, and Gartner have consistently shown that the vast majority of AI initiatives fail to deliver tangible value or revenue.
Navigating enterprise AI requires more than just better technology. It demands a disciplined regimen for data governance, a ruthless focus on ROI, and a new philosophy for evaluating the vendor partners meant to be allies. The success of AI, it turns out, depends as much on people, process, and partnerships as it does on the platform itself.
We spoke with Vaishali Gandhi, Sr. Director of Data Engineering & AI at Samsara, who has seen both the wins and pitfalls firsthand. Gandhi has drawn on more than two decades of data leadership at companies like BlackLine, StubHub, and Hewlett Packard to develop and implement a framework for cutting through the AI hype. Central to her philosophy is a rather simple concept: data is dirty.
Dirty jobs: For Gandhi, the work begins and ends with the unglamorous but essential task of establishing data integrity. "Enterprise data is the dirtiest thing on the face of this earth. Even dirtier than money. Data is dirty," she said. "It's people like us who are there to clean it and provide it so that our organizations can grow faster and get your AI results out." Gandhi argued that while it sounds straightforward, it's really a foundational principle that many organizations are still struggling to master. "If you really want AI to do what you need to, make sure your data is clean," Gandhi stated, "because without clean data, you will not be successful."
Ready, Fire, Aim: Dirty or poor-quality data creates a challenging environment for doing much of anything, from marketing to RevOps to HR. Gandhi cautions that AI models will absolutely hallucinate or misfire when trained on flawed data, and that cleaning and curating datasets upfront saves far more time than trying to patch errors later.
For Gandhi, the biggest early win she can point to in the AI era was a cultural one. Business leaders are finally coming to the table with their own use cases and, more importantly, a newfound appreciation for the data that powers them.
A welcome change: "Business is stepping up. They actually think about use cases, and people are doing things on their own, then coming to us saying, 'Hey, this is what I did. Can you put this in production for me?' And they understand data now; they understand when data is dirty that they can't really get value out of it."
But this enthusiasm is a double-edged sword. When left unchecked, it leads to a chaotic landscape of unvetted tools and wasted effort, making central orchestration nearly impossible.
Adding insult to injury: "We had an internal team that used a product and they spent almost nine months migrating all the workflows into it," Gandhi shared. "Then the vendor pulled out saying, 'We're not developing anymore, our roadmap has changed.' So now they have to yank all that out of production. Imagine the double work, the amount of effort, the resources we spent." These challenges underscore the critical need for structured governance and enterprise-wide alignment when rolling out AI initiatives.
To prevent this chaos without stifling innovation, Gandhi's team acts as a proactive guide, implementing a governance framework built on hard gates and soft influence. This framework is put to the test daily. Gandhi described a recent request from one of Samsara's business units for a new data tool. Her response was a masterclass in navigating corporate politics and maintaining focus on value.
The hard conversation: According to Gandhi, the hardest part of navigating various proposals is deciphering which projects actually have leadership approval when internal meetings, emails, or hallway conversations start throwing names and titles around. A familiar phrase across most enterprises, 'I've already spoken to the CIO, the CEO wants it,' prompts Gandhi and her team to stick to their questions: 'Okay, great, but what is the value proposition? What are you solving for?' This gives Gandhi a better understanding of the request and helps team members generate genuine executive buy-in if it's something worth pursuing.
The two levers: "The first lever is our hard gates: infosec and legal," she said. "The second lever is the mindset that we are never taking something away without giving something back. So we will say, 'So-and-so already looked at it, these are the things that don't work here, therefore we cannot implement this, but we have this other option.'" In her experience, being able to redirect into another tool, workflow, or solution actually helps maintain 'political capital' and drives stronger adoption.
While being in a position to challenge teams through rigorous, value-focused questioning seems straightforward, it's a skill she has honed over her career. This same unforgiving scrutiny is applied not just to new tools, but to existing vendors. In an era of rapid innovation, a vendor's willingness to co-create and move fast is a critical differentiator.
Questioning authority: Early in her career, Gandhi was shy about questioning authority and learning how to insert her voice into the conversation. Now, as a leader, she reflected, "I feel I'm getting stronger and stronger, and with AI, it's an important time to speak up in the enterprise because it's not just about 'I'm doing AI', it's about I'm doing AI that's actually delivering results." In her view, the quick vanity wins aren't what will drive the greatest business impact, so knowing when to speak up and push back against the common approach of just bolting things on is critical in 2025.
The partnership test: With AI changing business needs and shifting project scopes, it's important to have partners that embrace co-creation. In Gandhi's experience, that partnership building doesn't always work out. As she shared, "We've had vendors who don't move fast enough in the innovation race, which makes it challenging to renew engagements when other vendors are willing to co-create with us."
Given the high-stakes environment across enterprise AI, this approach means that the competitive moat a vendor enjoys can evaporate quickly if they become complacent, a summary Gandhi emphatically endorsed.
An evaporating moat: "We always look for partners that help us build AI capabilities to deliver to business—otherwise, business will just go somewhere else," Gandhi said. From her perspective, it's in every enterprise's best interest to evaluate and select agile, collaborative partners who can ensure that AI projects keep pace with business needs, rather than being derailed by rigid platforms.
Ultimately, all of these processes—governance, ROI analysis, partnership evaluation—circle back to one core purpose: moving beyond AI theater to deliver real, measurable results. Achieving those results depends heavily on your people. As Gandhi puts it, "AI isn't going to take your job if you're actively making AI part of your life. You must be investing the time and resources to figure this out today so that your enterprise can win tomorrow."