
Enterprise AI is delivering early returns for some, but most initiatives are stalling. Many organizations have attempted to adopt AI, yet few deployments have been effective. Now, with wins from copilots fading quickly after the proof-of-concept stage, many leaders are still waiting to see ROI. Some experts say the problem isn't technical; it's human.
For a practical perspective on the problem, we spoke with Nicholas Mortensen, Principal over Development and Integrations at top 25 CPA firm Eide Bailly LLP. Mortensen has spent over a decade at the intersection of AI strategy and legacy infrastructure, architecting the systems that bridge the gap between promise and production. For him, AI's most significant security risk and the only way to address it come down to the same thing: trust.
An AI arms race: The speed of today's AI-powered attacks requires an equally fast, AI-driven defense, Mortensen explained. From his perspective, the biggest threat is attackers using AI to move through compromised systems with unprecedented speed. "Bad actors are using AI to accelerate every stage of an attack, from breaching a system to extracting value. You can't be content with yesterday's security methods. You have to fight fire with fire."
But because AI workflows are non-deterministic, they can no longer be secured with the traditional 'if-then' logic that has governed IT for decades, Mortensen said. Instead, organizations must reimagine trust as a principle embedded from the start.
Divide and conquer: According to Mortensen, the problem isn't the model changing over time, but the cumulative effect of minor, acceptable deviations in a complex workflow. "When one AI agent handles a complex task, small deviations can stack up and create inconsistent results. The solution is 'task decomposition': an agent architecture with a main decision-maker and specialized sub-agents. Each sub-agent is highly focused, providing a consistent output that eliminates variability and leads to far more reliable outcomes."
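The task-decomposition pattern Mortensen describes can be sketched in a few lines: a main decision-maker routes narrow subtasks to specialized sub-agents, so small deviations stay contained in one step instead of stacking across a long workflow. The agent names and routing logic below are purely illustrative, not a description of Eide Bailly's actual system.

```python
# Illustrative sketch of task decomposition: an orchestrator (the
# "main decision-maker") dispatches narrow subtasks to focused sub-agents.
from typing import Callable

def extract_agent(payload: str) -> str:
    # A focused sub-agent: one narrow job, one consistent output shape.
    return f"extracted:{payload.strip()}"

def classify_agent(payload: str) -> str:
    # Another narrow sub-agent: a single classification decision.
    return "invoice" if "invoice" in payload.lower() else "other"

# Registry of specialized sub-agents, keyed by subtask kind.
SUB_AGENTS: dict[str, Callable[[str], str]] = {
    "extract": extract_agent,
    "classify": classify_agent,
}

def orchestrator(task: str) -> dict[str, str]:
    # Decompose one complex task into narrow subtasks; each sub-agent's
    # output is checked in isolation, limiting how errors can compound.
    return {kind: agent(task) for kind, agent in SUB_AGENTS.items()}

print(orchestrator("  Invoice #1234 from Acme  "))
```

In a real deployment the sub-agents would be model calls rather than plain functions, but the structural point is the same: each one gets a task small enough that its output can be validated independently.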
When bots go bad: He cites a recent public incident as a clear illustration of what happens when governance is an afterthought. AI coding assistants can misinterpret vague code references and silently delete critical components, a risk that can only be mitigated by comprehensive, end-to-end testing. "Consider the Replit incident, where an agent deleted a database. That would never have happened with proper DevOps. A proper CI/CD process with robust, holistic test classes provides the human-in-the-loop oversight needed to catch the subtle, but critical, errors that AI can introduce."
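One concrete form that CI/CD oversight can take is a pipeline gate that flags destructive operations in AI-proposed changes for human approval before they can run. The check below is a hedged sketch, not a real framework: the pattern list is deliberately minimal, and a production gate would be far more thorough.

```python
# Sketch of a CI guardrail: flag destructive SQL in an AI-generated
# change so a human must approve it before it reaches production.
import re

# Minimal, illustrative pattern list; a real gate would cover much more.
DESTRUCTIVE = re.compile(
    r"\b(DROP\s+(TABLE|DATABASE)|TRUNCATE\s+TABLE)\b",
    re.IGNORECASE,
)

def needs_human_review(sql: str) -> bool:
    """Return True if the statement destroys data and needs sign-off."""
    return bool(DESTRUCTIVE.search(sql))

print(needs_human_review("DROP TABLE customers;"))   # destructive
print(needs_human_review("SELECT id FROM customers;"))
```

Wired into a pipeline, a failed check blocks the merge, which is exactly the human-in-the-loop step that was missing in the incident Mortensen describes.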
For most organizations looking to deploy AI, the best place to start is ensuring the core infrastructure is sound, Mortensen said. Data quality isn't negotiable; it's the primary building block for trust.
The unbreakable rule: Most common data issues, like missing fields from historical records, prevent the consistent inputs required to get reliable AI outputs, Mortensen explained. "Consistency is the key to trust. If your data is inconsistent, you are feeding the system inconsistent inputs, which guarantees your outputs will be inconsistent. Find me anyone getting reliable results from an agent with poor data quality. You won't."
Start where you are: As a first step, leaders should identify which department has the cleanest data and begin their AI initiatives there, using a data warehouse as a temporary bridge if needed. "If you need to leverage legacy systems and can't modernize right away, a Data Lake or Data Warehouse is a great, pragmatic way to solve the problem of data availability."




