AI capabilities are now embedded in the enterprise platforms organizations already pay for, available at near-zero incremental cost. As employees across every function begin building their own agents to query corporate data, automate workflows, and accelerate decisions, the barrier to adoption has collapsed. That speed, however, has outpaced the controls designed to govern it. With AI-generated queries touching transactional systems in real time, a growing number of CIOs are reframing AI around practical execution rather than industry hype. They are pushing access enforcement down to the database layer, turning to mechanisms like the Model Context Protocol (MCP) to replicate role-based permissions at the model level.

Leading that effort at GuidePoint Security, a cybersecurity consultancy serving Fortune 500 companies and U.S. government agencies, is Shawn Harrs, Ph.D., the company's Chief Information Officer and a two-time Orbie Award finalist with over 20 years of C-level experience across Disney, NBCUniversal, and Red Lobster. Harrs has managed IT P&Ls of up to $200 million and led a cloud migration that delivered a 10x return on investment. He said the real challenge is no longer capability but operating discipline, particularly as employees begin assembling their own AI agents with access to enterprise data.

"Building agents is the new Excel. People see AI as something novel and complex when really it is a baseline skill that the entire organization should own, not just IT," said Harrs. "Every office worker job description now requires not just productivity software but AI proficiency." In peer conversations over the past year, he said, organizations across industries have begun writing AI competency into job descriptions at every level, from executive assistants automating expense reporting to revenue teams building their own sales tools.

That breadth of adoption raises a practical question: how should organizations evaluate where to invest? Harrs treats AI the same way he treats any capital investment. Before deploying a single dollar, he starts by evaluating the potential for savings, productivity gains, or new revenue. Organizations are getting more leverage from their existing IT spend because advanced features are now built directly into common SaaS platforms. Microsoft's Copilot, for example, lets enterprise employees build agents without custom infrastructure, and the cost of these enterprise suites has remained largely consistent with pre-AI pricing. With the cost barrier largely removed, the bottleneck has moved to AI literacy, and Harrs pointed out that proficiency should be championed by the entire C-suite, not IT alone.

  • On the house: "These AI tools at the disposal of sales, finance, or marketing functions are a step-function more productive and available at zero IT investment," Harrs noted. "The cost is literally zero above your existing investment because vendors are investing in these platforms, adding AI capabilities to their products, and generally the cost is consistent with what they were pre-AI." What was once a separate line item has become a bundled feature, collapsing the time and cost required to pursue new use cases.

  • Native by default: The workforce shift is already visible at the entry level. "Today, companies are hiring entry-level employees who are what I would call 'AI natives,'" Harrs said. "Just as we had the digital native generation that was mobile-first, when someone early-career is given a task today, they start from the perspective that AI can help them accomplish 80% of the work." For organizations still debating whether to adopt, the next generation of employees has already decided.

  • The new fast track: "If someone can move really fast at taking away remedial tasks, your role becomes more valuable and you are set up for raises and promotions," he said. "Because now your role is rationalizing the other positions that didn't need to get hired. I am hearing some organizations state that AI proficiency is now a requirement to get a raise or promotion." The incentive structure is shifting from rewarding tenure to rewarding velocity, and AI fluency is becoming the differentiator.

The AI mandate is landing squarely on the CIO. Some organizations are formalizing it with new titles. Others are simply expecting their technology leaders to deliver an enterprise-wide AI operating platform with the same rigor as any production system. Harrs advised moving quickly with big cloud providers for turnkey capabilities while keeping strict guardrails in the deployment pipeline. But as more employees create and deploy their own agents, three questions take over: who can access what data, how the model enforces those boundaries, and how users are trained to interact responsibly.

  • Shadow IT, sanctioned: "We are going to allow users to elevate beyond simple agent-building capabilities into building their own personalized agents, and with guardrails, obviously, allowing them to interact with transactional systems," Harrs said. "Where particular agents have great adoptability at an enterprise level, we'll have a path for those agents to move out of the self-service use case into becoming an enterprise agent." The model is designed to start with a human in the middle and expand autonomy as maturity develops, not the other way around.

  • Bouncers at the database: Letting user-built agents query transactional data exposes a gap in traditional application-layer security. Because AI can generate queries dynamically, the application is no longer a reliable sole enforcement point. Harrs' response is to push controls down to the database and require that MCP replicate existing role-based permissions at the model layer. "The MCP has to replicate role-based permissions at the MCP layer," he explained. "If we have robust controls at the system level, we have to demonstrate that the MCP replicates permissions so the model can train on the whole set of data. But as I interact with that model, I'm not getting results that include restricted information or break the access controls I would have if I were interacting directly with the application." The governance structure was the first thing the team designed, before any tooling decisions were made, he said.

  • Phishing for prompts: Beyond technical controls, Harrs treated prompt quality and safety as an extension of the organization's risk surface, requiring the same governance as any other communication channel. "There is a training aspect to the types of prompts people put in, where the end user is given guidance on what a good prompt is versus a not-so-good prompt," he explained. "It's not dissimilar to cybersecurity phishing training or effective corporate communications in your company chat tool so that users understand their prompts matter." But governing how people interact with AI is only half the equation. The tools themselves have to meet the same standard.
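The database-layer enforcement Harrs describes can be sketched in miniature: a gateway that sits between the model and the data and mirrors the application's role-based permissions, so an agent-generated query can never return rows or columns its human user could not see. This is a hypothetical illustration, not GuidePoint's implementation or any real MCP SDK; the names `Role` and `QueryGateway` are invented for the example.

```python
# Hypothetical sketch of database-layer RBAC enforcement for agent queries.
# All class and field names are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Role:
    """Mirror of the application's role-based permissions."""
    name: str
    allowed_tables: set = field(default_factory=set)
    denied_columns: set = field(default_factory=set)  # e.g. {"salary", "ssn"}

class QueryGateway:
    """Chokepoint between the model and the database: every
    agent-generated query is authorized here against the same
    permissions the user would have in the application itself."""

    def __init__(self, roles):
        self.roles = {r.name: r for r in roles}

    def authorize(self, role_name, table, columns):
        role = self.roles[role_name]
        if table not in role.allowed_tables:
            raise PermissionError(f"{role_name} may not query {table}")
        blocked = set(columns) & role.denied_columns
        if blocked:
            raise PermissionError(f"{role_name} may not read {sorted(blocked)}")
        return True

# An analyst can read order data but never the restricted column.
analyst = Role("analyst", allowed_tables={"orders"}, denied_columns={"ssn"})
gateway = QueryGateway([analyst])
gateway.authorize("analyst", "orders", ["order_id", "total"])
```

The design choice mirrors the quote above: because the model can generate arbitrary queries, enforcement cannot live in the application UI; it has to sit at the layer every query passes through.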

Rather than retrofitting controls onto platforms that lack them, his team requires data lineage, auditing, and cataloging capabilities out of the box. "Even before we move to a pilot, we have our data assurance group under cybersecurity acting as a standing member of the architecture group," he said. "They define the test cases we have to demonstrate around data governance. As we evaluate our stack and speak to vendors, we are asking them to show us that their tool can execute before we even do the demos."

Ultimately, Harrs treats AI implementation much like a traditional IT project, relying on standard source-to-target data validation. That includes not only how data is governed within tools, but also how AI outputs are checked against source systems as models are trained and deployed. "We rely on a human-in-the-middle, trust but verify approach," he concluded. "As we're training and launching models, we're doing a lot of data validation not dissimilar to a data migration from a legacy to a new system. We're executing queries at the agent level and looking to ensure the answers we're getting are correct and equivalent to the source system."
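The source-to-target validation Harrs compares to a data migration can be sketched as a simple reconciliation loop: ask the agent and the source system the same question, and flag any disagreement for human review. This is a hedged illustration only; `source_query` and `agent_answer` are stand-ins for real integrations, and the question IDs are invented.

```python
# Hypothetical "trust but verify" reconciliation, in the spirit of a
# legacy-to-new-system data migration check. Both functions are stubs.

def source_query(question_id):
    # Direct query against the source/legacy system (stubbed here).
    truth = {"q1_total_open_tickets": 42}
    return truth[question_id]

def agent_answer(question_id):
    # Answer produced by the agent under validation (stubbed here).
    answers = {"q1_total_open_tickets": 42}
    return answers[question_id]

def validate(question_ids):
    """Return (question, expected, got) mismatches for human review."""
    mismatches = []
    for qid in question_ids:
        expected, got = source_query(qid), agent_answer(qid)
        if expected != got:
            mismatches.append((qid, expected, got))
    return mismatches

print(validate(["q1_total_open_tickets"]))  # an empty list means the agent matched the source
```

The human-in-the-middle element enters where the mismatch list is non-empty: a person, not the model, decides whether the agent or the source system is wrong.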