
Enterprises treat AI risk as a cybersecurity issue, leaving ownership with the CISO and lacking a formal, cross-functional governance model that matches AI's legal, operational, and financial impact.
Ian Schneller, a 35-year information security veteran and three-time Fortune 500 CISO, said AI risk must move to an enterprise committee model, with security serving as a specialized advisor, not the sole owner.
He called for cradle-to-grave AI governance, unified AI project prioritization, workforce training that builds adoption, and identity systems redesigned to manage autonomous agents at scale.
A fundamental AI ownership gap has opened up across the enterprise. Because AI risk is often perceived as technical, responsibility for managing it defaults to the cybersecurity function. But treating a business-wide issue as a niche IT problem slows innovation and amplifies exposure. What's needed is a formal, enterprise-wide model for AI governance that spans legal, privacy, operations, and finance, but most organizations don't yet have one.
Ian Schneller is a 35-year information security veteran with a 24-year military career in offensive and defensive cyber operations. A three-time large enterprise CISO, he advises and mentors security leaders on organizational cyber strategy, and shares insights from the field through his newsletter, Signal Not Noise. His experience spans the highest levels of the public and private sectors, with senior roles at the US Air Force and USCYBERCOM, and executive leadership positions at Bank of America and JPMorgan Chase. Most recently CISO at Health Care Service Corporation, he believes AI adoption demands the same long-game discipline that has defined his career.
"This is a long game, not a short one," Schneller said. "Culture, training, and initiative adoption all compound over time. The key is to monitor, adjust, and avoid monumentally risky mistakes." For Schneller, this long-game mindset starts by correcting what he sees as a foundational error in ownership.
He observed a common reaction in the enterprise: the default assignment of AI risk to the security team, a move he believes is deeply flawed. But that doesn't mean the CISO is off the hook. Instead, the role must shift from general owner to specialized advisor, one who provides deep technical expertise on security, governance, and risk to the enterprise governance committee.
Beyond the CISO: "For whatever reason, AI risk 'sounds' like cyber risk, and so CISOs are owning the AI risk process. But AI risk isn't just security. It’s also privacy, accuracy, fairness, transparency, and non-bias. Taken as a whole, it is an enterprise risk and must be governed as such," Schneller said. Risk acceptance needs to happen at the right level through an enterprise risk committee drawing on legal, privacy, security, and operations, not through the CISO's office alone, he added.
The chatbot speaks for itself: Governance failures carry real legal and financial consequences. For proof, Schneller pointed to a high-profile incident that makes the case for multi-stakeholder oversight: "In the Air Canada case, their chatbot gave incorrect information. The company claimed it wasn't responsible, but a judge ruled that the chatbot speaks for the company," he said. To prevent such failures, Schneller advocated for a cradle-to-grave lifecycle framework, with governance applied continuously, from upfront ROI discipline to a dedicated AI incident response capability distinct from cyber response.
When it comes to orchestration, Schneller challenged the conventional definition, describing it instead as strategic portfolio management: creating a single, prioritized list of AI projects for the entire enterprise. For most organizations, governance is still catching up to the technology, and achieving that alignment hinges on uniting top-down goals with ground-level innovation, starting with a culture where employees see AI as an opportunity rather than a threat, he explained.
The one true list: "If you do it top-down, you get things that are top-of-mind for leadership but might miss opportunities lower down. If you do it bottom-up, you need widespread AI training for the workforce so they can identify those opportunities," he said. The answer, he added, is a combined approach where top-down direction and ground-level input converge into one prioritized slate.
Death to drudgery: "Part of your AI journey needs to include training the workforce on what AI will do if you've done it right: the really un-fun parts of your job that you hate doing, AI is going to do for you. That means you are now free to do the parts that are really stimulating and creative. You have to get in front of the competing narrative that AI is simply coming to take people's jobs," he said. Left unaddressed, workforce resistance could quietly undermine initiatives before they gain traction, he added.
Looking beyond today's challenges, Schneller was already focused on the next horizon of risk, where autonomous systems will test the limits of legacy security models. These emerging risks of agent autonomy will require a new approach to core IT functions.
A new ID for AI: "I don't think information security or IT teams have truly grasped how identity and access management has to transform. To do this right, especially with agentic AI, identities are going to be rapidly spun up at scale, do their thing, and then be rapidly deprovisioned at scale. If you've got somebody in there clicking 'approve' on every single one, your AI initiative is not going to work because it can't move at the speed it was designed for," he said. The solution is redesigning identity and access management from the ground up to provision and deprovision agent identities automatically, at the speed AI actually operates, he noted.
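To make that concrete, here is a minimal Python sketch of one way such a redesign could work: approval attaches to a pre-approved scope template rather than to each identity, so a broker can mint and expire short-lived agent identities at machine speed with no per-identity human click. All names here (AgentIdentityBroker, ScopedCredential, the "claims-triage" template) are hypothetical illustrations under those assumptions, not drawn from Schneller's remarks or any specific IAM product.

```python
"""Sketch: policy-driven, ephemeral identities for AI agents.

Assumption being illustrated: a human owner approves a scope template
once; the broker then provisions and deprovisions short-lived agent
identities against it automatically. All names are hypothetical.
"""
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import secrets
import uuid


@dataclass
class ScopedCredential:
    """Short-lived credential bound to one agent and one task scope."""
    agent_id: str
    scopes: frozenset[str]
    token: str
    expires_at: datetime

    def is_valid(self, now: datetime) -> bool:
        return now < self.expires_at


class AgentIdentityBroker:
    """Mints and revokes agent identities without per-identity approval.

    The human approval step moves to the scope template; identities
    themselves are created and destroyed at machine speed.
    """

    def __init__(self, approved_scopes: dict[str, frozenset[str]],
                 ttl: timedelta = timedelta(minutes=15)):
        self._approved_scopes = approved_scopes  # template -> allowed scopes
        self._ttl = ttl
        self._active: dict[str, ScopedCredential] = {}

    def provision(self, template: str) -> ScopedCredential:
        """Mint an ephemeral identity from a pre-approved scope template."""
        if template not in self._approved_scopes:
            raise PermissionError(f"No approved scope template: {template}")
        cred = ScopedCredential(
            agent_id=f"agent-{uuid.uuid4()}",
            scopes=self._approved_scopes[template],
            token=secrets.token_urlsafe(32),
            expires_at=datetime.now(timezone.utc) + self._ttl,
        )
        self._active[cred.agent_id] = cred
        return cred

    def deprovision(self, agent_id: str) -> None:
        """Revoke an identity the moment its task completes."""
        self._active.pop(agent_id, None)

    def sweep_expired(self) -> int:
        """Background clean-up: expired identities vanish with no human click."""
        now = datetime.now(timezone.utc)
        stale = [aid for aid, c in self._active.items() if not c.is_valid(now)]
        for aid in stale:
            del self._active[aid]
        return len(stale)


if __name__ == "__main__":
    broker = AgentIdentityBroker(
        approved_scopes={"claims-triage": frozenset({"read:claims", "write:notes"})}
    )
    cred = broker.provision("claims-triage")  # spun up at machine speed
    print(cred.agent_id, sorted(cred.scopes))
    broker.deprovision(cred.agent_id)         # torn down when the task ends
```

The design choice worth noting is where the human sits: approval attaches to the policy once, not to every identity, which is what removes the "somebody in there clicking 'approve' on every single one" bottleneck Schneller describes.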
Boggling the brain: "Instead of managing people, you'll need a supervisor who manages AI agents. Now let's think about that at scale. Eventually, we're going to have an AI agent that is trained to be a manager of other AI agents. At some point, you're going to have a human in charge. So now you have a human being the manager of an AI agent who's managing other agents. We need skills for that, and we don't even know what those skills are yet," he said. The job titles don't exist yet, he acknowledged, but building toward them now is the right starting point.
Ultimately, the mindset shift matters as much as the framework. Schneller identified a common hurdle where organizations demand perfection from a technology designed to be iterative. "Organizations that set the bar at perfection are going to find themselves never really adopting it," he said. "Those that set it on the level of 'better than a human' are probably going to be the ones that see the benefits. The question is now: how do we know when it makes mistakes, and what do we do about it? Those need to be part of the framework." He pointed to starting points, including the new Financial Services AI Risk Management framework, but said humility was required. "Be flexible," he said. "Nobody's figured it out exactly right yet."