"This is a long game, not a short one. Culture, training, and initiative adoption all compound over time. The key is to monitor, adjust, and avoid monumentally risky mistakes."
Ian Schneller
CISO Advisor
3x CISO

A fundamental AI ownership gap has opened up across the enterprise. Because AI risk is often perceived as technical, responsibility for managing it defaults to the cybersecurity function. But treating a business-wide issue as a niche IT problem slows innovation and amplifies exposure. What's needed is a formal, enterprise-wide model for AI governance that spans legal, privacy, operations, and finance, yet most organizations don't have one.

Ian Schneller is a 35-year information security veteran with a 24-year military career in offensive and defensive cyber operations. A three-time large enterprise CISO, he advises and mentors security leaders on organizational cyber strategy, and shares insights from the field through his newsletter, Signal Not Noise. His experience spans the highest levels of the public and private sectors, with senior roles at the US Air Force and USCYBERCOM, and executive leadership positions at Bank of America and JPMorgan Chase. Most recently CISO at Health Care Service Corporation, he believes AI adoption demands the same long-game discipline that has defined his career.

"This is a long game, not a short one," Schneller said. "Culture, training, and initiative adoption all compound over time. The key is to monitor, adjust, and avoid monumentally risky mistakes." For Schneller, this long-game mindset starts by correcting what he sees as a foundational error in ownership.

He observed a common reaction in the enterprise: AI risk gets assigned to the security team by default, a move he believes is deeply flawed. But that doesn't mean the CISO is off the hook. Instead, the role must shift from general owner to specialized advisor, providing deep technical expertise on security, governance, and risk to the governance committee.

  • Beyond the CISO: "For whatever reason, AI risk 'sounds' like cyber risk, and so CISOs are owning the AI risk process. But AI risk isn't just security. It's also privacy, accuracy, fairness, transparency, and non-bias. Taken as a whole, it is an enterprise risk and must be governed as such," Schneller said. Risk acceptance needs to happen at the right level through an enterprise risk committee drawing on legal, privacy, security, and operations, not through the CISO's office alone, he added.

  • The chatbot speaks for itself: Governance failures carry real legal and financial consequences. For proof, Schneller pointed to a high-profile incident that makes the need for this multi-stakeholder approach to oversight clear: "In the Air Canada case, their chatbot gave incorrect information. The company claimed it wasn't responsible, but a judge ruled that the chatbot speaks for the company," he said. To prevent such failures, Schneller advocated a cradle-to-grave lifecycle framework, with governance applied continuously, from upfront ROI discipline to a dedicated AI incident response capability distinct from cyber response.