  • As universities accelerate AI adoption across every function, security leaders who once focused on risk prevention face growing pressure to guide institutions on how to implement AI responsibly.

  • Lester Godsey, CISO at Arizona State University, explained how his team shifted from auditing innovation to actively enabling it, becoming the consultative arm that helps ASU implement AI in a secure, private, and ethical manner.

  • At ASU, that meant turning security standards into self-serve tools, scaling agentic automation across departments, and repurposing annual tabletop exercises to test AI readiness rather than just compliance.

The CISO’s role in higher education is shifting. As universities accelerate AI adoption across every function, security leaders who once focused on risk prevention are being asked to do something harder: guide institutions on how to implement AI in a secure, private, and ethical manner. Whether security becomes a brake pedal or a throughput engine depends on how leaders show up to that conversation.

Lester Godsey is a leader at the center of this transformation. As the Chief Information Security Officer for Arizona State University (ASU), Godsey is responsible for protecting a highly innovative academic institution. His perspective is built on over 30 years of IT experience, including CISO roles for Maricopa County, the fourth-largest county in the United States, and the City of Mesa. He is now championing a model where security functions as an accelerator.

"We’re using multi-agent frameworks to orchestrate security workflows, so routine tasks like incident triage and identity management happen automatically, enabling us to continually scale innovation while keeping sensitive data under control," Godsey said. At ASU, agentic automation is already operational, and the security controls that govern it were built in from the start, not bolted on after.

Godsey’s teams embed security directly into the university’s daily workflows. The goal is to reduce friction by turning compliance burdens into on-demand, self-serve tools that make the secure path the easiest one to take.

  • Policy to portal: Nobody was going to read 19 new security standards, so the team transformed the policies into a self-serve Q&A bot. "Instead of expecting our users to read all of them, we gave them access to a bot trained on our existing policies and standards. They could ask any question they wanted around multifactor authentication, encryption, passwords, or elevated privileges," Godsey explained.

  • Expertise on demand: The team captured the institutional knowledge of their most seasoned identity management experts and fed it into an AI bot for the university help desk. "The bot gives technicians immediate access to accurate answers, allowing them to better support the person on the other end, who is on a phone or online," Godsey said.
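Both bots described above follow the same basic pattern: retrieve the most relevant passage from a curated knowledge base and surface it to the user. A minimal sketch of that retrieval step is below; the passages, scoring method, and function names are illustrative assumptions, not ASU's actual implementation (which would use a far richer retrieval pipeline over the full text of its standards).

```python
import re

# Hypothetical policy passages; in practice these would be chunks drawn
# from the full set of published security standards.
PASSAGES = [
    "Multifactor authentication is required for all accounts with elevated privileges.",
    "Passwords must be at least 16 characters and rotated after a suspected compromise.",
    "Sensitive data must be encrypted both at rest and in transit.",
]

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into alphanumeric terms."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer(question: str, passages: list[str]) -> str:
    """Return the passage that shares the most terms with the question."""
    best = max(passages, key=lambda p: len(tokens(question) & tokens(p)))
    if not tokens(question) & tokens(best):
        # No overlap at all: hand the question off rather than guess.
        return "No matching policy found; escalate to the security team."
    return best
```

For example, `answer("How long must passwords be", PASSAGES)` retrieves the password passage, while an unrelated question falls through to the escalation message, which is the same "make the secure path the easiest one" trade-off the article describes.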

This philosophy of enablement extends to the university’s high-profile initiatives. Godsey noted that the accelerated adoption of AI is shining a greater light on problems that have plagued technology for decades. He believes few organizations can confidently answer where all their data lives, what it is, and who has access.

  • Old problems, new catalyst: That insight became a driver for ASU’s significant partnership with OpenAI, a move that tackles the data challenge head-on as part of the university's enterprise AI governance. "The challenges AI exposes within higher education, like unknown data access and classification gaps, aren’t new. We’re finally using automation to tackle decades-old problems at scale," Godsey said.

  • Enabling with guardrails: The university's approach to vendor agreements reflects the same logic. "We brokered an enterprise agreement that gives every single faculty, staff, and student at ASU access to ChatGPT Edu. The platform provides guardrails and privacy pre-agreements that prevent the university's data from being used to train models," he added. The expansive collaboration is now featured by OpenAI as a case study in responsible, large-scale deployment.

"We’re using multi-agent frameworks to orchestrate security workflows, so routine tasks like incident triage and identity management happen automatically, enabling us to continually scale innovation while keeping sensitive data under control."

Lester Godsey

CISO
Arizona State University

The OpenAI deal is a key component of ASU's vendor-agnostic strategy. To avoid vendor lock-in, the university fosters a multi-platform AI ecosystem that gives faculty, staff, and students the flexibility to select the best tool for their specific needs. This positions the university as a facilitator of choice rather than a gatekeeper of a single platform.

  • The gamut of agents: To manage this new complexity, ASU is scaling its response through a multi-pronged strategy for agentic automation that runs the gamut from commercial platforms to in-house development. These automation projects are underpinned by an enterprise-wide data initiative co-led by the provost’s office, a practical example of operational AI governance. "We've already stood up multiple agentic workflows across the organization, including in the business school, parking and transit services, and our EdPlus department, which is responsible for all our online offerings," Godsey said.

  • Accelerating the analysts: Godsey's team is evaluating the agentic framework in their Google SecOps platform to accelerate threat response. "A lot of those steps are reasonably straightforward, but at scale, doing it hundreds of times a day for a shift takes up a lot of time," he noted. The team's approach uses multiple agents, each trained on a specific task, to expedite triage while preserving a careful balance between AI automation and human judgment, letting analysts react to potential threats much more quickly.
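The triage pattern described in the bullets above can be sketched as a dispatcher that routes each alert to a narrow, task-specific agent and escalates anything uncertain to a human analyst. The agent names, alert fields, and confidence threshold below are illustrative assumptions, not ASU's actual Google SecOps workflow:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str
    confidence: float

# Each "agent" handles one narrow task. These rules are deliberately
# simplistic stand-ins for trained models or playbooks.
def phishing_agent(alert: dict) -> Verdict:
    suspicious = "urgent" in alert["subject"].lower()
    return Verdict("quarantine" if suspicious else "dismiss",
                   0.9 if suspicious else 0.6)

def identity_agent(alert: dict) -> Verdict:
    locked = alert.get("failed_logins", 0) > 10
    return Verdict("lock_account" if locked else "monitor", 0.85)

AGENTS = {"phishing": phishing_agent, "identity": identity_agent}
HUMAN_REVIEW_THRESHOLD = 0.8  # assumed cutoff for automatic action

def triage(alert: dict) -> str:
    """Route an alert to its specialist agent; escalate anything uncertain."""
    agent = AGENTS.get(alert["type"])
    if agent is None:
        return "escalate_to_human"
    verdict = agent(alert)
    if verdict.confidence < HUMAN_REVIEW_THRESHOLD:
        return "escalate_to_human"
    return verdict.action
```

The design point is the threshold: high-confidence, routine verdicts execute automatically hundreds of times a day, while ambiguous cases still land in front of an analyst, which is the AI/human balance the article describes.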

Ultimately, Godsey framed the AI challenge as the latest in a series of manageable technology transitions, similar to the advent of the internet and cloud computing. For ASU, a critical component of moving fast safely lies in the strong governance that underpins the technology, embodied in an annual cybersecurity tabletop exercise repurposed to test AI readiness.

  • Drills for speed: The exercise presented university leaders with a realistic scenario: the potential exfiltration of FERPA-protected student data through a third-party AI tool, and whether internal platforms were ready to respond. Beyond identifying gaps, it built institutional trust. "It went a long way toward letting people know we're really thinking about all of this," Godsey said, noting that other departments began actively seeking the security team's input. "Tabletop exercises for AI adoption aren't just about compliance. They're how we teach the institution to move fast safely," he added.

His advice for other institutions begins with a sense of urgency. "If institutions are still asking whether they should adopt AI, they've already missed the boat," he said. "One big challenge higher education has is preparing students for future employment, and those opportunities are going to involve AI. That ship has already sailed." The path forward doesn't require reinventing the wheel. "You can modify your existing processes, frameworks, and strategies to account for the unique challenges of AI. For institutions with a decent grounding in cybersecurity, a lot of that heavy lifting is already done," he added.

He explained that his goal is to enable progress, not to be a barrier. "Sometimes we'll say, 'Hey, we may need to take a more measured pace and make sure that you understand what the risks are.' But we never take the position of halting investment in AI or in delivering value and learning outcomes for our students," Godsey concluded.