"AI agents aren’t just tools. They’re insiders acting autonomously, and traditional identity frameworks were never designed for that."
Frank Sgueglia
Vice President of Information Technology
Penta Group

As autonomous AI agents increasingly operate like a new class of nonhuman insider, the security models built for human employees at keyboards are struggling to adapt. The result is a reckoning between the probabilistic nature of these new tools and the security frameworks designed to govern predictable human behavior. While innovation races ahead, a majority of cybersecurity leaders are actively slowing agentic AI adoption due to security concerns, highlighting the tension between progress and protection.

Helping organizations navigate this new environment at scale is Frank Sgueglia, the Vice President of Information Technology at Penta Group. Sgueglia has spent over 15 years on the front lines of this architectural change for major organizations including S&P Global Market Intelligence. To adapt, he advised leaders to fundamentally rethink the very definition of identity and control.

"AI agents aren’t just tools. They’re insiders acting autonomously, and traditional identity frameworks were never designed for that." Sgueglia explained that although agents operate with a human's credentials, they're driven by a logic that can be at odds with our security models. The result is a new class of identity—nonhuman workers—that legacy systems were not designed to parse.

  • Authorized break-in: A clear illustration of the new reality comes from Sgueglia’s own experience. He saw how even an advanced security philosophy like Zero Trust can be challenged when an agent hijacks an authenticated session to fulfill a command. "I was getting frustrated with an agent I was testing and told it to find a better way. It then used the browser's development tools to sniff out the back-end API calls and accomplished in one command what had been taking three days," he shared.

  • Healing versus stealing: Sgueglia said this kind of unexpected agent behavior can reveal a core feature of their autonomous nature, showing how even strong technical guardrails can fail. "When I challenged the agent for stealing my authenticated session, its response was mind-blowing. It acknowledged the technical reality of what it did, but argued that it wasn't theft because I was the one who was authenticated on the platform. It was self-healing to the point of stealing my authentication."