Phillip Miller, CISO at H&R Block, spoke with CIO News about the parallels between farming and cybersecurity, emphasizing the need for strategic focus in monitoring systems.
Miller warned of "agentic sprawl" in enterprises, where autonomous systems create silos and technical debt.
He proposed a framework using proxy identities and digital twins to manage AI systems safely.
Miller homed in on the importance of vendor alignment and governance in AI adoption, likening it to the early security challenges of SaaS.
From a farm nestled in the rolling hills of Southern Virginia, a Chief Information Security Officer ponders the parallels between cultivating the land and cultivating technology.
Phillip Miller, a veteran security executive, farmer, author, and the current Vice President and Global CISO at H&R Block, drew the compelling analogy in a recent LinkedIn post about finding his chickens' eggs in unpredictable locations throughout the barn:
"Checking every nook and cranny in my barns and barnyard for eggs would take many hours. Not something that can be easily automated. I certainly cannot watch the hens all day long, either... It is also that way for computer systems. There is simply not enough available processing power (or money) to watch everything all of the time. The ‘art’ of our world has always been about making informed decisions about where to focus our efforts. Sometimes it may be the most likely to be attacked area, other times we may hone in closely on the data with the most value."
The analogy extends to the risk and reward of "agentic sprawl" in the enterprise. As companies rush to deploy intelligent agents, they risk recreating the very silos (no pun intended) and technical debt that a generation of cloud tools was meant to eliminate. But the ROI of AI is often worth some temporary disorganization, provided the methods for retroactive clean-up are well governed.
Speaking to CIO News about his broader philosophy on corporate AI policy, Miller discussed balancing the lure of autonomy against human-in-the-loop safety.
A cultural disruption: The appeal of agents is obvious: they can perform tasks without emotion and, seemingly, with perfect accuracy. But that perception of control fractures when autonomous systems, deployed in isolation, begin to interact with each other without guardrails. "Let's say I'm in the finance department and I bring in an agent to help me manage budgets, and the IT department goes ahead and creates agents to help with their file set. You can see very quickly how you could have two agents communicating with each other and without a human in the loop, and you might end up with a level of chaos."
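To make the failure mode concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate between agents. The design and all names are illustrative assumptions, not Miller's implementation or any specific framework's API.

```python
# Hypothetical sketch: agents never message each other directly; every
# cross-agent request is queued for human review before delivery.

from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    sender: str     # e.g. "finance-budget-agent"
    recipient: str  # e.g. "it-fileset-agent"
    action: str     # the operation the sender wants performed

@dataclass
class HumanGate:
    pending: list[AgentMessage] = field(default_factory=list)

    def submit(self, msg: AgentMessage) -> None:
        # Instead of delivering immediately, hold the request for a person.
        self.pending.append(msg)

    def review(self, approve) -> list[AgentMessage]:
        # A human (or a policy acting on their behalf) decides each request.
        delivered = [m for m in self.pending if approve(m)]
        self.pending.clear()
        return delivered

gate = HumanGate()
gate.submit(AgentMessage("finance-budget-agent", "it-fileset-agent", "move-files"))
# In practice a person would approve or deny; here we approve everything.
print(gate.review(lambda m: True))
```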
Digital twins: To manage this new reality, Miller proposed a three-part framework. The first two pillars are technical: implementing "proxy identities" to ensure an agent can never exceed the authority of its human counterpart, and leveraging "digital twins," functional replicas of the IT environment that serve as a safe sandbox for rigorous testing before deployment. His third pillar concerns the cost of free-flowing agentic systems and the importance of choosing the right vendors to usher in the next generation of AI tooling.
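One simple way to read the proxy-identity pillar is as a permission intersection: the agent is granted only what it asks for and its human principal already holds. The sketch below assumes a basic permission-set model; the permission strings and function names are hypothetical, not from Miller or H&R Block.

```python
# Hypothetical sketch of a "proxy identity": an agent's effective
# permissions are the intersection of what it requests and what its
# human principal actually holds, so it can never exceed that person.

FINANCE_ANALYST_PERMS = {"budget:read", "budget:write", "report:read"}

def proxy_permissions(requested: set[str], principal: set[str]) -> set[str]:
    """Grant only the permissions the human counterpart already has."""
    return requested & principal

agent_request = {"budget:read", "budget:write", "user:delete"}
print(proxy_permissions(agent_request, FINANCE_ANALYST_PERMS))
# -> {'budget:read', 'budget:write'}; 'user:delete' is dropped
```

Under Miller's framework, the second pillar then slots in naturally: an agent scoped this way would first run against a digital twin of the environment before ever touching production.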
Without proper staging in place, leaders can feel as though they are navigating a black box when implementing agentic systems this early in the technology's lifecycle. Miller advised scrutinizing a vendor's core mission when selecting partners for AI adoption.
Vendor alignment: "You have to look beyond just the tech, and look at the substantive business model that vendors are going after," he said. "Are they all about just trying to displace talent as quickly as possible, or are they investing in solutions that would have enterprise guardrails?" The market is awash in the former, but more mature vendors are quickly emerging from the legacy SaaS world: well-entrenched providers soft-pivoting to agent-first solutions. To Miller, the vendor assessment challenge is reminiscent of the early days of SaaS, when security was often an afterthought in the rush for adoption.
The NTSB of AI: To avoid repeating history, he called for proactive governance, making a memorable appeal for a balanced approach. "I don't think we need to see a National Transportation Safety Board-type equivalent for agentic AI or a National Labor Relations Board with AI unions," he joked. "But we do need some kind of effective governance model."
Miller then shifted focus to human capital, noting that even the pinnacle of agentic success would bring new challenges and opportunities.
Talent monopolies: "Agents alone tend to create somewhat of a talent monopoly for the work they do. How do you make sure that you don't find yourself sort of hostage to a skilled capability inside of the agent?" This long-term risk, he argued, is similar to the "early days of outsourcing," where initial cost savings are eventually eroded by hidden complexities and a dependency that's hard to reverse.
Ultimately, the rise of agentic AI creates a new societal obligation, one that could elevate the nature of the work left to humans. AI's existential threat to human workers creates a forcing function to upskill, which has trickle-down effects across the workforce and trickle-up effects that further improve and hone the AI. "If you have robots coming to take care of most things, then we will all need to do higher-order jobs to support them."
*All opinions expressed in this piece are those of Phillip Miller, and do not necessarily reflect the views of his employer.