
Key Points
AI is dismantling the traditional on-prem versus cloud decision as advanced capabilities concentrate in cloud platforms, making workload placement a function of where AI actually performs best.
Shawn Harrs, EVP and CIO at Red Lobster, outlined how this shift is eroding platform parity and reshaping how CIOs approach infrastructure, development, and R&D.
Harrs laid out a three-part path forward: move legacy systems to AI-enabled SaaS, embed agents across custom development, and use leased private AI infrastructure for high-cost, specialized workloads.
Artificial intelligence is breaking the long-standing on-prem versus cloud debate. As vendors concentrate their AI investment in cloud-hosted products, the feature parity that once justified on-prem deployments is eroding. For today’s technology leaders, workload placement is dictated by where meaningful AI capabilities actually live.
Shawn Harrs, Ph.D., Executive Vice President and CIO at Red Lobster, has spent more than two decades leading enterprise technology transformation from the C-suite, including senior roles at Universal Parks & Resorts and The Walt Disney Company. His work spans large-scale operational environments and close collaboration with technology vendors, positioning him at the intersection of infrastructure strategy and AI adoption. In his view, the long-standing rules that once governed IT infrastructure no longer hold in the era of AI.
"The historical parity between a cloud and an on-prem system is diverging. I see companies moving away from on-prem investment because of it," said Harrs. To navigate this new environment, Harrs breaks the problem down into a three-part technical playbook.
A worthy trade-off: The first path centers on recognizing when legacy on-premises systems have reached the limits of what they can deliver. In areas like reporting and analytics, Harrs prioritizes adaptability and speed over cost parity, treating higher operating expenses as the price of access to capabilities that change how work gets done. "I’m migrating our enterprise reporting and dashboarding tool from on-prem to a hosted product," he said. "It is slightly more expensive from an operating cost perspective, but I’m gaining powerful AI capabilities like natural language querying. We can now automatically build business intelligence dashboards just by asking a business question. That is a phenomenally powerful capability."
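To make the natural-language-querying pattern concrete, here is a minimal sketch that translates a business question into SQL and runs it, assuming the OpenAI Python SDK and a toy SQLite reporting table. The schema, model name, and question are illustrative stand-ins, not details of Red Lobster's actual tooling.

```python
# Minimal sketch: translate a business question into SQL, then run it.
# The schema, model choice, and database are hypothetical placeholders.
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = "CREATE TABLE sales (store_id INTEGER, week DATE, revenue REAL, guests INTEGER);"

def question_to_sql(question: str) -> str:
    """Ask the model to turn a business question into a single SQLite query."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Translate the user's question into one SQLite query "
                        f"against this schema. Return only SQL.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

def ask(question: str):
    """'Just ask a business question' instead of hand-building the report query."""
    sql = question_to_sql(question)
    with sqlite3.connect("sales.db") as conn:  # hypothetical reporting database
        return conn.execute(sql).fetchall()

print(ask("Which five stores grew revenue fastest over the last eight weeks?"))
```

In a hosted BI product, this translation and the dashboard rendering happen behind the tool's own interface; the sketch only shows the shape of the capability Harrs is buying.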
Enter the agents: The second path focuses on custom development, where Harrs advocates augmenting internal teams with AI to accelerate the entire software lifecycle. His vision treats AI as a ubiquitous tool for work automation, from writing code to generating test plans. Executing the strategy takes planning, though: agent workloads place new demands on IT infrastructure. "Any time I'm refactoring a custom system, I remove whatever analytical logic used to live there and replace it with an AI agent. That’s no longer optional," stated Harrs. "We should be using tools like GitHub Copilot everywhere to accelerate development, refactoring, and integration. The goal is to stop hard-coding intelligence and let agents handle it instead."
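As a rough illustration of that refactor, the sketch below contrasts a hard-coded analytical rule with the same decision delegated to an agent. The order-review scenario, thresholds, and model are hypothetical, and the "agent" is reduced to a single model call for brevity.

```python
# Hedged sketch of the refactor Harrs describes: hard-coded analytical logic
# replaced by a call to an AI agent. Scenario, rule, and model are hypothetical.
from openai import OpenAI

client = OpenAI()

def flag_order_legacy(order: dict) -> bool:
    # Before: intelligence hard-coded into the system as a brittle, hand-tuned rule.
    return order["total"] > 500 and order["items"] > 20

def flag_order_agent(order: dict) -> bool:
    # After: the decision is delegated to an agent that can reason over context
    # and be improved without redeploying the application.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system",
             "content": "You review restaurant supply orders for anomalies. "
                        "Answer strictly YES or NO."},
            {"role": "user", "content": f"Is this order anomalous? {order}"},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```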
Renting R&D: A third path addresses the needs of highly specialized, "inch wide, mile deep" R&D workloads where hyperscaler costs can be prohibitive. For these use cases, his advice is to work with private AI datacenter providers to build or lease dedicated appliances. This partner-led approach often relies on smarter designs for AI infrastructure, offering a cost-effective alternative to the massive public cloud providers. "You can partner with a company to build an AI infrastructure stack that turns a week-long model run into a daily one, accelerating the entire R&D cycle by orders of magnitude. Doing that in one of the big hyperscaler clouds is extremely expensive, but leasing a dedicated AI appliance for a defined period lets you achieve the same results at a fraction of the cost, without owning the infrastructure."
But a technical playbook alone isn’t enough to make AI stick. Harrs framed adoption as an organizational shift as much as an architectural one, where the goal is to move beyond isolated pilots and embed AI into everyday work. He contrasted the "inch wide, mile deep" demands of advanced R&D with a complementary "mile wide, inch deep" approach that pushes practical AI usage across the entire workforce. In his view, AI only becomes transformative when it changes how people actually work, not just how systems are built.
Excel without Excel: Harrs pointed to AI fluency as a baseline workplace skill rather than a technical specialty, arguing that productivity gaps now form between employees who work with agents and those who don’t. "If you’re in an administrative role and you’re manually creating PowerPoints these days, that’s a generation behind in terms of how productive your work can be," he said. Real adoption, in his view, happens when agent-building moves into the business itself, not when it stays centralized in IT. "The key is getting people in the business to share the agents they’ve built." At scale, those small workflow gains compound quickly. "Don’t open Excel," Harrs said. "Leave it closed. Go to the agent first."
Making AI adoption stick, Harrs argued, comes down to leadership discipline around execution. As infrastructure decisions shift toward cloud platforms, leased AI appliances, and agent-driven systems, the risk of misaligned vendors and costly missteps rises. That makes structured change management and trusted validation essential.
Explainability and confidence don’t come from vendor decks but from peer proof. "I have access to 1,700 CIOs in my network. If I have a question about a partner, who to use, or how to approach something, I ask," Harrs concluded. "That allows me to go in with confidence that I’m not going to be sold something that doesn’t work for me."



