

Artificial intelligence is upending the long-standing on-prem versus cloud debate. As vendors concentrate their AI investment in cloud-hosted products, the feature parity that once justified on-prem deployments is eroding. For today’s technology leaders, workload placement is increasingly dictated by where meaningful AI capabilities actually live.
Shawn Harrs, Ph.D., Executive Vice President and CIO at Red Lobster, has spent more than two decades leading enterprise technology transformation from the C-suite, including senior roles at Universal Parks & Resorts and The Walt Disney Company. His work spans large-scale operational environments and close collaboration with technology vendors, positioning him at the intersection of infrastructure strategy and AI adoption. In his view, the long-standing rules that once governed IT infrastructure no longer hold in the era of AI.
"The historical parity between a cloud and an on-prem system is diverging. I see companies moving away from on-prem investment because of it," said Harrs. To navigate this new environment, Harrs breaks the problem down into a three-part technical playbook.
A worthy trade-off: The first path centers on recognizing when legacy on-premises systems have reached the limits of what they can deliver. In areas like reporting and analytics, Harrs prioritizes adaptability and speed over cost parity, treating higher operating expenses as the price of access to capabilities that change how work gets done. "I’m migrating our enterprise reporting and dashboarding tool from on-prem to a hosted product," he said. "It is slightly more expensive from an operating cost perspective, but I’m gaining powerful AI capabilities like natural language querying. We can now automatically build business intelligence dashboards just by asking a business question. That is a phenomenally powerful capability."
Enter the agents: The second path focuses on custom development, where Harrs advocates augmenting internal teams with AI to accelerate the entire software lifecycle. He sees AI as a ubiquitous tool for work automation, from writing code to generating test plans. Successfully implementing this strategy often depends on careful planning, as the new demands from enterprise AI agents can strain IT infrastructure. "Any time I'm refactoring a custom system, I remove whatever analytical logic used to live there and replace it with an AI agent. That’s no longer optional," said Harrs. "We should be using tools like GitHub Copilot everywhere to accelerate development, refactoring, and integration. The goal is to stop hard-coding intelligence and let agents handle it instead."
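To make the refactoring pattern concrete, the sketch below contrasts hard-coded analytical logic with the same decision delegated to an agent. This is a minimal illustration, not anything from Red Lobster's systems: the `ask_agent` function is a stub standing in for a real agent or LLM endpoint, and every name and number here is invented for the example.

```python
# Hypothetical sketch: replacing hard-coded analytical logic with an
# agent call. ask_agent() is a stub; in practice it would call a real
# agent/LLM API.

def reorder_level_hardcoded(avg_daily_sales: float, lead_time_days: int) -> float:
    # Legacy approach: the business rule is baked into the application.
    safety_stock = avg_daily_sales * 2  # fixed two-day buffer
    return avg_daily_sales * lead_time_days + safety_stock

def ask_agent(prompt: str) -> str:
    # Stand-in for a real agent endpoint. Here it returns a canned
    # answer so the sketch runs without any external service.
    return "reorder_level: 150.0"

def reorder_level_agentic(avg_daily_sales: float, lead_time_days: int) -> float:
    # Refactored approach: describe the question and parse the agent's
    # answer, rather than encoding the rule in application code.
    prompt = (f"Given average daily sales of {avg_daily_sales} units and a "
              f"lead time of {lead_time_days} days, recommend a reorder level.")
    reply = ask_agent(prompt)
    return float(reply.split(":")[1])

print(reorder_level_hardcoded(20.0, 5))  # → 140.0
print(reorder_level_agentic(20.0, 5))    # → 150.0 (from the stubbed agent)
```

The point of the pattern is that the decision logic now lives behind a prompt boundary: updating how reorder levels are chosen no longer requires changing and redeploying application code.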
Renting R&D: A third path addresses the needs of highly specialized, "inch wide, mile deep" R&D workloads where hyperscaler costs can be prohibitive. For these use cases, his advice is to work with private AI datacenter providers to build or lease dedicated appliances. This partner-led approach often relies on smarter designs for AI infrastructure, offering a cost-effective alternative to the massive public cloud providers. "You can partner with a company to build an AI infrastructure stack that turns a week-long model run into a daily one, accelerating the entire R&D cycle by orders of magnitude. Doing that in one of the big hyperscaler clouds is extremely expensive, but leasing a dedicated AI appliance for a defined period lets you achieve the same results at a fraction of the cost, without owning the infrastructure," he said.




