
Most enterprises are struggling to scale AI because they treat it as a project rather than a persistent capability with clear ownership and accountability.
PwC Partner George Korizis said the barriers are organizational: fragmented ownership, legacy systems, and a double standard that demands perfection from AI while accepting everyday human error.
Korizis said success requires clear AI governance, human verification, applying constraints to spur creativity, and knowing when to slow down.
Technical capability rarely explains why generative AI adoption fails to scale. Organizational discipline almost always does. While teams celebrate successful proofs-of-concept, many enterprises find themselves in pilot purgatory, stuck in the hype gap between potential and performance. The widespread stall often stems from a strategic misunderstanding: companies treat AI as a series of short-term projects instead of a persistent, core business capability that is owned and integrated.
George Korizis is a Partner and Front Office Strategy & Transformation Leader at PwC, where he previously co-led a $1B+ business within PwC's Customer Transformation practice. Korizis brings more than two decades of large-scale transformation experience across the financial services, insurance, and consumer sectors, including 13 years at Accenture leading CRM and omnichannel consulting across North America. He has seen firsthand that the hardest part of enterprise AI has little to do with the technology itself and everything to do with how the problem is framed.
"The gap with AI adoption isn't technical, it's organizational. Most companies are still treating AI like a project instead of a capability that needs ownership and persistence inside the business," said Korizis. The result is a proliferation of sandboxed pilots that prove out a concept but never connect to a business outcome with a clear owner accountable for the result.
That "project" mindset is what drives the visibility mirage between the promise of a successful pilot and the reality of organizational impact. When AI initiatives solve isolated problems with no clear accountability for business outcomes, promising prototypes get stuck as IT experiments, disconnected from the broader organization and a cohesive enterprise strategy.
Forty pilots, no captain: "I'm working with a large bank that has 30 or 40 different AI pilots. We talked to their application development team, their retail team, and their cards business. All of them had pilots going," said Korizis. "When I asked who the actual owner was, the answer was a mix of IT teams and developers. They were all becoming IT projects with no business accountability; that's why they typically end nowhere. But if you treat AI like a capability with a clear owner, you move beyond prototyping." The pattern, he said, runs deeper than governance. Successful scaling starts when someone with real business accountability owns the AI capability as part of their core mandate, connecting the model's output to a measurable outcome they are responsible for delivering.
For the teams that escape the pilot phase, a harder challenge awaits: orchestration. Scaling AI means moving from building a model to integrating it into real operational workflows. That transition is where innovation runs into siloed priorities, non-standardized data, and technical debt. Friction can slow decision-making, as many leaders weigh the sunk costs of their legacy systems against the risk of falling behind. Korizis said the winning strategy is to invest capital in building AI capabilities that are unique to the business. He believes the most effective path is to pick a single high-value workflow, wire AI into it end-to-end, solve the orchestration problem once, and then scale horizontally, as sketched below.
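Korizis does not prescribe an implementation, but the shape of the advice, treating the model call as one step inside an orchestrated workflow rather than the whole system, can be sketched in code. The Python sketch below is purely illustrative: the functions fetch_case, classify_with_ai, and write_back are invented stand-ins for the systems a real deployment would integrate.

```python
# A minimal sketch of wiring an AI step into a single end-to-end workflow.
# All function names and data shapes here are hypothetical stand-ins.

def fetch_case(case_id: str) -> dict:
    """Pull the work item from the existing system of record (stubbed)."""
    return {"id": case_id, "notes": "customer reported a billing error"}

def classify_with_ai(case: dict) -> str:
    """Stand-in for a model call; a real deployment would invoke an LLM here."""
    return "billing_dispute" if "billing" in case["notes"] else "general_inquiry"

def write_back(case: dict, label: str) -> None:
    """Push the result into the downstream queue the business already uses."""
    print(f"case {case['id']} routed to {label} queue")

def run_workflow(case_id: str) -> None:
    # The orchestration layer is the hard part Korizis describes:
    # the model is one step in the chain, not the whole system.
    case = fetch_case(case_id)
    label = classify_with_ai(case)
    write_back(case, label)

run_workflow("C-1042")
```

Solving this wiring once for one high-value workflow is what, in Korizis's framing, makes the pattern repeatable across others.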
The messy middle: "The cost is what catches people off guard. The building part is easy. The real challenge is orchestrating it, wiring it into the existing structure, and then running it," said Korizis. He said the pattern holds across sectors, from high-tech and media to banking and fintech, and that consumer markets are next as AI begins to reshape how people discover and buy.
That flawed foundation and operational friction feed into a deeper problem: a trust deficit that shapes how willing organizations are to deploy AI at all. The problem is compounded by what Korizis described as a psychological double standard, where many organizations expect perfection from machines while accepting everyday human error. The solution is not to wait for flawless AI, but to build a framework that addresses the AI governance gap through active human verification.
Perfection paradox: "There's an expectation that an AI agent cannot make any mistakes, while a human agent can. Expecting the AI to be perfect while the human is flawed is a fundamental misunderstanding. We are holding AI to a higher standard," said Korizis. Building that confidence, he said, means designing verification into the workflow itself: low-risk outputs get spot-checked, while high-risk ones get full human review. The standard is not perfection, he said, but whether the AI-plus-human process outperforms the one it replaced.
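The article does not describe a specific mechanism, but the risk-tiered review Korizis outlines can be illustrated with a short sketch. Everything here is an assumption for illustration: the risk score is presumed to come from an upstream classifier, and the threshold and sampling rate are arbitrary.

```python
import random
from dataclasses import dataclass

# Illustrative values only; the article specifies no thresholds.
SPOT_CHECK_RATE = 0.05     # fraction of low-risk outputs sampled for review
HIGH_RISK_THRESHOLD = 0.7  # assumed cutoff from an upstream risk classifier

@dataclass
class ModelOutput:
    text: str
    risk_score: float

def route_for_review(output: ModelOutput) -> str:
    """Route an AI output to the verification lane its risk level warrants."""
    if output.risk_score >= HIGH_RISK_THRESHOLD:
        return "full_human_review"  # high-risk: every output gets reviewed
    if random.random() < SPOT_CHECK_RATE:
        return "spot_check"         # low-risk: a sampled audit
    return "auto_release"           # low-risk and unsampled: ship it

# A routine summary usually ships; a consequential decision never skips review.
print(route_for_review(ModelOutput("account summary", risk_score=0.2)))
print(route_for_review(ModelOutput("credit decision", risk_score=0.9)))
```

The measure of success in this framing is not a zero error rate but whether the combined process beats the baseline it replaced.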
Trust but verify: For Korizis, the greatest value of AI comes from combining it with human judgment and collaboration, a deliberate approach that is often missing when organizations adopt AI without a clear strategy. The goal is never purely AI or purely human; it is the combination of both that lets organizations streamline work while maintaining proper oversight and verification. "I don't believe that we should have just AI or just humans anymore, primarily because it's just not efficient, viable, or the best value prop for the company or the consumer. We should be using the best of both worlds to the best of our ability consistently," he said. What that combination requires, Korizis said, is keeping humans in the loop, not as a compliance gesture, but as the mechanism that makes AI deployment responsible, auditable, and genuinely useful to the business.
Another common result of a strategy-free approach is "AI sprawl," where companies accumulate dozens of generative AI tools without a clear sense of their utility. Korizis said simply opening the door and hoping innovation follows is not a strategy. He emphasized that even the most creative minds need constraints to produce great work, pointing to design school as proof that principles and boundaries are what channel creativity, not kill it. The dynamic creates a central tension for many leaders: how to impose discipline without stifling the bottom-up innovation from the "mad scientists" who drive true breakthroughs. The pace makes discipline and QA harder still: when four model releases are shipped in a two-week window, teams are increasingly delivering AI that has, in effect, written itself and been released into the wild, according to Korizis.
More tools, more problems: "AI sprawl is a strategy problem, not a technology problem. It happens when there is no thinking about utility. It's when leadership fails to question what each tool is for or why the company needs 15 different flavors," said Korizis. In practice, he said, that means deciding up front which tools are approved, what data is in scope, and where AI is simply not the answer. Giving teams clear lanes, not an open field, is what prevents sprawl from becoming unmanageable. The same logic applies to the pace of AI development itself: more releases, more options, and more pressure to keep up do not automatically translate into better outcomes.
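Korizis frames this as a leadership decision rather than an engineering one, but the "clear lanes" idea maps naturally onto a simple allowlist policy. The sketch below is hypothetical; the tool names, data classes, and prohibited uses are invented for illustration.

```python
# Hypothetical governance lanes; none of these names come from PwC or the article.
APPROVED_TOOLS = {"internal-copilot", "doc-summarizer"}
DATA_IN_SCOPE = {"public", "internal"}   # e.g. no "pii" or "restricted" classes
PROHIBITED_USES = {"legal-advice", "credit-decisioning"}

def is_request_allowed(tool: str, data_class: str, use_case: str) -> bool:
    """Gate an AI request against the approved lanes before it runs."""
    return (
        tool in APPROVED_TOOLS
        and data_class in DATA_IN_SCOPE
        and use_case not in PROHIBITED_USES
    )

# A sanctioned summarization job passes; the same tool on sensitive data does not.
print(is_request_allowed("doc-summarizer", "internal", "meeting-notes"))  # True
print(is_request_allowed("doc-summarizer", "pii", "meeting-notes"))       # False
```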
Slow down to go fast: For Korizis, the most disciplined response to accelerating AI development is also the least intuitive one. "You can have four model releases in two weeks. But as we accelerate, we will also have to decelerate," he said. Delivering responsible AI means making time for the auditability, explainability, and governance that accountable deployment requires.
Korizis brought the entire challenge back to a C-suite imperative that has less to do with technology than with finance. He recalled a chemicals company convinced it needed to build its own foundational AI. The question he put to them was simple: is building a foundational AI model really the highest use of your capital right now? For him, that question applies at every level of the AI investment decision, distinguishing between capabilities worth building in-house, tools worth buying, and initiatives worth ignoring entirely. In the end, he concluded that seizing the agentic AI advantage will come down less to who has the best models and more to who has the most disciplined financial strategy. "We need to allow for innovation in companies, but we also need to be very careful and disciplined about how we deploy capital. I think that will be the winning proposition: who deploys capital effectively and how they harness AI in their way of working," he said.