
AI is now embedded end-to-end across the SDLC, shrinking engineering pods to as few as one person and placing a massive premium on polyglot engineers who understand design, product, and business context.
Ravi Evani, GVP and CTO at Publicis Sapient, said speed-to-deploy is the wrong KPI. Organizations need to measure token consumption, defect recidivism rates, and mean time to recovery to understand how well teams actually steer AI.
AI has made the building blocks of software cheap. The new CIO mandate is deciding what to build and preventing sprawl, while recognizing that smaller pods mean more parallel innovation, not workforce contraction.
AI-assisted development is not just accelerating how code gets written. It is restructuring how enterprise engineering teams operate, from how pods are organized to how performance is measured to who is accountable when things break. For CIOs still optimizing for speed-to-deploy, the shift demands a broader lens. The real question is how engineering teams structure context, oversight, and accountability in an AI-driven development cycle.
Ravi Evani is GVP, CTO, and Regional Engineering Lead for North America at Publicis Sapient, the digital business transformation company within Publicis Groupe. With more than two decades of experience building consumer-facing experiences and mission-critical distributed systems, Evani now leads a global engineering organization of people and AI agents. That vantage point gives him a direct view into how AI is reshaping the structural DNA of enterprise software teams.
"AI is no longer a tool," Evani said. "It's a first-class citizen in the development process, connecting context across design, engineering, and product management." The transformation Evani described goes well beyond coding copilots. Organizations getting ahead are connecting AI context across every part of the software development lifecycle: meetings, architectural guidelines, production logs, customer feedback, and company strategy. The result is a fundamental shift in how teams collaborate and execute.
Shared context, not siloed tools: "If you and I are engineers, it's not that we are working on our own agentic coding tools. Both of us have to be on the same tool which shares this entire context," Evani said. "Everything is connected and you have to bring all of that context together."
Pods of one: Engineering pods that once held seven or eight people are being compressed to one to three. The most effective configuration is a single engineer who understands the full picture: design, product, and business domain. "The best pods are pods of one," Evani said. "Engineers who were already full-stack are now even more fearless with AI. They're going into design, they're going into product management."
The premium on talent: That compression puts an enormous premium on polyglot engineers who can hold the full picture in their heads. "The premium of really strong engineers has gone up 50x now. In this new world, they can replace a lot more given this whole back and forth."
The speed gains, however, are not straightforward. Evani pointed to a pattern familiar to organizations scaling AI across the SDLC: faster code production creates new bottlenecks elsewhere.
The review bottleneck: "A junior engineer would now complete their PR in 30 minutes. But for a senior engineer to review that PR might take half a day to a day," Evani said. For mission-critical systems in financial services and other zero-defect environments, the time simply shifts.
Prompt thrashing: Evani described a pattern where engineers blindly paste error messages and ask AI to fix them without thinking through the problem. "All you're doing is thrashing, doing a really bad job of leveraging that abstraction."
New metrics for a new reality: "We don't measure productivity in terms of lines of code. We measure how many tokens were actually consumed for that particular PR across different engineers," Evani said. Beyond token consumption, his teams track defect recidivism rates and mean time to recovery, metrics that reveal how well an engineer steers AI rather than how fast they ship.
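The metrics Evani describes are straightforward to compute once PRs are annotated with their AI usage and defect history. The sketch below is purely illustrative, not Publicis Sapient's actual tooling: the `PRRecord` fields and sample numbers are assumptions, showing how tokens-per-PR and a defect recidivism rate might be derived from such records.

```python
from dataclasses import dataclass

@dataclass
class PRRecord:
    pr_id: str
    engineer: str
    tokens_consumed: int   # LLM tokens spent producing this PR (hypothetical field)
    defects_found: int     # defects traced back to this PR
    reopened_defects: int  # defects that recurred after being marked fixed

def tokens_per_pr(records: list[PRRecord]) -> dict[str, float]:
    """Average token consumption per PR, grouped by engineer."""
    totals: dict[str, int] = {}
    counts: dict[str, int] = {}
    for r in records:
        totals[r.engineer] = totals.get(r.engineer, 0) + r.tokens_consumed
        counts[r.engineer] = counts.get(r.engineer, 0) + 1
    return {e: totals[e] / counts[e] for e in totals}

def defect_recidivism_rate(records: list[PRRecord]) -> float:
    """Share of defects that came back after a supposed fix."""
    found = sum(r.defects_found for r in records)
    reopened = sum(r.reopened_defects for r in records)
    return reopened / found if found else 0.0

# Illustrative data only.
records = [
    PRRecord("PR-101", "alice", 12_000, 2, 1),
    PRRecord("PR-102", "alice", 8_000, 0, 0),
    PRRecord("PR-103", "bob", 40_000, 3, 2),
]
print(tokens_per_pr(records))           # {'alice': 10000.0, 'bob': 40000.0}
print(defect_recidivism_rate(records))  # 0.6
```

A high tokens-per-PR figure paired with a high recidivism rate is the signature of the "thrashing" pattern described above: many model calls, fixes that do not stick.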
That orientation toward eval-driven development changes the definition of a high-performing engineering organization. Evani's benchmark is whether AI has reached the critical systems. "If you're a travel business and you've touched your booking engine, you've reduced that to a smaller pod, brought in more context, and developed a good set of evals, then that is the target."
Evani closed with an analogy that reframes the CIO's challenge entirely. "If bricks were free, the problem becomes what houses you build, what skyscrapers you build, what towns you build," he said. AI has made the building blocks of software nearly free. The new risk is sprawl: a proliferation of small projects that burn money without driving business value.
But smaller pods do not mean fewer engineers. Evani expects overall engineering demand to grow as AI enables more parallel innovation and faster cycles. "Smaller teams don't mean fewer engineers. They mean more parallel innovation, faster cycles, and ultimately more opportunities to build."
