
Key Points
Enterprise AI stalls when pilots survive on technical success or executive enthusiasm alone, draining budget and talent without ever proving they deserve to scale.
Udit Pahwa, Chief Information Officer at Blue Star Limited, explained why disciplined decision-making matters more than model sophistication in separating real progress from pilot purgatory.
Mature organizations shut down projects that fail ROI tests and advance only AI that removes real business constraints, integrates into core workflows, and delivers measurable value.
Most enterprises can launch an AI pilot. What separates mature organizations from the rest is the willingness to shut one down. As generative AI moves into core operations, CIOs are finding that the challenge is no longer technical feasibility, but deciding which initiatives actually deserve to scale.
Udit Pahwa is Chief Information Officer at Blue Star Limited, a multinational company best known for its engineering, cooling, and infrastructure businesses. A Chartered Accountant by training and a CIO-100 Awardee, Pahwa has spent his career scaling enterprise systems with a clear focus on financial outcomes. His experience has shaped a view of AI that starts with profitability, not experimentation. At the center of Pahwa’s strategy is a counterintuitive belief: progress requires a tolerance for failure.
"To foster innovation, you need to have an appetite to accept failure. If you need to innovate, you have to be okay with failing, learn from it, and deliver faster," said Pahwa. He believes that too many AI pilots survive on executive enthusiasm or novelty rather than measurable business impact, consuming budget and talent without ever earning the right to scale. The result is pilot purgatory, a growing state of organizational inertia.
Shut it down: In an era of innovation theater, abandoning technically impressive work has become a marker of real leadership. "A tolerance for failure creates space for better judgment, because the hardest decision is knowing when a project no longer serves the business," Pahwa explained. He pointed to a predictive maintenance pilot as proof. "The pilot was a technical success, and we built an algorithm that could predict equipment failure two to four weeks in advance. But when we looked at deploying it at scale, the economics did not hold, the ROI was not favorable, and the monetization was not there, so we shut it down."
Betting on promise: The same discipline applies to newer generative development tools. Early promise alone does not justify continued investment without clear downstream value. "We experimented with a developer GPT where you feed a specification document in and it builds out the wireframes," Pahwa recalled. "While it is positioned as a feature that can cut development time, we found it is currently useful only for building wireframes. The technology still has to evolve, and we decided to put it on hold."
So what makes the cut? Projects advance only when they remove a real business constraint. Pahwa highlighted two examples that met that standard, integrating cleanly into core workflows and delivering lasting operational value.
The one-stop chatbot: "We upgraded what was initially a basic bot into a conversational generative AI chatbot," Pahwa said. "Employees can go to one place to check entitlements, apply for leave, or ask questions about their employment." The system works because it sits on top of existing infrastructure rather than alongside it. The chatbot is connected to payroll and time-management systems and supported by an LLM trained on internal HR policies and compensation data. By consolidating multiple touchpoints into a single interface, the tool reduces friction for employees while lowering the operational burden on HR teams.
Value that scales: Tender management made the cut because it removed a real business constraint. "Our project business runs on the submission of tender documents, which on average are about 1,000 pages, and we built a summarization platform using generative AI that brought response times down from weeks to just days." By compressing timelines and reducing dependence on a small group of deeply experienced specialists whose expertise was difficult to scale or retain, the system delivered durable operational value rather than a one-off technical win.
Secure from the start: Pahwa manages innovation through a balance of process and culture, using structure to enforce discipline and norms to keep momentum from stalling. "Our security-by-design model means the CISO is part of the build design team, embedding a cybersecurity perspective into the process from the very beginning," he explained. That control is reinforced through a monthly security council that shares real incidents and best practices, turning governance from a gate into a habit and ensuring that security scales with experimentation rather than slowing it down.
Looking ahead, Pahwa framed agentic AI as a governance test, not a technology leap. As autonomous digital agents begin to operate like employees, the question shifts from capability to accountability, and the same discipline that governs today’s AI pilots will determine whether these systems create value or risk. "As we create an 'e-employee,' we're facing a new challenge of establishing clear lines of accountability," he concluded. "We need to define who the reporting manager is, who acts as the supervisor to correct wrong decisions, and who ultimately owns the outcomes."
