
Many companies’ rush to “bolt on” AI to existing workflows has risked creating internal chaos, undermining organizational wisdom.
Mei Lin Fung, Vice Chair of the UN AI for Good Impact Steering Committee, and a pioneer of early CRM systems at Oracle, argued that organizations must instead adopt a philosophy built for constant change.
She offered an adaptive How, What, Why framework based on the OODA loop to help organizations manage the friction between strategy and execution, using AI as a tool to debug organizational contradictions.
As companies adopt AI, many have taken a critical misstep: bolting AI onto surface-level workflows. The problem is that optimizing one visible process in isolation can sow chaos across the rest of the business, creating expensive messes and eroding the collective wisdom that holds an organization together. The solution lies in adopting enterprise-wide frameworks that align AI with strategy, operations, and continuous learning, ensuring innovation strengthens the system rather than destabilizing it.
Mei Lin Fung is Vice Chair of the UN AI for Good Impact Steering Committee. Her career spans from the trenches of enterprise software to the highest echelons of global policy: at Oracle she pioneered one of the first customer relationship management (CRM) systems, and she has since advised on G7 policy briefs and contributed to the UN Commission on the Status of Women's work on digital innovation. She said that the key to avoiding this misstep is to establish frameworks that turn AI from a chaos agent into a disciplined tool that moves the enterprise toward its goals.
"If you bolt AI onto what you can see, you optimize a tiny piece of the iceberg and risk creating chaos everywhere else," said Fung. She suggested organizations replace static, linear thinking with an operating philosophy built for constant change: the OODA loop, along with an adaptation of her own How, What, Why model.
Learn or lose: Credited to U.S. Air Force fighter pilot John Boyd, OODA (Observe, Orient, Decide, Act) is a guiding philosophy that helps enterprises adapt and innovate more effectively, something Fung said is vital. "It's a dogfight with millions of dogfighters," she said. "The only way to play is to constantly learn, innovate, forecast, and test through disciplined experiments. But you must never be tied to one person's view of the world, because that's the trap." It is this adaptability and constant learning that the OODA loop unlocks for enterprises.
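For readers who think in code, the OODA loop's core idea, observations continuously feeding back into the next decision, can be sketched in a few lines. This is an illustrative toy, not anything from Fung's talk; every name and number below is a hypothetical example.

```python
def ooda_cycle(observe, orient, decide, act, state, iterations=3):
    """Run repeated Observe-Orient-Decide-Act passes, feeding results back."""
    for _ in range(iterations):
        observation = observe(state)        # Observe: gather signals from the environment
        model = orient(observation, state)  # Orient: interpret signals against the current model
        action = decide(model)              # Decide: choose the next disciplined experiment
        state = act(action, state)          # Act: execute, updating state for the next pass
    return state

# Toy "dogfight": each pass closes half of the observed gap to a moving target.
final = ooda_cycle(
    observe=lambda s: s["target"] - s["position"],
    orient=lambda gap, s: {"gap": gap},
    decide=lambda m: m["gap"] / 2,  # act on half the gap, then re-observe
    act=lambda step, s: {**s, "position": s["position"] + step},
    state={"position": 0.0, "target": 10.0},
)
```

The point of the structure is that no step commits to a fixed world view: each pass re-observes before acting again, which is exactly the discipline Fung described.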
Fung took this further by combining the classic Strategy-Operations-Tactics model, a military framework designed to turn high-level vision into actionable results, with Doug Engelbart's ABC, a bootstrapping framework designed to accelerate innovation (where A is the work, B improves A, and C improves how you improve). The result was her How, What, Why model, which she used to translate the OODA framework into an architectural blueprint: a structure of nested feedback loops that allows an organization to adapt without fracturing. Her framework pinpoints the central node where strategy and execution collide, the "What," a junction she called "the crazy point," where contradictions surface.
The crazy point: "That middle 'What' layer is the crazy point because it's where all the contradictions become manifest. Strategy says go left and tactics say go right, and you have to reconcile it. Without organizing how you adapt to that feedback, you take the company in a hundred different directions with no reconciliation."
Organizational debugging: Fung's solution has been to manage this friction through a familiar IT process: organizational bug triage. The approach seeks to reframe AI’s role from a chaos agent into a disciplined tool for prioritizing fixes and stabilizing the system toward its goals. "Imagine having bug tracking at all of these feedback points. You could then use AI to analyze all the reported issues and help determine the right order to resolve them to have the biggest impact, making the system more stable."
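Fung's triage idea amounts to scoring reported contradictions and resolving the highest-impact ones first. The sketch below shows one way such a ranking could work; the issue fields, weights, and scoring heuristic are assumptions for illustration, not Fung's actual method.

```python
# Hypothetical "organizational bug tracker" entries: each issue is a
# contradiction reported at a feedback point in the How, What, Why loops.
issues = [
    {"id": "strategy-vs-tactics", "severity": 3, "teams_affected": 5, "effort": 2},
    {"id": "stale-customer-data", "severity": 2, "teams_affected": 8, "effort": 1},
    {"id": "duplicate-reporting", "severity": 1, "teams_affected": 2, "effort": 3},
]

def impact_score(issue):
    # Simple heuristic: severe, widespread, cheap-to-fix issues rank first.
    return issue["severity"] * issue["teams_affected"] / issue["effort"]

# Resolve in descending order of expected impact on system stability.
triage_order = sorted(issues, key=impact_score, reverse=True)
```

In practice, Fung's suggestion is that an AI model could replace the hand-written heuristic, learning from resolved issues which fixes actually stabilized the system.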
In Fung's view, the core of the problem is data governance. Adaptive loops and debugging systems are useless if the data flowing through them isn’t reliable. Industry analysts note that a lack of AI-ready data can be a primary cause of project failure, and that information governance has become "vital beyond simple risk reduction."
Bidirectional data flows: "When people put in fake or agenda-influenced data, how can you debug?" Fung said. "You can't if what's in the debugger is not the real thing that happened." Her solution is "bidirectional data flows," an insight from her work on the Federal Health Futures project. There, leaders found that top-down census data is useless if a citizen cannot correct their own out-of-date address from the bottom up. That logic points to the need for "data rights," which give people an incentive to become active stewards of data quality.
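The census example above can be made concrete with a minimal sketch of a record that flows top-down but accepts corrections bottom-up, restricted to the data subject and logged for auditability. The class and field names are hypothetical, chosen only to mirror Fung's address example.

```python
class CitizenRecord:
    """A top-down record that its subject can correct from the bottom up."""

    def __init__(self, citizen_id, address):
        self.citizen_id = citizen_id
        self.address = address   # top-down value, e.g. loaded from a census feed
        self.corrections = []    # bottom-up audit trail of (old, new) pairs

    def correct_address(self, submitted_by, new_address):
        # Data rights in miniature: only the data subject may correct this field.
        if submitted_by != self.citizen_id:
            raise PermissionError("only the data subject can correct this field")
        self.corrections.append((self.address, new_address))
        self.address = new_address

# Usage: the citizen fixes their own out-of-date address.
record = CitizenRecord("C-001", "12 Old Street")
record.correct_address("C-001", "98 New Avenue")
```

The audit trail matters as much as the permission check: it keeps "what's in the debugger" traceable to who changed it and when.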
Incentivizing integrity: The idea connects to the broader trend of data sovereignty, visible in platforms like IBM's "Sovereign Core" and in Europe's expanding ecosystem of initiatives like Gaia-X and providers offering sovereign clouds to meet regional data laws. "Data rights are the incentive," said Fung. "They empower people to keep the data accurate and to correct problems. Without that incentive, the data can be manipulated by anyone, taking the company off into somebody's private agenda."
Fung finished by expanding the argument from an enterprise decision to a national issue. She contrasted the U.S. model, which relies on market competition between tech giants to forge its AI "highways," with India's deliberate, top-down push to build digital public infrastructure. In Fung's view, India is executing its own version of the 1956 Federal Aid Highway Act for the AI era. Just as the U.S. once built a national interstate system to ensure every citizen and business could move freely, India is building "digital highways": publicly owned rails for identity, payments, and data exchange. The modern U.S. approach, she argued, has moved in the opposite direction.
"Instead of setting up a trail that allows everyone, including the open-source community, to find their way using publicly available maps, some countries are handing the keys to the tech giants," Fung explained. "The danger is that it no longer matters what the 'people' want or need. In those countries, the people get the roads that the tech companies can make money from." By allowing private profit to dictate the geography of the digital future, Fung believes the U.S. is bypassing the collective safety and economic dynamism that public infrastructure provides. "That's a big bet," she concluded. "It's an existential bet for the U.S. to make."
