
Key Points
The surge in AI demand has exposed long-standing gaps in cloud, data, and infrastructure that continue to slow enterprise adoption.
Bogdan Muraru, a Field CTO at Cisco, said leaders first need to build a strong data culture and close skills gaps before any AI investment can succeed.
He outlined an approach that empowers the workforce through automation and guides organizations to design AI infrastructure with security, efficiency, and flexibility at the core.
The views and opinions expressed are those of Bogdan Muraru and do not represent the official policy or position of any organization.
Many enterprise leaders are trying to chart a clear path for AI adoption while building infrastructure that supports both autonomy and compliance. In the process, they're running into a familiar reality. The long-standing challenges of cloud migration, data governance, and infrastructure modernization never went away. The surge in demand for AI capabilities is now magnifying these unresolved issues and forcing organizations to accelerate technology roadmaps that were already under pressure.
Bogdan Muraru brings more than twenty years of hands-on experience shaping enterprise technology strategy across global organizations. Now a Field CTO at Cisco, he has held senior roles with companies such as GSK, WPP, and Ericsson. Muraru has guided enterprises through cloud modernization, ITSM transformation, and the architectural decisions that support secure, data-driven operations. In his view, leaders can’t pursue the promise of AI until they address a more fundamental, human-centric challenge.
"Before companies can actually make use of AI, they need to invest a lot more in creating the skills, the knowledge, and the capabilities for their organization to work with data and to put data to use," says Muraru. He explained that when an organization begins to see its data as a core strategic asset—its intellectual property—the need for data sovereignty can transform into a C-suite priority. From this perspective, establishing a foundational data culture is a prerequisite to purchasing any tools or deploying any models.
Getting it right: "The required culture is about having the right data at the right time for the right reasons, with the right people," explained Muraru. That readiness depends on a disciplined approach to both data and process. "You need to curate data; you need to understand what data you have. A key skill, which has been crucial for the last decade, is automation. That's where we need to keep investing, so we understand how to automate our processes and then how to use AI to accelerate that automation."
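To make that curation discipline concrete, here is a minimal Python sketch of one way "knowing what data you have" can be automated: a small dataset catalog that records an owner, a purpose, and a classification for each dataset, plus a check that flags entries whose review is overdue. The record fields and the 180-day threshold are illustrative assumptions, not a specific tool or schema Muraru described.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical catalog entry: the fields are illustrative, not a product schema.
@dataclass
class DatasetRecord:
    name: str
    owner: str                      # "the right people": an accountable owner per dataset
    purpose: str                    # "the right reasons": why the data is held and used
    classification: str             # e.g. "public", "internal", "restricted"
    last_reviewed: datetime
    tags: list[str] = field(default_factory=list)

def stale_entries(catalog: list[DatasetRecord], max_age_days: int = 180) -> list[DatasetRecord]:
    """Flag datasets whose curation review is overdue, so automation can route them to owners."""
    now = datetime.now(timezone.utc)
    return [d for d in catalog if (now - d.last_reviewed).days > max_age_days]

if __name__ == "__main__":
    catalog = [
        DatasetRecord("sales_orders", "finance-team", "revenue forecasting",
                      "internal", datetime(2024, 1, 10, tzinfo=timezone.utc)),
        DatasetRecord("support_tickets", "cx-team", "service-quality analytics",
                      "restricted", datetime(2025, 6, 1, tzinfo=timezone.utc)),
    ]
    for d in stale_entries(catalog):
        print(f"Review overdue: {d.name} (owner: {d.owner})")
```

Once such a catalog exists, the same automation that flags overdue reviews can feed downstream checks on access and usage, which is the step where AI begins to accelerate the process rather than replace it.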
Smarter, not bigger: It's a mindset that can help leaders address the skills gap with a more strategic and sustainable approach. The result is a virtuous cycle that empowers the existing workforce with AI tools that make them more productive, agile, and innovative. "I don't think we should fight the skills gap just by adding more and more people. I think we really need to understand how to put the tools to practice."
With the right culture and an empowered workforce, organizations can then implement a structured operating model to build their AI-ready infrastructure. Drawing on common challenges from previous technology cycles, Muraru offered a set of core principles that can offer a practical starting point for leaders looking to operationalize AI responsibly.
Security by design: "In the operating model, security and compliance need to be accommodated by design from the very beginning," he said. "We also need to ask how micro-segmentation would play a role when we either consume or build new models. The network and infrastructure have to employ architectures that respect encryption relevant for today, but also post-quantum encryption relevant for tomorrow."
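A deny-by-default flow policy is one simple way to picture what micro-segmentation means for AI workloads. The sketch below is a hypothetical illustration in Python: the segment names and allowed flows are assumptions, and real segmentation is enforced at the network and infrastructure layer rather than in application code.

```python
# Minimal illustration of a deny-by-default segmentation policy between workload groups.
# Segment names and rules are hypothetical examples only.
ALLOWED_FLOWS = {
    ("inference-api", "model-registry"): {"https"},
    ("training-cluster", "feature-store"): {"https"},
    ("inference-api", "feature-store"): {"https"},
}

def flow_permitted(src_segment: str, dst_segment: str, protocol: str) -> bool:
    """Deny by default: a flow is allowed only if it is explicitly listed."""
    return protocol in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

assert flow_permitted("inference-api", "model-registry", "https")
assert not flow_permitted("training-cluster", "model-registry", "https")  # not whitelisted
```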
Performance and efficiency: "Scalability and performance need to be brought to life in the operating model. When we design compute, storage, and networking resources, they have to scale efficiently and align with AI workload demands," urged Muraru. "We must also consider infrastructure optimizations for power, cooling, and cost efficiencies that were already a problem before, and are now even bigger." He stressed that this work must preserve high performance while continually checking that any remaining manual overhead is being reduced.
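As a rough illustration of aligning capacity with AI workload demand, the toy scaling rule below sizes a GPU pool so that average utilization approaches a target: under-utilized pools shrink (saving power and cost) and saturated pools grow (protecting performance). The thresholds and the proportional formula are assumptions for illustration only.

```python
def recommended_nodes(current_nodes: int, avg_gpu_util: float,
                      target_util: float = 0.7, max_nodes: int = 64) -> int:
    """Toy proportional scaling rule: size the pool so utilization approaches the target.

    The target, cap, and formula are illustrative assumptions, not a vendor recommendation."""
    if current_nodes <= 0:
        raise ValueError("current_nodes must be positive")
    desired = round(current_nodes * avg_gpu_util / target_util)
    return max(1, min(max_nodes, desired))

# An under-utilized pool shrinks (cost and power efficiency); a saturated pool grows (performance).
print(recommended_nodes(16, 0.35))  # -> 8
print(recommended_nodes(16, 0.95))  # -> 22
```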
Flexibility and choice: "Flexibility and choice are key. You can't be stuck. We've heard this over and over again with the cloud," he noted. "Being able to choose based on your own needs, for workloads you’re moving to sovereign infrastructure versus environments with unique compliance and operational demands, is critical."
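One way to picture "flexibility and choice" in practice is a placement rule that routes each workload by its data-residency and compliance needs. The environment names and criteria below are hypothetical, not a reference architecture.

```python
from dataclasses import dataclass

# Hypothetical placement rule: route workloads by data residency and compliance needs.
@dataclass
class Workload:
    name: str
    data_residency: str        # e.g. "eu", "us", or "any"
    regulated: bool            # subject to sector-specific compliance

def choose_environment(w: Workload) -> str:
    """Pick a target environment; names and criteria are illustrative only."""
    if w.regulated and w.data_residency != "any":
        return f"sovereign-cloud-{w.data_residency}"
    if w.data_residency != "any":
        return f"regional-cloud-{w.data_residency}"
    return "public-cloud"

print(choose_environment(Workload("claims-scoring", "eu", regulated=True)))      # sovereign-cloud-eu
print(choose_environment(Workload("marketing-copilot", "any", regulated=False))) # public-cloud
```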
For Muraru, these technical principles are incomplete without considering their human impact. He suggested the "why" behind this focus on AI infrastructure is just as important as the "how," since the ultimate goal is to ensure the technology is applied responsibly.
"It's our duty to make sure that not only does AI behave ethically, rationally, and fairly in the future, but also that we seek the right opportunities to bring the most to humanity, whether it's for things like dealing with diseases and finding new cures, all the way to improving quality of life for those who follow us," he concluded. It’s a perspective that connects the technical work of building these systems to the broader goals of preserving intellectual property and developing AI in a way that is rational, fair, and beneficial.


