• Organizations that rushed into AI without ROI frameworks are now in a correction phase, forced to measure whether deployments deliver real business value or just the appearance of progress.

  • Ricardo Bastos, Cybersecurity Engineering Manager at TELUS, explained that AI does not fix weak foundations, and that unresolved gaps in risk ownership, process documentation, and cross-functional alignment become exponentially harder to manage as agents and automation scale.

  • He argued that legacy is built through courageous conversations, enterprise-wide literacy, and measurable outcomes, not through the volume of AI tools deployed.

The companies that sprinted fastest into AI are now pausing to ask what it truly delivered. After a surge of adoption fueled more by competitive urgency than strategic intent, a recalibration is taking hold. Leaders are scrutinizing their AI portfolios and discovering that many deployments lack clear, measurable return. For security executives, the implications run deeper. AI layers new forms of risk onto environments where governance, ownership, and core controls were never fully matured, forcing long-deferred accountability conversations to the surface.

Ricardo Bastos is a Cybersecurity Engineering Manager at TELUS. A CISSP, CISM, and CCSP-certified security professional with over two decades of IT experience, Bastos previously led NIST CSF assessments for critical infrastructure clients at EY Canada and managed global ransomware recovery across 5,000 assets in seven countries at Sierra Wireless. His career has centered on building security programs aligned to governance frameworks that translate technical risk into executive decision-making.

"Security risks are business risks. If we're not at the table understanding revenue streams and critical processes, we can't build controls that truly protect the business," Bastos said. That conviction shaped his view that the current AI correction is not a technology problem. It is a strategy and accountability problem that predates generative AI entirely and now grows faster with every new deployment.

  • Correction, not collapse: Bastos described a market recalibrating after years of unstructured adoption. "Almost every company jumped into AI, but without clear expectations or a framework on return on investment," he said. "Now I see a correction. Companies are trying to study whether they are actually gaining something from AI, or if it is just something they say they have but don't know how to measure." That correction is healthy, he argued, but only if it leads to real accountability for outcomes rather than another round of rebranded pilots.

  • Foundations first: The deeper issue is that AI amplifies structural weaknesses that were never resolved. "AI doesn't fix weak foundations. If you don't have clear ownership, accountability, and defined processes, AI just makes the existing gaps exponentially bigger," Bastos warned. He pointed to agentic identities as a concrete example. "We've been working with identity management for maybe twenty years, and now we need to handle agentic identities, which is a completely different game. Who owns the agents? Is it the developer? Security? GRC? If you don't have the basics well defined, how do you manage this completely different reality?"

"Security risks are business risks. If we're not at the table understanding revenue streams and critical processes, we can't build controls that truly protect the business."

Ricardo Bastos

Cybersecurity Engineering Manager
TELUS

That ownership vacuum extends beyond agents. Bastos described a broader culture where leaders avoid signing off on risk, even when the consequences of inaction are clear. He framed this not as a new problem introduced by AI, but as an old accountability deficit that AI has made impossible to ignore.

  • Resilience over prevention: Bastos argued that the keyword for security leadership in 2026 is resilience, not prevention. "We should not be thinking about if, but when," he said. "Can your business withstand a ransomware attack or something of large proportion?" That question can only be answered by security teams who understand which business processes are critical, how revenue flows, and where the tolerance for disruption ends. "We can't create good controls if we don't understand the business aspect," he added, reinforcing why security needs a permanent seat in strategic planning, not just incident response.

  • Literacy as infrastructure: Bastos pushed for AI training that starts at the organizational level and moves into function-specific depth. The goal is not just competence but defense against shadow AI and the risks that come when employees use tools without understanding the exposure. "Sometimes security folks focus too much on the technical aspects and don't focus enough on educating the base," he said. He favored workshops, lunch-and-learns, and business storytelling over technical jargon. "Nobody cares about threat actors or state-sponsored attacks when you're talking with business teams. We need to talk about the challenges we're facing and how we protect their workloads while giving them the tools to do their jobs."

The measurement question is where legacy becomes tangible. Bastos pushed back on vanity metrics and volume-based AI reporting, arguing that the number of agents deployed or workloads running tells leadership nothing about impact. "You need to bring actual numbers," he said. "How many hours did you save this week? How did AI help you improve a specific process? These are the things you can measure that actually create impact."

That insistence on honest measurement tied back to his broader argument about what defines leadership in this period. "It all boils down to conversation," Bastos concluded. "A lot of leaders don't have the courage to engage in those tough discussions, but without honest conversations about ownership and risk, you can't build something that actually works for the business."

*The opinions expressed in this article are those of Ricardo Bastos and do not reflect the official positions of any organization.