"Most governance models are still focused on what’s coming in the front door. But the real risk lives in the vendors already in the environment, quietly changing the attack surface through routine updates."
Ed Gaudet
CEO and Founder
Censinet

Many healthcare leaders think AI risk starts at procurement. Increasingly, it shows up after deployment. Major software vendors are quietly embedding always-on AI into routine updates, often with little notice or control. In healthcare, that shift creates new exposure paths for protected health information (PHI) and sensitive business data, as AI capabilities are activated inside tools long after they’ve been approved.

Ed Gaudet brings deep experience to this emerging risk. As CEO and Founder of Censinet, a collaborative cloud platform for healthcare third-party risk management, he draws on more than 30 years of experience building enterprise software, including multiple IPOs and acquisitions. Through his work and his Risk Never Sleeps podcast, Gaudet focuses on how risk quietly accumulates across modern healthcare environments.

According to him, the industry needs to reset its expectations by demanding a new architectural model. "Companies have to require that third parties build AI securely by design and secure it by default," said Gaudet. He proposes an opt-in approach, where AI is built in but remains dormant until a customer chooses to activate it, returning control to the healthcare organization.

  • The risk is coming from inside the house: The AI threat doesn’t introduce a new problem so much as it speeds up one that was already getting out of control. "Before AI, healthcare organizations were already struggling with scale," Gaudet noted. "The average hospital is running on well over a thousand third-party products, and that alone makes static, point-in-time risk assessments ineffective." That scale creates a leadership blind spot. "Most governance models are still focused on what’s coming in the front door. But the real risk lives in the vendors already in the environment, quietly changing the attack surface through routine updates."

  • Flying blind: AI turns that quiet change into something more volatile. "When vendors can add new models or capabilities every few weeks, a SOC 2 report from last year stops being proof of anything," he continued. "The product you approved is no longer the product you’re running, and most organizations don’t have visibility into how much has changed."

  • Lawyering up for AI: Just as important, Gaudet stressed that legal and contractual agreements must be updated to hold vendors accountable for responsible AI. "You must review your contractual agreements with third parties. If they've added AI, do your contracts need to be updated accordingly? You have to examine your limited liability and your overall coverage as it relates to cyber insurance."