Key Points

  • A new survey revealed widespread use of unvetted "shadow AI" by healthcare workers, creating significant risks for patient safety and data privacy.

  • The report found that 40% of staff are aware of the practice, with nearly one in five admitting to using such tools to improve workflow efficiency.

  • The trend highlights a failure in internal governance, with a significant gap between the creation of AI policies by administrators and awareness among frontline staff.

A new Wolters Kluwer Health survey revealed that healthcare workers are widely using unvetted "shadow AI" tools, creating significant patient safety and data privacy risks, as first reported by Healthcare Dive. The report found that 40% of staff are aware of the practice, and nearly one in five admit to using the tools themselves.

  • A need for speed: The primary driver is a hunt for efficiency. Over half of users pointed to a need for faster workflows, while others cited access to better functionality than their hospital-approved software provides.

  • Direct line to danger: This experimentation goes far beyond summarizing notes: one in ten workers admitted to using a shadow AI tool for direct patient care. Without proper vetting, the practice opens the door to diagnostic errors and data breaches, and patient safety was the top concern cited by respondents.

  • Policy disconnect: The trend highlights a failure of internal governance. The data reveals a significant gap: hospital administrators are three times more likely to develop AI policies, yet providers on the front lines are often unaware that such rules even exist.

The findings put the onus on health system leaders to close the gap between policy and practice. As a Wolters Kluwer executive noted in a press release, this is a governance issue that requires clear guidelines and the deployment of validated, secure AI tools for clinical care.

The survey lands amid a tense environment for healthcare IT. Other reports show that healthcare data breaches have doubled, while the nonprofit ECRI has separately named the use of AI in medicine a top technology hazard. Meanwhile, the surge in unsanctioned tool usage is also being linked to widespread staff burnout as overworked clinicians seek any available relief.