Key Points

  • Rushed AI adoption without proper security is leaving businesses vulnerable to cyber threats, warns PayPal's Colleen Crane.
  • "Shadow AI" practices, where employees use unapproved tools, exacerbate security challenges for companies.
  • Crane advises implementing model-specific security frameworks and Zero Trust infrastructure by default.

Speed at the expense of security turns AI innovation into risk. In the rush to adopt new tools, many companies are discarding basic technology protocols and walking into a security minefield. We spoke with Colleen Crane, Business Security Engagement Officer at PayPal, who had a stark warning: the tools some businesses are chasing could be the very ones that take them down. The solution requires equal parts patience and organization-wide education: a commitment to secure-by-design AI infrastructure at every stage of implementation.

Crane said the only path forward is to stop treating AI as a series of haphazard experiments, and instead build a top-down corporate strategy with security at its core. "When you implement AI, you need to build the security into that plan early," she advised. "When people don’t, that’s when things start to get expensive and insecure."

  • Opening Pandora's box: "Many small-to-medium-sized companies are going to get crushed by this," said Crane. While large, regulated enterprises risk becoming competitively irrelevant by moving too slowly, smaller businesses face the opposite danger: adopting new AI tools at an alarming speed that outpaces their security resources. "We’re going to see companies become victims of the very technology they thought was going to save them. They're implementing tools to be more competitive, but they're unknowingly opening the door to a free-for-all for attackers."

Recent research from Viking Cloud found that as many as 1 in 5 SMBs could be forced out of business by cyberattacks. Much of the risk stems from trust without verification and a lack of insight into how underlying infrastructure works. Even well-established companies unknowingly inherit the entire responsibility of securing the technology from the inside out. Crane pointed to the rise of "shadow AI," where employees and departments deploy unvetted tools to innovate faster, bypassing security teams entirely. When data is handed so freely to external tools, organizations risk data leakage that opens them up to the increasingly frequent ransomware attacks targeting small businesses. This trend leaves already understaffed and perpetually reactive security leaders in an impossible position.

  • New infrastructure requires new rules: "People think they can just re-use old policies, but that’s not going to work." The main challenge stems from a new class of technical risks, as many AI applications are just "wrappers" around foundational models. "People assume the security is baked in, but when you use the API, you’re getting raw infrastructure," she cautioned. "The safety features are gone. You have to build them yourself, and most people don’t even know it."
  • Zero Trust, or full exposure: Nothing short of a model-specific security framework will cut it. "We need to have this kind of model-centric security. Threat modeling, for example, should be adapted for LLMs. We need to make Zero Trust a part of AI. The policies and controls you have in place now are not going to cover you, and thinking they will is the biggest blind spot companies have right now."
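The "build them yourself" point above can be made concrete. As a minimal sketch (not any vendor's actual API — `call_raw_model`, `guarded_call`, and the redaction patterns are all hypothetical illustrations), a team calling a raw foundation-model endpoint would need to add its own input and output guardrails, since none ship with the bare API:

```python
import re

# Hypothetical patterns for obviously sensitive tokens; a real deployment
# would use a vetted DLP/classification service, not two regexes.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bssn\b"),   # mentions of social security numbers
    re.compile(r"\b\d{16}\b"),    # 16-digit runs resembling card numbers
]

def call_raw_model(prompt: str) -> str:
    """Stand-in for a raw foundation-model API call (no built-in safety layer)."""
    return f"model output for: {prompt}"

def redact(text: str) -> str:
    """Mask sensitive tokens before data leaves (or re-enters) the organization."""
    for pattern in BLOCKED_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_call(prompt: str) -> str:
    """Wrap the raw API with the checks the underlying model does not provide."""
    safe_prompt = redact(prompt)            # input guardrail: scrub before sending
    response = call_raw_model(safe_prompt)  # the "raw infrastructure" call
    return redact(response)                 # output guardrail: scrub before use
```

The same wrapper is a natural place to enforce the Zero Trust posture Crane describes: every prompt and response is treated as untrusted and inspected, rather than assumed safe because it came from an approved tool.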
"When you go to implement AI, you need to build the security into that plan early. When people don’t, that’s when things start to get expensive and insecure."

Colleen Crane

Business Security Engagement Officer

PayPal

The issues are well-researched, with data showing the multi-layered risk structure unique to AI. Cybernews found surprising vulnerabilities even among the biggest mainstream LLM providers like OpenAI, while controversial newer players like DeepSeek have already experienced concerning breaches. More anecdotally, Anthropic ran a clever marketing-masked-as-self-reflection experiment showing AI's conversational capacity to deceive users at every turn. But no leader wants to hold back internal innovation by being overly skeptical.

  • IT's balancing act: Looking ahead, Crane predicted the challenge will only grow more complex. "In the next year or two, it’s going to look completely different as the technology advances. We're going to continue to see more and more threats that we can't even conceptualize right now." And while there’s no way to stop technological momentum, first principles preparation is key. "Companies need to start doing the basics correctly, or else the understandable desire to move quickly could unintentionally lead to their demise," said Crane. "The job of IT leaders is enablement, but we also have a responsibility to protect."