"This is what our users want and need, and we need to look internally and figure out how we can provide the tools that they need so they don't go outside of what's provisioned."
Ari Harrison
VP, Information Technology
BAMKO

When employees reach for unsanctioned AI tools, they are signaling unmet demand, and in the age of generative AI, that signal carries real stakes. The barrier to entry with consumer LLMs is effectively zero, meaning sensitive corporate data can reach an unprovisioned model in a single click. Most organizations respond by tightening controls, restricting access, and treating the behavior as a problem to suppress. Others read it as a roadmap. The organizations moving fastest on AI adoption have opened a better door instead, pairing it with training built around how people actually work. Provisioning safer, sanctioned options and stepping back to let peers teach each other has pushed adoption well beyond typical benchmarks.

For more than a decade, Ari Harrison has built IT organizations from the infrastructure up. Now VP of Information Technology at BAMKO, a promotional products company and subsidiary of NASDAQ-listed SGC, he serves as divisional CIO overseeing cloud strategy, cybersecurity, and automation across global operations. He previously served as an IT service delivery manager at Silicon East Inc., where he developed the operational instincts and security practices he now applies at enterprise scale. Harrison believes the surest way to govern AI adoption is to get in front of it.

"This is what our users want and need, and we need to look internally and figure out how we can provide the tools that they need so they don't go outside of what's provisioned," said Harrison. For BAMKO, that was as much a leadership decision as a technical one.

Harrison's starting point was a straightforward exposure: one click separated any employee from feeding sensitive corporate data into an unprovisioned LLM. Rather than treating that as a behavior problem to suppress, his team reframed it as a demand signal and a window into the attack surface growing beneath the organization. The response was to provision something better: a company-wide rollout of Google Workspace and Gemini Pro, governed by the same API controls BAMKO already applied to traditional software.

  • If you can't beat 'em: "The barrier to entry with these free tools is basically zero," Harrison said. "You can feed huge amounts of data into an unprovisioned LLM and put the company at risk." Provisioning enterprise-grade alternatives gave employees what they were already looking for, while keeping sensitive data inside governed systems.

  • Same rules, new toys: Beyond rolling out new tools, Harrison's team made sure employees knew the capabilities were already built into the software they used every day. "We're not allowing users to connect different pieces of software arbitrarily with our systems," he noted, describing an approach that pairs access controls with active internal championing of sanctioned tools.

Provisioning the tools was the easier half of the problem. BAMKO's workforce spans logistics, kitting, and international vendor management across multiple geographies. Early training sessions fell flat because employees wanted an answer to one question: how does this help me do my specific job? Harrison's response was to bring in professional trainers and rebuild the program around department-specific use cases, bridging the guidance gap between IT and the business functions actually using the tools. The result: AI adoption and innovation rates of 70 to 80 percent in some departments, driven largely by non-technical staff and peer-to-peer learning.

  • Keyboard warriors: To teach global users how to handle difficult customer interactions, Harrison turned training into a game. "We actually held an internal training where I sent all of the people in the training an email that was simulating an irate customer," he said. "We held a contest. Whoever could come back with the best response wins."

  • Accidental engineers: Giving people the right tools allowed everyday employees to become builders. "They're creating images to show clients mockups and building applications for calculating taxes when we're shipping internationally," he added. "My mind's actually been blown by some of the stuff that those so-called citizen developers have done."

But broad access creates broad exposure, requiring a rock-solid security foundation. The more AI touches day-to-day work, the more business context flows through systems that need to be governed. Harrison's governing priority is straightforward: protect the data, not the perimeter. A layered approach to controls lets the workforce innovate more safely without the organization losing sight of what is happening to its data.

  • Protect the payload: "What we're ultimately trying to protect in the cybersecurity world is our data," Harrison explained. "These tools potentially have access to so much of our data, so much of our business context, and we really need to focus on protecting that." The risk, as Harrison framed it, is not to systems or hardware but to the business context those systems carry.

  • Basics at scale: "It's a layered approach," he said. "It's role-based access, the principles of least privilege. It's all of the stuff that we know. But at a far greater scale with newer tools, and the stakes are a lot higher." At BAMKO, that stack runs from user education and API guardrails through to endpoint detection and data security posture management (DSPM) tooling.
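Harrison's layered stack is organizational as much as technical, but the deny-by-default pattern behind role-based access and least privilege can be sketched in a few lines. The roles and permissions below are hypothetical illustrations, not BAMKO's actual configuration:

```python
# Minimal sketch of role-based access with least-privilege defaults.
# Role names and permission strings are hypothetical examples.

# Each role is granted only the smallest set of permissions it needs.
ROLE_PERMISSIONS = {
    "sales": {"crm:read", "ai:prompt"},
    "logistics": {"shipments:read", "shipments:write", "ai:prompt"},
    "it_admin": {"crm:read", "shipments:read", "users:manage", "ai:prompt"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Deny by default: unknown roles or unlisted permissions get no access."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A sanctioned AI tool gated by the same check only receives the
# business context a user's role already permits.
print(is_allowed("sales", "ai:prompt"))        # True
print(is_allowed("sales", "shipments:write"))  # False
print(is_allowed("contractor", "crm:read"))    # False: unknown role, denied
```

The design choice worth noting is the default in the last line: an unrecognized role falls through to an empty permission set rather than an error or an allow, which is the least-privilege posture Harrison describes applied at the code level.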

Harrison's security approach extends beyond controls into culture. At BAMKO, the goal is an environment where employees feel safe enough to admit mistakes, because a workforce that hides errors is harder to protect than one that surfaces them. "We make sure that if someone does something they're not supposed to, they are willing to come to us and explain what they did," he said. "Otherwise it becomes a game of hide and seek."

To get there, BAMKO gamifies its security awareness program: a rotating leaderboard tracked by department one quarter, by country the next, with badges awarded for reporting phishing attempts and completing training. The results surprised even Harrison. "You get those fake badges, and something about it is just so fun," he said, noting that users now message him directly when simulations almost fool them. "Even if you click on something in a phishing simulation, it's not punitive. We use it as a teaching moment. You still get those stars, you just didn't pass the phishing test."