Key Points

  • Many organizations view AI adoption as a tradeoff between speed and safety, creating unnecessary friction in their innovation efforts.

  • Indeed's CIO & CSO outlined how embedding trust from the outset enabled the company to align innovation and governance, allowing speed through safety.

  • He presented a structured framework that integrates shared governance, automated safeguards, and evolving human skills to scale AI responsibly.

The rush to bring AI into business has many companies caught between two less-than-ideal options: move fast and risk mistakes, or move slow and miss the moment. But what if it's a false choice? The smarter path blends innovation and risk management from the start, built on a simple truth: trust is not a brake but the engine that makes real speed possible.

That's how Anthony Moisant sees it. As CIO and CSO at Indeed, he oversees both innovation and security for one of the world’s largest hiring platforms, serving more than 300 million job seekers each month. A U.S. Navy veteran with leadership experience at Glassdoor and Tribune Media, Moisant brings a rare perspective to the challenge of balancing safety with speed, turning what is often a point of friction into forward motion.

"When you treat trust as a built-in strategic advantage, instead of a defensive function, you move faster, not slower. Embedding security, privacy, and responsible AI from day one removes the bottlenecks later. It means innovation and governance scale together, letting speed come from safety rather than in spite of it," said Moisant.

  • Mesh connectivity: To turn this philosophy into practice, thoughtful adoption relies on a governance model that functions as a business enabler. For Moisant, it starts with weaving together IT, security, legal, and business teams in the decision-making process. "This cannot be just an IT or security play; it has to include the rest of the business. When we build governance structures, every function has a voice, creating mesh connectivity across the organization. That balance between governance and operations keeps us fast without adding unnecessary risk," he said.

  • Paving the golden path: His framework treats the usual guidelines versus guardrails debate as a sequence rather than a choice. It begins with clear guidelines that translate trust into everyday action, followed by technical guardrails that automate safety and allow the business to move quickly without added risk. "Guidelines are often faster than guardrails, so we start by defining the philosophies of trust and translating them into practice," Moisant explained. "In some cases, like PII redaction through our LLM proxy, we built guardrails quickly to keep pace with the business. Those guardrails then became the blueprint for what we call 'golden pathways,' which simplify how teams use AI while keeping it as safe as possible."
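The PII-redaction guardrail Moisant mentions can be pictured as a filter that every prompt passes through before it leaves the proxy for an external model. The sketch below is purely illustrative: the patterns, function names, and placeholder format are assumptions, not details of Indeed's actual LLM proxy.

```python
import re

# Illustrative PII patterns; a production guardrail would use far more
# robust detection than these simple regular expressions.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def proxy_request(prompt: str, send_to_model) -> str:
    """Guardrail wrapper: every prompt is redacted before it is
    forwarded to the model, so teams get safety by default."""
    return send_to_model(redact_pii(prompt))
```

For example, `redact_pii("Contact jane@example.com or 555-123-4567")` returns `"Contact [EMAIL] or [PHONE]"`. The point of the "golden pathway" idea is that teams call something like `proxy_request` instead of the model directly, so the safeguard is automatic rather than optional.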


The meticulous approach stems from the specific risks of deploying AI in a domain as sensitive as hiring. In a field that touches on identity and personal life circumstances, the challenges are well defined. Moisant identified three primary risks: privacy exposure, the threat of adversarial attacks that demand a zero-trust lens, and the ethical question of fairness in matching models: Are you opening a door or are you closing one?

  • A higher standard: "Our AI systems now aggregate vast amounts of sensitive data, which means the blast radius if something goes wrong is greater than ever before," cautioned Moisant. It's a reminder that the power of these systems is matched only by the responsibility to use them wisely. "That reality defines our North Star: every use of AI must expand opportunity, not restrict it, and that can only happen when trust is built in from the very beginning."

  • From hours to revenue: With a framework for risk in place, the conversation naturally turns to generating value. In a market awash with conflicting reports on AI's return on investment, Moisant bypassed the debate by focusing on what he calls "precursors" to ROI. He measures this with tangible impacts, such as rising activity metrics and a 30% reduction in trust and safety moderation time, which build upon widespread employee time savings. "We're seeing case volumes decline for tasks previously handled by humans, and that is allowing us to dedicate those people to higher-value opportunities. We're already seeing revenue from these redeployments."

But the path from prototype to production has its own hurdles. According to Moisant, success hinges on clearing a few key challenges. Even after addressing technical prerequisites like clean data and simplified processes, the biggest obstacle is often human. Overcoming employee change fatigue requires a deliberate focus on building internal trust, recognizing that people's standards for machine error are far higher than for human error.

  • When machines falter: "Putting AI on top of a broken workflow just accelerates dysfunction. We should start with old-school process simplification and always ask: Could we eliminate this process entirely with the AI we have now?" Streamlining processes is one thing, but helping people feel confident in the systems that replace them is where transformation really succeeds. "People have little empathy when a machine gets it wrong, but a lot of empathy when a human gets it wrong. That's an opportunity to increase transparency and build employees' trust in these new systems," Moisant said.

In the future he envisions, human challenges related to AI integration will matter just as much as the technical ones. He imagines teams that are builders by nature, developing T-shaped skill sets that blend deep expertise with broad creative problem-solving. In his view, the next era of work will value distinctly human strengths—flexibility, empathy, and clear communication—as the foundation for a new, higher-order role that goes well beyond simple prompting. "Prompt engineering was the start, but agents will not run themselves. An agent is made up of dozens of complex prompts, and we are going to need many more people capable of that orchestration across the business."