"I’ve got AWS and Google engineers at the table together on every development call, because we’re designing these systems to actually work in orchestration, not in silos."
Robert Fulk
CIO
State of Indiana SOS

Most government AI programs never make it past pilot because the moment multiple vendors, clouds, and agents enter the picture, the system starts to fragment. Indiana's Secretary of State's office took a different path, building a coordination layer that forced every provider to operate inside the same governed framework. From there, every workflow, contract, and citizen interaction was designed around that system, turning what is usually a stalled rollout into production AI across four divisions.

Robert Fulk is CIO for the Indiana Secretary of State, overseeing technology across elections, business services for 800,000 state-registered businesses, auto dealer and manufacturer regulation, and securities. He brings 28 years of private-sector experience spanning Ford, GM, Dow, thyssenkrupp, and multiple CEO and CTO roles at growth-stage companies. Recruited specifically to bring private-sector discipline into government, he said the first order of business was making sure every vendor, cloud provider, and internal team was working inside the same controlled framework.

"I've got AWS and Google engineers at the table together on every development call, because we're designing these systems to actually work in orchestration, not in silos," said Fulk. The architecture was designed around a coordination layer above both platforms, so AI agents could orchestrate across environments, rather than operate in isolation.

Fulk partnered with Google for big data and AI workloads and AWS for enterprise cloud systems, then required both providers to coordinate directly with private-sector implementers on every development call. That framework made three things possible that government AI programs typically cannot achieve: common standards enforced across every vendor, AI-assisted recovery of undocumented legacy systems, and decisioning agents built to orchestrate from day one.

  • One standard, enforced everywhere: Without a shared control layer, every vendor defaults to its own tools, models, and infrastructure, and the fragmentation compounds fast. "Without standards, it's impossible. You'd have 40 or 50 vendors going in different directions, some using Claude, some using whatever. I mandate to all of our vendors: here is the AI model we're using, here is the cloud and data model we're using. Everybody follows the same standards, same environment, same infrastructure."

  • Reverse-engineering what nobody documented: One of the earliest AI applications addressed a problem endemic in government: legacy systems with no documentation and no remaining staff who understood them. "We used AI to reverse-engineer the business rules, the database, the processes, and the code so you can understand what these things are. The knowledge is just gone," said Fulk. That capability turned undocumented technical debt into readable, actionable architecture.

  • Agents that make decisions: Beyond code analysis, Fulk deployed AI agents into licensing and review workflows where they performed document intake, evaluated whether applicants met requirements, and even corresponded with citizens about missing materials. "AI actually converses with the customer. It will go back and forth and say, 'you're missing this,' or 'this doesn't seem right.' The human being is still always in the loop at the end, but it's accelerating their work." The result freed staff from manual entry and review so they could focus on direct human interaction with citizens.

AI-assisted code generation compressed timelines dramatically. "Things that used to take six months, 12 months, or years to develop, you're putting development cycles on enterprise systems down to months," noted Fulk. The vendor management model was equally deliberate. He refused to pay for staff time. Every contract was built around tangible deliverables, with code check-ins and infrastructure state as the proof of progress.

  • Delivery or departure: That accountability structure extended to every vendor relationship. "I don't pay on staff time. I pay on deliverables. I want to see code, design, and approaches," said Fulk. "If they don't deliver, I will stop it. I will remove them and bring other vendors in. Failure is not an option." He also required the state to own all source code, ensuring vendor-agnostic control and protection against lock-in. The result was a vendor ecosystem where every workflow was visible, every output was verifiable, and no single partner could hold the state's systems hostage.

  • Evaluating beyond the bid: Fulk called out what he saw as a structural flaw in government procurement: evaluating vendors on paper rather than on capability. "Everybody says they're doing AI now. I'm like, really? You didn't even know what AI was two years ago and now you're an expert?" said Fulk. "I evaluate based on performance and delivery. I call their previous customers. I have engineers from Google, from AWS, and from my private-sector network evaluate their teams, their code, and their infrastructure." The same discipline that governed his orchestration architecture governed his vendor selection. Capability had to be demonstrated, not claimed.

  • Citizens first, systems second: Yet none of the orchestration work was designed in a vacuum. "Do you really know what the customer needs? We have 1.2 million users on our business services system. So we spent eight months doing comprehensive CX research," said Fulk. "We used that to drive the vendors and technology from the front end." That research shaped how services were rebuilt, how interfaces were designed, and which AI interactions citizens would encounter. Government rarely starts here, but the state's AI policy framework gave him the mandate to do so.

The model is replicable, but it requires the same discipline applied to people that Fulk applied to vendors and systems. He changed staff, brought in private-sector evaluators, and insisted on engineers who could assess vendor claims independently. "Government doesn't know what they don't know. Vendors take advantage of that," said Fulk. "You have to have people who understand all of the above in government to really evaluate them. And if you don't have them, bring partners to the table who aren't vested in winning the project but can help you evaluate others."

For Fulk, the orchestration model, the vendor accountability structure, and the citizen-experience research all point to the same conclusion. "We're not using AI for the sake of AI," he said. "We're using it to reduce cost, improve services, and actually deliver measurable value to citizens."