Key Points

  • A grey box approach to enterprise AI offers a balance between complexity and clarity, improving trust and usability.

  • Greg Makowski of Ccube discusses the need for AI systems to undergo the same validation as human or code-based systems.

Enterprise AI has a branding problem. It’s seen as a black box—opaque, unexplainable, and risky. In reality, most systems today are grey boxes: transparent enough to build with, trust, and refine.

Greg Makowski, Chief Data Scientist at Ccube, sees the “black box” narrative as outdated. With decades of experience in AI and machine learning, he’s focused on designing systems that are not just powerful, but explainable grey boxes built for clarity and real business outcomes.

Inner workings: Makowski is candid about the current limits of comprehension, but firm on AI's accessibility: "I won't say we understand it completely. We don't understand the human brain completely either. But there is a degree, that's why I call it a grey box. It's not a black box that we know nothing about." This grey box perspective is crucial, he suggests, because "if you understand how things are working, then you're able to architect a reliable solution."

A complete understanding of how AI works isn't necessary to use it effectively, and a practical one is well within reach for most. “I fully understand how a bike works, but that doesn't stop me from driving a car to work,” Makowski says. “A person may understand the 'bike level of detail' on how a car works, which can be enough. The same goes for AI systems.”

Fact or fiction: A common concern with AI is its potential for hallucinations. "It doesn't know if you're generating fiction or non-fiction," Makowski notes; a model trained on text scraped from the internet finds no inherent tags marking what is factual. To counter this, he champions techniques like Retrieval Augmented Generation (RAG), which grounds AI responses in an organization's factual data.
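The RAG pattern Makowski describes can be sketched in a few lines. This is purely illustrative: the keyword-overlap retrieval, the document snippets, and the function names are all invented stand-ins; production systems use vector embeddings and an actual LLM call.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query (illustrative only)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query, documents):
    """Prepend retrieved facts so the model answers from them, not from memory."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using ONLY the facts below. If they are insufficient, say so.\n"
            f"Facts:\n{context}\nQuestion: {query}")

docs = [
    "Refunds are processed within 14 business days.",
    "Support is available weekdays 9am-5pm.",
    "The warranty covers manufacturing defects for two years.",
]
print(build_grounded_prompt("How long do refunds take?", docs))
```

The key design choice is that the model is instructed to answer only from retrieved organizational facts, which is what keeps a grounded system from inventing fiction.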

This engineered reliability underpins trust. Makowski insists that AI systems can be held to familiar standards: "I expect an LLM agent system to go through the same validation process as a human system or a code-based system. The same proof points work, and if my LLM agent system passes it, then why would you not have trust?" he questions.

"I won't say we understand it completely. We don't understand the human brain completely either. But there is a degree, that's why I call it a grey box. It's not a black box that we know nothing about."

Greg Makowski

Chief Data Scientist

Ccube

The AI apprentice: Once understood and validated, AI can function as a powerful, supervised assistant. "The AI is still like a glorified intern," Makowski says. "But if you can build it so that the intern can ask questions when it doesn't know the answer, then that's where you get very useful systems."

The goal is support, not full autonomy. In Makowski’s view, the best systems don’t replace human judgment; they enhance it. "It's augmented human intelligence. AI automates some of the grunt work, but it's able to explain it so humans have a chance to review and either authorize or reject," he explains.
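The "glorified intern" pattern combines two rules from the passages above: escalate instead of guessing when unsure, and put a human sign-off on every action. A minimal sketch, with invented names, confidence values, and a hypothetical `approve` callback standing in for the human reviewer:

```python
def handle(proposal, confidence, approve):
    """Route a drafted action: ask for help when unsure, else seek human sign-off."""
    if confidence < 0.8:
        # The "intern" asks a question rather than acting on a guess.
        return "escalated"
    # The AI explains its work; the human authorizes or rejects it.
    return "authorized" if approve(proposal) else "rejected"

always_yes = lambda proposal: True

print(handle({"action": "refund order", "why": "item arrived damaged"}, 0.95, always_yes))
print(handle({"action": "close account", "why": "unclear request"}, 0.40, always_yes))
```

Note that nothing executes without passing through `approve`: the system augments human judgment rather than replacing it.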

Business first: Adopting grey box AI isn't about technology for its own sake, but about tangible outcomes. Makowski likens well-architected AI workflows to a Gantt chart, with clearly defined tasks, inputs, outputs, and roles, ensuring a reliable system.
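The Gantt-chart analogy can be made concrete: declare each step with explicit inputs, outputs, and a role, then check the wiring before anything runs. The task names, roles, and `check_wiring` helper below are all hypothetical, a sketch of the idea rather than any specific Ccube tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    role: str            # who (or what) performs the step
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

workflow = [
    Task("extract invoices", "OCR service", ["pdf_batch"], ["raw_text"]),
    Task("classify line items", "LLM agent", ["raw_text"], ["labeled_items"]),
    Task("approve postings", "human reviewer", ["labeled_items"], ["posted_entries"]),
]

def check_wiring(tasks, external_inputs=("pdf_batch",)):
    """Every input must come from an earlier task or an external feed."""
    available = set(external_inputs)
    for t in tasks:
        missing = [i for i in t.inputs if i not in available]
        if missing:
            return False, (t.name, missing)
        available.update(t.outputs)
    return True, None

print(check_wiring(workflow))  # (True, None)
```

Catching a dangling input at definition time, rather than in production, is one way such explicit structure makes the system reliable.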

This structured approach extends to project selection itself; Makowski advocates for a data-driven prioritization process, aligning proposed AI solutions with high-level company strategy and quantifying potential financial benefits. "We're solving a business problem, we're going to validate the solution to the business problem," stresses Makowski.
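A data-driven prioritization of this kind might reduce to scoring candidates by estimated financial benefit weighted by strategic alignment, then ranking. The projects and numbers below are invented purely to illustrate the shape of the calculation.

```python
# Hypothetical project candidates: annual benefit in dollars,
# alignment with company strategy on a 0-1 scale.
projects = [
    {"name": "invoice automation", "annual_benefit": 400_000, "alignment": 0.9},
    {"name": "chatbot revamp",     "annual_benefit": 150_000, "alignment": 0.5},
    {"name": "forecast upgrade",   "annual_benefit": 250_000, "alignment": 0.8},
]

def score(p):
    """Weight the financial benefit by strategic alignment."""
    return p["annual_benefit"] * p["alignment"]

ranked = sorted(projects, key=score, reverse=True)
for p in ranked:
    print(p["name"], round(score(p)))
```

The point is not the particular formula but that selection is quantified and tied to strategy, so the eventual solution can be validated against the business problem it was chosen to solve.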

Eyes forward: As AI continues to advance—"things are still progressing at a healthy speed," Makowski confirms—the focus sharpens on its most transformative applications. "I'm very excited about the ARC million dollar prize on AGI," he says. "It's more advanced cognitive thinking."

Peeking into AI’s grey box doesn't just clarify how it works. It reshapes what's possible.