
Enterprise AI has a branding problem. It’s seen as a black box—opaque, unexplainable, and risky. In reality, most systems today are grey boxes: transparent enough to build with, trust, and refine.
Greg Makowski, Chief Data Scientist at Ccube, sees the “black box” narrative as outdated. With decades of experience in AI and machine learning, he’s focused on designing systems that are not just powerful, but explainable grey boxes built for clarity and real business outcomes.
Inner workings: Makowski is candid about the current limits of comprehension, but firm on AI's accessibility: "I won't say we understand it completely. We don't understand the human brain completely either. But there is a degree, that's why I call it a grey box. It's not a black box that we know nothing about." This grey box perspective is crucial, he suggests, because "if you understand how things are working, then you're able to architect a reliable solution."
A practical understanding of how AI works is enough to use it effectively, and that level of understanding is well within reach for most. “I fully understand how a bike works, but that doesn't stop me from driving a car to work,” Makowski says. “A person may understand the 'bike level of detail' on how a car works, which can be enough. The same goes for AI systems.”
Fact or fiction: A common concern with AI is its potential for hallucinations. "It doesn't know if you're generating fiction or non-fiction," Makowski notes; models trained on scraped internet text encounter no inherent tags separating the two. To counter this, he champions techniques like Retrieval Augmented Generation (RAG), which grounds AI responses in an organization's factual data.
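The grounding idea behind RAG can be sketched in a few lines. This is an illustrative toy, not Makowski's or Ccube's implementation: the keyword-overlap retriever stands in for embedding-based vector search, and the assembled prompt would be sent to a real LLM in practice.

```python
# Minimal RAG sketch: retrieve relevant facts, then constrain the model
# to answer only from them. All names and data here are illustrative.

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query
    (a toy stand-in for embedding-based vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_grounded_prompt(query, documents):
    """Assemble a prompt instructing the model to use only retrieved facts."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using ONLY the facts below. "
        "If the facts are insufficient, say so.\n"
        f"Facts:\n{context}\n\nQuestion: {query}"
    )

# Toy organizational knowledge base
docs = [
    "Q3 revenue was $4.2M, up 8% year over year.",
    "The refund policy allows returns within 30 days.",
    "Headquarters relocated to Austin in 2022.",
]
print(build_grounded_prompt("What is the refund policy?", docs))
```

Because the prompt explicitly tells the model to refuse when the retrieved facts are insufficient, the system's answers stay tied to the organization's own data rather than whatever the model absorbed from the open internet.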
This engineered reliability underpins trust. Makowski insists that AI systems can be held to familiar standards: "I expect an LLM agent system to go through the same validation process as a human system or a code-based system. The same proof points work, and if my LLM agent system passes it, then why would you not have trust?" he asks.
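That validation argument can be made concrete: run the agent through the same pass/fail acceptance suite you would apply to a person or a rules-based system. The sketch below is hypothetical; `toy_agent` is a canned stand-in for a real LLM-backed agent, and the pass-rate threshold is an assumed acceptance criterion.

```python
# Sketch: validate an LLM agent with the same acceptance tests
# used for a human or code-based system. `toy_agent` is a stand-in.

def toy_agent(question: str) -> str:
    # Placeholder for a real LLM-backed, RAG-grounded agent.
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
    }
    return canned.get(question, "I don't know.")

# The same suite could score a person or a deterministic program.
test_cases = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
]

def validate(agent, cases, required_pass_rate=1.0):
    """Return (accepted, pass_rate) against a fixed acceptance threshold."""
    passed = sum(agent(q) == expected for q, expected in cases)
    rate = passed / len(cases)
    return rate >= required_pass_rate, rate

ok, rate = validate(toy_agent, test_cases)
print(f"pass rate: {rate:.0%}, accepted: {ok}")
```

The point is the symmetry: the harness does not care whether the system under test is an LLM agent, a script, or a human workflow; passing the same proof points earns the same trust.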