The future of AI is undeniably agentic, but the bottlenecks to widespread adoption are still being worked out at the infrastructure level. The physics of generative AI come down to "tokens and time," with the bulk of the technical lift falling on the foundation model providers and inference infrastructure that deliver those tokens to application-layer tools. As the underlying technology matures, larger questions arise: how business models can hedge against pricing commoditization, whether the future is open source, and, perhaps most importantly, how to meet the energy consumption required to power growing demand for the best "logic".

To understand the path forward, we spoke with Richard Ling, a commercial Go-to-Market leader at Groq. With a background in the energy sector and experience as a multi-time founder, he brings a unique, systems-level perspective to the AI market.

  • Beyond benchmarks: "Agentic AI is clearly the future, and I'm excited how cheaper and faster compute will enable agents to solve novel problems beyond our current imagination," said Ling. Flashy product demos often show aesthetically enticing use cases for agents but lack real-world impact. Ling envisioned a future where agents innovate by debating solutions, where they "actually fight just like humans would do" to determine the best answer, leading to new discoveries beyond the limits of human thought.

  • The three stages: This capability will likely unfold in three stages: from today's automation of "boring schlep work," to augmenting the "mid-level intelligence" of knowledge workers, and finally to performing the high-level intelligence of bleeding-edge research.