AI Hamsters Engineering: Circling Your Way to Success
Every AI engineer knows the loop — prompt tweaks, evals, regressions, repeat. It feels like going in circles. But that’s not the problem. The problem is not knowing if each circle is tighter than the last. This talk is about how Langfuse turns iteration from an act of faith into something measurable — traces, scores, and evals that tell you whether you’re actually moving forward or just staying busy.
But the loop isn’t just for the developer anymore. Tools like Cursor and Claude Code can tap into Langfuse mid-build, check whether recent changes moved things in the right direction, and keep iterating without waiting on you. The feedback loop becomes part of the development process itself — and that’s when the wheel really starts to spin.
Muhammad Ali
Muhammad Ali is an AI Engineer and Solutions Architect at ClickHouse, specializing in the intersection of real-time analytics and agentic AI. As the Langfuse Lead for the APJ region, Ali bridges the gap between data engineering and LLM orchestration. Over the past three years, he has designed AI applications for the likes of Apple, Atlassian, and Amazon, focusing heavily on the AI development lifecycle.
Muhammad’s expertise lies in transforming “blind” prototypes into observable, reliable systems. By combining ClickHouse’s high-speed telemetry with Langfuse’s evaluation frameworks, he helps developers solve the silent failures of multi-step agents. He previously served as the Principal Analytics Tech Lead (APJ) at AWS.