Orbital Lasers vs For Loops: Economically Matching Models to Tasks
Most developers pick their AI model the same way: use the biggest, smartest one available for everything. Bash script? Opus. Dockerfile? Whatever’s at the top of the dropdown. Then they hit their usage limits halfway through the day and lose the productivity gains they were chasing. After one too many cases of my workflow pausing because I’d hit my Claude subscription’s limit, I started asking a different question: what model does this task actually need? The answer, for a surprising number of daily tasks, was something far smaller, faster, and cheaper.
This talk shares a practical framework for model selection built from real development work across cloud infrastructure, scripting, code generation, and documentation. I’ll walk through concrete comparisons across model tiers — from frontier models through mid-range options down to lightweight and even local models — covering output quality, speed, cost, and the dimension most benchmarks ignore: actual impact on developer velocity. You’ll walk away with a mental model for matching tasks to appropriate tiers, an honest look at where cheap models genuinely fall short, and a case for why thoughtful model selection is an engineering discipline, not just a cost optimisation exercise.
Stephen Sennett
Stephen Sennett is a cloud technology leader, content creator, educator, and speaker. He has worked in the industry for over a decade in a variety of roles, currently as a Lead Consultant with V2 AI. He holds high-level certifications across multiple technologies, has been recognised as an AWS Community Hero, has spoken at events around the world, and authors technical content with A Cloud Guru (a Pluralsight company).
Outside work, he is a dedicated volunteer with numerous organisations, primarily in the Emergency Management sector, and has served around the country during several major national disasters.