Your Agent Doesn't Like Your APIs
Every API you’ve shipped was designed for a human reading docs. Agents don’t read docs - they load your entire tool schema into a context window on every call, then burn tokens guessing which endpoint to try.
Take a standard accounting API — clean REST, solid docs, every endpoint you’d expect. Point an agent at it with one task: get an invoice status. Watch it burn through tokens, pick the wrong endpoints, and maybe even give up. I’ll demo this failure live, then rebuild it into a handful of outcome-oriented tools — and the same query that 'failed' now works in a single call at a fraction of the cost.
The fix isn’t adding more endpoints or better docs. It’s rethinking what a “tool” means when your consumer is an LLM, not a developer. Teams building agent-facing APIs keep converging on the same design patterns, and they look nothing like good REST design.
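To make the contrast concrete, here is a minimal sketch of the difference between exposing endpoint-mirroring tools and a single outcome-oriented tool. All names and data (`get_invoice_status`, the `CUSTOMERS` table, etc.) are hypothetical illustrations, not the API from the talk:

```python
# Hypothetical in-memory stand-ins for a REST accounting API.
CUSTOMERS = {"acme": {"id": "c1"}}
INVOICES = {"c1": [{"id": "inv-42", "status_code": 3}]}
STATUS_CODES = {3: "overdue"}

# Endpoint-style tools: the agent must discover and chain three calls
# in the right order to answer "what's the invoice status?"
def list_customers():            # mirrors GET /customers
    return CUSTOMERS

def list_invoices(customer_id):  # mirrors GET /customers/{id}/invoices
    return INVOICES[customer_id]

def get_status_label(code):      # mirrors GET /statuses/{code}
    return STATUS_CODES[code]

# Outcome-oriented tool: one call that maps directly to the agent's task,
# doing the lookup chain server-side instead of in the context window.
def get_invoice_status(customer_name: str) -> str:
    """Return a human-readable status for the customer's latest invoice."""
    customer = CUSTOMERS[customer_name]
    invoice = list_invoices(customer["id"])[-1]
    return get_status_label(invoice["status_code"])

print(get_invoice_status("acme"))  # -> overdue
```

The point isn't fewer lines of code; it's that the agent's schema now contains one tool whose name and signature match the user's intent, instead of three resource endpoints it must reason about on every call.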
Mike Chambers
Mike Chambers is a Senior Developer Advocate for Generative AI at AWS, based in Brisbane. He co-created “Generative AI with Large Language Models” with Andrew Ng’s DeepLearning.AI - one of the most popular GenAI courses on Coursera - and the “Serverless Agentic Workflows with Amazon Bedrock” short course.
Mike builds agent systems in the open. His Lambda-MCP-Server (231 GitHub stars) was adopted into the official awslabs organization. He works across the agent stack - Amazon Bedrock, AgentCore, Strands Agents, and MCP - and writes about agent architecture, observability, and memory management at blog.mikegchambers.com.
Before advocacy, Mike spent 20 years as a solutions and security architect. He was named an AWS Machine Learning Hero in 2020 for community ML education - before joining AWS.