Your Agent Doesn’t Like Your APIs — Mike Chambers at AI Engineer Melbourne 2026
Years of API design have taught us that good design means simplicity for humans: clean REST endpoints, sensible hierarchies, well-structured JSON responses, and documentation that a developer can scan quickly. These principles have served us well. But there's a problem hiding in that success: APIs designed for humans often serve AI agents poorly.
The mismatch is subtle but significant. Agents think in terms of outcomes and capabilities, not the elegant resource semantics that REST provides. A pagination strategy that makes sense to a human reader — "here's page 2 of results, fetch page 3 next" — can confuse an agent. Nested response structures that are intuitive to read become navigation problems when an agent needs to pull specific data. The granularity of endpoints that works for humans often leaves agents making bizarre tool-calling decisions that no engineer would make.
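To make the mismatch concrete, here is a minimal sketch. The paginated, nested response shape and all field names are illustrative assumptions, not from the talk; the point is the contrast between a human-friendly "page 2 of results" response and the flat, complete view an agent actually reasons over.

```python
# Hypothetical paginated API: each "page" mimics a GET /orders?page=n response,
# with nesting that reads well to humans but forces an agent to navigate it.
PAGES = {
    1: {"data": {"orders": [{"order": {"id": 1, "status": "shipped"}}]},
        "meta": {"next_page": 2}},
    2: {"data": {"orders": [{"order": {"id": 2, "status": "pending"}}]},
        "meta": {"next_page": None}},
}

def fetch_page(n):
    """Stand-in for one HTTP call to the paginated endpoint."""
    return PAGES[n]

def fetch_all_orders():
    """What the agent actually wants: every order, flattened, in one result.

    Exhausting pagination server-side (or in a tool wrapper) spares the agent
    the 'fetch page 3 next' loop where it tends to get stuck or stop early.
    """
    orders, page = [], 1
    while page is not None:
        resp = fetch_page(page)
        orders.extend(o["order"] for o in resp["data"]["orders"])
        page = resp["meta"]["next_page"]
    return orders
```

A human developer follows the `next_page` cursor without thinking; an agent has to decide, on every turn, whether fetching the next page is worth another tool call.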
This isn't a minor usability issue. It affects reliability. Agents get stuck in loops, make unexpected API calls, or misinterpret the data they receive. The gap between "this API works great for the engineers using it" and "this API confuses the agent trying to act on it" is real and growing.
Mike Chambers, the Senior Developer Advocate for Generative AI at AWS, demonstrates this problem concretely in his talk. He shows a real agent failing against a standard REST API — not because the API is poorly designed, but because its design assumptions don't align with how agents reason. The agent struggles, makes redundant calls, or reaches incorrect conclusions.
Then comes the rebuild. Instead of trying to force the agent into the mold of traditional API design, the talk reframes the API around what agents actually need: outcome-oriented tools. Rather than resources and operations, you design for intent. Instead of pagination, you design for "get me all the data I need to solve this problem." Instead of nested structures, you design for "here's the information you requested, structured for decision-making."
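The reframing can be sketched in a few lines. Everything below is a hypothetical illustration of the idea, not code from the talk: the resource-oriented functions mimic the endpoint-per-resource style an agent must chain together, while the outcome-oriented tool is named for the intent and returns data shaped for the decision at hand.

```python
# Toy in-memory data standing in for a customer/orders backend (assumed shapes).
CUSTOMERS = {"a@example.com": {"id": 7}}
ORDERS = {7: [{"id": 101, "status": "delayed"},
              {"id": 102, "status": "shipped"}]}

# Resource-oriented style: the agent must know these exist, call them in the
# right order, and join the results itself.
def get_customer(email):       # like GET /customers?email=...
    return CUSTOMERS[email]

def get_orders(customer_id):   # like GET /customers/{id}/orders
    return ORDERS[customer_id]

# Outcome-oriented style: one tool, named for the question the agent is
# actually trying to answer, returning decision-ready structure.
def check_order_problems(email):
    """Tool: 'Does this customer have order problems needing action?'"""
    customer = get_customer(email)
    orders = get_orders(customer["id"])
    delayed = [o for o in orders if o["status"] == "delayed"]
    return {
        "total_orders": len(orders),
        "delayed_orders": delayed,
        "action_needed": bool(delayed),
    }
```

The design choice is where the joining logic lives: in the resource style it lives in the caller (the agent), where every step is a chance to mis-plan; in the outcome style it lives on the server, behind a tool name that matches the agent's intent.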
This shift has profound implications. It suggests that as AI agents become more central to how systems interact, we may need a parallel universe of API design patterns. Not replacing REST, but complementing it. APIs that are optimized for the way agents think, which is fundamentally different from the way human developers think.
The practical challenge is real: most systems have both human developers and AI agents interacting with them. Do you optimize for one? Build two different interfaces? Or find a design philosophy that works reasonably well for both?
What's emerging in practice is that outcome-oriented APIs tend to work better for both humans and agents than purely resource-oriented ones. They're more stable across changes. They're more forgiving of misuse. They push logic to the right place — the server, not the client. Ironically, designing for agents may mean designing better APIs for humans too.
For teams building systems that AI agents will interact with, the lesson is direct: your API design assumptions may not travel. Test with agents early. Watch how they fail. Let that inform your design, not as a compromise with human usability, but as a fundamental part of what good API design means right now.
Mike Chambers is presenting this perspective on API design for agents at AI Engineer Melbourne 2026 on June 3-4, drawing on his experience at AWS and his co-creation of DeepLearning.AI's generative AI curriculum.
