Constitutional Prompting: Making AI Coding Agents Reliable Without the Iteration Tax — Prem Pillai at AI Engineer Melbourne 2026
AI coding agents are impressive when they work. They generate code that's functional, sometimes elegant, often exactly what you needed. But they often don't work on the first try. Getting them to generate the code you actually want usually requires iteration: generate, review, reject, regenerate, review again. That iteration cycle is expensive.
Most teams treat this as an inevitable cost of using coding agents. It's not. It's a design problem.
The fundamental issue is that AI agents come with implicit preferences and defaults that aren't always aligned with your requirements. An agent might prioritise brevity over readability. It might generate clever code when you need maintainable code. It might use patterns that work but violate your team's standards. It might optimise for one performance dimension when you care about another. Without explicit guidance, the agent will make trade-offs that aren't your trade-offs.
Prem Pillai's approach to this problem is elegant: encode your values and constraints directly into the agent's operating principles, not as post-generation checks but as foundational directives that shape code generation from the start.
Constitutional prompting works by building a constitution: a set of principles that defines what good code looks like in your context. Not general principles like "code should be readable" (too vague) but specific ones: "prefer explicit variable names over single-letter names," "use error handling for all I/O operations," "write functions under 20 lines," "include docstrings for all public functions."
These principles are then embedded into the prompting strategy itself. Rather than asking the agent to "write a function to validate email addresses," you ask it to "write a function to validate email addresses while adhering to these constitutional principles." The agent is operating within constraints that reflect your actual requirements.
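As a concrete sketch of this idea, the principles can live as plain data and be prepended to every task before it reaches the agent. The wording of the principles below comes from the examples above; the function name and prompt wording are illustrative assumptions, not a prescribed format.

```python
# The constitution as explicit, reviewable data. Principle wording
# mirrors the examples in the text; the structure is one possible sketch.
PRINCIPLES = [
    "Prefer explicit variable names over single-letter names.",
    "Use error handling for all I/O operations.",
    "Write functions under 20 lines.",
    "Include docstrings for all public functions.",
]

def constitutional_prompt(task: str, principles=PRINCIPLES) -> str:
    """Wrap a plain task in the constitutional principles so they
    shape generation up front, rather than filtering output afterwards."""
    rules = "\n".join(f"- {p}" for p in principles)
    return (
        "You are a coding agent. Follow these principles in all code you write:\n"
        f"{rules}\n\n"
        f"Task: {task}"
    )

prompt = constitutional_prompt("Write a function to validate email addresses.")
```

Every task goes through the same wrapper, so the constraints travel with the request instead of living only in a reviewer's head.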
The magic of this approach is that it shifts the problem. Instead of generating code and then filtering it (which is what iteration does), the agent is generating code that's already likely to meet your standards. You're not correcting failures; you're preventing failures upstream.
This has profound implications for the iteration cost. If you can reduce the number of rounds of generation and review from three or four down to one or two, you've dramatically reduced the cost and complexity of using coding agents. If you can sometimes get it right on the first try, the efficiency gain is enormous.
Constitutional prompting also has a side benefit: it makes agent behaviour more predictable. An agent operating without explicit constraints is optimising for some implicit objective you can't see. An agent operating within explicit constitutional constraints is optimising for something you've defined. That makes the agent's decisions more understandable and more aligned with what you're trying to build.
The challenge is getting the constitution right. It's easy to write principles that are too vague ("write good code") or too rigid (constraints that prevent valid solutions). The constitution needs to capture the essential standards that define code quality in your context without being so prescriptive that it prevents innovation or eliminates valid approaches.
Building an effective constitution requires iteration too, but it's iteration on principles rather than code. You discover through use that certain constraints are too strict, others too loose. You refine them. But this iteration happens once, at the level of the constitution, not for every piece of code the agent generates.
This is also valuable for team coordination. Code standards that are written down only in the team wiki or a style guide are often honoured inconsistently. Code standards that are enforced by the coding agent are enforced consistently. The constitution becomes the executable definition of "code quality" in your team.
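Some constitutional principles really can be made executable, which is what gives the constitution teeth beyond the wiki. Below is a hedged sketch, assuming Python as the target language, that mechanically checks two of the example principles (function length and docstrings) using the standard `ast` module; it is an illustration, not a tool from the talk.

```python
import ast

MAX_FUNC_LINES = 20  # principle: "write functions under 20 lines"

def constitution_violations(source: str) -> list[str]:
    """Return descriptions of constitutional violations found in source code.

    Checks two mechanically verifiable principles: function length
    and the presence of docstrings.
    """
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNC_LINES:
                violations.append(
                    f"{node.name}: {length} lines (max {MAX_FUNC_LINES})"
                )
            if ast.get_docstring(node) is None:
                violations.append(f"{node.name}: missing docstring")
    return violations
```

A check like this can gate generated code in CI, so the same principles that shaped generation also verify it, consistently and without a human in the loop.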
The approach also scales. When you're using agents for a large amount of code generation, the efficiency gains from reducing iteration cycles compound. You're not just saving a few minutes per function; you're saving significant engineering time across thousands of functions.
There's a deeper insight about AI agents here: they're not tools that produce random outputs that you filter through human judgment. They're systems that can be shaped and directed through careful prompting. The difference between an agent that requires three rounds of iteration and an agent that often gets it right the first time isn't capability; it's design.
Constitutional prompting is design. It's taking the requirements that usually emerge gradually through iteration and making them explicit from the beginning. The payoff is agents that produce more useful output with fewer feedback loops.
Prem Pillai, Senior AI Engineer at Block Inc, is presenting this talk at AI Engineer Melbourne 2026 on June 3-4.