Year round learning for product, design and engineering professionals

Your weekly reading from Web Directions–Where Does the Rigor Go?

Before we begin with this week’s reading, some news about upcoming events and more from Web Directions. Or jump straight to this week’s reading!

Project Noops

At Noops, Mark Pesce and I team up to parse the signals out of the AI transformation as it happens. Read more and sign up.

AI Engineer Nights (Sydney and Melbourne)

In April we’ll be bringing you AI Engineer Nights in Sydney (April 16th) and Melbourne (April 9th)–free evenings with a taste of what we’ll have on stage for AI Engineer Melbourne.

It’s free, but places are limited so please RSVP!

AI Engineer Unconferences

Also taking place in Sydney (April 18th) and Melbourne (April 11th), AI Engineer Unconferences will explore the key ideas associated with the AI Engineer Conference series, from the impact of AI on the software development process and profession to the opportunities emerging with AI for new products and services. Whether it’s agents like OpenClaw or enterprise workloads, you set the agenda.

Again, AI Engineer Unconferences are free; just RSVP so we can best plan for the day.

AI Engineer Early bird pricing ends next Friday

Tickets for AI Engineer are selling incredibly well, and early bird pricing ends April 3 (next Friday)–so don’t delay! Save hundreds on a full-priced ticket and, most importantly, make sure you don’t miss out!

Where Does the Rigor Go?

The most interesting thing about this week’s reading isn’t the now-familiar observation that AI is changing software development. It’s the growing clarity about what exactly is being relocated when code generation gets cheap.

Charity Majors, writing about the Deer Valley unconference on AI and software engineering, astutely observes: “Constraint removal is mistaken for loss of rigor. But what actually happens, when things go well, is rigor relocation.” That phrase—rigor relocation—is the thread running through nearly everything worth reading this week.

Consider what Steve Krouse is grappling with. For decades, the lesson was consistent: unless you specified exactly what you wanted a system to do, it wouldn’t do it. And specifying exactly what you wanted was indistinguishable from programming. That’s no longer empirically true. “Reasonably imprecise descriptions” now get you remarkably close. As someone who’s been in this industry since the 1980s, I find this genuinely disorienting—not because it’s hard to see it happening, but because it contradicts everything we understood about the relationship between precision and working software.

But if code generation is no longer the bottleneck, what is? David Poll’s piece on code review offers one answer. Code review was never really about catching bugs—it answers the question “should this be part of my product?” That’s a judgment call about architecture, intent, and taste. Tests tell you if code works. Production observability tells you what the system is actually doing. Code review tells you whether the author’s intent was right in the first place. In an era of AI-generated code, that distinction matters more than ever, because the volume of code that works but shouldn’t exist is about to increase dramatically.

Chad Fowler pushes this further in Compile to Architecture. If code is cheap to generate, he argues, the real compilation target shouldn’t be a React app or a Django service—it should be the architecture itself. “The problem is no longer producing code. The problem is replacing it safely.” This resonates with something I wrote a few months ago about stack collapse: the layers of abstraction we built to make development manageable were themselves a response to code being expensive to write. Remove that constraint and the abstractions lose their justification. What remains is the need for architectural thinking that transcends any particular implementation.

Simon Willison’s latest chapter in his agentic engineering patterns guide brings this down to earth. Git—a tool most developers use reluctantly and understand superficially—turns out to be foundational for working with coding agents. The agents are fluent in Git’s features in ways most humans aren’t. The practical implication is that developers don’t need to memorise Git’s arcana, but they do need to understand what’s possible so they can direct their AI collaborators effectively. It’s another instance of rigor relocation: from remembering syntax to understanding capability.

The complicating perspective this week comes from an unexpected angle. Dominik Rudnik’s account of preparing for Google interviews using AI isn’t a story about shortcuts. It’s about a working developer who’d never formally studied algorithms using Claude to build genuine understanding of material he’d previously avoided. The easy criticism writes itself—”he’s just getting AI to do the learning for him”—but the reality is more nuanced. He used AI to work through a textbook on machine learning foundations, building mathematical intuition rather than memorising solutions. If that’s not rigor, I’m not sure what is. It’s just relocated from the classroom to a conversation with an LLM.

What to watch: the Deer Valley symposium produced a document that Majors herself found incomplete—she wanted more emphasis on production and observability. That gap matters. As our industry gets increasingly comfortable with AI-generated code, the teams that invest in understanding what their systems are actually doing in production will have a decisive advantage over those still arguing about whether AI can write code at all. That debate is over. The interesting questions are all downstream.

Now on with this week’s reading.

AI & the Future of Code

Reports of code’s death are greatly exaggerated

AI Native Dev, Coding Agent, Software Engineering

Until probably late 2025, I would have largely agreed with the sentiment of this comic. Having worked professionally as a software engineer for decades, and having studied software engineering at university in the 1980s, I’ve seen the promise of higher-order abstractions replacing human programming in languages like C or Java or Python (or the language of your choice) touted over and over again.

Whether it was 4GLs in the 1980s, or low-code, no-code more recently, the holy grail of programming seemed to be getting rid of programmers.

Time and again it turned out that unless you very meticulously specified what you wanted a system to do, it didn’t do what you wanted it to do. And very meticulously specifying what you want a system to do is, or has been until very recently, indistinguishable from programming.

That’s not empirically true anymore. It’s baffling, almost unimaginable. But reasonably imprecise descriptions of what you want a system to do can get you very close to the system you had in mind. It flies in the face of decades of theoretical and, above all, empirical experience.

But, as Galileo is supposed to have said when confronted with all kinds of effectively theological arguments against a heliocentric model of the solar system, Eppure si muove–”and yet it moves”.

Coming to terms with the empirical reality of how large language models work, and what they can do, is a singular challenge–not just for software engineers, but for experts in many fields.

This is one of many such examples.

Source: Reports of code’s death are greatly exaggerated, stevekrouse.com

Compile to Architecture – The Phoenix Architecture

AI, Software Engineering

For a long time we’ve treated frameworks as the target of software development. But if systems are meant to be regenerated and replaced safely, the real compilation target has to be the architecture itself. The industry is still trying to generate applications. A React app. A Django service. A Rails API. A FastAPI backend. That instinct made sense when writing software was the expensive part. But in a world where code can be generated quickly and cheaply, the real constraint has shifted. The problem is no longer producing code. The problem is replacing it safely.

Source: Compile to Architecture – The Phoenix Architecture, aicoding.leaflet.pub

A few months ago, in Stack Collapse, I suggested that the layers of abstraction we built on top of the underlying browser capability—the DOM and the browser APIs—were no longer something we should be building on. Here Chad Fowler explores a very similar idea.

Software Engineering in the AI Era

Code Review Is Not About Catching Bugs

AI, Software Engineering

Code review answers: ‘Should this be part of my product?’ That’s a judgment call, and it’s a fundamentally different question than ‘does it work.’ Does this approach fit our architecture? Does it introduce complexity we’ll regret in six months? Are we building toward the product we intend, or accumulating decisions that pull us sideways? Does this abstraction earn its keep, or are we over-engineering for a future that may never arrive? Does this feel right – not just functionally correct, but does it reflect the taste and standards we want our product to embody?

Source: Code Review Is Not About Catching Bugs, davidpoll.com

Tests answer “does the code do what the author intended.” Production observability answers “what is the system actually doing.” Code review answers “was the author’s intent the right thing to build?” You need all three. None of them substitutes for the others.

Production Is Where the Rigor Goes

AI, o11y, Observability, Software Engineering

In early February, Martin Fowler and the good folks at Thoughtworks sponsored a small, invite-only unconference in Deer Valley, Utah—birthplace of the Agile Manifesto—to talk about how software engineering is changing in the AI-native era. This document represents an almost incalculable amount of engineering skill, practical expertise, and battle-hardened wisdom, from some of the leading voices and actual titans in our field. It’s also a fascinating capsule of where the industry is at in this weird, compressed moment of change, from people who aren’t trying to sell you anything. Across decades of software evolution, the same misunderstanding keeps recurring. Constraint removal is mistaken for loss of rigor. But what actually happens, when things go well, is rigor relocation. Control doesn’t disappear. It moves closer to reality. If [code] generation gets easier, judgment must get stricter. Otherwise, you’re not engineering anymore.

Source: Production Is Where the Rigor Goes, Honeycomb

We’ve covered the recent symposium on AI and software engineering, held a few weeks ago by some world-leading software engineers. Annie Vella gave her thoughts. Here Charity Majors, another participant, reflects on what she feels were particular omissions or shortcomings–above all, the importance of production and observability.

AI-Native Development

Using Git with coding agents – Agentic Engineering Patterns

AI, Software Engineering

Git is a key tool for working with coding agents. Keeping code in version control lets us record how that code changes over time and investigate and reverse any mistakes. All of the coding agents are fluent in using Git’s features, both basic and advanced. This fluency means we can be more ambitious about how we use Git ourselves. We don’t need to memorize how to do things with Git, but staying aware of what’s possible means we can take advantage of the full suite of Git’s abilities.

Source: Using Git with coding agents, Simon Willison’s Weblog

Simon Willison continues his “book-shaped” project on agentic coding. The current chapter focuses on Git for agentic coding, with an overview of some of the most important features as well as how best to work with them alongside AI coding systems.

My Google Recruitment Journey (Part 1): Brute-Forcing My Algorithmic Ignorance

AI, Software Engineering

About 2 months ago, an email from xwf.google.com dropped into my inbox, referencing an application from a year prior that I even forgot about. My initial classification was that it is not possible and that this is just spam. But after the screening call, the reality hit: I will have two online interviews (one technical, one behavioral) in just a week. And not just a regular interview to another company, these will be interviews for a company that I still consider as one of the top-of-the-world factory of engineers. This was a critical state. I’ve worked as a software developer in telecommunications for a few years, focusing on high-level abstraction: routing, message processing, and writing business logic. In my hobbyist gamedev projects, even though sometimes I liked to make some pathfinding algorithm or to do a CPU 3D rasterizer by hand, at the end of the day my metric for success was simple: if it runs at >60 FPS without drops, it ships.

Source: My Google Recruitment Journey (Part 1), blog.dominikrudnik.pl

A fascinating account of how AI can sharpen and deepen your knowledge rather than diminish it. The author uses Claude to work through technical interview preparation—not as a shortcut, but as a way to build genuine understanding of algorithmic foundations he’d never formally studied.

delivering year round learning for front end and full stack professionals

Learn more about us

Web Directions South is the must-attend event of the year for anyone serious about web development

Phil Whitehouse General Manager, DT Sydney