Your weekend reading from Web Directions
A quick reminder that super early bird pricing for 2026 conferences like AI Engineer, ai×design, UX Australia, and Design Research ends January 31st. To get the absolute best possible price for any of these conferences, please register by then. You can attend in person or online, and CFPs are open. We’re looking for great presentations for all of these events.
Clearly something has shifted over the last few weeks. Perhaps people took time off over the holiday period to spend a bit more time working with ChatGPT, Claude, or Gemini, all of which gained significantly more capable models, particularly for code, in the latter part of last year. Perhaps the models, and harnesses like Claude Code and Google’s Antigravity, made a step-function leap in capability. It certainly felt like that to me.
Last week, I referenced Dario Amodei of Anthropic’s prediction from early 2025 that, by the end of the year, 90% of code would be generated by LLMs. At the time, and even well into the year, it seemed a very foolhardy prediction. But by the end of the year it had, in many ways, come to pass, and was certainly more than feasible in many situations.
Then, in recent weeks, we have seen extremely high-profile software engineers like Linus Torvalds, Gergely Orosz (the Pragmatic Engineer), and just very recently Ryan Dahl, the originator of Node.js, all either speaking approvingly of the practice of using LLMs for software engineering, or, in the case of the latter two, essentially endorsing Amodei’s prediction.
It mostly went smoothly, although I had to figure out what the problem with using the builtin rectangle select was. After telling antigravity to just do a custom RectangleSelector, things went much better. Is this much better than I could do by hand? Sure is. —Linus Torvalds
Models suddenly getting good enough to write most of my code—which I am now prompting—creates complicated feelings. It took a long time to get good at coding. And it’s not easy. Plus, there was something special about being in “the zone.” —Gergely Orosz
This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That’s not to say SWEs don’t have work to do, but writing syntax directly is not it. —Ryan Dahl
People who gave these systems a go 6, 12, or 18 months ago and found them underwhelming came back and realised how much more capable they’ve become. My timeline across social media, in blog posts, and on podcasts is full of people now thinking more deeply about the implications of all this.
That’s something I’ve been giving a lot of thought to as well. In my commentary associated with the pieces below, you’ll find a few of my thoughts, but it’s something I’m working on more deeply, with hopefully some more in-depth writing on this coming soon.
Among our collected pieces this week are both what I might call “practical” pieces for working with AI and LLMs as a software engineer, and some more open-ended pieces that consider questions like “what is the value of software when it can be produced so relatively inexpensively?” and “what is the role of a software engineer when AI systems are producing so much of the actual code?”
Right now, things might feel like they’re moving very quickly. I know many software engineers, even very thoughtful and capable ones, who feel it is very hard to keep up. I have the privilege of dedicating a lot of my time to thinking and reading about these kinds of issues, and sometimes I feel that way myself.
It’s worth keeping in mind the observation that we tend to overestimate the impact of technological change in a year and underestimate the change in a decade. A lot happens that doesn’t ultimately have long-term consequences, and at the time it’s very hard to know which specific developments will prove transformative. I think above all it’s important to be pragmatic. To think about your own work and how it might be transformed with these technologies and practices. Track what’s happening, but you don’t have to run out and implement a Ralph Wiggum loop at your bank tomorrow.
Hopefully, if you follow me on LinkedIn or Bluesky or Mastodon, read this newsletter, or keep an eye on our blog, that will help you maintain a sense of what is developing. If you can get to our conferences, in particular AI Engineer in Melbourne in June (which is also online), or ai×design if design is your area of practice, I’m very confident they will help you keep abreast of what is going on.
The Open Web & Its Future
Some Thoughts on the Open Web

“The Open Web” means several things to different people, depending on context, but recently discussions have focused on the Web’s Openness in terms of access to information—how easy it is to publish and obtain information without barriers there.
…
In other words, we have to create an Internet where people want to publish content openly—for some definition of “open.” Doing that may challenge the assumptions we’ve made about the Web as well as what we want “open” to be. What’s worked before may no longer create the incentive structure that leads to the greatest amount of content available to the greatest number of people for the greatest number of purposes.
Source: Some Thoughts on the Open Web – Mark Nottingham
Mark Nottingham has been heavily involved in the development of standards like HTTP at the IETF for many years. We’ve also had the privilege of having him speak multiple times at our conferences. Here he brings together some thoughts about the open web and its future, about which many are, not without reason, expressing concern.
Publishers report a very significant drop-off in referral traffic from search engines in the last year or so. Meanwhile, frontier model developers have freely used open web content to train their models, which have turned several of them into some of the biggest, and certainly fastest-growing, companies in history. So, what future does the open web have, and what can we do about it? Well worth a read here from Mark.
AI & Software Engineering
The Year Everything Changed

In popular imagination, “AI” has come to mean the cheap version of ChatGPT, prattling in a grating tone with too many emojis, variously misleading and making things up.
AI, in this view, is a stupid machine that makes stupid text. LLMs can certainly be this thing.
Software circles aren’t much better: LLM-enabled development is about code generation. Tell it to extrude code for a purpose, and maybe it will, and maybe it will work.
The truth of things is far, far stranger than either conception. By the close of 2025, it was possible to know the true purpose of LLMs: to act as the engines for a previously-impossible category of software.
Source: The Year Everything Changed – Network Games
The reason that this piece grabbed me was its opening:
If you asked me in 2024 what my biggest fear was, I’d have told you: I was afraid my best years were behind me, as someone who builds things.
That speaks to me. I’ve built things with software for the better part of 40 years, professionally for 30 or more. Even as my primary focus increasingly became communicating, organising conferences, and connecting people, it remained important to me to make and build things. Often they were tools we used internally to run our conferences and other systems better, but sometimes they were just ideas I wanted to explore.
Code as Commodity

Thanks to generative AI, code is following a similar pattern. Projects that would have been uneconomic through traditional software development are now just a prompt away. Those 500+-products-per-day on Product Hunt? Not all of them are good, but that’s what abundance brings.
But this doesn’t mean developers are obsolete. It means that the locus of value is broadening as this exclusive skill becomes more widely accessible.
Thus, the real question isn’t “will we be replaced?” but, “what becomes valuable when code itself is cheap?”
Source: Code as Commodity – TESSL
Over the course of the last three years I’ve worked extensively with large language models to write code. I started in late 2022 when ChatGPT first emerged, working with it to help write Bash scripts that would allow me to better take advantage of FFmpeg to automate aspects of a workflow that had been very manual—getting clips from videos at particular timestamps.
Looking back, I probably could have done this just as quickly without using the LLM, and certainly not a lot more slowly. But something about it felt compelling. Over the last three years I’ve continued working with these tools, increasingly so, and have seen them get better and better, sometimes on exactly the same task.
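For a sense of the kind of thing those scripts did, here’s a minimal sketch, in Python rather than the original Bash, of pulling a clip out of a video at a given timestamp with FFmpeg. The file names and timestamps are placeholders, and it assumes ffmpeg is installed and on your PATH.

import subprocess

def extract_clip(source, start, duration_seconds, output):
    # Copy a clip from `source` starting at `start` (HH:MM:SS) without re-encoding.
    subprocess.run(
        [
            "ffmpeg",
            "-ss", start,                  # seek to the start timestamp
            "-i", source,                  # input video
            "-t", str(duration_seconds),   # clip length in seconds
            "-c", "copy",                  # copy the streams rather than re-encode
            output,
        ],
        check=True,
    )

extract_clip("talk.mp4", "00:12:30", 90, "clip.mp4")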
Somewhere around the middle of 2025, I started to feel that we might be on an S-curve with the capabilities of the models, at least when it comes to coding. An S-curve is one that initially looks exponential but then plateaus, approaching an asymptote. And it felt like we might have been plateauing. We had very good models, and even if they never got much better, they would still deliver a tremendous amount of economic value.
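For the mathematically inclined, the canonical S-curve is the logistic function, which grows roughly exponentially at first and then flattens toward a ceiling:

f(t) = \frac{L}{1 + e^{-k(t - t_0)}}

Here L is the asymptote (the ceiling the curve approaches), k the growth rate, and t_0 the midpoint where growth is steepest. Early on, f(t) is hard to distinguish from an exponential; the question is always how close you are to t_0.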
Toward the end of 2025, particularly with the launch of Opus 4.5 and Claude Code (which is how I now work with that model), it felt like we had a huge leap forward in capabilities—not just in what the models could do step by step, but in their longer-term approach to solving a problem. It was no longer a turn-by-turn affair; I could set up a task and the model could run for an extended period and, more or less in a single shot, create something that even six months before might have taken hours of turn-by-turn work to refine and ensure it worked the way I wanted.
At the beginning of 2025, Dario Amodei, one of the founders of Anthropic, predicted that by the end of the year 90% of code would be written by large language models. Midway through the year, that looked like a ludicrous prediction, and there were numerous mocking references to it. By the end of last year—certainly by now—I think it’s plausible. Anthropic themselves claim to have built their new Cowork feature in ten days largely this way, and certainly in my experience, 90% of the code I would have written by hand three years ago is now written by a large language model. I know as much as almost anyone about frontend development and the code that goes behind it: best practices, accessibility, performance. I wouldn’t hand-write just about any of this anymore, except when it comes to maintaining legacy systems. The next step for me is to see how well Claude Code and Opus 4.5 will do at helping me maintain some quite significant legacy systems I built over a period of as much as twenty years.
The assumption underlying this essay by friend of Web Directions Chris Messina, that not just code but software, and indeed entire products, are now essentially commodities, is at least a decent working hypothesis to explore. For someone who has invested 40 years of their life getting pretty good at writing all kinds of code, this could be terrifying. My work is commodified; my knowledge is a commodity.
History suggests we should perhaps be concerned when this happens. A famous example is the weavers of the early 19th century, who went from being artisans, incredibly well paid by the standards of the day for manual labour, to being commoditised within years by mechanical, steam-powered looms.
Here Chris asks the question: “What can be our value when code itself is a commodity?” If you write software, or design software, or work in the development of these systems as a product manager, you should sit down and read this essay and explore that question for yourself.
As AI Coding Agents Take Flight, What Does This Mean for Jobs?

But if AI is doing more of the software building grunt work, what does that mean for the humans involved? It’s a question that’s front-of-mind for just about everyone in the industry. Anthony Goto, a staff engineer at Netflix, addressed this matter directly on TikTok a few weeks back. The most common question that he hears from new graduates and early-career engineers, he said, is whether they’ve made a mistake entering software development just as AI tools are accelerating. “Are we cooked?” is how he succinctly summed up the concern.
Source: As AI Coding Agents Take Flight, What Does This Mean for Jobs? – TESSL
Clearly, this is a question that is front of mind for a lot of folks in software engineering right now. In this piece from TESSL, there’s a useful roundup of what some prominent and well-known software engineers have been saying recently about the impact of AI and large language models on software engineering.
AI-Native Development in Practice
Scaling Long-Running Autonomous Coding
AI Native Dev, LLMs, Software Engineering

In my predictions for 2026 the other day I said that by 2029:
I think somebody will have built a full web browser mostly using AI assistance, and it won’t even be surprising. Rolling a new web browser is one of the most complicated software projects I can imagine[…] the cheat code is the conformance suites. If there are existing tests, it’ll get so much easier.
I may have been off by three years, because Cursor chose “building a web browser from scratch” as their test case for their agent swarm approach:
Source: Scaling Long-Running Autonomous Coding – Simon Willison
I’m surprised this hasn’t gained more notice over the last couple of days. I know a thing or two about web browsers and web technology. I’ve spelunked the source code of browsers going back many years. I’m not even remotely capable of contributing a line of code to any modern browser, but I do have a sense of what they have to do to render even the most basic web page, from the network layer all the way up to the rendering layer and beyond.
Now, this is not going to compete with Google Chrome. It’s not going to be your daily driver, but it’s an extraordinary example of just how capable modern large language model code generating systems have become in a very short period.
I don’t think we’ve begun to digest the implications of this, not just as software engineers for our practice and profession, but for the economy. A decade or so ago, Marc Andreessen said “Software is eating the world.” What’s happening now? And what will happen in the coming months and years?
AI-Assisted Development at Block
AI, AI Native Dev, LLMs, Software Engineering

About 95% of our engineers are regularly using AI to assist with their development efforts. The largest population is at Stage 5, running a single agent mostly outside of an IDE. The second largest population is at Stage 6 and is running 3-5 agent instances in parallel. Then there’s a small population that is actively building our internal agent orchestrator in preparation for the inevitable.
So how does an engineering organization move from Stage 1, where engineers are just starting their AI-assisted coding journey, to an advanced stage where they are managing so many parallel agents that they now need an orchestrator? Here’s how we’re doing it at Block.
Source: AI-Assisted Development at Block – Block Engineering Blog
I think it’s very valuable to read about the experiences of not just individual software engineers but larger organisations in their adoption of machine learning and generative AI in their software engineering practices. Here Angie Jones from Block’s open source team talks about how Block is adopting these technologies. This is a large organisation doing serious work in the financial technology space, and so I think it’s very valuable to pay attention to what teams like this are doing.
I Was a Top 0.01% Cursor User. Here’s Why I Switched to Claude Code 2.0.
AI Native Dev, Software Engineering

You have 6-7 articles bookmarked about Claude Code. You’ve seen the wave. You want to be a part of it. Here’s a comprehensive guide from someone who’s been using coding AI since 2021 and read all those Claude Code guides so you don’t have to.
Source: I Was a Top 0.01% Cursor User. Here’s Why I Switched to Claude Code 2.0. – Silen
I’ve been working with several major models and products from Google and OpenAI to write software for the better part of three years now, indeed more. But in recent weeks, I’ve found Claude Code to be a step-function change in capability.
So while this is very product-centric, it comes highly recommended, particularly if you are working with Claude Code or considering it. The more ideas you have about techniques for working with these technologies, the more, I think, you’ll get out of them.
Ralph Wiggum Loop Explained
AI, AI Native Dev, Software Engineering

The Ralph Wiggum Loop is getting a lot of attention in the AI agent space, but there’s still confusion about what it actually is and what problem it’s trying to solve. In this video, we break down the real failure mode behind long-running agent loops. Context accumulation, why retries make agents worse over time, and why common fixes like compaction can be lossy. The Ralph Wiggum Loop is one response to that problem. Not as a magic trick, but as a pattern that resets context while preserving progress through external memory and review. This video uses goose as the concrete environment to explore the idea, but the concepts apply broadly to agentic workflows.
Source: Ralph Wiggum Loop Explained – YouTube
This is a very good, succinct overview of the Ralph Wiggum loop, which is all the rage right now. If you want to know what the fuss is about, set aside a handful of minutes and get up to speed with this.
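As a rough illustration only, and not the exact implementation the video walks through, a Ralph-style loop looks something like the sketch below. The run_agent function is a hypothetical stand-in for whatever harness you use (Claude Code, goose, and so on); the key ideas are that every iteration starts with a fresh context, and progress persists in an external file rather than in the conversation history.

from pathlib import Path

NOTES = Path("progress.md")  # external memory shared across iterations
TASK = "Implement the TODO items in ./src, updating progress.md as you go."

def run_agent(prompt):
    # Hypothetical placeholder: invoke your agent harness with a fresh context
    # and return its final output as a string.
    raise NotImplementedError

for iteration in range(50):  # hard cap so the loop always terminates
    notes = NOTES.read_text() if NOTES.exists() else "(no progress yet)"
    prompt = (
        f"{TASK}\n\nProgress so far:\n{notes}\n\n"
        "Do the next small chunk of work, then rewrite progress.md with what is "
        "done and what remains. Reply DONE when nothing remains."
    )
    result = run_agent(prompt)  # fresh context every time; no accumulated history
    if "DONE" in result:
        break

The point is that instead of letting one long-running agent accumulate, and eventually poison, its own context, you throw the context away each time and let the file system carry the state.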
Code Reviewing AI-Generated JavaScript: What I Found
AI Native Dev, Debugging, JavaScript, Software Engineering

I recently had an AI agent build a JavaScript utility for calculating road distances using a third-party API. The task was complex: batch multiple API requests, validate inputs and outputs, handle errors gracefully, and manage timeouts. The agent delivered working code that passed its own tests.
The code worked, but several issues ranged from minor inefficiencies to a critical bug that would break production. Here’s what I found and how we fixed each one.
Source: Code Reviewing AI-Generated JavaScript: What I Found – Schalk Neethling
I think this is worth a read, despite being very light on some details, such as which model and harness were used.
Part II, where he’ll look at the commonalities here and techniques for catching these issues during the development process, may well end up being more valuable. I look forward to seeing that.
Programming Languages & Tools
Nanolang: A Tiny Experimental Language Designed to Be Targeted by Coding LLMs
Computer Science, Software Engineering, LLMs

A tiny experimental language designed to be targeted by coding LLMs
Source: jordanhubbard/nanolang – GitHub
Bret Taylor, among others, has speculated that if, increasingly, large language models rather than humans write software, then the programming languages we have today, designed for human readability, writability, maintainability, type safety, memory safety, and so on, will not really make sense.
This is an experimental language designed for a world of large language model code generators. Sure, it won’t be the last, but it’s the first that I’ve heard of.
AI & Learning
How To Use AI for the Ancient Art of Close Reading

Close reading is a technique for careful analysis of a piece of writing, paying close attention to the exact language, structure, and content of the text. As Eric Ries described it, “close reading is one of our civilization’s oldest and most powerful technologies for trying to communicate the gestalt of a thing, the overall holistic understanding of it more than just what can be communicated in language because language is so limited.” It was (and in some cases still is) practiced by many ancient cultures and major religions.
It might come as a surprise that a technique associated with such a long history could now see a revival with the use of Large Language Models (LLMs). With an LLM, you can pause after a paragraph to ask clarifying questions, such as ‘What does this term mean?’ or ‘How does this connect to what came before?’
Source: How To Use AI for the Ancient Art of Close Reading – fast.ai
At university, decades ago now, I studied English Literature among many other topics and was introduced to the concept of close reading. So this from FastAI caught my attention.
We often see concerns that AI and large language models are reducing people’s capacity to reason deeply and think extensively. There are many concerns about the impact on education, and I don’t think those should be dismissed out of hand. But approaches like this—using a large language model as a tool to aid processes like close reading—give me some optimism for more positive outcomes.
In my own experience over the last few weeks, I’ve been reading Anil Ananthaswamy’s wonderful Why Machines Learn, about the mathematics of machine learning and AI.
I also studied mathematics at university, indeed have a degree in the subject, again decades ago. And while the mathematics in that book doesn’t go into great depth, I wanted to make sure I really understood the key concepts, particularly Bayes’ theorem, of which I had a general understanding but which Ananthaswamy looks at in some detail.
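For those who haven’t met it, or have only a hazy memory of it, the theorem itself is compact; the depth Ananthaswamy explores is in what it means and how it’s applied:

P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}

In words: the probability of a hypothesis A given evidence B is the likelihood of that evidence under the hypothesis, weighted by the prior probability of the hypothesis, and normalised by the overall probability of the evidence.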
I used a technique of photographing each section of the book as I read it, uploading it to Claude, and then having a conversation about that section. When I wasn’t clear on a particular aspect, I asked for more detail and clarified my understanding. It certainly took longer, but I came away with a much deeper understanding than I would have if I’d simply worked through it in my own head.
We’re still, I think, at an early stage in the development of these models’ capabilities. I believe they will get significantly better than they are today, even though by 2023 standards they’re already remarkably capable. And beyond that, we’ll develop new techniques, new approaches, new patterns. We’ll learn how to work with these tools, and this is one of what I think will be many examples.
Will people lazily use these tools to churn out “slop”? To hastily write essays about topics they don’t really understand? To put together multi-million-dollar reports for governments? Yes, all of the above. But will this enable people who work intelligently with these technologies to learn more, to understand more deeply? I absolutely believe so.
AI & Environmental Impact
Electricity Use of AI Coding Agents
Throughout 2025, we got better estimates of electricity and water use of AI chatbots. There are all sorts of posts I could cite on this topic, but a favorite is this blog post from Our World in Data’s Hannah Ritchie. On the water front:
The average American uses 1600 liters of water per day, so even if you make 100 prompts per day, at 2ml per prompt, that’s only 0.01% of your total water consumption. Using a shower for one second would use far more.
Generally, these analyses guide my own thinking about the environmental impacts of my individual usage of LLMs; if I’m interested in reducing my personal carbon footprint, I’m much better off driving a couple miles less a week or avoiding one flight each year. This is indeed the right conclusion for users of chat interfaces like chatgpt.com or claude.ai.
Source: Electricity Use of AI Coding Agents – Simon P. Couch
For a while now, very serious concerns have been raised about the energy and water use of large language models. Given we face a genuine climate crisis, it’s certainly important to consider this issue. But we’ve seen very few studies of it, and certainly very few disinterested ones.
This suggests the concerns are somewhat overstated, although it must be noted that these figures relate to inference; training, at least for now, seems to consume the majority of the energy associated with large language models.
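For what it’s worth, the arithmetic in the quoted passage checks out:

\frac{100 \times 2\ \text{ml}}{1600\ \text{l}} = \frac{200\ \text{ml}}{1{,}600{,}000\ \text{ml}} = 0.0125\%

which rounds to roughly the 0.01% quoted.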
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.