Back once again after a brief break, your weekly reading from Web Directions
Well, I’ve had a couple of weeks off from posting after a very good run this year. In my defence, we did run three conferences in ten days at the back end of November, and that tends to be all-consuming. But there’s more…
AI Engineer comes to Melbourne as part of AI Week
We’re also very excited to announce that we will be bringing the renowned and amazing AI Engineer Conference, pioneered by Shawn ‘Swyx’ Wang, to Melbourne June 3rd and 4th.
In fact, we’re so excited we’ve created the umbrella event AI Week, which will also include a new collaboration with UX Australia: ai × design, a one-day conference focused on the intersection of design practice and AI.
If you’re keen to host an event the first week of June in Melbourne focused on all things AI, let us know and let’s talk.
The CFPs are open for both AI Engineer and ai × design, as are registrations, so if you have training budget for 2025 still to be allocated, we’d love you to consider these events. Every edition of AI Engineer has sold out, so start planning!
UX Australia: it’s Sydney’s turn this year!
And that’s not all. Once again, we’ll be collaborating with UX Australia to bring their long-running conference to Sydney in August, plus we’ll be dusting off their Design Research conference to run the day before the main event. We’re calling that week (August 24th–28th) UX Week, and we’ll be working with folks in the industry to host other events to make it even more worth your while getting to Sydney in late August.
The CFPs are open for both UX Australia and Design Research, as are registrations, so if you have training budget for 2025 still to be allocated, we’d love to see you there.
Alright, now with my excuses out of the way, here are a whole bunch of great articles that I’ve been gathering the last few weeks.
AI & Software Engineering
Three AI customisation concepts

I’ve narrowed this down to three concepts that feel foundational—the ones I keep coming back to when I’m trying to understand how AI systems actually work, or when I’m explaining MCP to someone, or when I’m making decisions about how to build something. These three form a kind of progression: understanding how AI represents meaning, then how you customize it, then how you make customization practical.
The patterns between these concepts interest me as much as the concepts themselves. How embeddings enable RAG, how LoRA makes fine-tuning accessible, how choosing between RAG and fine-tuning depends on whether you’re teaching facts or behavior. These connections make the whole landscape easier to navigate.
Source: Three AI customisation concepts by Anna McPhee
An excellent article exploring metaphors we can use to understand how to work with large language models. Anna gave a couple of great talks at our Developer Summit and then Enqueue in recent weeks.
How AI Is Redefining Software Engineering with Annie Vella, Distinguished Engineer
“Many of us became software engineers because we found our identity in building things. Not managing things. Not overseeing things. Building things. With our own hands, our own minds, our own code. But that identity is being challenged,” wrote Annie in March 2025 in her blog post, The Software Engineering Identity Crisis, which has since been read by tens of thousands of engineers and got them thinking about this (re)defining moment in what their role means.
In this conversation, Annie dives deep into the software engineer’s identity crisis, the rise of AI agents, and how engineers can prepare for a rapidly evolving future.
Source: How AI Is Redefining Software Engineering with Annie Vella, Distinguished Engineer | Aviator
When Annie Vella’s article first came out, we referenced it here, and it’s since been very widely read by software engineers everywhere. In this conversation, Annie explores these ideas more deeply. Listen or read a great detailed summary.
There’s a strange tension when it comes to thinking about software engineering and large language models. On the one hand, we hear a lot about how code is not the bottleneck, and how there will ultimately be limited value in code generation tools because writing code isn’t the thing we need to speed up. On the other hand, as Annie identifies here, it’s often a deep part of software engineering’s identity that we write code.
I think this apparent contradiction points to something deeper about the transformation occurring in software engineering: it’s complex, and seemingly contradictory things can both be true simultaneously.
Why Software Development Fell to AI First
I find it’s always important to examine why you made a mistake. The worst mistake I ever made was reading “Bitcoin: A Peer-to-Peer Electronic Cash System” in January of 2009, thinking “cool math toy, maybe someone will turn it into something useful someday” and moving on. My most recent mistake, however, was not realizing that software development would be the first field to be transformed by agentic AI. I always assumed it would be the last. Let’s examine why that was.
Source: Why Software Development Fell to AI First
A thoughtful essay on why LLMs work when it comes to software engineering.
A Month of Chat-Oriented Programming
AI Native Dev LLMs software engineering
TL;DR: I spent a solid month “pair programming” with Claude Code, trying to suspend disbelief and adopt a this-will-be-productive mindset. More specifically, I got Claude to write well over 99% of the code produced during the month. I found the experience infuriating, unpleasant, and stressful before even worrying about its energy impact. Ideally, I would prefer not to do it again for at least a year or two. The only problem with that is that it “worked”. It’s hard to know exactly how well, but I (“we”) definitely produced far more than I would have been able to do unassisted, probably at higher quality, and with a fair number of pretty good tests.
Source: A Month of Chat-Oriented Programming – CheckEagle
I recently listened to this Pragmatic Engineer podcast with Flask creator Armin Ronacher (highly recommended), where he talks about how he ardently resisted the use of large language models for software development until he sat down and invested some time in them a few months ago, at which point he became convinced they were the future of how he was going to develop. Here’s something similar from Nick Radcliffe, a fairly outspoken critic of LLMs and chatbots, who spent a month doing chat-oriented programming.
While he found it infuriating and frustrating, he acknowledges it did indeed make him productive. He also details his experience and things that he learned that you might find valuable yourself.
Context Engineering & LLM Development
Context Engineering for Non Engineers

There are three layers of context you can control when using AI through web interfaces like Claude, ChatGPT, or Gemini: System Instructions—Your baseline configuration, Projects—Context that persists for specific work, and Prompts—Specific details for right now. Most people live entirely in Layer 3, never touching the other two. Then they wonder why their results are inconsistent. Think of it like clothing. Most people are using off-the-rack when they could have something tailored. The tailoring isn’t even that hard—you just need to understand where the adjustment points are.
Source: Context Engineering for Non Engineers – Eleganthack
While written for non-engineers, this is a valuable overview of the kinds of contexts that we can use when interacting with large language models from Christina Wodtke.
Writing a good CLAUDE.md
coding agent context engineering LLMs software engineering

The following section provides a number of recommendations on how to write a good CLAUDE.md file following context engineering best practices. Your mileage may vary. Not all of these rules are necessarily optimal for every setup. Like anything else, feel free to break the rules once you understand when & why it’s okay to break them and you have a good reason to do so.
Source: Writing a good CLAUDE.md | HumanLayer Blog
Providing context to coding agents like Claude is an important step in getting the most out of these systems. In Claude’s case this is the CLAUDE.md file; elsewhere it’s an AGENTS.md file. This article looks at a number of patterns and principles that could be valuable in developing and working with these kinds of files.
Don’t Fight the Weights

For context and prompt engineers (and even chatbot users) it’s helpful to be able to recognize when you’re fighting the weights. Here’s some signs you might be fighting the weights: You find yourself threatening or pleading with the model, the model makes the same mistake even as you change the instructions, the model acknowledges its mistake when pointed out then repeats it, the model seems to ignore the few-shot examples you provide, the model gets 90% of the way there but no further, you find yourself repeating instructions several times, you find yourself typing in ALL CAPS.
Source: Don’t Fight the Weights
If you’ve been working with large language models for a while in any sort of non-trivial way, then you’ll likely have run into this situation where you simply cannot get it to produce something that you want it to. A classic example was until relatively recently getting an output in JSON format.
Time and again I’ve run into the issue where I’ve asked a model to produce HTML, only to have it add extraneous content, even when explicitly asked to produce only HTML. But this framing of the challenge really helps us understand what’s going on and how to work around it. Drew Breunig calls it fighting against the weights, and this makes a lot of sense.
AI Agents & MCP
What if you don’t need MCP at all?
AI LLMs MCP software engineering

I’m a simple boy, so I like simple things. Agents can run Bash and write code well. Bash and code are composable. So what’s simpler than having your agent just invoke CLI tools and write code? This is nothing new. We’ve all been doing this since the beginning. I’d just like to convince you that in many situations, you don’t need or even want an MCP server. Let me illustrate this with a common MCP server use case: browser dev tools.
Source: What if you don’t need MCP at all?
While MCPs occupy so much attention right now, they also have significant drawbacks, including expense in terms of tokens, occupying a significant chunk of your context window, and the security concerns Simon Willison has coined the “lethal trifecta.” But what if, in many cases, you don’t actually need an MCP? We can use tools instead. That’s what Mario Zechner explores here.
Minefield Context Protocol
coding agent LLMs MCP software engineering

One of the concepts that are gaining lots of discussion is the “MCP” which stands for Model Context Protocol. Anytime I’d ask what an MCP is, I’d usually hear it described as an API. So then, why isn’t it just called an API? Because it’s not really an API. Sound confusing? You bet! Just like how I learned to code from actually building something, I decided if I was going to truly learn what this thing was, I’d have to build one. So I did and this is how that went.
Source: Minefield Context Protocol
Donnie D’Amato shares his experience of developing with MCP, something that might be valuable in your own learning.
Agent Design Is Still Hard
agents AI LLMs software engineering

TL;DR: Building agents is still messy. SDK abstractions break once you hit real tool use. Caching works better when you manage it yourself, but differs between models. Reinforcement ends up doing more heavy lifting than expected, and failures need strict isolation to avoid derailing the loop. Shared state via a file-system-like layer is an important building block. Output tooling is surprisingly tricky, and model choice still depends on the task.
Source: Agent Design Is Still Hard | Armin Ronacher’s Thoughts and Writings
If you’re considering building your own agent, this comprehensive article by Armin Ronacher will be very useful. You might also find this recent conversation on the Pragmatic Engineer podcast with Armin (which we cover above) to be worth a listen. I definitely did.
AI & Design
Beyond the Machine
I am so tired of hearing about AI. Unfortunately, this is a talk about AI. I’m trying to figure out how to use generative AI as a designer without feeling like shit. I am fascinated with what it can do, impressed and repulsed by what it makes, and distrustful of its owners. I am deeply ambivalent about it all. The believers demand devotion, the critics demand abstinence, and to see AI as just another technology is to be a heretic twice over. Today, I’d like to try to open things up a bit. I want to frame the technology more like an instrument, and get away from GenAI as an intelligence, an ideology, a tool, a crutch, or a weapon.
Source: Frank Chimero · Beyond the Machine
Frank Chimero gives a very nuanced and thoughtful meditation on the journey of AI and its impact on design and the creative endeavour. He frames AI as an instrument, drawing on the work of four renowned musicians and the lessons we might learn from them about how we can work with AI.
How AI Changes Design AMA with Shamus Scott Grubb
A shift is happening in Design. New tools. New workflows. New capabilities. Because as production gets automated, thinking becomes more valuable, and creativity becomes the new premium. That’s why I’m speaking to Shamus Scott Grubb on The Design of Everyday People Livestream. We’re diving deep into: 1. How AI separates design from production, 2. What skills actually matter in this new reality, 3. How to position yourself on the right side.
Source: How AI Changes Design AMA with Shamus Scott Grubb – YouTube
Shamus Scott Grubb talks with Chris Nguyen from UX Playbook about what happens when AI sits between creation and production in design.
Design Thinking for AI: The 5-Stage Framework Every Builder Needs

AI has changed the texture of design. We’re not designing for people alone anymore, we’re designing with/for intelligence. That changes everything. I’m obviously not the only person thinking about how Design frameworks evolve. Recent research has started reframing what this looks like. Adam Fard calls it “AI-First Design Thinking.” Weisz, He, and Muller (2024) propose six design principles for generative AI that move beyond the empathy-prototype-test loop.
Source: Design Thinking for AI: The 5-Stage Framework Every Builder Needs
I don’t think it’s entirely coincidental that so many of this week’s links are about the intersection of AI and design. It’s something I’ve mentioned recently, and we’ve been covering it a lot (so much so that we decided to run a conference all about it in June next year). Software engineers have been thinking about this a lot in the context of their work, but that’s not to say designers aren’t also thinking deeply about it.
New Rules for Enterprise UX and AI
I recently attended the Enterprise UX conference in Amersfoort. The presentations made it very clear that successful AI integration requires big changes in how companies work and how we build systems. The main message was: AI is not just a new tool; it forces us to change our basic rules for design and data.
Source: New Rules for Enterprise UX and AI — jasha.eu
Software engineers have been exploring for several years how best to work with large language models, and what their impact is on how software engineering practice works.
All categories of products, from the big frontier model labs all the way through to early-stage startups, are exploring this space. It’s a question the design field is also asking. Here, from the recent Enterprise UX conference in Amersfoort, are some responses that emerged across the talks.
Generative UI and the Ephemeral Interface

This week, Google debuted their Gemini 3 AI model to great fanfare and reviews. Specs-wise, it tops the benchmarks. This horserace has seen Google, Anthropic, and OpenAI trade leads each time a new model is released, so I’m not really surprised there. The interesting bit for us designers isn’t the model itself, but the upgraded Gemini app that can create user interfaces on the fly. Say hello to generative UI. I will admit that I’ve been skeptical of the notion of generative user interfaces.
Source: Generative UI and the Ephemeral Interface – Roger Wong
One of the things people are speculating about when it comes to generative AI is that perhaps our user interfaces will themselves be generated on the fly, tailored to individual users’ needs. Esteemed folks like the Nielsen Norman Group and Luke Wroblewski have made such suggestions, but others like Roger Wong aren’t so sure.
Emily Campbell – AI UX Deep Dive

In one of the most popular episodes yet, Vitaly Friedman talked about what’s next for AI design patterns. In that episode he frequently referenced Shape of AI which is an incredible database of AI design patterns. So I wanted to get to the source and go deep with the creator Emily Campbell to learn how to design great AI experiences. Because she’s studied AI products more than just about anyone I’ve ever seen.
Source: Emily Campbell – AI UX Deep Dive – YouTube
We referenced The Shape of AI and Emily Campbell’s work cataloguing AI interaction patterns a few months back. Now here’s an in-depth interview with her about her work.
Will AI Agents Kill the Web as We Know It?
AI autonomous agents Design LLMs

The way we interact with the web today is surprisingly manual. Want to book a flight? You’ll probably head to a familiar airline’s website or open Google and type in your dates. If that site also offers hotel and car rental options, great—you might stay and book everything in one place. But more likely, you’re picky. So you go off searching for that perfect boutique hotel or the restaurant you’ve read about. Click by click, tab by tab, you stitch your trip together.
Source: Will AI Agents Kill the Web as We Know It? | Andy Budd
For better and for worse, the web is not simply for humans anymore. The reality is, bots have long been the most important visitors for most websites, in particular Googlebot, which indexes your site and has for decades been the most important source of traffic for most successful websites. Many sites also have APIs, an interface for machines and code rather than for humans. If the promise of agentic AI is real, then while a human might ask an agent to complete a task for them, and that task might involve interacting with your website, the actual interaction won’t be with a human even if it is in service of one. There are many who are aghast at this idea, but it is increasingly a reality. Here Andy Budd reflects on the implications.
Frontend Development
We’re not entirely ignoring our first true love, the web and its technologies. Here are a number of recent interviews and articles I think front-end devs will find very valuable.
Alex Russell on PWAs, App Stores, and Mobile Performance
front end development JavaScript performance PWAs

In this RedMonk conversation, Alex Russell, Partner Product Architect at Microsoft, discusses the state of mobile development, focusing on JavaScript performance, the state of Progressive Web Apps (PWAs), and the impact of major players like Apple (iOS) and Google (Android). They explore the importance of management in addressing web performance issues, the role of web standards in shaping the future, and the implications of AI on web development.
Source: Alex Russell on PWAs, App Stores, and Mobile Performance – RedMonk
Alex Russell has been a frequent speaker at our conferences going back many years; at our CODE conference in 2015 he first talked about the idea of progressive web apps. Alex has had a profound impact on the web, from his work on the early JavaScript framework Dojo, through many years contributing to Chrome, first at Google and now at Microsoft, to his standards work on TC39 and the W3C’s Technical Architecture Group. I know firsthand from many conversations we’ve had that he is a very engaging conversationalist. This is an interview I would highly recommend.
Just build a website

In this entertaining and relatable video, two brothers experience the frustrations of downloading new apps and the consequences of not having them. They discuss the convenience and usability of apps, the abundance of apps on their phones, and the growing trend of relying on apps for everyday tasks. They also explore alternative solutions and find a balance between technology and simplicity.
Source: Instagram
Some years ago now, my teenage daughter introduced me to fairbairnfilms, the TikTok/Instagram account of two Australian brothers who do quite amusing little skits on popular culture. So why am I citing them here? Well, just recently they did a skit about the frustration of everything requiring a mobile app, and how apps should just be websites. Perhaps I should email them and tell them exactly why this happens.
It’s amusing, if not entirely safe for work, given some of their expletives. I passed it around to folks I know in the industry all over the world, who all shared my amusement.
Storage in the browser
cookies indexedDB localstorage offline webstorage
What are the ways to persist data in the browser?
Source: Storage in the browser | Volodymyr’s website
From cookies to IndexedDB and more, there are a growing number of ways to persist data in the browser. This is an excellent overview of these.
Your URL Is Your State
frontend development web platform
This got me thinking: how often do we, as frontend engineers, overlook the URL as a state management tool? We reach for all sorts of abstractions to manage state such as global stores, contexts, and caches while ignoring one of the web’s most elegant and oldest features: the humble URL.
Source: Your URL Is Your State
Well over a decade ago, when developing some tools for exploring then relatively new CSS features like gradients, I hit upon the idea of using the URL as a way of maintaining and sharing the state of those tools. It’s continued to surprise me that more developers don’t do this, so this article on using URLs for state might inspire some to explore it further.
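The pattern is simple enough to sketch in a few lines of JavaScript; the parameter names below are hypothetical examples for a gradient tool, not taken from the article:

```javascript
// Serialise a tool's state into a query string so any
// configuration can be bookmarked or shared as a plain link.
function encodeState(state) {
  return `?${new URLSearchParams(state)}`;
}

// Restore the state from location.search (or any query string).
function decodeState(search) {
  return Object.fromEntries(new URLSearchParams(search));
}

const url = encodeState({ angle: "45deg", from: "#ff0000", to: "#0000ff" });
// In a browser you'd sync this to the address bar with
// history.replaceState(null, "", url), so reloading and sharing both work.
const restored = decodeState(url);
```

Because the entire state round-trips through the query string, the back button, bookmarks, and pasted links all become free features of the tool.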
Using the Web Monetization API for fun and profit

Web Monetization gives publishers more revenue options and audiences more ways to sustain the content they love. Support can take many forms: from a one-time contribution to a continuous, pay-as-you-browse model. It all flows seamlessly while people engage with the content they love. Publishers earn the moment someone engages, while audiences contribute in real time, using a balance they control. I encourage you all to give it a try! Install the extension that polyfills the proposed Web standard, get a wallet, and then connect it to the extension.
Source: Using the Web Monetization API for fun and profit
Somewhere between subscriptions and advertising lies a business model that can enable new kinds of content and services on the web. Web monetization is a W3C standard that may enable just such an innovation.
V7: Video Killed the Web Browser Star

So I thought I knew as much as I needed to know about the HTML video element, and as usual, I was wrong.
Source: V7: Video Killed the Web Browser Star | Rob Weychert
What don’t you know that you don’t know about HTML’s video element? Rob Weychert thought he knew the element well. Turns out there was still more to know, and he shares it here.
Start implementing view transitions on your websites today
CSS CSS Animation View Transitions
The View Transition API allows us to animate between two states with relative ease. I say relative ease, but view transitions can get quite complicated fast. A view transition can be called in two ways; if you add a tiny bit of CSS, a view transition is initiated on every page change, or you can initiate it manually with JavaScript.
Source: Start implementing view transitions on your websites today – Piccalilli
Yes, more on View Transitions: the API we should have had about a decade ago, which might have saved us from a lot of rabbit holes and dead ends. Here you can get up and running with View Transitions with Cyd Stumpel, who gave a fantastic talk at CSS Day on the possibilities of View Transitions that I highly recommend.
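If you’re wondering what that “tiny bit of CSS” looks like, the cross-document opt-in is a single at-rule (the `.hero` selector and name below are just illustrative):

```css
/* Opt both pages into cross-document view transitions:
   with this in place, same-origin navigations animate
   with a default cross-fade. */
@view-transition {
  navigation: auto;
}

/* Optionally give an element a view-transition-name so it
   animates independently of the page-level cross-fade. */
.hero {
  view-transition-name: hero;
}
```

For finer control, or for single-page apps, you initiate transitions manually from JavaScript instead, as the article explains.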
Start using Scroll-driven animations today!
animation CSS CSS Animation scroll-driven animation

To celebrate scroll-driven animations finally landing in Safari 26, here are some things you probably want to know before using them.
Source: Start using Scroll-driven animations today! | Blog Cyd Stumpel
Scroll-driven animations are something that we’ve long needed JavaScript to do, but that’s now changing with native support for scroll-driven animation in browsers here or coming soon. Learn more from the mega-talented Cyd Stumpel.
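As a small taste of the feature (the selector and keyframes here are illustrative, not from Cyd’s post), the animation is driven by the element’s position in the viewport rather than by the clock:

```css
/* Fade and raise each card as it scrolls into view,
   with no JavaScript required. */
@keyframes fade-in {
  from {
    opacity: 0;
    transform: translateY(2rem);
  }
  to {
    opacity: 1;
    transform: none;
  }
}

.card {
  animation: fade-in linear both;
  /* Tie progress to the element's visibility in the scrollport. */
  animation-timeline: view();
  /* Play only while the element is entering the viewport. */
  animation-range: entry 0% entry 100%;
}
```

As the article notes, now that Safari 26 has joined Chrome in shipping this, it’s worth learning the gotchas before reaching for a JavaScript scroll library.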
Inlining Critical CSS: Does It Make Your Website Faster?

Inlining critical CSS can make your website super fast. But it’s not always easy to implement, and there are some downsides. In this article we take a look at how you can optimize stylesheets on your website, and take a look at some common challenges.
Source: Inlining Critical CSS: Does It Make Your Website Faster? | DebugBear
We’ve long been advised that inlining critical CSS is a performance-enhancing technique. But there are definitely some gotchas there. So the folks from DebugBear give us more detail on what to do and what not to do.
Lots to shout about in Quiet UI

As President of Web Components, it’s my duty to publicly comment on every Web Component library and framework that exists. Today I’m taking a look at Quiet UI, a new source-available web component library soft-launched by Cory LaViska, the creator of Shoelace and Web Awesome.
Source: Lots to shout about in Quiet UI – daverupert.com
When Quiet UI came out a few weeks ago, we gave it a mention here. Looks like an amazing and really valuable library of web components without any dependencies. Here, Dave Rupert looks at what it has to offer in more detail. Highly recommend you check it out.
GraphQL’s third wave: Why the future of AI needs an API of intent
AI Native Dev graphql software engineering
Every technology with real staying power goes through waves of adoption. The first wave attracts the early experimenters—the ones who can sense the future before it’s evenly distributed. The second picks up the enterprises that’ve felt enough pain to seek out something better. The third comes when the rest of the world catches up, usually because the ground itself has shifted and the old tools can no longer do the job. GraphQL is now entering that third wave. Most people still describe GraphQL as an alternative to REST. That was true in 2015. What’s happening today is different. In the era of LLMs and autonomous agents, GraphQL isn’t just a nicer API; it has quietly become the API layer AI was waiting for.
Source: GraphQL’s third wave: Why the future of AI needs an API of intent | Hygraph
An argument that GraphQL is the right API abstraction for AI-based applications. This piece looks at the adoption of GraphQL over the last decade or so and why that makes it the perfect choice for AI applications.
The Performance Inequality Gap, 2026

Meanwhile, sites are ballooning. The median mobile page is now 2.6 MiB, blowing past the size of DOOM (2.48 MiB) in April. The 75th percentile site is now larger than two copies of DOOM, and P90+ sites are more than 4.5x larger, and sizes at each point have doubled over the past decade. Put another way, the median mobile page is now 70 times larger than the total storage of the computer that landed men on the moon.
Source: The Performance Inequality Gap, 2026 – Infrequently Noted
Alex Russell updates his Performance Inequality Gap research for 2026, looking at both where we are in terms of median devices and networks, and the size of the pages that are now commonly being delivered.
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.