A quick note–the Australian Financial Year ends June 30, so if you have training budget to spend, please keep in mind our upcoming conferences and our streaming platform Conffab. We've got you covered whatever your budget, from $20 a month and up.
I had hoped over the last few weeks to keep up my unbroken streak for the year of weekend reading posts, but a trip halfway around the world for CSS Day in Amsterdam, then three days of our own conferences the following week in Melbourne, was just a bit too much.
But today I’m back with a bumper crop from the last few weeks.
Speaking of CSS Day–what an incredible event. CSS has been one of the most significant technologies in my life for 30 years now, and to be in a room full of folks dedicated to it, including giants of the field on stage, was a huge privilege. To speak there was a genuine professional high point.
The recordings of each day are available now on Conffab for just $149 (€99), with the fully edited individual videos–complete with our secret sauce of interactive slides, transcripts, chapters, and more–to follow in a few weeks.
Now, on with your belated reading–we have a lot to catch up on!
CSS Cascade Layers Vs. BEM Vs. Utility Classes: Specificity Control
CSS is wild, really wild. And tricky. But let's talk specifically about specificity. When writing CSS, it's close to impossible that you haven't faced the frustration of styles not applying as expected — that's specificity. You applied a style, it worked, and later, you try to override it with a different style and… nothing, it just ignores you. Again, specificity.
Sure, there's the option of resorting to !important flags, but as all developers before us have learned, that's risky and discouraged. It's way better to fully understand specificity than go down that route, because otherwise you wind up fighting your own !important styles.
There are three fundamentals of CSS that are subtle but powerful–a combination that should set our spider senses tingling: inheritance (how style flows through the document, from ancestors to descendants), the cascade (how style flows through our style sheets, from lower to higher precedence), and specificity itself (the rules that decide, when two or more rules apply to the same element, which one takes effect).
For many years developers have wrangled with these–sometimes via methodologies like BEM and OOCSS, and sometimes with more brute force, using utility classes or CSS-in-JS. But understanding these aspects of CSS, their interplay, and how they enable web design is a key capability of front-end engineering. Here Victor Ayomipo looks at BEM, utility classes and the newer cascade layers, and how they can help us better manage specificity and our CSS architecture.
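To make the specificity point concrete, here's a minimal sketch (my own illustration, not taken from Victor's article) of how cascade layers let a later, low-specificity layer override an earlier, higher-specificity one:

```css
/* Declare layer order up front: later layers beat earlier ones,
   regardless of selector specificity inside them. */
@layer reset, components, utilities;

@layer reset {
  button { all: unset; }
}

@layer components {
  /* A fairly specific component rule… */
  .card .button-primary { background: navy; color: white; }
}

@layer utilities {
  /* …is still overridden by this single-class utility,
     because the utilities layer comes last in the layer order. */
  .bg-accent { background: rebeccapurple; }
}
```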
This is a written adaptation of my talk at CSS Day 2025. It was a lovely event, but I realize life is complicated and not everyone can make it to events like this. There are videos up paywalled at conffab.com. I figure this written version can make my points as well.
This was a great talk on scope in CSS by Chris Coyier at CSS Day. Right now the recordings of the full streams from each day are on Conffab; in a few weeks we'll have the individual talks available too.
Not just slow – awful. Bloated, fragile, over-engineered disasters. They load slowly, render erratically, and hide their content behind megabytes of JavaScript. They glitch on mobile. They frustrate users and confuse search engines. They’re impossible to maintain. And somehow, we’re calling this progress.
This is a long, detailed, very opinionated look at the modern web and the mess we have made of it, with thoughts as to why. While his conclusions will infuriate and irritate many, regardless of your position on JavaScript frameworks Alderson poses a series of questions I think it is important to consider. It reminds me of a talk I gave at FFConf in 2012, at the dawn of the web app/framework era–you can still watch it here.
Baseline Newly Available: Stay on Top of New Web Features
Mary Branscombe looks at Baseline Newly Available, and at how to think about when and how to incorporate newer web platform features into your sites and apps.
Thesis: Proponents of utility CSS * (presentational HTML) have never performed a CSS-only redesign †.
One could be tempted now to try to state the opposite for proponents of strict separation of concerns between structure, presentation, and behavior—but they haven’t usually, either.
The last decade or so has seen the emergence of competing paradigms for managing CSS code. From the late 1990s to around 2012, the central tenet of 'separation of concerns' held sway: HTML was for structured semantic content, CSS for appearance, and JavaScript for additional interactivity. Then something happened in the early teens. In response to the arrival of native iPhone and Android apps, the web aimed to compete with the interaction patterns of those platforms, and RESTful, stateless, multi-page architectures gave way to the Single Page Application, where rendering and state management were handled on the client.
The concept of separation of concerns gave way to CSS-in-JS approaches and to utility CSS, now best known through Tailwind. Here Jens Oliver Meiert proposes that each reflects a different philosophy, and that both are fine.
The preferences seem based on two different conclusions: Life isn’t easy, so let’s make it as easy as we can, vs.—life isn’t easy, so let’s deal with it. And that’s fine.
…I must admit: I didn’t know a lot about color in CSS (I still used rgb(), which apparently isn’t what cool people do anymore), so it has been a fun learning experience. One of the things I noticed while trying to keep up with all this new information was how long the glossary of color goes, especially the “color” concepts. There are “color spaces,” “color models,” “color gamuts,” and basically a “color” something for everything.
The recently revitalised CSS-Tricks has been on a roll with articles about color in CSS, where a lot has been happening–from new color syntaxes and spaces to color functions. If you're feeling like you're not keeping up, well, you aren't alone. Here Juan Diego Rodríguez has a great primer to get you up to speed on all things new in color in CSS.
However, color in CSS can be a bit hard to fully understand since there are many ways to set the same color, and sometimes they even look the same, but underneath are completely different technologies. That’s why, in this guide, we will walk through all the ways you can set up colors in CSS and all the color-related properties out there!
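As a rough illustration of just how many ways there now are to write (approximately) the same color–the values below are mine and only approximate, not from the article:

```css
/* Roughly the same blue, written several ways. The first three are
   confined to sRGB; oklch() and color() can describe wider gamuts. */
.hex   { color: #3b82f6; }
.rgb   { color: rgb(59 130 246); }                /* modern space-separated syntax */
.hsl   { color: hsl(217 91% 60%); }
.oklch { color: oklch(62% 0.19 259); }            /* perceptually more uniform */
.p3    { color: color(display-p3 0.3 0.5 0.95); } /* wide-gamut Display P3 */
```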
OddBird’s color tool not only checks contrast ratios, but supports the new CSS color formats and spaces.
For years designers and developers were limited to colors in the sRGB colors space, using formats like hexadecimal, RGB, and HSL. As display technology progressed, so too has CSS, and we have access to additional color spaces and wider gamuts. These advances led us to build OddContrast, a color editing and testing tool that handles our new world of modern color formats.
There are quite a few color contrast checkers, but many, if not most, focus on hexadecimal and the sRGB color space.
But CSS has introduced quite a few new color formats and spaces in recent years that are now widely supported, so OddBird (home of speaker Miriam Suzanne) has developed OddContrast, a tool to help with color contrast and more for these new formats and spaces.
Exploring the CSS contrast-color() Function… a Second Time | CSS-Tricks
In many countries, web accessibility is a human right and the law, and there can be heavy fines for non-compliance. Naturally, this means that text and icons and such must have optimal color contrast in accordance with the benchmarks set by the Web Content Accessibility Guidelines (WCAG). Now, there are quite a few color contrast checkers out there (Figma even has one built-in now), but the upcoming contrast-color() function doesn’t check color contrast, it outright resolves to either black or white (whichever one contrasts the most with your chosen color).
The new CSS contrast-color() function returns either black or white, depending on which better contrasts with an input color (though the result is not guaranteed to meet WCAG color contrast requirements).
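A minimal sketch of how it reads in practice (my own example, and note that browser support is still limited at the time of writing):

```css
/* contrast-color() resolves to black or white, whichever contrasts
   more with the given color, handy for text over a themable background. */
.badge {
  --badge-bg: oklch(55% 0.2 145);
  background: var(--badge-bg);
  color: contrast-color(var(--badge-bg));
}
```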
How JPEG Became the Internet’s Image Standard – IEEE Spectrum
For roughly three decades, the JPEG has been the World Wide Web’s primary image format. But it wasn’t the one the Web started with. In fact, the first mainstream graphical browser, NCSA Mosaic, didn’t initially support inline JPEG files—just inline GIFs, along with a couple of other formats forgotten to history. However, the JPEG had many advantages over the format it quickly usurped.
JPEG is the image format that seemingly will not die. Long after its contemporary, the early web image format GIF, was consigned to history, JPEG lives on (though in 2025, really, we should all be using WebP). Here's a history of the format whose name we at least know how to pronounce!
Neurodivergent needs are often considered as an edge case that doesn’t fit into common user journeys or flows. Neurodiversity tends to get overlooked in the design process. Or it is tackled late in the process, and only if there is enough time.
But people aren’t edge cases. Every person is just a different person, performing tasks and navigating the web in a different way. So how can we design better, more inclusive experiences that cater to different needs and, ultimately, benefit everyone? Let’s take a closer look.
Designing for neurodiversity means recognizing that people aren’t edge cases but individuals with varied ways of thinking and navigating the web. Here Vitaly Friedman explores how we can create more inclusive experiences that work better for everyone.
What I Wish Someone Told Me When I Was Getting Into ARIA
If you haven't encountered ARIA before, great! It's a chance to learn something new and exciting. If you have heard of ARIA before, this might help you better understand it or maybe even teach you something new! These are all things I wish someone had told me when I was getting started on my web accessibility journey. This post will: provide a mindset for how to approach ARIA as a concept, debunk some common misconceptions, and provide some guiding thoughts to help you better understand and work with it. It is my hope that in doing so, this post will help make an oft-overlooked yet vital corner of web design and development easier to approach.
Writing asynchronous code in JavaScript used to come with a limitation: the await keyword could only be used inside an async function. That changed when ES2022 introduced top-level await: a modern ES module feature that enables new patterns for asynchronous code at the module level.
Let me know if this sounds familiar: you’re deep into debugging or trying to access a deeply nested property in a JavaScript object. Suddenly you see this classic error:
TypeError: Cannot read property 'x' of undefined
This is a common pain point, especially when working with API responses, optional fields, or dynamic data structures.
Fortunately, JavaScript has a powerful feature to help with this: optional chaining (?.). Optional chaining has saved me from more than a few headaches, and I’m willing to bet it’ll do the same for you.
JavaScript will soon have a new feature that many developers are eagerly awaiting. The feature is the Temporal API which will fix many problems and inconveniences of the old Date object.
Learn how Declarative Web Push can help you deliver notifications more reliably. Find out how to build on existing standards to be more efficient and transparent by design while retaining backwards compatibility with original Web Push.
Apple relatively recently introduced Declarative Web Push, a way to push web notifications to your users' devices declaratively, meaning there's no need to use JavaScript to send notifications from your web apps.
Printing the web: making webpages look good on paper
A huge part of building for the web is making experiences responsive. Usually, we think of responsive design in terms of making sites adapt to different viewport sizes, but what about being responsive to different mediums too?
Buried away within CSS lies potential for transforming a jumbled, ink-draining mess into a clean, sleek, readable document. But much like writing good error messages, print stylesheets are frequently a neglected afterthought, leading to frustrated users and wasted resources.
From just about the beginning, CSS has supported styles for the print media type, though browser support was long patchy at best. Modern browsers, though, support print well (if not perfectly), and it's something I've used for decades when creating printed material for my in-person workshops, and in situations where folks still prefer or require PDFs.
Here’s a great guide to get the most from print CSS.
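If you've never written one, a print stylesheet can be as simple as this sketch (the selectors are placeholders for whatever your page actually uses):

```css
@media print {
  /* Hide interactive chrome that makes no sense on paper */
  nav, footer, button, video, .ad { display: none; }

  body {
    font: 11pt/1.4 Georgia, serif; /* points are a print-friendly unit */
    color: #000;
    background: #fff;
  }

  /* Paper isn't clickable, so print link destinations after the link text */
  a[href^="http"]::after { content: " (" attr(href) ")"; }

  /* Avoid awkward page breaks */
  h2, h3 { break-after: avoid; }
  figure, table { break-inside: avoid; }
}

/* The page box itself: sheet size and margins */
@page { size: A4; margin: 2cm; }
```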
Why Silicon Valley CTOs Are Secretly Moving Away from React
“React isn’t failing because it’s bad. It’s failing because it succeeded too well.”
Those words from a CTO at one of Silicon Valley's unicorn companies stuck with me. We were having drinks at a tech leadership meetup in Palo Alto — the kind where people speak more freely after the second round. The conversation had turned to frontend architecture, and I'd noticed a pattern in these closed-door conversations that wasn't reflected in public discourse. While React still dominates job postings, conference talks, and Twitter debates, a quiet shift is happening behind the scenes at many top tech companies. CTOs and engineering leaders are questioning their long-term commitment to React and exploring alternatives — often without public announcements.
It's merely anecdotal evidence, but it aligns with what critics of React have been saying for some time. See Jono Alderson's piece, also rounded up this week.
Like 'em or loathe 'em, whether you're showing an alert, a message, or a newsletter signup, dialogue boxes draw attention to a particular piece of content without sending someone to a different page. In the past, dialogues relied on a mix of divisions, ARIA, and JavaScript. But the HTML dialog element has made them more accessible and style-able in countless ways.
So, how can you take dialogue box design beyond the generic look of frameworks and templates? How can you style them to reflect a brand’s visual identity and help to tell its stories? Here’s how I do it in CSS using ::backdrop, backdrop-filter, and animations.
We've covered the newish HTML dialog element (and the associated Popover API) quite a bit here at Conffab. In this talk Andy Clarke explores some creative ways to use and style it.
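As a small taste of the kind of thing Andy covers (this sketch is mine, not from the talk), the backdrop behind a modal dialog is its own pseudo-element and can be styled and animated independently:

```css
dialog {
  border: none;
  border-radius: 0.75rem;
  padding: 2rem;
  box-shadow: 0 1rem 3rem rgb(0 0 0 / 0.3);
}

/* The ::backdrop sits behind a dialog opened with showModal() */
dialog::backdrop {
  background: rgb(0 0 0 / 0.4);
  backdrop-filter: blur(4px); /* softly blur the page behind the dialog */
}

/* A simple entrance animation each time the dialog opens */
dialog[open] {
  animation: dialog-in 0.25s ease-out;
}

@keyframes dialog-in {
  from { opacity: 0; transform: translateY(1rem); }
  to   { opacity: 1; transform: none; }
}
```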
More and more individuals and teams are reflecting on how they work with AI, like Atharva Raykar at nilenso. I find these reflections really valuable.
Malleable software: Restoring user agency in a world of locked-down apps
The original promise of personal computing was a new kind of clay—a malleable material that users could reshape at will. Instead, we got appliances: built far away, sealed, unchangeable. When your tools don’t work the way you need them to, you submit feedback and hope for the best. You’re forced to adapt your workflow to fit your software, when it should be the other way around.
In this essay, we envision malleable software: tools that users can reshape with minimal friction to suit their unique needs. Modification becomes routine, not exceptional. Adaptation happens at the point of use, not through engineering teams at distant corporations.
From being 'bicycles for the mind' and tools we could tinker with and adapt, computers have become black-box appliances over which we have little control, and we have become the servants of the software we use.
The folks at Ink & Switch here reimagine the way we might work with our software, making it more malleable.
the six-month recap: closing talk on AI at Web Directions, Melbourne, June 2025
Welcome back to our final session at WebDirections. We’re definitely on the glide path—though I’m not sure if we’re smoothly landing, about to hit turbulence, or perhaps facing a go-around. We’ll see how it unfolds. Today, I’m excited to introduce Geoffrey Huntley.
Geoff Huntley closed our recent Code 25 conference (stream recording available to Premium members, individual video coming soon for Pro members) with his thoughts on the impact of LLMs on the practice of software engineering. Here’s the script of his fantastic talk.
Imo fair to say that software is changing quite fundamentally again. LLMs are a new kind of computer, and you program them *in English*. Hence I think they are well deserving of a major version upgrade in terms of software.
This talk Andrej Karpathy gave a couple of weeks ago got a lot of attention before any video emerged, and now you can watch it. Karpathy has quipped that 'the hottest new programming language is English', and here he fleshes out that thesis.
AI-assisted coding for teams that can’t get away with vibes
AI should be adopted by serious engineering teams that want to build thoughtful, well-crafted products. This requires skillful usage of these tools. Our obsession with building high-quality software for over a decade has driven us to figure out how this new way of building can result in better products.
Building with AI is fast. The gains in velocity are important, because when harnessed correctly, it allows teams to tighten feedback loops with users faster and make better products.
Yet, AI tools are tricky to use. Hold it wrong, and you can generate underwhelming results, or worse still, slow down your velocity by drowning your project in slop and technical debt.
This living playbook is based on our experience working with AI tools in the messy trenches of production software, where no one can afford to get away with vibes. I hope other teams can learn and benefit from our findings.
AI coding environments have evolved rapidly, from simple chat-based prompting, to retrieval-augmented generation (RAG), and more recently to autonomous agents. Each step has improved output quality, but also introduced new workflow patterns. Agent-based coding, for instance, often means longer execution cycles: the agent works autonomously for a while before returning for human feedback.
This shift resembles the transition from a synchronous for-loop to an asynchronous, event-driven architecture. Instead of one AI working step-by-step, we now have multiple parallel coding agents operating independently and reporting back—bringing both speed and complexity.
AI is reshaping the way we build software, shifting from code-centric to spec-centric development, where developers define what they want, and AI determines how to achieve it. But how do we get there? In this session, we'll explore what AI Native Development means, why it's worth pursuing, and how we can apply lessons from cloud-native and DevOps transformations to make it a reality. We'll also look at the practical side, understanding how the craft of prompt engineering can help us refine structured specifications to improve results. By experimenting with different prompting techniques and validation methods, you'll gain actionable strategies for guiding AI tools more effectively.
An exploration of 'AI Native Development' and how LLMs are impacting the practice of software engineering. Around an hour, but an excellent overview.
We stress-tested 16 leading models from multiple developers in hypothetical corporate environments to identify potentially risky agentic behaviors before they cause real harm. In the scenarios, we allowed models to autonomously send emails and access sensitive information. They were assigned only harmless business goals by their deploying companies; we then tested whether they would act against these companies either when facing replacement with an updated version, or when their assigned goal conflicted with the company’s changing direction.
As MCP is rapidly adopted, the security threats posed by LLMs are receiving more and more attention. Here Anthropic outlines recent research into the security risks posed by various models.
Design Patterns for Securing LLM Agents against Prompt Injections
As long as both agents and their defenses rely on the current class of language models, we believe it is unlikely that general-purpose agents can provide meaningful and reliable safety guarantees.
This leads to a more productive question: what kinds of agents can we build today that produce useful work while offering resistance to prompt injection attacks? In this section, we introduce a set of design patterns for LLM agents that aim to mitigate — if not entirely eliminate — the risk of prompt injection attacks. These patterns impose intentional constraints on agents, explicitly limiting their ability to perform arbitrary tasks.
There’s a new breed of GenAI Application Engineers who can build more-powerful applications faster than was possible before, thanks to generative AI. Individuals who can play this role are highly sought-after by businesses, but the job description is still coming into focus. Let me describe their key skills, as well as the sorts of interview questions I use to identify them.
Skilled GenAI Application Engineers meet two primary criteria: (i) They are able to use the new AI building blocks to quickly build powerful applications. (ii) They are able to use AI assistance to carry out rapid engineering, building software systems in dramatically less time than was possible before. In addition, good product/design instincts are a significant bonus.
Some of the smartest people I know share a bone-deep belief that AI is a fad — the next iteration of NFT mania. I’ve been reluctant to push back on them, because, well, they’re smarter than me. But their arguments are unserious, and worth confronting. Extraordinarily talented people are doing work that LLMs already do better, out of spite.
All progress on LLMs could halt today, and LLMs would remain the 2nd most important thing to happen over the course of my career.
It’s been a long time coming, but we finally have some promising LLMs to try out which are trained entirely on openly licensed text!
EleutherAI released the Pile four and a half years ago: “an 800GB dataset of diverse text for language modeling”. It’s been used as the basis for many LLMs since then, but much of the data in it came from Common Crawl—a crawl of the public web which mostly ignored the licenses of the data it was collecting.
While we've long been largely positive about the impact of LLMs here at Conffab, we've also not ignored some significant challenges with the technology–including the highly problematic intellectual property aspects of how these models have often been trained.
So it's encouraging to see significant models now trained on openly licensed data, as Simon Willison details here for The Common Pile v0.1…
EleutherAI's successor to the original Pile, in collaboration with a large group of other organizations with whom they have been "meticulously curating an 8 TB corpus of openly licensed and public domain text for training large language models".
I wonder if one of the reasons I'm finding LLMs so much more useful for coding than a lot of people that I see in online discussions is that effectively all of the code I work on has automated tests. I've been trying to stay true to the idea of a Perfect Commit – one that bundles the implementation, tests and documentation in a single unit – for over five years now. As a result almost every piece of (non vibe-coding) code I work on has pretty comprehensive test coverage. This massively derisks my use of LLMs. If an LLM writes weird, convoluted code that solves my problem I can prove that it works with tests – and then have it refactor the code until it looks good to me, keeping the tests green the whole time.
It sometimes feels that those critical of using LLMs for software development, or skeptical about their value, have limited real-world experience of using them. Perhaps they gave them a shot, weren't impressed with what they produced, and left it at that.
This from Simon Willison resonates with me–you need to work with these systems, learn their strengths and weaknesses and develop your own workflows and frameworks to get the best from them.
I remember infuriating days in early 2023 going round and round the mulberry bush (this sometimes still happens, but far less so now than a couple of years ago) as a model would confidently produce something that didn't work, correct it after prompting with another solution that didn't work, correct that, again not working, then go back to the original suggestion.
There were plenty of times when I knew I could have written the code more quickly, and lamented that I hadn't just done it myself.
But that process helped me get far better results in the long term (injecting your expert knowledge will often break out of this cycle).
So learn from folks like Simon who have put in a huge amount of effort, and who generously share what they have learned. This technology is not going away.
Can you break up your own job into a set of well-defined tasks such that if each of them is automated, your job as a whole can be automated? I suspect most people will say no. But when we think about other people’s jobs that we don’t understand as well as our own, the task model seems plausible because we don’t appreciate all the nuances.
Simon Willison quotes Arvind Narayanan citing a recent New York Times article on the impact of AI on radiology. AI was long predicted to significantly impact the professional prospects of radiologists, but it seems that's not been the case. My instinct, having worked with these technologies extensively for some years now, both for software engineering and product development, aligns with this observation.
Critical questions for design leaders working with artificial intelligence
Forty design leaders from around the world gathered at L'Alliance New York to shape a shared vision for the future of design leadership in an AI world.
AI is a ground-breaking technology, providing unforeseen capability and opportunity. But it is not without controversy, both ethically and technologically. AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.
Leading Design recently brought together a group of design leaders to consider the impact of generative AI on design practice. They pose 40 questions for designers and design leaders to consider in applying and working with generative AI.
The haunting question facing every AI product team: what can you build today that won’t be subsumed by tomorrow’s model?
A good chunk of the first wave of AI-powered products got crushed by this dynamic. They invested significant efforts to build, tune, and test prompts against their specific domain. At the time (circa ChatGPT 3.5), models were powerful but picky. Endlessly tweaking prompts in your pet domain was worthwhile and enabled brand new applications.
But then the GPT-4 level models arrived and nailed many of these challenges with one casual prompt.
We're still very much focused on the technology of LLMs for the most part. But the question of what products built with these technologies will have lasting value is a much more challenging (and, for most folks, more interesting) one.
Many companies have an Engineer:PM ratio of, say, 6:1. (The ratio varies widely by company and industry, and anywhere from 4:1 to 10:1 is typical.) As coding becomes more efficient, I think teams will need more product management work (as well as design work) as a fraction of the total workforce. Perhaps engineers will step in to do some of this work, but if it remains the purview of specialized Product Managers, then the demand for these roles will grow.
We've published quite a lot on the impact of LLMs on software engineering–but what about product management? Andrew Ng sees the potential for a relative increase in the demand for PM skills. But what does the AI-era PM look like?
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.