The second law of thermodynamics states that entropy—roughly speaking, disorder—in a closed system only increases over time. Perhaps there’s an analogous law for complexity in software systems, both at the micro and macro level.
Over the 30 years I’ve been building things for the Web, this seems to hold true for the things we build and the broader system itself. But is it inevitable, like the relentless, irreversible increase in the universe’s entropy, or is it more a choice?
At the very least, we can choose to reduce the complexity of the things we build. Much like with the second law and that important, often overlooked word “closed,” we can borrow from the broader system—the ever-increasing complexity of browsers and the web platform as a whole—to simplify our own outputs.
This week, we’ll explore the idea of complexity in some detail. We also have a roundup of great articles and videos on aspects of CSS, JavaScript, software engineering, and LLMs, plus the impact of generative AI on product design and development, and more.
A quick note: We have four more in-person events coming up, with CFPs closing soon, and we’d love to hear from you!
Engineering AI focuses on the challenges and opportunities of generative AI for software engineering. We’ll go beyond code generation and the obvious stuff to the bigger questions posed to the profession by these technologies.
Developer Summit is our in-depth front end engineering conference, covering all things modern front end dev.
Next focuses on the transformative impact of AI, large language models, and intelligent agents on product design and development.
And Enqueue is for people who build WordPress professionally.
CFPs close at the end of the month, so if you’re keen, head over to the event you’re interested in and learn more!
Simplicity: Sustainable, Humane, and Effective Software Development
In a 1912 commencement address, the great American jurist and antitrust reformer Louis Brandeis hoped that a different occupation would aspire to service:

The peculiar characteristics of a profession as distinguished from other occupations, I take to be these:
First. A profession is an occupation for which the necessary preliminary training is intellectual in character, involving knowledge and to some extent learning, as distinguished from mere skill.
Second. It is an occupation which is pursued largely for others and not merely for one’s self.
Third. It is an occupation in which the amount of financial return is not the accepted measure of success.
In the same talk, Brandeis named Engineering a discipline already worthy of a professional distinction. Most software development can’t share the benefit of the doubt, no matter how often “engineer” appears on CVs and business cards. If React Summit and co. are anything to go by, frontend is mired in the same ethical tar pit that causes Wharton, Kellogg, and Stanford grads to reliably experience midlife crises.
Again on the theme of complexity, this is a blunt piece by Alex Russell reflecting on talks he saw at a recent React conference, which he concludes with the passage excerpted above.
So how might we address this complexity as front end engineers? We’ve become fixated on monolithic patterns like SPA for all web content, when perhaps different approaches make sense in different contexts. Here Den Odell explains the Islands pattern and when it might be best deployed.
Before flex, before grid, even before float, we still had to lay out web pages. Not just basic scaffolding, but full designs. Carefully crafted interfaces with precise alignment, overlapping layers, and brand-driven visuals. But in the early days of the web, HTML wasn’t built for layout. CSS was either brand-new or barely supported. Positioning was unreliable. Browser behavior was inconsistent.
Table-based layout for web content ruled the roost for nearly a decade. And it was complex and complicated: hard to develop, harder still to debug and iterate on. We did replace it with something simpler, CSS. While the platform itself got more complex, what we built got simpler.
CSS animations have come a long way since Apple first introduced them to the web in 2007. What started as simple effects like animating from one color to another has turned into beautiful, complex images twisting and flying across the page.
But linking these animations to user behavior like scrolling has traditionally required third-party libraries and a fair bit of JavaScript, which adds some complexity to your code. But now, we can make those animations scroll-driven with nothing more than a few lines of CSS.
Scroll-driven animations have increased browser support and are available in Safari 26 beta, making it easier for you to create eye-catching effects on your page. Let me show you how.
A great intro to scroll-driven animations using only CSS from the folks at WebKit. Once again the platform increases in complexity, but we as developers can use that to simplify what we do.
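To give a flavor of how little code this now takes, here’s a minimal sketch (not taken from the WebKit post itself) of the classic reading-progress bar, where `animation-timeline: scroll()` drives a keyframe animation from the scroll position rather than from time:

```css
/* A progress bar that fills as the page scrolls: a minimal
   sketch using scroll-driven animations. */
@keyframes grow-progress {
  from { transform: scaleX(0); }
  to   { transform: scaleX(1); }
}

.progress {
  position: fixed;
  top: 0;
  left: 0;
  width: 100%;
  height: 4px;
  background: rebeccapurple;
  transform-origin: 0 50%;
  animation: grow-progress auto linear;
  /* Drive the animation by the nearest scroll container's
     scroll position instead of by elapsed time. */
  animation-timeline: scroll();
}
```

No JavaScript, no scroll event listeners, and the browser can run the whole thing off the main thread.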
Separation of concerns is a computer science principle introduced in the mid 1970s that describes how each section of code should address a separate piece of a computer program. Applying this principle results in more modular, understandable, and maintainable codebases.
Web designers & developers of a certain age might remember the separation of concerns applied to the three languages of the web: HTML provides the structure, CSS provides the style, and JavaScript provides the behavior of a web page.
Separation of concerns is a long-standing computer science pattern, and one which standards-based web developers adopted around the turn of the century in architecting the code of websites. Here Brad Frost revisits the concept for modern web development.
Tips for making regular expressions easier to use in JavaScript
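One of the tips most often recommended for readable regular expressions in JavaScript (and one this article’s topic covers, though this particular snippet is my own illustration, not lifted from it) is named capture groups, which make both the pattern and the match site self-documenting:

```javascript
// Named capture groups: refer to matches by name rather than
// by fragile numeric index.
const isoDate = /^(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})$/;

function parseIsoDate(str) {
  const match = isoDate.exec(str);
  if (match === null) return null;
  // Destructure the named groups directly.
  const { year, month, day } = match.groups;
  return { year: Number(year), month: Number(month), day: Number(day) };
}

console.log(parseIsoDate('2025-07-04')); // { year: 2025, month: 7, day: 4 }
console.log(parseIsoDate('not a date')); // null
```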
Did you know your favorite website can detect when you’re browsing it on public transport and when you scroll it lying in your bed? Today we’ll learn how they can do it and how this info is used to fight bots.
If you build websites, especially ones with any kind of form, you’ll doubtless run into the scourge of bots filling them in and submitting them. Learn how to detect whether your visitor is a bot, and what to do about it.
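The core idea of motion-based detection is simple enough to sketch. This is a hypothetical illustration, not the article’s actual code: in a browser the samples would come from `DeviceMotionEvent` accelerometer readings, and the function name and threshold are my own invention. A real device at rest reads a near-constant magnitude; one in a moving vehicle jitters, so the variance of the signal separates the two:

```javascript
// Hypothetical sketch: classify "moving vs. stationary" from the
// variance of accelerometer-magnitude samples. Kept as a pure
// function so the signal-processing step is easy to test.
function isProbablyMoving(samples, threshold = 0.5) {
  if (samples.length < 2) return false;
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((sum, s) => sum + (s - mean) ** 2, 0) / samples.length;
  return variance > threshold;
}

// A phone lying flat on a desk reads a near-constant ~9.8 m/s^2.
console.log(isProbablyMoving([9.81, 9.8, 9.82, 9.79])); // false
// On a bus or train the magnitude jitters noticeably.
console.log(isProbablyMoving([9.8, 11.2, 8.4, 12.1, 7.9])); // true
```

A headless bot, of course, typically reports no motion data at all, which is itself a signal.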
I’m not the only one thinking about how context management is the key to good LLM applications. Since publishing our post detailing how long contexts fail, a conversation emerged regarding the term “context engineering,” compared to “prompt engineering.” (Let’s be clear…I had nothing to do with starting the debate, it’s just a happy coincidence…)
Today, Andrej Karpathy weighed in, supporting “context engineering”:
“People associate prompts with short task descriptions you’d give an LLM in your day-to-day use. When in every industrial-strength LLM app, context engineering is the delicate art and science of filling the context window with just the right information for the next step.”
When generative AI first started getting attention, the role of the prompt engineer and the practice of prompt engineering got a great deal of focus. That’s waned over time, but the prompt remains the assumed key way of interacting with LLMs. Some, like Drew Breunig, suggest it’s context, not the prompt alone, that we should focus on. Here he explains why.
The term context engineering has recently started to gain traction as a better alternative to prompt engineering. I like it. I think this one may have sticking power.
Context Engineering is a new term gaining traction in the AI world. The conversation is shifting from “prompt engineering” to a broader, more powerful concept: Context Engineering. Tobi Lutke describes it as “the art of providing all the context for the task to be plausibly solvable by the LLM,” and he is right.
With the rise of Agents it becomes more important what information we load into the “limited working memory”. We are seeing that the main thing that determines whether an Agent succeeds or fails is the quality of the context you give it. Most agent failures are not model failures anymore, they are context failures.
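To make the idea concrete, here’s a deliberately simplified sketch (all names and the word-based token counter are illustrative, not from either article) of context engineering in miniature: assembling the model’s limited working memory from several sources, and dropping the oldest conversation turns first when a token budget is exceeded:

```javascript
// Assemble a context window from a system prompt, retrieved
// documents, and conversation history, trimming oldest turns
// first to stay within a token budget.
function buildContext({ system, documents, history }, budget, countTokens) {
  const fixed = [system, ...documents];
  let used = fixed.reduce((n, part) => n + countTokens(part), 0);

  // Walk history newest-to-oldest, keeping turns that still fit.
  const kept = [];
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = countTokens(history[i]);
    if (used + cost > budget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return [...fixed, ...kept];
}

// Crude stand-in token counter: one token per word.
const countWords = (s) => s.split(/\s+/).length;

const context = buildContext(
  {
    system: 'You are a helpful assistant',
    documents: ['Doc: refunds take five days'],
    history: ['old turn one', 'old turn two', 'where is my refund'],
  },
  15,
  countWords
);
// context keeps the system prompt, the document, and only the
// most recent user turn; the older turns did not fit the budget.
```

Real systems make much subtler choices about what to keep, summarize, or retrieve, but the budget-driven assembly step is the heart of it.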
When working with coding agents it’s essential to provide them with the signals they need to determine whether they’ve done the right job or not. The typical signals come from inspecting the generated code, type-checking, linting, and running tests. What’s missing though is access to the application itself. If we could provide the agent with the tools needed to inspect and interact with the application at runtime, it would allow it to check its work directly rather than infer it from the code.
There’s been a lot of buzz lately about playwright-mcp which does exactly that. I figured I’d try it out this weekend but turns out it doesn’t play nicely with Electron, which the app I’m currently hacking on is using. This gave me an idea though, what if we could leverage the Chrome DevTools Protocol (CDP) in a more direct fashion?
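At wire level the Chrome DevTools Protocol is just JSON over a WebSocket: each request carries an incrementing `id`, a `method`, and `params`, and the response echoes the `id`. Here’s a small sketch of that shape (the endpoint in the comment is the conventional one when Chrome runs with `--remote-debugging-port=9222`; the helper itself is my own illustration):

```javascript
// Build CDP request messages: JSON with an incrementing id,
// a method name, and params.
let nextId = 0;
function cdpRequest(method, params = {}) {
  nextId += 1;
  return JSON.stringify({ id: nextId, method, params });
}

// Over a WebSocket to ws://localhost:9222/devtools/page/<targetId>,
// an agent could evaluate an expression in the live page:
//   ws.send(cdpRequest('Runtime.evaluate', {
//     expression: 'document.title',
//     returnByValue: true,
//   }));
const msg = JSON.parse(
  cdpRequest('Runtime.evaluate', {
    expression: 'document.title',
    returnByValue: true,
  })
);
console.log(msg.method); // Runtime.evaluate
```

Giving an agent this channel means it can check its work against the running application, not just against the code it generated.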
AI is transforming the way we work — automating production, collapsing handoffs, and enabling non-designers to ship work that once required a full design team. Like it or not, we’re heading into a world where many design tasks will no longer need a designer. If that fills you with unease, you’re not alone. But here’s the key difference between teams that will thrive and those that won’t:
Some design leaders are taking control of the narrative. Others are waiting to be told what’s next.
We’ve focused a lot on the impact of LLMs and generative AI on software engineering, but these technologies are also transforming the process and practice of design. Here Andy Budd suggests design leaders need to take this transformation seriously and take charge.
Tech companies are only pretending to innovate, copying futuristic aesthetics from science fiction without understanding their purpose.
Technological progress has always come from humanity’s grasp exceeding our reach; before we could build drones we needed to imagine flight. However, our successful flying machines bore little resemblance to their fictional counterparts. Design has not learned this lesson, and continues to try to wow users by recreating familiar aesthetics of futurity, rather than the outcomes depicted in this media. And whenever it does this (as with any aesthetics-first effort) the result is always a failure. The inevitable future disappears like so much smoke.
At this point, almost every software domain has launched or explored AI features. Despite the wide range of use cases, most of these implementations have been the same (“let’s add a chat panel to our app”). So the problems are the same as well.
Something about the chat interface that came with the release of ChatGPT, and the GPT-3.5 model that powered it, was the killer feature for LLMs. The reasonably capable GPT-3 had been around for a couple of years, and a relatively small number of developers had been exploring it via the API and playground, but it was the release of ChatGPT in late November 2022 that ignited the modern generative AI wave of excitement, innovation and, it’s fair to say, no little hype.
The breathtaking success of ChatGPT (gaining 100M monthly active users far faster than any product that had come before it) meant that the chat pattern came to dominate AI-related product design and features. But there are surely many more interaction patterns out there to be discovered and refined. Luke Wroblewski has been thinking in public a lot about what comes beyond the chatbot.
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.