Last week we took a week off for the Easter weekend. We did, however, post a substantial wrap-up of our first unconference, on AI and Software Engineering–so please do take a look. You’ll get a sense of what experienced software engineers and engineering leaders are thinking about when it comes to the impact of AI on the discipline of software engineering, along with some follow-up questions to ponder.
This week, another roundup of things I’ve been listening to, reading, and watching: CSS, typography, observability, learning TypeScript, plenty of JavaScript, security, and of course quite a bit at the intersection of AI and software engineering.
Do Frontend Developers Want Frontend Observability? (with Todd Gardner)
In an attempt to answer the question “Is Frontend Observability Hipster RUM?,” Kate and Todd delve into the concept of frontend observability by exploring its definition, historical context, and the uphill marketing hurdles involved in educating web developers about observability tools. Their conversation highlights the disconnect between observability terminology and the practical needs of developers (from SREs to frontend engineers), emphasizing the need for a user-centric approach to positioning these products for the market.
Observability has long been associated with the backend, but more recently (as we’ll cover at our upcoming Code conference) it’s finding a place with frontend developers. Here Todd Gardner looks at observability from a frontend perspective.
As I contemplate a long-overdue redesign of my own site, it’s worth taking a refreshing dip into what we’ve learned about web typography over the past 20+ years. From the pages of (where else?) A List Apart:
Historically with Test Driven Development (TDD), the thing that you’re testing is predictable. You expect the same outputs given a known set of inputs.
With AI agents, it’s not that simple. Outcomes vary, so tests need flexibility. Instead of exact answers, you’re evaluating behaviors, reasoning, and decision-making (e.g., tool selection). This requires nuanced success criteria like scores, ratings, and user satisfaction, not just pass/fail tests.
And internal evals aren’t enough. You need to make continuous adjustments based on real-world feedback.
When software is deterministic, testing the same functionality with the same inputs will always give the same outputs. But the output of LLMs is nondeterministic–so what does testing look like?
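To make the contrast concrete, here’s a minimal sketch of a score-based eval (the function names, criteria, and agent responses are invented for illustration–this isn’t any particular eval framework). Instead of asserting an exact string, we grade each response against criteria and accept anything over a threshold:

```javascript
// Hypothetical criteria: did the agent pick the right tool, and does the
// answer mention the required facts? Each contributes half the score.
function gradeResponse(response, { expectedTool, mustMention }) {
  let score = 0;
  if (response.toolUsed === expectedTool) score += 0.5; // correct tool selection
  if (mustMention.every((term) => response.text.toLowerCase().includes(term))) {
    score += 0.5; // answer contains the required facts
  }
  return score;
}

// Two runs of a nondeterministic agent may word things differently,
// yet both can pass the same eval.
const runA = { toolUsed: "weather_api", text: "It is 18°C and sunny in Sydney." };
const runB = { toolUsed: "weather_api", text: "Sunny skies in Sydney, around 18°C." };

const threshold = 0.8;
console.log(gradeResponse(runA, { expectedTool: "weather_api", mustMention: ["sydney", "sunny"] }) >= threshold); // true
console.log(gradeResponse(runB, { expectedTool: "weather_api", mustMention: ["sydney", "sunny"] }) >= threshold); // true
```

A pass/fail assertion on the exact output string would fail one of these runs; the eval passes both, because it checks behavior rather than wording.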
We don’t talk enough about the feeling of learning something deeply – the slow, sometimes frustrating and painful process that forges real intuition.
As software engineers, we know this feeling intimately. The slow burn of mastering a new concept or language. The pressure and anxiety of resolving your first production issue. The endless hours spent debugging a complex system that just won’t work – until, at last, it does. And how that struggle itself is the point: it’s what transforms a beginner coder into a software engineer with real intuition.
And yet, in the age of AI, that friction is exactly what we’re designing away.
GitHub Copilot, ChatGPT, Cursor, Windsurf – they’re extraordinary tools. They’ve changed the way we work. They accelerate us. They enable us to solve problems we previously would’ve struggled with. They’re becoming the new normal.
But they also flatten the terrain. And when the terrain is too smooth, we stop noticing what we’re stepping over.
Axel Rauschmayer is one of the foremost communicators on web development. You can read his Exploring TypeScript for free online or buy it for a very reasonable price.
Two years ago, in March 2023, I published a blog post called “The End of Front-End Development”. This was right after OpenAI released its GPT-4 showcase, and the general reaction was that human software developers were about to be made redundant, that software would soon be written exclusively by machines.
I was skeptical of these claims, and in that blog post, I made the case for why I thought software development would still require humans for the foreseeable future. My hypothesis was that LLMs would augment human developers, not replace them. At the time, the conventional wisdom on Twitter was that it would only be a few months before AI extinguished all demand for human front-end developers, maybe a year or two at most. Well, it’s been over two years since then! So, were they right? Are we currently living in the “post-developer” era?
In this blog post, I want to take a fresh look at the current landscape, to see how things have changed, and to see if we can anticipate how things will continue to evolve. If you’re an aspiring developer who is feeling anxious about your future career, my hope is that this post will give you some clarity. ❤️
Josh Comeau revisits his thoughts on the impact of LLMs on software development from a couple of years back. Josh comes down more on the skeptical side–seeing these tools as valuable, but suggesting their rate of innovation may be slowing. Based on our recent unconference on the topic, this is not an uncommon position–though others are far more bullish on the impact of the technology.
I was on the epicgames.com website the other day, signing up so I could relive my Magic: The Gathering glory days with Arena. While doing that I saw their style for modal dialogs and thought I should try to re-create that with <dialog> because apparently I’m both of those types of nerd.
The <dialog> element is powerful and, I think, underused. Here Chris Coyier recreates a dialog he found in the wild to explore the features of the Web platform–an excellent approach, I find, when investigating new platform features.
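For context, here’s a minimal sketch of the native <dialog> pattern (the ids and copy are invented, not taken from Chris’s article or the Epic Games dialog):

```html
<dialog id="signup">
  <form method="dialog">
    <h2>Sign up</h2>
    <p>Create an account to save your decks.</p>
    <!-- method="dialog" closes the dialog and sets dialog.returnValue
         to the value of the button that submitted the form -->
    <button value="cancel">Cancel</button>
    <button value="confirm">Sign up</button>
  </form>
</dialog>

<button id="open">Open dialog</button>

<script>
  const dialog = document.getElementById("signup");
  // showModal() puts the dialog in the top layer, traps focus,
  // makes the rest of the page inert, and exposes a stylable
  // ::backdrop pseudo-element -- all for free, no library needed.
  document.getElementById("open").addEventListener("click", () => dialog.showModal());
</script>
```

Escape-to-close and focus management come built in with showModal(), which is much of why the element repays this kind of exploration.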
Using CSS backdrop-filter for UI Effects | CSS-Tricks
If you’re familiar with CSS filter functions like blur() and brightness(), then you’re also familiar with backdrop filter functions. They’re the same. You can find a complete list of supported filter functions here at CSS-Tricks as well as over at MDN.

The difference between the CSS filter and backdrop-filter properties is the affected part of an element. Backdrop filter affects the backdrop of an element, and it requires a transparent or translucent background in the element for its effect to be visible. It’s important to remember these fundamentals when using a backdrop filter, for these reasons:

- to decide on the aesthetics,
- to be able to layer the filters among multiple elements, and
- to combine filters with other CSS effects.
CSS filters have been around for well over a decade (Firefox led the charge on this one, something that’s waned in the intervening years–though kudos to Firefox for being the first browser to implement the long-awaited Temporal features, an update to JavaScript’s ancient Date/Time functionality). But backdrop filters, though related to CSS filters, are more powerful, as detailed here by Preethi at CSS-Tricks.
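The filter/backdrop-filter distinction above can be sketched in a few lines of CSS (class names here are made up for illustration):

```css
/* A frosted-glass panel: backdrop-filter processes whatever is BEHIND
   the element, so the background must be translucent for it to show. */
.modal {
  background: rgb(255 255 255 / 0.4);
  backdrop-filter: blur(8px) brightness(1.1);
}

/* By contrast, filter processes the element's own rendering. */
.modal-icon {
  filter: blur(2px);
}
```

Same filter functions, different target: swap backdrop-filter for filter on .modal and the panel’s own content blurs instead of the page behind it.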
Stop Managing AI Projects Like Traditional Software
If you’ve spent plenty of time wading through modern JavaScript, odds are you’ve seen enough ellipses (…) to put even the most brooding 90s role-playing game protagonist to shame. I wouldn’t fault you for finding them a little confusing. Granted, I wouldn’t fault you for finding anything about JavaScript confusing, but I’ve always thought those ellipses were uniquely unintuitive at a glance. It doesn’t help that you’ll frequently encounter these little weirdos in the context of “destructuring assignment,” which is a strange syntax in and of itself.
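As a quick refresher on what those ellipses do (a minimal sketch of my own, not taken from the article): the same three dots mean different things depending on where they appear.

```javascript
// Rest: in a destructuring pattern, "..." collects the leftovers.
const [first, ...others] = [1, 2, 3, 4];
console.log(first);  // 1
console.log(others); // [2, 3, 4]

const { id, ...details } = { id: 7, name: "Ada", role: "admin" };
console.log(details); // { name: "Ada", role: "admin" }

// Spread: in an array literal or a call, "..." expands a value into pieces.
const merged = [...others, 5]; // [2, 3, 4, 5]
console.log(Math.max(...merged)); // 5

// Rest parameters: collect any number of arguments into an array.
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}
console.log(sum(1, 2, 3)); // 6
```

The rule of thumb: in a pattern or parameter list the dots gather values up; in a literal or argument list they spread them out.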
Exploiting hallucinated package names represents a form of typosquatting, where variations or misspellings of common terms are used to dupe people. Seth Michael Larson, security developer-in-residence at the Python Software Foundation, has dubbed it “slopsquatting” – “slop” being a common pejorative for AI model output.
“We’re in the very early days looking at this problem from an ecosystem level,” Larson told The Register. “It’s difficult, and likely impossible, to quantify how many attempted installs are happening because of LLM hallucinations without more transparency from LLM providers. Users of LLM generated code, packages, and information should be double-checking LLM outputs against reality before putting any of that information into operation, otherwise there can be real-world consequences.”
Supply chain attacks via package managers are a well-known security risk. If you’ve used LLMs to help generate code, you’ll likely have come across them hallucinating non-existent packages to include in your code. Mostly you’ll quickly discover these don’t exist. But it seems malicious actors are now registering these packages to exploit the mistake (or at the very least, it’s a potential attack vector worth being mindful of).
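One lightweight mitigation, in the spirit of Larson’s “double-check LLM outputs against reality” advice, is to gate LLM-suggested dependencies through a vetted allowlist before installing anything. A minimal sketch (the allowlist and package names here are invented; in practice you’d also check the actual registry, download counts, and publish dates):

```javascript
// Flag LLM-suggested packages that aren't on a vetted allowlist,
// rather than piping suggestions straight into "npm install".
const vettedPackages = new Set(["express", "zod", "axios"]);

function auditSuggestedDeps(suggested) {
  return {
    approved: suggested.filter((name) => vettedPackages.has(name)),
    needsReview: suggested.filter((name) => !vettedPackages.has(name)),
  };
}

// "expresss-validator-pro" is the kind of plausible-sounding name an
// LLM might hallucinate -- and a slopsquatter might then register.
const result = auditSuggestedDeps(["express", "expresss-validator-pro"]);
console.log(result.approved);    // ["express"]
console.log(result.needsReview); // ["expresss-validator-pro"]
```

Anything landing in needsReview gets a human look before it ever reaches a lockfile.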
Some features that every JavaScript developer should know in 2025
JavaScript is constantly evolving, with new features introduced regularly. This often makes older coding practices outdated, and even less efficient. Below is a list of some important features (old and new) that many developers might be unaware of.
A very useful roundup of JavaScript features you may not be aware of, including iterator helpers, weak sets and weak maps, structured cloning, tagged templates, and more.
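A few of those features in action–a small sketch of my own rather than the article’s examples:

```javascript
// structuredClone: a real deep copy, unlike the old JSON round-trip,
// and it handles types JSON can't (Dates, Maps, Sets, cyclic references).
const original = { when: new Date(0), tags: new Set(["css", "js"]) };
const copy = structuredClone(original);
copy.tags.add("html");
console.log(original.tags.has("html")); // false -- the Set was deeply copied

// Tagged templates: a function receives the literal's string parts and
// interpolated values and decides how to combine them (useful for
// escaping, i18n, query builders).
function upperValues(strings, ...values) {
  return strings.reduce(
    (out, s, i) => out + s + (values[i] ?? "").toString().toUpperCase(),
    ""
  );
}
console.log(upperValues`hello ${"world"}!`); // "hello WORLD!"

// WeakSet: membership tracking that doesn't prevent garbage collection --
// handy for marking objects as "already processed" without leaking them.
const seen = new WeakSet();
const node = { id: 1 };
seen.add(node);
console.log(seen.has(node)); // true
```

structuredClone is available in all modern browsers and in Node since v17, so the JSON.parse(JSON.stringify(...)) habit really is outdated now.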
Markdown and the Slow Fade of the Formatting Fetish
Year after year, document formats like .docx, .ppt, and .pdf lose a little bit of steam. You might not have noticed… But Markdown is growing over and into the old formats, slowly, and nicely, like moss on a stranded star destroyer. Notes on a revolution in slow motion.
…
The slow shift from formats to Markdown is particularly powerful for productivity and educational apps. Emphasizing clear, structured thinking rather than visual decoration, moving from .docx to plain text slowly transforms how we communicate through computers. These are big claims on an ephemeral matter. To clarify in what ways Markdown transforms digital communication, let’s look at the history, economy, the design of traditional formats and how they compare to Markdown:
I have a confession to make–I’ve never been much of a fan of Markdown. I’ve never really seen the point–HTML is right there. But seeing it as an interchange format, rather than an authoring format, makes a lot more sense. And while HTML would do just as good a job, Markdown is simple and spare. In this detailed article the folks at iA Writer look at the history, the economy, and the design of traditional formats, and how they compare to Markdown.
How to Build AI Applications In Minutes With Transformers.js
Transformers.js is a JavaScript library that provides the ability to run AI models in the browser. At the time of this writing, over 1,300 models are available for use and this number continues to grow each day.
With Transformers.js, it is also possible to convert custom models developed with Python, JAX, or TensorFlow so that they may run in the browser.
Supported tasks currently include:
Natural Language Processing (ex. text classification and translation)
Computer Vision (ex. image classification and object detection)
Audio (ex. automatic speech recognition and text-to-speech)
Multimodal (embeddings and zero-shot classification)
Smaller, more capable models running on the device and in the browser are a solution for many AI and ML challenges–and accessible to any JavaScript developer with Transformers.js, as Danielle Maxwell explores here.
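To give a flavour of the pipeline API described above, here’s a minimal sketch. It assumes the @huggingface/transformers package is installed and downloads a default model on first run, so it won’t run offline, and the exact output values are illustrative:

```js
// Sketch of the Transformers.js pipeline API.
import { pipeline } from "@huggingface/transformers";

// Create a sentiment-analysis pipeline; the model is fetched
// from the Hugging Face Hub and cached locally on first use.
const classify = await pipeline("sentiment-analysis");

const result = await classify("I love running models right in the browser!");
console.log(result); // e.g. [{ label: "POSITIVE", score: 0.99 }]
```

The same pipeline() call covers the other task families listed above–swap the task string (e.g. "image-classification", "automatic-speech-recognition") and the inputs accordingly.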
Pattern #1: From Producer to Manager – AI Native Dev
It all started with simple code autocompletion in our IDEs. But the real question became: is it suggesting the right code? As developers, we still had to verify everything the AI generated. Fast forward a few years, and we now have chat interfaces suggesting full code snippets. Of course, we still need to review them before pasting anything into our code window.
Chat became multiline suggestions, a single file became multiple files, all the way to the complete scaffolding of applications. The trend is clear: the more is generated by AI, the bigger the pull requests become. The time saved generating the code has now moved to time spent reviewing the code, similar to reviewing a colleague’s code.
I too have seen this. The first generation of AI-powered products (often called “AI Wrapper” apps, because they “just” are wrapped around an LLM API) were quickly brought to market by small teams of engineers, picking off the low-hanging problems. But today, I’m seeing teams of domain experts wading into the field, hiring a programmer or two to handle the implementation, while the experts themselves provide the prompts, data labeling, and evaluations. For these companies, the coding is commodified but the domain expertise is the differentiator.
Many years ago, as a software developer who used my own software extensively (Style Master, one of the earliest CSS editors, if you want to know), if I had a bug that annoyed me, or a feature I felt would help me, I’d likely fix it or build it. Back then, teams working on software were small, often solo devs, and often driven by an interest in or a deep knowledge of the problem space their software solved.
Now, whether it’s large applications or sites, fewer developers are experts in the domain the software they work on operates in. We are experts in developing software, which for decades has become an ever increasingly arcane and complex area of practice.
But as Drew Breunig observes here, that may be changing as LLMs transform the nature of software engineering. What implications are there for software developers? Becoming a domain expert outside software? That’s one approach.
Another would be to become someone adept at working with domain experts to explore new opportunities. To be honest, the last decade of SaaS has seen very few of the era-defining apps previous eras saw–VisiCalc (the original spreadsheet), Photoshop, word processors, the browser. VCs seem to love investing in enterprise to-do list apps, judging by the ads I see on YouTube.
Dan Bricklin conceived of VisiCalc while watching a presentation at Harvard Business School, where he was studying business. Photoshop was originally developed by one person, Thomas Knoll. Perhaps small teams driven by domain expertise might drive the rise of a new kind of software?
Model Context Protocol has prompt injection security problems
As more people start hacking around with implementations of MCP (the Model Context Protocol, a new standard for making tools available to LLM-powered systems) the security implications of tools built on that protocol are starting to come into focus.