Your weekend reading–the AI Software Engineering edition
Every week I read a lot of articles and blog posts, listen to numerous podcasts, and watch a fair number of videos, all in an effort to keep up with what is happening in a relatively narrow slice or two of technology.
It’s interesting that sometimes a raft of related things arrives almost all at once, and the last week or so has been one of those weeks.
I’ve been thinking a lot about the question “what is the nature of software engineering with the advent of large language models?” What does it mean to program? Is this even a career any more? And if so, what does it look like?
If your instinct around this question is to say “these models don’t work, they just generate the next token randomly, they get things wrong all the time” (and so on–I have heard and read many such responses in recent months), you’re both right in the short term (they do make errors, and in some sense they do ‘just’ predict the next token) and, I think, wrong in the longer term. If you’re not open to persuasion on this–if you’ve already made up your mind–then it’s probably best to skip this week’s newsletter.
But if you’re on the fence, or already invested, I’d love to continue this conversation with you.
In the interview with Bret Taylor that I link to below, and really recommend (there’s both a podcast and a transcript version), he observes:
Is your job as a maker of software to author code in an editor? I would argue no. Just like a generation ago, your job wasn’t to punch cards in a punch card machine. That is not what your job is. Your job is to produce something digital, whatever it is–what is the purpose of the software that you’re making?
When I studied software engineering at university in the 1980s (the term was still relatively new then), the field of “Computer Aided Software Engineering” was one of serious enquiry–people thought deeply about how using computers to program computers was going to change the practice of programming. We’re only at the beginning of a similar conversation–about AI Aided Software Engineering. I think this conversation will be emergent and driven by those doing it, rather than take place within the academic world.
So this week I round up several recent articles (and more) on the topic, and invite you to think about it, if only in the context of your own career.
The AI Architect — Bret Taylor
The legendary CEO of Sierra, Chairman of OpenAI, and creator of Google Maps/Facebook Likes on the future of Software Engineering, and building great products and teams at the break of the dawn of AGI.
Source: The AI Architect — Bret Taylor – Latent.Space
In keeping with several of this week’s links focussed on the question of ‘what is the nature of software engineering in an era of large language models’ is Bret Taylor in conversation with the folks at Latent Space. The pertinent part starts about 19 minutes in–and there’s a transcript if you’d rather read than listen to the recording.
The End of Programming as We Know It – O’Reilly
There’s a lot of chatter in the media that software developers will soon lose their jobs to AI. I don’t buy it.
It is not the end of programming. It is the end of programming as we know it today. That is not new. The first programmers connected physical circuits to perform each calculation. They were succeeded by programmers writing machine instructions as binary code to be input one bit at a time by flipping switches on the front of a computer. Assembly language programming then put an end to that. It lets a programmer use a human-like language to tell the computer to move data to locations in memory and perform calculations on it. Then, development of even higher-level compiled languages like Fortran, COBOL, and their successors C, C++, and Java meant that most programmers no longer wrote assembly code. Instead, they could express their wishes to the computer using higher level abstractions.
Source: The End of Programming as We Know It – O’Reilly
There is clearly something in the air–Tim O’Reilly, one of the giants of the technology industry (founder of O’Reilly Media, semi-coiner of the term Web 2.0, spotter of deep technology trends), has penned this long, thoughtful essay on the question that has been front of mind for me for some time now: ‘what is the nature of software engineering when LLMs can increasingly do a lot of the work software engineers have done?’ I think anyone who writes software should read this.
The future belongs to idea guys who can just do things
There, I said it. I seriously can’t see a path forward where the majority of software engineers are doing artisanal hand-crafted commits by as soon as the end of 2026. If you are a software engineer and were considering taking a gap year/holiday this year it would be an incredibly bad decision/time to do it.
…
the people stages of AI adoption
- detraction/cope/disbelief – “it’s not good enough/provide me with proof that AI isn’t hype”
- experimental usage with LLMs
- deer in headlights/worry after discovering more and more things that it IS good at – “will I have a job? AI is going to take my job. The company is going to replace me with AI”
- engaged, consuming AI and starting to build using LLMs (ie. using Cursor) and evolving their thinking, trying new approaches. Realising the areas where it is not currently great at and learning how to get the right outcomes.
- concern/alarm/we need to bin our planning – “everything else we are doing right now feels just so inconsequential”
- engaged, realization that you can program the LLMs itself and doing it.
Source: The future belongs to idea guys who can just do things
I posted this last week too, but it kicked off a run of related articles so I’m repeating it this week.
Like a lot of people I’ve been thinking a good deal about what software engineering looks like in an era when LLMs can increasingly write code.
Now, writing code is only part of what makes a good software engineer–but it has, until now, been a necessary skill, and sufficient at least to start a career in software engineering.
Like Geoffrey Huntley I suspect that won’t be true for much longer.
But right now a lot of the focus is on writing code–software engineering is much more than that.
Software engineering has faced transformational moments before–when I was at university, Computer Aided Software Engineering–CASE–and its impact on the practice was a significant area of focus. In the latter part of the 1980s, 4GLs (fourth generation languages, where languages like C and Pascal were 3GLs) were going to abstract away much of the process of writing code.
My instinct is that systems being able to write capable code–code that humans have traditionally written, but which competent humans can still read, debug, improve and verify–is a genuinely transformative moment.
So what do you do?
Geoffrey has some detailed thoughts that are well worth your time.
The LLM Curve of Impact on Software Engineers
There is so much debate online about the usefulness of LLMs. While some people see giant leaps in productivity, others don’t see what the fuss is about. Every relevant HackerNews post now comes with a long thread of folks arguing back and forth. I call it the new Great Divide.
I have a theory about this divide. The theory is that, on average, an LLM’s impact on someone’s day-to-day job largely depends on their level, and it follows a really interesting curve. In this post, I’ll explain the reasoning behind this idea.
Source: The LLM Curve of Impact on Software Engineers
There’s little doubt LLM-based coding tools will impact software engineering, perhaps more than any other field. Just how remains to be seen. Here Sergey Tselovalnikov, coincidentally a colleague of Geoffrey Huntley’s at Canva, considers which levels of experience, from junior to staff, will be impacted, and how–positively, and less so.
A Gentle Intro to Running a Local LLM
But there is an overarching story across the field: LLMs are getting smarter and more efficient.
And while we continually hear about LLMs getting smarter, before the DeepSeek kerfuffle we didn’t hear so much about improvements in model efficiency. But models have been getting steadily more efficient, for years now. Those who keep tabs on these smaller models know that DeepSeek wasn’t a step-change anomaly, but an incremental step in an ongoing narrative.
These open models are now good enough that you – yes, you – can run a useful, private model for free on your own computer. And I’ll walk you through it.
Source: A Gentle Intro to Running a Local LLM | Drew Breunig
I believe large language models are a transformative new paradigm of computing. Can they do all the things they are hyped to do well? No. Will they ever be able to? That’s an open, and in many ways unimportant, question, since they can already do at least some things incredibly well. If your career involves making things that people interact with on computers, and you aren’t actively exploring the impact of these technologies on the work you do, I share Geoffrey Huntley’s view that very quickly you may find yourself vastly less productive than you would otherwise be (and than your peers who do explore these technologies will have become).
Most of the ways we have worked with these models until now have been via some sort of cloud service–whether a foundation model company like OpenAI or Anthropic, a cloud computing service like Azure, AWS or Google Cloud Platform, or a host of open models like Hugging Face. But it is becoming increasingly feasible to run models on your own consumer-grade hardware, as Drew Breunig examines here (and even in the browser, as we’ll explore at our online conference Inference later in the year).
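If you’re curious what that looks like in practice, here’s a minimal sketch. It assumes Ollama as the local runtime–one popular option, not necessarily the tooling Drew’s article recommends–with a small open model already pulled; Ollama then exposes a local HTTP API you can call from a few lines of Python.

```python
# A minimal sketch, assuming Ollama is installed and a small open model has
# been pulled first, e.g. by running `ollama pull llama3.2` at a terminal.
# Drew's article walks through its own choice of tooling; this is just one
# illustration of the general idea.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's local HTTP endpoint
    json={
        "model": "llama3.2",                # any small open model you've pulled
        "prompt": "In one sentence, why are local LLMs getting more efficient?",
        "stream": False,                    # return a single JSON response
    },
    timeout=120,
)
print(resp.json()["response"])              # the model's completion text
```

No API key, no per-token billing, and nothing leaves your machine–which is a large part of the appeal.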
Introducing the Anthropic Economic Index
In the coming years, AI systems will have a major impact on the ways people work. For that reason, we’re launching the Anthropic Economic Index, an initiative aimed at understanding AI’s effects on labor markets and the economy over time.
The Index’s initial report provides first-of-its-kind data and analysis based on millions of anonymized conversations on Claude.ai, revealing the clearest picture yet of how AI is being incorporated into real-world tasks across the modern economy.
Source: Introducing the Anthropic Economic Index \ Anthropic
Important and interesting research from Anthropic, the developers of Claude, on how their models are being used. The data has been open sourced for others to analyse. It’s perhaps no big surprise that by far the most common use of their models is for ‘computers and mathematics’. Perhaps a little more surprising is that
AI use leans more toward augmentation (57%), where AI collaborates with and enhances human capabilities, compared to automation (43%), where AI directly performs tasks.
particularly since so much of the concern about AI has been about the replacement of human labour.
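If you’d like to poke at the numbers yourself, here is a rough sketch of getting started–assuming the release is published on Hugging Face as a dataset called Anthropic/EconomicIndex (check Anthropic’s post for the canonical location and file layout; I haven’t verified the file names here).

```python
# A rough sketch of pulling the released data for your own analysis.
# Assumes the dataset is published on Hugging Face as "Anthropic/EconomicIndex";
# the authoritative location and file layout are described in Anthropic's post.
from pathlib import Path

import pandas as pd
from huggingface_hub import snapshot_download

# Download a local copy of the dataset repository.
local_dir = snapshot_download("Anthropic/EconomicIndex", repo_type="dataset")

# Peek at whatever CSV files ship with the release.
for csv_path in Path(local_dir).rglob("*.csv"):
    df = pd.read_csv(csv_path)
    print(csv_path.name, df.shape, list(df.columns)[:5])
```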
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.