Year round learning for product, design and engineering professionals

I See Dead People

I.

At the end of M. Night Shyamalan’s celebrated The Sixth Sense, Bruce Willis’s character — a child psychologist — learns something the audience has probably been slowly working out the entire movie: he is, in fact, dead.

The film only really bears one watching, because once you know the ending, you realise the whole thing is a sleight of hand. We’re shown snippets of time — Willis waiting in a restaurant for his wife, Willis sitting at the table as she seemingly complains about their relationship, Willis sitting with the child’s mother as the boy comes home from school. But zoom out and see these moments in their broader context, and none of them are what they seem. Willis wasn’t engaging with Haley Joel Osment’s mother. He couldn’t have been — no one other than the boy can see him. Willis wasn’t engaging with his wife, because she couldn’t see him either.

But what, as my grandmother might have asked, does that have to do with the price of fish?

Well. On my darker days, I look around and I see dead people. I see roles, I see organisations, that no longer make sense given just how capable our AI systems are becoming.

I think this has already come — and come hard — for front-end development, an area of practice I’ve spent the better part of my life focused on. If it hasn’t already been almost entirely automated away, I think it soon will be. And I don’t think it stops at the front end. I don’t think it stops at software engineering.

This week, among the many hours of podcasts I listen to, I heard a long, thoughtful interview with Tudor Achim and Vlad Tenev, the co-founders of Harmonic. Their AI system, Aristotle, does formal mathematical proof using the Lean programming language — at an astonishingly high level. It achieved gold medal performance at the 2025 International Mathematical Olympiad, and recently solved a variant of a 30-year-old Erdős problem with zero human intervention.

Having got an admittedly pretty average mathematics degree way back in the late 1980s, I at least have some framework for understanding what their system is doing. Tudor and Vlad predict that by the end of this decade, systems like Aristotle will be able to prove just about anything provable in mathematics.

We are seeing the acceleration of human capability at a rate that has never happened before.

So when I see dead people, I also see myself.

II.

And yet. Over the last couple of months in particular, I don’t think I’ve ever been more productive or more creative.

Running a small company involves repetitive grunt work — tasks that are uninspiring and take time away from the things I most enjoy and that have the biggest impact for my business. With tools like OpenClaw and Claude Cowork, these tasks haven’t been entirely automated, but the effort and time I need to put into them have been dramatically reduced.

But then there’s the more enjoyable part. The creative part. The impactful part.

About two weeks ago, seeing the enthusiasm for emerging agentic systems among people I know and respect — and feeling that same excitement myself — I decided to get together anybody in my network who might be interested in exploring this, whatever their current level of knowledge.

In two or three hours, I had explored possible domain names, decided on one, registered it, built a website, written the copy, integrated it with our CRM’s API, and launched Homebrew Agents Club — a meetup inspired by the original Homebrew Computer Club, for people experimenting with AI agents. Upwards of 90% of that work was done by OpenClaw. The design, the copy, the deployment. Work that would previously have taken me an entire day under strong time constraints took 90 minutes end to end.

Then, when a couple of people said they couldn’t make the meetup and jokingly asked whether their agent could attend, I built a forum site for agents and humans to gather. Perhaps half a day elapsed, but maybe an hour or so of my time, most of which was thinking deeply about interesting challenges that emerged only after I’d built the initial system.

I’ve built a number of such projects over the last few weeks. An end-to-end IoT web application because I didn’t like the native app the hardware vendor created for their ecosystem — perhaps 30 to 40 minutes of my time and a couple of hours of Claude Code’s. This integrated hardware, external third-party APIs, and a complex real-time messaging system, all using technologies I had limited knowledge of myself. It simply didn’t matter.

As I wrote last week, there’s a moment in The Matrix where a child bends a spoon and tells a quizzical Neo: “The secret is… there is no spoon.” What limits Neo is his preconceptions about what is possible. What limits us — still, right now — is decades of mental habit built around the assumption that producing software requires scarce human attention, carefully managed and doled out. That assumption is no longer true.

Now let’s ground this in some reality. Do I genuinely think that next week, next month, or next year there will be no software engineers, no accountants, no — name a role? No, I don’t. Do I think there will be no large corporations? No. Societies, cultures, economies are complex systems. They have a strong degree of internal self-correction.

But complex systems can only take so much.

III.

As we learned in 2020, humans are really not very good at exponential thinking.

Here’s what makes the current moment exponential. Where the outputs from a system can be machine-verified — as they can with software, because we can run code through compilers and test suites, or with mathematics, where formal methods and languages like Lean create verifiable proofs — the feedback loop becomes extraordinarily fast by taking humans largely out of it. In other areas of practice — law, medicine — where human expertise is still required to verify quality and accuracy, the loop is slower. But it’s still exponential. It’s still compounding.
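To make that verification loop concrete, here’s a minimal sketch in Python — all names here are hypothetical, invented for illustration. A proposer (standing in for a model) offers candidate code, an automated test run accepts or rejects it, and each failure feeds straight back into the next attempt, with no human judgment anywhere in the loop:

```python
import os
import subprocess
import sys
import tempfile


def run_tests(candidate_source: str, test_source: str) -> bool:
    """Machine verification: write the candidate and its tests to disk,
    execute them, and report pass/fail. No human in the loop."""
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "candidate.py")
        with open(path, "w") as f:
            f.write(candidate_source + "\n" + test_source)
        result = subprocess.run([sys.executable, path], capture_output=True)
        return result.returncode == 0


def feedback_loop(propose, test_source: str, max_attempts: int = 5):
    """The compounding part: every failure becomes machine-checked
    feedback that informs the next proposal."""
    feedback = ""
    for attempt in range(max_attempts):
        candidate = propose(feedback)
        if run_tests(candidate, test_source):
            return candidate
        feedback = f"attempt {attempt} failed its tests"
    return None
```

The speed of the whole loop is bounded only by how fast the tests run — which is exactly why domains with cheap machine verification (code, formal proofs) are pulling ahead of domains where a human expert must sign off on each iteration.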

What we’ve seen in recent weeks is something folks like Sholto Douglas at Anthropic have been predicting: we’re reaching the point where models are helping train themselves. Models are helping write the software and harnesses that interact with them.

We are starting to make machines that make machines.

I’ve long been fascinated with the Industrial Revolution, and any historian of that period has likely asked themselves: why then? Why, between roughly 1750 and the late 19th century, did something transformative happen to the world’s technologies, economies, and sciences? Why didn’t it happen in Athenian Greece two and a half thousand years earlier? In China, where there were long stable periods of prosperity? In the Islamic Empire, which for nearly a millennium was stable, where the arts, sciences, and philosophy flourished?

This idea of industrial takeoff — that something exponentially different happened during this period — holds a lesson we can apply directly to what is happening now.

It wasn’t the insight of a small number of geniuses. It wasn’t the enclosure laws or the discovery of cheap, abundant coal in Great Britain. All of these mattered. But what made the Industrial Revolution transformative was the exponentiating feedback loop. Breakthroughs in chemistry and physics allowed new techniques to emerge that could more cheaply produce ceramics or paints. Understanding of metallurgy allowed the creation of more finely tuned machinery — machinery that could itself make more finely tuned machinery.

They started to make machines that made machines. We are starting to do the same.

The Industrial Revolution took the better part of a century to play out. In 1750, there was no modern science to speak of beyond Newton’s laws. We knew nothing of electricity or magnetism, nothing of chemistry in any systematic sense. By 1850 — the emergence of electromagnetism, modern chemistry, modern economics for better or worse, Darwin’s theory of evolution. A profound change in our understanding of the world and our ability to impact it.

What we’re seeing now appears to be playing out in months rather than decades. It’s important, though, to see its foundations in modern computing, which emerged during the Second World War with people like Alan Turing and John von Neumann, who built the theoretical and physical foundations for computing that persist to this day. Machine learning has a history nearly as long: the ideas on which neural networks are built go back to the 1950s. But it was the emergence of the transformer architecture — and in particular, the attention mechanism — within the last decade that set us on the path to the enormous, rapid increase in capability we now call artificial intelligence.

Now, there’s little doubt the Industrial Revolution had a profound and in many ways deeply negative impact on working people — first in rural, then in increasingly urban Europe. Skilled artisanry like weaving was reduced to a mechanical process. Highly paid, highly respected weavers became obsolete not within a generation but within years. You’ve heard the term Luddite, I’m sure. You probably think of it as most people do — as describing someone opposed to technology. But Luddism was much more complex, nuanced, and sophisticated than that, and I have a great deal of sympathy for the Luddites. I’d really recommend following up on their history, if only because it’s not one I think we want to see repeated today.

And it had profoundly negative consequences for the broader world. The Industrial Revolution was the engine of colonisation. It saw India’s thriving, in many ways world-leading manufacturing sector — particularly in textiles — deliberately destroyed, and the locus of the world’s textile manufacturing transferred to Great Britain, among many other atrocities.

The lessons of this history are twofold. It gives us insight into moments when profound transformation happens because of compounding increases in human knowledge and capability. But it also shows us that at moments of drastic transformation, you can empower, enable, and even encourage great greed and profoundly negative consequences for societies, cultures, and the world.

I say all this because I don’t want to pretend that what’s coming is all rainbows and unicorns. It won’t be. Anyone telling you otherwise is selling something — or hasn’t thought hard enough about history.

We’ve already seen a slew of articles that are almost openly panicky. Elliot Bonneville argued that all that matters now is money — that ideas and human capability simply aren’t important anymore. The Atlantic published “The Worst-Case Scenario for White-Collar Workers”, observing that the well-off have no experience with the kind of job market that might be coming. Andrew Yang wrote something similar in “The End of the Office”.

I attended an invite-only event for the creative industries just last week, and there was certainly enormous anxiety — and at times quite active hostility — toward these technologies from a not-small percentage of the people in the room. Very senior people in Australia’s creative industries. I have considerable sympathy for people who feel that way, even when the response is un-nuanced and reactive.

So it would be easy to be nihilistic. Easy to be negative. To simply give up in the face of all this.

But that brings me back to where I started. How do you hold two contradictory truths at the same time? I see dead people — roles, organisations, perhaps even my own livelihood. And at the same time, I have never felt more creative, more productive, more alive in what I do.

Ted Chiang, the marvellous short story writer, observed a few years ago that your fears of AI are fears of capitalism. I think that puts it very well.

Over the 200 years since the beginning of the Industrial Revolution, we have reshaped the world. Not always for the good. We have reshaped the relationships between people. The economy of northern Europe in 1870 was profoundly different from that of 1750. But these were human choices. Markets and capitalism, as they exist at any given moment, are not fundamental truths about the universe. They’re choices — typically unconscious choices — we make. They’re technologies we build to achieve outcomes.

We are probably reaching a moment where we are going to have to make new choices about what economies, economics, and markets look like. It seems pretty clear that the systems we’ve developed over the last 150 or 200 years to distribute resources — our economies, our economic theories, the markets we’ve created — won’t cope with machines that can do profoundly more than we could genuinely have imagined even a decade ago.

I see dead people. Including myself. But I’ve also never felt more alive. The way we hold both of those truths is by recognising that what happens next is not inevitable. It’s not determined by the technology. It’s determined by us.

These are human choices. And we’re going to have to start making them.
