The end of the AI beginning
AI has had many ‘springs’ over the decades, with its promise always just around the corner.
In 1970, an early AI pioneer (his legacy is now quite odious) predicted:
“In from three to eight years we will have a machine with the general intelligence of an average human being.”
This wasn’t the last time such ambitious predictions were made.
AI springs have often been triggered by new discoveries—neural networks (dating back to the 1950s), expert systems, statistical approaches to machine learning, convolutional neural networks, generative adversarial networks, transformers. Each wave seems to promise genuine breakthroughs, with initial results appearing extraordinary. But none of them have scaled from specific applications to the general intelligence once promised. Disillusionment sets in, and another winter follows.
Yes, along the way, tremendous value has been delivered by these innovations. But each spring has been followed by a winter when AGI (artificial general intelligence) remained stubbornly out of reach.
So where are we right now? In another spring? A summer? An autumn? Or has another winter arrived already?
Given the volume of papers being published, the new models announced almost daily, the massive investment flowing into the field, NVIDIA’s breathtaking GPU sales, and the recent Nobel Prizes awarded to pioneers in AI (one in physics, one in chemistry, each a surprise, shall we say, to many), it doesn’t feel like winter just yet.
But for those not following closely, and for many using the first generation of generative AI products, there’s a sense of disillusionment. A sense that the promise has not (yet) been delivered.
So where are we now?
The Beginning of the End?
Maybe the emperor has no clothes. Perhaps there’s no “there” there. The statistical generation of the next token based on what came before is a great parlor trick, but after the thrill of a machine seeming intelligent wears off, there’s no lasting value. It’s simply not worth the environmental cost, the staggering investment, or the opportunity cost. Many will argue this spells the beginning of the end for generative AI.
Or the End of the Beginning?
Maybe there is something there, but it’s too early. Perhaps these statistical methods will lead to transformative applications, we’re just not there yet.
Still, companies are spending hundreds of millions—even billions—to adopt ChatGPT, Copilot, and specialized tools for generating marketing copy and social media posts.
Look at any modern app—you’re likely to see a button with that magic, starry icon we now associate with “AI.” And chat boxes are everywhere.
So why the disillusionment?
Because, to put it bluntly, our generative AI products largely suck. Whether from massive tech companies or nimble startups, they fall short. It’s actually easy to understand why—and it might not be entirely a bad thing.
Why Do AI Products Fall Short?
The current batch of AI products disappoints because we have little idea what we’re doing—not so much from a technological perspective (though there’s still much we don’t understand there), but from a product perspective.
Two seemingly unrelated yet powerful forces shape the products we’re building: our experience of building digital products over the last several decades, and science fiction. The recent (initially aborted) launch of OpenAI’s voice model—seemingly modeled after Samantha, the AI from the Spike Jonze film Her—illustrates how deeply science fiction has shaped our expectations.
These influences are constraints that are hard to escape, ingrained in how we imagine the possibilities of AI applications. They are the water we swim in, the gravity well we can’t quite escape.
While it might seem that new computing paradigms instantly give rise to new types of products, it often takes years, even decades, for that transition. It took that sort of time frame to learn how to build products for personal computers, then the web, and then mobile devices.
New interaction paradigms emerge shaped by their antecedents, partly due to users’ expectations. Early smartphones had physical keyboards because users were familiar with them. Today, smartphone UIs are still largely driven by text input from QWERTY keyboards and buttons inherited from desktop GUIs.
Similarly, web-based applications still bear the legacy of desktop origins. Think of the Google Docs UI—it still resembles a Windows app from the 1990s, with its pull-down menus.
But these emerging paradigms are also shaped by the people who build the new products: designers, developers, and engineers who bring experience, expertise, and intuitions rooted in older paradigms. Just as users carry forward their expectations, the creators of these solutions are shaped by their past, making it challenging to envision the new.
And then there’s science fiction: top-down visions of the future, imagined out of whole cloth, tropes that have a life of their own. Science fiction creates a feedback loop in which imagined technologies influence those we actually build, and few themes in science fiction have been more powerful or longer-standing than artificial intelligence, from the Golem and Frankenstein, through Asimov, to a relatively recent spate of films like Her and Ex Machina.
The generative AI products we’re creating fall short because our vision is constrained by these forces: our expectations as users and as professionals, and the cultural constraints of centuries of shared imagined futures.
This has only been exacerbated by the recent rush to ship AI features in the current generative AI gold rush. ChatGPT was the fastest product to reach 100 million users. Jasper AI achieved unicorn status almost overnight—valued at $1.7 billion by late 2022 after being founded in 2021. Venture capitalists rushed in.
In this frenzied climate, every company needed an AI story. The imperative was to do something—anything—and that was unlikely to be nuanced or exploratory. It was more likely reflexive and derivative.
So, we stuck chatbots everywhere. We added magic ‘generate’ buttons. The 2010 sparkles emoji got its place in the sun (the plot of an Emoji Movie sequel, perhaps?).
Right now, the products taking advantage of generative AI are being built in a rush, rather than through experimentation and discovery. Yet we still don’t fully understand these systems and their emergent properties; we keep discovering they can do things we didn’t expect. Perhaps we should let the technology guide us toward discovering its true applications, rather than forcing it into familiar molds.
Why “The End of the AI Beginning”?
More than a few people argue we’re at the beginning of the end for generative AI: that another spring has quickly turned to winter, and another promising technology hasn’t lived up to the hype. Some perhaps even take a little satisfaction in that.
Maybe that’s how this will play out. But I prefer to think of this as the end of the beginning. The end of an initial phase, where we reached for the most obvious applications and solutions in a rush not to be left behind.
Now comes a much harder but more rewarding phase: discovering genuinely transformative applications. Applications that are native to large language models. Solutions that don’t just bend the cost curve for existing tasks but enable entirely new ones—perhaps by reducing the time taken by orders of magnitude, or by eliminating the drudgery in a process.
And this will happen not by top-down directives or corporate AI training programs, but through a process of discovery and experimentation.
So, let’s get busy experimenting.
Keen to do that?
Get along to Josh Clark and Veronica Kindred’s Sentient Design workshop in Sydney on November 28th, and to Next, our one-day, ideas-driven conference (also streaming), where Josh is speaking on November 29th, to explore these ideas.