Prediction is difficult, especially when dealing with the future
“Prediction is difficult, especially when dealing with the future” is a phrase attributed to physicist Niels Bohr, but also to Samuel Goldwyn, Yogi Berra and (of course) Nostradamus. Humans love predictions, as the ongoing fascination with Nostradamus attests.
Thinking about the future is something we all do, but predicting it is something fewer of us do with any great discipline. What we often do is imagine futures, which is something quite different.
I’m not sure whether it’s better characterised as imagining or predicting, but thinking about the future of technology is something I’ve spent a reasonable chunk of the last four decades doing, ever since I first read about the Dynabook, Alan Kay’s 1968 vision of the future of computing (I was 2 at the time).
I captured some of these thoughts in a presentation I gave a few years back now, ‘How to predict the future with this one weird trick’, including at Smashing Conference in 2017 (it seems like a lifetime ago now). It holds up rather well, I think (or perhaps I just miss speaking at, organising, or simply being at conferences). The trick? We’ll get to it at the end.
I thought of this presentation recently, partly because there’s currently a lot of hype around a number of technologies, and partly because I was recently playing with a technology that I actually feel might have a significant impact on education, media, and culture more broadly (more on that in a moment).
Non-fungible futures
Cryptocurrencies, non-fungible tokens and Web3 (a term that, as someone who has been involved with the web since more or less its inception, I totally reject; you can probably tell where this is going) saw a huge uptake in interest during the pandemic.
And they have faced huge challenges in recent weeks, as the scale of grift, woolly thinking, hype, greed and outright criminality associated with so much of the sector became increasingly apparent, and the price of many cryptocurrencies collapsed.
There are many (a few of whom I actually respect) who think that despite all that there is a “there” there, particularly when it comes to the underlying blockchain technologies. But many esteemed software engineers and computer scientists, Grady Booch among them, are skeptical even about the usefulness of those underlying technologies.
And while Arthur C. Clarke famously observed:

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” (Clarke’s Three Laws)
Isaac Asimov advanced a corollary:

“When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervor and emotion—the distinguished but elderly scientists are then, after all, probably right.” (‘Scientific Skepticism’)
As you can probably tell, I would not predict that cryptocurrencies and blockchain technologies will have the transformative impact their good-faith proponents expect. Booch, among others, has called blockchain-like distributed databases ‘a solution in search of a problem’. Keep that idea in mind; we’ll return to it in a bit.
AR and VR
Apple’s WWDC takes place in less than a week. Each year at this conference Apple unveils new and exciting technologies, tools and sometimes even whole platforms for developers. Rumours are currently flying that this year we’ll finally see Apple’s long-awaited glasses, headset, AR or maybe VR product at least teased, if not fully announced.
It’s almost a given among, well, anyone you ask that AR (or maybe VR, or maybe both) is the future of computing and human-computer interfaces.
Though just what problem AR and VR are solving is never quite spelled out. Sure, we’ll live in VR a lot of the time, or play games, or go to meetings, or whatever, but, as with blockchain, there is a lot of handwaving and a lot of aspiration. The word ‘imagine’ comes up a lot among proponents of AR and VR, just as it does among proponents of the technologies I refuse to call ‘web3’ [TIRTCW3]. “Imagine”, proponents of new technologies will say, “[insert some seemingly intractable, complex problem of human interactions and/or societies, solved in a week using TIRTCW3]”.
AI
AI, or what we now might more accurately call Artificial General Intelligence (“the ability of an intelligent agent to understand or learn any intellectual task that a human being can”), is again a staple of our imaginings of the future. Imagined in countless science fiction books and movies (which will typically skip over the hard parts, like ‘is this even possible?’, to go straight to the fun bits: sexy killer robots), we spend a lot less time focussing, again, on what problems AI might solve. Imagine…
Boiling the ocean from first principles
The modern smartphone is an engineering marvel. In many of our lives it occupies too much of our waking time. It has transformed industries and created whole new ones.
But it evolved step by step, the modern device slowly emerging from many different technologies and fields of research: screens, haptic and touch-based UI, radios, batteries, operating systems, memory, the internet and the web.
Perhaps in their idle moments the creators of the Motorola 8900X-2 imagined something a little like the modern iPhone, but mostly they were solving the problem of how to let a user make phone calls on the go, within the price and technology constraints of 1992.
Imagination and ambition are amazing, and required to do anything substantial, but practical reality, solving a problem for a user today, is the really hard and necessary part.
You may have heard of GPT-3, from OpenAI, “an autoregressive language model that uses deep learning to produce human-like text”. If you’re a developer, you will likely have heard of one application, GitHub Copilot, trained (not without criticism) on a large body of open source code. Many developers have marvelled at how accurate its suggestions are, not just for whole lines of code, but for whole chunks of it.
But there are many more examples, a large number of which are gathered together at GPTCrush and GPT-3 Demo. I highly recommend exploring some of these.
If you’d like a little prediction from me: the impact of GPT-3 and similar technologies will be significant. Right now the best analogy I have is that it’s like an electric drill for the mind, making light work of previously time-consuming, laborious tasks.
As an example, I’ve recently been exploring extracting keywords from our old conference presentations (a task that typically requires considerable domain expertise and quite some time). There’s surprising concordance between my hand tagging and GPT-3’s tags, and the savings in time would be very significant.
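If you’re curious what an experiment like this looks like in code, here’s a minimal sketch of prompting GPT-3 for keywords. The prompt wording, helper names and model choice are my own illustrative assumptions (not the actual setup I used), and it presumes the `openai` Python package and an API key.

```python
# Hypothetical sketch: tagging a conference talk with GPT-3.
# Prompt wording, model name and helper names are illustrative assumptions.

def build_keyword_prompt(text: str, max_keywords: int = 5) -> str:
    """Wrap a presentation transcript in a simple tagging instruction."""
    return (
        f"Extract up to {max_keywords} topical keywords from the text below.\n"
        "Return them as a single comma-separated line.\n\n"
        f"Text:\n{text}\n\nKeywords:"
    )

def parse_keywords(completion_text: str) -> list[str]:
    """Turn the model's comma-separated reply into a clean list of tags."""
    return [k.strip().lower() for k in completion_text.split(",") if k.strip()]

def extract_keywords(text: str) -> list[str]:
    """Send the prompt to OpenAI's completions endpoint.

    Requires `pip install openai` and an API key; not executed here.
    """
    import openai  # assumed installed; reads the key from OPENAI_API_KEY
    response = openai.Completion.create(
        model="text-davinci-002",          # model name is an assumption
        prompt=build_keyword_prompt(text),
        max_tokens=32,
        temperature=0,                     # favour consistent tags over creativity
    )
    return parse_keywords(response["choices"][0]["text"])
```

The interesting part is how little of this is machine learning: the model does the domain-expert work, and the surrounding code is just prompt assembly and string cleanup.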
So, set up an account and explore the playground, as well as the demo collections I suggested.
So how is GPT-3 different from AR and VR, TIRTCW3, and General AI? Rather than the output of ambitious imaginings of a whole new future, GPT-3 is more like the modern smartphone: the result of a long chain of increasingly sophisticated research into machine learning. Its applications are small, useful tools built on top of this long evolution.
The one weird trick?
Perhaps the distinction is best captured by Alan Kay, whose (ironically) imagined future computer, the Dynabook, first drew me into a lifelong fascination with computing, with his often-quoted phrase:
“The best way to predict the future is to invent it”.
Those who imagine a world of [waves hands, mentions a technology, complex multifaceted problem solved] aren’t inventing it.
Those who are inventing it, solving small problems step by step, experimenting, failing, learning, they are the ones predicting the future. One experiment at a time.