
Photoshop 1.0 and the landscape of possibilities

I first started programming for the Mac in 1986, in my second year of computer science. The computer science department had moved from VAX minicomputers running some form of Unix to labs full of networked Mac II computers. This was the real heyday of the Macintosh’s first run. You’d find them in art departments, where traditional typesetting was rapidly being replaced by WYSIWYG editors and laser printers. In student newspapers and magazines. In libraries. The Mac felt like a finished product, particularly compared to DOS, which still dominated the PC world (it would until Windows 3.1 in 1992).

What more could possibly be added to the platform? I remember one programming class where we built a vector-based image editing tool—placing objects, resizing them on a canvas, adding text. It felt like a toy. Looking back, though, someone who took that foundation and refined it could probably have built a world-leading vector editor in a few weeks.

Which might sound hyperbolic. But then the Computer History Museum recently released the original source code for Adobe Photoshop 1.0.

One person, one program, a paradigm shift

Photoshop 1.0 launched in 1990, several years after the time I’m describing. At that point, pixel-based image editors already existed for the Mac; the Mac even shipped with one, MacPaint, if memory serves. Why would you need another one?

Thomas Knoll built one anyway. Not originally for editing photos—this was a time when scanners, even black and white ones, were either extraordinarily expensive or nonexistent. There were few, if any, photographs to edit digitally. He built it for creating and editing digital bitmap images.

Written effectively by just one person, Photoshop transformed computing. It added a word to the dictionary–a verb no less. Six years after the Macintosh launched, years after the IBM PC, the defining piece of software of the GUI era was written by a single individual.

The original spreadsheet, VisiCalc—arguably the defining software of the PC era—was created by two people: Dan Bricklin and Bob Frankston. The Apple II, the transformative computer of the personal computing era, had hardware and software designed by a tiny team (at the time the Apple II was released there were only 12 or so people working at Apple). CP/M and the original DOS, the operating systems that dominated that era, were originally written by individuals (neither by Bill Gates, to be clear).

Fast forward forty or fifty years, and this seems unimaginable. Surely what they were doing must have been trivial if individuals or tiny teams could build such transformative, epochal products?

But there’s a different lesson here.

The Landscape of Possibility

Think about the first decade of the web, or the early days of Facebook, Google, Instagram. Those early versions look incredibly simplistic by today’s standards, yet they too were developed by very small teams, sometimes individuals.

What’s going on?

I’d suggest, using a metaphor from both evolution and artificial intelligence, that there’s a landscape of possibility that emerges with any new technology—the range of things the technology makes possible. The challenge is that we have no vantage point from which to survey that landscape at the time (later it can all seem so obvious). As Steve Jobs put it, in a slightly different context:

You can’t connect the dots looking forward; you can only connect them looking backwards.

We discover it not through systematic planning or grand visions, but through a process that is almost random and piecemeal.

Gradient descent and stochastic gradient descent are machine learning techniques that explore a landscape by taking small steps from wherever you currently are. They’re effective but not particularly efficient, and they tend to settle on local optima rather than finding the truly transformative peaks.
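To make the metaphor concrete, here’s a minimal, purely illustrative sketch in Python (the “landscape” function, starting points, and step size are all invented for the example): a simple gradient ascent that climbs whatever hill it starts nearest to, and so can happily settle on a minor local peak while never discovering the much higher one further away.

```python
from math import exp

# A toy one-dimensional "landscape" with two hills: a small one near x = -1
# and a much taller one (the true peak) near x = 2.
def landscape(x):
    return exp(-(x + 1) ** 2) + 3.0 * exp(-((x - 2) ** 2) / 0.5)

# Numerical estimate of the slope at x.
def gradient(x, eps=1e-5):
    return (landscape(x + eps) - landscape(x - eps)) / (2 * eps)

# Gradient ascent: repeatedly take a small step uphill from wherever we are.
def climb(start, step=0.1, iterations=500):
    x = start
    for _ in range(iterations):
        x += step * gradient(x)
    return x

# Starting on the left, we settle on the small nearby hill (around x = -1)
# and never find the far higher peak near x = 2. Starting further right,
# the same procedure happens to find the true peak. Where you begin, and
# the small steps you take from there, determine which part of the
# landscape you ever get to see.
print(round(climb(start=-2.0), 2))  # roughly -1.0, a local maximum
print(round(climb(start=1.0), 2))   # roughly 2.0, the global peak
```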

We explore technological landscapes the same way. We stumble onto features almost by accident.

Meanwhile, the mythology of futurism tells us there are visionaries among us who can see the future and make extraordinary predictions. We look back at golden age science fiction writers who imagined lunar colonies, flying cars, video calls. Some came to pass, others didn’t. But even the ones that did were often accidental discoveries.

We overlook how we actually discover the future. We don’t predict it—we invent it through small, incremental steps, recalibrating with each one. As Alan Kay put it, “the best way to predict the future is to invent it”.

The iPhone Lesson

Consider perhaps the most dramatic technological product of our lifetimes: the iPhone. When Steve Jobs announced it nearly twenty years ago, he described it as being three things:

  1. A phone
  2. An iPod 
  3. A revolutionary internet device

The vast majority of its uses over time have fallen into that third category. And even then, the internet became, in many cases, simply plumbing for applications built on the platform. No one imagined—there was no master plan for—an app store that would let people order food at 2 AM and have it delivered in fifteen minutes, or summon a ride almost any time in much of the world.

These uses emerged incrementally. Looking back, they seem obvious. But if you had a time machine to 2006 with knowledge from 2024, what would you build? It’s simultaneously a stupid question (we know the answers) and a profound one, because we’re at a similar point of inflection right now to when the smartphone first emerged (some, like analyst Benedict Evans, would say no more significant than that; I’d argue ultimately far more significant, indeed as significant as the emergence of the personal computer, the GUI, or the consumer internet).

Paradigms and blinkers

When a new computing platform emerges, it’s extraordinarily difficult to imagine what it enables. Our imagination is constrained by the generation that came before it. We know what computers do. We have such a deep, intuitive understanding of what the computers we’ve worked with enable that breaking out of that paradigmatic thinking is extraordinarily hard.

Max Planck, one of the most influential physicists in history, originator of quantum physics, rather cruelly observed: “Science advances one funeral at a time.” When you’ve come to understand your field in a particular way, it’s essentially impossible to look at it differently.

Thomas Kuhn coined the term “paradigm” to describe how entire generations of scientists come to understand their field in a particular way—a shared framework that’s eventually overturned by revolutionary discoveries rather than incremental progress.

Classical physics gave way to quantum and relativistic physics not through smooth evolution but through paradigm shifts (and funerals, as Planck put it).

The same pattern, I’d argue, applies to technology. We had the paradigm of the PC: a device on a desk, not networked, running applications for a single user. From the late 70s through the early 90s, we saw increasing sophistication—the emergence of the GUI, the move from character-based displays to bitmaps, to higher resolutions, to color. By the early 90s, computing, particularly on the Mac, was extraordinary compared with the late 70s. But in many ways it was also largely unchanged.

These devices weren’t connected to one another. Their data was locked up for the use of one person at a time.

When the internet arrived and brought global connectivity, I was already a software developer building software for people to work with information in a hypertext context. Yet it took me years—years—to change my thinking from building products that sat on a desktop with isolated databases to something that connected to the wider world.

The Pattern Repeats

Skip forward more than a generation. I’m still writing software, still helping educate developers and people creating technological products.

What I see—not as criticism but as observation, because this pattern recurs—is us approaching generative AI the way we initially approached the smartphone, the consumer internet, the GUI, the personal computer. We see it within the context of the existing paradigm.

Generative coding tools sit in our IDEs or command line interfaces. They work with Git, with programming languages we’re familiar with. This all makes sense. I’m not saying we shouldn’t be doing this—right now it’s hard to imagine doing it any other way.

But I’ll suggest that if generative AI is genuinely transformative, even just within software engineering, all of this will look very quaint before long. A whole new landscape of possibility is opening up.

That’s certainly true of software engineering. I think it’s true of other ways we use these emerging novel capabilities. Software engineering just happens to be the field I know best.

The Photoshop Moment

Which brings us back to Thomas Knoll writing Photoshop in the late 1980s—not just a proof of concept but all the way through to launching 1.0, source code that continues to live on in Photoshop thirty-five years later. And despite the myriad features added since, from a vast array of filters to “AI” background fill, it’s a piece of software whose purpose (editing photos) and even core UI are essentially unchanged in all that time.

And there’s at least a chance of similar opportunities today. Not necessarily to create something with Photoshop’s specific cultural impact, but certainly to re-imagine what computing can be in a particular domain. Not just to do what we’ve been doing more efficiently, more productively, or more delightfully, but to explore something genuinely new in the landscape of possibilities. To build things that simply haven’t been feasible until now, either because they required computing power we didn’t have or because the economics of doing them with humans or semi-automation didn’t make sense.

I think there’s every chance that, not far from now, we’ll look at the chat box sitting in our products waiting for human input as quaintly as we view the DOS interface compared to the GUI. The command line interface survives to this day, particularly in software engineering—new paradigms don’t entirely discard the old. We still use Newtonian physics to land probes on asteroids and people on the moon, but we need relativistic physics to make GPS work.

Out there right now, I have no doubt someone is writing the equivalent of Photoshop 1.0. They’re writing the equivalent of VisiCalc. They’re exploring the landscape of the newly possible.

Perhaps they’re young and unencumbered by the years or decades of experience that both enable and constrain us. Or perhaps they have those years of experience, and are capable of drawing on them while looking past the constraints they impose.

That’s how every computing revolution has actually happened. Not through grand visions of the future, but through someone noticing an adjacent possible that everyone else walked past because they were looking in the wrong direction.

The landscape is there. Will you take this emerging opportunity to explore it?
