Intuitions and AI–some weekend reading for April 26th
A quick note that super early bird pricing for Code, our front end developer conference in Melbourne (and online) June 20th and 21st, ends next Friday, May 3rd.
Honing intuitions about things that are novel
Brace yourself: today it’s all about AI. I hear the sighs and groans. Like when I was 14 in the school playground talking about computers, in 1980 or so.
Back then I guess my poor victims could escape talk of computers by going pretty much anywhere else. Now you’re not so lucky: it seems there’s nowhere to escape the near-ubiquitous talk of “gen AI” (I don’t know if people actually call it that).
And with countless newsletters, LinkedIn posts, and YouTube videos all dedicated to lukewarm takes and “top ten uses of AI” lists, I get it: John, what do you have to add to this noise?
Good question. I’d like to think I’m a skeptical optimist. Around AI there’s a lot of not particularly well thought through optimism (hype, whatever you want to call it). And there’s similarly (and to an extent I’ve never seen before, other than, entirely correctly, with cryptocurrencies) surprisingly strong resistance to this emerging technology among many technologists.
Now I’d like to think I’m not a mindless booster of these technologies (I also like to think I’m a cool dad, so my radar on these things might not be entirely well calibrated). As a student of “history from below” I am something of an admirer of the Luddites, whose opposition to the emergence of new technologies that threatened, and indeed destroyed, their economic existence was more nuanced and principled than they are generally credited with. Their means was the destruction of looms, but not indiscriminately: these were overtly political acts, much more systemic and programmatic than most imagine. But the Luddites knew these technologies were impactful.
I’ve spoken frequently about how I’m using generative AI, and I encourage everyone I know to really explore these technologies in relation to their own work. Not because tomorrow they will give you superpowers, but because it seems not at all unlikely that generative AI represents a paradigmatic change in how we interact with and use computational power, or ‘compute’ as the kids say.
Now, like blockchains and cryptocurrencies largely appear to be (that’s a hedge to be polite; they totally are), this may end up being a dead end. I think the odds right now are against that.
Certainly what I wouldn’t be doing right now is steadfastly adhering to the belief that these technologies are “only predicting the next token” or “simply hallucinating”, or to the myriad other critiques I read along those lines, which suggest the person making them has at best a passing experience of using the technologies.
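(As an aside, “predicting the next token” is a fair description of the mechanism; the argument is about how much capability emerges from it. If you’re curious what it literally means, here’s a toy sketch using the small, openly available GPT-2 model via the Hugging Face transformers library. The prompt and the greedy decoding loop are illustrative choices of mine, not anything definitive.)

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The Eiffel Tower is in"
ids = tokenizer.encode(prompt, return_tensors="pt")

# "Predicting the next token", literally: at each step the model scores
# every token in its vocabulary, and we append the single most likely one.
for _ in range(8):
    with torch.no_grad():
        logits = model(ids).logits
    next_id = logits[0, -1].argmax()
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

That loop is all “next-token prediction” is. The interesting question, and the one the dismissals skip, is how much ends up encoded in those scores.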
So today, to help further your thinking on all things generative AI, I’m rounding up some things I’ve read and listened to recently, including some skeptical positions grounded in extensive experience and reflection that I think are very valuable, plus other reading that might help you think more broadly and deeply about what these technologies are, how they work, and how to think about them.
But first, a little story. Recently I was chatting with my friend and sometime co-conspirator Mark Pesce. We were chatting about AI-generated music and it went something like this:
John Allsopp:
Interestingly my daughter who is 11 played me a Song of the first 100 digits last night
Mark Pesce:
The machines are trying to control us john
I’d rather have songs teach me French
So I obliged. In a few seconds Suno had generated this song based on my prompt “a song to help me learn basic French-in a lo fi house hip hop style” (somehow lo-fi hip hop seemed apt).
With lyrics (whose validity I can’t really attest to, but which translate back coherently) and all.
Indulge me. Try to imagine the song; how synthetic and facile it might be. Something not so much to criticise for how badly it is done as to marvel at its being done at all. Now take a listen.
Honestly, of all my experiences using generative AI over the last couple of years, this was the most affecting.
What should I be doing now?
Something I have recommended for a long time (in AI years) is developing intuitions for how these technologies work. That includes specific models, each of which seems to have its own flavour, temperament, uses, strengths, and weaknesses.
And to explore a range of technologies: not just text- or code-based models, but image generators and music generators too.
I have done some exploration of character-based models as well, like Replika and Character.ai. They’re relatively primitive, but you can see glimpses of what might be possible there.
And Elicit (by Ought) and STORM (a project out of Stanford), both of which explore how large language models can be used to support research, are another tangent I think might be closer to how we work with generative AI in years to come than the one-shot “write me this thing, produce this code, make me this image” approach that largely shapes our intuitions about how we should work with these technologies at the moment.
One last piece of advice: use paid versions and the best possible models you can. GPT-4 is significantly better than GPT-3.5 in many cases and will give you a far better sense of what’s possible. And a Google AI Studio account gives you access to Gemini 1.5 Pro for free.
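If you’d rather poke at the models from code than through the chat interfaces, the same advice holds. Here’s a minimal sketch, assuming the official OpenAI Python client and an OPENAI_API_KEY in your environment (the prompt is just a placeholder of mine), that runs an identical prompt against both tiers so you can compare them side by side:

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

PROMPT = "Summarise the Luddite movement in three sentences, for a web developer."

# Run the identical prompt against both model tiers and compare by eye.
for model in ("gpt-3.5-turbo", "gpt-4"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(reply.choices[0].message.content)
```

Putting the two answers next to each other will calibrate your intuitions faster than any number of takes.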
Opinion | How Should I Be Using A.I. Right Now? – The New York Times
There’s something of a paradox that has defined my experience with artificial intelligence in this particular moment. It’s clear we’re witnessing the advent of a wildly powerful technology, one that could transform the economy and the way we think about art and creativity and the value of human work itself. At the same time, I can’t for the life of me figure out how to use it in my own day-to-day job.
So I wanted to understand what I’m missing and get some tips for how I could incorporate A.I. better into my life right now. And Ethan Mollick is the perfect guide: He’s a professor at the Wharton School at the University of Pennsylvania who’s spent countless hours experimenting with different chatbots, noting his insights in his newsletter One Useful Thing and in a new book, “Co-Intelligence: Living and Working With A.I.”
Source: Opinion | How Should I Be Using A.I. Right Now? – The New York Times
Ezra Klein has had a number of thoughtful episodes on generative AI with a range of guests over the last 18 months or so. This conversation with Ethan Mollick, perhaps the most thoughtful and extensive populariser of the technology, I found very worthwhile for further developing and refining my intuitions about how to potentially work with it. Read the transcript, or listen if podcasts are more your thing.
AI isn’t useless. But is it worth it?
But there is a yawning gap between “AI tools can be handy for some things” and the kinds of stories AI companies are telling (and the media is uncritically reprinting). And when it comes to the massively harmful ways in which large language models (LLMs) are being developed and trained, the feeble argument that “well, they can sometimes be handy…” doesn’t offer much of a justification.
Source: AI isn’t useless. But is it worth it?
Molly White is well known for her deep, thoughtful, well-researched, but also funny criticism of cryptocurrencies.
Here she turns her attention to AI.
While I am a big user of generative AI technologies, and a proponent of their use, this is a thoughtful position.
A lot of the criticism of these technologies I hear comes from folks who have clearly decided they don’t like them; their critiques are largely vacuous, and come from a place of far less experience than White brings here.
My instinct is that these technologies have been massively overhyped, and that we are focussed on use cases that will come to look as misdirected as the obsession with the personal computer in the kitchen for recipe keeping, an abiding imagined use case for early personal computers.
But others are exploring much more interesting and transformative uses; I point to folks like Ought, among many. And I can tell you the uses we are putting this technology to internally are enabling things that would otherwise be almost entirely infeasible at scale, and that are additive in value.
But there’s no little value in Molly White’s observations here either.
Historical analogies for large language models
How will large language models (LLMs) change the world? No one knows. With such uncertainty, a good exercise is to look for historical analogies—to think about other technologies and ask what would happen if LLMs played out the same way.
I like to keep things concrete, so I’ll discuss the impact of LLMs on writing. But most of this would also apply to the impact of LLMs on other fields, as well as other AI technologies like AI art/music/video/code.
Source: Historical analogies for large language models
If you think Large Language Models will have a significant impact on the world, good or otherwise, it makes sense to think about how, and why, for a bit.
One common way of thinking about something new is to think of it in terms of things that have come before. This piece critiques ten common historical analogies used to think about LLMs and their impact, from chess-playing machines to painting versus photography. Like models, all analogies may be wrong, but some are useful.
Models All The Way Down
What this training set contains is extremely important. More than any other thing, it will influence what your model can do and how well it does it.
Yet few people in the world have spent the time to look at what these sets that feed their models contain.
Source: Models All The Way Down
A strong but non-technical introduction to where the data for our large models comes from, how to interrogate those datasets (looking at every image in just one significant dataset would take a literal lifetime), and why that’s important.
Looking for AI use-cases — Benedict Evans
We’ve had ChatGPT for 18 months, but what’s it for? What are the use-cases? Why isn’t it useful for everyone, right now? Do Large Language Models become universal tools that can do ‘any’ task, or do we wrap them in single-purpose apps, and build thousands of new companies around that?
Source: Looking for AI use-cases — Benedict Evans
Evans is an analyst, but a very experienced and thoughtful one. I think he captures well the reality of LLMs right now: the existing use cases of real value are quite niche, and a lot of the mainstream focus is overhyped.
I think the analogies to early computing are helpful. Like Evans, I remember those days vividly; they were my early-to-mid teens, when, as I’ve said more than once, a computer in the kitchen to store recipes was a commonly imagined use case for the personal computer.
Looking up recipes is indeed a common use case for the Web, and a solid business for many companies. But it is barely a rounding error when it comes to all the uses we have put computers to since around 1980, so few of which were imagined then, except perhaps by a handful of people like Douglas Engelbart and Tim Berners-Lee (a decade before the Web, Berners-Lee created ENQUIRE in 1980).
Back then a tiny handful of people, almost entirely male, white, upper middle class, mostly English speakers, with privilege and resources, had any sort of access to computing. Now billions do.
What impact that has on innovation and experimentation isn’t entirely clear yet, but surely it must have one. And where once programming a computer required arcane knowledge that was difficult to acquire, those barriers are now far lower.
Notes on how to use LLMs in your product. | Irrational Exuberance
I’ve been working fairly directly on meaningful applicability of LLMs to existing products for the last year, and wanted to type up some semi-disorganized notes. These notes are in no particular order, with an intended audience of industry folks building products.
Source: Notes on how to use LLMs in your product. | Irrational Exuberance
I’ll finish with something practical! This is a detailed and thoughtful piece on how, and how not, to think about using these technologies in product development.
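To make one of those ideas concrete: a pattern that comes up again and again in product work is constraining the model to a narrow, structured output, then validating that output in ordinary code before anything downstream trusts it. Here’s a minimal sketch of mine along those lines; the task, tag set, and function are hypothetical illustrations, not drawn from the article.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

def tag_feedback(text: str) -> dict:
    """Classify one piece of user feedback into a fixed set of tags."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "Reply with JSON only, in the form "
                    '{"category": "praise" | "bug" | "feature-request", '
                    '"urgent": true | false}'
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    result = json.loads(response.choices[0].message.content)
    # Treat the model's answer as a guess: validate it before anything
    # downstream relies on it, and fail loudly if it doesn't conform.
    if result.get("category") not in {"praise", "bug", "feature-request"}:
        raise ValueError(f"unexpected category: {result!r}")
    if not isinstance(result.get("urgent"), bool):
        raise ValueError(f"unexpected urgency flag: {result!r}")
    return result

print(tag_feedback("The export button crashes the app every time."))
```

The point of the pattern is that the model call is just one fallible component; whatever guarantees your product offers come from the validation wrapped around it.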
Great reading, every weekend.
We round up the best writing about the web and send it your way each Friday.