The web’s essence, or original sin?
One of the things we all noted as we worked recently on the project to recreate the original WorldWideWeb browser at CERN was that the Web was, from its inception, designed not only as a medium where you consumed information, but also one where you created it. The original Web browser was also an editor for Web content.
Time and again over the history of the Web, we’ve championed and heralded its democratisation of voice–whether through the earliest websites, pages built with services like GeoCities, the early social media of MySpace, or of course blogging. Alongside the relative ease of publishing content came commenting, something early bloggers encouraged and thrived on. The sign of a healthy community was the vibrancy of its comments.
In fact this two-way flow of information predated the Web–it was the essence of bulletin board systems (BBSes) and Usenet newsgroups, which survived for many years after the advent of the Web (until 2000 or later, much of the community around Web development was centred on newsgroups).
I remember those days well, and their sense of shared community. Squabbles and differences of opinion would of course surface, but across the many communities I participated in throughout the 90s, and then as a blogger in the first years of this century, communities were, on the whole, healthy and respectful over extended periods of time.
“Netiquette” rules of thumb, such as only saying something online you’d be willing to say in person, helped keep discourse respectful. Growth, and the arrival of new community members, was gradual enough that newcomers could be gently guided toward acceptable behaviours (for the most part). Banning members of communities and groups was rare.
Something, somewhere, changed. Over the years, I’ve thought a lot about why.
I think it has less to do with the nature of the early adopters of these technologies, and more to do with the scale of community. Where communities are small, connections between members are relatively strong, and the voice of an individual carries more weight simply because there are fewer participants overall–and so, perhaps, social norms more readily emerge. Maybe the Dunbar number–the roughly 150 people with whom humans can reasonably maintain stable social relationships–has an impact here. As communities reach many hundreds, then thousands, then more, the sense of community collapses.
And what has happened in the last 10-15 years is that we’ve industrialised communities. Where in the earlier days of the Web, and indeed for perhaps all of human history before it, communities were emergent phenomena, with Twitter, Facebook and Instagram they can technically scale, very rapidly, to billions. But we humans have absolutely no idea how to operate at that scale–we’ve evolved to operate at the scale of clans, villages and towns. At the scale of the Dunbar number.
Couple this with the focus of the social media giants on designing to maximise attention (they are, after all, engines for converting attention into money), exploiting our addictive tendencies. These have been deliberate design choices, as observed by Tristan Harris, former Google Design Ethicist and now co-founder of the Center for Humane Technology, whose mission is “to reverse the harms created by technology platforms, and to re-align technology with humanity”.
In short, we designed the wrong things, for the wrong reasons.
After all, so little effort has been put into intentional designing for community, for civility, for politeness and decency. Into humane design.
This is an area I’ve been more than a little interested in for quite some time.
One of the first people I saw focussing on this extensively was someone we’ve been privileged to have speak at Web Directions a couple of times, Caroline Sinders. You can find a number of essays she wrote about this topic, including one that left a lasting impression on me–how ‘the downvote’ gives power to the mob.
At first, I was stopped in my tracks. The down (and up) vote is something we see almost everywhere online. YouTube alone must have recorded countless billions of up and down votes. How could something so seemingly benign (up and down are matched, right?) and so commonplace be toxic?
And if something so seemingly benign and commonplace can be toxic, and abusive, what else are we missing when we build online communities, services and products? And given that more or less everything is increasingly online, how can we design products, services and communities more intentionally, so as to avoid the potential for their misuse, their abuse, and their toxicity?
What’s a community without the capacity for its participants to communicate? And yet comments are increasingly being recognised as vectors for abuse. Only last week, YouTube announced it was disabling comments on almost all videos featuring children, following a controversy over predatory comments being posted on videos of children.
But bloggers, and even well-resourced major publishers like NPR, long ago turned off comments to diminish the trolling and abuse associated with them. This goes back well over a decade, to the relatively early days of blogging.
Can we turn our back on commenting, rating systems, and the other interactions that form the foundations of online communities? Or do we need to become more intentional and focussed in the design of these interactions–on how they can be misused, so we can curb that abuse, and on how they can add value, so we can optimise for it? I firmly believe the latter.
But it starts with recognising that this is a design problem–one that everyone who designs and builds digital products and services has a responsibility to be mindful of, from the very beginning of a project.
It begins with a question we should ask about every feature, every product, every service we build. Not ‘can we do this?’ but ‘should we do this?’ Or more completely, ‘why should we do this?’
And it continues into thinking deeply about how what we are working on could be abused, misused, or turned to bad outcomes. Dan Brown has referred to this as ‘abusability testing’, and Aaron Z Lewis has talked about the idea of “Black Mirror Brainstorms”. These ideas, akin to the ‘penetration testing’ (or ‘pentesting’) done by security teams, seek to find the potential misuses of a product, service, or one of their features, before they are designed and built.
Not only do I believe this is the right thing to do from an ethical perspective, I also believe it is existentially important for organisations.
Trust and reputation in the online world are everything. Lose your users’ trust, their respect, and you will lose them. Perhaps the very biggest online players can survive repeatedly abusing that trust, at least for a time. But not forever.
Black Mirror Brainstorming at Design Leaders
If this is something you’re keen on exploring, it’s something we’ll focus on quite extensively at Design and Design Leaders in Melbourne in April. At Design Leaders we’ll conduct a “Black Mirror Brainstorm” to end the day–an activity we hope attendees will take back to their teams. I hope you might be interested in joining us.
About Design ’19
Great design is no longer a nice-to-have for successful digital products, services and customer experiences. It’s vitally important. Design brings world-leading experts from across the complete Product Design spectrum to share their insights and expertise. If you are involved with the design of digital products or services, you can’t afford to miss this one-of-a-kind Australian event. Only in Melbourne, arguably Australia’s design capital, April 11 and 12, 2019.
Who’s it for?
Design is for everyone involved with the design of great products and services: from user researchers to product owners and managers, UX, CX, IxD and Product Designers, Design Managers, Art and Creative Directors, and content developers and strategists.
Register now
Tickets start from just $1195 with early bird pricing (which ends March 16). So don’t delay–register today!