Is there a moral dimension in designing user experiences?
I came across this article by a software creative consultant, someone who has watched their discipline of user interface and experience design go from a force for good in the late '90s to an inevitable slide towards evil in subsequent decades. From their perspective, the initial efforts to help users solve problems have given way to dark patterns: designs meant to trick users against their own best interests. The Google that once prided itself on presenting search results with distinctive, non-intrusive ads is now intentionally blurring the line between organic results and paid placement.
The simplest, and therefore most likely, explanation is that UX is a tool to achieve an end. This is not a knock against design as a discipline: I’d argue that the same trajectory applies to engineering, product, and everything else that goes into making technology products. Tools are amoral, insofar as they’re used to create and sustain business models, which themselves have shifted in the past three decades; the business of consumer technology has ascended from internet nascency to become the primary driver of global economic growth.
It’s easy to see why technology felt like such a morally positive force from the late 90s through the 2000s—there was a lot of invention going on with the commercialization of the internet! Although the dotcom bust placed a damper on how fast consumer tech was evolving, this was still the era that produced web apps and smartphones as we know them today, with new innovations in how we interact with our devices arriving every month or two: Gmail’s 2GB of storage was magical; the introduction of the iPhone is still celebrated a decade and a half later. This was early in the technology adoption curve, when there was so much greenfield space to explore that invention carried the day.
No innovation lasts forever, particularly at the rapid pace of the late 2000s. As consumer technology muscled its way to becoming the most important growth industry in the world, economies of scale and general maturity led to consolidation, and in 2021 we’re left with a handful of extremely powerful tech mega-corporations. It’s much harder to grow the pie when we’ve covered most of humanity in computing, so instead of striving to grow the pie, these companies are now vying for the same slices. Previously, it felt like the breadth of opportunities expanded faster than new entrants could form; the graduation into maturity has instead brought threats of regulation and scrutiny¹.
Against this backdrop, UX ends up having to serve the needs of the business, now concentrating on extracting value instead of creating it. That may very well mean adding friction to an unsubscribe flow to make it harder to leave a service, or purposefully obfuscating copy on data permissions to eke out an advantage in user perceptions of data privacy. Disappointingly, the standard bearers in user attention and focus—the Googles, Apples, and Amazons of the world—no longer feel as beholden to our collective well-being.
So the question is: is UX the right place to take a moral stand against exploiting the user? On some level, Design—and often, Product—is charged with representing the user, so there’s some logic for designers to fight for user-centric clarity and utility in the face of relentless demands to increase profits and drive growth. But at the same time, it feels like a step too late; if the business deems it prudent to deliver a worse user experience for the sake of its bottom line, that decision is already a fait accompli.
When I was getting my undergrad degree in Computer Science, I took a small optional class on the ethics of engineering. Studying the Challenger disaster along with other case studies, the professor sought to impress upon us future engineers that we had an obligation—to ourselves, our chosen field, and society at large—to behave ethically, especially in those times when all the incentives point the other way. More recently, there’s been a renewed call to reengage with ethics in education, to attempt to infuse that sense of doing the right thing into those who will drive our collective software futures. It’ll be an uphill battle.
Ironically, poorly crafted tech legislation meant to protect the end user often ends up entrenching the monolithic incumbents.↩