If you’re building some kind of technology and you’re not declaring it as ‘human-centred’, you’re missing out on this hot new trend. It seems like everybody suddenly wants to hire people who can make things human-centred. Maybe it’s because they’ve read all the experts recently insisting that human-centred is the only way to be.

But don’t start changing all those adjectives in your pitch decks just yet. Despite the hype, human-centred design is nothing new. In fact, it’s been a practice since the late 1950s. And since the late 1950s it has perpetuated the convenient lie that if people can use a technology easily, then it’s a success. That’s the same lie that made it ok for us to destroy our aquatic ecosystems with plastics and produce cars that drive themselves into emergency vehicles.

We might think we’re putting humans at the centre of our technology, but it’s actually just individuals. Specifically it’s the individuals who will pay for the technology, immediately benefiting themselves and the company producing it at the expense of those around them and probably even their future selves.

What problem(s) did human-centred design try to solve?

The term ‘human-centred design’ first came to prominence in the late 1950s at Stanford University. Today that school boasts an Institute for Human-Centered Artificial Intelligence, celebrated as the evolution of this legacy by ‘bringing together leading thinkers across multiple fields so that we can better prepare future leaders to learn, build, invent and scale with purpose, intention and a human-centered approach.’ But it’s not an evolution if it’s the same ideas in a new building.

We should expect attitudes to have changed over the last seven decades. The concept’s origins as a more formalised practice trace back to Professor John E. Arnold, who, newly arrived at Stanford in 1958, began teaching a ‘Creative Engineering’ summer course. 66 years is a long time in the philosophy of technology, especially with the acceleration of technological advances. Somehow, though, the accompanying understanding of human need stagnated into a narrow focus, one that rendered us blind to the evidence of the great harm building on the periphery.

If we go back around 70 years and look at engineering practices, we’ll see that engineering tended to dictate usability. Machines could achieve their output as long as the human operators could contort themselves to avoid immediate injury (often causing long-term chronic pain). Human welfare was unimportant: The machine worked, and 20 identical widgets came out in the time it used to take to make one.

The idea of usability informing engineering—that people’s physical abilities, their habits, desires and limitations should be primary considerations when designing technology—was largely unheard of. The arrogance that a creator requires to believe they have a better solution to a problem is the same arrogance that blames the user of a technology when it fails.

These attitudes still show up. In the late 1990s, when I started working with technology and customers, support teams regularly employed the acronym ‘PEBKAC’ (problem exists between keyboard and chair).[1]

In 2010, when asked about the iPhone 4’s connectivity problems when held a certain way, Steve Jobs responded:

All phones have sensitive areas. Just avoid holding it in this way.

We certainly haven’t mastered human-centred design, even with its designation as an international standard: It’s included in the ISO’s ‘Ergonomics of Human-System Interaction’ standard (ISO 9241). Does that mean, though, that it should still be the standard we aim for in our practice?

Human-centred design concentrates our biases

We know so much more about the potential harm technology can bring and how that harm extends beyond human-centred problems (environmental pollution, species extinction threats, etc.). We have known this for decades. The truth is that human-centred design too often ignores the ecosystems we rely on to survive as a species. In practice it is biased towards finding short-term benefits for the people directly affected by the design and spends little to no effort examining how that design contributes to accelerating human extinction.

For the past few years we’ve seen articles about human-centred artificial intelligence (AI). The focus is on how AI will make things easier for humans and assist us in our work. But if we look beyond the immediate convenience, we see the immense computing power required to build, maintain and improve the large language models (LLMs) that make the artificial intelligence possible. With that power consumption come unprecedented water use and the potential for huge carbon emissions.

IBM have been at the forefront of artificial intelligence for decades. Their explainer of human-centred AI focuses on utility: more immediate, trustworthy and insightful human-AI collaborations. Another part of their website summarises IBM’s concerns about how humans and AI can happily coexist:

Despite increasing levels of automation enabled by AI, the common thread to all of these systems is the human element: people are critical in the design, operation, and use of AI systems. We have a responsibility to ensure those systems operate transparently, act equitably, respect our privacy, and effectively serve people’s needs.

This is in step with the common understanding of ‘human-centred’ as a concept, but the question remains: Does this put the needs of humans at the centre of this work?

Design should anticipate and counteract its negative effects

Victor Papanek’s Design for the Real World (Second Edition, 1984) states our responsibility clearly in the title of its final chapter: ‘Design for Survival and Survival through Design’. It’s there that he writes about the three characteristics of design that provide its value ‘as the primary, underlying matrix of life’: that it should be integrated, comprehensive and anticipatory.

Integrated, comprehensive, anticipatory design is the act of planning and shaping carried on across the various disciplines, an act continuously carried on at interfaces between them.

In an earlier chapter he had already outlined the path we were taking with technology and its relationship to commerce and economics:

We are beginning to understand that the main challenge for our society no longer lies in the production of goods. Rather, we have to make choices that deal with “how good?” instead of “how much?”… But the margin is narrowing fast. With all these changes, the designer (as part of the multidisciplinary problem-solving team) can and must involve [themselves]. [They] may choose to do so for humanitarian reasons. Regardless of this, [they] will be forced to do so by the simple desire for survival within the not-too-distant future.

Next steps: design for human survival in our natural environment

Human-centred design perpetuates hyperbolic discounting (favouring immediate rewards over greater, later ones) and normalcy bias (assuming things will carry on as they always have). Through integrated, comprehensive and anticipatory design, we should be able to acknowledge these biases and take steps to counteract them.

In human-centred design we are still only looking at people’s preëxisting needs, tending more towards the personal. To be truly integrated, comprehensive and anticipatory, the design of our technologies needs to treat human survival as inseparable from the ecosystem that has supported us for so many millennia and from the societies that help us support each other.


  1. I thought it was hilarious at the time. I was young and arrogant. My personal journey as a designer is one of learning humility and approaching each new scenario with curious ignorance. ↩︎