Humans Acting At Scale

18 Nov 2022

The aphorism "A person is smart, but people are stupid" has always stood out to me. It never sat right. If an element of a set has a property, then the set ought to exhibit the same property. Right?! Well, what are we really trying to communicate when we say this?

I think part of what we're trying to say is that when we picture a given person in a situation, we're probably imagining that person at their best. Any given person in their best moment, ready, willing, and focused, is going to behave as intelligently as they possibly can.

But what are we saying about "people" in groups? We're no longer imagining one situation, but all situations. And not when any of those people are at their best, but everywhere along the bell curve of their possible performance.

We're comparing an individual's high-water mark with the average of a bunch of people across their good and bad moments and days. No wonder! I think there is a more charitable description we can give this than the common aphorism. That is: Humans Acting at Scale.

Why am I interested in this as a software engineer? I build systems that other engineers operate. I build products that users & customers operate. When building these systems and products, there are a lot of decisions to make along the way. When we're making these decisions, are we thinking about our users and their abilities?

Are we thinking that they will only be operating the system or product after 10 hours of good sleep, after their first cup of caffeine, in their favorite outfit, without a care in the world, and totally focused on doing an excellent job today? Or do we think that they'll be operating the system or product with one hand, because their other hand is holding the phone while they try to get someone to come fix their furnace, after a bad night's sleep because their dog was up until 1am vomiting? If they drop their phone on the keyboard, or the cat walks across it, what happens?
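
One way to make that last question concrete: a destructive action should never be reachable by a single stray keystroke. Here's a minimal sketch of a two-step "arm, then fire" guard. The names (`DestructiveAction`, `ARM_WINDOW_SECONDS`) are hypothetical, not from any particular framework, and the pattern is one of several ways to get the same protection.

```python
import time


class DestructiveAction:
    """Guard a destructive action behind a deliberate two-step sequence,
    so a dropped phone or a cat on the keyboard can't trigger it alone."""

    ARM_WINDOW_SECONDS = 5  # how long the action stays armed before disarming itself

    def __init__(self, name, action):
        self.name = name
        self.action = action
        self._armed_at = None

    def arm(self):
        # Step one: a deliberate gesture, separate from the one that fires.
        self._armed_at = time.monotonic()
        print(f"{self.name} armed; fire within {self.ARM_WINDOW_SECONDS}s to proceed.")

    def fire(self):
        # Step two: only runs if the action was armed recently.
        armed_recently = (
            self._armed_at is not None
            and time.monotonic() - self._armed_at < self.ARM_WINDOW_SECONDS
        )
        self._armed_at = None  # always require re-arming, success or not
        if not armed_recently:
            print(f"{self.name} is not armed; ignoring stray input.")
            return
        self.action()


# A stray fire() (the cat's contribution) does nothing without a prior arm().
wipe = DestructiveAction("wipe-database", lambda: print("tables dropped"))
wipe.fire()  # ignored
wipe.arm()
wipe.fire()  # runs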

How many things does this operator need to remember (Knowledge in the Head) versus things the system is showing them and giving them feedback on (Knowledge in the World)? How are details surfaced to the operator? How are the expected impacts of their operations communicated back to them? Do they get a clear confirmation screen? Are their actions reversible?
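
To make those last two questions concrete, here's a minimal sketch of a confirmation prompt paired with a reversible "soft delete": the record is flagged rather than destroyed, so the operator's action can be undone. All of the names (`Record`, `Store`, `confirm_delete`) are hypothetical, and this is one illustration of the idea rather than a prescription.

```python
from dataclasses import dataclass


@dataclass
class Record:
    name: str
    deleted: bool = False  # a soft-delete flag instead of real destruction


class Store:
    def __init__(self):
        self._records: dict[str, Record] = {}

    def add(self, record: Record):
        self._records[record.name] = record

    def delete(self, name: str):
        # Reversible: flag the record rather than destroying it, and tell
        # the operator exactly how to undo what they just did.
        self._records[name].deleted = True
        print(f"Deleted {name!r}. Restore it with undo_delete({name!r}).")

    def undo_delete(self, name: str):
        self._records[name].deleted = False
        print(f"Restored {name!r}.")


def confirm_delete(store: Store, name: str):
    # Knowledge in the World: state what is about to happen and require the
    # operator to type the name back, rather than just pressing Enter.
    typed = input(f"This will delete {name!r}. Type its name to confirm: ")
    if typed != name:
        print("Names didn't match; nothing was deleted.")
        return
    store.delete(name)
```

Requiring the operator to type the name back moves the burden off their memory and attention; the soft-delete flag means even a confirmed mistake is recoverable.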

Once we're working with a product or system that is operated by a group, or multiple groups, of people, we have to stop leaning on the fact that we know one of the individual humans operating it. "I know Steve, Steve is great at his job. This is a little complicated, but he can handle it." We have to consider how many other people are also operating it, who are no doubt also good at their jobs, but who are all simply human, and whose ability to perform & operate varies all the time due to countless factors.

Demanding a standard of "just be better and don't make mistakes next time" is not a valid approach to take with operators.

Designing with empathy is not about pitying people, or even thinking you're better than them (you're not). It is about designing for the worst outcomes and making sure your system and product give humans what they need in the face of those worst outcomes.