In an interesting post on Daring Fireball, Gruber rhetorically asks:
Our desktop computers’ human interfaces haven’t fundamentally changed since 1984: keyboard and mouse/trackpad for input, overlapping draggable resizable windows on-screen, and a hierarchical file system where you create and manage document files.
Have you ever sat back, scratched your chin, and wondered when the computer industry will break free of these current interfaces which can be a hassle even for experts, and downright confusing (e.g. click vs. double-click) for the non-experts? Surely no one expects the computer interfaces of, say, 50 years hence to be based on these same metaphors and input methods. What’s the next step?
I believe the future of UI will be boring. Here are my counterarguments for Gruber and all who have faith that the future holds brilliant revolutions in human-computer interaction. It’s a minor point in his essay, but I will happily run away with it, go much too far, and beat it to death while its back is turned.
The good, but simple counterargument: the core metaphors computer interfaces are based on haven’t changed much in 300 years: text and numbers. We have wide keyboards to fit all the numbers and characters (including vestigial, emoticon fodder like ; and ^) we use when we type. This is true whether it’s Twitter or a master’s thesis or whatever it is we’ll be doing in 2150. Language is goofy. If you have goofy language, you get goofy UI. When we learn to think in binary, the door for cool UI flies wide open; until then, we have a large burden of old interactivity to design for.
A more sophisticated counterargument: look at other machine interfaces. Cars have had steering wheels for 100 years. Doors around the world have had knobs for decades and probably will for our grandchildren. Your umbrella and your gym bag have handles and probably always will. That time passes and new alternatives surface has surprisingly little bearing on change. Your umbrella might some day have a multi-touch interface on it, or work telepathically, but in the universe of all things, radical interface changes are rare. Interfaces in general are boring. Most interfaces in the world have a low bar: they need to not suck. It’s mostly only the interface designers who think the world will be saved by improving user interfaces. Sometimes we’re right. Often we’re not. The idea of a door itself is a kind of human interface. I’m sure there is a door design group somewhere angrily ranting about the lack of progress in door design.
The fancy argument is that dominant design repels most attacks. There are lots of bad ideas that were adopted first, became dominant, and have been impossible to shake. The DVORAK vs. QWERTY keyboard debate is a canonical example. It doesn’t matter if DVORAK is actually 5x better than QWERTY: the cost of relearning is perceived to be prohibitive, so most people never have the motivation to try, and there are huge reinforcements of the status quo (e.g. people who teach typing classes). Metric system vs. English units in the U.S. is another good example. A particularly stubborn example of dominant design is electric plugs. Studying why the world has 50 different plugs and voltages explains much about resistance factors against innovation. Or world peace.
The scariest argument is that big change often does not happen for logical reasons. From all my studies of actual paradigm shifts for The Myths of Innovation, big change often comes from the blind side, not from some sense of orderly progress. It’s when things get truly disrupted, someone goes out of business, a war starts, or perhaps an alien spaceship with working UI intact crash-lands in Silicon Valley, that there is enough momentum to sweep away the old thing. To be all cliché about it, black swans are a consistent source of revolution, more common than we like to admit. It’s entirely possible the key technology that replaces the GUI will not be something invented by UI designers: it may very well come out of the left field of some other industry, brain implants or something, where they didn’t know about all of this UI stuff we’ve worried about for years, and we watch as they clumsily sweep everything we know into the dustbin.
It would likely take something like Voice Recognition, if it actually worked, to provide enough momentum to allow big change: it would free us from backward compatibility, with no keys, no mice, and a small learning curve.
This is scary because with each new wave, new bad assumptions get made (like QWERTY) that seem minor at the time; later, as efficiency becomes important, people (especially designers) see the oversights and the holes, but by then it’s too late to fix them. There will be some stupid set of conventions with Voice Recognition, or whatever comes next, that the next wave of designers will blame us for not catching before it was too late.
The rookie trap designers and technologists fall for is confusing cool with useful. Or cool with good. 3D user interfaces, gestures, and VR or Minority Report style UIs generally suffer from what’s called gorilla arm: our bodies are simply not built to work this way. Yet these ideas get kicked around for decades and rarely show tangible usability advantages compared to conventional designs. They may have clear niche wins, and will be great for special-case problems, but that’s not a revolution. There are reliable ways to study claims and shed light onto the value of these ideas, yet each generation seems to ignore them, fueled by a romantic idea of what the future should look like.
The ego trap is the obsession with the new. We get bored so easily. If things aren’t changing we worry something is wrong. But we overlook how most of the world doesn’t change quickly at all. Most technology doesn’t change much. The wiring that powers your home, the plumbing that brings you water, and the roads you take to and from work all work in mostly the same way they always have. This is OK. Lack of upgrade is not a sign of failure.
Your genetic code is hundreds of thousands of years old, and it seems to be working quite well if you’re reading this. It might just be more important to consider how the tech in question will make your dreams, or someone else’s, come true. If you worry more about the ends, rather than the means, revolution in UI is less important than you suspect it is. I mean, I’m a writer. If a quill pen was good enough for Shakespeare, what do I really have to complain about?
And the kicker is: I haven’t checked recently, but I bet a huge percentage of desktop computing time is still spent reading, typing, or futzing with a mouse. If I’m right, and if you’re sitting, human ergonomics dictates limits to range of motion, and form factors that reduce repetitive stress. As long as these facts are true, well-designed keyboards and mice are hard to beat, and even if they aren’t, they’ll still be around for a long, long time anyway.