The future of UI will be boring

In an interesting post on Daring Fireball, Gruber rhetorically asks:

Our desktop computers' human interfaces haven't fundamentally changed since 1984: keyboard and mouse/trackpad for input, overlapping draggable resizable windows on-screen, and a hierarchical file system where you create and manage document files.

Have you ever sat back, scratched your chin, and wondered when the computer industry will break free of these current interfaces, which can be a hassle even for experts, and downright confusing (e.g. click vs. double-click) for non-experts? Surely no one expects the computer interfaces of, say, 50 years hence to be based on these same metaphors and input methods. What's the next step?

I believe the future of UI will be boring. Here are my counterarguments for Gruber and all who have faith that the future holds brilliant revolutions in human-computer interaction. It's a minor point in his essay, but I will happily run away with it, go much too far, and beat it to death while its back is turned.

The good, but simple counterargument: the core metaphors computer interfaces are based on haven't changed much in 300 years: text and numbers. We have wide keyboards to fit all the numbers and characters (including vestigial emoticon fodder like ; and ^) we use when we type. This is true whether it's Twitter or a master's thesis or whatever it is we'll be doing in 2150. Language is goofy. If you have goofy language you get goofy UI. When we learn to think in binary, the door for cool UI flies wide open; until then we have a large burden of old interactivity to design for.

A more sophisticated counterargument: look at other machine interfaces. Cars have had steering wheels for 100 years. Doors around the world have had knobs for decades and probably will for our grandchildren. Your umbrella and your gym bag have handles and probably always will. The mere passage of time, and the surfacing of new alternatives, has surprisingly little bearing on change. Your umbrella might some day have a multi-touch interface on it, or work telepathically, but in the universe of all things, radical interface changes are rare. Interfaces in general are boring. Most interfaces in the world have a low bar: they need to not suck. It's mostly only interface designers who think the world will be saved by improving user interfaces. Sometimes we're right. Often we're not. The idea of a door itself is a kind of human interface. I'm sure there is a door design group somewhere angrily ranting about the lack of progress in door design.

The fancy argument is that dominant designs repel most attacks. There are lots of bad ideas that were adopted first, became dominant, and have been impossible to shake. The Dvorak vs. QWERTY keyboard debate is a canonical example. It doesn't matter if Dvorak is actually 5x better than QWERTY: the cost of relearning is perceived to be prohibitive, so most people never have the motivation to try, and there are huge reinforcements of the status quo (e.g. people who teach typing classes). Metric vs. English units in the U.S. is another good example. A particularly maddening example of dominant design is electric plugs. Studying why the world has 50 different plugs and voltages explains much about the forces that resist innovation. Or world peace.

The scariest argument is that big change often does not happen for logical reasons. From all my studies of actual paradigm shifts for The Myths of Innovation, big change often comes from the blind side, not from some orderly sense of progress. It's when things get truly disrupted, when someone goes out of business, a war starts, or perhaps an alien spaceship with its working UI intact crash-lands in Silicon Valley, that there is enough momentum to sweep away the old thing. To be all cliché about it, black swans are a consistent source of revolution, more common than we like to admit. It's entirely possible the key technology that replaces the GUI will not be invented by UI designers: it may very well come out of left field from some other industry, brain implants or something, where they didn't know about all of this UI stuff we've worried about for years, and we'll watch as they clumsily sweep everything we know into the dustbin.

It would likely take something like voice recognition, if it actually worked, to provide enough momentum for big change: it would free us from backward compatibility, with no keys, no mice, and a small learning curve.

This is scary because with each new wave, new bad assumptions are made (like QWERTY) that seem minor at the time, but then, as efficiency becomes important, people (especially designers) see the oversights and the holes, too late to fix them. There will be some stupid set of conventions with voice recognition, or whatever comes next, that the next wave of designers will blame us for not catching before it was too late.

The rookie trap designers and technologists fall for is confusing cool with useful. Or cool with good. 3D user interfaces, gestures, and VR or Minority Report-style UIs generally suffer from what's called gorilla arm: our bodies are simply not built to work this way. Yet these ideas get kicked around for decades and rarely show tangible usability advantages over conventional designs. They may have clear niche wins, and will be great for special-case problems, but that's not a revolution. There are reliable ways to study claims and shed light on the value of these ideas, yet each generation seems to ignore them, fueled by a romantic idea of what the future should look like.

The ego trap is the obsession with the new. We get bored so easily. If things aren't changing we worry something is wrong. But we overlook how most of the world doesn't change quickly at all. Most technology doesn't change much. The wiring that powers your home, the plumbing that brings you water, and the roads you take to and from work all function mostly the way they always have. This is OK. A lack of upgrades is not a sign of failure.

Your genetic code is hundreds of thousands of years old, and it seems to be working quite well if you're reading this. It might just be more important to consider how the tech in question will make your dreams, or someone else's, come true. If you worry more about the ends than the means, revolution in UI is less important than you suspect it is. I mean, I'm a writer. If a quill pen was good enough for Shakespeare, what do I really have to complain about?

And the kicker is: I haven't checked recently, but I bet the vast majority of desktop computing time is still spent reading, typing, or futzing with a mouse. If I'm right, and if you're sitting, human ergonomics dictates some limits on range of motion and form factors to reduce repetitive stress. As long as these facts hold, well-designed keyboards and mice are hard to beat, and even if they aren't, they'll still be around for a long, long time anyway.


58 Responses to “The future of UI will be boring”

  1. Dorian Taylor

    The investment in keyboards, mice and opaque blobs of data crammed into strict hierarchies is huge, but consider also the investment in things like the logical separation between memory and storage or even the Von Neumann architecture itself. When I consider this problem I often think the major change has to begin from the inside out (like it kinda sorta did with mobile devices).

  2. Scott Berkun

    I think the human element is the key problem with UI change that makes it different from other kinds.

    The huge changes in cell phones have not really been at the UI level. We still push buttons, we still hold them up to our ears, and we still talk into them. The architecture behind all this (cell towers, packet switching, etc.) would be incomprehensible to Alexander Graham Bell, but the phone itself would likely still make some sense.

    Something similar would be true for Sholes (one of the inventors of the typewriter).

  3. Jonas Feiring

    I think you are wrong.

    I haven't used a mouse in ages. Being mostly on laptops for the last ten years, my main pointing tool is the touchpad, which is increasingly multitouch friendly. I didn't really think twice about suddenly scrolling and right-clicking with two fingers.

    Change happens under our nose all the time. But we fail to notice, and only see it when we look back. Look back. We used to interact with a phone by turning a numbered wheel. It’s not long ago.

    Future of UI will be delightful, and we won’t notice it before it’s something we take for granted. It won’t be a revolution. It will sneak in the back door while we are looking elsewhere. :-)

  4. Paul OFlaherty

    Necessity is the mother of invention, and it is out of necessity that new ways of doing things and new UIs become the norm.

    Also, people are creatures of habit, and it's very hard to break those habits. People gravitate toward what they know, which is why massive departures from a known UI are hard for people to accept, and hence they fail to become popular.

    Necessity is the one game changer that will make people adopt a new UI en masse.

  5. Dan Saffer

    I agree in general with your premise, Scott. Old technology is extremely hard to get rid of entirely, after all. Radio didn't disappear when TV arrived.

    But the things that are disrupting old technology currently are, well, other old technologies. Touchscreens, gestural interfaces, AR, etc. have been around for decades themselves, waiting for the technology and the world to mature enough for them. Will some of these technologies fall by the wayside? Sure. But with some of them we've just been trying to figure out What They're Good For, and that's the whole trick. I might never replace my boring physical keyboard with a touchscreen, but apparently mobile phones are a good use of the technology, as are ATMs and check-in kiosks at the airport.

    For things like gestural interfaces, we just haven't had the "of course" moment yet. Of course you'd use gestures for X instead of old Y. Or, more likely, there is no Y: we have to do it manually or simply can't do it at all. I saw a robot that can flip cars over. If you could control that robot with gestures, a mechanic might find that pretty useful.

    So I’m not ready to write off new technology yet, even though most of us will continue to make products with the old technology. But even those, properly employed, can be disruptive and exciting too.

  6. Jay Fienberg

    Good piece, but I wonder if you’ve embedded an assumption that makes some of your argument circular? You are talking about the computers we individually use for writing having UIs we individuals use for writing.

    I also use a computer that looks like a piano to make sound: a totally different UI. And I use a computer when I drive; it's controlled by the gas pedal on my car (I think?). And then there's the Wii, also a computer.

    So, part of how computer UIs change in non-boring ways is by adapting to other mechanical UIs that were not previously computer-based. As well, the non-mechanical components enable hybrids; my synth has a pitch wheel, for example.

    I'd also add that there's huge room for change when computers start serving four or more hands, rather than only two. What if some new tablet is optimized for four-hand activities?

  7. Scott Berkun

    Paul: invention has many mothers – necessity is only one of them.

    And even if it were the sole parent of invention, I'd love to hear your argument for why people *need* to replace their GUIs, keyboards and mice.

  8. Joe McCann

    Not sure how much UI development you've ACTUALLY been doing lately, but those "cool" things like gestures and Minority Report stuff are actually being implemented and studied in useful ways. There are countless times I've been on my Nexus One where I thought, wow, a simple gesture sure would work better than clicking, tapping, clicking, and tapping…

  9. Scott Berkun

    Joe: The question I grabbed from Gruber was about desktop computing. There are some differences for mobile, as the ergonomics are different. But fewer of these changes will migrate back to PCs than people think, since the form is different.

    Also, again: find me a study that shows, for the 10 most common tasks people do with computers, that gestures are a) faster to use, b) easier to learn, c) anything. We don't have to guess about these comparisons; it's easy to get data.

  10. Jake

    I would add one other point: the cost not just of relearning and opportunity, but of dollars and cents. You can buy a keyboard and mouse today for maybe $20 or $30, and they usually come with computers. You can buy a snazzy Kinesis Advantage and Evoluent Vertical Mouse for under $400 if you care about such things. How much would a “new” interface cost? How much could it cost, when at least improved interfaces (Dvorak, maybe, and Kinesis, definitely) aren’t much used?

    Like you (I think), I’d guess that some new interface will arise when some new need arises — much like touchscreen interfaces appear to be becoming more common on cell phones, where they make sense.

  11. Murli

    Scott, 50 different types of plugs and voltages is innovation, not lack of it, albeit not the sort one might desire. The Mac/Win type of GUI introduced standardization; before them, there was a variety of interface paradigms and expressions of them. The world is losing hundreds of human languages (and cultural variety) annually due to standardization (the opposite, in some sense, of innovation).

    Text and numbers are not metaphors; they are data in their symbolic essence. We may not need keyboards if we find a way to transmit data directly from our brains, either through wires or wirelessly, and this changes the interface metaphor even if the type of data (text and numbers) doesn't change.

    It is pretty widely accepted in the interface/experience design field that the interaction metaphors that worked at one level of scale 25 years ago don't work very well any more, either at the greatly magnified scale of today or in the different use environments of technology (mostly mobile, on a diverse variety of devices, rather than the standard desktop context for which they were originally developed).

    Gruber raises an interesting and important issue. While your post was, as always, entertaining, insightful, and thought-provoking, I believe you missed Gruber's point on this one.

  12. Dorian Taylor

    Ah, I ought to clarify. My remark about mobile devices was with regard to them as a novel category of computer and/or means of controlling computers. What I meant was that the smartphone form factor had to happen in order to even make that single incremental change from WIMP to whatever you want to say is going on there. Likewise as Jay mentioned, there are plenty of other devices (pianos, cameras, whatever) that are effectively computers in a skeuomorphic chassis.

    So yeah, the future of PC UI is most likely locked within certain confines, and likewise the mobile UI. The windows for sweeping UI innovations in those categories are effectively closed. (Arguably, it would be hard to communicate what a given device was if it didn't bear some superficial resemblance to something familiar.)

  14. Kevin Granade

    I got a kick out of this, as I'm not using the kind of interface you describe.
    I run Linux (currently Debian, Ubuntu, or Gentoo, depending on the system) with the ion3 window manager. While I DO have a hierarchical file system, the vast majority of the time when I access a file I search for it based on name or other attributes (size, file extension, creation date, latest modification, etc.).

    My window manager does not present windows to me as overlapping windows on a single desktop; instead I have an arbitrary number of desktops (usually 10, but a simple command can change the number), each of which has different windows displayed. On one desktop I have Thunderbird and Pidgin side-by-side (email and instant messaging), on other screens I have one to four separate terminal windows or text editors running, and I usually have Songbird and Firefox (a media library app and a web browser) running maximized in their own desktops. Occasionally I have xpdf and/or OpenOffice running on miscellaneous desktops displaying formatted documents, and even more occasionally I have games or movies running, once again in their own desktops. Switching between these desktops is simply a matter of hitting the Windows key plus a number key to jump to a specific desktop by number, or hitting the "next" or "previous" buttons on my keyboard to move through desktops one at a time.

    As an aside, when navigating in Firefox, I have the application in "full screen" mode, so there is NO menu taking up space on my screen; the entire screen is filled with the web page, which is what I actually WANT to look at. All navigation is performed by either a gesture (using the FireGestures plugin), a context menu (right click and a menu pops up), or by mousing up to the top of the screen, which causes a small version of the usual Firefox menu to appear, for entering URLs, performing searches, etc.

    The punchline to all of this? The ion3 window manager, which enables most of this behavior, was written by a single, independent programmer. No commercial affiliation, no research lab, just a guy who got sick of the status quo in window managers.

    Perhaps the problem isn't that the "industry" hasn't come up with the next best thing, but that you expect the "industry" to do it in the first place. Programmers all over the world are steadily working on making computers work the way they want them to, instead of the way businesses want them to: that is open source.

  15. Thomas Petersen

    Good article, Scott, and I mostly agree. But you're creating a kind of strawman, IMHO.

    The future of UI design on computers will not advance much as long as our input and output devices, let alone our needs, stay as they are.

    That is probably not something anyone who deals with interface design would disagree with.

    But you are forgetting that games have some very different interfaces and often different approaches to information, even on a desktop PC. We haven't even started talking about Wii controllers or Guitar Hero or LittleBigPlanet.

    What is important and exciting is WHAT you will be able to access via a UI, not the UI in itself.

  16. Jesse

    I use gestures all the time on my laptop. Scrolling with two fingers, using three to navigate forward & back. They’re already so ingrained in my muscle memory that apps that don’t support them feel broken. Momentum scrolling on the Magic Mouse also feels like a major improvement.

    Incremental UI changes can accumulate to make a big difference, even if we don't have the benefit of a time-lapse to show it.

  17. est

    There's another situation for innovation: country differences.

    Just a simple example: the US had wide usage of 1G mobile phones, then China built a huge 2G GSM market; now the US goes from 1G directly to 3G, so China should prepare for 4G now.

    I think this cycle has not yet been facilitated by current globalization, plus peer competition can spur more innovation. It's like a cold war without hostility.

  18. mpg

    > Studying why the world has 50 different plugs and
    > voltages explains much about resistance factors
    > against innovation.

    I’ve often wondered what the underlying factors are on that one. Do you have any readable cites for this?

    (I recall reading a book some years ago about the adoption of standardized cargo containers and learning, to my naive horror, that the factors hindering their adoption were more about unionization and business practices than anything to do with the container design itself… a design which, as we know, has subsequently proven over the years to be remarkably resilient and amenable to a number of unplanned usages.)

    -mpg

  19. Sean Crawford

    Some of these ego/cool traps last too darn long. I recently paid ten extra dollars for my telephone, just to have one with a handset I could lazily curl my fingers around, instead of the still too common futuristic fingertip one. (Which I had put up with for fifteen years.)

    My worst example: back in the late 1960s I read an encyclopedia article on futuristic stuff that would never happen. One example was tiny wheels on a baby carriage (perambulator): never, because it would be harder to push. In the 1980s my sister had to push a pram down a rural gravel road, and she said she was very frustrated that she could not buy one with decent wheels. They are a little bigger now, but most are still very small. (Although I suppose ball bearings move more easily now, witness the resurgence of skateboards in the 1980s.)

  20. Christopher Fahey

    “When we learn to think in binary, the door for cool UI flies wide open.”

    I think you might have that backwards. To me, it is precisely the limitations and idiosyncrasies of the "human interface" that make computer interfaces effective, fun, compelling, or even "cool". The core UI challenge will always be to make machines that work with human fingers, human eyes and ears, human language and human numbers: that is, within the limits of our physical and cognitive abilities. Even when we someday have direct neural implants allowing us to access and use technology right from our brains, it will still be the interface designer's job to define the thoughts and feelings we want users to utilize in creating and managing those new "user experiences". In this brave new world, UI designers will look a lot more like playwrights and politicians than industrial engineers.

  21. Shell

    Technology has a way of taking unexpected turns. 'Base' devices like keyboards, mice, pen tablets, and the increasingly ubiquitous 'touch' technology are likely to be here unchanged for a long time. But there are other interesting input devices out there too: Sony's VR pet that reacts to movement, Nintendo's Wii input, motion and locational input in iPhones, not to mention console joypads with tilt, velocity, and vibration.

    These technologies will continue to drive new ways of building UIs, and seed new ways of interacting with tech that we've just not thought of yet.

    I don't think it's possible to really predict where we'll be in, say, 20 years: 20 years ago none of the technologies above existed.

    Will we still be using keyboards in 20 years' time? Probably, in some form. We still use pens and paper after thousands of years, too.

  22. Mike

    Don’t forget about all the money floating around Silicon Valley, and all the people who have huge financial incentives to hype the idea of the revolutionary new thing that changes everything (and can be patented).

  23. Dan Milham

    A long and eloquent way of saying that things which are not broken need not be fixed.

  24. Duncan Wilcox

    What if, instead of calling this hypothetical future device a "desktop computer", we named it based on what we do with it?

    A "web consumer" appliance is likely to be quite unlike current desktops. Think iPhone apps specific to a website/service; now think of an appliance whose industrial design resembles that UI.

    An “image processing” device is unlikely to ever have the mass distribution that would justify specialized hardware, and in general I see content creation apps as the last genre of app that would work on anything but a general purpose computer.

    Now consider the people who are perfectly happy with applications that take a "library" approach: iTunes and iPhoto (on the Mac), and to some extent Picasa. These applications let them be fully productive without ever needing to know the details of the filesystem or multi-application interaction.

    This is clearly unacceptable for power users, and that's why, in part also because of habit (which is not necessarily a positive thing), current desktops will remain largely unchanged for the foreseeable future.

    However, there is a clear need in the market for user interfaces that trade some of the flexibility, and the complexity it brings (tweaking the filesystem directly, dragging and dropping stuff around the place, single vs. double clicking), for a radically simpler user interface.

    The complexity of a user interface is related to how many concepts users have to learn to complete their task. For a large number of tasks things like filesystem layout or overlapping window management are totally unnecessary concepts.

    Devices with this new UI will replace current desktop UIs for a subset of their functionality, effectively pushing the whole desktop category up (in price point mainly, because it will also replace it in volume). It’s the sort of UI I’d be happy to have my mom use.

    This seems so clear and evident that I’m sure it’s just a matter of time. I’m hoping it will be Apple’s new “tablet”, but who knows.

  25. Tiago Pedras

    A few notes to complete the part where you say that the revolution will come from somewhere else.
    Voice recognition will probably never be an actual alternative to our UI methods. Its main problem is: how are you going to interact and talk at the same time? Unless computers start doing things themselves, that change won't come for a long time.
    And the same goes for telepathic / brain implant solutions. The brain is indeed much too complicated for us to see these kinds of solutions anytime soon.

    The almost mechanical part of your brain that lets you type on your keyboard while you speak, or drive your car while thinking about your work day, still needs a hands-on approach to interfaces.

  26. Jeff

    Absolutely true, but I think you missed the point that boring is GOOD, in the same way you explained in Confessions that people respond well to the familiar. There is a reason the Dvorak keyboard never caught on. That is a very good point about plugs and voltages in different countries, though.

    It occurs to me that refining the UI—or rather the whole “user experience”, or UX—is a difficult task. Maybe it reached its pinnacle in the laptop, and what is keeping devices like netbooks at bay is that they are not really improving on that UX in a substantial way.

    I actually just wrote a post about user experience as well. After the firehose-blast of news from CES, perhaps it is on everyone’s mind.

    http://jefro.wordpress.com/2010/01/14/user-experience-is-king/

    Keep up the good work! After reading Confessions, yours is now the first blog I seek every morning.

  27. Matías Halles

    UI in every aspect is all about being seamless to the user. It's about taking for granted the interface you are using. Keyboard/mouse to computer, remote to TV, hand to a glass of milk, mouth to breast (milking breast… or not). And unless someone is as mind-blowingly intelligent as Tesla, and can handle ALL areas of communication with incredible performance, there's always gonna be the need for different actors and *stuff* that affects UI development as a whole. There's a whole lot of factors that are (and will be) constraining interface development, and unless someone makes a real breakthrough in any of those factors (most people don't need to type 5x faster), there's no active need for interfaces to change. And that's the real beauty. There's no need to change interfaces unless you sense something can be done more efficiently and have a solution for it, which is the most difficult part of all. There's always a need to make stuff more efficient, but you have to have the motivation to look for the problem, which most of the time is not as obvious as in other knowledge areas.

  28. John Jerimiah

    @Dan Milham: thank you. You summed up my reaction to this long-winded article in one sentence.

  29. RogerV

    Just a minor correction: Francis Bacon was the actual author of the works attributed to the play actor, William Shakespeare. The actor was barely able to scratch out a legible signature of his name.

  30. Alastair

    Metric inherently better than imperial? Not really. Metric is great for really small and really large numbers and computations, but if you are a joiner or a builder: 12 can be divided by 2, 3, 4, and 6; ten can be divided by 2 and 5. Which gives you the most flexibility? This is the heart of imperial units' longevity.

  31. Morten

    I kind of agree, but have you taken a look at perzi.com? A totally new approach to the way the user interface is created. (And the product it actually produces is pretty innovative as well.)

  32. Ryan Bell

    Your statement about ends vs. means made me think of another article I read recently that makes one very good point:

    “The Real Work is not formatting the margins, installing the printer driver, uploading the document, finishing the PowerPoint slides, running the software update or reinstalling the OS.

    The Real Work is teaching the child, healing the patient, selling the house, logging the road defects, fixing the car at the roadside, capturing the table’s order, designing the house and organising the party.”

    http://speirs.org/blog/2010/1/29/future-shock.html

    I would add, “The Real Work is not designing the UI – it’s what you enable people to do with that UI”. A user interface is a means, not an end. Which can be hard for those of us in UI design and development fields to remember.

  33. Matthew Smith

    This is scary because with each new wave, new bad assumptions are made (like QWERTY) that seem minor at the time, but then, as efficiency becomes important, people (especially designers) see the oversights and the holes, too late to fix them. There will be some stupid set of conventions with voice recognition, or whatever comes next, that the next wave of designers will blame us for not catching before it was too late.

    With the above in mind, I'm particularly curious about the effects of the infrastructure that something like the iPhone is beginning to create, and where the pitfalls are. I utterly agree with your thinking on this and wonder if there is any regular method of addressing it when groups like Apple remain distanced from even their fanboy constituency.

  34. Phil Simon

    I just listened to the Spark podcast on this post. I’m certainly no expert in UI and it’s foolish for me to think that we have “the best” tools right now.

    I think the point Scott makes about "technology" not being the thing that allows people to write a book or make a movie is a critical one. It seems silly to think that we need a better mousetrap in order to create and innovate.

    Great post, podcast, and comments.

  35. Sean Crawford

    Speaking of doorknobs staying the same: in British Columbia they are passing (or have passed) legislation that all new homes (often built for young people) must have lever handles instead of knobs, like in Europe, for the sake of future senior citizens.

    Speaking of knobs and people hating change, I recall a magazine feature: when I was a boy, a man invented a doorknob that did not go click, so that a sleeping child would not be awakened. His "better mousetrap" did not sell because people were so used to hearing a click.

