optimal (adj.): the most desirable measurable outcome possible under an expressed restriction
Perfection in design is not possible. You can never simultaneously satisfy all of the possible objectives for any created thing. For example, a ballpoint pen is an excellent tool for writing, until you need to erase. Yet a pencil fails you when you need to sign a legal contract. For any product there are no mathematics, and no algorithms, for deciding which objectives to satisfy in a single design, or even for accurately defining an optimal solution within any of those objectives. While there are methods that effectively evaluate and illuminate promising directions, they are sensitive tools that work as guides, not maps. All design involves too many possible objectives and solutions for complete confidence. An optimal design, in the broadest sense, is impossible.
Complete designs are tradeoffs
Whether you are designing a website or a microwave oven, you have at least three sets of overlapping criteria to deal with: business goals, customer needs, and available resources (time, budget, technology).
These aspects create lines of both synergy and tension: focus on one and it pulls against the others. For example, if your website is forced to ship in half the planned time (resources), you will have to reduce how well you satisfy business or customer goals. It’s a zero-sum game: choices that benefit one attribute often come at the expense of another. To call something optimal, it must be the most desirable outcome under a specific set of restrictions. But which restrictions do you measure? How do you define the desirable outcome across clearly conflicting objectives? What resources should be applied to satisfying which objectives? These decisions involve many domains and perspectives; to include them all, with equal rigor, requires a very complex set of considerations.
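The zero-sum tension above can be made concrete with a toy sketch. All design names and scores below are invented for illustration; the point is that with conflicting criteria, several candidates can each be "best" in some respect (Pareto-optimal) while none dominates the others on every axis:

```python
# Toy model: candidate designs scored 0-10 (higher is better) on three
# conflicting criteria. Names and numbers are invented for illustration.
designs = {
    "fast-ship": {"business": 9, "customer": 4, "resources": 8},
    "polished":  {"business": 5, "customer": 9, "resources": 3},
    "balanced":  {"business": 7, "customer": 7, "resources": 6},
}

def dominates(a, b):
    """True if design a is at least as good as b on every criterion
    and strictly better on at least one."""
    return (all(a[k] >= b[k] for k in a) and
            any(a[k] > b[k] for k in a))

# A design is Pareto-optimal if no other design dominates it.
pareto = [name for name, score in designs.items()
          if not any(dominates(other, score)
                     for o, other in designs.items() if o != name)]
print(pareto)  # all three survive: no single design beats the rest
```

Running this, every candidate survives: improving any one of them on one criterion means giving ground on another, which is the tradeoff the paragraph above describes in prose.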
Often it is the separation of decision-making power across domains that causes the most design problems. When designers are not privy to business and technological decisions and their implications, and business or development leaders are not aware of the customer impact of their choices, failures result. Each is making decisions as if there were only one domain of importance, which is rarely the case. Unless a team is balanced in its decision-making ability, and has leadership that can understand the cross-domain tradeoffs (or delegate them appropriately), good design is nearly impossible: one aspect of the tradeoffs will unintentionally outweigh the others.
Any discipline can fall into this trap, regardless of its political power. Designers can make choices that have outstanding usability or aesthetic value, but poor business or technological implications. Engineers can choose great technological models that fail the important user and business scenarios. The real secret ingredient is synergy, and mastery of tradeoffs, not isolated brilliance in any single domain. It’s not about finding a specific measurement or applying a singular technique: it’s having a rationale for combining perspectives that overlap, or conflict, with each other. There is no single metric for this in any industry or philosophy, and in part, this is why the idea of an optimal design is so strange.
A design is static, the universe is chaos
Even if you managed to find the mythical “optimal” solution within the cross-domain constraints of a design problem, there is another reason that optimal design is impossible: things change. On the day your code is finished, your design stops changing, but the world keeps moving. Your competitors may release a new version, or go out of business. Your budget may be cut in half, or the size of your staff may double. New information may become available to you, or you may find something that invalidates a key assumption you made. Any of the individual variables that you tried to design for can change at any time. And most important of all, your customers’ needs and desires change.
People’s needs and desires are always on the move. The user scenarios and tasks that you target in your work have a certain lifespan. Some segment of your users will develop new needs and goals that you did not plan for. The success you have with one design may expose users to new areas of your website, and they will discover new things they want to do that your design does not enable. This is what’s called an enabled problem, since it occurs only as the result of some initial success, but it still represents a challenge to the designer. How do you elevate an existing design to meet new demands, while simultaneously satisfying less progressive customers? How do you manage or measure the rate of change between these groups of users? It takes significant effort and resources to capture, much less analyze, data on these kinds of trends. Even if you could capture it effectively, what would it mean to optimize this kind of decision? To say you’re optimizing would require so narrow a definition of your goals that you are no longer focused on the design itself. Instead you are optimizing for one specific effect of the design: and that may be the only thing you can ever really optimize.
What’s worse, the needs of even individual users change. People outgrow things. They develop new opinions and change their habits. The same person who is thrilled when you launch may learn the basics through your design, and move on to a more elaborate offering from one of your competitors. A change in career, hobby, spouse or lifestyle can upset their relationship with your services enough for them to move on completely. How can you optimize for something as fragile and variable as human behavior? You can plan for it, and make assumptions, but that is never accurate enough to call it optimal.
No matter what you do, it gets worse
The last straw is that people are not homogeneous: they have conflicting needs and opinions about the things you make. With a user base of any reasonable size, you will always find contradictions in their needs, desires and performance in the usability lab. The same feature that one segment of users has difficulty with will sometimes be the exact feature another group of advanced users can’t live without. One side may outweigh the other in number, but the conflict still exists: you have to make decisions that weigh one group’s needs against the other’s. This is yet another dimension of design problems. It must be added to all of the others mentioned so far, multiplying the complexity of your design problem.
The smart designer really isn’t looking to optimize: instead she’s striving for meaningful balance. No choice is made in isolation. Even bug fixes, or purely engineering decisions, can represent this complexity: many subtle bugs, once encountered and learned, can be tolerated by people. Human beings adapt their own behavior to work around problems. But when the bug is removed, in some cases, it may force people to endure the negative experience of changing their behavior to relearn a task. They may initially suffer as a result of the removal of a design flaw. How can you optimize for this scenario? What algorithm can you apply to determine when this decision makes sense? The answer is you can’t. Tools or rules of thumb may help you to model the possible outcomes, and user data may express some salient points about human behavior, but there is no way to completely eliminate the uncertainty from these kinds of difficult decisions.
Optimize this, please!
Let’s say I’m wrong about the above. For the sake of argument, let’s say that it is possible to create a machine that can take in a bunch of design constraints and objectives, along with usability, business, and engineering data, and through iteration on its own internal algorithms produce an optimal design solution. We’ll call it the design machine. Let’s assume that the machine takes just as much time to work as the design/engineering team, but unlike the human team, its results are “optimal”. Let’s see what happens.
If the design machine takes time to operate, it is itself consuming a resource: time. While it is generating its solution, the engineering team is waiting. Somehow, the machine has to account for itself in its own computation of what is optimal. How much time should the machine spend coming up with an optimal design? But there’s the trap: as soon as it decides to take less time to conserve the time resource, is the result still an optimal design? There is no way to tell, because it’s a paradox. The word optimal is the wrong word for the dynamics of a design problem, because it implies the existence of a single algorithm for precise measurement, which, given everything so far in this essay, is never possible.
The paradox continues if you try to add more machines. You could argue that a second machine is needed, on the assumption that the optimal way to use the first machine is to have another design machine that decides when and how to use the first. But this puts you back in the exact same trap: the second machine is consuming resources too. The paradox runs off into infinity, because there is always another criterion or objective that you can decide to add to your evaluation of a design, and when you add it to the equation, your sense of the mystical “optimal” is forced to change.
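The regress can be sketched as a toy cost model. The overhead fraction below is invented; the point is only that every layer of "machine deciding how to use the machine below it" consumes part of the very budget it is supposed to optimize, so stacking meta-optimizers never produces a free, fully "optimal" answer:

```python
def remaining_budget(total_hours, layers, overhead_per_layer=0.2):
    """Toy model: each meta-optimizer layer consumes a fraction of the
    budget before any design work happens. The 20% overhead is an
    invented figure purely for illustration."""
    budget = total_hours
    for _ in range(layers):
        budget *= (1 - overhead_per_layer)
    return budget

for layers in (0, 1, 2, 5):
    print(layers, round(remaining_budget(100, layers), 1))
```

Each added layer shrinks what is left for the actual design work, and no number of layers escapes the trap described above: the optimizer itself is always one of the costs it must weigh.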
The key lesson is this: no matter how valuable or powerful your tools are, people still have to make tough choices, with limited knowledge of the important variables, and an inability to predict precisely what the outcome will be. This goes beyond design or business; it’s a fundamental quality of human existence.
A case study before the conclusion: The lesson of the Swiss Army knife
Everything has its price. The more criteria you try to satisfy simultaneously, the more constrained you are in satisfying them individually (this is why clear project goals are imperative). In this respect, a Swiss Army knife represents a fascinating design compromise. It provides 10 or 20 tools in a simple, small, portable device. However, as anyone who has ever tried to use one can attest, many of the individual tools don’t work very well. They are hard to use, often difficult to activate, and require great dexterity or strength to achieve one’s goals. If you could take only one tool with you on a camping trip, a Swiss Army knife would be a good choice because of its versatility. But versatility sacrifices specialization. A Swiss Army knife is not the best screwdriver, or weapon. Frankly, it’s not the best anything – it’s the best “manything”.
The Swiss Army knife reflects another view of optimization. Optimizing a design can reduce how robust, or resilient, that design is. To optimize, you have to push towards the upper bound of a specific measurement. But the closer you get to that upper bound, the further you are from the upper bound of any other measurement. If you design the best flathead screwdriver in the world, you’ll be out of luck when the world converts to Phillips screws. A more versatile design that can switch between different screwdriver types or sizes won’t be optimal for any specific situation, but it will be more robust and versatile in the world. In many design problems an argument can be made for intentionally not optimizing any one constraint: redundancy and flexibility can be highly desirable goals.
Note: An interesting counter is to say that versatility is just optimizing for non-optimization, but I’d argue that many of the points in this essay still apply.
The less than optimal conclusion
The important challenge to designers and engineers is to broaden the set of tradeoffs we are fluent in. The wider our perspective on the diverse viewpoints that come together in making something, the more effective we can be in making sure the outcome is something good. If we allow the common segregated relationships between design and engineering, or usability and business, to continue, then we enable important decisions to be made without the benefit of key perspectives. If we don’t strive for broader awareness, the work of designers and usability engineers is guaranteed to be uphill. At the same time, the work of executives, business managers and engineers will always be compromised by their inability to understand the true impact of the choices they make. It’s the quality of communication across disciplines that has the greatest impact on overall design success.
As much as we’d like to find a single method, procedure or formula for simplifying web design, it’s a red herring. Any problem involving interpretation and perspective cannot be derived from, or reduced to, a formula. Since the beginning of civilization, all forms of creative activity, including engineering and science, have been subject to the uncertainties of design. Experiments, theories and mathematics can be very helpful in the process, but it is people, hopefully skilled in cross-domain tradeoffs, who inevitably make the decisions. The secret is not to focus exclusively on the pursuit of tools or perfect methods: it’s the development and cultivation of true design intuition, through experience, in the people that use them.
(Note: this essay was originally titled The myth of optimal web design, and published first March 1999)