Even if you don’t know much about it, you’ve probably heard the word “usability.” Usability is the ease of use and learnability of a human-made object. It is closely related to, but not limited to, ergonomics, human-computer interaction, and cognitive psychology. Terms like “usability” are cropping up more often alongside terms like “user experience” and “customer experience” because product design is now more user-focused than ever.
The study of usability is as applicable to technical communication as it is to e-commerce websites or touch-screen tablets. After all, we know that users generally don’t want to use documentation. They may not even want to use the product or process that has been documented. We also know that users don’t tend to abandon things because they have completely failed at them. Users abandon things earlier on: when they become annoyed, lose faith in the product, or experience unexpected results that confuse or frustrate them. This combination of disinterest and a willingness to give up makes the relationship between documentation and users tenuous at the best of times, making usability extremely important.
Usability is a broad and complicated field of study, but it is possible to extract some conceptual principles from it and apply them to improve technical communication. Though not an exhaustive list, what follows are five important usability considerations that I apply at all times.
1. Claims about how people use, understand, or interact with something are not useful if those claims are not supported by empirical data.
A colleague and I had the opportunity to design and execute usability tests on a new online help system. In the design stage of the online help system, I had categorized a very important topic in a specific section, where I felt it most appropriate. Not all of my colleagues agreed, and after some discussion, we categorized the topic elsewhere. So who was right?
In the usability test, almost every tester looked for this topic in the same place: where none of us thought anyone would look for it. In truth, we were all wrong, so wrong that most of our testers could not even find this topic.
What fascinated me most, however, was the passion with which we argued that people do abc or people want xyz: “I’m telling you, people are going to blah blah.” Although all of us had strong convictions about user behavior, empirical data rendered those beliefs worthless.
2. Users do not understand why things work the way they do.
We understand how to use computers because we have been conditioned to do so. For example, when you see underlined words on the internet, you know you can click on them (and if you can’t, you should be able to). Yet nothing in human evolution compels us to interact with underlined words. The only reason for this expectation is the decades-long convention of underlined hyperlinks in web browsers.
The implication is simple: you can’t make up your own standards and expect users to understand them. For example, if you decide that
- all underlined links will open in the same browser tab
- non-underlined links will open in a different browser tab
- red links will take you to an external website
- blue links will take you to another page within the current website
you cannot assume that your users will make the same associations (they won’t). Nor can you assume that if you enforce these standards long enough, users will eventually learn them (they probably won’t). People spend most of their lives not using your documentation. Their only reference point for how your documentation works is their experience with the world outside of it.
3. Users have no idea how your documentation is structured, where they are within that structure, or how the current topic relates to all the other topics.
Even the best documentation can fail to account for what I consider one of the prime problems of information design: information-seeking behavior. Too many of us have an overly simplistic view of the information-seeking process, which Peter Morville and Louis Rosenfeld call the “too-simple” model. I paraphrase this view in the following diagram.
The act of finding information is a complicated, messy psychological process. Users may not know how to phrase their query. Users may jump from topic to topic in unpredictable ways. The nature of the user’s query may change as they interact with the documentation and learn more about what they’re looking for. In online help systems, users may jump between the table of contents, index, sitemap, and search. In a printed document, they may skip over prerequisite information in search of a quick answer further in. What is certain is that users will be relatively bad at finding what they set out to find. They’ll often come close, but close isn’t good enough.
Users need constant “safety nets” to help them deal with their disorientation. In online help, you might have worked very hard to build an excellent information architecture, but that will be lost on a person who arrives at a page through search. In a printed manual, you might have nested subtopics nicely within your major topics, but that won’t help someone who has quickly scanned your document in search of a particular paragraph. Contextual navigation, like breadcrumbs (for online help), inline links (for online help), cross-references (for PDF documents), or “See Also” sections with directly related tasks, gives users context about what they’re seeing and the ability to navigate similar pieces of information until they find an answer.
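Breadcrumbs, in particular, don’t need to be hand-maintained; they can be derived mechanically from whatever topic hierarchy your help system already has. A minimal sketch, using an invented topic tree (the topic names and titles here are purely illustrative):

```python
# Hypothetical topic hierarchy: each topic maps to its parent
# (None means the topic is at the top level).
TOPIC_TREE = {
    "installing": None,
    "upgrading": "installing",
    "rolling-back": "upgrading",
}

# Display titles for each topic.
TITLES = {
    "installing": "Installing",
    "upgrading": "Upgrading",
    "rolling-back": "Rolling Back an Upgrade",
}

def breadcrumb(topic: str) -> str:
    """Walk up the tree from the topic to the root, then reverse
    the trail so it reads root-first."""
    trail = []
    while topic is not None:
        trail.append(TITLES[topic])
        topic = TOPIC_TREE[topic]
    return " > ".join(reversed(trail))

print(breadcrumb("rolling-back"))
# → Installing > Upgrading > Rolling Back an Upgrade
```

Because the trail is computed from the hierarchy itself, every page a searcher lands on shows where it sits in the larger structure, at no ongoing authoring cost.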
Think of this like going on vacation. An airplane can take you to a city’s airport, but once you’re there, you might not know where your hotel is in relation to the airport. Even if you do know where the hotel is, you still need a way to get there. The airplane took you very close, but not to your final destination. Moreover, the airport can’t predict whether you want a taxi, a bus, a rental car, or nothing at all, so numerous options should be provided (unobtrusively) to accommodate all types of travelers. This maximizes traveler satisfaction and encourages a future return to the city.
4. Users are informavores.
By now, the concept of information foraging should be familiar to technical communicators. Information foraging theory posits that humans look for information the way animals forage for food: in a way that maximizes net intake per unit of time. In Jakob Nielsen’s words, “to say that web users behave like wild beasts in the jungle sounds like a joke, but there’s substantial data to support this claim.”
Both in and out of formal usability test sessions, I have watched users behave in a way that resembles lions on a savannah. The user was given a task, then evaluated his or her options. Upon picking up a scent, the user reacted quickly, clicking on whatever had the scent of success. The user did not necessarily take time to think about what he or she was clicking on. Like lions spotting prey, what mattered was that something “smelled” like a high-value yield. The scent was pursued, not intellectually, but instinctively, until it no longer seemed promising (even if it actually was promising), at which point the chase was abandoned, and a new path was chosen. This behavior agrees with the underlying tenets of information foraging theory.
The practical implication of this is that how you choose to organize, present, label, and write information has an important, immediate, and observable impact on your users’ experiences with it. Even the smallest usability mistakes will eventually drive task failure in a large enough sample of users.
5. Obtain and act on usability data.
Just as usability experts can’t rely on generic rules of thumb, established standards, and third-party research, neither can you. You may be a usability savant, and your documentation may seem highly usable, but how can you confirm this without data (see consideration 1, above)?
There are many ways to evaluate the usability of your documentation. Here are a few.
- Do a usability test. Dollar-for-dollar, usability testing is one of the most cost-effective ways to improve whatever it is you make. Even if you don’t have the budget to invite your customers to test sessions, consider inviting co-workers, family, or friends. Few things are more beneficial than watching real people try to accomplish real tasks with your documentation, one-on-one, in real time. For helpful information, see Introduction to Usability Testing and the User Testing Articles from Nielsen Norman Group.
- Do a card-sorting exercise. Bad information architecture is the second-largest barrier to usability after a bad (or non-existent) search system, and a search system is not a replacement for information architecture. Card sorting is a quick and easy exercise that asks a tester to group a selection of topics in a way that makes sense to them. It may shock you at first, but you will find that your perception of good information structure correlates only weakly with your users’ perception of the same. Optimal Workshop produces some helpful tools, including OptimalSort.
- Track user activity with analytics. If your documentation is online, see how users interact with it. Find out how many people read your documentation, what files they access, where they come from, where they click, how many files they look at, and how long they stay on a page. Ask yourself hard questions. Why are users ignoring something you thought was important? Why do they keep looking at information that doesn’t seem important at all? If an entry page has a high bounce rate, what’s wrong with it that prompts users to leave so quickly? Google Analytics is a great place to start.
- Look at search logs. An offshoot of analytics, search logs let you see what users are searching for. How do the searches correlate with the analytics? Are users looking for the botanically correct “Solanum lycopersicum” when you use the more familiar term, “tomato”? Without search logs, how could you know? Most people are incapable of using search effectively. Does your search system have a means of dealing with this, or is the output as bad as the input? Moreover, what kinds of business trends can you observe in what users search for? Google uses its own search logs to power an early-warning system for influenza outbreaks. See how.
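Even without a dedicated analytics product, raw search logs can be mined with a few lines of code. A minimal sketch, assuming a hypothetical log format of (query, result count) pairs, that surfaces the two things worth acting on first: what users actually type, and which queries return nothing:

```python
from collections import Counter

# Hypothetical search-log entries: (query, number_of_results).
# Real logs will need parsing into this shape first.
searches = [
    ("tomato", 14),
    ("solanum lycopersicum", 0),
    ("tomato", 14),
    ("reset password", 6),
    ("solanum lycopersicum", 0),
]

# Which terms do users actually type?
top_queries = Counter(q for q, _ in searches)

# Zero-result queries reveal vocabulary mismatches between
# your documentation and your users.
zero_results = Counter(q for q, n in searches if n == 0)

print("Top queries:", top_queries.most_common(3))
print("Zero-result queries:", zero_results.most_common())
```

A recurring zero-result query is a strong signal to add that term to your index, your synonyms list, or the documentation itself.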
By Ian Alton, Technical Writer
Ian is a technical writer in the software development industry. His primary interest is making information more findable.