Usability

Usability, a term that refers to how easily and effectively a person can use a document, website, or product to achieve a purpose, is an integral element of workplace and technical writing. Usable writing helps readers quickly locate, understand, and apply information to complete their tasks. Online, usability is critical for survival. If users can’t figure out how to purchase something, or can’t find the information they need, they will quickly move on to another site that can meet their needs, and the original site loses money. In the office, if employees spend large amounts of time puzzling over unclear documents or deciphering poorly written instructions, the company loses money through wasted time.

Usability testing became a key component of technical writing in the 1960s. The burgeoning computer industry required user manuals, and engineers realized that it was important to know how users interacted with the materials and the technology. When personal computers became available in the 1980s, and when the 1990s brought the World Wide Web into households and businesses, engineers, designers, and technical writers recognized that usability research and testing were essential for understanding how people used and interacted with computers and documents. Usability testing was implemented in the development of programs, software, and instructional materials.[1]

As a technical writer in the 21st century, you must incorporate some form of usability evaluation or testing into your document design process. Without some level of testing, you won’t know whether the document does its job, or whether your reader is annoyed or frustrated by writing that isn’t accurate or comprehensible, a design that isn’t accessible, information that’s missing, or links and design features that simply don’t work.

Characteristics of Usable Documents

According to Jakob Nielsen,[2] a usable document or web site must be:

  • easy to learn so that a user/reader can quickly accomplish the desired tasks
  • efficient, enabling the reader to accomplish the task in a timely manner
  • easy to remember in terms of the process needed to use the document or web site
  • free from errors, enabling the user to complete the task without mistakes
  • satisfying to use, so that the reader finds the design pleasant or even enjoyable

As you can see, usability combines ease of use with the functionality an audience needs to accomplish its goals. Usability is essential to effective technical writing and design.

Usability Testing

The best way to guarantee that your site or document is usable and useful is to evaluate it. The methods you choose will largely depend on the size and significance of the project and can range from simple to complex.

At the first level, careful proofreading or evaluation of the document using a checklist may reveal areas that need development or clarification. Ask someone to review your draft or prototype and offer suggestions that will improve the design of the document. Fuller testing comes in different forms, and may include interviewing users, observing users directly, administering a questionnaire or survey, and/or conducting focus groups.

Most types of usability evaluations involve three groups of individuals:

  1. users, or the primary audience for the document
  2. subject matter experts (SMEs), who are knowledgeable about the topics of the document or web site
  3. usability research experts, who are trained to determine what questions to ask about the draft or prototype and how to best acquire the answers that will be most useful

According to Dr. Carol Barnum,[3] the following characteristics of usability testing apply among these three groups:

  • Participants represent real users and perform real tasks. Even when they are paid, participants should come from the target audience.
  • Subject matter experts may be observers and/or co-developers of research questions.
  • Researchers observe actions, record what the participants say, analyze the findings, diagnose problems, and recommend changes. They follow a research protocol for whatever type of testing occurs—lab testing, testing without a lab, or field testing.

In a usability lab, which is the most expensive and time-consuming type of usability testing, a number of users come into a controlled environment and are given a task to complete in a specific time frame. Observers may watch from behind two-way mirrors and record what they see or hear, or they may use a camera and monitor to observe and listen to the participants. Typically, a lab requires dedicated space and equipment, including video and/or audio recorders.

Testing without a lab requires a space such as an office or conference room where the participants and observer will not be disturbed. The researcher may sit next to the participant and observe, or have the participant “think aloud” during a process. The researcher may take notes or record using technology, depending on preference and research protocols.

Field testing means that the observer goes to the user and tests in the actual environment in which the user will use the document or device. As an added bonus, researchers in the field can observe users in their natural environment, which may include both supports and distractions.

Usability testing can be expensive and time-consuming, but in most cases it will be worth the time and expense. The costs of not testing a major product or program show up in the additional training needed to support users, in the product’s or program’s lost competitive advantage, in damage to the organization’s image and reputation, and in wasted employee and client time.[4]

How to Test for Usability

Make a Plan

A plan should document what you’re going to do and how you’re going to do it. Identify the following in your plan:

  • how many participants you’ll need
  • what you want them to do / what tasks you want to test
  • what equipment you’ll need
  • where you’ll conduct the test: lab, non-lab, or field
  • how many testing sessions you’ll need
  • what metrics you’ll use for evaluation (subjective metrics include the questions you’ll ask participants about ease and pleasure, while quantitative metrics cover the data you’ll collect about errors, completion rate, or time to complete a task; see the sketch after this list)
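Because a plan is easier to follow when it’s recorded concretely, here is a minimal sketch of what a small test plan might look like if captured as data, in Python. Every value in it (the participant count, setting, equipment, task names, and metrics) is an illustrative assumption, not a recommendation from this chapter.

    # A minimal sketch of a usability test plan captured as data.
    # All names, counts, and settings below are illustrative assumptions,
    # not values prescribed by this chapter.
    test_plan = {
        "participants": 5,                       # enough for subjective feedback
        "sessions": 1,                           # iterative testing would add more rounds
        "setting": "non-lab (conference room)",
        "equipment": ["laptop", "screen recorder", "note-taking sheet"],
        "tasks": [
            "Find the return policy page",
            "Complete the checkout form",
        ],
        "subjective_metrics": [
            "post-task ease rating (1-5)",
            "end-of-session satisfaction questions",
        ],
        "quantitative_metrics": [
            "task completion (yes/no)",
            "errors per task",
            "time on task (seconds)",
        ],
    }

    for task in test_plan["tasks"]:
        print("Task to test:", task)

A plan written this explicitly is also easy to hand to a second researcher or to reuse for a later round of testing.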

Recruit Participants

Find people who are as close to your target audience as possible, and note that you may have multiple user groups. It’s OK to use your own colleagues for testing during piloting stages, but not during actual testing. Know that you don’t always need a huge number of people in order to test. For subjective metrics, sometimes five users can give you as much information as you need. For quantitative, statistical data collection, you should have about twenty users. If you are going to conduct iterative testing over the course of developing a document or site, you should have a different group of participants for each test. Lastly, since participants are usually compensated, you will need to decide how you will pay them. Keep in mind that you cannot pay federal employees.
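The “about five users” guideline mentioned above is often explained with a simple estimate associated with Nielsen’s research, though it is not derived in this chapter: if each participant uncovers roughly the same share of a document’s usability problems, the proportion of problems found grows quickly with each added participant. In the sketch below, the 31% per-participant discovery rate is the commonly cited assumption behind that rule of thumb, not a figure from this text.

    # Rule-of-thumb estimate of how many usability problems n participants
    # will surface, assuming each participant independently finds a fixed
    # share of them. The 31% rate is an assumption, not a figure from this chapter.
    def problems_found(n_participants: int, discovery_rate: float = 0.31) -> float:
        """Expected proportion of usability problems uncovered."""
        return 1 - (1 - discovery_rate) ** n_participants

    for n in (1, 3, 5, 10, 20):
        print(f"{n:>2} participants -> about {problems_found(n):.0%} of problems found")

Under that assumption, five participants surface roughly 85% of the problems, which is why small studies can still be informative for subjective findings.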

Run the Test

sample usability test

A typical usability test might look like this:

The facilitator welcomes the participant, explains the test session, and asks any demographic questions. The facilitator then explains what the participant will do. The participant begins working on the task and may think aloud during the process while the observer or facilitator takes notes on what the participant says and does. The session ends when the tasks are complete or the allotted time is up. The facilitator either interviews the participant with end-of-session subjective questions, or thanks the participant, offers the compensation, and escorts the participant from the testing area.

Jen Bergstrom observes that choosing the best moderation technique for the session depends on the goals of that session. A concurrent think aloud (CTA) is useful for understanding participants’ thoughts as they work through the task. The retrospective think aloud (RTA) has the participants retrace their steps when the session is complete. Concurrent probing (CP) requires that the facilitator ask follow-up questions whenever the participants make a comment or do something out of the ordinary. Retrospective probing (RP) waits until the end of the session and then asks questions about the participants’ thoughts and actions as a follow up. Each method has its pros and cons for gathering subjective data (these methods do not collect quantitative data).[5]

Interpret and Record the Data

After you finish conducting your tests, turn the data into information that you can use to improve the document or site. Essentially, you separate the quantitative data, such as performance measures (completion rates, errors, time on task), from the subjective data, such as attitude and ease ratings. Analyze all of the data carefully, looking for issues with the document or site. Lastly, present your research in a report.
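As a rough illustration of that sorting step, the sketch below summarizes a handful of invented session records into quantitative measures (completion rate, errors, time on task) and a subjective measure (an ease rating). The field names and numbers are assumptions made for the example, not data from an actual study.

    # A minimal sketch of turning raw session records into summary findings.
    # The records and field names below are invented sample data.
    from statistics import mean

    sessions = [
        {"completed": True,  "errors": 0, "seconds": 95,  "ease_rating": 5},
        {"completed": True,  "errors": 2, "seconds": 180, "ease_rating": 3},
        {"completed": False, "errors": 4, "seconds": 240, "ease_rating": 2},
        {"completed": True,  "errors": 1, "seconds": 130, "ease_rating": 4},
        {"completed": True,  "errors": 0, "seconds": 110, "ease_rating": 5},
    ]

    # Quantitative (performance) measures
    completion_rate = mean(1 if s["completed"] else 0 for s in sessions)
    avg_errors = mean(s["errors"] for s in sessions)
    avg_time = mean(s["seconds"] for s in sessions)

    # Subjective (attitude) measure
    avg_ease = mean(s["ease_rating"] for s in sessions)

    print(f"Completion rate: {completion_rate:.0%}")
    print(f"Average errors per task: {avg_errors:.1f}")
    print(f"Average time on task: {avg_time:.0f} s")
    print(f"Average ease rating (1-5): {avg_ease:.1f}")

Summaries like these go into the report alongside the qualitative observations from the sessions.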

sample usability documents

Here’s an example of a usability report for a study conducted on The Purdue OWL.

And here’s a link to many sample templates for all aspects of usability testing, from the U.S. Department of Health and Human Services Usability.gov website, Templates & Downloadable Documents page.


try it

The FEMA (Federal Emergency Management Agency) website at https://www.fema.gov/assistance/individual has resources for people who need individual assistance.

Consider the usability of the site for an individual with limited computer experience. What would you change to improve this individual’s chances of getting the assistance they need during a flood or hurricane?


Useful pages from Usability.gov

  • User Experience Basics – identifies key factors that influence user experience
  • User Research Basics – explains different types of user research and when it might be appropriate to use each type
  • Planning a Usability Test – explains elements of a testing plan and test metrics
  • System Usability Scale – an industry-standard, 10-question survey, with an explanation of scores and how to interpret them (a scoring sketch follows this list)
  • There are many more useful pages; these are just a select few.
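The System Usability Scale linked above is scored with a fixed rule: each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the total is multiplied by 2.5 to give a score from 0 to 100. As an illustration, the sketch below applies that rule to an invented set of ten responses; the responses themselves are assumptions for the example.

    # Standard SUS scoring: odd items contribute (response - 1), even items
    # contribute (5 - response); the sum is scaled to 0-100 by multiplying by 2.5.
    # The ten responses below are invented sample answers on a 1-5 scale.
    def sus_score(responses: list[int]) -> float:
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs exactly ten responses on a 1-5 scale")
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)   # index 0 is item 1, an odd item
            for i, r in enumerate(responses)
        ]
        return sum(contributions) * 2.5

    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # prints 85.0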


[1] Jameson, D. A. (2013). New options for usability testing projects in business communication courses. Business Communication Quarterly, 76(4), 397-411. doi:10.1177/1080569913493460

[2] Nielsen, J. (2012). Usability 101: Introduction to usability. Retrieved from: https://www.nngroup.com/articles/usability-101-introduction-to-usability/

[3] Barnum, C. M. (2002). Usability testing and research. New York: Pearson Education, Inc.

[4] Ibid.

[5] Bergstrom, J. R. (2013). Moderating usability tests. Retrieved from https://www.usability.gov/get-involved/blog/2013/04/moderating-usability-tests.html