Sara Zailskas Walsh recently made a solid case for content testing. Here’s a blurb from her Medium article that summarizes the benefits:
“How Does Content Testing Help Design?
Here are three big benefits:
- It helps designers establish the framework for our conversations with customers.
- It helps designers understand the words we need to use so customers understand us.
- It helps designers understand the information and emotion their designs need to convey in customer moments.”
I highly recommend reading her whole article, as she makes a lot of good points, including:
- Content needs to be tested
- Content creators rarely get to ask all their questions during usability testing
- In content-first design, designers need to know what the content creators are trying to convey
I’d like to suggest some additional methods for testing content.
Content Testing in a Vacuum
As Sara points out, we need to understand not only whether content fits the design, but also whether it's reaching our audience. We need to know if they feel "yes, this organization gets me" based on the terminology we use. We also need to know if they think "that makes sense" when they read our instructions. But words don't stand alone in design, which means testing content in a Word document will yield inaccurate responses.
The best use of content testing, as Sara says, is to clarify the conversation before getting into design. With that in mind, here are a few ways to test the conversation – and some alternatives to traditional testing.
Journey Mapping
Traditionally, designers create journey maps to understand how a user gets from Step 1 (learning a brand exists, beginning a search for a new car, finding out they're pregnant) to the final step (making a purchase, giving birth, etc.). They use the map to identify when and where the user takes action, which will then influence the design.
I’ve long been a proponent of journey mapping for content strategy. A content strategist can enrich a big-picture journey with a realistic understanding of the content that exists to support it. This is also the first step toward creating a viable conversation. While journey mapping isn’t strictly a type of testing, it helps validate the types of content and information that will create the conversation.
Participatory Design
We test content to validate our beliefs and understanding with actual users. At Mad*Pow, we’re big fans of starting the design process with a workshop that involves actual users. Users take a stab at crafting the ideal conversation they would have to accomplish their goals – whether with a person, a website, or an application. They explain the design choices they make, and as we listen, we learn their reasoning. Then we’re able to create conversations (through both content and design) that follow that same reasoning.
In addition, participatory design gives us a chance to test and iterate on conversation snippets or ideas with real users. We can sometimes iterate on one idea multiple times within just a few hours, learning what rings true for them and what doesn’t. Participatory design saves a lot of time and energy down the road because users get involved early.
Content-Specific Testing
Sara raises an excellent point in her article about a flaw in using design testing to test content: “But I always wanted to ask more than we ever had time for.”
There’s an obvious solution here, and it doesn’t require us to isolate content from design. Just as we can test design with “real” (placeholder) content in place, we can set up content-specific testing with “real” design in place. We can run an entire content test and ask all the questions the content team needs to have answered, using designs that augment our content choices.
I advocate for this approach over copy-only testing because, as Sara says, the sum of all design parts makes for a great experience. A well-done design supports the content choices we make. Content tested alone, however, can let people become overly distracted by a particular word choice that, in the greater scheme of the design, makes far more sense.
The key to content-specific testing is working closely with the research team. We need to let them know early on that there are content areas needing exploration, above and beyond what can be uncovered in usability testing.
There are many other ways to test content, and I am all ears. How else do you prototype conversations?