
Will an answer be easy to find if we mimic human dialog in user assistance?

Research has shown that most people avoid using the traditional book-like manual, whether it is on- or offline. Researchers have offered a number of explanations for why this is the case. One common explanation is that users avoid manuals because they perceive them to be difficult to search.

In response, many technical communicators are exploring new ways to design and deliver user assistance to make manuals more user-friendly. To make the manual an option for the user, we need to make answers easy to find.

This article series proposes solutions on how to design for the searching user, to inspire technical communicators to break through to a new age of user-friendly manual design. This is the first article in the series.

In this article, I introduce the dialog that happens between two colleagues when one of them (the user) is seeking an answer from the other (the expert) regarding the use of a product. I argue that answers become easy to find if technical communicators mimic this dialog in user assistance. Stay tuned to find out how.

The starting point – answering user questions

But before we dig into the dialogs, let us establish some groundwork. The key to making an answer easy to find is to understand users, especially their product usage behavior and information-seeking behavior. Many of us have come to understand that users ask questions when they cannot make sense of something while using the product.

When users search for an answer, they go to the information source they believe will provide an answer easily. They would much rather spend their time and energy actually using information than tediously sifting through pages of irrelevant content. Users turn to the manual only if they know it can provide an answer easily.

To make the manual provide an answer easily, technical communicators must first acknowledge that we are answering questions. To know which answers to write, we must predict user questions, since we often develop user assistance in parallel with product development.

But writing the answers and organizing them into a static book structure is not the solution. Nor is the solution usually to throw answers out on the web and hope that the user finds them via Google. The user needs something new: a dynamic search user interface that makes answers easy to find.

In upcoming articles, I will show how to chunk content as a way out of the “chunking controversy”. I will guide you through the process of predicting user questions. I will talk about how to use facet taxonomies as a vehicle to design user assistance, and much more. It is all about findability.

What are these two dialogs all about?

I will exemplify the two dialogs by looking at the exchange that happens between two colleagues when one of them (the user) is seeking an answer from the other (the expert) regarding the use of a product.

In its simplest form, the exchange goes something like this: the user asks the expert a question, the expert and the user start a dialog, and at a certain point the user contentedly returns to product use.

What happened here? What makes this simple dialog so effective? Two fundamental processes are working in tandem: the expert is assessing the situation, and the user is assessing the relevance of the answer.

Is it possible to mimic these two dialogs when designing user assistance? Well, let us first sort out what the dialogs are all about.

The situation assessment dialog

Let’s go back and look at the way the user asks their fellow expert for help. First, the user asks a question such as, “How do I do task X?” The expert assesses the user’s situation by examining the context to gather information such as what product is being used and how it is being used, and by identifying other contextual clues that will help the expert provide the most suitable answer. The expert performs a situation assessment because the expert knows that task X is done a little differently depending on, for example, the product version.

If the expert knows the user in terms of habits, work responsibilities, etc., assessing the situation often becomes an easy task. The expert may become confident about the assessment of the user’s situation without asking any questions. As a result, the expert simply tells the user how to do task X.

But maybe the expert is uncertain about, for example, which product version the user is using, even after looking at the screen. The expert then cannot make a proper situation assessment without asking questions.

Thus, the expert asks a situation assessment question, for example, “But what product version are you using?”. The expert engages the user in a reciprocal dialog, which I refer to as the “Situation Assessment Dialog”, in order to assess the situation and find the most appropriate answer.

But hang on, there are some more details you need to know to understand how to design for findability.

We know from information science research that humans are often bad at expressing their information need. One reason users so often ask an expert for help may be that the expert’s situation assessment helps the user clarify that information need.

The user might tell the expert, “I am looking for, what I think is called, an ‘icon’ which I shall click to do a task which I do not know the name of”. The expert might not understand the question, so the expert asks a number of situation assessment questions such as, “What type of outcome do you want to get by using the software?”.

So the user tries to answer these questions. Finally, the expert has assessed the user’s situation and, as a bonus, the user has a clear picture of the information need.

So why is an information source assessing the situation of the inquirer?

An information source, such as the expert, assesses the situation of the inquirer by asking a number of search situation assessment questions. Why? There are at least two reasons:

  • To understand the information need. A user question is often vague and incomplete, so search situation assessment questions are asked simply to understand the information need at all.

  • To fetch the most relevant answer from the “answer storage”. Even when the information source has understood the question, several sibling answers are sometimes applicable. The construction of the answer often differs depending on the situation; the same task is performed differently depending on the product version, for example.

Search situation assessment questions are asked because the initial user query is often the “tip of the iceberg”, not immediately revealing the complete situation. This is the case even when the user is capable of expressing the information need in a way that makes sense.

The conclusion is that a user who is asking a question is located in a search situation. A search situation can be described, decomposed, and modeled from a set of search situation facet values.

Thus, the answer to a situation assessment question is a value in one of the search situation facets, such as the product configuration being used, the environment in which the product is used, the task the user is trying to do, or the result the user is trying to accomplish.
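To make the idea concrete, a search situation can be sketched in code as a set of facet values. The facet names and values below are hypothetical illustrations, not taken from any particular product:

```python
# A minimal sketch of search situation facets. All facet names and
# values here are hypothetical examples for illustration only.

# Each facet is one dimension of the user's search situation.
SEARCH_SITUATION_FACETS = {
    "product_version": {"1.0", "2.0", "2.1"},
    "environment": {"Windows", "macOS", "Linux"},
    "task": {"install", "configure", "export"},
}

def describe_situation(**facet_values):
    """Validate facet values and return them as a search situation."""
    situation = {}
    for facet, value in facet_values.items():
        allowed = SEARCH_SITUATION_FACETS.get(facet)
        if allowed is None:
            raise ValueError(f"Unknown facet: {facet!r}")
        if value not in allowed:
            raise ValueError(f"Unknown value {value!r} for facet {facet!r}")
        situation[facet] = value
    return situation

# Each answer to a situation assessment question fills in one facet.
situation = describe_situation(product_version="2.1", task="export")
print(situation)  # {'product_version': '2.1', 'task': 'export'}
```

Note that a situation need not fill every facet; the expert only asks about the facets that actually distinguish the candidate answers.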

Knowing the user’s situation in terms of search situation facet values is fundamental for many reasons. I argue that you, as an information architect, should start by modeling search situation facets as a way to predict user questions. I call this approach the reversed taxonomical approach.

And, we can use these facets to build a search user interface that makes the answer easy to find (Clippy, R.I.P.). I will return to how you do this in upcoming posts.

The relevance assessment dialog

Just as the expert engages in a Situation Assessment Dialog to elicit missing information from the user, the user may enter into a corresponding process of assessment to judge the relevance of the received answer.

Using their experience and knowledge to assess the relevance of the received answer, the user might ask, “Was this really the answer I was looking for?”

The user has (if dissatisfied) the opportunity to enter into a follow-up dialog, or “Relevance Assessment Dialog”, in order to assess the degree of relevance the received answer holds for them and their specific situation. The user makes follow-up inquiries such as, “That was not what I was asking for – what I meant was…”.

If the expert quickly says something (which at first glance appears to be correct) without visibly assessing the situation, the user might doubt the answer. The user asks the expert: “Is what you are saying valid for my product?”.

A final conclusion

The questions the expert asks to assess the user’s situation in the situation assessment dialog may actually help the user assess the relevance of the answer. In other words, if the search situation facet values are explicitly available as part of the answer in the manual, the user becomes more confident that the answer is relevant.

Thus we mimic the situation assessment dialog in user assistance to both help the user find the answer and to help the user assess the relevance of the found answer.
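As a sketch of this principle, answers in the “answer storage” can carry explicit facet values; retrieval filters on the user’s situation, and the matched facet values are displayed alongside the answer so the user can judge its relevance. All answer texts and facet values below are hypothetical:

```python
# Sketch: an "answer storage" where each answer carries explicit
# search situation facet values (all data here is hypothetical).

ANSWERS = [
    {"facets": {"product_version": "1.0", "task": "export"},
     "text": "Choose File > Export."},
    {"facets": {"product_version": "2.1", "task": "export"},
     "text": "Click the Share icon, then choose Export."},
]

def find_answers(situation):
    """Return the answers whose facet values match the user's situation."""
    return [a for a in ANSWERS
            if all(a["facets"].get(f) == v for f, v in situation.items())]

for answer in find_answers({"product_version": "2.1", "task": "export"}):
    # Showing the facet values next to the answer mimics the relevance
    # assessment dialog: the user can verify the answer fits their case.
    print(answer["facets"], "->", answer["text"])
```

The filtering step mimics the situation assessment dialog (finding the answer), while printing the facet values with the answer mimics the relevance assessment dialog (confirming the answer applies).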

Can we mimic human dialog to make an answer easy to find? It is possible, and this is what SeSAM and Excosoft Finder are all about – allowing you to design for Findability. Contact us on jonatan.lundin@excosoft.com if you are interested in a webinar about SeSAM and Excosoft Finder.

Stay tuned for the next article in Design for the searching user article series, written by Excosoft information architect Jonatan Lundin.