Recently, there has been a lot of discussion on social media about user assistance and findability in technical communication, and about how to design for findability. Faceted navigation and search have been discussed as a solution, and I believe they may be a promising way forward. In fact, much of the discussion in the technical communication community is about what comes after the “book paradigm”. I do see a trend: more and more technical communicators are saying that we are answering user questions instead of writing books.
As you know from reading my blog, I have long argued that we must predict user questions, since we often work in a pre-release mode. I have also argued that faceted navigation (including faceted search) is a good way to help users find an answer.
But the product usage behavior users exhibit, which sometimes leads to an information-seeking behavior, causes a “problem” for us. To design effective user assistance, we need to understand this behavior, which I call the “mental model focus behavior”. In this blog post, I will explain what I mean.
In an empirical research study I did together with a colleague in the Netherlands, we saw interesting patterns of behavior. We ran an experiment in a user experience laboratory, where users were asked to do exercises with an unfamiliar tool. They had the manual available, and it was prepared so that it contained the information needed to solve the exercises.
Rather surprisingly, when users read a topic that contained information relevant to the task, they only managed to operate the tool successfully about half the time after turning back to the tool. Almost one third of the topics that users looked up were not relevant to the task at hand. And in 30% of the occasions where a non-procedural information need was expressed, users tried to meet that need through a topic containing procedural information.
Some users went back to the same topic several times during the course of solving a particular exercise. That is, they first tried the tool, looked up a topic, went back to try the tool, then looked up another topic, tried the tool again, and finally looked up the first topic they had previously consulted (within the same exercise). They sometimes looked up a topic that included the solution to the problem several times without being able to solve the task.
Respondents apparently did not read topics the way we think they should. How can we explain this strange behavior?
The “mental model focus behavior” can be explained as follows (this is not a scientific explanation, merely my own initial thoughts). When users cannot use the product as intended, they try to build an explanation. They try to make sense of the situation by finding the reason for, and the solution to, a perceived problem.
While trying to make sense of the product by exploring it, users pick up clues from the interface that may support the explanation they are building. Users form a mental model that may or may not be in accordance with the mental model the product designer had when designing the product. At some point, the user leaves the product and searches for information. Users ask questions.
The user is really looking for evidence in answers that supports the built explanation stemming from an “invalid” mental model. As soon as the user finds “evidence” in the topic, s/he goes back to the product and tries a slightly modified problem-solving strategy.
The user's interpretation horizon is somewhat skewed, as information in topics is interpreted on the basis of the built explanation. As a result, the user perceives that the topic is saying something it is not. The user is so focused on validating or falsifying the hypothesis constructed from the mental model that the user is not receptive to information that contradicts what the user is trying to investigate.
It simply requires too much mental effort to change track and start building another explanation, that is, another mental model based on the information written in the topic. Users seem to ignore any fact in a topic that contradicts the built explanation (or theory, or idea).
What we also saw is that users do not read the whole topic. They read only up to the point where they found supporting evidence, which was sometimes in the first sentence. When users find information that they perceive supports their own belief, they may read that sentence over and over again, without reading the other sentences in the topic.
This behavior is supported by cognitive science: humans memorize and understand information that conforms to their own beliefs much better than contradictory information. This behavior really has an impact on how a topic must be written.
A long topic is seldom read in full, which is one reason why I think a topic must answer one (1) user question and also clearly signal what question it answers. A long Every Page is Page One topic may in some cases fail. Combining several answers in one (1) topic does not seem like a good idea from this perspective. Users are not as rational and patient as we think they are.
It is probably a fundamental human behavior to focus attention on identifying forms and visuals that correspond to the imagined belief. Focusing mental capacity saves energy, and this behavior has probably been around since before the days of hunting mammoths. We have a sophisticated filter that helps us focus on one thing only, thus filtering out everything that is not pertinent to the current focus.
This behavior also has an impact on search user interface design. A Google-style search box may fail too, as the user enters keywords fetched from the built explanation, which may be completely “wrong”. And the user uses “wrong” or “vague” keywords, since our ability to express our information needs is poor.
How do we design user assistance to reduce this, from our point of view, apparently irrational behavior? First of all, it is not irrational behavior but energy-saving behavior. I argue that the key is to understand the information-seeking behavior users display. It is a complex behavior, but the essence is that users ask questions. So we must somehow predict questions, as we often work in a pre-release mode. But we cannot predict every user question. That is impossible, since we cannot predict the mental model a user may build.
My whole work on SeSAM (Search Situation based Architecture and Methodology) aims at providing the technical communication community with a framework to predict user questions, classify them according to a multi-faceted classification system, and build faceted navigation environments based on filters designed from the facets.
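SeSAM itself is a methodology, not code, but the core mechanism of a faceted navigation environment can be illustrated with a small sketch. The facet names and topic entries below are invented for illustration only; they are not part of SeSAM:

```python
# Hypothetical sketch of faceted filtering over a set of answers.
# Each topic carries facet values; a filter keeps only topics that
# match every facet the user (or the system) has selected.

topics = [
    {"title": "Export a report",  "facets": {"user_goal": "export", "information_type": "procedure"}},
    {"title": "Why export fails", "facets": {"user_goal": "export", "information_type": "concept"}},
    {"title": "Import settings",  "facets": {"user_goal": "import", "information_type": "procedure"}},
]

def filter_topics(topics, selections):
    """Return topics whose facet values match every selected facet."""
    return [
        t for t in topics
        if all(t["facets"].get(facet) == value for facet, value in selections.items())
    ]

matches = filter_topics(topics, {"user_goal": "export"})
print([t["title"] for t in matches])  # → ['Export a report', 'Why export fails']
```

Each additional facet selection narrows the result set further, which is exactly how such filters reduce a large knowledge base to a handful of candidate answers.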
Now, a search user interface may be multimodal, or include gamification or artificial intelligence as design solutions to help users “unfocus” and abandon an invalid mental model.
I think one solution lies in reducing the amount of content that users are exposed to. If a user assistance knowledge base is perceived to include a vast number of answers, the user may give each found answer only a few seconds to prove that it is worthy of the user's time. The user knows that there are so many other answers that may better answer the question.
Instead of putting every egg in one basket, that is, investing all energy in digging into one (1) answer that may turn out to be irrelevant, it is better to quickly examine all answers in the knowledge base and sort out the ones that are most pertinent.
The automagical manual is a concept for automatically removing the answers that are not applicable in a particular user search situation. The automagical manual means that the user assistance interface talks to the product and to other environments, such as social media hubs, to fetch data that is used to automatically make selections in facets.
This is the opposite of context sensitivity or embedded user assistance, where the product is the master and the information is the slave. In an automagical manual context, the user assistance is the master and the surrounding environment is the slave. When using an automagical manual, the user is exposed to only a few answers, which probably leads to a more methodical behavior.
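As a rough illustration of the idea, the automagical manual could read state from the running product and map it onto facet selections before the user even searches. Everything here — the state keys, facet names, and error code — is an invented example, not an actual Excosoft or SeSAM interface:

```python
# Hypothetical sketch: map product state onto facet selections so the
# user assistance pre-filters itself ("automagically").

def read_product_state():
    # In a real system this would query the running product, its logs,
    # or other environments; here we return a canned example.
    return {"model": "X200", "firmware": "2.1", "last_error": "E42"}

def auto_select_facets(state):
    """Derive facet selections the user never has to set manually."""
    selections = {"product_model": state["model"]}
    if state.get("last_error"):
        # A recent error suggests troubleshooting content for that code.
        selections["information_type"] = "troubleshooting"
        selections["error_code"] = state["last_error"]
    return selections

print(auto_select_facets(read_product_state()))
```

Feeding these pre-selected facets into a faceted filter would leave the user facing only the handful of answers that match his or her actual situation, with the product acting as the data source and the user assistance deciding what to show.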
Another design aspect that probably helps the user avoid finding the same topic over and over again is to allow the user to open several answers in parallel, just as you open several web pages in separate tabs in your browser. The user can then jump between the tabs to form a complete answer.
Well, the Excosoft findability platform includes many interesting requirements. If you want to know more about the Excosoft findability platform project, contact Jonatan.