The title of this article might appear counter-intuitive to what we understand about user experience and user research, but it's true for a specific reason. By nature, many UX researchers tend to ask research study participants about their current and future (1) behavior, (2) preferences, likes and dislikes, and (3) expectations about the product, design, or interface in question, and they end up listening to what the users have to say. What these researchers don't realize is that by listening to what users say, they put their UX research project at risk and will eventually follow the wrong information pathways and user insights.
Product experiences built on these insights are bound to fail in the market, because we, as UX researchers and designers, also know that "users don't know what they really want" and that they don't report their true actions or feelings. So listening to their opinions can misguide us in UX design projects. Now the question is: why should we not listen to the users? The answer lies in the difference between attitude and behavior. Attitude is what users say they do; behavior is what they really do. Research studies across the world have repeatedly found that human attitudes and behaviors are inconsistent, with surprisingly little correlation between the two. Let's look at a few of those studies.
Study 1: People reporting their food habits

A study, "Social Desirability Bias in Dietary Self-Report May Compromise the Validity of Dietary Intake Measures" (International Journal of Epidemiology), conclusively found that people tend to report eating less than they actually do.

Study 2: People reporting their spiritual attitudes and actions

In a study by Philip S. Brenner, "Exceptional Behavior or Exceptional Identity?", people were found to over-report their attendance at religious services relative to their actual behavior.

So, in short, users will say that they want a feature or that they prefer a particular product, but in reality their actions often tell a different story.
Study 3: People reporting about their personal hygiene while on the road
In yet another study, this time in the UK, as the BBC reported: "…99% of people interviewed at motorway service stations toilets claimed they had washed their hands after going to the toilet. Electronic recording devices revealed only 32% of men and 64% of women actually did."
In short, what these studies show is that users do not report their behavior accurately (they either under-report or over-report) because of social desirability bias. Social desirability bias is a cognitive bias that makes people acknowledge less of the behaviors they see as undesirable and report more of the behaviors they see as socially desirable.
So if researchers cannot rely on what users say, what is the alternative for getting insights into users' needs? Instead of listening to users, always observe their behavior. Observe how they interact with and react to products and designs, and extract insights into the whys and wherefores of what users do. This is where the concept of experience sampling comes in.
Experience sampling is a strategic research methodology that asks "participants to report on their thoughts, feelings, behaviors, and/or environment on multiple occasions over time" while they perform a task related to the product or service in question. This method is important because it uncovers needs that users don't report on their own, because of social desirability and other cognitive biases. It also gives insights into the features users might find genuinely useful in meeting their needs, and it uncovers the users' current pain points and delights pertaining to the task in question.
How to run an experience sampling study:
Think hard about the question you would like answered. For example: how did you book airline tickets in the last couple of months?
Set the number of interruptions. Don't ask participants to record their feelings too many times, and don't ask questions that require them to write a paragraph; both will result in poor cooperation. The number of interruptions will also depend on how many answers you are looking for. The bottom line: make interrupting and answering as effortless for the user as possible. If you need many answers, increase the number of participants so that you can keep the questions answered by each participant to a minimum.
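As a rough sketch of how interruptions might be scheduled, the snippet below generates a small number of randomized prompt times per day within waking hours, so participants aren't always interrupted at the same point in their routine. The function name `prompt_schedule` and the study parameters are illustrative assumptions, not part of any particular tool.

```python
import random
from datetime import datetime, timedelta

def prompt_schedule(start, days, prompts_per_day, waking_hours=(9, 21), seed=None):
    """Generate random prompt times within waking hours for each study day.

    Randomizing prompt times avoids the bias of always interrupting
    participants at the same moment of their daily routine.
    """
    rng = random.Random(seed)
    schedule = []
    for day in range(days):
        day_start = start + timedelta(days=day)
        times = []
        for _ in range(prompts_per_day):
            hour = rng.randint(waking_hours[0], waking_hours[1] - 1)
            minute = rng.randint(0, 59)
            times.append(day_start.replace(hour=hour, minute=minute,
                                           second=0, microsecond=0))
        schedule.extend(sorted(times))  # keep each day's prompts in order
    return schedule

# Example: 3 prompts per day over a 5-day study window
schedule = prompt_schedule(datetime(2024, 1, 8), days=5, prompts_per_day=3, seed=42)
```

Keeping `prompts_per_day` low is the code-level equivalent of the advice above: fewer, lighter interruptions per participant, more participants if you need more data.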
Find a medium to notify and communicate with participants. This can be SMS, email, app notifications, on-phone or browser alerts, phone calls, etc. Have a plan beforehand for collecting the user responses, and consider ways to automate the gathering so that when you sit down to analyze them you don't end up copying, pasting, and sifting through thousands of notes. Ideally, collect the data on a dashboard, properly sorted. In other words, plan beforehand how you are going to do the data analysis.
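A minimal sketch of the "automate the gathering" idea: append every incoming response to a single structured file as it arrives, so analysis starts from clean, sorted data rather than scattered notes. The file name, field names, and `record_response` helper are hypothetical, chosen for illustration.

```python
import csv
from pathlib import Path

RESPONSE_FILE = Path("responses.csv")  # hypothetical output file
FIELDS = ["participant_id", "timestamp", "channel", "answer"]

def record_response(participant_id, timestamp, channel, answer, path=RESPONSE_FILE):
    """Append one incoming response to a single structured CSV file."""
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()  # write the header once, on first response
        writer.writerow({"participant_id": participant_id,
                         "timestamp": timestamp,
                         "channel": channel,
                         "answer": answer})
```

Whatever channel delivers the prompts (SMS webhook, email parser, app backend), routing every reply through one function like this means the dashboard or analysis step reads from a single consistent source.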
Carefully pick your participants and plan what you are going to do with them. Brief the participants, but stay vague enough that you don't hand them the answers; otherwise the responses you get will contain only the examples you suggested in your brief. Practice the question sequence as many times as possible so that you know exactly where to interrupt and what to ask in order to get insightful responses. Always incentivize the participants and, if possible, let them know about the incentive in advance so that they have good reason to share their real thoughts and feelings. If you cannot afford to pay, acknowledge people, with their permission, through an office "wall of fame" or a web page featuring "actual users who helped us define the product experience."
Decide on response categories, and adjust them based on the responses you receive. If possible, define the categories beforehand, even before you start the research. Main categories can be: (1) who and what are affected by the task — for example, the people, the environment, the processes, the outcome; (2) location; (3) the issue; (4) how they currently resolve the issue; and (5) their understanding of why the issue occurred.
Once the responses have been categorized, you can generate frequency charts that indicate how many times each response has emerged from the users' responses. This is necessary to reassemble the data in a meaningful way and extract actionable insights from it. As you do this, you will stumble on patterns in which certain common themes or issues emerge more prominently than others. For example, if "campaign response reporting" emerges more often than other responses, then you should seriously consider including it as a feature in the next release of the software. You can also look for keywords that emerge from the responses; these will give you a good idea of the users' issues, pain points, and delights.
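The frequency-counting step above can be sketched in a few lines. The category labels here are invented for illustration (reusing the article's "campaign response reporting" example); in a real study they would come from your own categorized responses.

```python
from collections import Counter

# Hypothetical categorized responses -- labels are illustrative only
responses = [
    "campaign response reporting",
    "email scheduling",
    "campaign response reporting",
    "contact list cleanup",
    "campaign response reporting",
    "email scheduling",
]

# Count how many times each category emerged across all responses
frequencies = Counter(responses)

# Categories ranked by frequency -- the top entries are the themes
# that emerged most prominently, i.e. candidates for the next release
ranked = frequencies.most_common()
```

Feeding `frequencies` into any charting tool gives you the frequency chart; the same `Counter` approach works on individual keywords if you split the raw responses into words first.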