Successful products are ones that provide great user experiences (UX). Companies that focus on UX tend to succeed, and companies that neglect it tend to fail. The top 10 most successful companies in the S&P index invest heavily in engineering a successful UX, while the 10 least successful companies do not. Popular opinion holds that UX is essential to success; however, when discussing with potential stakeholders all the effort involved in creating a stellar UX, some are surprised by the amount and type of work that must be done. Some believe that designing a UX is nothing more than hiring designers and developers, putting them in a room, and telling them to “make it user-friendly”. Those who hold this belief may try to shortcut requirements-gathering activities, such as observations, interviews, and usability testing, with workarounds of their own devising. Below is a list of common questions and concerns clients raise about the user research activities used to gather system requirements.
Objection 1: Why do you need to do a contextual inquiry? I know what the job is so why can’t I just tell the design team?
Contextual inquiries are one of the most effective ways to inform the redesign of enterprise software. To those unfamiliar with it, the technique may sound exotic. Below are the reasons contextual inquiries are an important part of the requirements-gathering process.
The documented process always differs from how the work is actually executed.
The output of a contextual inquiry is a set of models of how the work is currently being done on the ground. Those conducting contextual inquiries inevitably find that this differs from how upper management believes the work is executed. The difference is usually due to workarounds arising from:
1) incompatibility between tools and humans
2) designing without considering the environment and context of use.
Not factoring these elements into the design means the incompatibilities will persist and the inefficiencies will continue.
Knowledge of the work is often tacit and cannot be extracted through questioning.
When talking to experts about their work, they will often leave out important steps because they do not think to mention them. These overlooked steps reflect proceduralized knowledge. The human brain strives to be efficient: analogous to a computer, it automates well-practiced tasks to spare working memory. Automated tasks are like habits, tasks that have been performed so many times they can be executed on “autopilot”. If research teams only talk to expert employees with extensive experience, those employees will not think to mention the tasks they execute on “autopilot”, leaving out important details that will then be missing from the design.
The physical environment often imposes unforeseen requirements.
If teams do not consider the physical environment, they are not considering the space in which the application will live and operate, which will surely result in a sub-par user experience. By analogy, try driving a large truck through the old part of Toledo, Spain. Sure, the truck works beautifully and is functional, but you are driving it through an ancient city whose narrow streets were laid out centuries before the automobile and built for pedestrians. The UX of that truck will be a nightmare because of the environment. (See the image of the roads in Toledo, Spain on the left.) The physical environment imposes constraints that need to be considered and designed for.
The physical characteristics of the user group and context of use often introduce unforeseen requirements.
Knowing the physical characteristics of the user group always provides design direction. Does the user group share physical characteristics that need to be considered in the design? Do users wear gloves or tinted safety goggles that could affect how they see the screens? In one example, we designed a solution for a project where the stakeholder believed all that was needed was a tablet. Through a contextual inquiry, we learned that the users wear gloves throughout the day that are not compatible with touchscreens, and that they preferred traditional input devices because of the rugged work environment. We also learned that simply replacing workers’ current monitors with iPads would not be feasible because of the monitors’ placement and the frequency of interaction. Had we replaced the current monitors with iPads, as the stakeholder assumed we would, users would have had to take off their gloves well over 150 times a day and repeatedly adopt an awkward posture that could lead to repetitive stress injuries, just to interact with the new technology. Think about the inefficiency of removing gloves 150 times a day, the dissatisfaction of fighting a touchscreen, and the potential cost in medical bills from repetitive stress injuries.
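A back-of-envelope sketch makes that hidden cost concrete. The 150-removals-per-day figure comes from the study described above; the seconds per removal, workdays per year, and hourly labor cost below are illustrative assumptions, not project data:

```python
# Rough cost of forcing gloved workers onto a touchscreen.
# Only REMOVALS_PER_DAY comes from the study; the rest are assumptions.
REMOVALS_PER_DAY = 150
SECONDS_PER_REMOVAL = 30     # assumed: remove gloves, interact, put them back on
WORKDAYS_PER_YEAR = 250      # assumed
HOURLY_COST = 25.0           # assumed fully loaded hourly labor cost, USD

hours_lost_per_year = REMOVALS_PER_DAY * SECONDS_PER_REMOVAL * WORKDAYS_PER_YEAR / 3600
cost_per_worker = hours_lost_per_year * HOURLY_COST

print(f"{hours_lost_per_year:.1f} hours/year lost per worker")  # 312.5 hours
print(f"${cost_per_worker:,.2f}/year per worker")               # $7,812.50
```

Even under these conservative assumptions, each worker loses weeks of productive time per year, before counting satisfaction or injury costs.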
Objection 2: Why do user interviews? Why can’t I go ask users myself?
User interviews are an important step in the User-Centered Design process. While observations and contextual inquiries show design teams how users perform their jobs and the constraints they work under, user interviews reveal how users think about their job, company culture, and language. In addition, user interviews help teams identify the design direction that will get employees excited about using the new technology. Getting users excited about technology is correlated with user uptake (the metric CIOs care about most) and with job satisfaction. However, conducting an interview is not the same as having a conversation. Reasons why you want someone with experience conducting the interviews include:
Interviewing is a skill. Learning not to ask leading questions and not to let personal biases shape the conversation takes practice.
It is common for rookie interviewers to ask leading questions and to use the data to advance their own agenda. For example, a stakeholder who wants to replace a current system with a new mobile one may ask every interviewee: “Tell me everything you dislike about the current system and why a new mobile system would be better”. It is not uncommon for a person with a political bias to collect data that supports his or her agenda, and this usually manifests as leading questions. It is best to use a third-party company that is interested only in effectively realizing the project objectives: nothing more, nothing less.
Stakeholders do not know what information is most relevant to the design team.
If stakeholders ask their own questions, they may not gather all the information needed to drive the design, and they may undervalue certain opinions without realizing how relevant those opinions are to the automation effort.
Furthermore, interviews are a qualitative data collection technique where the real insights come from identifying patterns and trends within a data set collected from a representative sample. Just like analyzing numbers, there are methods and techniques for analyzing words. If stakeholders aren’t versed in how to do this, key insights may be lost, and the design team may head in the wrong direction.
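As a toy illustration of what “analyzing words” can look like, the sketch below codes interview responses against a small set of themes and tallies how often each theme recurs across the sample. Real qualitative analysis is far richer than keyword matching; the responses, theme names, and keywords here are invented for demonstration only:

```python
from collections import Counter

# Hypothetical coding scheme: each theme maps to keywords that signal it.
THEMES = {
    "slow system": ("slow", "lag", "wait", "freeze"),
    "training gaps": ("training", "confusing", "unclear"),
    "workarounds": ("spreadsheet", "sticky note", "workaround"),
}

# Invented interview excerpts standing in for a real transcript set.
responses = [
    "The system is slow, so I keep a spreadsheet on the side.",
    "Nobody got real training; half the screens are confusing.",
    "I wait minutes for it to load, and then it freezes.",
]

# Count how many respondents touch each theme at least once.
counts = Counter()
for response in responses:
    text = response.lower()
    for theme, keywords in THEMES.items():
        if any(keyword in text for keyword in keywords):
            counts[theme] += 1

# The most widely shared pain points surface first.
for theme, n in counts.most_common():
    print(f"{theme}: {n} of {len(responses)} respondents")
```

The point is not the code but the discipline: insights come from patterns across a representative sample, not from any single memorable quote.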
Objection 3: How valid is qualitative data from usability tests?
Once requirements are defined and the design direction established, prototyping and iterating on the design prior to development is key to launching a successful product on the first release. Data collected from usability tests can be qualitative, quantitative, objective or subjective, or any combination depending on the project objectives.
Qualitative data are as valuable as quantitative data, and maybe even more so.
Many perceive quantitative data as more valuable than qualitative data. This is not true. Qualitative data, if collected and handled properly, can be more valuable than quantitative data because it shines a light on the root causes behind the numbers. If quantitative data is all that is collected, teams are left making judgment calls after the fact about why the numbers are what they are and what needs to be done to improve them.
For example, say a researcher collected metrics on the visual appeal of an application, and the results show that, on average, users rate the visual appeal as very low: 2 out of 10. If the numbers are all that was collected, teams will be left puzzled about how to improve the metric. By also collecting qualitative data, a pattern may emerge: nearly all negative comments about aesthetics concern dissatisfaction with the color palette. This insight gives design teams clear direction, and success can be achieved in fewer iterations.
The type of data has no bearing on validity.
Validity is about whether you are collecting information about what you actually want to know, regardless of the type of data. For example, if a team were interested in the effectiveness of a design for call center agents, conducting usability tests with managers would be invalid. Conversely, as long as data is collected from randomly selected individuals from the population of interest, and the questions and tasks directly relate to what teams want to know about the stimulus or concept, the data is valid regardless of type.
Objection 4: Can’t we do it faster and cheaper with market research?
Market research only tells you where the market has been.
It does not allow teams to leapfrog or gain an advantage over the competition. If teams release products informed solely by market research, they will always release something similar to what the competition already offers. Furthermore, market research will not tell you whether the product is usable, satisfying, and effective. The product could be a great idea and bring new features to market, but if it is not executed well and refined through user input during the lifecycle, it will inevitably fail or be swallowed by the competition.
Objection 5: Won’t this take too long and cost too much?
UCD efforts save money. NASA estimates that the cost of changing requirements grows exponentially the later in the life cycle the change is made. Many argue that the reason requirements change at all is a lack of user input at the start. Effectively defining requirements through thoughtful user input at the beginning of the process can be up to 100X cheaper than discovering and fixing requirements errors after deployment. As Jared Huke, the Director of Design and User Experience at ChaiOne, says, “users will tell you what the requirements should be. You can listen to them at the start of a project when it is very cheap to do so, or you can wait until you launch the product and find that you have to do it all over again.”
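The economics can be sketched with simple arithmetic. The phase multipliers below loosely follow the commonly cited 1x/10x/100x rule of thumb for the cost of fixing a defect by lifecycle phase; the base cost, the intermediate multipliers, and the error count are illustrative assumptions, not measurements:

```python
# Illustrative cost-of-change escalation; all numbers are assumptions
# except the widely cited ~100x requirements-vs-deployment ratio.
BASE_COST = 1_000            # assumed cost to fix one requirements error at the start, USD
PHASE_MULTIPLIER = {         # assumed escalation by lifecycle phase
    "requirements": 1,
    "design": 5,
    "development": 10,
    "testing": 50,
    "post-release": 100,
}
errors_found = 20            # assumed number of requirements errors in a project

totals = {
    phase: errors_found * BASE_COST * multiplier
    for phase, multiplier in PHASE_MULTIPLIER.items()
}
for phase, total in totals.items():
    print(f"fixing {errors_found} requirements errors during {phase}: ${total:,}")
```

Under these assumptions, the same twenty errors cost $20,000 to resolve during requirements and $2,000,000 after release, which is the entire argument for front-loading user input.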