Using Surveys in the User-Centered Design and Agile Lifecycles For Better Usability

By Adrian Garcia | Sep 15, 2016

Surveys are an effective way to gain the insight needed for designing and delivering innovative technology. Broadly, surveys serve two purposes: 1) as a means of measurement and 2) as a means of discovery. Surveys can be used at any point in the agile and user-centered design lifecycles. Used at the early stages of the lifecycle, they are effective for benchmarking and for discovering how users think about issues pertinent to the project. Used in the middle of the lifecycle, they are great for measuring whether a project is on track to meet specific objectives, such as hitting specific usability targets. At the end of the lifecycle, they can be used to determine whether a project’s objectives have been met.

Surveys As a Means of Measurement

Surveys are useful for measuring a number of factors. What a researcher measures with surveys depends on the project’s objectives and how far along the project is in the lifecycle. At the beginning of the lifecycle, surveys are excellent for setting benchmarks. In the middle of the lifecycle, they can be used to assess whether development is on track to meet the project’s overall objectives. After deployment, surveys can be used to determine if project objectives have been met.

Surveys for Benchmarking: Early in the Lifecycle

It’s common for projects to kick off with a goal of improving an existing benchmark. For example, a previous client wanted to deliver new mobile technology that would improve their current level of customer satisfaction and the company’s perceived level of innovation. How can surveys help?

To increase the company’s customer satisfaction and its perceived level of innovation, a researcher first needs to know the current levels of both variables and have a sense of their underlying causes. To discover this, we sent surveys measuring customers’ current level of satisfaction, complete with follow-up questions investigating its underlying causes. For instance, if customer satisfaction was low, why was it low? We did the same for customers’ perceptions of the company’s level of innovation. Identifying the underlying causes of a benchmark in this way is called “key driver analysis”. Interestingly, this project revealed that customer satisfaction was most negatively impacted by poor technical support, and that deficits in perceived innovation were caused by the company’s products being too outdated. Follow-up surveys investigating why technical support was such a problem revealed that the greatest predictor of customer satisfaction was how familiar the tech support representative was with the caller’s IT department. No follow-up survey for perceived innovation was needed.
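To make the idea concrete, a key driver analysis is often run as a regression of the benchmark metric on candidate drivers. The sketch below is a minimal illustration in Python with hypothetical data and column names, not the analysis from the project described above.

```python
# Minimal key driver analysis sketch (hypothetical data and column names).
# Each row is one survey respondent; drivers are rated 1-5, satisfaction 1-10.
import pandas as pd
from sklearn.linear_model import LinearRegression

responses = pd.DataFrame({
    "tech_support_quality": [2, 3, 1, 4, 2, 5, 3, 1],
    "product_modernity":    [3, 4, 2, 5, 3, 4, 2, 3],
    "ease_of_purchase":     [4, 4, 3, 5, 4, 5, 3, 4],
    "satisfaction":         [4, 6, 2, 9, 5, 10, 4, 3],
})

drivers = ["tech_support_quality", "product_modernity", "ease_of_purchase"]
model = LinearRegression().fit(responses[drivers], responses["satisfaction"])

# Larger coefficients suggest stronger drivers of the benchmark metric.
for name, coef in zip(drivers, model.coef_):
    print(f"{name}: {coef:.2f}")
```

In practice the drivers would typically be standardized (or a relative-weights method used) so the coefficients are directly comparable.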

Knowing that customer satisfaction and perceived level of innovation were important to the client, and that there was room for improvement on both variables, we focused on discovering ways to resolve these issues throughout the scope of the project. This allowed us to improve perceived innovation by designing technology with a futuristic tone, and to improve customer satisfaction by making help documentation and communal support more discoverable and easier to use. Furthermore, all subsequent usability tests were measured against these benchmarks, giving our team and the client confidence that customer satisfaction and perceived innovation would significantly increase when the product launched.

Surveys for Assessment: Middle of the Lifecycle

For projects in the middle of the lifecycle, surveys can be used to assess whether development is on track to meet specified objectives after launch. For example, one client was interested in delivering a web service that would be promoted by the target population and deemed easy to use. To gauge whether the web service would be promoted after launch, we set a goal that usability tests should result in a Net Promoter Score (NPS) of 10 or more, which was considered a significant improvement over the benchmark we had previously set using surveys. To ensure that the end product would be considered easy to use, we set a goal that usability tests should result in a System Usability Scale (SUS) score of 70 or greater.
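For reference, both metrics are straightforward to compute from raw responses. The sketch below shows the standard scoring rules in Python, with made-up answers purely for illustration.

```python
# Standard scoring rules for Net Promoter Score (NPS) and the System
# Usability Scale (SUS), with made-up responses for illustration.

def nps(ratings):
    """ratings: answers to 'How likely are you to recommend us?' on a 0-10 scale."""
    promoters = sum(r >= 9 for r in ratings)   # 9s and 10s
    detractors = sum(r <= 6 for r in ratings)  # 0 through 6
    return 100 * (promoters - detractors) / len(ratings)

def sus(item_scores):
    """item_scores: the ten SUS items answered on a 1-5 scale, in order."""
    # Odd-numbered items contribute (score - 1); even-numbered items contribute (5 - score).
    total = sum((s - 1) if i % 2 == 0 else (5 - s) for i, s in enumerate(item_scores))
    return total * 2.5  # rescale to the familiar 0-100 range

print(nps([10, 9, 8, 7, 9, 6, 10, 3]))      # -> 25.0
print(sus([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```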

These metrics were tracked throughout the usability testing cycles. This streamlined development by making design work finite: we stopped concepting and iterating once we hit our metrics. Conducting usability tests at key stages throughout development, and collecting metrics relevant to client interests every step of the way, is essential for delivering effective solutions that meet everyone’s needs.

Surveys For Post Product Launch

To extend the previous example, after all the usability testing the client asked: “Did achieving a Net Promoter Score of 10 and a SUS score of 70 or greater actually result in users promoting the web service and finding it easy to use after the redesign launched?” Surveys are great for determining whether overall business objectives such as these have been met.

After allowing an appropriate amount of time to pass, we sent surveys to the existing user base to investigate the extent to which the objectives had been met. Some of the questions we asked respondents were: 1) Had they actually recommended the web service to friends in the past week or month? and 2) Did they actually consider the site easy to use? Using follow-up surveys post launch, in conjunction with analytics that monitor key business metrics, tells a very powerful story. For instance, we determined that not only was the site being promoted and considered easy to use, but that improving on these benchmarks contributed to increased revenue and an increase in customer accounts.

Something of note: creating surveys for measurement is no easy task. When doing so, be aware of two things: validity and reliability.

  • Validity: For a survey to be valid, it needs to actually measure what the researcher intends it to measure. For example, if one were interested in measuring respondents’ perceptions of how innovative a company is, they would need to do their due diligence to ensure that the survey measures exactly that. There are established techniques for testing validity when needed.

  • Reliability: Reliability refers to how consistent the measuring device is. An analogy helps: if you use a ruler to measure something, you will get the same measurement every time you measure the object of interest. By contrast, if you use an elastic band, you may or may not end up with the same measurement each time; holding the band tightly will give a different measurement than holding it loosely. The ruler has high reliability, while the elastic band has low reliability. To test for reliability, you can run a statistical analysis called Cronbach’s alpha (a minimal computational sketch follows this list).
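As a rough illustration of that calculation, the snippet below computes Cronbach’s alpha from a small matrix of hypothetical item responses in Python; by convention, values around 0.7 or higher are usually read as acceptable reliability.

```python
# Minimal Cronbach's alpha sketch (hypothetical data): rows are respondents,
# columns are the survey items intended to measure the same construct.
import numpy as np

items = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 3, 4],
])

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```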

There are analyses and techniques researchers can use to determine whether their surveys are valid and reliable, but executing these techniques will often be out of scope for most projects. For this reason, researchers should stick to standardized surveys whenever possible. Standardized metrics were discussed in a previous blog post on standardized usability questionnaires.

Surveys As a Means of Discovery

Surveys don’t always have to measure things; they can also be used to explore. In this respect, they are similar to interviews. Surveys are great for exploring needs, thoughts, and attitudes across a sample of a target population. While this technique works best at the beginning of the lifecycle, it can be effective at any point during development. However, the later in development this kind of information is collected, the more expensive it becomes to act on the findings.

Surveys for Discovery: Exploring Needs, Thoughts, and Attitudes

For one project, our research team was brought in late in the development cycle to help developers create a tablet-based sales enablement tool. Although development was well underway, the project stakeholder was uncertain about which features the tablet should contain and what would make salespeople most effective. Consequently, the research objective was to discover which features were most valuable to salespeople so that they could be implemented in the tablet. Due to timelines and budgets, the only research method available to our team was surveys. However, this did not mean that we were unable to get valuable and actionable information.

We sent salespeople surveys with open-ended questions about the nature of their work, the tasks that were most mission critical, and which tasks and workflows new technology would need to support to make them effective in the selling process. With this information, we were able to radically change the project’s course of development to include features the developers and designers hadn’t thought of during the ideation sessions. In fact, we discovered that the features salespeople considered most valuable weren’t being included in the product at all! The survey had brought completely new insights.

Survey Tools and Tips

There are various tools for conducting surveys, including SurveyMonkey, Qualtrics, and Google Forms; our preferred tool is Qualtrics. When conducting surveys, keep your efforts directed toward the business objectives, where the project is in the lifecycle, and the concepts of validity and reliability.

When should I use a survey?

As stated earlier, surveys can be used principally for two purposes: as a means of measurement and as a means of discovery. So when should you use a survey? As a means of discovery, use surveys when you need a high volume of information in a short amount of time. Interviews can provide the same kind of information, but they take longer to execute. As a means of measurement, use surveys when the information is nonexistent or difficult to access. For instance, most clients won’t have the information readily available for researchers to conduct a key driver analysis; most of the time, researchers will have to collect this information themselves.

In sum, surveys are great for collecting the insights needed to guide teams through designing and developing innovative technology that meets business objectives and that end users will enjoy using. They can be used at any stage of the product development lifecycle. At the beginning of the lifecycle, they are a great way to benchmark metrics that will continue to be tracked throughout development, and an effective way to quickly discover the needs and attitudes of large samples of individuals. In the middle of the lifecycle, they are effective for determining whether a project is on track to meet business objectives. After launching the product, surveys are an effective way to discover whether the project’s objectives have been met.
