By Scott Gunter, Chief Operating Officer at Usability Sciences

When deploying a voice of the customer (VoC) program for your website, there are three critical questions to ask. These questions can guide your program design and the technology you need to support it. The questions are:


  1. What business objective do we want to achieve by deploying a VoC program?
  2. Given our strategic intent, what do we need to capture?
  3. Given our objectives and our information needs, what survey technology should we use?

Starting with the first and most important question:

What business objective do we want to achieve by deploying a VoC program?


Your answer to this question dictates the other key questions you ask. VoC programs typically serve one of two distinct strategic objectives:

  1. Benchmarking
  2. Continuous improvement


Answer A: Benchmarking

Some survey vendors claim that their models serve both objectives, but you need only look at what they do (how they structure their surveys) rather than what they say. Benchmark-purposed surveys feature a multitude of ratings questions, and the whole approach is metrics-heavy. One provider we’ve seen devotes its first 13 questions exclusively to benchmarking, and it is not uncommon to find a survey with perhaps one open-ended question in a 30- or 40-question set.


Because benchmarking is a comparative exercise, it is essential to have an “apples-to-apples” approach to the survey experience itself. This means serving up the same set of questions to every survey respondent, regardless of their actual pathway through the website. When done poorly, this can make for a long and not necessarily relevant exercise for the respondents. Brevity and relevance dictate response quality, so the longer and less relevant the survey questions, the lower the likely quality of the responses.


The value your benchmarking surveys deliver, therefore, points to comparative performance. Why your scores rank as they do will likely remain unclear, except at the level of general causality – “navigation,” for example, or “look and feel.” “Navigation” involves a multiplicity of variables, so you still need to conduct additional, granular research to get to the root cause.


Answer B: Continuous Improvement

If your strategic intent is continuous improvement, your tactical objective is to understand causality. Ratings questions are useful, but their value rises dramatically when they are linked to follow-up questions designed to uncover why the respondent rated that aspect of the site or visit as they did.


“Based on why you came here today, how successful was your visit?”

Followed up with:

“Please help us understand the main reason you were unsuccessful [or successful].”


Continuous-improvement-purposed surveys, therefore, strike a much better balance between ratings questions and open-ended questions. Answering an open-ended question takes much more effort on the part of the respondent, so the question set needs to be as concise and as relevant as possible. In other words, continuous improvement surveys should be tailored to the individual respondent, so that the questions reflect the visitor’s particular pathway through the site and take into account the visitor’s unique experience.


This ability to “customize” each set of questions to the individual respondent depends on having a survey technology capable of tracking visitor behavior and incorporating those details into a set of tailored exit questions.


“We noticed that you viewed the ‘What’s New’ section in Handbags. How appealing did you find our new designers?”

Followed up with:

“Please help us understand why the new designers struck you that way.”
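To make the mechanics concrete, here is a minimal sketch, in TypeScript, of how a trigger rule might map an observed behavior to a tailored question pair like the one above. Every name here (VisitEvent, TriggerRule, and so on) is a hypothetical illustration, not any particular vendor’s API.

```typescript
// Minimal sketch: map observed visitor behavior to tailored exit questions.
// All types and names are hypothetical illustrations.

interface VisitEvent {
  type: "pageView" | "cartAbandoned" | "wishListCreated";
  detail: string; // e.g., the section or product involved
}

interface Question {
  text: string;
  followUp?: string; // the open-ended "why" question
}

interface TriggerRule {
  matches: (e: VisitEvent) => boolean;
  buildQuestion: (e: VisitEvent) => Question;
}

// Example rule: visitors who viewed "What's New" in Handbags.
const whatsNewRule: TriggerRule = {
  matches: (e) => e.type === "pageView" && e.detail === "Handbags/What's New",
  buildQuestion: () => ({
    text:
      "We noticed that you viewed the 'What's New' section in Handbags. " +
      "How appealing did you find our new designers?",
    followUp: "Please help us understand why the new designers struck you that way.",
  }),
};

// Build the tailored question set from the visitor's actual pathway.
function tailoredQuestions(visit: VisitEvent[], rules: TriggerRule[]): Question[] {
  return visit.flatMap((e) =>
    rules.filter((r) => r.matches(e)).map((r) => r.buildQuestion(e))
  );
}
```

The rule table can grow to cover carts, wish lists, or site search without changing the selection logic; each rule stays responsible for one behavior and its follow-up.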


There is obviously a limit to the number of times you can ask a respondent to elaborate on an answer, so be judicious about where, when, and how you ask for explanatory information. In pursuit of continuous improvement, responses that help you understand why your visitors react to the various aspects of the site experience as they do are gold. They provide a detailed, granular understanding of causality, and by identifying the root cause of a problem, you can fix it permanently.


Even though the shortest route to improvement is to eliminate common problems, it is also valuable to understand why visitors like positive aspects of their experience. Positives tell you:


  1. What to emphasize and expand
  2. What not to break when you go about fixing other problems

The second question to ask is:

Given our strategic intent, what do we need to capture?


If your VoC strategy is aimed at benchmarking, the answers to this question are limited by survey takers’ tolerance for questions that may or may not be of much interest or relevance to them. Those questions yield trending metrics and ranking data parsed into as many sub-components as the survey can reasonably accommodate before data quality suffers (respondents answering mindlessly just to get through the survey).


VoC teams focused on continuous improvement, however, have an opportunity to serve the interests of their internal customers at a much deeper level than the benchmark providers can. This is why the second question says “capture” rather than “ask.” Intelligent survey technology can capture far more than the answers to survey questions. In the background, it can collect the pages visited, the categories or brands viewed, the products compared, or the tools used; it can monitor for events such as abandoned carts, the creation of a wish list, or checkout abandonment.
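As a rough illustration of what that background capture might look like, the sketch below models a per-session event log; the event shapes and names are assumptions for this article, not a real product’s schema.

```typescript
// Hypothetical background event log for one visitor session.
// Every field and event name is an illustrative assumption.

type CapturedEvent =
  | { kind: "pageView"; path: string; timestamp: number }
  | { kind: "categoryView"; category: string; timestamp: number }
  | { kind: "productCompare"; productIds: string[]; timestamp: number }
  | { kind: "cartAbandoned"; cartValue: number; timestamp: number }
  | { kind: "wishListCreated"; timestamp: number };

class SessionLog {
  private events: CapturedEvent[] = [];

  record(event: CapturedEvent): void {
    this.events.push(event);
  }

  // The full pathway, available both for triggering in-survey questions
  // and for attaching behavioral context to the final responses.
  pathway(): readonly CapturedEvent[] {
    return this.events;
  }
}
```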


This abundance of behavioral data can be used to trigger the type of experience-specific survey questions referenced above, or to provide a full, detailed picture of the respondent’s site visit. The addition of behavioral data allows for much more sophisticated analysis, since the questions asked of the data can be triangulated across metrics, verbatim responses, and actual behavior. These analytical benefits can, to some extent, be replicated by integrating survey data with web analytics data. Since that is a post-processing data merge, however, it precludes asking behaviorally triggered questions during the survey itself, and it loses the “in the moment” contextual value of capturing the visitor’s reaction while it is immediate and at its most vivid.


The ability to capture almost every aspect of a respondent’s visit opens up a vista of possibilities for your internal clients. Once you remove the constraints of “dumb” survey technologies, your clients are free to ask for data to address their real business challenges. Your survey strategy will allow you to accommodate needs as diverse as the client groups you serve.


  • Executives typically want top-line visibility, such as trending data for NPS, systemic problems, or initiative-specific metrics.
  • The online marketing team wants to know demographics associated with the primary acquisition vehicles; they want to know why visitors responded better to one promotion than another.
  • The site search team wants to know specifics about visitor response to the reorganization and layout of the results page.
  • The merchandising team wants to know why a brand favorite no longer converts at its previous high rates.
  • The site architects want to understand how the new taxonomy in the Sale section is working for returning visitors.
  • The Checkout team wants to know why conversion dipped when they added a Guest Checkout option.


The variations are endless. Yet the beauty of adding behavioral capability to your surveys is that you can enjoy the best of both worlds. You can capture contextual data that provides exceptional value and insight to your internal customers; at the same time, you can use that behavioral data to minimize the number of questions you ask of your respondents by asking only questions relevant to their immediate site experience.


Your “master survey document” might have 50 questions (perhaps 10 common and 40 variable) and 30 behavioral events to monitor, but no survey respondent will ever see more than 20 questions, and the question mix can be different for each one. You achieve a sufficient volume of answers to the variable questions by experimenting with invitation rates.
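As a back-of-the-envelope sketch, here is how that per-respondent assembly might work, reusing the hypothetical 10-common/40-variable/20-question figures above; the field names are illustrative, not a real survey platform’s API.

```typescript
// Sketch: assemble a per-respondent survey from a master document.
// The common/variable split and the 20-question cap mirror the
// hypothetical figures in the text; all names are illustrative.

interface MasterQuestion {
  id: string;
  common: boolean; // asked of every respondent
  triggeredBy?: (events: string[]) => boolean; // gate for variable questions
}

const MAX_QUESTIONS = 20;

function buildSurvey(
  master: MasterQuestion[],
  visitorEvents: string[]
): MasterQuestion[] {
  const common = master.filter((q) => q.common); // e.g., the ~10 common questions
  const variable = master.filter(
    (q) => !q.common && (q.triggeredBy?.(visitorEvents) ?? false)
  );
  // Cap the total so no respondent ever sees more than 20 questions.
  return [...common, ...variable].slice(0, MAX_QUESTIONS);
}
```

Varying the invitation rate then controls how many respondents flow through each variable question, which is how you accumulate enough answers per question even though each visitor sees only a subset.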

The third question you ask is:

Given our objectives and our information needs, what survey technology should we use?


The answers by now are obvious. If your VoC purpose is benchmarking, your need is simple and your survey technology can be basic, too. If, however, you embrace continuous website improvement and an obsession with understanding causality, your VoC strategy will best be served by a sophisticated platform, one with the ability to capture the entire visitor experience and frame each survey accordingly.


Qualtrics Site Intercept is not the only technology capable of executing this kind of strategy, but it is, in our opinion, the most robust and, by far, the easiest to use and to implement. We know this because we had our own behavior-based survey technology for 15 years before we decided to sunset it in favor of Qualtrics Site Intercept. We arrived at that decision after a thorough evaluation of the market offerings and a candid assessment not only of how we could best compete in the survey service space, but also of how we could best protect our time and investment in our existing client relationships. Our clients trust us so long as we can provide best-in-class service. For that, we need best-in-class technologies.




Scott Gunter

Bio: Scott Gunter, Chief Operating Officer for Usability Sciences, is a usability professional with a 17-year track record of successfully leading and conducting user experience research using an extensive array of methodologies, including Site Intercept Research. Scott has completed over 300 research projects on behalf of companies like J.C. Penney, Kohl’s, Office Depot, and Dollar Tree. In addition to conducting research, Scott writes articles for various UX publications, has presented at multiple conferences on user experience and research techniques, and creates and produces webinars at Usability Sciences.