How to Ask Sensitive Survey Questions
Imagine you are responding to an online survey and you’re asked about a sensitive topic, such as drug use, sexual behavior, racial attitudes, or your income. You might be reluctant to tell the truth and instead respond with what you think is a “socially acceptable” answer. This tendency is called “social desirability bias,” and if you’re not careful, it can bias your data, too.
Strategy 1: Validate against known data
In some cases, you might be able to validate the survey responses and figure out which of your respondents are not being truthful. Income, for example, is a topic that is difficult to measure for many reasons; one is that most people feel some anxiety about reporting values they perceive to be too low or too high, because being seen as either very poor or very wealthy can be embarrassing. This leads to inaccurate estimates that don’t match the actual distribution of incomes in the population. But if you are able to get access to bank statements or other gold-standard records of income or wealth, you can check who is being truthful and who is not.
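A validation check like this can be sketched in a few lines. Everything below is a hypothetical illustration: the respondent IDs, incomes, and the 15% tolerance are all invented for the example.

```python
# Hypothetical validation sketch: flag respondents whose self-reported
# income differs from a gold-standard record (e.g., bank data) by more
# than a chosen tolerance.

records = {            # respondent_id -> income from gold-standard records
    "r1": 42_000,
    "r2": 95_000,
    "r3": 31_000,
}
survey = {             # respondent_id -> self-reported income
    "r1": 45_000,      # close to the record
    "r2": 60_000,      # substantially under-reported
    "r3": 30_000,      # close to the record
}

TOLERANCE = 0.15       # flag relative discrepancies larger than 15%

flagged = [
    rid for rid, reported in survey.items()
    if abs(reported - records[rid]) / records[rid] > TOLERANCE
]
print(flagged)  # → ['r2']
```

In practice the tolerance would depend on how noisy legitimate self-reports are for your measure; too tight a threshold flags honest rounding, too loose a threshold misses real misreporting.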
But gold-standard data to benchmark your survey against is often unavailable. For example, you’ll likely never be able to validate responses about sexual behavior or racial attitudes, because no gold-standard records of these topics exist.
So then what else can you do to avoid social desirability bias?
Strategy 2: Ask the respondents to answer for other people instead of themselves
People tend to view themselves and their loved ones favorably. Humans have a natural tendency to overestimate their own abilities and good qualities, a phenomenon also known as the “Lake Wobegon Effect.” In survey research, we often see something similar when asking respondents about sensitive topics for which they might be tempted to give a misleading, socially acceptable answer. One way to get around this is to ask people to answer the sensitive question about other people instead.
For example, in a recent study that we conducted at Qualtrics, we interviewed a large number of parents about their child’s technology usage, but for certain questions prone to social desirability bias, we asked the parents to evaluate their child’s friends rather than their own child. As expected, parents consistently rated their own kid’s behaviors more favorably than their kid’s friends’. However, research indicates that the true value is likely to be closer to the parents’ evaluation of their child’s friends. One key point to keep in mind about this strategy is that it works best for topics that the respondent can reasonably evaluate about other people.
How you ask survey questions can make or break your data, and if you’re asking questions about sensitive topics, how you ask becomes even more critical.
When answering questions about sensitive topics – such as sexual activity or drug use – people tend to give what they perceive to be a socially acceptable answer instead of telling the whole truth. In a recent post, I taught you several ways to detect and avoid this “social desirability bias,” including validating your data against gold-standard benchmarks and asking your respondents to evaluate other people’s habits instead of their own. This week I have two additional strategies that you can employ. Both have benefits and costs, so it is important to think carefully about which is most appropriate for your research.
Strategy 3: Guarantee anonymity
This approach involves reassuring respondents that their data will be completely anonymous. Prior research indicates that respondents are more willing to share sensitive information if they know the data can never be linked to their personal information. However, recent research has also demonstrated that complete anonymity may come at the cost of data quality.
Strategy 4: Use a control
The final, and commonly used, approach is the “Item-Count Technique.” In this approach, the researcher creates two lists of items. The first is the control list, which contains only completely innocuous items that should not be sensitive to anyone. The second list contains everything on the first list plus one sensitive item.
Respondents are then randomly assigned to see one of the two lists and report only how many items apply to them, not which ones. To estimate the prevalence of the sensitive item, the researcher takes the difference between the mean number of items reported by the two groups; that difference is attributed to the sensitive item.
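The arithmetic behind this estimate can be sketched in a few lines. The counts below are invented for illustration (each value is the number of list items a respondent said applied to them, never which items), and the tiny sample sizes are purely for readability:

```python
from statistics import mean

# Hypothetical item-count responses: each number is a respondent's COUNT
# of applicable items, so no individual answer is revealed.
control_counts   = [1, 2, 0, 3, 2, 1, 2, 2, 1, 2]  # saw the innocuous control list
treatment_counts = [2, 2, 0, 3, 2, 1, 2, 3, 1, 2]  # saw control list + one sensitive item

# Estimated prevalence of the sensitive item = difference in mean counts.
prevalence = mean(treatment_counts) - mean(control_counts)
print(f"Estimated prevalence of the sensitive item: {prevalence:.0%}")  # → 20%
```

Because no one ever reports which items apply to them, individual respondents are protected; the cost, as noted below, is that the estimate exists only at the group level.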
The downside of this approach is that you only get estimates of the sensitive item in the aggregate population of your respondents, not at the individual respondent level.
For any of these strategies, it is important to put yourself in your respondents’ shoes and consider that they might not enjoy answering uncomfortable or complex questions. Whichever strategy you decide to employ when asking about sensitive items, it is generally best to avoid asking any more sensitive questions than absolutely necessary.
Want more handy tips on research methodology right now? Check out Three Reasons Why Shorter Surveys are not Always Better and Six Ways to Pretest Your Survey. If you want to become a survey expert, check out our handbook of question design below.
The Qualtrics Handbook of Question Design