Using Attention Checks in Your Surveys May Harm Data Quality
Thinking of checking up on respondent attention mid-survey to make sure that you’re getting good data? Think again.
In this article, we highlight how new findings from our Qualtrics Methodology Lab are helping us to revisit and refine advice commonly given to survey researchers: the use of attention check questions to ensure data quality. But first, some background.
A History of Attention Checks
When the cognitive demands of a survey begin to exceed the motivation or ability of respondents, they often employ a set of response strategies that allow them to reduce the effort that they have to expend without leaving the survey altogether. Several of these strategies are grouped together under the term ‘satisficing’ – including acquiescence, straight-lining, choosing the first reasonable response, or saying ‘don’t know’ or ‘no opinion’ (Krosnick 1999; Vannette and Krosnick 2014). Other behaviors include skipping some necessary questions, speeding through the survey by giving low-effort responses, not fully answering open-text questions, or engaging in a variety of other behaviors that negatively impact response quality.
No researcher or organization wants low-quality data in their results, and since at least 2009 there has been a widespread trend toward attempting to identify survey respondents who are not carefully reading instructions (Oppenheimer, Meyvis, and Davidenko 2009). While the original research indicated that filtering these respondents might increase experimental efficiency, other researchers quickly began using this method not only to identify respondents who do not read instructions but also as a proxy for the other low-effort response strategies listed above. These strategies for identifying “bad” respondents are known by a variety of different names, including:
- Attention checks
- Instructional manipulation checks
- Trap questions
- Red-herring questions
These strategies vary somewhat in how they are implemented, but they all share an interest in “catching” respondents who appear to not follow instructions in the survey. While many different factors could result in the pattern of results that these techniques produce, the most common explanation is that the offending respondents are not paying sufficient attention to the survey. For the purposes of this post, we will refer to all of the strategies listed above as “attention checks.”
Since the assumption is that respondents who fail the attention check are not paying attention, there is a common belief that these “bad” respondents should simply be eliminated from the dataset, the sooner the better. Qualtrics has in the past recommended using attention check questions and removing respondents who fail them from the data, because the technique seemed intuitively reasonable and pragmatic, and is widely used across the research industry.
The Problem with Attention Check Strategies
However, our research scientists in the Qualtrics Methodology Lab recently conducted a careful review of emerging research on this topic and found that much of it advises against eliminating these respondents from most datasets (Anduiza and Galais 2016; Berinsky, Margolis, and Sances 2014; 2016; Hauser et al. n.d.; Miller and Officer 2009). Rather, “failing” an attention check question should be treated as one of many data quality metrics, evaluated after data collection is complete.
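The post-hoc approach described above can be sketched in code. The following is an illustrative example only, not Qualtrics software: the record fields, thresholds, and metric names are all assumptions a researcher would adapt to their own study.

```python
# Hypothetical post-hoc quality scoring: treat an attention-check failure as
# one signal among several, reviewed after data collection, rather than as a
# mid-survey screen that removes respondents on its own.

def quality_flags(resp, grid_items, min_seconds=120):
    """Return data-quality flags for one respondent record.

    `resp` is assumed to be a dict with keys such as 'failed_attention_check',
    'duration_seconds', and one key per grid item -- all illustrative names.
    """
    grid_answers = [resp[item] for item in grid_items]
    return {
        "failed_attention_check": resp["failed_attention_check"],
        # Straight-lining: identical answers across an entire question battery.
        "straight_lined": len(set(grid_answers)) == 1,
        # Speeding: finishing implausibly fast for the survey's length.
        "sped": resp["duration_seconds"] < min_seconds,
    }

def flag_count(resp, grid_items):
    """Number of quality flags a respondent trips."""
    return sum(quality_flags(resp, grid_items).values())

# Example: flag respondents for closer review only when several signals
# co-occur, instead of dropping everyone who misses the attention check.
respondents = [
    {"id": 1, "failed_attention_check": True, "duration_seconds": 300,
     "q1": 4, "q2": 2, "q3": 5},
    {"id": 2, "failed_attention_check": True, "duration_seconds": 45,
     "q1": 3, "q2": 3, "q3": 3},
]
grid = ["q1", "q2", "q3"]
for r in respondents:
    print(r["id"], flag_count(r, grid))
# Respondent 1 trips only the attention-check flag; respondent 2 trips all three.
```

The design point is that the failure is recorded, not acted on in-survey: the researcher can later decide whether any exclusion rule is defensible once all the signals are on the table.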
Removing respondents that fail attention checks is likely to introduce a demographic bias.
Part of the rationale behind the current academic recommendations is that respondents who “fail” an attention check are not a random subset of the population. If some demographic or psychographic groups are disproportionately likely to fail, then eliminating those respondents may bias the results.
Given our prior recommendation of attention-check techniques at Qualtrics, we decided to investigate the approach ourselves to determine what the effects of using attention check questions might be. Toward that end, our Qualtrics Methodology Lab conducted a large-scale global survey experiment. One important finding was that removing respondents who fail attention checks is indeed likely to introduce a demographic bias, particularly for age (Vannette 2016).
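One way to look for this kind of bias in your own data, sketched here with illustrative field names and made-up numbers, is to compare the demographic profile of respondents who fail the check against those who pass before deciding whether exclusion is defensible:

```python
# Hypothetical bias check: compare the age distribution of respondents who
# fail an attention check with those who pass. A large gap suggests that
# dropping the failures would skew the sample's demographics.
from statistics import mean

respondents = [  # illustrative records, not real data
    {"age": 22, "failed": True},
    {"age": 25, "failed": True},
    {"age": 58, "failed": False},
    {"age": 61, "failed": False},
    {"age": 34, "failed": False},
]

failed_ages = [r["age"] for r in respondents if r["failed"]]
passed_ages = [r["age"] for r in respondents if not r["failed"]]

gap = mean(passed_ages) - mean(failed_ages)
print(f"mean age, passed: {mean(passed_ages):.1f}")
print(f"mean age, failed: {mean(failed_ages):.1f}")
print(f"gap: {gap:.1f} years")  # a sizable gap is a warning sign
```

In a real study you would use a formal test (and more covariates than age), but even this simple comparison makes the selection problem visible before any respondents are thrown away.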
Surprisingly, when comparing different types of attention check questions against a control group in a randomized experiment, we also discovered that simply asking an attention check question caused respondents to behave worse later in the survey. The very mechanism that is intended to detect low-quality responses in a live survey actually induces respondents to produce lower quality responses throughout the rest of the survey in ways that are not as immediately detectable (Vannette 2016).
Related research has also found that attention checks can induce Hawthorne effects and socially desirable responding, wherein respondents edit or censor their responses because they feel they are being watched (Clifford and Jerit 2015).
One implication of these findings is that it may be best to omit attention check questions altogether.
At Qualtrics, we have stopped recommending that our customers use attention checks. We of course do not prohibit their use; we just recommend against using them as a means to improve data quality, since they actually seem to degrade it.
If Not Attention Checks, Then What?
In the Qualtrics Methodology Lab, we are still actively working to understand the causal mechanisms that may produce the results we’ve found. One of our current hypotheses is that respondents recognize the attention check question and subsequently feel they are ‘past the trap’, which induces them to invest less effort in their responses. An alternative hypothesis is that the feeling that a researcher has tried to ‘trap’ them reduces trust or the sense of reciprocity with the survey researcher or organization, which in turn may influence willingness to engage thoughtfully and carefully with the survey. There could also be many other factors at work that reduce respondent motivation, resulting in the degraded data quality that we observed. There is clearly a need for further research on this topic.
Our survey scientists are also working hard on developing new methods for improving data quality and increasing respondent attention and engagement. While we’re not quite ready to take the wraps off yet, we have already made some promising findings that we will share in future publications. The results documented in this blog post were presented at the 2016 annual conference of the American Association for Public Opinion Research (AAPOR).
David L. Vannette, PhD is Principal Research Scientist in the Qualtrics Methodology Lab and a Qualtrics subject matter expert.
Get the best data from your respondents
eBook: The Qualtrics handbook of question design
This post replaces a 2013 post titled: 4 Ways to Ensure Valid Responses for your Online Survey
- Anduiza, Eva, and Carol Galais. 2016. “Answering Without Reading: IMCs and Strong Satisficing in Online Surveys.” International Journal of Public Opinion Research.
- Berinsky, Adam J, Michele F Margolis, and Michael W Sances. 2014. “Separating the Shirkers From the Workers? Making Sure Respondents Pay Attention on Self‐Administered Surveys.” American Journal of Political Science 58(3): 739–53.
- Berinsky, Adam J, Michele F Margolis, and Michael W Sances. 2016. “Can We Turn Shirkers Into Workers?” Journal of Experimental Social Psychology 66: 20–28.
- Clifford, Scott, and Jennifer Jerit. 2015. “Do Attempts to Improve Respondent Attention Increase Social Desirability Bias?” Public Opinion Quarterly 79(3):790-802.
- Hauser, David J, A Sunderrajan, M Natarajan, and Norbert Schwarz. n.d. “Prior Exposure to Instructional Manipulation Checks Does Not Attenuate Survey Context Effects Driven by Satisficing or Gricean Norms.” methods, data, analyses.
- Krosnick, Jon A. 1999. “Survey Research.” Annual Review of Psychology 50(1): 537–67.
- Miller, Jeff, and C O Officer. 2009. “Beyond ‘Trapping’ the Undesirable Panelist: the Use of Red Herrings to Reduce Satisficing.” CASRO Panel Quality Conference.
- Oppenheimer, Daniel M, Tom Meyvis, and Nicolas Davidenko. 2009. “Instructional Manipulation Checks: Detecting Satisficing to Increase Statistical Power.” Journal of Experimental Social Psychology 45(4): 867–72.
- Vannette, David L. 2016. “Testing the Effects of Different Types of Attention Interventions on Data Quality in Web Surveys. Experimental Evidence From a 14 Country Study.” Paper presented at the 71st Annual Conference of the American Association for Public Opinion Research in Austin, TX.
- Vannette, David L, and Jon A Krosnick. 2014. “Answering Questions.” In The Wiley Blackwell Handbook of Mindfulness, A Comparison of Survey Satisficing and Mindlessness, eds. Amanda Ie, Christelle T Ngnoumen, and Ellen J Langer. John Wiley & Sons, 312–27.