Emojis, thumbs, stars, and sliders are crowding out numbers, response boxes, and verbal labels in online survey response options. A popular belief holds that media-rich response scales reduce respondent burden and deliver increased engagement, higher participation rates, fewer drop-offs, faster survey completion, and higher data quality.

While media-rich response options have some intuitive appeal, the academic survey methodology literature does not support the claim that these techniques increase respondent engagement, reduce respondent fatigue, or improve data quality. In fact, some academic research indicates that the opposite may be true.1 At least one published study concludes that, when given a choice, most respondents prefer a traditional HTML-based survey over a more interactive, media-rich one.

Drawbacks of Media-Rich Tools

Several factors prevent media-rich tools from reducing respondent burden and improving data integrity:

  • Respondents do not share the same context or assign the same meaning when providing a response (e.g., what a 3-star rating means to one respondent may differ across individuals). This reduces data consistency and quality.
  • Some respondents may find it more difficult to answer visual response options (e.g., emojis) or gamification tools (e.g., sliders) than verbally labeled ones. People who are willing to participate in surveys, whether customers, employees, or members of online panels, have become quite familiar with traditional question and scaling approaches, and they may need extra time to learn novel media-rich response formats. The effort spent processing and interpreting these scales can add cognitive burden and consequently degrade data quality. It is better for respondents to spend their cognitive energy producing accurate answers than decoding a novel response format.
  • Media-rich content may display inconsistently across mobile devices. This inconsistency can lead to respondent frustration and further data quality problems.

The study by Downes-Le Guin et al. concludes that:

“…the keys to greater survey engagement lie not in graphical enhancements or greater interactivity in the presentation of survey questions, but rather in dealing more effectively with the fundamental components of respondent burden that survey methodologists have long recognized: survey length, topic salience, cognitive burden (i.e. poorly written or hard to answer questions) and frequency of survey requests.”

Focusing on the fundamentals of survey design and contact management can improve both data quality and the respondent experience, while avoiding the problems that media-rich response scales may produce.

Conclusion

Focusing on the basics of effective survey design and sample management achieves the goals often cited for media-rich surveys: higher respondent engagement and better data quality. Qualtrics’ software accommodates both traditional and media-rich scaling. However, whenever possible, we recommend using more traditional response formats for rating scales (e.g., verbal labels on all scale points, balanced scales, 5 or 7 points depending on the construct being measured and the device types respondents use) along with best-practice survey and sample designs. This approach consistently provides reliable and accurate results while still giving respondents a comfortable, non-burdensome survey experience.