How to make stakeholders believe your research

Apr 10, 2026

Better data alone won’t fix stakeholder skepticism when the numbers don’t feel human. This guide explores adaptive follow-ups and video feedback—two methods that bring real voice into quantitative insights and change how decision-makers engage with research.


There is a particular kind of stakeholder skepticism that better data can't resolve. It's not that they don't believe the numbers—it's that numbers don't feel like people, and decisions that carry real consequences feel more legitimate when there's a human voice attached to the evidence. The purchase intent score comes in lower than expected and someone in the room wants to know what respondents actually said. A majority likes the concept and someone wants to know why the rest don't. These are reasonable instincts, and the answer isn't to produce more charts.


What you’ll learn 

  • How to make research findings more persuasive
  • How to add human context without increasing survey length
  • When to use video to strengthen stakeholder conviction
  • How to present human voice in a way that changes the conversation

Who this guide is for

This guide is most useful once you're running studies with some regularity and have reached the point where stakeholders look at your findings and say they want to hear it directly from customers. Some familiarity with survey logic and question types is helpful before adding the methods described here.


Why numbers alone don't always close the deal

Survey data has a credibility ceiling that's determined less by sample size or statistical rigor than by whether stakeholders can connect it to a recognizable human reality. A 47% favorable rating for a new product concept is a number. A video of five customers explaining in their own words why they'd buy it is a story. Both are real evidence, but they work on different parts of the decision-making brain, and a research program that can produce both has a meaningful advantage over one that produces only the first.

These methods aren't substitutes for quantitative research. They're complements that resolve the specific kind of skepticism that numbers alone leave on the table. When to deploy them is a judgment call based on what decision is at stake and what the decision-maker needs to act with confidence.

Adding adaptive follow-ups to capture the right 'why'

A common approach to capturing qualitative context is to include an open-ended question after a rating scale, asking respondents to explain their answer. This works, but it doesn’t differentiate by response. Adaptive follow-ups fix this by routing each respondent to a question relevant to what they actually said.

How to do it

Step 1: Identify which questions warrant a branched follow-up

The best candidates are rating questions where the 'why' differs meaningfully by response — satisfaction scores or concept preference ratings. For these, the follow-up you want to ask one respondent is categorically different from the one you'd ask another.

Step 2: Set up Display Logic on your open-ended questions

On any open-ended follow-up question in Qualtrics, click Display Logic and add conditions based on the prior question's response. For example: show 'What specifically fell short of your expectations?' only to respondents who rated 1-3, and 'What stood out most positively?' only to those who rated 8-10. The branching is invisible to respondents.
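The routing rule in Step 2 amounts to a simple conditional mapping. A minimal sketch of that logic, using the thresholds and prompt wording from the example above (illustrative Python, not Qualtrics configuration):

```python
def follow_up_prompt(rating):
    """Route a 1-10 rating to the follow-up prompt a respondent should see.

    Mirrors the Display Logic conditions described above; mid-range
    ratings (4-7) receive no branched follow-up in this sketch.
    """
    if 1 <= rating <= 3:
        return "What specifically fell short of your expectations?"
    if 8 <= rating <= 10:
        return "What stood out most positively?"
    return None  # no follow-up shown


print(follow_up_prompt(2))   # low rating routes to the 'fell short' prompt
print(follow_up_prompt(9))   # high rating routes to the 'stood out' prompt
```

The point of the sketch is that each branch asks a question the other segment could never answer meaningfully, which is exactly what a single generic "please explain" box fails to do.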

Step 3: Filter open-ended responses by rating at analysis

In Data & Analysis, use filters to view open-ended responses segmented by the rating they followed. The qualitative context for each segment is now specific rather than generic, which makes it far more useful in a readout.

Quick tip: Filters don't persist between sessions. Save yours to reapply them instantly.
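If you export responses and analyze them outside Qualtrics, the same segmentation is a one-line filter. A minimal sketch under assumed data (the ratings and comments below are invented for illustration):

```python
# Hypothetical export rows: (rating, open-ended follow-up text).
responses = [
    (2, "Setup was confusing"),
    (9, "Loved the dashboard"),
    (3, "Too expensive for what it does"),
    (8, "Saves me an hour a week"),
]


def segment(rows, low, high):
    """Return follow-up comments whose rating falls within [low, high]."""
    return [text for rating, text in rows if low <= rating <= high]


# Mirrors the filters described in Step 3: qualitative context
# grouped by the rating segment it followed.
detractor_comments = segment(responses, 1, 3)
promoter_comments = segment(responses, 8, 10)
print(detractor_comments)
print(promoter_comments)
```

Either way you do it — in Data & Analysis or in an exported file — the output is the same: each segment's comments read as a coherent answer to the specific question that segment was asked.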


Adding video feedback

If your license includes video feedback, it changes what's possible in the qualitative portion of your research. Rather than typing their explanation, respondents record a brief video. For the right questions, this produces something qualitatively different from text: you see the respondent's face, hear their tone, and capture the kind of unedited reaction that typed responses often smooth over. The use case where it earns its keep most consistently is concept or product testing where stakeholder conviction is the obstacle—a customer explaining in their own voice why they'd use something is more persuasive than any percentage rating.*

How to do it

Step 1: Add a Video Response question type

In the survey builder, add a video response question after your most critical qualitative question. Keep the prompt specific and conversational: 'In 60 seconds or less, tell us what you thought of this concept and whether you'd actually use it.' Vague prompts produce vague responses.

Step 2: Build a highlight reel for stakeholder presentations

Individual video responses are for reading closely. The highlight reel feature in Qualtrics is how you present findings to leadership. Use the Video Editor to select the strongest 3-5 clips, trim each to the relevant moment, and sequence them into a 60 to 90 second compilation. Aim for clarity and consistency over quantity—three respondents saying the same thing clearly is more persuasive than eight saying it with varying degrees of articulation.

Step 3: Share via link before the meeting

A highlight reel shared as a link before a planning meeting puts customers in the room before the discussion begins. Decision-makers who have watched the reel engage differently with the quantitative data that follows it.

Knowing when these methods are worth the effort

Not every study benefits from video or adaptive follow-ups. Video feedback earns its place in concept and product testing, usability research, attitude and usage, and any situation where an emotional response matters as much as a preference rating. Adaptive follow-ups pay off most when your study will produce meaningfully different respondent segments and you need qualitative context specific to each. For a straightforward satisfaction tracker or an internal pulse, neither is necessary. Save them for the research that's informing consequential decisions, and they'll consistently deliver evidence that changes the conversation.

*Video Feedback availability depends on your license. If you don’t see it, contact your Qualtrics Account Executive or Brand Administrator.

Next step: Make sure your best research is still working six months from now. A compelling readout influences the decision in the room. What happens to that evidence after the meeting is a different problem. Learn how to build a research library that makes prior work findable, reusable, and worth more over time.
