Open-ended survey questions are where the real signal lives—and where most programs hit a ceiling. A customer rates their experience a "6" and writes "the service was okay." You know there's more to it. But by the time you notice the pattern, the moment has passed and the customer has moved on.
Adaptive Follow-Ups (part of Conversational Feedback) change what's possible in that moment. Instead of a static survey that simply accepts a terse answer, the experience becomes a short, contextual conversation. It uses active listening to acknowledge what was said and asks contextually relevant follow-ups to dig deeper, transforming vague feedback into actionable insights.
What this looks like in practice:
Customer response: "The service was okay."
Adaptive Follow-Up: "You mentioned the service was okay—was there a specific moment that stood out, either positively or negatively?"
Customer response: "The agent was helpful but it took three calls to actually resolve my issue."
Qualtrics ran two studies—a controlled experiment and a large-scale observational study across real-world programs—to measure exactly what changes when you turn on Adaptive Follow-Ups. The results below are what you can expect to see in your own data.
The engagement paradox: more questions, less burden
The first concern most researchers have about adding follow-up questions is survey fatigue. The experimental data addresses that directly.
In a randomized experiment with 1,843 respondents, participants who received AI-generated follow-ups actually rated the survey as less burdensome than the control group (9.7% vs. 13.8%), despite answering more questions. Completion rates were statistically identical (97% with follow-ups vs. 96% without). And 66% of respondents who received follow-ups described the experience as more engaging than other surveys they had taken (another 31% said there was no difference).
What explains this? The follow-up questions are contextual—they respond to what the respondent actually said, not a generic probe. That makes the experience feel like a conversation rather than a questionnaire. The respondent feels heard, which changes how they engage.
Better engagement would be worth having on its own. But the more consequential finding is what happens to the data when that engagement changes. A respondent who feels heard doesn't just rate the survey higher—they write more, use more specific language, and surface more of what your analysis needs.
In the experimental study, the airline-industry TextIQ model identified 87% more topics in responses gathered with follow-ups. That is the kind of improvement that changes what your analysis can surface.
Sentiment, emotion, and effort detection—the enrichments that tell you how customers feel, not just what they said—improved by at least 20 percentage points in the controlled study.
What changes in your data
The controlled experiment confirmed the experience improvement. The observational study—26,000+ responses across diverse industries using production Adaptive Follow-Ups—confirmed it scales to real programs with real respondents.
Table notes: General topics are Qualtrics TextIQ AI-powered topics; industry topics are Qualtrics standard TextIQ topic models by industry. Lexical words are nouns, adjectives, verbs, and adverbs that convey the meaning of a text.
In practical terms: more than twice as many topics identified means your analysis has significantly more signal to work with.
Participation and speed—what to expect in your program
Two metrics matter most when you're thinking about deploying a new question type at scale: completion rates and response time.
The completion rate outperforming the Qualtrics benchmark is notable: even with follow-up questions added, the conversational format kept respondents engaged through the end. Answer rates for follow-up questions were consistent with answer rates for open-ended questions generally.
How the follow-ups are designed—and why it matters for your data quality
Understanding what Adaptive Follow-Ups are optimized for helps you use them well.
They solve one specific problem.
The follow-up is not trying to replace a qual interview or generate new research questions. It is solving one problem: the open-ended response that was too brief to be useful. It focuses on getting a better answer to the question already asked.
They are built to acknowledge, not interrogate.
The follow-up question references what the respondent said and asks for one more specific detail. This is what drives the engagement paradox—the respondent feels heard rather than processed.
They are trained for actionability.
Early versions of the model had a tendency to probe for granular details that weren't useful for program improvement—the kind of specificity that looks thorough but doesn't tell you what to change. The current model is trained to identify follow-up angles that produce insights a program owner can act on.
They distinguish dissatisfaction from hostility.
The model is trained to recognize the difference between a frustrated customer—who benefits from a follow-up—and a hostile or inappropriate response, where the follow-up should not trigger. This keeps the experience professional for respondents and keeps your data clean.
Turning it on in your program
Prerequisites: Adaptive Follow-Ups are available on surveys with open-ended questions. Check with your CSM if you're unsure whether your account tier includes this feature.
The highest-impact surveys to start with are those where open-ended responses are already part of your analysis—relationship surveys, post-interaction surveys, and any program where you're using text analytics to surface themes. These are the surveys where richer responses will immediately change what your analysis can show.
A few things worth confirming before you enable:
- Which surveys in your program rely most heavily on open-ended analysis? Start there.
- Are you using TextIQ enrichments (sentiment, emotion, effort) or topics? If so, you will see the data quality improvement most directly.
- Do you have a sense of your current average response length on key open-ends? Establishing a baseline now makes the improvement visible when you check back in 30 days.
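If you want to put a number on that baseline before enabling, a minimal sketch like the following works on any export of open-ended answers. The sample responses and the word-count metric here are illustrative assumptions, not a Qualtrics export format; swap in your own exported text column.

```python
import statistics

# Hypothetical sample of open-ended responses from a survey export.
# In practice, load the text column from your own CSV export instead.
responses = [
    "The service was okay.",
    "The agent was helpful but it took three calls to actually resolve my issue.",
    "Great experience overall.",
]

# Baseline metric: average response length in words.
avg_words = statistics.mean(len(r.split()) for r in responses)
print(f"Average response length: {avg_words:.1f} words")
```

Recording this average now, per survey, gives you a concrete before/after comparison when you revisit the data in 30 days.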
Step-by-step
- Select a text entry question in your survey.
- Under Conversational feedback, select Include Follow-up question.
- Set (or exclude) your company name.
- Set how many follow-up questions you want to generate.
- Specify topics to avoid.
- Confirm your changes.
For more details, see Conversational Feedback.
What becomes possible
The best qualitative researchers have always known that the follow-up question—the one that says "tell me more about that" at exactly the right moment—is where the real insight lives. What's changed is that moment can now happen at the scale of your entire customer base, in real time, without a researcher in the room. That's what makes Adaptive Follow-Ups worth understanding: not the response rate lift or the topic count, but the shift in what's possible when listening at scale stops being a compromise.