Having information on customer preferences, behaviors, likes and dislikes, income, and demographics — the list goes on — helps businesses to create more tailored products and services, as well as captivating experiences.
But even if you create and distribute surveys, how can you ensure they’re fair, unbiased, and easy for respondents to answer? This is where ‘bias’ comes in, and reducing it is key to creating great surveys that gather quality data, encourage honest responses, and benefit your business.
What is survey bias?
Bias is defined as a “deviation of results or inferences from the truth, or processes leading to such a deviation” and it occurs in every survey. It’s impossible to eradicate bias as each person’s opinion is subjective. This includes the researcher, who thinks up the questions and plans the research, and the participants, who answer the questions and share their thoughts.
There are several ways survey bias can influence the accuracy and integrity of interviews, as well as the answers provided by participants. For example:
- Selection: How was the survey sample selected? How many participants completed the survey? Was the sample broad enough to capture the most valuable insights?
- Response: How are participants swayed by leading factors from the interviewer, such as the questions asked, their format, and the respondent’s desire to be socially accepted?
- Interviewer: Is the interviewer unconsciously sending signals to participants that could alter their answers? Are the interviewers biased? Are the survey questions tailored towards specific outcomes?
These are just a few ways survey response biases can creep into research projects. In this guide, we’ll share a few examples of the above and how you can reduce sampling bias and survey bias. First, how can survey bias influence your survey data, response rate, and survey results?
How can survey bias influence my research/results?
Survey bias can cause a plethora of problems for researchers, including:
- Data issues: The data produced doesn’t accurately reflect the opinions of participants, as responses are less than truthful, extreme, or inaccurate. This data won’t help you reach your goals.
- Poor strategies and investment: As management and senior leaders base future business decisions on market research and survey insights, survey bias can affect how they invest money, time, and resources — potentially taking the wrong course of action.
- Low return on investment: Poor insights lead to poor product performance. When you’re targeting the wrong customers as a result of survey biases, e.g. leading questions, you won’t get the information you need to improve your offerings and the overall experience.
- Dissatisfaction: Stakeholders and investors will be dissatisfied with performance levels and may reduce market research budgets over the long term.
- Inconclusive research: Surveys may need to be repeated to test whether the data or the researchers are at fault, which takes time, money, and resources.
Ultimately, good actions, progress, and innovation are based on good data quality. If management can’t trust the accuracy of the research results, it’s a lose-lose situation for everyone involved.
Which survey type is most likely to be affected by survey bias?
There is no single survey type that experiences more bias than another. Bias can affect all survey types, including:
- Panel interviews
- One-on-one face-to-face interviews
- Group interviews
- Telephone interviews
- Webinar or video polls
Survey bias is a universal issue that researchers should be aware of and plan for before every research project. The best thing to do is to think about survey design and use the right survey tools to empower respondents to answer honestly. This way, you can get accurate, valuable survey results.
What are the 3 types of survey bias?

The three main types are selection bias, response bias, and interviewer bias.
Selection bias

Selection bias creates inaccurate or unrepresentative data, because the sample is gathered in an unfair way that’s detrimental to the accuracy and goals of the research.
For example, a non-random sample, a sample that leaves a crucial market segment unaccounted for, or a sample that doesn’t engage can all affect data results by providing too much, not enough, or the wrong kind of feedback. You could also choose to focus on samples that validate your own viewpoints and perspectives (confirmation bias), offering no new insights for your teams to act on.
Sampling bias

Sampling bias, also known as selection bias, occurs when your sample is unrepresentative and won’t provide the right feedback to support the goals of the survey research.
Some samples omit the right target customer segment, which can lead to inaccurate results. For example, if your brand makes toys for children and you want to know about their aesthetic appeal, surveying a sample made up of parents would tell you why parents buy the toys, but not why children find them appealing.
Sampling bias can also occur when the researcher creates a one-sided sample because they believe they know who the survey should target.
But while they might be correct, creating a one-sided sample may overstate the importance of respondents’ feedback, as well as miss the diverse viewpoints of other non-customer segments that may want to use your product or service.
Examples of sampling bias
- Certain population groups aren’t covered in polling or survey sampling, leading to skewed sample data results.
- Non-probability sampling methods are used incorrectly. Non-probability sampling doesn’t offer the same bias-removal benefits as probability sampling (which uses a random sample).
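Probability sampling can be as simple as drawing a uniform random sample from the full contact list. A minimal sketch in Python, where the sampling frame and sample size are made-up examples, not values from this guide:

```python
import random

def draw_probability_sample(frame, n, seed=None):
    """Simple random (probability) sample: every member of the
    sampling frame has an equal chance of being selected."""
    rng = random.Random(seed)
    return rng.sample(frame, n)  # sampling without replacement

# Hypothetical sampling frame of 1,000 customer IDs
frame = [f"customer-{i}" for i in range(1000)]
sample = draw_probability_sample(frame, n=100, seed=42)
print(len(sample))       # 100 selected respondents
print(len(set(sample)))  # 100: no one is picked twice
```

Fixing the seed makes the draw reproducible, which is useful when you need to document exactly how a sample was selected.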
Non-response bias

Even with a perfect sample selection (no sampling bias), respondents may not answer the survey. But why?
Well, they may not like filling in surveys, or their email could be inactive (so make sure to keep your data up to date). They might not like your brand or don’t understand the purpose of your survey. Or they might just hang up the phone, or throw the survey in the bin.
Whatever the reason, your results won’t be indicative of the full sample. This means that, because of unresponsive sample members, you may miss out on crucial data that would help you analyze trends or identify correlations.
For every survey, there will be those who don’t answer. The aim is to keep this to a minimum, ideally a small percentage of the full sample size. If the share of unresponsive sample members is unusually high, your results will carry non-response bias.
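As a rough check, you can compute the non-response rate directly from your invitation and completion counts. A minimal sketch, with illustrative numbers that aren’t from this guide:

```python
def non_response_rate(invited, completed):
    """Fraction of invited sample members who never completed the survey."""
    if invited <= 0:
        raise ValueError("invited must be a positive count")
    return (invited - completed) / invited

# Illustrative counts: 500 invitations sent, 380 surveys completed
rate = non_response_rate(invited=500, completed=380)
print(f"{rate:.0%}")  # → 24%
```

Tracking this rate across surveys gives you a baseline, so a sudden jump flags a sample worth investigating before you trust its results.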
Examples of non-response bias
- A survey aimed at finding out the views of criminals or closed-off groups would most likely see low participation from these groups, as they don’t want to disclose illegal activities or the information they hold. Therefore, the remaining responses would come from participants who might not best represent the target market of the research.
- A paper survey that can be filled in and posted may get more responses than an electronic or phone survey if respondents live in an area with poor signal coverage.
Survivorship bias

What if your sample is filled with the wrong people simply because the right people are no longer available to speak to (e.g. you no longer have their details)? Survivorship bias occurs when you target the right customer market segment, but due to natural turnover, you can only reach the people who are left — the ‘survivors’.
These ‘survivors’ are more likely to be favorable in their responses. To get the full picture, you also need to hear from the people who are no longer around, as their views are missing from your sample.
Example of survivorship bias
- A brand wants to understand why employee turnover is so high, so it runs research with current employees. However, the people best placed to explain why employees leave are those who have already left the organization. As they aren’t part of the research sample, the results will suffer from survivorship bias.
Response bias

Response bias is when participants answer the survey questions, but the answers they give aren’t what they really believe or think. Instead, their responses are shaped by the structure and language of the questions, leading them to answer in a particular way.
Some examples of response bias in action are:
- Asking about customer satisfaction but only providing two positive responses and one negative, e.g. Very Satisfied, Satisfied and Dissatisfied. To balance the survey questions, consider adding two positive and two negative options.
- Taking an emotional approach to questions — e.g. “Your parents are getting old and would like to see their grandchildren. Would you consider having a child soon?”
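The first example above, balancing answer options, is easy to check mechanically before a survey goes out. A minimal sketch, where the helper and option labels are hypothetical:

```python
def is_balanced_scale(options, positive, negative):
    """True if the scale offers as many positive as negative options,
    so respondents aren't nudged toward one side."""
    pos = sum(1 for option in options if option in positive)
    neg = sum(1 for option in options if option in negative)
    return pos == neg and pos > 0

POSITIVE = {"Very Satisfied", "Satisfied"}
NEGATIVE = {"Dissatisfied", "Very Dissatisfied"}

# Skewed scale: two positive options but only one negative
skewed = ["Very Satisfied", "Satisfied", "Dissatisfied"]
# Balanced scale: two positive, a neutral midpoint, two negative
balanced = ["Very Satisfied", "Satisfied", "Neither satisfied nor dissatisfied",
            "Dissatisfied", "Very Dissatisfied"]

print(is_balanced_scale(skewed, POSITIVE, NEGATIVE))    # False
print(is_balanced_scale(balanced, POSITIVE, NEGATIVE))  # True
```

A check like this can run over every question in a draft survey to catch lopsided scales early.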
Extreme response bias
With this response bias, some participants choose an extreme value on a question answered on a scale (e.g. a Likert scale). The bias increases if the question is phrased in a way that suggests the right answer is at one extreme.

This is more common where the researcher has failed to make the question neutral, or where the question has a ‘closed’ yes or no response that forces respondents into an extreme position.
Examples of extreme bias
- The question ‘Is it okay to spank your children as a form of discipline?’ would elicit a strong response bias in favor of or against the practice of physical discipline.
- The question: ‘Should a family member have the right to end their own life if they have a terminal illness?’ is an emotive topic that forces a participant to think of a stressful scenario and decide on a strong position.
Neutral response bias
This type of response bias occurs when the researcher creates questions that are not specific enough, or don’t evoke a strong enough response for respondents to pick an extreme either way.
As a result, participants pick a neutral position on a Likert answer scale. This doesn’t help the overall results of the research, as you would like to have a mixture of extreme and neutral responses that tell you more about your participants’ varied views.
Example of neutral bias
- On a scale of 1 to 5, how do you feel about these animals?
Dog, cat, bear, lion, goldfish
(Scale of 1-5, where 1 is Hate and 5 is Love)
Whether or not respondents own pets, most answers will be neutral: without experience of owning each animal, it’s unlikely they’ll hold an extreme view.
Acquiescence bias

Acquiescence bias (also known as the yes bias, the friendliness bias, and the confirmation bias) is one of the response biases most commonly recognized by researchers.
This bias is the tendency for survey respondents to agree with the survey questions, without their response being a true reflection of their own position or beliefs. This is because it’s easier to say yes and agree — to please a researcher or complete a survey — than to hold a disagreeable position.
This occurs when the question is phrased in a way that asks the participant to confirm a statement, or when the answer options come in opposing pairs, such as ‘Agree/Disagree’, ‘True/False’, and ‘Yes/No’.
Examples of acquiescence bias
- ‘When you have your coffee, do you enjoy it with milk or without?’ This question presumes that the participant has and enjoys coffee. They may dislike coffee and enjoy another beverage instead.
- ‘Do you consider yourself a good person?’ This question would ultimately end in a ‘yes’ answer as participants are unlikely to answer in a way that makes them look unfavorable. This is an example of a leading question, one that leads you towards a particular answer.
Question order bias
Question order bias, or order-effects bias, occurs when related questions are placed in a certain order. For example, once a participant answers one question positively or negatively, they feel they have to answer any follow-on, related questions the same way.
This is a bias based on the participant’s desire to be consistent with their answers, whereas, in reality, there could be different answers to a set of questions on one topic.
Examples of question order bias
- Asking the primary question in a loaded way, for example: ‘Do you want kids?’ and then following up with questions about the perception of motherhood or fatherhood. The former question sets up the participant for an extreme answer (yes or no). The second question could relate to the participant’s view of their own parents, though the order of the questions suggests that this is a follow-on question to be answered similarly.
- Another example is double-barreled questions that ask two things at the same time, implying that they’re linked. For example, ‘What do you think of this clothing brand and its management?’
Social desirability bias or conformity bias
Survey takers may want to appear more socially desirable or attractive to the interviewer as people are careful about how they appear to others. From a survey perspective, this could be respondents answering uncharacteristically or lying to appear in a positive light.
The researcher’s choice of topic could be the source of the issue, or it could be the participant’s insecurity or comfort with the topic that affects their answers.
Examples of social desirability bias
- Participants influenced by societal ‘norms’ for behavior and appearance, e.g. how a person ‘should’ appear or act, may adjust their answers accordingly. For example, because drinking is an accepted social norm among workers and teams, respondents may downplay habits, such as binge drinking, that could be seen as unhealthy.
- Social desirability bias may also cause participants to go over the top and inflate their own status to seem more successful or progressive than they actually are. For example, lying about their annual income or level of education.
Interviewer bias

The last type of survey bias is created by the actions of the interviewer. The way a question is asked, or the way the interviewer makes a participant feel during the survey, can affect the results they receive.
As the reliability of the data is on the line, interviewers should do their best to remove bias, even though they may not realize they’re introducing it.
Demand characteristic bias
When completing a survey, participants are aware that they’re in an interview setting and may act differently because of it. If you recall how nerve-wracking job interviews can be, you’ll see why someone might say something inaccurate or not wholly true under pressure.
As such, researchers will get biased responses from surveys that are overly formal or hosted in an uncomfortable setting. To get accurate, valuable data, researchers need to help participants forget that they’re being interviewed and asked survey questions.
Examples of demand characteristic bias
- If the survey starts without a good introduction, the participant doesn’t understand the purpose of the study and spends time trying to figure it out, while still being asked questions at the same time. This can lead to pressure and stress, and you won’t get the best answers out of them during this time.
- If the setting of the interviews is unwelcoming and you don’t do your best to keep the participant comfortable, the discomfort the participant feels may come through into the way they answer their questions — rushed or anxiously.
Reporting bias

Reporting bias arises when the research team decides whether to publish the research based on the positive or negative outcome of the data analysis.
Examples of reporting bias
- A healthcare research team finds that it can’t make the case that its medical painkiller cream decreases pain when used on test participants. The brand may choose not to publish these results, which is unethical and misrepresents the facts established by the research.
How to prevent survey bias
Given how prevalent survey bias is, what can you do to protect your survey work and make sure survey takers give you honest answers, based on their beliefs, needs, and views?
Here are some suggestions that will help prevent survey response bias:
- Add in a ‘don’t know’ or ‘not applicable’ option in answers so that the participant doesn’t feel the need to answer a question incorrectly or not at all.
- Ensure you have an up-to-date participant list that covers the right target audiences and is a random sample. If you need help knowing what is a good sample size, try our free sample calculator tool.
- Ask non-respondents to participate in a follow-up survey. Sometimes, people are busy and miss the email invitation. A second round may provide more ‘yeses’ that will help create a better picture of results.
- Consider the best way to reach your target audience so you can connect with them. This means that you won’t get survivorship bias where the longest-serving participants are the only ones available to interview.
- Avoid phrasing questions emotionally or using emotional topics, unless you’re trained to provide guidance and support for participants that react badly or are emotionally triggered by the questions.
- Write questions in a neutral way that doesn’t indicate a preference for one answer or another. For example, ‘Do you like coffee?’ becomes ‘How do you feel about coffee?’
- Avoid simple ‘yes and no’ questions that don’t allow for elaboration or mixed viewpoints. Instead, use a scale or multiple choice answers.
- Ask someone in your company to review the questions for bias. A fresh pair of eyes can really help identify issues and areas for improvement.
- Provide incentives for participants to complete the survey. This will help keep them focused and engaged until the end.
- Ask one question at a time to avoid double-barreled questions that might confuse participants or make them respond a certain way.
- Mix up question topics so that related questions don’t appear in linked groups one after the other, preventing question order bias.
- Avoid emotionally charged language to prevent extreme response bias.
- If you’re always getting a lot of ‘yes’ responses, try an answer scale range that doesn’t encourage acquiescence bias. For example, “Definitely will not, Probably will not, Don’t know, Probably will and Definitely will.”
- If your participants are not happy to be interviewed for the survey, you can try suggesting anonymous feedback so that you’re able to collect key insights that you wouldn’t have gotten otherwise.
- Stay neutral and professional as you survey, so that you don’t unconsciously show a preference for one answer over another. This allows for unbiased responses that aren’t fed by unconscious body language or tone of voice.
- Provide a clear welcome and introduction without telling the participants about what’s coming up as questions. This means participants won’t have time to stress about the survey questions coming up and can take each question at a time.
- Be nice to the participants and thank them for their time. Your participants want to know their time is being taken seriously. A warm manner can help them feel at ease.
How can Qualtrics prevent survey bias?
You might have sampling bias in your marketing list, or you might have inadvertently created questions that lead the participant to a specific answer. But why take the risk when you can see the hundreds of questions on offer with our free survey templates?
But you can go one step further. With our integrated all-in-one solutions, you can get all your surveys, customer and participant data in one place.
With the ability to improve your survey quality using AI and create research surveys by just dragging and dropping the right modules, you have everything you need. What’s more, you’ll benefit from analytics and dashboard reporting, giving you both an at-a-glance and comprehensive view of responses.
Our survey and panel management tools can help you reach the right audiences around the world, right when you need them. And with the inbuilt intelligence assisting with personalization, you can boost response rates and show the customer that they’re front of mind.
After all, 13,000 of the world’s best brands use our software, and they can’t all be wrong!
But if you want to know where to start with your survey questions, we can help. Find out what kind of questions our experts have created for use in your surveys to reduce the risk of survey bias.