Not getting a big enough sample
To draw accurate conclusions from your research, you need a sufficiently large sample – that is, a group large enough that the results represent the views of your target population as a whole and can’t be put down to chance.
To calculate your sample size, you need to know the population size of the group you want to study (for example, women in the UK aged 25-34) and factor in a margin of error (the smaller it is, the more accurate your conclusions) and a confidence level.
The sample size is then worked out using the following equation:
sample size = (z² × p(1 − p) / e²) ÷ (1 + (z² × p(1 − p)) / (e² × N))
N = population size | e = margin of error | z = z-score | p = expected proportion (0.5 is used when unknown, as the most conservative choice)
The z-score is the number of standard deviations a proportion is away from the mean and is derived from the confidence level.
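As a quick sketch, the calculation above can be run in a few lines of Python. The function name and the example population figure are illustrative, and p = 0.5 is assumed (maximum variability) as noted above:

```python
import math

# z-scores corresponding to common confidence levels;
# 1.96 is the familiar value for 95% confidence.
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(population: int, margin_of_error: float,
                confidence: float = 0.95, p: float = 0.5) -> int:
    """Required sample size for a finite population.

    p = 0.5 assumes maximum variability, the most
    conservative (largest-sample) choice when the true
    proportion is unknown.
    """
    z = Z_SCORES[confidence]
    # Unadjusted (infinite-population) sample size
    numerator = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    # Finite-population correction
    adjusted = numerator / (1 + numerator / population)
    return math.ceil(adjusted)

# Example: a target population of ~4.5 million (an illustrative
# figure), 5% margin of error, 95% confidence.
print(sample_size(4_500_000, 0.05))  # 385
```

Note how weakly the result depends on population size once the population is large: at 95% confidence and a 5% margin of error, the answer is roughly 385 whether the population is half a million or fifty million.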
Last-minute changes wrecking your logic
It’s difficult to avoid last-minute changes to your research. In fact, being able to adapt and make changes right up to the deployment date is a real benefit to managing your projects in-house.
But too often those changes aren’t accompanied by the vital step of re-testing the survey, and as a result survey logic can break and corrupt your data.
Question and scale errors in the research
It might sound basic, but designing the question set and measurement scales is the foundation of good data in your research. So make sure you follow best practices for writing survey questions, like avoiding leading questions and choosing the correct question types.
When it comes to measurements and scales, there’s a right type for each question and getting it wrong can invalidate your results. So make sure the scale is appropriate to the output you’re looking for – for example, if you’re measuring NPS, you’ll need a scale of 0-10 – and ensure you’re avoiding order bias on multiple choice questions by randomising the answer order.
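Both points above can be sketched in a few lines of Python (the function names are illustrative). NPS is the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6), and answer randomisation is a per-respondent shuffle:

```python
import random

def nps(scores: list[int]) -> float:
    """Net Promoter Score on a 0-10 scale:
    % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def randomised_options(options: list[str]) -> list[str]:
    """Return a per-respondent shuffled copy of the answer
    options, so no option benefits from always appearing first."""
    shuffled = options[:]
    random.shuffle(shuffled)
    return shuffled

print(nps([10, 9, 9, 8, 7, 6, 3]))  # 3 promoters, 2 detractors of 7
print(randomised_options(["Brand A", "Brand B", "Brand C"]))
```

Note that passives (scores of 7-8) count toward the total but toward neither group, which is why NPS can range from -100 to +100.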
Surveys that are just way too long
You’re going out to your panel with a survey – so you might as well take the opportunity to gather a few more insights, right? Not really – adding in lots of ‘nice to have’ questions or trying to combine two projects into one can make for a very long survey. That brings plenty of problems, not least a likely drop in response rates as respondents become fatigued and drop out before completing your survey.
The end result is a very costly project, as you have to recruit more panellists to make up for the drop-off in responses – and poorer research, since tired or disengaged respondents don’t give high-quality answers. Twelve minutes is typically the upper limit before you start to see a spike in respondents dropping out midway through a survey.
Making the results too difficult to understand
The success of any research project can be measured by how it impacts the organisation – but all too often, results are discarded or studies ignored because others in the organisation don’t understand them.
Make sure you present the data to the organisation in a way that drives action – show your stakeholders exactly what the data is telling them and how it’s useful to them. It’s a good idea to provide examples and model the impact of any changes you put in place based on your findings. For example, if you run product testing research and find consumers in your target group would prefer feature X over feature Y, try to translate this into the impact on sales to really show the value of the research to your key stakeholders.
Talk to our Research Services team
Core XM is trusted by 9,000 brands for everything from product testing to competitor analysis. Our team of research experts is on hand to help you design and launch successful projects – from designing and testing your surveys to reporting on the results.