Getting the best out of conjoint analysis
Most product testing involves conjoint analysis, which asks respondents to trade off multiple product features against one another, revealing which ones matter most to them.
It’s a powerful technique for predicting buyer behaviour, but realising its potential requires a little more preparation than other types of survey question.
Important factors include selecting the right product attributes to include in the test, and presenting them in a way that is clear and easy for the customer to understand so they’re able to make a fair comparison.
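To make the trade-off idea concrete, here is a minimal sketch of the kind of data a conjoint task produces and a very simple way to derive part-worth scores from it. The attributes, levels and rankings are entirely hypothetical, and averaging ranks per level is only one basic approach; real conjoint studies typically use more sophisticated estimation.

```python
# Minimal conjoint-style sketch (hypothetical attributes and rankings).
# Each profile combines one level per attribute; a respondent ranks the
# profiles from 1 (most preferred) to 4 (least preferred).
from collections import defaultdict

profiles = [
    {"price": "low",  "battery": "10h"},
    {"price": "low",  "battery": "20h"},
    {"price": "high", "battery": "10h"},
    {"price": "high", "battery": "20h"},
]

# One respondent's ranking of the four profiles above.
ranks = [2, 1, 4, 3]

# Collect the ranks each attribute level received.
level_ranks = defaultdict(list)
for profile, rank in zip(profiles, ranks):
    for attribute, level in profile.items():
        level_ranks[(attribute, level)].append(rank)

# Average rank per level: a lower score means the level is more preferred.
part_worths = {key: sum(r) / len(r) for key, r in level_ranks.items()}
for (attribute, level), score in sorted(part_worths.items()):
    print(attribute, level, score)
```

In this toy example, the low price ends up with the best (lowest) average rank, showing how preferences for individual attribute levels can be inferred even though respondents only ever judged whole products.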
Avoid ambiguity in question wording
To get like-for-like responses from participants, they all need to understand your questions in the same way.
To help that happen, avoid words with more than one meaning, for example crane (bird) vs. crane (lifting machinery). Watch out for other potential sources of confusion too, such as homophones like “hear” and “here”, and heteronyms, which look the same but sound different, e.g. axes (on a graph) vs. axes (plural of axe).
In general, using plain language is the best way to connect with the maximum number of respondents. Opt for short words and simple sentences, as these are easy to understand. As a general rule, use only as many words as you need to be accurate, specific and clear in your question wording.
Use scales where possible
Scales are a great tool for collecting rich data with minimum respondent effort. Use a scale with 5-7 points, so that there’s a meaningful jump in value between each point and a wide range of respondent opinions is represented.
Make sure your labelling is consistent too: label either every point on the scale or none at all, since labelling only some of them biases respondents towards those points. Keep the order and structure of your scales the same across all your questions to avoid confusing respondents.
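A consistent, fully labelled scale can be treated as a reusable piece of survey configuration. The sketch below (with hypothetical question text and labels) defines a 5-point agreement scale once and attaches the same structure to every question, so order and labelling cannot drift between items.

```python
# A fully labelled 5-point scale, defined once and reused across questions
# (hypothetical labels and question wording).
AGREEMENT_SCALE = [
    (1, "Strongly disagree"),
    (2, "Disagree"),
    (3, "Neither agree nor disagree"),
    (4, "Agree"),
    (5, "Strongly agree"),
]

questions = [
    "The product is easy to use.",
    "The product is good value for money.",
]

# Every question gets the identical scale, with every point labelled.
survey = [{"text": q, "scale": AGREEMENT_SCALE} for q in questions]
```

Sharing one scale definition, rather than copying labels into each question, is what keeps the structure consistent across the whole survey.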
A common question structure seen in surveys is a list of selectable options with a final field labelled something like “Other – give details.”
Research suggests that this type of question, which blends free text with selectable options, yields lower-quality data than the more straightforward format of providing a single open field for the respondent to type into.
Previously, market researchers would have recoiled at the thought of analysing thousands of open text responses. Luckily, technology has come a long way: with text analytics software like Text iQ, responses are automatically scored and analysed, so you don’t have to sift through each one to draw out insights and patterns.
Ranking or rating?
Asking respondents to rank items gives you a relative score that compares each item to the others, forcing a choice where only one item can come out on top. Ranking questions produce more reliable data, but they are more work for respondents to answer.
In contrast, asking someone to rate items on a scale is easier to do, but it can result in poorer-quality data that doesn’t give you as meaningful an outcome, for example when a participant rates all the items the same.
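The difference between the two formats is easy to see in the data they produce. In this sketch (hypothetical items and responses), a respondent who rates everything the same leaves you with no relative information, while a ranking of the same items always yields a strict preference order.

```python
# Hypothetical responses to the same three items, illustrating why ratings
# can be less informative than rankings.
ratings = {"item_a": 5, "item_b": 5, "item_c": 5}  # straight-lined: all tied
ranking = {"item_a": 1, "item_b": 3, "item_c": 2}  # forced choice: 1 = top

# Ratings with no variance carry no relative information at all.
tied = len(set(ratings.values())) == 1
print("ratings all tied:", tied)

# A ranking, by construction, always yields a strict preference order.
order = sorted(ranking, key=ranking.get)
print("preference order:", order)
```

This is exactly the straight-lining failure mode described above: the tied ratings tell you nothing about which item the respondent actually prefers, whereas the ranking does, at the cost of more respondent effort.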