Conjoint Analysis White Paper


What is Conjoint Analysis?

Definition

Conjoint analysis is a market research technique for measuring the preference and importance that respondents (customers) place on the various elements of a product or service. It can play a critical role in understanding the trade-offs that people would make when given different product options and different product configurations. At the heart of conjoint analysis is the idea that product attributes can increase or decrease the likelihood of an overall package being purchased; thus we can quantify that preference.

How is a conjoint analysis conducted?

Conjoint analysis is conducted by showing participants varying packages (also called bundles, products, or options). Participants are instructed to evaluate those packages and select one based on what they’re most likely to purchase, or what is the most appealing to them. The respondent will have to choose from a series of packages, making trade-offs as they proceed.

Conjoint surveys usually have between two and four packages per question. The selections participants make shed light on which features and feature combinations show up more frequently in favorable bundles, as well as which are more common among the unfavorable bundles.

The steps for running a conjoint analysis are:

  1. Determine the attributes to be tested in the conjoint analysis.
  2. Generate the experimental design.
  3. Design the survey that hosts the conjoint tasks.
  4. Collect responses.
  5. Analyze the conjoint results.
  6. Report the findings.

Each of these steps builds upon the previous and works toward the end goal: understanding the customer base’s favorable trade-offs and preferences.

Qualtrics has developed an XM Solution that supports researchers in their broader research efforts, allowing them to quickly and simply conduct conjoint analysis and run respondents through trade-off exercises. There are different methods and approaches for collecting the choice data, known as conjoint types. The Qualtrics XM Solution currently supports choice-based (discrete) conjoint analysis.

What business objectives does conjoint analysis answer?

Conjoint specializes in answering questions that no other methodology can answer. Some of those questions include:

  • What feature or functionality of a product is most important and influential in measuring preference and appeal?
  • What are customers focused on when making their purchase decision? What has the greatest impact on whether they will purchase or not?
  • What role does price play in decision making, and what are the pricing sweet spots?
  • How sensitive will customers be to shifts in pricing?
  • What is the monetary or relative value to the market of each of the features we are thinking about including? How much more would customers be willing to pay for a premium feature?
  • What trade-offs will our customers be likely to make? If we know we need to increase price, what features or functionality can we add to our offering so we don’t lose appeal and market share?
  • What does market share look like for different products? How does the shifting and changing of the product configuration affect market share?
  • How do the products we are considering compare to the competition? What can we do to best compete against what is currently on the market?
  • If we are looking to make changes to our existing product, what are the best improvements we can make? What will resonate best with our existing customers?
  • What is the optimal product that we can offer to increase the number of buyers? To maximize our revenue? To maximize our profits?

As you can see, conjoint analysis can provide insight for diverse and dynamic business questions – and these are just the product related inquiries it answers. The length and legitimacy of this list is a primary reason why those who regularly conduct conjoint analyses are so fond of them. Conjoints provide vision into a wide array of business objectives and can provide crucial confidence to researchers and organizations.

Defining the Conjoint Attributes

Features and Levels

The variables we want to incorporate in a conjoint analysis are structured as features and levels. The features are the primary categories of the variables; each feature consists of a set of levels, which are more specific units of that feature.

Example: In a conjoint study to test dinner packages, here’s how we might format our features and levels:

Feature      Levels
Main dish    Chicken, Steak, Seafood
Side dish    Fries, Salad, Soup
Drink        Water, Soda
Price        $10, $15, $20, $25
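
To make the structure concrete, here is a minimal sketch that represents the dinner-package attributes above as a Python dictionary and enumerates the full factorial (every possible combination of one level per feature), a quantity that matters later when the experimental design is discussed:

```python
from itertools import product

# The dinner-package example: each feature maps to its list of levels.
features = {
    "Main dish": ["Chicken", "Steak", "Seafood"],
    "Side dish": ["Fries", "Salad", "Soup"],
    "Drink":     ["Water", "Soda"],
    "Price":     ["$10", "$15", "$20", "$25"],
}

# The full factorial is every possible combination of one level per feature.
full_factorial = list(product(*features.values()))
print(len(full_factorial))  # 3 * 3 * 2 * 4 = 72 possible packages
```

Even this small study produces 72 distinct packages, which illustrates why only a subset can ever be shown to any one respondent.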

There is a tricky balance to deciding which features and levels will be incorporated in the study. If you don’t test a variable, you will get zero vision into its preference, but testing too many features and levels can lead to respondent fatigue, inconsistent responses, and worthless data.

There is not a one-size-fits-all approach when it comes to the number of questions and packages you present each respondent. Although different types of conjoint can accommodate more or fewer variables, researchers traditionally include 2-8 features with 2-7 levels per feature. This range best suits the respondent experience, is considered the sweet spot for choice-based conjoint analysis, and will generally yield the best results.

Keep in mind that the more features and levels you include, the more difficult and overwhelming the conjoint will be for the respondents. More features and levels mean we need to ask more questions. This tug-of-war over whether to test a product attribute is an important decision that should not be overlooked. Researchers should carefully consider what should be included in the conjoint and what should be excluded.

Regardless of the number of attributes you test in the conjoint, it is essential that they are clear and concise. If the respondents can’t grasp the bundles they are reviewing, the data will mean nothing. The text used for both the features and their levels should describe them plainly but accurately. The study creator should focus on the survey-taker and their familiarity with the product being examined. Ask yourself, “Will someone outside our company understand these bundles?”

Be mindful that lengthy text can clutter the page and make the choice tasks daunting and overwhelming. When finding the right words to define an attribute seems challenging, images can be a fantastic enhancement. “A picture is worth a thousand words” can ring true in conjoint analysis.

Exclusions & Prohibited Pairs

When the team is determining the product attributes to test, it is important to look for combinations that just don’t make sense to combine. These aren’t necessarily two levels that are unlikely to be paired together, but two levels that would be confusing and impossible to pair. These are typically referred to as exclusions, or prohibited pairs.

Example: When testing in-home technology, you would want to exclude pairing Amazon Echo (the device type) with Google Assistant (the operating system). That’s because Amazon Echos cannot use the Google Assistant operating system, so showing that combination would only confuse respondents.

Removing prohibited pairs creates holes in our design and model and reduces the independent nature of the variables, so they should be avoided wherever possible.
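
A minimal sketch of applying exclusions, using a hypothetical in-home technology attribute set (all names and prices invented for illustration): prohibited pairs are filtered out of the full factorial before any design work happens.

```python
from itertools import product

# Hypothetical attribute set based on the in-home technology example.
features = {
    "Device":    ["Amazon Echo", "Google Nest Hub"],
    "Assistant": ["Alexa", "Google Assistant"],
    "Price":     ["$49", "$99"],
}

# A prohibited pair: Amazon Echo cannot run Google Assistant.
exclusions = [("Amazon Echo", "Google Assistant")]

def is_valid(bundle):
    """A bundle is valid if it contains no prohibited pair of levels."""
    return not any(a in bundle and b in bundle for a, b in exclusions)

bundles = [b for b in product(*features.values()) if is_valid(b)]
print(len(bundles))  # 8 total combinations minus 2 prohibited ones = 6
```

Note how each exclusion removes a slice of the combination space, which is exactly the "hole in the design" the paragraph above warns about.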

Experimental Design

Experimental design and Conjoint Analysis

The nature of most conjoint analysis projects is that not all combinations can be displayed to a respondent. A list of every combination, or the full factorial, can easily reach into the hundreds or thousands of bundles. Obviously, we could never show each respondent every possible bundle. But how do we obtain insights on the favorability of different combinations?

Similar to other experimental approaches, strategic and scientific principles are leveraged to get a read on the entire combination space while only showing a subset. Experimental designs in conjoints maximize the number of data points and the coverage across potential packages, while minimizing the number of profiles we expose to the respondent.

There are several approaches in determining the cards that will be presented to the respondent. A card is a bundle or a profile being presented to the respondent for evaluation. In the past, when computers were not as accessible and powerful as they are now, predefined design tables were generated and referenced by researchers.  You would identify the number of features and levels (often a 3×3 or a 4×4) and go find the corresponding design table and incorporate that into your survey.  However, these tables reduce the amount of flexibility most researchers want and need to define the feature attribute space.

Now, most choice-based and rating-based conjoint designs use fractional factorial card sets that will be presented to respondents. Fractional factorial means that we show a fraction of the full factorial.

There are several key ingredients in determining what strategic subset of profiles will be displayed within the survey:

  • Card sets should have relative balance across each level. This means that within a feature, each level should be included in a similar number of bundles; there shouldn’t be one level shown in six bundles while another level is included in only one.
  • As with any survey research, randomization techniques improve the validity of responses and control psychological order bias.
  • Conjoint designs are best suited when there are a lot of versions or blocks that all incorporate a subset of bundles. Jordan Louviere (one of the early pioneers of choice modeling) and Sawtooth Software’s founders both agree that the more versions that are a part of the overall design, the better. A respondent would be assigned to one of those versions, which would dictate which package constructs they would be presented with.
  • Other principles that are often included in conjoint design discussions are orthogonality and d-efficiency.  There are debates on the necessity and importance of integrating these concepts into the experimental design for conjoint studies.

The base inputs needed to generate a choice-based conjoint design are the number of questions or tasks that will be presented to the respondent, as well as the number of choices or alternatives per question. The traditional choice-based approach typically calls for two choices, having the respondent choose between option A and option B. That being said, it is definitely appropriate to show three or more bundles per question. The principal question that needs to be thought through is whether more alternatives will create an overwhelming experience for the respondent. Sometimes just evaluating two bundles for preference can be a daunting task. Additionally, if a ‘none of these’ option is to be included in the study, screen space might lend itself to a better experience with two choices plus the none option.

The number of questions that will comprise the conjoint portion of the survey should be calculated based on the number of choices per task as well as the size of the conjoint attributes being tested. The general formula for determining the number of cards that should be displayed is:

Number of Cards = Total # of Levels – # of Features + 1

The total number of levels is simply the sum of the number of levels across all of the features. Based upon the total number of cards and the number of choices per question, it is easy to reverse engineer the number of questions.
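
As a minimal sketch of that arithmetic, here is the card-count formula applied to the dinner-package example (4 features with 3, 3, 2, and 4 levels), followed by the reverse-engineered question count:

```python
import math

# Dinner-package example: one entry per feature, value = number of levels.
levels_per_feature = [3, 3, 2, 4]

total_levels = sum(levels_per_feature)       # 12
num_features = len(levels_per_feature)       # 4

# Number of Cards = Total # of Levels - # of Features + 1
num_cards = total_levels - num_features + 1  # 12 - 4 + 1 = 9

# Reverse-engineer the question count from the choices per question.
choices_per_question = 3
num_questions = math.ceil(num_cards / choices_per_question)
print(num_cards, num_questions)  # 9 3
```

With three alternatives per task, nine cards fit into three choice questions.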

However, some calls are more subjective than others. For example, you may need to decide whether the survey should be shortened by reducing the number of questions and increasing the bundles per question, or if that hurts the data quality. The best approach for resolving the balance between questions and alternatives per question is simply to test. Create the survey and take it. Distribute it to colleagues and get their opinion on the density of the question versus the length of the survey.

How a Qualtrics Conjoint generates its experimental design

Qualtrics uses a randomized balance design approach that encourages some, but not too much, overlap with the levels. The approach is similar to Sawtooth’s Balanced Overlap Design. This approach is highly effective when coupled with Hierarchical Bayesian estimation techniques. The basis of the design approach is to present different respondents with different packages for them to evaluate. We want to make sure that the different levels are properly represented for evaluation. The design is organized into versions, each of which is a set of questions. Each version contains the same number of tasks, and each task contains the same number of choices.

The number of versions is calculated using the following formula:

Number of Versions = (Base Number * Maximum number of levels in any feature) / (Number of choices per question * Number of questions)

The outcome of this formula is rounded to the nearest number divisible by 10.

The Base Number is 750 if our total number of levels across all features is less than or equal to 10, and it is 1,000 if the total number of levels across all features is greater than 10.
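
As a sketch, the version-count formula can be expressed as follows (interpreting "rounded to the nearest number divisible by 10" as round-to-nearest-ten), using the dinner-package example as input:

```python
def num_versions(total_levels, max_levels, choices, questions):
    """Version-count formula: (Base Number * max levels in any feature)
    divided by (choices per question * number of questions),
    rounded to the nearest multiple of 10."""
    base = 750 if total_levels <= 10 else 1000
    raw = (base * max_levels) / (choices * questions)
    return round(raw / 10) * 10

# Dinner-package example: 12 total levels (so Base Number = 1000),
# largest feature (Price) has 4 levels, 3 choices per question, 6 questions.
print(num_versions(total_levels=12, max_levels=4, choices=3, questions=6))  # 220
```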

The algorithm first randomly generates bundles for each of the tasks and choices. It then checks each version to ensure that there is relative balance across the number of times each level is shown. The algorithm doesn’t force each level to be shown the exact same number of times, but it does ensure that the difference between the level seen the most in that version and the level seen the least is no more than two. Versions that don’t meet these conditions are regenerated until they comply with the balance rules. The algorithm continues until the desired number of versions is generated.
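
A simplified sketch of that generate-and-check loop (helper names are hypothetical, and the real algorithm has more machinery than this):

```python
import random

def generate_version(features, questions, choices, rng):
    """Randomly build one version: a list of tasks, each task a list of
    cards, each card picking one level per feature."""
    return [[tuple(rng.choice(lv) for lv in features.values())
             for _ in range(choices)] for _ in range(questions)]

def is_balanced(version, features, max_dev=2):
    """True if, within every feature, the most-shown and least-shown
    levels differ by no more than max_dev appearances."""
    shown = [lvl for task in version for card in task for lvl in card]
    for levels in features.values():
        counts = [shown.count(l) for l in levels]
        if max(counts) - min(counts) > max_dev:
            return False
    return True

# Small hypothetical attribute set for demonstration.
features = {"Drink": ["Water", "Soda"], "Price": ["$10", "$15", "$20"]}
rng = random.Random(42)
version = generate_version(features, questions=4, choices=2, rng=rng)
# Regenerate any version that violates the balance rule.
while not is_balanced(version, features):
    version = generate_version(features, questions=4, choices=2, rng=rng)
print(is_balanced(version, features))  # True once the loop exits
```

In practice this loop would be repeated until the desired number of versions is produced, with each respondent assigned to one version.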

Survey & Sample Size

Survey Programming

Conjoint analysis is powered by the responses gathered through the survey. The survey is the touchpoint with the respondents where the design is presented and trade-off selections are made.

When a conjoint study is conducted, it is usually the focus of the survey, but not the entirety of it. It is critical that the conjoint exercise within the survey is concise and well-structured; the data and insights will only be as accurate as the packages are clear. A conjoint survey commonly includes screener questions (to ensure the right type of respondent makes it through), an introduction with educational resources, and demographic questions.

There are no hard rules on how many other questions can be added to a conjoint study, or where in the Survey Flow the conjoint should fall. However, any question asked of the respondents outside of the conjoint takes up time and focus that could otherwise be given to the conjoint exercise. Survey length should be considered as the study is being designed and built out. Fatiguing a respondent is a surefire way of degrading the caliber of the study; surveys that take more than 10-15 minutes are more susceptible to fatigue and data quality issues.

The data collected from a conjoint study is only accurate if the respondent can realistically put themselves in an actual purchasing setting. Ensuring that the respondent is fully informed on the packages they will be selecting among is a must within conjoint analysis. Many studies test concepts that are well-known and relatable to the general public. However, if that is not the case with your project, time should be devoted in advance of the conjoint to properly educate the respondent through descriptions and/or videos. The clearer a package is to the survey-taker, the truer the resulting utilities will be.

In addition to the descriptions being simple and straightforward, the layout of the cards should also lend to understanding and clarity. This allows the respondent to make comparisons and answer definitively.

Sample Size

The number of responses you should collect, and the relevance of the survey to the individuals taking it, are critical to the success and accuracy of the conjoint results. Here’s an equation Sawtooth Software uses to determine the number of responses:

Number of respondents = (multiplier*c)/(t*a)

multiplier = 750-1000

c = largest number of levels across all features

t = number of tasks or questions

a = number of alternatives or choices per question

We recommend that the multiplier is 750 for larger projects and 1000 for smaller projects. Sawtooth recommends a multiplier of 300 to 500 but we feel a larger number provides more conclusive results and simulations.
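
Using the dinner-package example (the largest feature, Price, has 4 levels) with 6 questions and 3 alternatives per question, the sample-size formula works out as follows:

```python
import math

def sample_size(multiplier, c, t, a):
    """Number of respondents = (multiplier * c) / (t * a), where c is the
    largest number of levels in any feature, t the number of tasks, and
    a the alternatives per task."""
    return multiplier * c / (t * a)

# Dinner-package example with the larger multiplier of 1000.
n = sample_size(multiplier=1000, c=4, t=6, a=3)
print(math.ceil(n))  # 223 respondents
```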

It is important that the individuals taking the conjoint exercise are reflective of those who would realistically purchase or opt for your product or service. Frequently, researchers will define screeners at the beginning of the survey to ensure pertinent opinions are gathered. Alternatively, groups will often have lists of current or prospective customers to whom they can deploy the survey.

Modeling of Conjoint Analysis

Overview

Analyzing conjoint analysis is where data turns into predictions and models. It is where respondent selections are translated into preferences. The outcome of the analysis will be an understanding of what is valuable and what is not, and will illuminate how combinations should be bundled.

The core of the analysis is the statistical modeling that estimates the utility that respondents assign to each level. Because of the statistical modeling, conjoint analysis gets a reputation as “complex,” but this is also what allows conjoint to have a reputation for being a world-class research technique. There are several statistical approaches used for calculating utility preferences, including regression and multinomial logistic modeling, typically conducted at the aggregate level.

Regardless of the manner in which the survey selections are modeled, the output should be utility coefficients that represent the value or preference that the respondent base has for the distinct levels of each feature. For designs and analysis methods that allow for individual-level calculations of utility scores, we can derive preference models for every single respondent. This can be advantageous for a number of reasons including segmentation of various data cuts, latent class analysis, and simulations. The primary approach taken to yield individual-based utility models is Hierarchical Bayes (HB) estimation. This technique uses Bayesian methods to probabilistically derive the relative value of each variable being tested.

Hierarchical Bayes Estimation

Hierarchical Bayes (HB) estimation is an iterative process that encompasses a lower-level model that estimates the individual’s relative utilities for the tested attributes, as well as a higher-level model that pinpoints the population’s predictions for preference. These two work together until the analysis converges on the coefficients that represent the value of each attribute for each individual. HB estimation borrows information from other responses to gain even better and more stable individual-level results. It is very robust and allows us to get good reads on customers’ preferences, even while presenting fewer tasks to the respondent.

The technique is deemed “hierarchical” because of the higher and lower level models. This approach estimates the average preferences (higher-level model) and then gauges how different each respondent is from that distribution to derive their specific utilities (lower-level model). The process repeats over a number of iterations to ultimately help us home in on the probability of a specific concept being selected based on its construct. Qualtrics specifically uses a Multinomial Logistic Regression model.
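
To make the multinomial logit idea concrete, here is a minimal sketch (with invented partworth values, not estimates from any real study) of how a respondent's choice probabilities follow from bundle utilities:

```python
import math

# Hypothetical partworth utilities for one respondent.
partworths = {"Chicken": 0.8, "Steak": 0.3, "Fries": 0.5, "Salad": -0.2,
              "$10": 1.0, "$20": -0.6}

def bundle_utility(bundle):
    """A bundle's total utility is the sum of its level partworths."""
    return sum(partworths[level] for level in bundle)

def choice_probabilities(bundles):
    """Multinomial logit: P(choose i) = exp(U_i) / sum_j exp(U_j)."""
    exp_u = [math.exp(bundle_utility(b)) for b in bundles]
    total = sum(exp_u)
    return [e / total for e in exp_u]

options = [("Chicken", "Salad", "$20"), ("Steak", "Fries", "$10")]
probs = choice_probabilities(options)
print([round(p, 3) for p in probs])  # [0.142, 0.858]
```

The cheaper steak-and-fries bundle has higher total utility, so the model predicts this respondent picks it about 86% of the time.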

The Qualtrics Conjoint Analysis Solution uses Hierarchical Bayes estimation written in Stan to calculate individual preference utilities. Qualtrics runs 1,000 iterations per Markov chain and runs 4 chains.

Individual Level Utility Coefficients

The outcome of the Bayesian model is a set of preference scores that represent the utility the individual attaches to each level. These scores are frequently called partworth utilities and are the basis of all summary metrics and simulations derived from the conjoint study. The utility file has a row for each respondent included in the conjoint analysis and a column for each unique level tested within the study. In modeling the preferences of each respondent, the utilities help us predict what selections respondents would make when faced with different bundles. The utilities are ordinal in nature and tell us the rank order of each level tested, with some magnitude of contribution to the total bundle utility of a package.

The partworth utility scores are zero-centered and are generally within the range of -5 to +5. In the conjoint solution, the raw utility scores for each individual can be exported to a CSV using the Summary Metrics option.

Summary Metrics & Conjoint Reporting

Conjoint Summary Metrics

With the derived utility coefficients as the basis of the analysis, outputs and deliverables can be prepared to showcase the findings of the study. They will be the building blocks of all of the summary metrics and simulations. The core summary metrics that typically accompany conjoint analysis are detailed below.

  • Feature Importance: The amount of influence and impact that a feature has in decision-making amongst product configurations. The greater the feature importance, the more weight and control it has in what makes a favorable product. Feature importance is calculated by taking the distance between the best and worst level within that feature. The bigger the distance, the more important the feature. A simple way to think about feature importance is that the levels of an important feature have a big impact on whether or not a package is selected in a choice-based conjoint model.
  • Average Utility Scores: The average utility score of each level across all respondents. These are ordinal in nature and will show the relative preference between levels. The average utilities can give some directional understanding but should not be a standalone metric to summarize the conjoint analysis.
  • First Choice Preference Scores: The first choice preference scores indicate the percentage of respondents that found the most utility with the different levels. Within each respondent’s utility coefficients they will have a top or most preferred level within each feature. The first choice scores will be the distribution of respondents that found that level to be the best option for that feature.
  • Preference Share: The preference share is the measurement of the probability that a level would be chosen over another with all other feature components held constant. It is a product of the utilities being calculated using a Multinomial Logistic Regression model and is derived by exponentiating the level utility and dividing that by the sum of all of the exponentiated levels within the feature.
  • Willingness to Pay: The amount of money a customer is willing to pay for a particular attribute of a product in comparison to another attribute. Typically we recommend that a base case or current case level is defined and then we can determine how much more or less they are willing to pay in comparison to the base level. Each level can have a willingness to pay compared to the base case. This can only be used when price or cost is a feature in the conjoint analysis. It is calculated by finding the amount of utility difference between the different price points and then applying that dollar per utility ratio to the other levels and their utility scores. We usually like to calculate the willingness to pay on the respondent level and then aggregate and summarize.
  • Optimal packages: This is the optimal package in regards to maximizing customer preference and appeal. This might not always be the exact approach an organization would want to move toward, as the cost of implementation may be prohibitive, but it can guide directionally.
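
As a minimal sketch of two of these metrics, the snippet below computes feature importance and preference share from a hypothetical set of average partworth utilities (values invented for illustration; the exact normalization used in the product may differ):

```python
import math

# Hypothetical average partworth utilities, grouped by feature.
utilities = {
    "Main dish": {"Chicken": 0.9, "Steak": 0.1, "Seafood": -1.0},
    "Drink":     {"Water": -0.2, "Soda": 0.2},
}

# Feature importance: the best-minus-worst range within each feature,
# here normalized so the importances sum to 100%.
ranges = {f: max(lv.values()) - min(lv.values()) for f, lv in utilities.items()}
total = sum(ranges.values())
importance = {f: 100 * r / total for f, r in ranges.items()}

def preference_share(feature, level):
    """exp(level utility) over the sum of exponentiated utilities
    within the same feature, all else held constant."""
    exp_all = {l: math.exp(u) for l, u in utilities[feature].items()}
    return exp_all[level] / sum(exp_all.values())

print({f: round(i, 1) for f, i in importance.items()})
print(round(preference_share("Drink", "Soda"), 3))  # 0.599
```

Main dish dominates here because its levels span a much wider utility range than Drink, so it carries far more weight in what makes a favorable package.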

Reporting on Conjoint Analysis Insights

Conjoint analysis can provide a variety of incredible insights about the predicted behavior of customers. Different metrics and charts can showcase trends and commonalities in responses. But the primary output of a conjoint analysis study should always be the conjoint simulator. The simulator should be the tool of choice to answer key questions like the trade-offs customers would make and how different packages would compare to each other. The summary metrics listed above are helpful and serve a purpose, but should always point you back to the simulator.

Conjoint Analysis Simulations

What is a Simulator?

The conjoint analysis simulator is an interactive tool that facilitates the testing and predicting of preference amongst plausible product configurations. The simulator typically includes a series of dropdowns that allows for the creation of packages that consist of the attributes that were included in the conjoint study. At the core, conjoint analysis is a technique for recognizing the trade-offs that customers would make when presented with different choices. The preference simulator embodies this objective by reporting the estimated trade-off customers would make when presented with 2 or more options. The number of potential scenarios within a simulator can be astronomical, as both the product constructs and the segments included can be altered.

In addition to the obvious trade-off analysis, there are a variety of uses that are extremely valuable in deriving insights from conjoint results. The most prevalent uses of the simulator are competitive landscape analysis, measuring improvement from a product base case, and gauging the relative value of product attributes.

Business Objectives Covered by the Conjoint Simulator

Competitive Landscape Analysis with a Simulator

Healthy businesses will frequently look over their shoulder to research how the competition compares. Conjoint analysis is a great tool to uncover how a business’s potential product configurations would compare to the competing options on the market. This is contingent, however, on the attributes of the competing products being included in the study’s features and levels. Within the simulator, the competitor’s product attributes can be laid out and then, with the remaining options, you can define different bundles to preview how they would stack up to the existing market.

Improving an Existing Product with a Simulator

Oftentimes, products need to go through revamps and improvements to stay ahead of competitors and to remain relevant and innovative. This requires progressive adjustments. A conjoint study is a fantastic methodology for understanding where companies can make the most compelling changes to excite new prospects and retain their current users. With data in hand, a simulator can be utilized to capture the what-ifs of making changes to the attributes. “Option 1” within the simulator can be set to be the current product, and “Option 2” can be iteratively changed by the controller to discover where the biggest gains are available.

Gauging the Relative Value of Product Attributes with a Simulator

Any product is, at its core, a combination of multiple features. It is a sum of its parts. Grasping the preference of those parts is essential to conjoint analysis. Expanding upon “preference,” it makes sense to try to further quantify the value of each level. If price was included within the attribute set, the simulator can be an outstanding tool for inferring that value. The process is to mirror the same product configuration in “Option 1” and “Option 2.” Changing a single level or group of levels in one option will leave the preference shares no longer equal. Then move the price level of the other option to find where the two packages are equal again. The difference in price between “Option 1” and “Option 2” can be interpreted as the relative value of that level or group of levels.
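
A sketch of that price-equalization exercise with invented partworths. Utilities for untested prices are linearly interpolated here, which is one common simulator convention rather than a documented Qualtrics method, and all names and values are hypothetical:

```python
import math

# Hypothetical partworths: two main-dish levels plus tested price points.
partworths = {"Chicken": 0.8, "Steak": 0.3}
price_utils = {10.0: 1.0, 15.0: 0.4, 20.0: -0.3, 25.0: -1.1}

def price_utility(price):
    """Linear interpolation between the tested price levels."""
    pts = sorted(price_utils)
    for lo, hi in zip(pts, pts[1:]):
        if lo <= price <= hi:
            w = (price - lo) / (hi - lo)
            return price_utils[lo] * (1 - w) + price_utils[hi] * w
    raise ValueError("price outside tested range")

def share(u1, u2):
    """Two-option logit share for the first option."""
    return math.exp(u1) / (math.exp(u1) + math.exp(u2))

# Option 1: Chicken at $15.  Option 2: Steak at an unknown price p.
u1 = partworths["Chicken"] + price_utility(15.0)
# Equal shares means equal utilities: price_utility(p) = u1 - Steak partworth.
target = u1 - partworths["Steak"]
# Invert the interpolation by searching on a one-cent grid.
p_star = min((p / 100 for p in range(1000, 2501)),
             key=lambda p: abs(price_utility(p) - target))
print(round(p_star, 2))  # break-even price for the Steak option
print(round(share(u1, partworths["Steak"] + price_utility(p_star)), 2))  # ≈ 0.5
```

The gap between the two break-even prices is the dollar value this respondent base implicitly places on getting Chicken instead of Steak.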

FAQs