This article describes how to perform the statistical analysis of a choice-based conjoint study. The key steps are:
- The goal: obtaining multiple estimates of the utility of each attribute level for each respondent
- Check the questionnaire
- Fit a choice model to the data
- Hygiene test: check for convergence
- The “smell test”: checking the means and distributions of the coefficients
- Remove random choosers
- Remove "irrational" respondents
- Check external validity
- Experiment with alternative model specifications and choose the best model
1. The goal: obtaining multiple estimates of the utility of each attribute level for each respondent
In the early days of choice-based conjoint (the 1980s), the goal was to analyze the data to create a chart like the one below.
The height of each bar is referred to as a utility (or partworth or coefficient), where:
- Utility is another way of saying how appealing something is.
- The utility of the first level of each attribute is set to 0, and all the other utilities of attribute levels are measured relative to this first level (see Introduction to the Multinomial Logit Model for more about this).
- Because each attribute is anchored at 0, it's easier to compare the marginal increase in utility from levels both within and across attributes.
If the utilities shown above are correct, they tell us that:
- People prefer bigger pay rises to smaller pay rises. (No big surprise.)
- Having a 5% pay rise is pretty marginal. It needs to be 10%.
- People value a carbon neutral employer. It's worth almost 10% of salary. The key point is for an employer to be committed to being carbon neutral within 10 years.
- People really care about the tools they use. You have to pay people a lot to use bad software.
- People prefer fully remote work, but it's less important than the other attributes.
From this information alone it's possible to draw lots of interesting conclusions and even to predict market share.
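To see how utilities can be turned into predicted shares, here is a minimal sketch of the multinomial logit rule in Python. The alternatives and utility values are invented for illustration; they are not the numbers from the chart above:

```python
# A minimal sketch of how utilities translate into predicted shares under the
# multinomial logit model. The utility numbers here are made up for
# illustration; they are not the values from the chart in this article.
import math

# Total utility of each alternative = sum of its attribute-level utilities.
offers = {
    "Offer A (10% rise, carbon neutral, good tools, office)": 1.2 + 0.9 + 1.5 + 0.0,
    "Offer B (5% rise, not neutral, good tools, remote)": 0.4 + 0.0 + 1.5 + 0.6,
}

# Logit rule: the share of an alternative is exp(utility) divided by the sum
# of exp(utility) over all alternatives (a softmax over utilities).
denominator = sum(math.exp(u) for u in offers.values())
for name, utility in offers.items():
    print(f"{name}: {math.exp(utility) / denominator:.1%}")
```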
While estimating utilities is useful, at a basic level utilities like these aren't quite right. Consider Work location. Clearly some people will prefer to work from home and others from a workplace. Differences between people like this cannot be seen in averaged utilities like the ones above.
In the 1990s, new techniques were invented that made it possible to calculate a separate utility for each person in a study, such that the overall utilities are just an average of each person's utilities. As shown below, the utility of a 20% salary increase is 2.1. This is an average. We can see that the first person's utility is 0.3, the next person's utility is 0.8, and so on.
With such respondent-level data, more nuanced conclusions can be reached (e.g., differences by segment) and simulators become more accurate.
Inevitably there must be a lot of noise in the data from a choice-based conjoint study. If we ask people to make, say, 10 choices, where they choose from, say, four alternatives, how can we compute so precisely that their utility is, say, 0.3 rather than 0.4? We can't.
Modern techniques for calculating utilities calculate the distribution of possible values for each respondent for each attribute level. These possible utilities are called draws. In the example below, respondent 7 has an average utility of 2.3, but this is an average of 100 possible numbers. For example, it's possible the utility for respondent number 7 is 1.8, 3, 2.3, 2, or any of the 100 numbers in this table.
The average of these 100 numbers is our best single guess. (And there is nothing special about these particular 100 numbers; a different set of draws could be produced, provided it had a similar distribution in terms of the mean, standard deviation, and overall shape.)
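To make the idea of draws concrete, here is a minimal sketch in Python. The data is simulated (the respondent count, draw count, and all utility values are made-up assumptions, not output from any real study), but it shows how the mean of each respondent's draws becomes the best single guess:

```python
# A small sketch of what "draws" look like in practice: for each respondent,
# estimation produces many plausible utility values, and the mean of those
# draws is the single best guess. All numbers are simulated, not real output.
import numpy as np

rng = np.random.default_rng(7)
n_respondents, n_draws = 10, 100

# Simulate 100 draws per respondent for one attribute level (e.g., 20% rise).
true_means = rng.normal(loc=2.1, scale=1.0, size=n_respondents)
draws = rng.normal(loc=true_means[:, None], scale=0.5,
                   size=(n_respondents, n_draws))

# The best single guess for each respondent is the mean of their draws;
# the standard deviation summarizes how uncertain that guess is.
for i, (m, s) in enumerate(zip(draws.mean(axis=1), draws.std(axis=1)), start=1):
    print(f"Respondent {i}: mean utility {m:.2f} (sd {s:.2f})")
```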
The goal of the statistical analysis of choice-based conjoint is to calculate the draws of the utilities for each respondent for each attribute level as accurately as possible. The first step in this is to check that the questionnaire is appropriate.
2. Check the questionnaire
Choice-based conjoint studies involve enough complexities that good practice is to perform extremely diligent testing, both prior to collecting any data and after a soft send.
See Checking a Choice-Based Conjoint Questionnaire.
3. Fit a choice model to the data
The statistical analysis of choice-based conjoint studies starts with fitting a choice model to the data. As a point of jargon, choice models are used to model any data where respondents have had to choose between alternatives, whether the data has been obtained from a choice-based conjoint study or from some other source (e.g., a panel recording household grocery purchasing).
There are three main choice models:
- The multinomial logit model. This is not useful in practice, but it is the fundamental building block of the more advanced techniques and it is useful to understand how it works. See Introduction to the Multinomial Logit Model.
- Hierarchical Bayes. This is the standard model and the best model for most problems.
- Latent class logit. This can be the best model if the only goal of the analysis is to identify segments.
See the Displayr help article How to do the Statistical Analysis of Choice-Based Conjoint Data for instructions on how to create these models in Displayr. In Q, it's almost identical, except that the user starts off via Automate > Browse Online Library > Choice Modeling > Hierarchical Bayes.
4. Hygiene test: check for convergence
When using more complicated models to analyze data, such as choice models, it is possible that a result is returned only because the software ran out of time. Such a model is said not to have converged.
Good software will provide a warning if this is a problem. See Checking a Model for Convergence for more information. The remedy is to increase the number of iterations of the model.
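For readers curious about what such a convergence check involves, below is a generic sketch of one widely used diagnostic, the Gelman-Rubin potential scale reduction statistic (R-hat). This is an illustration only, not Displayr's or Q's internal check, and the simulated chains are assumptions for the sketch. Values close to 1 suggest the sampling chains agree; larger values suggest more iterations are needed:

```python
# A generic illustration of one common convergence diagnostic, the
# Gelman-Rubin potential scale reduction (R-hat), computed from simulated
# sampling chains. Values near 1.0 indicate convergence.
import numpy as np

def r_hat(chains: np.ndarray) -> float:
    """chains: array of shape (n_chains, n_iterations) for one parameter."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    within = chains.var(axis=1, ddof=1).mean()      # W: within-chain variance
    between = n * chain_means.var(ddof=1)           # B: between-chain variance
    var_hat = (n - 1) / n * within + between / n    # pooled variance estimate
    return float(np.sqrt(var_hat / within))

rng = np.random.default_rng(0)
converged = rng.normal(0, 1, size=(4, 500))         # chains exploring the same region
stuck = converged + np.arange(4)[:, None]           # chains stuck in different regions

print(f"R-hat (converged): {r_hat(converged):.2f}")      # close to 1.0
print(f"R-hat (not converged): {r_hat(stuck):.2f}")      # well above 1.1
```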
5. The “smell test”: checking the means and distributions of the coefficients
Once it is believed that the model has converged, the next step is to review the standard outputs of the choice model and check that the results seem sensible. This is referred to as a “smell test”. If something is rotten, there is a good chance of discovering it at this stage. For almost all models, this is the most important step in statistical analysis.
Refer to Displayr Help's How to Read Displayr's Choice Model Output for general advice on interpreting the outputs of a choice model.
6. Remove random choosers
The goal of a choice-based conjoint study is to understand what drives choice. However, some respondents do not take much care when answering the questions, providing near-random choices. It is good practice to identify such respondents and clean them from the data.
For more information, see:
- Using RLH to Remove Random Choosers from Choice-Based Conjoint Studies for a discussion of the logic of this approach (a sketch of the calculation appears below).
- How to Remove Random Choosers from a Choice-Based Conjoint Model for instructions on how to perform this in Displayr.
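As a rough illustration of the logic, the sketch below computes RLH (root likelihood) as the geometric mean of the probabilities the model assigned to each respondent's actual choices, and compares it to the chance level. The probabilities and the cutoff are invented for illustration; the linked articles describe how to choose a defensible cutoff:

```python
# A rough sketch of the RLH (root likelihood) logic for flagging random
# choosers. RLH is the geometric mean of the model's predicted probabilities
# for each respondent's actual choices; the cutoff used here is illustrative.
import numpy as np

def rlh(chosen_probabilities: np.ndarray) -> float:
    """Geometric mean of the predicted probabilities of the chosen options."""
    return float(np.exp(np.mean(np.log(chosen_probabilities))))

n_alternatives = 4
chance = 1.0 / n_alternatives  # a random chooser scores about 0.25 here

# Hypothetical predicted probabilities for two respondents' 10 choices each.
careful = np.array([0.7, 0.6, 0.8, 0.5, 0.7, 0.9, 0.6, 0.7, 0.8, 0.6])
random_chooser = np.array([0.3, 0.2, 0.25, 0.3, 0.2, 0.25, 0.3, 0.2, 0.25, 0.3])

for name, probs in [("careful", careful), ("random", random_chooser)]:
    score = rlh(probs)
    flag = "remove" if score <= chance * 1.2 else "keep"  # illustrative cutoff
    print(f"{name}: RLH = {score:.2f} -> {flag}")
```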
7. Remove "irrational" respondents
Sometimes respondents answer questions in a way that indicates they are basically irrational. For example, some respondents may always choose the most expensive product. Where there is strong evidence that this behavior makes no sense in the context of the study it is appropriate to exclude such respondents from the data.
For more information, see:
- Identifying Respondents with "Irrational" Preferences for a discussion of the logic of this approach (a simple version of the check is sketched below).
- How to Remove Irrational Respondents from a Choice-Based Conjoint Model for instructions on how to perform this in Displayr.
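As a hedged illustration, the sketch below implements the example from the text: flagging respondents who always chose the most expensive alternative shown to them. The data layout and price values are assumptions made for the sketch, not a description of Displayr's data format:

```python
# A simple sketch of one "irrationality" check from the text: flagging
# respondents who always chose the most expensive alternative on offer.
# The data layout and prices are hypothetical, invented for illustration.
respondent_choices = {
    # respondent id -> list of (price of chosen option, prices of all options shown)
    1: [(900, [500, 700, 900]), (800, [400, 800, 600])],
    2: [(500, [500, 700, 900]), (600, [400, 800, 600])],
}

for respondent, tasks in respondent_choices.items():
    always_most_expensive = all(chosen == max(prices) for chosen, prices in tasks)
    if always_most_expensive:
        print(f"Respondent {respondent}: always chose the most expensive "
              "alternative; review for exclusion.")
```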
8. Check external validity
Once we have removed the random choosers and irrational respondents, we hopefully have a reliable model. We check external validity to confirm that it is. This is done by:
- Checking the predictive accuracy of the choice model via cross-validation (a simple version is sketched below).
- Verifying that utilities calculated from the choice model correlate with other data.
- Verifying that the model can accurately predict historic behavior in a market.
For more information, see Checking the External Validity of a Choice Model.
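As an illustration of the first of these checks, the sketch below computes a holdout hit rate: the share of held-out choices that the model predicts correctly, compared against the chance level. All the numbers are simulated for illustration:

```python
# A minimal sketch of the cross-validation idea: hold out some choice tasks,
# predict them, and compute the hit rate (the share of holdout choices the
# model predicts correctly). All numbers are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_holdout_tasks, n_alternatives = 50, 4

# Hypothetical model-predicted utilities for each holdout task's alternatives;
# actual choices are simulated as noisy responses to those same utilities.
predicted_utilities = rng.normal(size=(n_holdout_tasks, n_alternatives))
noise = rng.normal(scale=1.0, size=predicted_utilities.shape)
actual_choices = (predicted_utilities + noise).argmax(axis=1)

predicted_choices = predicted_utilities.argmax(axis=1)
hit_rate = (predicted_choices == actual_choices).mean()

# Chance level with 4 alternatives is 25%; a useful model should beat this.
print(f"Holdout hit rate: {hit_rate:.0%} (chance = {1 / n_alternatives:.0%})")
```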
9. Experiment with alternative model specifications and choose the best model
The steps above create what can best be described as a vanilla model. It is quite basic. Most conjoint studies use such vanilla models, but they can usually be improved. Whether they should be improved is a time-cost-quality tradeoff.
The basic way of improving a model is to create alternative models and compare them to see which is best, based on predictive accuracy (see How to Compare Choice Models (Cross-Validation)). This is sometimes called a "bake-off".
The main ways of modifying choice models are:
- Segmentation. This involves creating different models for different segments. An example of this is in Case study 2 in How to Compare Choice Models (Cross-Validation).
- Code categorical attributes as numeric. The mechanics of how to do this are described in the Displayr Help article How to Specify Numeric Attributes in Choice Models. The theory is described in related articles.
- Interactions. This is beyond the scope of this article. It is rarely done in practice.
- Covariates (aka predictors; e.g., demographics). This is beyond the scope of this article.
- Modify the prior covariance distribution. This is an advanced topic and is rarely done in practice. Common approaches are:
  - Changing the mixing distribution from normal (the default of a Hierarchical Bayes model) to triangular, discrete, or some other distribution.
  - Changing the distributions of priors.
  - Changing the number of mixture components.
See the Displayr Help article on How to Change the Specification of a Choice Model for an explanation of how to perform the more advanced modifications of choice models in Displayr and Q.