Anchored MaxDiff experiments are an extension of standard MaxDiff experiments, adding questions designed to establish the absolute importance of the attributes. That is, while a traditional MaxDiff experiment identifies only the relative importance of the attributes, an anchored MaxDiff experiment supports conclusions about whether specific attributes are important in an absolute sense.
Example
The table on the left shows the Probability % from a traditional MaxDiff experiment in which the attributes being compared are technology brands. Looking at the analysis, we can see that:
- Google and Sony come in first and second place in terms of preference.
- Apple has done better amongst the women than the men.
- Intel and HP have done relatively better amongst the men than the women.
Each column of percentages sums to 100%, so the analysis captures only relative preferences. While a naive reading of the data would lead one to conclude that women like Apple more than men do, the data does not actually tell us this: it is possible that the men like every single brand more than the women do, but because the analysis is expressed as percentages, no such conclusion can be drawn.
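To see why, note that if (as is standard for MaxDiff outputs of this kind) the Probability % is a logit, or softmax, transformation of the underlying coefficients, then adding any constant $c$ to every coefficient, for example because a respondent likes all brands more, leaves the percentages unchanged:

$$P_i = \frac{e^{\beta_i + c}}{\sum_j e^{\beta_j + c}} = \frac{e^{c}\,e^{\beta_i}}{e^{c}\sum_j e^{\beta_j}} = \frac{e^{\beta_i}}{\sum_j e^{\beta_j}}$$

The overall level of liking is not identified by the choices; only the differences between coefficients are.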
The table on the right shows the same analysis in terms of the Coefficient. This is also uninformative about absolute preference, as the coefficients are indexed relative to the first brand, Apple. That men and women both have a score of 0 for Apple is therefore an assumption of the analysis rather than an insight (the color-coding appears because the significance test compares the relativities, and 0 is a relatively high score for the women).
Anchored MaxDiff resolves this conundrum by using additional data as a benchmark. In the table below, a question asking likelihood to recommend each of the brands has been used to anchor the MaxDiff experiment. In particular, a rating of 7 out of 10 has been used as the benchmark and assigned a coefficient of 0.[note 1] All of the other coefficients are then interpreted relative to this benchmark value. We can see that Apple has received a score of less than seven amongst both men and women, so in an absolute sense the brand can be seen as performing poorly (a score of less than seven on a likelihood-to-recommend question is typically regarded as a poor score). The analysis also shows that men have a marginally lower absolute score for Apple than do women (-0.68 versus -0.50), whereas Google performs equally amongst men and women.
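As a rough illustration of the anchoring arithmetic only (the real analysis estimates the MaxDiff coefficients and the rating-scale utilities jointly, typically via hierarchical Bayes), anchoring amounts to expressing the coefficients relative to the estimated utility of the benchmark rating rather than relative to an arbitrary brand. All numbers in this sketch are hypothetical:

```python
# Relative coefficients, indexed so the first brand (Apple) is 0,
# as in the middle table. Values are hypothetical.
relative = {"Apple": 0.00, "Google": 1.20, "Sony": 0.95, "Intel": 0.40, "HP": 0.25}

# Assumed utility of a 7-out-of-10 recommend rating on the same scale.
anchor_utility = 0.50

# Anchoring re-expresses each coefficient relative to the benchmark,
# so that 7/10 maps to 0 and the signs carry absolute meaning.
anchored = {brand: u - anchor_utility for brand, u in relative.items()}

for brand, u in sorted(anchored.items(), key=lambda kv: -kv[1]):
    verdict = "above" if u > 0 else "below"
    print(f"{brand}: {u:+.2f} ({verdict} the 7/10 benchmark)")
```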
Types of anchored MaxDiff experiments
There are two common types of anchored MaxDiff experiments.
Dual response format
The dual response format involves following each MaxDiff question with another question asking something like:
Considering the four features shown above, would you say that...
○ All are important
○ Some are important, some are not
○ None of these are important
The responses to each task are then valued relative to the answer to the dual question, which ties them to an absolute threshold of importance, as sketched below.
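One common coding of the dual question (this is an illustration; implementations differ) converts each answer into extra paired comparisons against a pseudo-attribute, the anchor, whose utility is fixed at 0 during estimation. A minimal sketch in Python, with hypothetical task data and answer labels:

```python
ANCHOR = "threshold"  # pseudo-attribute with utility fixed at 0

def anchor_pairs(shown, best, worst, answer):
    """Return (winner, loser) comparisons implied by the dual question."""
    if answer == "all":
        # "All are important": every shown attribute beats the threshold.
        return [(item, ANCHOR) for item in shown]
    if answer == "none":
        # "None are important": the threshold beats every shown attribute.
        return [(ANCHOR, item) for item in shown]
    # "Some are important, some are not": the best attribute clears the
    # threshold and the worst falls below it.
    return [(best, ANCHOR), (ANCHOR, worst)]

# One MaxDiff task with four attributes and its dual-response answer.
pairs = anchor_pairs(
    shown=["price", "battery life", "camera", "warranty"],
    best="battery life",
    worst="warranty",
    answer="some",
)
print(pairs)  # [('battery life', 'threshold'), ('threshold', 'warranty')]
```

These extra comparisons are appended to the MaxDiff choice data, and because the anchor's utility is pinned at 0, the estimated coefficients acquire an absolute interpretation.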
MaxDiff combined with rating scales
Before or after the MaxDiff experiment, the respondent provides traditional ratings (e.g., rates all the attributes on a scale from 0 to 10). These ratings are treated as additional questions within the MaxDiff analysis, and the values given to them are anchored to one or more points on the rating scale.
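A minimal sketch of one simple, rule-based version of this idea: ratings above a chosen cut-off are coded as the attribute beating the anchor, and ratings at or below it as the anchor beating the attribute. Real implementations typically estimate the mapping between the rating scale and the utility scale rather than imposing a hard cut-off; the cut-off and ratings below are assumptions for illustration.

```python
ANCHOR = "threshold"  # pseudo-attribute with utility fixed at 0
CUTOFF = 7            # the benchmark point on the 0-10 scale

ratings = {"Apple": 6, "Google": 8, "Sony": 8, "Intel": 5, "HP": 4}  # hypothetical

pseudo_comparisons = [
    (brand, ANCHOR) if rating > CUTOFF else (ANCHOR, brand)
    for brand, rating in ratings.items()
]
# Each (winner, loser) pair is appended to the MaxDiff choice data, tying
# the estimated coefficients to the rating scale.
print(pseudo_comparisons)
```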