Choice-based conjoint is a complicated methodology, with many opportunities to make mistakes. Consequently, it is appropriate to be very cautious and meticulous when checking a questionnaire. To check a choice-based conjoint questionnaire:
- Check the questionnaire by being a respondent
- Check the data capture and any randomization mechanism
- Conduct a soft send and analyze the resulting data
Check the questionnaire by being a respondent
A lot of time can be saved by completing the questionnaire yourself, answering as if you were a respondent. The tricks to this are to:
- Concentrate really hard. The natural temptation is to click through questions without carefully reading them. However, many errors are avoided if you carefully complete the questionnaire every time you make changes. It is very boring, but very useful.
- Adopt some fake personas. That is, imagine, say, four different segments of respondents and complete the questionnaire as if you were in each of them.
An alternative is to conduct a qualitative technique called think-alouds.
Check the data capture and any randomization mechanism
To analyze the data from a choice-based conjoint study it is necessary to know exactly which respondents were shown which choice tasks (questions) and in what order things were shown (e.g., how the alternatives were randomized). If the person programming the data collection questionnaire is not experienced in choice-based conjoint studies, it is a good idea to carefully check that all the data is being captured appropriately. It is not unheard of for studies to need to be redone because of failures to correctly capture which respondents saw which questions.
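The kind of capture check described above can be automated. The sketch below is a hypothetical illustration: the record format (keys such as 'respondent', 'task', 'shown', 'choice') is an assumption, not the output of any particular survey platform, so you would adapt it to whatever your data collection software exports.

```python
def check_capture(records, n_tasks):
    """Return a list of problems found in captured conjoint records.

    records: list of dicts with keys 'respondent', 'task',
             'shown' (the ordered list of alternatives displayed)
             and 'choice' (the alternative selected).
    n_tasks: number of choice tasks every respondent should complete.
    """
    problems = []
    by_respondent = {}
    for r in records:
        by_respondent.setdefault(r["respondent"], []).append(r)
        # The recorded choice must be one of the alternatives shown.
        if r["choice"] not in r["shown"]:
            problems.append(f"{r['respondent']} task {r['task']}: "
                            "choice not among shown alternatives")
    # Every respondent should have a complete, non-duplicated set of tasks.
    for resp, recs in by_respondent.items():
        tasks = {r["task"] for r in recs}
        if len(tasks) != n_tasks:
            problems.append(f"{resp}: expected {n_tasks} tasks, "
                            f"saw {sorted(tasks)}")
    return problems
```

If this returns an empty list for a test run of the questionnaire, the basic mapping from respondents to tasks and choices is intact.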
Where you are using some form of randomization in your study (e.g., if using split cell designs), it is a good idea to check that whoever has programmed it is using an appropriate mechanism. Sometimes people who are inexperienced with advanced methodologies develop creative but non-random approaches to randomization, which ruins the integrity of the study (e.g., randomizing based on the time of day, or on an earlier response to a question in the study).
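A sound mechanism draws the order from a proper pseudo-random generator, not from incidental quantities like the time of day. A minimal sketch, assuming integer respondent IDs and a study-level seed (both names are hypothetical):

```python
import random

def randomize_alternatives(respondent_id, alternatives, seed=12345):
    """Return a per-respondent random ordering of alternatives.

    Seeding on the study seed plus the respondent ID makes the
    order reproducible for checking, yet unrelated to anything the
    respondent did or to when they answered.
    """
    rng = random.Random(f"{seed}-{respondent_id}")
    order = list(alternatives)
    rng.shuffle(order)
    return order
```

Because the ordering is reproducible from the seed and respondent ID, you can re-derive it at analysis time and confirm it matches what was captured, and you can tabulate the orderings across simulated respondents to confirm each alternative appears in each position at roughly equal rates.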
Conduct a soft send and analyze the resulting data
A “soft send” is survey research jargon for initially collecting a subset of the final sample (e.g., 10%). The data collection is then paused while you:
- Check that the data is being collected properly. That is, you know what choices people made and which questions they were shown.
- Estimate a hierarchical Bayes model.
- Form preliminary conclusions. That is, check that the model is telling you what you need to know for the study to be a success. Sure, the standard errors will be relatively high, but the key conclusions should still make sense at this stage.
If everything makes sense, continue with the data collection.
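Full hierarchical Bayes estimation requires specialized software, but a quick "counts" analysis (the choice share of each attribute level, relative to how often it was shown) already reveals whether the soft-send data behave sensibly. The sketch below is a hedged illustration; the task format is an assumption, not any platform's export.

```python
from collections import Counter

def level_counts(tasks):
    """Choice share for each attribute level across choice tasks.

    tasks: list of (alternatives, chosen_index) pairs, where each
    alternative is a dict mapping attribute name -> level.
    Returns {(attribute, level): share chosen when shown}.
    """
    shown, chosen = Counter(), Counter()
    for alternatives, chosen_index in tasks:
        for i, alt in enumerate(alternatives):
            for attr, level in alt.items():
                shown[(attr, level)] += 1
                if i == chosen_index:
                    chosen[(attr, level)] += 1
    return {key: chosen[key] / n for key, n in shown.items()}
```

If a level's share is wildly at odds with expectations (e.g., the highest price is chosen most often), that points to a capture or design error worth finding before resuming fieldwork.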
This approach is the only one that checks for clerical errors. That is, it’s possible you have a great design, but due to clerical errors, it is not administered correctly. It also allows you to recover if you have made a mistake in the entire conception of the experiment. For instance, sometimes conjoint studies inadvertently include a couple of factors (attributes) that are so important that everything else becomes insignificant. Where the hypotheses of interest relate to the insignificant factors, this is a big problem. It is best to identify this kind of problem before you have finished the fieldwork. Otherwise, it cannot be fixed.