A common problem when analyzing surveys is that:
- The data analysis requires sampling weights (e.g., due to over-representation of sub-groups of the population).
- The software being used to analyze the data has not been written to accommodate sampling weights.
- The software has, however, been written to accommodate frequency weights.
In such situations, it is possible to scale the sampling weights so that the data can be analyzed using software designed for frequency weights. This scaling can be referred to as weight calibration, although it should be stressed that there is no standard name for the process, and many people who use weight calibration do not refer to it as "weight calibration".
For example, if a study has a sample size of 300, an average weight of 1.3, and an effective sample size of 120, then each weight is multiplied by 120 / (1.3 * 300) ≈ 0.31. The resulting weight, which can be referred to as the calibrated weight, is then treated as a frequency weight.
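A minimal sketch of this arithmetic in Python, using simulated weights (the data and variable names are illustrative, not from any particular study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sampling weights mirroring the example above:
# a sample of 300 with an average weight of roughly 1.3.
# (Hypothetical data; any vector of positive weights would do.)
weights = rng.gamma(shape=2.0, scale=0.65, size=300)

effective_n = 120  # the effective sample size, assumed to be known

# Multiply each weight by effective_n / (average weight * sample size),
# which equals effective_n / weights.sum(), so that the calibrated
# weights sum to the effective sample size.
calibrated = weights * effective_n / weights.sum()

print(calibrated.sum())  # 120.0, up to floating-point error
```

The calibrated weights sum to 120 rather than roughly 390, so software that interprets them as frequency weights behaves as though only 120 observations had been collected.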
Although this approach is generally superior to treating the sampling weight as a frequency weight without calibration, it is not, in a strict sense, valid, as:
- The effective sample size is itself generally computed incorrectly (see Design Effects and Effective Sample Size); the approximation that is typically used is sketched below.
- Even when the correct effective sample size is used, the resulting standard errors of estimates differ from those obtained using more valid methods (e.g., Taylor series linearization).
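For reference, the effective sample size that is typically plugged into this calculation is Kish's approximation, n_eff = (Σ wᵢ)² / Σ wᵢ², which is only correct under restrictive assumptions (in particular, that the weights are unrelated to the variable being analyzed). A minimal sketch, reusing the hypothetical weights from the earlier example:

```python
import numpy as np

# Hypothetical sampling weights, as in the earlier sketch.
weights = np.random.default_rng(0).gamma(shape=2.0, scale=0.65, size=300)

# Kish's approximation to the effective sample size:
# n_eff = (sum of the weights)^2 / (sum of the squared weights).
# It equals the sample size when all weights are equal and is
# smaller whenever the weights vary.
effective_n = weights.sum() ** 2 / (weights ** 2).sum()

print(effective_n)  # less than 300, because the weights vary
```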
Nevertheless, calibration is, in general, superior to ignoring the weight or to treating a sampling weight as a frequency weight.[1]
Also known as
Weight normalization [2]
References
1. Dorofeev, Sergey and Peter Grant (2006): Statistics for Real-Life Sample Surveys, Cambridge University Press, Melbourne.
2. Hahs-Vaughn, Debbie L. (2005): A Primer for Using and Understanding Weights with National Datasets, Journal of Experimental Education.