It is common practice for commercial researchers to follow rules of thumb regarding sample size. For example, some researchers refuse to test for statistical significance when the sample size is below 30, 50, or 100.
Such rules of thumb have no formal justification. All tests of statistical significance explicitly take the sample size into account, and many tests function quite adequately with very small sample sizes (see the sketch after the list below). The only time to be particularly concerned about sample size is when there is a problem regarding distributional assumptions:
- As discussed earlier, assumptions about the specific distribution (e.g., whether it is normal) become key determinants of the accuracy of p-value computations when sample sizes are small.
- Deviations from simple random sampling (and, consequently, the i.i.d. assumption) can be particularly problematic with very small samples.
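To illustrate how the sample size is built into the test itself, here is a minimal sketch using SciPy with hypothetical data: a two-sample t-test on two groups of six observations each. The group names and values are invented for illustration; the point is that the standard error and the degrees of freedom already reflect n, so no separate minimum-n rule is needed for the computation to proceed.

```python
from scipy import stats

# Hypothetical ratings for two groups of n = 6 each (values invented for illustration).
group_a = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4]
group_b = [5.6, 6.1, 5.3, 5.9, 6.4, 5.7]

# The p-value computation incorporates the sample size via the standard
# error and the degrees of freedom (n1 + n2 - 2 for the equal-variance test).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, df = {len(group_a) + len(group_b) - 2}")
```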
The key point about sample size is that even when the sample is small and various assumptions are not met, running a test with violated assumptions is still better than running no test at all. If no test is conducted, a conclusion will still be reached about whether the difference is meaningful, so an inexact computation beats a guess.
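When the distributional assumptions themselves are in doubt, one assumption-light cross-check is a permutation test, which makes no normality assumption. Below is a minimal sketch reusing the same hypothetical data as above; the number of permutations and the random seed are arbitrary choices, not requirements.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
group_a = np.array([4.1, 5.0, 3.8, 4.6, 5.2, 4.4])  # hypothetical data, as above
group_b = np.array([5.6, 6.1, 5.3, 5.9, 6.4, 5.7])

observed = abs(group_a.mean() - group_b.mean())
pooled = np.concatenate([group_a, group_b])

# Repeatedly relabel the observations at random and count how often the
# resulting difference in means is at least as extreme as the observed one.
n_perm = 10_000
count = 0
for _ in range(n_perm):
    shuffled = rng.permutation(pooled)
    diff = abs(shuffled[:6].mean() - shuffled[6:].mean())
    count += diff >= observed

print(f"permutation p = {count / n_perm:.4f}")
```

Because the permutation test only assumes exchangeability under the null hypothesis, it can serve as a sanity check on a parametric p-value when the sample is small and the distributional assumptions are questionable.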