- The term is applied to the likely spread of estimates of a parameter in a statistical model. It is measured by the standard error of the estimator, which can be decreased, and hence precision increased, by using a larger sample size. (B. S. Everitt (2002): The Cambridge Dictionary of Statistics, Second Edition, Cambridge.)
- The level of detail used in computations (e.g., the number of significant digits).
Example of precision (first meaning)
Consider a simple survey question such as:
How likely would you be to recommend Microsoft to a friend or colleague? Responses are given on an 11-point scale from 0 (Not at all likely) to 10 (Extremely likely).
This question was asked of a sample of 312 consumers and the following frequency table summarizes the resulting data:
From this table, we can compute that the average rating given to Microsoft is (0 × 22 + 1 × 12 + … + 10 × 22) / 312 = 5.9 out of 10. However, only 312 people have provided data. As there are around seven billion people in the world, it is possible that we would have computed a different answer had we interviewed all of them. But how different? This is the question we seek to answer by evaluating precision.
The standard technical measure of the precision of an estimate is its standard error, which in this case is 0.189763619.
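As a rough sketch of how the mean and its standard error can be computed from a frequency table like this, the Python snippet below uses hypothetical counts (chosen only so that they sum to 312); they are placeholders for illustration, not the survey's actual figures:

```python
import math

# Hypothetical frequency table: rating -> number of respondents.
# These counts are placeholders for illustration, not the actual survey data.
counts = {0: 22, 1: 12, 2: 18, 3: 25, 4: 30, 5: 35, 6: 40, 7: 45, 8: 40, 9: 23, 10: 22}

n = sum(counts.values())                          # total sample size
mean = sum(r * c for r, c in counts.items()) / n  # weighted average rating

# Sample variance of the individual ratings, then the standard error of the mean.
variance = sum(c * (r - mean) ** 2 for r, c in counts.items()) / (n - 1)
standard_error = math.sqrt(variance / n)

print(f"n = {n}, mean = {mean:.1f}, standard error = {standard_error:.3f}")
```

Because the standard error shrinks with the square root of the sample size, quadrupling the number of respondents would roughly halve it.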
The most common way of communicating the precision of an estimate to a non-technical audience is via confidence intervals. Using the simplest formula for computing confidence intervals, the 95% confidence interval for the likelihood to recommend Microsoft is from 5.6 to 6.2, which can be loosely interpreted as the range of true values that would likely have been obtained had everybody been interviewed.
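A minimal sketch of that simplest formula (the estimate plus or minus 1.96 standard errors) is shown below; the estimate and standard error plugged in are hypothetical placeholders rather than the survey's figures:

```python
def confidence_interval_95(estimate: float, standard_error: float) -> tuple[float, float]:
    """Simplest 95% confidence interval: estimate +/- 1.96 standard errors."""
    margin = 1.96 * standard_error
    return estimate - margin, estimate + margin

# Hypothetical example values, purely for illustration.
lower, upper = confidence_interval_95(estimate=7.2, standard_error=0.25)
print(f"95% confidence interval: {lower:.1f} to {upper:.1f}")  # 6.7 to 7.7
```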