The standard error of a statistic is the standard deviation of its sampling distribution (B. S. Everitt (2002), The Cambridge Dictionary of Statistics, Second Edition, Cambridge University Press).
The standard error is a measure of precision: estimates with higher standard errors have lower precision. It is used in the computation of confidence intervals and in many tests of statistical significance.
Computation
There are a number of common methods of computing standard errors:
- Formulas for computing standard errors directly from the data. For example, the best-known is the standard error of the mean, computed as $s/\sqrt{n}$, where $s$ is the sample standard deviation and $n$ is the sample size (see the sketch after this list).
- Formulas for computing the standard errors from regression outputs (sometimes referred to as analytic standard errors).
- Algorithms for approximating the Hessian, which is then used as an input into formulas for computing standard errors (sometimes referred to as numeric standard errors). This is generally the method employed with mixed multinomial logit, and it is commonly used when analytic standard errors cannot be computed (a sketch of this approach appears further below).
- Resampling methods, including the bootstrap, the jackknife, and permutation methods. These are commonly used when the assumptions required by the three methods above are believed not to hold (the sketch after this list includes a bootstrap example).
- Using intermediate calculations from Bayesian estimation methods (i.e., the posterior distributions of the parameters).
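As a rough illustration of the first and fourth approaches, the sketch below computes the standard error of the mean directly from the formula $s/\sqrt{n}$ and then again with a simple nonparametric bootstrap. The simulated data, the number of bootstrap replicates, and the use of NumPy are assumptions made purely for illustration.

```python
import numpy as np

# Hypothetical data: 212 observations from a simple random sample
# (the sample size matches the worked example further down).
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.763, size=212)

# 1. Direct formula: standard error of the mean = s / sqrt(n),
#    using the sample standard deviation (ddof=1).
se_formula = x.std(ddof=1) / np.sqrt(len(x))

# 2. Resampling (bootstrap): re-estimate the mean on resamples drawn
#    with replacement and take the standard deviation of those estimates.
n_boot = 10_000
boot_means = np.array([
    rng.choice(x, size=len(x), replace=True).mean()
    for _ in range(n_boot)
])
se_bootstrap = boot_means.std(ddof=1)

print(f"formula SE:   {se_formula:.3f}")
print(f"bootstrap SE: {se_bootstrap:.3f}")
```

Because both estimators target the same quantity, the two printed values are typically very close for a sample of this size.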
Where the assumptions of each of these methods are met, they all compute the same standard error (differences can arise due to issues of numerical precision and model specification). The only exception is Bayesian estimation, which often has slightly different goals and can therefore lead to different results.
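This agreement can be seen with a numeric (Hessian-based) standard error. The sketch below is a hedged illustration only: the normal model, the simulated data, the choice of optimizer, and the finite-difference step are all assumptions. It maximizes a log-likelihood, approximates the Hessian of the negative log-likelihood at the maximum by central differences, and takes the square roots of the diagonal of its inverse as the numeric standard errors; the one for the mean should be close to the analytic $s/\sqrt{n}$.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data and model: estimate the mean and log-standard-deviation
# of a normal sample by maximum likelihood.
rng = np.random.default_rng(1)
y = rng.normal(loc=5.0, scale=2.0, size=500)

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Negative log-likelihood up to an additive constant.
    return np.sum(np.log(sigma) + 0.5 * ((y - mu) / sigma) ** 2)

# Maximize the likelihood (minimize the negative log-likelihood).
fit = minimize(negloglik, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
theta_hat = fit.x

def numeric_hessian(f, theta, eps=1e-4):
    """Central-difference approximation to the Hessian of f at theta."""
    k = len(theta)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            e_i, e_j = np.zeros(k), np.zeros(k)
            e_i[i], e_j[j] = eps, eps
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * eps ** 2)
    return H

# Numeric standard errors: square roots of the diagonal of the inverse
# of the approximated Hessian (the observed information) at the maximum.
H = numeric_hessian(negloglik, theta_hat)
numeric_se = np.sqrt(np.diag(np.linalg.inv(H)))

print("numeric SEs (mean, log-sd):", numeric_se)
print("analytic SE of the mean:   ", y.std(ddof=1) / np.sqrt(len(y)))
```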
Example
If the standard deviation of a variable in a simple random sample is 2.763 and the sample size is 212, then the standard error is $2.763/\sqrt{212} \approx 0.19$.
See confidence interval for the use of this calculation.
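A quick check of the worked example, using only the standard library. The 95% half-width at the end uses the conventional normal multiplier of 1.96 and is included purely to illustrate the confidence-interval use mentioned above.

```python
import math

s = 2.763   # sample standard deviation
n = 212     # sample size

se = s / math.sqrt(n)
print(f"standard error = {se:.3f}")            # ~0.190

# Half-width of an approximate 95% confidence interval around the mean.
print(f"95% CI half-width = {1.96 * se:.3f}")  # ~0.372
```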