A practical problem in data analysis is that the numbers we analyze are often inaccurate to some unknown degree. One fix for this problem is the delta principle: focus analyses on comparing numbers that share the same types of errors. A case study is presented, followed by a discussion of the delta principle's implications for writing questionnaires so that analysis is easier.
The delta principle
One of the ugly truths of survey research is that all survey results contain some error, and the level of error is often both unknown and unknowable. The delta principle is to evaluate results by comparing them with other results that have the same types of errors. The "delta" refers to the Greek letter delta, which is often used in mathematics as shorthand for a difference of some kind (e.g., one number minus another number).
Stated more succinctly, the delta principle recommends that we focus on relativities rather than absolute magnitudes when interpreting results from surveys.
The most common ways that the delta principle is used in analyses are:
- Comparisons by sub-groups
- Comparisons over time
- Comparisons against benchmarks
- Comparisons against very similar questions
Case study: do you have a best friend at work?
The polling and consulting company Gallup's Q12 survey asks 12 questions to assess the level of commitment of employees. One of these questions asks Do you have a best friend at work?
It is not hard to find problems with this question. Is it measuring something that's even relevant? Aren't best friends for leisure rather than work? Surely an efficient bunch of motivated people can be successful without being best friends? Needless to say, this question has elicited a lot of criticism over time! However, when it is analyzed using the delta principle, its usefulness becomes clear.
Let's say that 16% of the people at your company answer Strongly Agree to this question. Is that a good result or a bad result? All the problems with the wording of the question make it hard to assess what such a number may mean. And this is where the delta principle comes into play.
While you cannot look at the 16% number on its own and conclude much, you can still use it to compare with other instances where the question has been asked. The basic principle here is that we can understand any result using a simple formula:
Result = Quantity of Interest + Noise
The quantity of interest in the best friend question is the degree of connectedness that people have with their colleagues. This is obviously imperfectly measured by Do you have a best friend at work?, implying that the noise is very large and potentially swamps the quantity of interest.
Now, here's where the magic comes in. Let's say we have asked Do you have a best friend at work at two companies, getting results of 16% at one company and 26% at the other. Both of these results are on their own very inaccurate due to the noise. But, the noise is due to the question wording, and so we can assume that the noise is consistent in both results, which means that the difference between the results (26% - 16% = 10%) is meaningful. That is, we can expect that much of the noise cancels out when we compare data that has the same type of noise.
If this idea doesn't make immediate sense, it is easy to work through the maths. As a thought experiment, imagine that poor wording means that people give a score that is 25 percentage points less than the truth. This means that the real level of connectedness for the first company is 16% + 25% = 41%, and 26% + 25% = 51% for the second. The difference between these numbers is also 10 percentage points.
This example calculation also reveals an important caveat about the delta principle: it has some inbuilt assumptions. The calculation assumes the error is additive (each number is 25 percentage points too low). But it is just as plausible that the error is multiplicative, with each number being half the correct value. In that case, the true results are 32% and 52%, and the true difference is 20 points rather than 10. So, while the delta principle is very helpful, it is far from guaranteed to be valid.
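The worked example above can be sketched in a few lines of code. The numbers are the illustrative ones from the text, not real survey data: the observed difference recovers the true difference when the noise is additive, but not when it is multiplicative.

```python
# Observed "Strongly Agree" shares at the two hypothetical companies.
obs_a, obs_b = 0.16, 0.26
observed_diff = obs_b - obs_a            # 10 points

# Assumption 1: question wording subtracts 25 points from every result.
true_a_add = obs_a + 0.25                # 0.41
true_b_add = obs_b + 0.25                # 0.51
# The true difference is still 10 points, so the comparison is valid.

# Assumption 2: question wording halves every result instead.
true_a_mul = obs_a * 2                   # 0.32
true_b_mul = obs_b * 2                   # 0.52
# Now the true difference is 20 points, double what we observed.
```

Under additive noise the difference between results is exact; under multiplicative noise only the *direction* of the difference survives, not its size.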
Comparison by sub-group
The most common way of applying the delta principle is to compare results by sub-groups. For example, if our survey finds that 12% of engineers at the company have a best friend, but 32% of marketers do, then we could conclude that the marketers are more connected than the engineers.
Comparisons against benchmarks
A second way of using the delta principle is to compare numbers against benchmarks. Gallup has published that the average firm scores 20% on Do you have a best friend at work? From this, we can determine that the first company has a poor score, but the second has a good score relative to the benchmark (provided the differences are not due to sampling error).
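The sampling-error caveat can be checked with a standard normal-approximation test for a proportion. This is a minimal sketch: the sample size of 400 is a made-up assumption, and the benchmark is treated as a fixed, error-free number, which in practice it is not.

```python
import math

def se_proportion(p: float, n: int) -> float:
    """Standard error of a sample proportion (normal approximation)."""
    return math.sqrt(p * (1 - p) / n)

p_company = 0.26   # company's "Strongly Agree" share
n_company = 400    # hypothetical number of respondents
benchmark = 0.20   # published benchmark, treated as fixed

z = (p_company - benchmark) / se_proportion(p_company, n_company)
print(f"z = {z:.2f}")  # prints "z = 2.74"
# |z| > 1.96 suggests the gap over the benchmark exceeds
# what sampling error alone would plausibly produce.
```

With a smaller sample (say, 50 respondents), the same 6-point gap would not clear the 1.96 threshold, which is why the parenthetical caveat above matters.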
Comparisons over time
The third way of using the delta principle is to repeat studies over time and compare the results, on the assumption that changes over time reflect movements in the quantity of interest rather than in the noise.
Comparisons to similar questions
The delta principle can also be applied by creating additional questions with similar problems. For example, we could ask Did you have a best friend at your previous workplace? and compare this data with the data from Do you have a best friend at work.
Using the delta principle when writing questionnaires
The delta principle has an important implication for writing questionnaires: consistency is more important than well-worded questions.
It is natural to look at a question such as Do you have a best friend at work? and to try and come up with alternative ways of asking the question. The logic of this comes back to our earlier formula:
Result = Quantity of Interest + Noise
If we can improve the wording of the question, we reduce the noise, which means that our result is closer to our quantity of interest. Who wouldn't want to do that?
Experienced researchers do not see it this way. The problem is that when you reword a question, the formula changes to:
New Result = Quantity of interest + Different noise
The problem with this new formula is that you can no longer apply the delta principle, as the noise and the different noise are unlikely to be the same, and so will not cancel out when results are compared.
For this reason, experienced researchers tend to prize questions that have been used before above new questions that cannot be compared with other data. When writing new questionnaires they re-use old questions. When conducting tracking research, they do not change the wording of questions without putting up a massive fight.