Last month, we issued the second of three installments of newsletters centered on identifying opportunities to drive more “bang for the research buck” on tracking studies, along with some suggestions on how to execute that process. In this third and final installment on the subject, we outline the practices required to preserve historical trends in customer satisfaction data and to bridge the potential gap between results obtained from the previous research supplier and those obtained from the new research firm commissioned to execute an improved or updated tracking program.
The key objective in managing these changes to an existing tracking study is to preserve the historical trends established in previous waves of research by mitigating the risks the transition introduces. If a risk mitigation plan is not deployed, new results may not be comparable to previous ones, and the time, money, and resources invested in those historical trends are wasted.
With that end in mind, the mitigation plan must be designed
to identify all sources of variance (characteristics of the data and collection
methods that may prevent newer data from being comparable to past data) and,
one by one, eliminate or control for each source of variance to the extent
possible. In the end, if significant differences between past and new data remain, an algorithm must be built to equate the historical data to the new data and preserve the comparability of historical trends.
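As an illustration, one common equating approach is linear (mean-and-sigma) equating, which rescales scores collected under the old conditions onto the distribution observed under the new conditions. The sketch below is a minimal example, assuming score data from both suppliers’ parallel waves is available as arrays; the function and variable names are hypothetical.

```python
import numpy as np

def linear_equate(historical_scores, old_parallel, new_parallel):
    """Map historical scores onto the new supplier's scale using
    mean-and-sigma (linear) equating estimated from the parallel waves."""
    old_mean, old_sd = np.mean(old_parallel), np.std(old_parallel, ddof=1)
    new_mean, new_sd = np.mean(new_parallel), np.std(new_parallel, ddof=1)
    # Standardize against the old supplier's distribution, then rescale
    # to the new supplier's distribution.
    return (np.asarray(historical_scores) - old_mean) / old_sd * new_sd + new_mean
```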
The sources of variance in tracking study migration include:
- Method of data collection
- Survey items’ wording, scaling, and sequence/position in questionnaire
- Quality and other relevant behaviors of interviewers
- Changes resulting from the implementation of programs or other organizational policies and processes that affect customers’ perceptions of and satisfaction with the organization
- Changes in the marketplace that impact customers’ perceptions of the organization
The first three above can be considered “error variance sources,” which can be eliminated. The last two should be considered “market-related variance sources,” which cannot be eliminated but can be accounted for and controlled.
Maintaining the same data collection method and using the exact same questionnaire are key to mitigating risk and preserving historical trends, because consistency in method and instrument eliminates the first two sources of error variance. If the exact same questionnaire and the exact same method are used by both suppliers, then any significant difference between historical and new satisfaction scores can be attributed only to differences in interviewers (error variance source #3), market-related variance sources notwithstanding.
To control for market-related sources of variance, we recommend conducting waves of a given tracking study in parallel: that is, continue to allow the research supplier responsible for the historical trends to collect data while the new research supplier collects data in the exact same format and from the exact same population. If market-related sources of variance arise during this parallel data collection period, they will affect the data collected by both suppliers equally. These variance sources are therefore held constant, and thus controlled for, when comparing the data collected by one supplier to the other.
For the parallel test outlined above, we recommend running the tracking study in parallel for at least three months. While this increases study costs, it prevents the loss of historical trends, which is usually far more costly. To contain these costs, the number of interviews administered by one supplier need not match the number administered by the other.
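How much smaller the second sample can be is a statistical question: the fewer the interviews, the less sensitive the comparison. One way to size both samples is a standard power calculation for unequal group sizes; the sketch below uses statsmodels under assumed inputs (a small effect of Cohen’s d = 0.2, 80% power, and the relaxed 90% confidence level recommended below).

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs: detect a small effect (Cohen's d = 0.2) with 80% power
# at the relaxed 90% confidence level (alpha = 0.10).
analysis = TTestIndPower()
nobs1 = analysis.solve_power(effect_size=0.2, alpha=0.10, power=0.80,
                             ratio=0.5)  # new supplier runs half as many interviews
print(f"incumbent supplier: {nobs1:.0f} interviews, "
      f"new supplier: {nobs1 * 0.5:.0f} interviews")
```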
If all other study-related sources of error variance are eliminated, market-related sources of variance are held constant, and significant differences are nevertheless found in the parallel test, an in-depth analysis of the data is required. This analysis entails statistical testing of both the reported satisfaction scores and the variance of each survey item’s data. While the industry standard for significance testing is the 95% confidence level, in this case we recommend reducing the confidence level to 90% or even lower: failing to detect a real difference between suppliers is costlier here than flagging a spurious one, so the test should err on the side of sensitivity.
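A minimal sketch of such a test for one survey item, assuming the two suppliers’ parallel-wave scores are available as arrays (scipy is used here; the function name is hypothetical):

```python
from scipy import stats

ALPHA = 0.10  # relaxed from the conventional 0.05, per the recommendation above

def compare_item(old_scores, new_scores):
    """Compare one survey item's mean and variance across the parallel waves."""
    _, t_p = stats.ttest_ind(old_scores, new_scores, equal_var=False)  # Welch's t-test
    _, lev_p = stats.levene(old_scores, new_scores)                    # variance test
    return {"mean_differs": t_p < ALPHA, "variance_differs": lev_p < ALPHA}
```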
Statistically test the differences between all attribute scores from the two studies completed in parallel. Any significant difference between attribute scores should be examined, and a complete exploratory data analysis (EDA) should be conducted on the data for that attribute. It will also be necessary to test for differences in sample composition. Although sample composition should be checked before the samples are finalized, differences in interviewing techniques and completion rates can still shift the achieved respondent mix between the two studies, so composition should be re-tested on the completed interviews.
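A chi-square test of homogeneity is one way to compare the achieved respondent mix. The sketch below assumes hypothetical interview counts by age band for each supplier:

```python
import numpy as np
from scipy import stats

# Hypothetical counts of completed interviews by age band, one row per supplier.
counts = np.array([[120, 210, 180, 90],   # incumbent supplier
                   [55, 95, 110, 40]])    # new supplier

# Test whether the two suppliers reached the same respondent mix.
chi2, p, dof, _ = stats.chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # p below 0.10 flags a composition shift
```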
At the completion of the three-month parallel tracking study, a complete technical report should be provided, documenting any significant differences found between the two studies and explaining them, for example through analyses of covariance (ANCOVAs) that control for compositional differences. Where an attribute score differs significantly between the two studies, the historical data can be statistically adjusted to align with the new supplier’s results.
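A minimal ANCOVA sketch using statsmodels, assuming a hypothetical file of parallel-wave interviews with one row per respondent (the file name, column names, and covariate are all assumptions):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical parallel-wave data: one row per completed interview, with the
# attribute score, the supplier that collected it, and respondent age.
df = pd.read_csv("parallel_wave.csv")  # assumed columns: score, supplier, age

# ANCOVA: estimate the supplier effect on the score while controlling for
# respondent composition (age as the covariate here).
model = smf.ols("score ~ C(supplier) + age", data=df).fit()
print(model.summary())
```

The supplier coefficient from such a model estimates the shift attributable to the supplier change after composition is controlled for, which is the quantity that would drive the statistical adjustment of the historical trend.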