Department of Sociology

Date of this Version

2-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

It is well documented that interviewers can have profound effects on the survey data collection process. This research builds on that knowledge by examining the relationship between CATI interviewers and data editors, and how the recording of answers, and the subsequent editing of those recorded answers, contribute to total survey error (TSE). Specifically, we are interested in comparing answers recorded by CATI interviewers with the final response codes after the editing stage. Since the interviewing stage and the editing (or processing) stage are often two distinct phases of the data collection process in which the interviewer and data editor work independently of each other, we hypothesize that variable processing errors may be introduced, and errors of measurement may go uncorrected, leading to final recorded responses that deviate from true values. To investigate this, we behavior-coded thirty-nine CATI interviews from the April 2018 Agricultural Labor Survey conducted by the United States Department of Agriculture’s National Agricultural Statistics Service (USDA NASS) and compared the responses recorded by the interviewers with the final response codes after data editing. We used the behavior-coded data to identify answers that behavior coders believed to contain measurement error, and those that appeared, based on the behavior codes of the interviewer-respondent interaction, to be accurate. We then examined whether answers flagged by the behavior coders for potential measurement error were changed to different answers during the data editing process. Similarly, we looked at answers the behavior coders considered accurate when recorded by the interviewers and compared them with the final response codes to see whether they had been changed to something less accurate.
Preliminary analysis of the data has revealed several instances where recorded answers with measurement error were not changed, and several instances where accurate answers were changed to something less accurate during the editing stage. When this type of error occurs, the impact on final estimates can be larger than expected. For example, one respondent to the Agricultural Labor Survey had indicated that one of their hired workers had worked between four and six hours during the reference week. When recording the answer, the CATI interviewer indicated that one worker worked four hours, an answer deemed accurate by behavior coders who listened to the exchange on the audio recording. During the editing stage, however, the recorded response was changed, and the final response code indicated that this one worker had worked forty hours during the reference week. The result is an overestimation of how many hours this one worker worked. Although the editor’s reasoning is unknowable, it is plausible that the data editor changed the answer because they believed the interviewer had committed a processing error when typing in the data. Preliminary analysis suggests there are cases where answers with no measurement error are being changed in the editing stage, and conversely, that cases with measurement error are going unchanged. The independence of the interviewing and data editing processes seems to produce variable errors in measurement and processing.
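The comparison described above can be sketched as a simple cross-tabulation of two binary outcomes per answer: whether behavior coders flagged it for measurement error, and whether the editing stage changed it. The following is a minimal, hypothetical sketch; the records, field names, and the `crosstab` helper are illustrative inventions, not actual NASS data or NASS editing logic.

```python
# Hypothetical records: (flagged_for_measurement_error, recorded_answer,
# final_edited_answer). Values are illustrative, not real survey data.
records = [
    (False, 4, 40),   # accurate answer altered in editing (e.g., 4 -> 40 hours)
    (True, 35, 35),   # answer with suspected measurement error left unchanged
    (False, 40, 40),  # accurate answer left unchanged
    (True, 60, 45),   # suspected error corrected by the data editor
]

def crosstab(records):
    """Count cases by (flagged, changed) to surface the two problem cells:
    accurate answers that were changed, and flagged answers left untouched."""
    counts = {}
    for flagged, recorded, final in records:
        key = (flagged, recorded != final)
        counts[key] = counts.get(key, 0) + 1
    return counts

counts = crosstab(records)
# The two cells of interest from a TSE perspective:
accurate_changed = counts.get((False, True), 0)   # processing error introduced
error_unchanged = counts.get((True, False), 0)    # measurement error uncorrected
print(accurate_changed, error_unchanged)
```

In this toy data, one accurate answer was changed and one flagged answer went unchanged, mirroring the two error types the abstract describes.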
