Sociology, Department of

Date of this Version

1-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

In the total survey error (TSE) paradigm, nonsampling errors can be difficult to quantify, especially errors that occur in the data collection phase of face-to-face surveys. Field interviewers play “… dual roles as recruiters and data collectors…” (West et al., 2018), and are therefore potential contributors to both nonresponse error and measurement error. Recent advances in technology, paradata, and performance dashboards offer an opportunity to observe interviewer effects almost at the source in real time, and to intervene quickly to curtail them. Edwards, Maitland, and Connor (2017) report on an experimental program using rapid feedback of CARI coding results (within 72 hours of the field interview) to improve interviewers’ question-asking behavior. Mohadjer and Edwards (2018) describe a system for visualizing quality metrics and displaying alerts that inform field supervisors of anomalies (such as very short interviews) detected in data transmitted in the previous 24 hours. These features allow supervisors to investigate quickly, intervene, and correct interviewer behavior that departs from the data collection protocol. From the interviewer’s perspective, these interactions can be viewed as a form of learning “on the job,” consistent with the literature on best practices in adult learning; from the survey manager’s perspective, they can be an important feature of a continuous quality improvement program for repeated cross-sectional and longitudinal surveys. We build on these initiatives to focus on specific areas where interviewer error can be a major contributor to TSE.
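As an illustration of the kind of alert rule such a dashboard might apply, the Python sketch below flags interviews transmitted in the previous 24 hours whose duration falls below a cutoff. The record layout, field names, and 20-minute threshold are hypothetical assumptions for this sketch, not details of the systems cited above.

```python
from datetime import datetime, timedelta

# Hypothetical paradata records: one dict per completed interview.
paradata = [
    {"interviewer_id": "FI-102", "case_id": "C-5541",
     "transmitted_at": datetime(2019, 1, 25, 21, 30), "duration_min": 8.5},
    {"interviewer_id": "FI-117", "case_id": "C-5588",
     "transmitted_at": datetime(2019, 1, 25, 22, 10), "duration_min": 42.0},
]

SHORT_INTERVIEW_MIN = 20.0   # illustrative cutoff, not from the cited systems
WINDOW = timedelta(hours=24)

def short_interview_alerts(records, now):
    """Return records for interviews transmitted within the last 24 hours
    whose duration falls below the short-interview threshold."""
    recent = [r for r in records if now - r["transmitted_at"] <= WINDOW]
    return [r for r in recent if r["duration_min"] < SHORT_INTERVIEW_MIN]

for alert in short_interview_alerts(paradata, now=datetime(2019, 1, 26, 9, 0)):
    print(f"ALERT: {alert['interviewer_id']} case {alert['case_id']} "
          f"lasted {alert['duration_min']} min")
```

A rule like this surfaces the anomaly to a supervisor within one transmission cycle, which is what makes the rapid investigation and intervention described above possible.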

We plan an experiment to investigate how rapid feedback based on CARI coding can affect a survey’s key statistics. The experiment will be embedded in a continuous national face-to-face household survey that has an ongoing protocol of weekly CARI coding and interviewer feedback. The treatment will be rapid feedback on question-asking behavior for several critical items in the CAPI instrument: items that are known to be problematic for interviewers and respondents, and that produce data that do not benchmark well against other sources. The survey’s field staff is organized into a number of reporting regions. The treatment group will be a subset of interviewers who work in several of these regions; the control group will be the interviewers in the remaining regions, who will follow the existing protocol. A region-level assignment along these lines is sketched below.
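The sketch that follows shows one way reporting regions could be randomized to the two conditions. The region labels, region count, and treatment fraction are hypothetical, and the actual design may balance or stratify regions on workload or interviewer characteristics.

```python
import random

# Hypothetical reporting regions; the survey's actual regions are not named here.
regions = [f"Region-{i:02d}" for i in range(1, 13)]

random.seed(2019)  # fixed seed so the assignment is reproducible

random.shuffle(regions)
n_treatment = 4    # illustrative: a subset of regions receives rapid feedback

treatment = sorted(regions[:n_treatment])   # rapid CARI feedback on critical items
control = sorted(regions[n_treatment:])     # existing weekly feedback protocol

print("Treatment regions:", treatment)
print("Control regions:  ", control)
```

Randomizing at the region level rather than the interviewer level keeps the two feedback protocols from mixing within a supervisor's caseload.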

We also plan a descriptive study of contact attempt records, based on other surveys that equip interviewers with smartphones. Interviewers can enter records on the smartphone or on the laptop computer used to conduct CAPI interviews. Entry on the smartphone was designed to increase the proportion of records entered shortly after the event occurred and to improve recording accuracy. The records are available for review by supervisors, who monitor smartphone usage and advise interviewers on contact strategies, based in part on the contact record history. We will investigate whether interviewers who enter records primarily on the smartphones generate more records, more paradata per record, and more accurate paradata. Other studies have shown that the quality of paradata on contact attempts can be quite poor, yet such paradata are a primary input for response propensity modeling (an element of many responsive survey designs). Thus, the quality of the contact records can play an indirect role in nonresponse error.
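To make the connection to propensity modeling concrete, the sketch below fits a simple response propensity model from features that could be derived from contact attempt records. The features, coefficients, and synthetic data are hypothetical; production models typically use richer case- and area-level predictors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_cases = 500

# Hypothetical per-case features derived from contact attempt records:
# number of attempts, share of attempts in the evening, any prior contact.
X = np.column_stack([
    rng.integers(1, 10, n_cases),   # attempts so far
    rng.random(n_cases),            # proportion of evening attempts
    rng.integers(0, 2, n_cases),    # prior contact with a household member
])

# Synthetic outcomes for the sketch: response more likely with prior contact.
logit = -1.5 + 0.8 * X[:, 2] + 0.5 * X[:, 1]
y = rng.random(n_cases) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
propensity = model.predict_proba(X)[:, 1]   # estimated response propensities

# Low-propensity cases might be prioritized for a protocol change under a
# responsive design; inaccurate contact records would distort this ranking.
print("Lowest-propensity cases:", np.argsort(propensity)[:5])
```

Because late, missing, or inaccurate attempt records feed directly into features like these, errors in the paradata propagate into the propensity estimates and, from there, into the responsive-design decisions they inform.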
