Sociology, Department of

Date of this Version

2-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

In interviewer-administered surveys, interviewers are involved in nearly all steps of survey implementation. Despite the many positive aspects of their involvement, however, interviewers are also, intentionally or unintentionally, a potential source of survey error. In recent decades, a large body of literature has accumulated on measuring and explaining interviewer effects on survey unit nonresponse. Recently, West and Blom (2017) published a research synthesis on factors explaining interviewer effects on various sources of survey error, including survey unit nonresponse. They find that previous research reports great variability across surveys in the significance, and even the direction, of predictors of interviewer effects on survey unit nonresponse. This variability in findings may be due to a lack of consistency in key characteristics of the surveys examined, such as the group of interviewers employed, the survey organizations managing the interviewers, the sampling frames used, and the populations and time periods observed. In addition, the explanatory variables available to researchers examining interviewer effects on survey nonresponse differ substantially across surveys and may thus influence the results.

The diversity in findings, survey characteristics, and explanatory variables available for analysis calls for a more orchestrated effort to explain interviewer effects on survey unit nonresponse. Our paper fills this gap, as our analyses are based on four German surveys with a high level of consistency across them: GIP 2012, PIAAC, SHARE, and GIP 2014. The four surveys were conducted face-to-face in Germany in approximately the same time period. They were administered by the same survey organization with the same pool of interviewers. In addition, we were able to use the same area control variables and identical explanatory variables at the interviewer level.

Despite the numerous similarities across the four surveys, our results show high variability across surveys in the interviewer characteristics that explain interviewer effects on survey unit nonresponse. In addition, we find that the interviewers employed in the four surveys are rather similar with regard to most of their socio-demographic characteristics, work experience, and working hours. Furthermore, the interviewers are similar with regard to their reported behavior and deviations from standardized interviewing techniques, how they achieve response, and their reasons for working as interviewers. The results therefore suggest that other differences between the four surveys, such as topic, sponsor, research team, or interviewer training, might explain the identified interviewer effects on survey unit nonresponse.
