Sociology, Department of


Date of this Version

2-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

To enhance response among underrepresented groups, and hence to increase response rates and decrease potential nonresponse bias, survey practitioners often use interviewers in population surveys (Heerwegh, 2009). While interviewers tend to increase overall response rates (see Heerwegh, 2009), research on the determinants of nonresponse has also identified human interviewers as one source of variation in response rates (see, for example, Couper & Groves, 1992; Durrant, Groves, Staetsky, & Steele, 2010; Durrant & Steele, 2009; Hox & de Leeuw, 2002; Loosveldt & Beullens, 2014; West & Blom, 2016). In addition, research on interviewer effects indicates that interviewers introduce nonresponse bias if they systematically differ in their success in obtaining response from specific respondent groups (see West, Kreuter, & Jaenichen, 2013; West & Olson, 2010). Interviewers might therefore be a source of selective nonresponse in surveys.

Interviewers might also contribute differentially to selective nonresponse in surveys, and hence to potential nonresponse bias, when interviewer effects are correlated with characteristics of the approached sample units (for an example see Loosveldt & Beullens, 2014). To investigate whether interviewer effects on nonresponse differ across specific respondent groups, researchers commonly use multilevel models that include dummies in the random part of the model to distinguish between the groups (see Loosveldt & Beullens, 2014). When dummy codes, also referred to as contrast codes (Jones, 2013), are included as random components in multilevel models of interviewer effects, the resulting variance estimates indicate to what extent the contrast between respondent groups varies across interviewers. Yet this parameterization does not directly yield insight into the size of interviewer effects for specific respondent groups.
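As a sketch of why contrast coding yields only indirect group-specific estimates (the notation below is ours for illustration and is not taken from the cited sources): let y_{ij} indicate response of sample unit i approached by interviewer j, and let d_{ij} be a dummy distinguishing two respondent groups. A logistic multilevel model with the dummy in the random part is

\operatorname{logit}\Pr(y_{ij}=1) = \beta_0 + \beta_1 d_{ij} + u_{0j} + u_{1j} d_{ij}, \qquad \begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix} \sim N\!\left(\mathbf{0},\; \begin{pmatrix} \sigma_{u0}^2 & \sigma_{u01} \\ \sigma_{u01} & \sigma_{u1}^2 \end{pmatrix}\right).

Here \sigma_{u1}^2 is the between-interviewer variance of the contrast between the two groups; the interviewer variance for the group coded d_{ij}=1 is obtained only indirectly, as \sigma_{u0}^2 + 2\sigma_{u01} + \sigma_{u1}^2.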

Surveys with large imbalances between respondent groups benefit from an investigation of how interviewer effect sizes on nonresponse vary, as this reveals whether the interviewer effect size is the same for specific respondent groups. The interviewer effect size for specific groups of respondents matters because it predicts the effectiveness of interviewer-related fieldwork strategies (for examples on liking, matching, or prioritizing respondents with interviewers see Durrant et al., 2010; Peytchev, Riley, Rosen, Murphy, & Lindblad, 2010; Pickery & Loosveldt, 2002, 2004) and thus how effectively potential nonresponse bias can be mitigated. Consequently, understanding group-specific interviewer effect sizes can aid the efficiency of respondent recruitment, because it clarifies why some interviewer-related fieldwork strategies have a great impact on some respondent groups' participation while other strategies have little effect.

To obtain information on differences in interviewer effect size, we propose an alternative coding strategy, so-called separate coding, in multilevel models with random slopes (for examples see Jones, 2013; Verbeke & Molenberghs, 2000, ch. 12.1). Under separate coding, every variable represents a direct estimate of the interviewer effects for a specific respondent group (rather than the contrast with a reference category).
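A sketch of the separate-coding parameterization under the same illustrative notation: replace the random intercept and random contrast with group-specific random effects, using the indicators g_{ij}^{(0)} = 1 - d_{ij} and g_{ij}^{(1)} = d_{ij}:

\operatorname{logit}\Pr(y_{ij}=1) = \beta_0 + \beta_1 d_{ij} + v_{0j}\, g_{ij}^{(0)} + v_{1j}\, g_{ij}^{(1)}, \qquad \begin{pmatrix} v_{0j} \\ v_{1j} \end{pmatrix} \sim N\!\left(\mathbf{0},\; \begin{pmatrix} \sigma_{v0}^2 & \sigma_{v01} \\ \sigma_{v01} & \sigma_{v1}^2 \end{pmatrix}\right),

so that \sigma_{v0}^2 and \sigma_{v1}^2 directly estimate the interviewer variance within each respondent group, and their sizes can be compared without further transformation.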

Investigating nonresponse during the recruitment of a probability-based online panel separately for persons with and without prior internet access (using data from the German Internet Panel, see Blom et al., 2017), we find that the size of the interviewer effect differs between the two respondent groups. While we find no interviewer effects on nonresponse for persons without internet access (offliners), we find sizable interviewer effects for persons with internet access (onliners). In addition, we identify interviewer characteristics that explain this group-specific nonresponse. Our results suggest that interviewer-related fieldwork strategies might help to increase response rates among onliners, for whom the interviewer effect size was relatively large compared to that for offliners.
