Sociology, Department of
Date of this Version
2-26-2019
Document Type
Article
Citation
Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.
Abstract
The interviewer's task in the data collection process is complex, requiring many moment-to-moment judgments and decisions as interviewers ask questions and elicit answers from respondents (Japec, 2008). Many survey organizations train their interviewers to use standardized language and read questions verbatim. In practice, however, interviewers may need to take a conversational approach and probe respondents to obtain the answers needed. This research explores how interviewers make such decisions in real time by asking interviewers directly about their experiences collecting data. Using a cognitive interview approach, we asked interviewers about multiple aspects of the survey process, including how they handle asking and probing about sensitive or difficult-to-answer questions, how they decide whether to probe further or accept an answer as-is, and when they use lead-ins to questions, such as apologizing or distancing themselves from the survey. We also had interviewers provide feedback on hypothetical vignettes (varying in their level of sensitivity and difficulty) that closely mimicked interviewer-respondent interactions they might experience in the field.
We conducted a total of 27 semi-structured cognitive interviews with survey interviewers from a federal statistical agency. The interviewers had a wide range of experience at their agency, from under one year to over 15 years, across multiple survey topics, including employment, health, housing, crime, and expenditures. Two researchers conducted the interviews, three in person and 24 by telephone, each lasting approximately 60 minutes.
The researchers coded and analyzed the major themes that emerged from the interviews. For instance, we categorized the reasons respondents find questions sensitive or difficult to answer (e.g., invasive questions, recall problems, privacy concerns), and we coded the types of question lead-ins interviewers reported using to address sensitive or difficult questions (e.g., distancing, apologizing, and repeating the question). We also provide qualitative analysis and descriptions of emergent probes and other techniques interviewers reported using to help with the survey process, such as reminding respondents of the confidentiality of their responses, the importance of their data, and the ability to skip a question, as well as how interviewers decide whether to probe further or accept a response. In addition, we found evidence that interviewers sometimes experience sensitivity or discomfort themselves when asking respondents about sensitive topics, and we describe the strategies they have identified to overcome those challenges. Finally, we will report on the interviewers' reactions to hypothetical vignettes depicting interviewer-respondent interactions, analyze how interviewers handle these situations, and present their ratings of how sensitive or difficult these survey questions would be for them to administer and for respondents to answer in the field.
Learning directly from interviewers about how they think through an interview and what obstacles they face is a critical step toward understanding how to make realistic data collection decisions and improve training and support for interviewers. We will discuss the results of these interviews and their implications for improving the survey process.
Comments
Copyright 2019 by the authors.