Sociology, Department of


Date of this Version

2-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

Interviewers play a significant role in telephone and face-to-face interviews, including gaining respondent cooperation and administering survey questions. Increasingly, interviewers’ perceptions of the respondent and interview experience, such as cooperativeness and interest, are also being used to assess measurement error and make adjustments to data (West, 2013; Kirchner et al., 2017). Although interviewer perceptions are typically recorded at the end of the interview, interviewers are likely to begin forming perceptions about the household and respondent based on their first contact attempt (and continue developing them during the interview). We hypothesize that interview context factors, such as interviewer perceptions of the physical interview environment and the respondent’s reluctance, may interact with question characteristics, such as sensitivity and cognitive burden, to influence interviewer-respondent interactions.

One survey that may be particularly impacted by context factors is the Survey of Income and Program Participation (SIPP). The SIPP is a multi-wave household survey that asks questions about economic well-being, family dynamics, and housing security, among other sensitive and cognitively burdensome topics. SIPP interviewers receive standardized training, but they also implement individual strategies as they react to the questionnaire, the respondent, and the interview context. For example, interviewers may anticipate that some respondents will react sensitively to interview questions and proactively tailor questions to reduce sensitivity. The criteria that interviewers use to make these judgments may vary, resulting in differences in question-asking and probing behavior that may ultimately affect response distributions and respondent burden in unexpected ways.

The purpose of the present study was to develop a framework of the interviewer-respondent interaction from the interviewer’s perspective. A primary goal was to investigate whether an interviewer’s question-asking or probing behavior differs across contexts that vary in sensitivity or burden (e.g., sensitive versus non-sensitive questions; reluctant versus non-reluctant respondents). In addition, we identify the interviewer behaviors that appear to reduce respondent behaviors associated with measurement error.

To do this, we combined several data sources from the 2014 SIPP Panel: computer audio-recorded interviewing (CARI) recordings; interviewer perceptions of the physical interview environment (Neighborhood Observation Instrument, NOI) and of respondent behaviors during contact attempts (Contact History Instrument, CHI); SIPP data, including responses to survey questions and demographics; and interviewer characteristics. Three researchers independently transcribed and coded audio recordings of the full interaction for a sample of the targeted questions. Behavior codes included: whether the interviewer changed the survey question and what type of change was made (e.g., tailored the question to match the respondent’s situation); how the interviewer reacted to responses (e.g., used a suggestive probe); whether the respondent gave an uncodeable response (e.g., a vague answer that does not unambiguously match a response option); and other codes that describe the interaction (e.g., pauses, interruptions). We plan to present descriptive analyses of interviewer and respondent behaviors, as well as modeling results that examine the extent to which (a) selected interview context factors predict interviewer behavior and (b) interviewing strategies predict response and interview outcomes.
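The coding scheme described above lends itself to simple cross-context comparisons of interviewer behavior. As a minimal illustrative sketch (the field names and records below are hypothetical placeholders, not SIPP data), one could tally the rate of a coded behavior, such as changing the question wording, within each interview context:

```python
# Illustrative (hypothetical) coded records: one per question administration.
# Fields: question sensitivity, respondent reluctance (from CHI-style contact
# notes), and whether the interviewer changed the question wording.
records = [
    {"sensitive": True,  "reluctant": True,  "changed": True},
    {"sensitive": True,  "reluctant": False, "changed": True},
    {"sensitive": True,  "reluctant": False, "changed": False},
    {"sensitive": False, "reluctant": True,  "changed": False},
    {"sensitive": False, "reluctant": False, "changed": False},
    {"sensitive": False, "reluctant": False, "changed": True},
]

def change_rate(rows, **context):
    """Share of administrations with a question change, within a context."""
    subset = [r for r in rows if all(r[k] == v for k, v in context.items())]
    return sum(r["changed"] for r in subset) / len(subset)

rate_sensitive = change_rate(records, sensitive=True)      # 2/3 here
rate_nonsensitive = change_rate(records, sensitive=False)  # 1/3 here
print(f"change rate, sensitive items: {rate_sensitive:.2f}")
print(f"change rate, non-sensitive items: {rate_nonsensitive:.2f}")
```

Comparisons like this correspond to the descriptive analyses mentioned above; the modeling step would instead regress the coded behavior on the context factors.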

Implications for data quality, interviewer training, questionnaire design, and survey methods in general will be discussed.
