Sociology, Department of
Date of this Version
2-26-2019
Document Type
Article
Citation
Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska–Lincoln, February 26–28, 2019.
Abstract
As people increasingly adopt SMS text messaging for communicating in their daily lives, texting becomes a potentially important way to interact with survey respondents, who may expect that they can communicate with survey researchers as they communicate with others. Thus far, our evidence from analyses of 642 iPhone interviews suggests that text interviewing can lead to higher-quality data (less satisficing, more disclosure) than voice interviews on the same device, whether the questions are asked by an interviewer or an automated system. Respondents also report high satisfaction with text interviews, with many reporting that text is more convenient because they can continue with other activities while responding. But the interaction with an interviewer in a text interview is substantially different from that in a voice interview, with much less of a sense of the interviewer’s social presence as well as quite different time pressure. In principle, this suggests that the potential for interviewer effects differs between text and voice. In this paper we report analyses of how text interviews differed from voice interviews in our corpus, as well as how interviews with human interviewers differed from interviews with automated interviewing systems in both modes, based on transcripts and coding of multiple features of the interaction. Text interviews took more than twice as long as voice interviews, but much of that time elapsed between turns (text messages); the total number of turns was only two-thirds as many as in voice interviews. As in the voice interviews, text interviews with human interviewers involved a small but significantly greater number of turns than text interviews with automated systems, not only because respondents engaged in small “talk” with human interviewers but also because they requested clarification and help with the survey task more often than with the automated text interviewer. Respondents were also more likely to type out full response options (as opposed to equally acceptable single-character responses) with a human text interviewer. Analyses of the content and format of text interchanges compared to voice interchanges demonstrate both potential improvements in data quality and ease for respondents and the pitfalls and challenges that a more asynchronous mode brings. The “anytime, anywhere” qualities of text interviewing may reduce pressure to answer quickly, allowing respondents to answer more thoughtfully and to consult records even if they are mobile or multitasking. From a Total Survey Error perspective, the more streamlined nature of text interaction, which largely reduces the interview to its essential question-asking and -answering elements, may help reduce the potential for unintended interviewer influence.
Comments
Copyright 2019 by the authors.