Sociology, Department of

Date of this Version

2-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

In developing self-administered interviewing systems that go beyond text, survey designers face choices about how to represent the interviewing agent. In speech-dialog systems like ACASI and IVR, designers must decide whether the voice that presents the spoken questions is unambiguously male or female, whether the pronunciation is regionally marked, etc. Any visual representation of an interviewer (e.g., a photograph, a video) requires designers to choose features that visually convey demographic attributes like race, gender, and age. Here we investigate whether the representation of animated virtual interviewers (VIs) affects responses in the same way that analogous attributes of human interviewers do. Specifically, we ask whether VI race and gender, represented visually (through motion capture of particular interviewers projected onto animated models) and vocally (through audio recordings of particular interviewers), produce response patterns reminiscent of those attributed to human interviewers' race and gender. In a web survey of 1,735 respondents (half Black and half White, half female and half male), respondents answered questions asked by one of 16 VIs (Black or White, female or male). The VIs were created by mapping motion and audio recordings of Black, White, male, and female professional interviewers onto animated 3D models, such that the same facial motion was displayed on faces of different races and genders, ruling out the possibility that any observed effects would result from the motion or vocal characteristics of particular interviewers. Although virtual interviewers' gender did not affect responses, their race did: more respondents reported strong (rather than not strong) opposition to preferences in hiring when a White rather than a Black VI posed the question. Respondents' affective reactions to the VIs also suggested that the visual representation of interviewer race influenced answers, at least for some respondents: when respondents were asked (post-interview) to choose a VI for a hypothetical future interview, those who chose a VI whose race matched their own also reported more polarized racial attitudes. For example, White respondents who picked a White VI rated Whites more warmly on a feeling thermometer than Blacks, with the opposite pattern for Black respondents who chose a Black VI. Although it was clear to respondents that the VIs were not human, our evidence suggests that respondents can at least sometimes react as if they were. It also suggests that well-known race-of-interviewer effects need to be taken seriously in self-administered surveys, and that visual and auditory representations of interviewers can have substantive effects on responses. The findings raise important questions about what counts as standardization in a self-administered interview (e.g., standardizing interviewer appearance vs. how respondents experience the interviewer) and about what kinds of VI representations lead to the most honest and accurate answers.
