Sociology, Department of

Date of this Version

2-26-2019

Document Type

Article

Citation

Presented at “Interviewers and Their Effects from a Total Survey Error Perspective Workshop,” University of Nebraska-Lincoln, February 26-28, 2019.

Comments

Copyright 2019 by the authors.

Abstract

Community-based participatory research (CBPR) projects often employ members of the host partner community to engage and assist with research projects. However, CBPR may also introduce bias into survey statistics when community partners work as interviewers for projects within their own communities. Here, the advantage of employing interviewers from the local community and region may lead to unintended bias when participants and interviewers know each other outside of the research project. Where a preexisting social relationship exists, there is a greater possibility of social desirability bias. This may be particularly true for sensitive topics, where participants may not wish for members of their community to learn something about them that would otherwise remain hidden or private.

This paper examines three sources of potential interviewer effects on measures of mental health and cultural engagement, both key outcomes of a randomized controlled trial intervention underway with American Indian youth living on or near reservations. Mental health is measured using the Achenbach System of Empirically Based Assessment (ASEBA), a 107-item instrument that gauges levels of externalizing and internalizing behavior among minors and yields three internalizing subscales and two externalizing subscales. Cultural engagement is measured with an inventory of common local and regional cultural activities.

Three interview/interviewer characteristics are assessed in this paper. The first is whether the interviewer reports knowing the participant very well or somewhat well rather than not knowing them at all. We find that participants who were known by the interviewer scored lower on the aggregate internalizing scale and on all three of its subscales than participants who were unknown to the interviewer. However, the interviewer knowing the participant had no effect on either externalizing scores or reported cultural participation. The second factor tested is whether the interviewer reported a third party present during the interview who was listening or taking part. Here we find that having a third party present was associated with lower scores on the somatic complaints subscale of the internalizing scale, but not with any other subscale or aggregate measure. The third factor tested is the interviewer's assessment of whether the participant was open in their responses. Here we find that participants rated as open reported higher levels of cultural participation and lower values on the internalizing subscale that assesses withdrawn characteristics, with no difference in the externalizing scales or the aggregate internalizing scale. The effects of these associations on study outcomes, and their potential to shift scores across the ASEBA's diagnostic thresholds, are discussed in the full paper.
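For readers interested in how interviewer effects of this kind might be tested, the sketch below shows one plausible approach: a mixed-effects regression of an internalizing score on the three interview/interviewer characteristics, with a random intercept per interviewer to account for clustering of participants within interviewers. This is not the authors' analysis; the data file and all variable names (interviews.csv, internalizing, knows_participant, third_party, open_responses, interviewer_id) are hypothetical placeholders.

```python
# Illustrative sketch only, not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per interviewed participant.
df = pd.read_csv("interviews.csv")

# Mixed-effects model: internalizing score regressed on interviewer familiarity,
# third-party presence, and rated openness, with a random intercept for each
# interviewer to capture interviewer-level clustering.
model = smf.mixedlm(
    "internalizing ~ knows_participant + third_party + open_responses",
    data=df,
    groups=df["interviewer_id"],
)
result = model.fit()
print(result.summary())
```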
