Department of Special Education and Communication Disorders


Date of this Version

2012

Citation

Published in Aphasiology 26:2 (2012), pp. 162–176. doi: 10.1080/02687038.2011.628004

Comments

Copyright © 2012 Psychology Press/Taylor & Francis Group. Used by permission.

Abstract

Background: Augmented input (AI), or the use of visuographic images and linguistic supports, is a strategy for facilitating the auditory comprehension of people with chronic aphasia. To date, researchers have not systematically evaluated the effects of various types of AI strategies on auditory comprehension.

Aims: The purpose of the study was to perform an initial evaluation of the changes in auditory comprehension accuracy experienced by people with aphasia when they received one type of AI. Specifically, the authors examined the effect of four types of non-personalized visuographic image conditions on the comprehension of people with aphasia when listening to narratives.

Methods & Procedures: A total of 21 people with chronic aphasia listened to four stories, one in each of four conditions (i.e., no-context photographs, low-context drawings with embedded no-context photographs, high-context photographs, and no visuographic support). Auditory comprehension was measured by assessing participants’ accuracy in responding to 15 multiple- choice sentence completion statements related to each story.

Outcomes & Results: Results showed no significant differences in response accuracy across the four visuographic conditions.

Conclusions: The type of visuographic image provided as AI in this study did not influence participants’ response accuracy for sentence completion comprehension tasks. However, the authors only examined non-personalized visuographic images as a type of AI support. Future researchers should systematically examine the benefits provided to people with aphasia by other types of visuographic and linguistic AI supports.
