Department of Special Education and Communication Disorders
ORCID IDs
Jonathan Brumberg https://orcid.org/0000-0001-5739-968X
Kevin Pitt https://orcid.org/0000-0003-3165-4093
Document Type
Article
Date of this Version
4-2018
Citation
Published in final edited form as: IEEE Trans Neural Syst Rehabil Eng. 2018 April; 26(4): 874–881. doi:10.1109/TNSRE.2018.2808425.
Abstract
We conducted a study of a motor imagery brain-computer interface (BCI) using electroencephalography (EEG) to continuously control a formant frequency speech synthesizer with instantaneous auditory and visual feedback. Over a three-session training period, sixteen participants learned to control the BCI for production of three vowel sounds (/i/ [heed], /ɑ/ [hot], and /u/ [who'd]) and were split into three groups: those receiving unimodal auditory feedback of synthesized speech, those receiving unimodal visual feedback of formant frequencies, and those receiving multimodal, audio-visual feedback. Audio feedback was provided by a formant frequency artificial speech synthesizer, and visual feedback was given as a two-dimensional cursor on a graphical representation of the plane defined by the first two formant frequencies. We found that combined audio-visual feedback led to the best performance in terms of percent accuracy, distance to target, and movement time to target, compared with unimodal feedback of either auditory or visual information alone. These results indicate that performance is enhanced when multimodal feedback is meaningful for the BCI task goals, rather than serving as a generic biofeedback signal of BCI progress.
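The abstract describes mapping a decoded two-dimensional control signal onto the first two formant frequencies (F1, F2) and sonifying the result with a formant synthesizer. The sketch below illustrates that general idea only, not the authors' implementation: the formant ranges, bandwidths, pitch, sample rate, and the Klatt-style resonator cascade are all assumptions chosen for illustration.

```python
import numpy as np
from scipy.signal import lfilter

FS = 16000  # audio sample rate in Hz (assumed; not specified in the abstract)

def resonator(x, freq, bw, fs=FS):
    """Klatt-style second-order IIR resonator modeling one formant."""
    r = np.exp(-np.pi * bw / fs)           # pole radius set by bandwidth
    c = 2 * r * np.cos(2 * np.pi * freq / fs)
    b = [1 - c + r * r]                    # normalize for unity gain at DC
    a = [1, -c, r * r]
    return lfilter(b, a, x)

def formants_from_cursor(x, y):
    """Map a normalized 2-D control signal (x, y in [0, 1]) to (F1, F2).
    The formant ranges are illustrative, spanning roughly /i/, /a/, /u/."""
    f1 = 300.0 + y * (800.0 - 300.0)       # F1: vowel height axis
    f2 = 800.0 + x * (2300.0 - 800.0)      # F2: vowel front/back axis
    return f1, f2

def synthesize_vowel(f1, f2, dur=0.5, f0=100.0, fs=FS):
    """Drive two cascaded formant resonators with a glottal impulse train."""
    source = np.zeros(int(dur * fs))
    source[::int(fs / f0)] = 1.0           # one impulse per pitch period
    out = resonator(source, f1, bw=80, fs=fs)   # bandwidths are assumptions
    out = resonator(out, f2, bw=120, fs=fs)
    return out / np.max(np.abs(out))

# Example: a cursor position with low F1 and high F2 yields an /i/-like vowel.
f1, f2 = formants_from_cursor(x=1.0, y=0.0)
audio = synthesize_vowel(f1, f2)
```

In this sketch the cursor position doubles as both feedback modalities described in the study: plotted directly, it is the visual cursor in the F1-F2 plane; passed through the synthesizer, it is the auditory feedback.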
Comments
Author manuscript; available in PMC 2019 April 01.