Department of Special Education and Communication Disorders

Document Type

Article

Date of this Version

Spring 3-8-2012

Citation

Wang, J., Samal, A., Green, J. R., & Rudzicz, F. (2012). Sentence recognition from articulatory movements for silent speech interfaces. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (pp. 4985–4988). Kyoto, Japan.

Comments

Copyright (c) 2012 IEEE. Used by permission.

Abstract

Recent research has demonstrated the potential of an articulation-based silent speech interface for command-and-control systems. Such an interface converts articulation to words that can then drive a text-to-speech synthesizer. In this paper, we propose a novel near real-time algorithm to recognize whole sentences from continuous tongue and lip movements. Our goal is to help persons who are aphonic or have a severe motor speech impairment produce functional speech using their tongue and lips. The algorithm was tested on a functional-sentence data set collected from ten speakers (3,012 utterances). The average accuracy was 94.89%, with an average latency of 3.11 seconds per sentence prediction. These results indicate the effectiveness of our approach and its potential for building a real-time articulation-based silent speech interface for clinical applications.
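
The paper's own recognition algorithm is not reproduced here. As a rough, hypothetical sketch of the general task the abstract describes (classifying a continuous multivariate articulatory trajectory against a closed set of labeled sentences), the Python example below uses dynamic time warping (DTW) nearest-template matching. The choice of DTW, the channel count, and all names and shapes are illustrative assumptions, not the method reported in the paper.

import numpy as np

def dtw_distance(a, b):
    # Length-normalized DTW distance between two (frames x channels)
    # articulatory trajectories, e.g., x/y positions of tongue and lip
    # sensors (channel layout is an assumption for illustration).
    t1, t2 = len(a), len(b)
    cost = np.full((t1 + 1, t2 + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-level distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[t1, t2] / (t1 + t2)

def recognize_sentence(query, templates):
    # Return the sentence label of the nearest template under DTW.
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy templates: two "sentences", each a (frames x 6-channel) trajectory.
    templates = [
        ("How are you?", np.cumsum(rng.normal(size=(80, 6)), axis=0)),
        ("I need help.", np.cumsum(rng.normal(size=(95, 6)), axis=0)),
    ]
    # Query: a time-warped, noisy copy of the first template.
    base = templates[0][1]
    idx = np.linspace(0, len(base) - 1, 70).astype(int)
    query = base[idx] + rng.normal(scale=0.1, size=(70, 6))
    print(recognize_sentence(query, templates))   # -> "How are you?"

In a deployed interface, the toy templates would be replaced by per-speaker training utterances, and the linear scan by a pruned or indexed search to stay within a latency budget comparable to the 3.11 seconds reported above.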
