Department of Special Education and Communication Disorders


Date of this Version

2010

Citation

4th International Conference on Signal Processing and Communication Systems (ICSPCS), 2010, pp. 1-7. doi: 10.1109/ICSPCS.2010.5709716

Comments

©2010 IEEE. Used by permission.

Abstract

A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02%, with very short latency. Recognition accuracy was better for high vowels than for low vowels, a finding that parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer, which recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples in near real-time.
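The abstract's core idea, classifying vowels directly from movement patterns rather than from derived articulatory features, can be sketched as follows. This is a minimal illustration, not the paper's method: the nearest-centroid classifier, the trajectory features (mean position plus per-axis displacement), and the vowel labels are all illustrative assumptions.

```python
import numpy as np

def extract_features(trajectory):
    """Summarize a (T, 2) movement trajectory (e.g., a tongue-sensor
    x/y track) as a fixed-length vector: mean position and total
    per-axis displacement. Illustrative choice, not the paper's."""
    traj = np.asarray(trajectory, dtype=float)
    mean_pos = traj.mean(axis=0)
    displacement = np.abs(np.diff(traj, axis=0)).sum(axis=0)
    return np.concatenate([mean_pos, displacement])

class NearestCentroidVowelClassifier:
    """Toy stand-in for the machine-learning classifier: each vowel is
    represented by the centroid of its training feature vectors, and a
    new trajectory is labeled by its nearest centroid."""

    def fit(self, trajectories, labels):
        feats = np.array([extract_features(t) for t in trajectories])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[lab == v for lab in labels]].mean(axis=0)
             for v in self.labels_])
        return self

    def predict(self, trajectory):
        f = extract_features(trajectory)
        dists = np.linalg.norm(self.centroids_ - f, axis=1)
        return self.labels_[int(np.argmin(dists))]

# Hypothetical usage with two synthetic "vowel" movement patterns:
train = [[[0, 0], [0, 1]], [[5, 5], [5, 6]]]
clf = NearestCentroidVowelClassifier().fit(train, ["i", "a"])
print(clf.predict([[0.1, 0.2], [0.1, 1.1]]))
```

In a real articulation-to-acoustics pipeline, each predicted label would trigger playback of the corresponding pre-recorded vowel sample.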
