Department of Special Education and Communication Disorders
Document Type
Article
Date of this Version
2010
Citation
4th International Conference on Signal Processing and Communication Systems (ICSPCS), 2010, pp. 1-7. doi: 10.1109/ICSPCS.2010.5709716
Abstract
A novel approach was developed to recognize vowels from continuous tongue and lip movements. Vowels were classified based on movement patterns (rather than on derived articulatory features, e.g., lip opening) using a machine learning approach. Recognition accuracy on a single-speaker dataset was 94.02% with very short latency. Recognition accuracy was better for high vowels than for low vowels, a finding that parallels previous empirical findings on tongue movements during vowels. The recognition algorithm was then used to drive an articulation-to-acoustics synthesizer. The synthesizer recognizes vowels from a continuous input stream of tongue and lip movements and plays the corresponding sound samples in near real-time.
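The classification step described above can be pictured with a minimal sketch. This is not the paper's actual method: the feature layout (tongue x/y position, lip aperture), the vowel set, and the nearest-centroid rule are all illustrative assumptions standing in for whatever features and classifier the authors used.

```python
# Hypothetical sketch of vowel classification from articulator positions.
# Feature vector (tongue_x, tongue_y, lip_aperture) and the vowel inventory
# are assumptions for illustration, not the paper's actual representation.

from statistics import mean


def train_centroids(samples):
    """samples: dict mapping vowel label -> list of feature vectors.
    Returns one mean (centroid) vector per vowel."""
    return {
        vowel: tuple(mean(dim) for dim in zip(*vectors))
        for vowel, vectors in samples.items()
    }


def classify(centroids, frame):
    """Return the vowel whose centroid is closest (squared distance)
    to the incoming movement frame."""
    return min(
        centroids,
        key=lambda v: sum((a - b) ** 2 for a, b in zip(centroids[v], frame)),
    )


# Toy training data: (tongue_x, tongue_y, lip_aperture) per vowel.
training = {
    "i": [(0.80, 0.90, 0.20), (0.75, 0.85, 0.25)],  # high front, narrow lips
    "a": [(0.40, 0.10, 0.90), (0.45, 0.15, 0.85)],  # low, wide open
    "u": [(0.10, 0.90, 0.10), (0.15, 0.85, 0.15)],  # high back, rounded
}

centroids = train_centroids(training)
# An incoming frame near the "i" region of the articulatory space:
print(classify(centroids, (0.78, 0.88, 0.22)))
```

In a near real-time pipeline like the one described, `classify` would be called on each incoming frame of the movement stream, and the recognized vowel would trigger playback of the matching sound sample.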
Included in
Communication Sciences and Disorders Commons, Computer Sciences Commons, Special Education and Teaching Commons
Comments
©2010 IEEE. Used by permission.