Real-Time Articulatory-Controlled Vowel Synthesizer for research on Auditory-Speech Motor Learning

NE-Munroe-Meyer Institute of Genetics & Rehabilitation, UCEDD/LEND
Program Type LEND, UCEDD    Fiscal Year 2011
Contact Jordan Green, Ph.D.
Email jgreen4@unl.edu    
Phone 402-559-6302    
Project Description
When learning to speak, young children use auditory feedback to learn associations between articulatory movements and their acoustic consequences (Guenther et al., 1998). Presumably, this process involves an inverse mapping from acoustic goals to vocal tract shapes to muscular forces. Adults with acquired speech impairments may undergo a similar inverse-mapping process when regaining speech following injury to the vocal tract or to the neural structures that govern speech. The principles underlying speech motor learning and re-learning are poorly understood, though such knowledge is essential for designing treatments to improve speech. This project will examine the usefulness of a real-time articulatory-controlled vowel synthesizer for conducting experiments on auditory-motor associative learning in speech. Experiments will be conducted to determine participants' ability to generate corner vowels using the synthesizer and to adapt to experimental manipulations of the mappings between mouth shape and vowel sounds.
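The core idea above can be sketched in a few lines: articulatory parameters (here, normalized tongue height and frontness) map forward to acoustic targets (the first two formants, F1 and F2), and an experimental manipulation alters that mapping so participants must adapt. This is a minimal illustrative sketch, not the project's actual synthesizer; the linear formant equations, the corner-vowel targets, and the `perturb` shift value are all hypothetical choices made only to make the mapping concrete.

```python
# Toy articulatory-to-acoustic mapping for corner vowels.
# All numeric values are illustrative, not measured or project-specific.

def articulation_to_formants(height, frontness):
    """Map normalized tongue height/frontness (0..1) to (F1, F2) in Hz.

    Roughly mirrors vowel acoustics: F1 falls as the tongue rises,
    and F2 rises as the tongue moves forward.
    """
    f1 = 800 - 500 * height        # high tongue -> low F1
    f2 = 800 + 1500 * frontness    # front tongue -> high F2
    return f1, f2

def perturb(f1, f2, f2_shift=200):
    """Simulate an experimental manipulation of the mapping:
    shift the F2 participants hear, so the same mouth shape now
    produces a different vowel sound (probing adaptation)."""
    return f1, f2 + f2_shift

# Corner vowels as hypothetical (height, frontness) targets.
corners = {"i": (1.0, 1.0), "u": (1.0, 0.0), "a": (0.0, 0.3), "ae": (0.0, 1.0)}

for vowel, (h, f) in corners.items():
    f1, f2 = articulation_to_formants(h, f)
    print(vowel, round(f1), round(f2))
```

In an adaptation experiment of the kind described, the perturbed mapping would be introduced after participants learn the baseline mapping, and the measure of interest is how their articulations change to restore the intended vowel sound.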