Authors: Katz, William F.; Mehta, S.
Date available: 2016-09-27
Date issued: 2015-11-19
ISSN: 1662-5161
URI: http://hdl.handle.net/10735.1/5096
Abstract: Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time, visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing.
Language: en
Rights: CC BY 4.0 (Attribution); ©2015 The Authors; http://creativecommons.org/licenses/by/4.0/
Subjects: Articulation disorders; Language and languages—Study and teaching—Audio-visual aids; Electromagnetic articulography; Second language acquisition; Speech
Title: Visual Feedback of Tongue Movement for Novel Speech Sound Learning
Type: Text
Citation: Katz, W. F., and S. Mehta. 2015. "Visual feedback of tongue movement for novel speech sound learning." Frontiers in Human Neuroscience 9(612). DOI: dx.doi.org/10.3389/fnhum.2015.00612.