Katz, William F.

Permanent URI for this collection: https://hdl.handle.net/10735.1/5018

Dr. William Katz is Professor of Speech Science and Neurolinguistics at the Callier Center for Communication Disorders, where he also heads the Speech Production Laboratory. His research falls into three general areas:

  • Neurolinguistics: the pathology of speech in aphasia and apraxia, brain models of language representation, augmented kinematic feedback, and the treatment of neurogenic speech disorders
  • Speech production and perception: acoustic phonetic features, coarticulation, speech motor planning, the development of speech in children, and compensatory articulation and speech motor control
  • Cue-trading relations at the interface of prosody and syntax.

Recent Submissions

  • Item
    Using Electromagnetic Articulography with a Tongue Lateral Sensor to Discriminate Manner of Articulation
    (Acoustical Society of America) Katz, William F.; Mehta, Sonya; Wood, Matthew; Wang, Jun
    This study examined the contributions of the tongue tip (TT), tongue body (TB), and tongue lateral (TL) sensors in the electromagnetic articulography (EMA) measurement of American English alveolar consonants. Thirteen adults produced /ɹ/, /l/, /z/, and /d/ in /ɑCɑ/ syllables while being recorded with an EMA system. According to statistical analysis of sensor movement and the results of a machine classification experiment, the TT sensor contributed most to consonant differences, followed by TB. The TL sensor played a complementary role, particularly for distinguishing /z/. © 2017 Acoustical Society of America. (An illustrative classification sketch appears after this list.)
  • Item
    Visual Feedback of Tongue Movement for Novel Speech Sound Learning
    (Frontiers Media S. A.) Katz, William F.; Mehta, S.
    Pronunciation training studies have yielded important information concerning the processing of audiovisual (AV) information. Second language (L2) learners show increased reliance on bottom-up, multimodal input for speech perception (compared to monolingual individuals). However, little is known about the role of viewing one’s own speech articulation processes during speech training. The current study investigated whether real-time visual feedback for tongue movement can improve a speaker’s learning of non-native speech sounds. An interactive 3D tongue visualization system based on electromagnetic articulography (EMA) was used in a speech training experiment. Native speakers of American English produced a novel speech sound (/ɖ/; a voiced, coronal, palatal stop) before, during, and after trials in which they viewed their own speech movements using the 3D model. Talkers’ productions were evaluated using kinematic (tongue-tip spatial positioning) and acoustic (burst spectra) measures. The results indicated a rapid gain in accuracy associated with visual feedback training. The findings are discussed with respect to neural models for multimodal speech processing. (An illustrative burst-spectrum sketch appears after this list.)
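
The machine classification experiment summarized in the first item can be illustrated with a short sketch. The Python example below is a hypothetical reconstruction, not the paper's actual pipeline: the feature layout (peak displacement of the TT, TB, and TL sensors along three axes), the synthetic data, and the choice of a support-vector classifier are all assumptions made for illustration.

    # Hypothetical sketch: classifying alveolar consonants from EMA sensor
    # displacement features. Feature layout, labels, and classifier choice
    # are illustrative assumptions, not the method reported in the paper.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    consonants = ["r", "l", "z", "d"]

    # Simulated per-token features: peak displacement of the tongue-tip (TT),
    # tongue-body (TB), and tongue-lateral (TL) sensors in x/y/z (9 values).
    n_per_class = 40
    X, y = [], []
    for label, offset in zip(consonants, [0.0, 1.5, 3.0, 4.5]):
        X.append(rng.normal(loc=offset, scale=1.0, size=(n_per_class, 9)))
        y.extend([label] * n_per_class)
    X = np.vstack(X)

    # Standardize features, then evaluate an RBF-kernel SVM by cross-validation.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"Mean cross-validated accuracy: {scores.mean():.2f}")

Per-sensor contributions of the kind the study reports could then be probed by refitting the classifier on subsets of the nine features (TT only, TT plus TB, and so on) and comparing accuracies.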
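
The acoustic (burst spectra) measure mentioned in the second item can likewise be sketched as a windowed FFT over a short frame at burst onset. The sampling rate, the 10 ms window, and the synthetic burst below are assumptions for the demo; the study's actual analysis parameters are not given here.

    # Hypothetical sketch: a stop-burst magnitude spectrum from a short
    # analysis window. All signal parameters here are assumed for the demo.
    import numpy as np

    fs = 22050                   # sampling rate (Hz), assumed
    n = int(0.010 * fs)          # 10 ms analysis window after burst onset
    t = np.arange(n) / fs

    # Synthetic stand-in for a /d/-like burst: a decaying noise transient.
    rng = np.random.default_rng(1)
    burst = rng.normal(size=n) * np.exp(-t / 0.003)

    # Hamming-windowed FFT magnitude spectrum in dB.
    spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(burst * np.hamming(n))) + 1e-12)
    freqs = np.fft.rfftfreq(n, d=1 / fs)

    # A simple summary measure: frequency of the spectral peak.
    peak_hz = freqs[np.argmax(spectrum_db)]
    print(f"Burst spectral peak: {peak_hz:.0f} Hz")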

Works in Treasures @ UT Dallas are made available exclusively for educational purposes such as research or instruction. Literary rights, including copyright for published works held by the creator(s), their heirs, or other third parties, may apply. All rights are reserved unless otherwise indicated by the copyright owner(s).