Callier Center for Communication Disorders
Permanent URI for this community: https://hdl.handle.net/10735.1/1816
The UT Dallas Callier Center for Communication Disorders is a 110,000-square-foot research and clinical facility with two locations. Callier research labs contain specialized instrumentation and advanced technology for studying communication and its disorders throughout the lifespan.
Callier Center Dallas, located on Inwood Road next to the UT Southwestern Medical Center, houses offices, classrooms, clinical facilities, a child development program, and research laboratories for Callier Center faculty.
Callier Center Dallas is home to the Callier Advanced Hearing Research Center, which houses state-of-the-art equipment for assessing auditory capabilities of children and adults, in addition to research laboratories devoted to the study of hearing aids, speech and language of children and adults using cochlear implants, and central auditory system processing.
Callier Center Richardson, located on the campus of UT Dallas, houses offices, classrooms, clinical facilities and research laboratories for Callier Center faculty.
News
New Callier Center Test Catches Cause of Balance Issues.
The Callier Center is one of the few facilities in North Texas that has the capability to provide a full battery of tests for problems associated with dizziness, balance or uncontrollable eye movements.
Browse
Browsing Callier Center for Communication Disorders by Title
Now showing 1 - 20 of 20
Item: Automatic Speech Activity Recognition from MEG Signals Using Seq2Seq Learning (IEEE Computer Society, 2019-03). Dash, Debadatta; Ferrari, P.; Malik, S.; Wang, Jun.
Accurate interpretation of speech activity from brain signals is critical for understanding the relationship between neural patterns and speech production. Current research on speech activity recognition from brain activity relies heavily on region-of-interest (ROI) based functional connectivity analysis or source separation strategies to map the activity as a spatial localization of a brain function. Albeit effective, these methods require prior knowledge of the brain and expensive computational effort. In this study, we investigated automatic speech activity recognition from brain signals using machine learning. Neural signals of four subjects during four stages of a speech task (i.e., rest, perception, preparation, and production) were recorded using magnetoencephalography (MEG), which has excellent temporal and spatial resolution. First, a deep neural network (DNN) was used to classify the four whole tasks from the MEG signals. Further, we trained a sequence-to-sequence (Seq2Seq) long short-term memory recurrent neural network (LSTM-RNN) for continuous (sample-by-sample) prediction of the speech stages/tasks by leveraging its sequential pattern learning paradigm. Experimental results indicate the effectiveness of both the DNN and the LSTM-RNN for automatic speech activity recognition from MEG signals.
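The sample-by-sample prediction described in this abstract implies that every MEG timepoint carries one of the four stage labels (rest, perception, preparation, production). As a rough illustration of the data preparation such a sequence model needs — not the authors' code; the channel count, window length, and stage boundaries below are invented — a recording can be segmented into labeled windows:

```python
import numpy as np

STAGES = ["rest", "perception", "preparation", "production"]

def label_samples(n_samples, boundaries):
    """Assign one of the four stage labels (0-3) to every timepoint.
    boundaries: three sample indices where the stage changes."""
    labels = np.zeros(n_samples, dtype=int)
    for stage, start in enumerate(boundaries, start=1):
        labels[start:] = stage
    return labels

def make_windows(signal, labels, win=200, hop=100):
    """Slice a (channels, time) recording into overlapping windows,
    each paired with its per-sample label sequence."""
    xs, ys = [], []
    for start in range(0, signal.shape[1] - win + 1, hop):
        xs.append(signal[:, start:start + win])
        ys.append(labels[start:start + win])
    return np.stack(xs), np.stack(ys)

# Toy trial: 8 channels, 1000 timepoints, stage changes at fixed indices.
rng = np.random.default_rng(0)
meg = rng.standard_normal((8, 1000))
y = label_samples(1000, boundaries=[250, 500, 750])
X, Y = make_windows(meg, y)
print(X.shape, Y.shape)  # (9, 8, 200) (9, 200): 9 windows with aligned labels
```

Each (window, label-sequence) pair is what a Seq2Seq classifier would consume; the model itself is omitted here.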
© 2019 IEEE.

Item: Better Speech and Hearing Month Proclamation (12/8/2011). Newman, Eloyce; Callier Center for Communication Disorders.

Item: Building AuD externships. Cokely, Carol Lynn Geltman.
Evaluation and analysis of the externship program incorporated into the Audiology curriculum in the School of Behavioral and Brain Sciences at The University of Texas at Dallas.

Item: Dallas Mayor Laura Miller (12/8/2011). Newman, Eloyce; Callier Center for Communication Disorders.

Item: Dean Moore and President Jennifer (12/8/2011). Newman, Eloyce; Callier Center for Communication Disorders.

Item: Enhancement of Consonant Recognition in Bimodal and Normal Hearing Listeners (Sage Publications Inc., 2019-05-15). Yoon, Y.-S.; Riley, B.; Patel, H.; Frost, Amanda; Fillmore, P.; Gifford, R.; Hansen, John H. L.
Objectives: The present study investigated the effects of 3-dimensional deep search (3DDS) signal processing on the enhancement of consonant perception in bimodal and normal-hearing listeners. Methods: Using an articulation-index gram and 3DDS signal processing, consonant segments that greatly affected performance were identified and intensified with a 6-dB gain. Consonant recognition was then measured unilaterally and bilaterally, before and after 3DDS processing, in both quiet and noise. Results: The 3DDS signal processing benefited both groups, with greater benefit in noise than in quiet. The benefit rendered by 3DDS was greatest in the binaural listening condition. The ability to integrate acoustic features across ears was also enhanced with 3DDS processing. In listeners with normal hearing, manner and place of articulation were improved in the binaural listening condition. In bimodal listeners, voicing and manner and place of articulation were also improved in the bimodal and hearing aid ear-alone conditions. Conclusions: Consonant recognition was improved with 3DDS in both groups. This observed benefit suggests 3DDS can be used as an auditory training tool for improved integration, and for bimodal users who receive little or no benefit from their current bimodal hearing. © The Author(s) 2019.

Item: Growth curve modeling tutorial: What you need to know. Rojas, Raúl; Iglesias, Aquiles.

Item: Impact of program type on bilingual language growth. Rojas, Raúl; Iglesias, Aquiles.

Item: Implementation and Analysis of a Free Water Protocol in Acute Trauma and Stroke Patients (American Association of Critical-Care Nurses, 2019-06-01). Kenedi, Helen; Campbell-Vance, J.; Reynolds, J.; Foreman, M.; Dollaghan, Cristine A.; Graybeal, D.; Warren, A. M.; Bennett, M.
Background: Free water protocols allow patients who aspirate thin liquids and meet eligibility criteria to have access to water or ice according to specific guidelines. Limited research is available concerning free water protocols in acute care settings. Objectives: To compare rates of positive clinical outcomes and negative clinical indicators of a free water protocol in the acute care setting, and to continue monitoring participants discharged into the hospital system's rehabilitation setting. Positive clinical outcomes were diet upgrade, fewer days to diet upgrade, and fewer days in the study; negative clinical indicators were pneumonia, intubation, and diet downgrade. Methods: A multidisciplinary team developed and implemented a free water protocol. All eligible stroke and trauma patients (n = 104) treated over a 3-year period were randomly assigned to an experimental group with access to water and ice or a control group without such access. Trained study staff recorded data on positive outcomes and negative indicators; statistical analyses were conducted with blinding. Results: No significant group differences in positive outcomes were found (all P values were > .40). Negative clinical indicators were too infrequent to allow statistical comparison of the two groups. Statistical analyses could not be conducted on the small number (n = 15) of patients followed into rehabilitation, but no negative clinical indicators occurred in these patients. Conclusions: Larger-scale studies are needed to reach decisive conclusions about the positive outcomes and negative indicators of a free water protocol in the acute care setting. © 2019 American Association of Critical-Care Nurses.

Item: Increasing efficacy for nursing staff via mastery training for hearing aid care. Geheber, Laurin; Cokely, Carol Lynn Geltman; School of Behavioral and Brain Sciences.
• New hearing aid users must acquire knowledge in order to care for and maintain their instruments, but knowledge alone may be insufficient; self-efficacy is needed to put knowledge into practice, and mastery experiences are an important component of self-efficacy (Smith et al., 2006).
• Self-efficacy is also important for those involved in the care of another. In elder-care or assisted-living facilities, daily hearing aid care and maintenance is often delegated to staff members who have no training in hearing loss or hearing aid care and maintenance (Alford et al., 2010).
• The current research is the third phase of ongoing research addressing self-efficacy. Earlier work indicated that training programs increased knowledge but not self-efficacy (Alford et al., 2010).
• To help target appropriate topics for inclusion in the training program, three facility residents were surveyed regarding the hearing aid assistance they received from nursing staff.

Item: Influence of headphones versus loudspeaker presentation of dichotic speech on ear advantage (2012-08-10). Peshwani, Heena Rakjumar; Martin, Jeffrey S.
A right-ear advantage (REA) is a robust phenomenon on dichotic speech tests. Biases in speech perception and cognitive control are believed to produce the ear advantage [1]. Headphones are often used to deliver dichotic stimuli in order to isolate the two ears; however, similar outcomes have been found in behavioral and electrophysiological studies incorporating loudspeakers. The size of the REA is also known to be influenced by the level of perceptual difficulty (e.g., CV stimuli) and/or the linguistic demands placed on the listener (e.g., sentences).

Item: Infusing audiologic rehabilitation and counseling into educational curriculum and externships. Cienkowski, Kathy; Hnath-Chisolm, Teresa; Harris, Frances; Cokely, Carol Lynn Geltman; Hickson, Louise M.
Overview of audiological rehabilitation in the UT Dallas curriculum.

Item: Interaural asymmetry using dichotic filtered words in children with suspected auditory processing disorder: preliminary findings (2012-05-11). Huston, Lisa; Gibson, Keiko; Kwan, Jason; Martin, Jeffrey S.; School of Behavioral and Brain Sciences.
• The direction and magnitude of interaural asymmetry (IA) on dichotic listening tests is often evaluated during diagnostic assessment for APD, with excessive IA (e.g., a left-ear deficit) often taken as a sign of the disorder.
• Clinical decisions about IA might be improved when the dichotic test itself generates meaningful amounts of asymmetry in the non-clinical population without introducing extra-auditory factors into test performance.
• In this regard, a recent study evaluated performance on dichotic low-pass filtered speech (dichotic filtered words, DFWs) presented under DIV and DIR test modes in healthy young adults with normal hearing. Previous studies have suggested that the combined utility of DIV and DIR modes may help discern the relative contributions of perceptual (bottom-up) versus cognitive (top-down) processing biases underlying IA [3,4]. Results showed that larger values of IA (e.g., REA) were produced using DFWs than with traditional non-filtered stimuli; the magnitude of IA for DFWs was similar between test modes.
• The purpose of this study was to further evaluate the DFW paradigm in a sample of school-aged children with and without symptoms of APD.

Item: Optimization of Transcranial Direct Current Stimulation of Dorsolateral Prefrontal Cortex for Tinnitus: A Non-Linear Dose-Response Effect (Nature Publishing Group). Shekhawat, Giriraj Singh; Vanneste, Sven.
Neuromodulation is the process of augmenting neuroplasticity via invasive or non-invasive methods. Tinnitus is the perception of sound in the absence of an external source. The objective of this study was to optimize the parameters of transcranial direct current stimulation (tDCS) of the dorsolateral prefrontal cortex (DLPFC) for tinnitus suppression. The following factors were optimized in a dose-response design (n = 111): current intensity (1.5 mA or 2 mA), stimulation duration (20 min or 30 min), and number of stimulation sessions (2, 4, 6, 8, or 10), with a 3-4 day washout period between sessions. Participants underwent a minimum of 2 sessions in 1 week or a maximum of 10 sessions over 5 weeks. Tinnitus loudness was measured in a pre-post design using a 10-point numeric rating scale. There was a significant reduction in tinnitus loudness after tDCS of the DLPFC, with no significant difference between the intensities or durations of stimulation. As the number of sessions increased, the reduction in tinnitus loudness grew; however, this effect plateaued after 6 sessions.

Item: President Franklyn Jennifer (12/8/2011). Newman, Eloyce; Callier Center for Communication Disorders.

Item: Proclamation (12/8/2011). Miller, Laura; Callier Center for Communication Disorders.

Item: The Role of Nonverbal Working Memory in Morphosyntactic Processing by Children with Specific Language Impairment and Autism Spectrum Disorders (BioMed Central, 2018-09-24). Ellis Weismer, S.; Davidson, Meghan M.; Gangopadhyay, I.; Sindberg, H.; Roebuck, H.; Kaushanskaya, M.
Background: Both children with autism spectrum disorders (ASD) and children with specific language impairment (SLI) have been shown to have difficulty with grammatical processing. A comparison of these two populations with neurodevelopmental disorders was undertaken to examine similarities and differences in the mechanisms that may underlie grammatical processing. Research has shown that working memory (WM) is recruited during grammatical processing. The goal of this study was to examine morphosyntactic processing on a grammatical judgment task in children who varied in clinical diagnosis and language abilities, and to assess the extent to which performance is predicted by nonverbal WM. Two theoretical perspectives were evaluated relative to performance on the grammatical judgment task — the "working memory" account and the "wrap-up" account — which make contrasting predictions about the detection of grammatical errors occurring early versus late in the sentence. Methods: Participants were 84 school-age children with SLI (n = 21), ASD (n = 27), and typical development (TD, n = 36). Performance was analyzed by diagnostic group as well as by language status (normal language, NL, n = 54; language impairment, LI, n = 30). A grammatical judgment task was used in which the position of the error in the sentence (early versus late) was manipulated. A visual WM task (N-back) was administered, and the ability of WM to predict morphosyntactic processing was assessed. Results: Groups differed significantly in their sensitivity to grammatical errors (TD > SLI and NL > LI) but did not differ in nonverbal WM. Overall, children in all groups were more sensitive to, and quicker at detecting, errors occurring late in the sentence than early in the sentence. Nonverbal WM predicted morphosyntactic processing across groups, but the specific profile of association between WM and early versus late error detection was reversed for children with and without language impairment. Conclusions: Findings primarily support a "wrap-up" account, whereby the accumulating sentence context for errors positioned late in the sentence (rather than early) appeared to facilitate morphosyntactic processing. Although none of the groups displayed deficits in visual WM, individual differences in these nonverbal WM resources predicted proficiency in morphosyntactic processing.

Item: Take another look: Pupillometry and cognitive processing demands. Dollaghan, Christine A.; Rojas, Raúl; Mueller, Jana A.

Item: The Impact of Brief Restriction to Articulation on Children's Subsequent Speech Production (Acoustical Society of America). Seidl, Amanda; Brosseau-Lapré, Françoise; Goffman, Lisa.
This project explored whether disruption of articulation during listening affects subsequent speech production in 4-yr-olds with and without speech sound disorder (SSD). During novel word learning, typically developing children showed effects of articulatory disruption, revealed by larger differences between two acoustic cues to a sound contrast, whereas children with SSD were unaffected by articulatory disruption. Findings suggest that when typically developing 4-yr-olds experience an articulatory disruption during a listening task, their subsequent production is affected. Children with SSD show less influence of articulatory experience during perception, which could be the result of impaired or attenuated ties between perception and articulation.

Item: Using SALT software to assess the language production of bilingual Spanish-English children. Iglesias, Aquiles; Rojas, Raúl; Nockerts, Ann.
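Several of the dichotic-listening items above quantify a right-ear advantage (REA) or interaural asymmetry (IA). A conventional way to express such asymmetry is a percent laterality index, 100 × (R − L) / (R + L), where R and L are correct responses for the right and left ear; the sketch below illustrates that standard formula only and is not drawn from any of the studies listed:

```python
def laterality_index(right_correct, left_correct):
    """Percent laterality: positive values indicate a right-ear
    advantage, negative values a left-ear advantage."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct responses in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Example: 24 of 30 right-ear items correct vs. 18 of 30 left-ear items.
print(round(laterality_index(24, 18), 2))  # 14.29 -> a modest REA
```

The index is bounded at ±100 (all correct responses from one ear), which makes asymmetries comparable across tests with different numbers of items.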