Tan, Chin-Tuan

Permanent URI for this collection: https://hdl.handle.net/10735.1/7046

Chin-Tuan Tan joined the UT Dallas faculty as an Associate Professor in 2016. He also serves as director of the Auditory Perception Engineering Lab. His research interests include:

  • Speech and music signal processing
  • Audio engineering
  • Assistive hearing devices (implantable and non-implantable)
  • Psychoacoustics and physiological acoustics
  • Auditory neuroscience analysis and modeling
  • Medical instrumentation and systems
  • Functional imaging

ORCID: 0000-0002-4676-4917


Recent Submissions

Now showing 1 - 5 of 5
  • Item
    Analyzing Auditory Evoked Cortical Response to Noise-Suppressed Speech in Cochlear Implant Users Using Mismatch Negativity
    (IEEE Computer Society, 2019-03) Yu, F.; Tan, Chin-Tuan; Chen, F.
    Speech perception in background noise remains a challenge for cochlear implant (CI) users, and noise-suppression processing (e.g., Wiener filtering) has commonly been used to improve their speech perception. It is therefore crucial to objectively examine how CI users perceive noise-suppressed speech. The purpose of this work was to investigate whether the mismatch negativity (MMN) response could objectively assess the quality of noise-suppressed speech as perceived by CI users. A vowel /a/ stimulus was masked by steady-state noise, creating two noisy stimuli at signal-to-noise ratios (SNRs) of -5 and +5 dB, which were then processed by Wiener filtering. Electroencephalogram data obtained from 7 CI users who participated in an auditory oddball paradigm were analyzed to extract the MMN, with the two noise-suppressed stimuli serving as the deviant stimuli and the clean vowel as the standard stimulus. Experimental results showed that the noise-suppressed stimulus at -5 dB SNR evoked a larger MMN amplitude than that at +5 dB SNR, reflecting the effect of SNR on the auditory evoked cortical response to noise-suppressed speech. The MMN may therefore serve as an objective biomarker for evaluating the perception of noise-suppressed speech in CI users. © 2019 IEEE. A minimal illustrative sketch of the Wiener-gain suppression and MMN difference-wave steps appears after this list.
  • Item
    Neural Entrainment to Speech Envelope in Response to Perceived Sound Quality
    (IEEE Computer Society, 2019-03) Ngo, Dat Quoc; Oliver, Garret; Tcheslavski, Gleb; Tan, Chin-Tuan
    The extent to which people listen to and perceive speech content at different noise levels varies from individual to individual. In past research, speech intelligibility was determined by subjective rating assessment, which suffered from variability in subjects' individual characteristics. The purpose of this study is to analyze electroencephalography (EEG) recordings with the multivariate Temporal Response Function (mTRF) to examine neural responses to speech stimuli at different sound and noise levels. The results show that the fronto-central area of the brain clearly exhibits envelope entrainment to the speech stimuli. ©2019 IEEE. A minimal illustrative TRF (ridge regression) sketch appears after this list.
  • Item
    A Speech Processing Strategy Based on Sinusoidal Speech Model for Cochlear Implant Users
    (Institute of Electrical and Electronics Engineers Inc.) Lee, Sungmin; Akbarzadeh, Sara; Singh, Satnam; Tan, Chin-Tuan
    In sinusoidal modeling (SM), the speech signal, which is pseudo-periodic in structure, can be approximated by sinusoids and noise without losing significant speech information. A speech processing strategy based on this sinusoidal speech model is relevant for encoding electric pulse streams in cochlear implant (CI) processing, where the number of available channels is limited. In this study, 5 normal-hearing (NH) listeners and 2 CI users performed speech recognition and perceived sound quality rating tasks on speech sentences processed in 12 different test conditions. The sinusoidal analysis/synthesis algorithm retained 1, 3, or 6 sinusoids from sentences low-pass filtered at 1 kHz, 1.5 kHz, 3 kHz, or 6 kHz, and the re-synthesized sentences formed the test conditions. Each of 12 lists of AzBio sentences was randomly chosen and processed with one of the 12 test conditions before being presented to each participant at 65 dB SPL (sound pressure level). Participants were instructed to repeat each sentence as they perceived it, and the number of correctly recognized words was scored. They were also asked to rate the perceived sound quality of the sentences, including the original speech sentences, on a scale of 1 (distorted) to 10 (clean). Both speech recognition scores and perceived sound quality ratings across all participants increased as the number of sinusoids increased and the low-pass filter broadened. Our findings show that three sinusoids may be sufficient to elicit nearly maximum speech intelligibility and quality for both NH and CI listeners. The sinusoidal speech model has the potential to form the basis of a speech processing strategy for CIs. ©2018 APSIPA. A minimal illustrative sinusoidal analysis/synthesis sketch appears after this list.
  • Item
    Wavelet Scattering Transform for Variability Reduction in Cortical Potentials Evoked by Pitch Matched Electro-Acoustic Stimulation in Unilateral Cochlear Implant Patients
    (Institute of Electrical and Electronics Engineers Inc.) Heydarzadeh, Mehrdad; Akbarzadeh, Sara; Tan, Chin-Tuan
    A cochlear implant (CI) restores hearing sensation in profoundly deafened patients by directly stimulating the auditory nerve with electric pulses delivered through an array of tonotopically inserted electrodes. Basal electrodes stimulate in response to high input frequencies while apical electrodes stimulate in response to low input frequencies. The problem with this electrical stimulation, particularly in unilaterally implanted users who have residual hearing in the contralateral ear, lies in the frequency mismatch between the characteristic frequency of the auditory nerve and the input signal. In this paper, we revisit our previously proposed mechanism for tuning an intra-cochlear electrode to its pitch-matched frequency using single-channel EEG [1]. We apply the wavelet scattering transform to extract a deformation-invariant representation from the EEG signal recorded from each of 10 CI subjects while they listened to pitch-matched electro-acoustic stimulation. Results show that the wavelet scattering transform is able to capture the variability introduced by different subjects and offers a more robust alternative for revealing the underlying neurophysiological responses to this perceptual event. ©2018 APSIPA. A minimal illustrative first-order scattering sketch appears after this list.
  • Item
    Implication of Speech Level Control in Noise to Sound Quality Judgement
    (Institute of Electrical and Electronics Engineers Inc.) Akbarzadeh, Sara; Lee, Sungmin; Singh, Satnam; Tan, Chin-Tuan
    The relative level of speech and noise, i.e., the signal-to-noise ratio (SNR), alone may not fully account for how humans perceive speech in noise or judge the sound quality of the speech component. To date, the most common rationale in front-end processing of noisy speech in assistive hearing devices is to reduce the (estimated) 'noise' with the sole objective of improving the overall SNR. The absolute sound pressure level of speech in the remaining noise, which listeners need to anchor their perceptual judgement, is assumed to be restored by the subsequent dynamic range compression stage intended to compensate for loudness recruitment in hearing-impaired (HI) listeners. However, uncoordinated settings of the thresholds that trigger the nonlinear processing in these two separate stages may instead amplify the remaining 'noise' and/or distortion. This can confuse listeners' judgement of sound quality and cause it to deviate from the perceptual trend one would expect when more noise is present. In this study, both normal-hearing (NH) and HI listeners were asked to rate the sound quality of noisy speech and noise-reduced speech as they perceived it. The results showed that speech processed by noise reduction algorithms was rated lower in quality than the original unprocessed speech in noise. The outcomes also showed that sound quality judgement depended on both the input SNR and the absolute level of speech, with greater weight on the latter, across both NH and HI listeners. The outcome of this study suggests that integrating the two separate processing stages into one may better match the underlying mechanism of auditory sound reception. Further work will attempt to identify settings of these two processing stages that improve speech reception for assistive hearing device users. ©2018 APSIPA. A minimal illustrative sketch of the SNR and absolute-level computations appears after this list.
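
Method sketches (illustrative)

The first item above combines Wiener-filter noise suppression with extraction of the mismatch negativity (MMN) from an auditory oddball paradigm. The sketch below is a minimal illustration of both steps, not the authors' implementation: the noise spectrum is assumed to be estimated from a noise-only recording, the STFT settings are arbitrary, and the MMN is taken simply as the deviant-minus-standard difference wave.

```python
import numpy as np
from scipy.signal import stft, istft

def wiener_suppress(noisy, noise_only, fs=16000, nperseg=512):
    """Suppress stationary noise with a per-bin spectral Wiener gain."""
    _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise_only, fs=fs, nperseg=nperseg)
    noise_psd = np.mean(np.abs(N) ** 2, axis=1, keepdims=True)
    # crude a priori SNR estimate (a posteriori SNR minus one, floored)
    snr_prio = np.maximum(np.abs(X) ** 2 / noise_psd - 1.0, 1e-3)
    gain = snr_prio / (1.0 + snr_prio)          # Wiener gain G = SNR/(1+SNR)
    _, enhanced = istft(gain * X, fs=fs, nperseg=nperseg)
    return enhanced

def mismatch_negativity(standard_epochs, deviant_epochs):
    """MMN difference wave: mean deviant ERP minus mean standard ERP.
    Inputs are (n_epochs, n_samples) arrays from one EEG channel."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
```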
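
The second item relates the speech envelope to EEG with the multivariate Temporal Response Function (mTRF). The sketch below shows the underlying idea for a single EEG channel, a time-lagged ridge regression from envelope to EEG; the sampling rate, lag range, and ridge parameter are assumptions, and the study itself used the multivariate mTRF formulation.

```python
import numpy as np

def lagged_design(envelope, n_lags):
    """Design matrix of the speech envelope delayed by 0..n_lags-1 samples."""
    X = np.zeros((len(envelope), n_lags))
    for k in range(n_lags):
        X[k:, k] = envelope[:len(envelope) - k]
    return X

def fit_trf(envelope, eeg, fs=128, max_lag_s=0.25, ridge=100.0):
    """Fit TRF weights w by ridge regression: (X'X + ridge*I) w = X'y."""
    n_lags = int(max_lag_s * fs)
    X = lagged_design(envelope, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ eeg)

def entrainment_score(envelope, eeg, w, fs=128, max_lag_s=0.25):
    """Correlation between TRF-predicted and recorded EEG; higher values
    indicate stronger envelope entrainment (use held-out data in practice)."""
    pred = lagged_design(envelope, int(max_lag_s * fs)) @ w
    return np.corrcoef(pred, eeg)[0, 1]
```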
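
The third item re-synthesizes speech from a small number of sinusoids per frame. The sketch below illustrates the general peak-picking analysis/synthesis idea under stated assumptions (Hann-windowed overlap-add, no peak tracking or phase continuation across frames); it is not the exact algorithm used in the study.

```python
import numpy as np

def sinusoidal_resynthesis(x, fs, n_sines=3, frame=512, hop=256):
    """Keep the n_sines strongest FFT peaks per frame and re-synthesize them."""
    y = np.zeros(len(x))
    t = np.arange(frame) / fs
    win = np.hanning(frame)                       # sums to 1 at 50% overlap
    freqs = np.fft.rfftfreq(frame, 1.0 / fs)
    for start in range(0, len(x) - frame, hop):
        spec = np.fft.rfft(x[start:start + frame] * win)
        peaks = np.argsort(np.abs(spec))[-n_sines:]           # strongest bins
        frame_out = np.zeros(frame)
        for k in peaks:
            amp = 2.0 * np.abs(spec[k]) / np.sum(win)         # peak amplitude
            frame_out += amp * np.cos(2 * np.pi * freqs[k] * t + np.angle(spec[k]))
        y[start:start + frame] += frame_out * win             # overlap-add
    return y
```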
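
The fourth item uses the wavelet scattering transform to obtain EEG features that are stable across subjects. The sketch below shows only the first-order operation that gives the transform this stability: a wavelet modulus followed by low-pass averaging. The Morlet filter bank, band centers, and averaging window are assumptions; the study used a full scattering transform.

```python
import numpy as np

def morlet(fs, f0, dur=1.0, bw=1.0):
    """Complex Morlet wavelet centered at f0 Hz with a Gaussian envelope."""
    t = np.arange(-dur / 2, dur / 2, 1.0 / fs)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-0.5 * (t * f0 / bw) ** 2)

def scattering_first_order(x, fs, centers=(2, 4, 8, 16, 32), avg_s=0.25):
    """First-order scattering coefficients: |x * psi_f| low-pass averaged."""
    n_avg = int(avg_s * fs)
    lowpass = np.ones(n_avg) / n_avg
    feats = []
    for f0 in centers:
        u = np.abs(np.convolve(x, morlet(fs, f0), mode="same"))      # modulus
        feats.append(np.convolve(u, lowpass, mode="same")[::n_avg])  # average
    return np.concatenate(feats)
```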
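
The fifth item distinguishes two quantities that drive quality judgements: the input SNR and the absolute level of the speech component. The helpers below show how these are conventionally computed when the separate speech and noise signals are available; the calibration offset mapping digital RMS to dB SPL is a placeholder, not a value from the study.

```python
import numpy as np

def rms_db(x):
    """Root-mean-square level of a signal in dB re digital full scale."""
    x = np.asarray(x, dtype=float)
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

def input_snr_db(speech, noise):
    """Input SNR: level difference between clean speech and noise components."""
    return rms_db(speech) - rms_db(noise)

def speech_level_db_spl(speech, calib_offset_db=94.0):
    """Absolute speech level: digital RMS plus a (hypothetical) calibration
    offset mapping full-scale RMS to dB SPL for the playback chain."""
    return rms_db(speech) + calib_offset_db
```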

Works in Treasures @ UT Dallas are made available exclusively for educational purposes such as research or instruction. Literary rights, including copyright for published works held by the creator(s) or their heirs, or other third parties may apply. All rights are reserved unless otherwise indicated by the copyright owner(s).