Carlos Busso-Recabarren is a Professor of Electrical Engineering and Principal Investigator of the MSP (Multimodal Signal Processing) Laboratory. His research interests include:

  • Modeling and Synthesis of Human Behavior
  • Affective State Recognition
  • Multimodal Interfaces
  • Sensing Participant Interaction
  • Digital Signal Processing
  • Speech and Video Processing

Works in Treasures @ UT Dallas are made available exclusively for educational purposes such as research or instruction. Literary rights, including copyright for published works, may be held by the creator(s), their heirs, or other third parties. All rights are reserved unless otherwise indicated by the copyright owner(s).

Recent Submissions

  • Lexical Dependent Emotion Detection Using Synthetic Speech Reference 

    Lotfian, Reza; Busso, Carlos A. (Institute of Electrical and Electronics Engineers Inc., 2019-02-08)
    This paper aims to create neutral reference models from synthetic speech to contrast the emotional content of a speech signal. Modeling emotional behaviors is a challenging task due to the variability in perceiving and ...
  • Speech-Driven Expressive Talking Lips with Conditional Sequential Generative Adversarial Networks 

    Sadoughi, Najmeh; Busso, Carlos (Institute of Electrical and Electronics Engineers Inc., 2019-05-07)
    Articulation, emotion, and personality play strong roles in orofacial movements. To improve the naturalness and expressiveness of virtual agents (VAs), it is important that we carefully model the complex interplay between ...
  • Speech-Driven Animation with Meaningful Behaviors 

    Sadoughi, Najmeh; Busso, Carlos (Elsevier B.V., 2019-04-05)
    Conversational agents (CAs) play an important role in human-computer interaction (HCI). Creating believable movements for CAs is challenging, since the movements have to be meaningful and natural, reflecting the coupling ...
  • Expressive Speech-Driven Lip Movements with Multitask Learning 

    Sadoughi, Najmeh; Busso, Carlos A.
    The orofacial area conveys a range of information, including speech articulation and emotions. These two factors constrain the facial movements, creating non-trivial integrations and interplays. To generate more ...