Authors: Sadoughi, Najmeh; Busso, Carlos A.
Date added to repository: 2019-08-05
Date available: 2019-08-05
Date issued: 2018-05-15
ISBN: 9781538623350
URI: https://hdl.handle.net/10735.1/6771

Access note: Full text access from Treasures at UT Dallas is restricted to current UTD affiliates (use the provided Link to Article).

Abstract: The orofacial area conveys a range of information, including speech articulation and emotions. These two factors add constraints to the facial movements, creating non-trivial integrations and interplays. To generate more expressive and naturalistic movements for conversational agents (CAs), the relationship between these factors should be carefully modeled. Data-driven models are more appropriate for this task than rule-based systems. This paper presents two deep learning speech-driven structures to integrate speech articulation and emotional cues. The proposed approaches rely on multitask learning (MTL) strategies, where related secondary tasks are jointly solved when synthesizing orofacial movements. In particular, we evaluate emotion recognition and viseme recognition as secondary tasks. The approach creates shared representations that generate behaviors that are not only closer to the original orofacial movements, but are also perceived as more natural than the results from single-task learning.

Language: en
Rights: ©2018 IEEE
Subjects: Learning; Gesture; Speech; Conversation; Emotion recognition
Title: Expressive Speech-Driven Lip Movements with Multitask Learning
Type: article
Citation: Sadoughi, N., and C. Busso. 2018. "Expressive speech-driven lip movements with multitask learning." Proceedings - International Conference on Automatic Face and Gesture Recognition, 13th: 409-415, doi:10.1109/FG.2018.00066.
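
Illustration: to make the multitask setup described in the abstract concrete, the sketch below shows one way a shared speech encoder can feed a primary lip-movement regression head alongside auxiliary viseme and emotion classification heads, with the secondary losses shaping the shared representation. This is a minimal example under assumed layer types, feature dimensions, class counts, and loss weights; it is not the architecture or implementation reported in the paper.

# Minimal sketch (not the authors' implementation) of a speech-driven
# multitask model: a shared recurrent encoder over acoustic features feeds
# (1) a regression head for orofacial/lip movement parameters (primary task)
# and (2) auxiliary heads for viseme and emotion recognition (secondary tasks).
# All dimensions and loss weights are illustrative assumptions.

import torch
import torch.nn as nn


class SpeechDrivenMTL(nn.Module):
    def __init__(self, acoustic_dim=40, hidden_dim=128,
                 lip_dim=10, num_visemes=16, num_emotions=4):
        super().__init__()
        # Shared representation learned from the speech input.
        self.encoder = nn.GRU(acoustic_dim, hidden_dim, num_layers=2,
                              batch_first=True, bidirectional=True)
        enc_out = 2 * hidden_dim
        # Primary task: frame-level orofacial (lip) movement parameters.
        self.lip_head = nn.Linear(enc_out, lip_dim)
        # Secondary task 1: frame-level viseme recognition.
        self.viseme_head = nn.Linear(enc_out, num_visemes)
        # Secondary task 2: utterance-level emotion recognition (mean-pooled).
        self.emotion_head = nn.Linear(enc_out, num_emotions)

    def forward(self, speech):                 # speech: (batch, time, acoustic_dim)
        shared, _ = self.encoder(speech)       # (batch, time, 2*hidden_dim)
        lips = self.lip_head(shared)           # (batch, time, lip_dim)
        visemes = self.viseme_head(shared)     # (batch, time, num_visemes)
        emotion = self.emotion_head(shared.mean(dim=1))  # (batch, num_emotions)
        return lips, visemes, emotion


def mtl_loss(lips, visemes, emotion, lip_tgt, vis_tgt, emo_tgt,
             w_vis=0.3, w_emo=0.3):
    # Joint objective: lip-movement regression is primary; the weighted
    # secondary losses regularize and enrich the shared encoder.
    loss_lip = nn.functional.mse_loss(lips, lip_tgt)
    loss_vis = nn.functional.cross_entropy(
        visemes.reshape(-1, visemes.size(-1)), vis_tgt.reshape(-1))
    loss_emo = nn.functional.cross_entropy(emotion, emo_tgt)
    return loss_lip + w_vis * loss_vis + w_emo * loss_emo


if __name__ == "__main__":
    model = SpeechDrivenMTL()
    speech = torch.randn(8, 100, 40)           # 8 utterances, 100 frames each
    lip_tgt = torch.randn(8, 100, 10)
    vis_tgt = torch.randint(0, 16, (8, 100))
    emo_tgt = torch.randint(0, 4, (8,))
    lips, visemes, emotion = model(speech)
    loss = mtl_loss(lips, visemes, emotion, lip_tgt, vis_tgt, emo_tgt)
    loss.backward()
    print(float(loss))

The single-task baseline mentioned in the abstract would correspond to training only the lip-movement head with loss_lip; the sketch simply shows how the auxiliary tasks can be attached to the same shared encoder.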