Multidimensional Uncertainty Quantification for Deep Neural Networks
Abstract
Deep neural networks (DNNs) have received tremendous attention and achieved great success in various applications, such as image and video analysis, natural language processing, recommendation systems, and drug discovery. However, inherent uncertainties arising from different root causes have been recognized as serious hurdles to finding robust and trustworthy solutions for real-world problems with DNNs. A lack of consideration of such uncertainties may lead to unnecessary risk: for example, a self-driving autonomous car may fail to detect a pedestrian on the road, and a deep learning-based medical assistant may misdiagnose cancer as a benign tumor. In this work, we study how to measure different causes of uncertainty for DNNs and how to use these measures to solve diverse decision-making problems more effectively. In the first part of this thesis, we develop a general learning framework that quantifies multiple types of uncertainty arising from different root causes, such as vacuity (i.e., uncertainty due to a lack of evidence) and dissonance (i.e., uncertainty due to conflicting evidence), for graph neural networks. We provide a theoretical analysis of the relationships between the different uncertainty types and demonstrate that dissonance is most effective for misclassification detection, whereas vacuity is most effective for Out-of-Distribution (OOD) detection. In the second part of the thesis, we study the significant impact of OOD objects on semi-supervised learning (SSL) for DNNs and develop a novel framework that improves the robustness of existing SSL algorithms against OOD data. In the last part of the thesis, we create a general learning framework that quantifies multiple uncertainty types for multi-label temporal neural networks, and we further develop novel uncertainty fusion operators to quantify the fused uncertainty of a subsequence for early event detection.
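To make the vacuity and dissonance notions concrete, the sketch below computes both quantities from per-class Dirichlet evidence using the standard subjective-logic definitions commonly adopted in evidential deep learning (vacuity as K divided by the Dirichlet strength, dissonance as a balance-weighted sum of conflicting belief masses). This is a minimal, assumed illustration in NumPy; the function name and examples are hypothetical and not taken from the thesis code.

```python
import numpy as np

def vacuity_and_dissonance(evidence):
    """Compute vacuity and dissonance from non-negative per-class evidence.

    evidence: array of shape (K,), e.g. the output of an evidential
    classifier head (an assumption for this sketch, not the thesis code).
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.shape[0]
    alpha = evidence + 1.0        # Dirichlet parameters
    S = alpha.sum()               # Dirichlet strength
    belief = evidence / S         # subjective-logic belief masses

    vacuity = K / S               # high when total evidence is scarce

    # Dissonance: evidence spread across conflicting (similar-mass) classes.
    dissonance = 0.0
    for k in range(K):
        others = np.delete(belief, k)
        denom = others.sum()
        if denom > 0 and belief[k] > 0:
            # Relative mass balance between class k and each other class.
            bal = 1.0 - np.abs(others - belief[k]) / (others + belief[k])
            dissonance += belief[k] * (others * bal).sum() / denom
    return vacuity, dissonance

# Strong but conflicting evidence -> low vacuity, high dissonance.
print(vacuity_and_dissonance([50.0, 48.0, 1.0]))
# Almost no evidence -> high vacuity, low dissonance.
print(vacuity_and_dissonance([0.2, 0.1, 0.1]))
```

The two examples illustrate why the thesis treats the measures as complementary: dissonance flags confidently conflicting predictions (useful for misclassification detection), while vacuity flags inputs with little supporting evidence (useful for OOD detection).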