Hyper Evidential Neural Network

Date

2022-05-01

ORCID

Journal Title

Journal ISSN

Volume Title

Publisher

DOI

Abstract

Estimating how uncertain an AI system is in its predictions is vital to improving the safety of such systems. Uncertainty in an observation can result from confusion between two or more classes due to a lack of discriminating features, causing the annotator to label it as a composite of multiple classes. This work proposes a framework that can train on such vague observations with composite labels and is the first to explicitly model the vagueness in an observation using the theory of Subjective Logic. It treats the predictions of a neural net as subjective hyper-opinions and learns the function that collects evidence for both singleton and composite labels, leading to these opinions. The resulting predictor for a multi-class classification problem is a Grouped Dirichlet distribution whose parameters are set by the evidence output by the neural net. Extensive experiments on both synthetic and real-world datasets show the proposed framework's superior performance over other approaches in identifying such observations, and the effectiveness of vagueness as an indicator of confusion between two or more classes when discriminating attributes are missing.
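The evidential setup the abstract refers to can be illustrated with a minimal sketch. This is not the authors' HENN architecture or their Grouped Dirichlet construction; it only shows the general Subjective-Logic-style idea of a classification head that outputs non-negative evidence for singleton classes and for a chosen set of composite labels, and maps that evidence to belief masses and a vacuity (uncertainty) term. The composite sets, the `HyperEvidentialHead` name, and the normalization below are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperEvidentialHead(nn.Module):
    """Illustrative head: outputs evidence for K singleton classes plus a
    user-chosen list of composite (multi-class) labels. The composite sets
    here are assumptions, not the paper's construction."""

    def __init__(self, in_features: int, num_classes: int, composites: list):
        super().__init__()
        self.num_classes = num_classes
        self.composites = composites
        # one evidence output per singleton class and per composite set
        self.fc = nn.Linear(in_features, num_classes + len(composites))

    def forward(self, x: torch.Tensor) -> dict:
        # softplus keeps evidence non-negative, as in evidential deep learning
        evidence = F.softplus(self.fc(x))
        # Subjective-Logic-style belief masses: b_i = e_i / S, vacuity u = W / S,
        # with W the number of singleton classes (a common convention).
        strength = evidence.sum(dim=-1, keepdim=True) + self.num_classes
        belief = evidence / strength
        vacuity = self.num_classes / strength
        return {"evidence": evidence, "belief": belief, "vacuity": vacuity}


# toy usage: 3 singleton classes and one composite label {0, 1}
head = HyperEvidentialHead(in_features=8, num_classes=3, composites=[(0, 1)])
out = head(torch.randn(4, 8))
print(out["belief"].shape, out["vacuity"].shape)  # torch.Size([4, 4]) torch.Size([4, 1])
```

A high belief mass on a composite label such as {0, 1} would then play the role the abstract describes for vagueness: the model has evidence that the observation belongs to that group of classes but cannot discriminate within it, whereas high vacuity signals a lack of evidence altogether.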

Description

Keywords

Computer Science

Sponsorship

Rights

Citation