Explainable AI Algorithms for Classification Tasks With Mixed Data

Date

December 2022

Abstract

With the great power of Machine Learning techniques, numerous applications have been created that have become an integral part of our modern life. However, the decision-making processes of many of these machine-learning-based applications are being questioned and criticized due to their opacity to users, especially for critical tasks such as disease diagnosis, loan approval, and industrial robotics. This opacity is the result of using statistical machine learning approaches that generate models which can be viewed as solutions to optimization problems that minimize loss or maximize likelihood. Explainable Artificial Intelligence (XAI) models, or Explainable Machine Learning (XML) models, are machine-learned models whose decision-making or prediction-making process human users can understand. The main goals of XAI are to (1) generate highly accurate models that are comprehensible to human users, and (2) explain a model’s decision-making process to human users so that they can easily understand it, develop trust in it, and diagnose any potential problems. This dissertation presents the FOLD family of new explainable AI algorithms for classification tasks that efficiently handle mixed data (numerical and categorical) without extra effort, i.e., without resorting to any special data encoding. These algorithms generate a set of default rules, represented as a stratified logic program, that serves as the predictive model. Due to their symbolic, logic-based nature, these models can be easily understood and modified by humans. The new algorithms are competitive in predictive performance with state-of-the-art machine learning algorithms such as XGBoost and Multi-Layer Perceptrons (MLP), while being an order of magnitude faster in execution. The FOLD-R++ algorithm has been designed for binary classification problems, FOLD-RM for multi-category classification problems, and FOLD-LTR for ranking. FOLD-SE is a further improvement over these algorithms that provides scalable explainability: regardless of the size of the data, the generated model is represented by a small number of rules, which improves human interpretability and explainability while maintaining excellent predictive performance. The rest of this dissertation presents the FOLD family of algorithms and compares and contrasts them with state-of-the-art machine learning algorithms.
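
To make the notion of a "default rule" concrete, the sketch below is a minimal, hypothetical illustration and not actual FOLD output: a single default rule with one exception, written as a stratified logic-program rule in a comment and then evaluated over mixed numerical and categorical records in Python. The loan scenario, predicate names, and the 30000 threshold are invented for this example.

    # Hypothetical default rule with an exception (negation as failure), of the
    # general form produced by rule-learning systems such as the FOLD family:
    #
    #   eligible(X) :- income(X) > 30000, not ab1(X).
    #   ab1(X)      :- employment(X) = 'unemployed'.

    def ab1(record):
        # Exception ("abnormality") predicate: blocks the default for unemployed applicants.
        return record["employment"] == "unemployed"

    def eligible(record):
        # Default rule: fires when income exceeds 30000 and no exception applies.
        return record["income"] > 30000 and not ab1(record)

    if __name__ == "__main__":
        applicants = [
            {"income": 45000, "employment": "salaried"},    # default rule fires
            {"income": 45000, "employment": "unemployed"},  # blocked by the exception
            {"income": 20000, "employment": "salaried"},    # default condition fails
        ]
        for a in applicants:
            print(a, "->", eligible(a))

Because the learned model is just such a rule set, a human reader can trace exactly which conditions and exceptions led to each prediction, which is the sense in which the FOLD algorithms are explainable.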

Keywords

Computer Science
