Explainable AI Algorithms for Classification Tasks With Mixed Data

dc.contributor.advisor: Hayenga, Heather
dc.contributor.advisor: Gupta, Gopal
dc.contributor.committeeMember: Tamil, Lakshman
dc.contributor.committeeMember: Salazar, Elmer
dc.contributor.committeeMember: Nourani, Mehrdad
dc.contributor.committeeMember: Khan, Latifur
dc.creator: Wang, Huaduo
dc.date.accessioned: 2024-03-13T22:02:11Z
dc.date.available: 2024-03-13T22:02:11Z
dc.date.created: 2022-12
dc.date.issued: December 2022
dc.date.submitted: December 2022
dc.date.updated: 2024-03-13T22:02:11Z
dc.description.abstract: With the great power of machine learning techniques, numerous applications have been created that have become an integral part of modern life. However, the decision-making processes of many of these machine learning-based applications are questioned and criticized because they are opaque to users, especially in critical tasks such as disease diagnosis, loan approval, and industrial robotics. This opacity results from using statistical machine learning approaches that generate models that can be viewed as solutions to optimization problems that minimize loss or maximize likelihood. Explainable Artificial Intelligence (XAI) models, or Explainable Machine Learning (XML) models, are machine-learned models whose decision-making or prediction-making process human users can understand. The main goals of XAI are to: (1) generate highly accurate models that are comprehensible to human users; and (2) explain a model's decision-making process to humans so that they can easily understand it, develop trust in it, and diagnose potential problems. This dissertation presents the FOLD family of new explainable AI algorithms for classification tasks that efficiently handle mixed data (numerical and categorical) without extra effort (i.e., without resorting to any special data encoding). These algorithms generate a set of default rules, represented as a stratified logic program, that serves as the predictive model. Due to their symbolic nature and logical foundation, the generated models can be easily understood and modified by humans. These new algorithms are competitive in predictive performance with state-of-the-art machine learning algorithms such as XGBoost and Multi-Layer Perceptrons (MLP), while being an order of magnitude faster in execution.
The FOLD-R++ algorithm has been designed for binary classification problems, FOLD-RM for multi-category classification problems, and FOLD-LTR for ranking. FOLD-SE is a further improvement over these algorithms that leads to scalable explainability: regardless of the size of the data, the generated model is represented using a small number of rules—resulting in improved human-interpretability and human-explainability—while maintaining excellent predictive performance. The rest of this dissertation presents the FOLD family of algorithms and compares and contrasts them with state-of-the-art machine learning algorithms.
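To make the abstract's notion of a default-rule model concrete, the following is an illustrative sketch (not the dissertation's implementation) of how a FOLD-style default rule with exceptions might be evaluated over mixed numerical and categorical data; the feature names, thresholds, and labels are hypothetical.

```python
# Hedged sketch: evaluating FOLD-style default rules on mixed data.
# All rule contents below are hypothetical, chosen only for illustration.

def satisfies(record, literal):
    """Check one literal (feature, op, value) on categorical or numerical data."""
    feat, op, val = literal
    x = record[feat]
    if op == "==":
        return x == val          # categorical equality test
    if op == "<=":
        return x <= val          # numerical threshold test
    if op == ">":
        return x > val
    raise ValueError(f"unknown operator: {op}")

def rule_applies(record, rule):
    """A default rule fires when all conditions hold and no exception holds."""
    return (all(satisfies(record, lit) for lit in rule["conditions"])
            and not any(satisfies(record, lit) for lit in rule["exceptions"]))

def predict(record, rules, default_label=None):
    """Return the label of the first applicable rule (rules tried in order)."""
    for rule in rules:
        if rule_applies(record, rule):
            return rule["label"]
    return default_label

# Hypothetical ruleset mixing a categorical and a numerical condition,
# roughly: label(X, '<=50K') if marital_status(X, 'Never-married'),
#          unless capital_gain(X) > 7000.
rules = [
    {"label": "<=50K",
     "conditions": [("marital_status", "==", "Never-married")],
     "exceptions": [("capital_gain", ">", 7000)]},
]

print(predict({"marital_status": "Never-married", "capital_gain": 0}, rules))
```

Because the model is just an ordered list of readable rules, a human can inspect, edit, or remove any rule directly, which is the interpretability property the abstract emphasizes.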
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/10735.1/10050
dc.language.iso: English
dc.subject: Computer Science
dc.title: Explainable AI Algorithms for Classification Tasks With Mixed Data
dc.type: Thesis
dc.type.material: text
local.embargo.lift: 2023-12-01
local.embargo.terms: 2023-12-01
thesis.degree.college: School of Engineering and Computer Science
thesis.degree.department: Computer Engineering
thesis.degree.grantor: The University of Texas at Dallas
thesis.degree.name: PHD

Files

Original bundle
- WANG-PRIMARY-2022.pdf (726.15 KB, Adobe Portable Document Format)

License bundle
- license.txt (1.98 KB, Plain Text)
- proquest_license.txt (6.37 KB, Plain Text)