Logic-based Approaches in Explainable AI and Natural Language Understanding
Abstract
The dramatic success of machine learning algorithms has led to a torrent of Artificial Intelligence (AI) applications in computer vision and natural language understanding. However, the effectiveness of these systems is limited by machines' current inability to explain and justify their decisions and actions. The Explainable AI program (Gunning, 2015) aims to create a suite of machine learning techniques that (a) produce explainable models without sacrificing predictive performance and (b) enable human users to understand the underlying logic and diagnose the mistakes made by an AI system. Inspired by the Explainable AI program, this dissertation presents logic programming-based approaches to several problems of interest in Explainable AI, including learning machine learning hypotheses in the form of default theories, counterfactual reasoning, and natural language understanding. In particular, we introduce algorithms that automate the learning of default theories and leverage these algorithms to capture the underlying logic of complex statistical learning models. We also propose a fully explainable logic programming-based framework for visual question answering, and we introduce a counterfactual reasoner, based on Craig interpolants and Answer Set Programming, that produces recommendations respecting logical, physical, and temporal constraints.
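To make the central notion concrete: a default theory, in Reiter's default logic, pairs a set of first-order facts with default rules of the form prerequisite : justification / conclusion. The classic "birds typically fly" default shown below is standard background for illustration, not an example taken from the dissertation itself:

    \[
    \frac{\mathit{bird}(X) \;:\; \mathit{flies}(X)}{\mathit{flies}(X)}
    \]

read as: if X is a bird and it is consistent to assume that X flies, then conclude that X flies. In Answer Set Programming, such a default is commonly encoded with negation as failure, e.g. flies(X) :- bird(X), not ab(X), where ab(X) marks abnormal (non-flying) birds such as penguins; this is the standard translation, not necessarily the exact encoding used in the dissertation.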