Validation and Interpretable Model Explanations for Synthesized Data in Healthcare

Date

2021-07-16

Abstract

Recent advances in artificial intelligence (AI) based solutions for healthcare problems have increased the demand for high-quality, accessible patient data and for a functional understanding of the outcomes of AI decision support systems. Nonetheless, challenges persist: strict regulations governing patient privacy, small and imbalanced datasets resulting from the high costs of measurement and expert annotation, and the black-box nature of AI technology. In this dissertation, we address foundational frameworks for producing quality, accessible synthesized healthcare time-series data and for building trust and confidence in the outcomes of AI solutions through interpretable explanations for healthcare time-series data. To address the first challenge, we propose validation approaches for synthesized healthcare time-series data and apply the resulting quality synthesized data to train better-performing healthcare decision support systems. Finally, we present a framework that generates and integrates modular interpretable explanations from varying deep learning models whose capacities are achieved using synthesized data.
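
The abstract does not detail the proposed validation approaches. As a purely illustrative sketch, and not necessarily the dissertation's method, one widely used utility check for synthesized time-series data is "train on synthetic, test on real" (TSTR): a downstream model fit on synthetic records should score close to one fit on real records. The function name, data shapes, and the choice of classifier below are assumptions for illustration only.

```python
# Illustrative TSTR sketch -- not the dissertation's method.
# Assumes fixed-length time series shaped (n_samples, n_steps, n_features)
# and binary labels; logistic regression stands in for any downstream model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def tstr_score(X_syn, y_syn, X_real_train, y_real_train, X_real_test, y_real_test):
    """Return (AUC of model trained on synthetic data, AUC of model trained on
    real data), both evaluated on the same held-out real test set."""
    # Flatten each time series into a single feature vector.
    flat = lambda X: np.asarray(X).reshape(len(X), -1)

    model_syn = LogisticRegression(max_iter=1000).fit(flat(X_syn), y_syn)
    model_real = LogisticRegression(max_iter=1000).fit(flat(X_real_train), y_real_train)

    auc_syn = roc_auc_score(y_real_test, model_syn.predict_proba(flat(X_real_test))[:, 1])
    auc_real = roc_auc_score(y_real_test, model_real.predict_proba(flat(X_real_test))[:, 1])
    return auc_syn, auc_real
```

A small gap between the two AUC values suggests the synthetic data preserves the predictive signal of the real data; a large gap indicates the generator has lost clinically relevant structure.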

Keywords

Medical care, Artificial intelligence
