Prabhakaran, Balakrishnan
Validation and Interpretable Model Explanations for Synthesized Data in Healthcare
Thesis, August 2021
https://hdl.handle.net/10735.1/9459
Subjects: Medical care; Artificial intelligence
Format: application/pdf; Language: en
Record dates: 2022-08-23; 2021-08; 2021-07-16

Abstract:
Recent advances in artificial intelligence (AI) based solutions for healthcare problems have increased the demand for high-quality, accessible patient data and for a functional understanding of the remarkable outcomes of AI decision support systems. Nonetheless, challenges persist: strict regulations that protect patient privacy, small and imbalanced datasets caused by the high costs of measurement and expert annotation, and the black-box nature of AI technology. In this dissertation, we develop the foundational frameworks needed to produce high-quality, accessible synthesized healthcare time-series data and to build trust and confidence in the outcomes of AI solutions through interpretable explanations for healthcare time-series data. To address the first challenge, we propose validation approaches for synthesized healthcare time-series data and use this quality synthesized data to train better-performing healthcare decision support systems. Finally, we present a framework that generates and integrates modular interpretable explanations from varying deep learning models, with model capacities achieved using synthesized data.