Multimodal Medical Image Analysis Using Machine Learning
Abstract
With extensive collections of data and advances in medical diagnostic imaging, including digitized histopathology images, computer-aided detection for medical assessment has become feasible. Clinicians and medical professionals can use automated computational models to detect regions of interest and aid in diagnosis. Such models can provide a second opinion in times of uncertainty or operate independently, reducing the load on healthcare providers for difficult and time-consuming tasks. The research presented in this dissertation focuses on developing automated systems for detection, classification, survival prediction, segmentation, and quantification using machine learning and deep learning algorithms for three medical problems. We use multiview images, images combined with clinical information, and multisite images to solve these problems, overcoming underlying challenges that include limited data and a lack of region annotations.

In the first problem, we develop a solution to automatically assess craniosynostosis, a skull deformity. An automatic craniosynostosis detector can diagnose the malformation early, which particularly helps care providers with limited craniofacial expertise. We analyze 2D multiview images of healthy controls and infants with craniosynostosis to identify the disease using computer-based classifiers. First, we develop traditional machine learning (ML) classifiers that use features extracted from the multiview images; next, we build a convolutional neural network (CNN) classifier. The ML model achieves an accuracy of 91.7% and the CNN model 90.6%. The ML model performs slightly better than the CNN model, probably because the designed ML features are well suited to differentiating craniosynostosis subtypes and only a small image dataset was available for model development.

In the second problem, we classify rhabdomyosarcoma (RMS), a common cancer of the soft tissue in children, into the correct subtype. The subtypes respond to different treatments, and because the differences in the appearance of the histopathology images are subtle, manual classification is tedious and requires high expertise. We present a machine learning-based pipeline that automatically classifies RMS into its three major subtypes using whole slide images (WSIs). We train the model using only the class label associated with each WSI; no manual annotations are used for model development, whereas most related tumor classification approaches need manual regional or nuclear annotations on the WSI. We first divide a WSI into tiles and predict the class of each tile, then convert the tile-level predictions into a WSI-level prediction using thresholding and soft voting (a sketch of this aggregation appears below). We achieved 94.87% WSI tumor subtype classification accuracy on a large and diverse test dataset. Unlike related work, which uses 20X or 10X magnification for its best results, we achieved this accuracy at 5X magnification, so training and testing run computationally faster owing to the lower image resolution.

Next, we address survival prediction by developing a novel survival predictor. Our proposed method comprises two steps: first, a whole slide feature map (WSFM) is extracted; second, it is used to build the survival predictor. We divide a WSI into small tile images and extract features for each tile using a pretrained InceptionV3 model.
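Before continuing with the survival pipeline, the tile-to-WSI aggregation mentioned above can be illustrated with a minimal sketch. The confidence cutoff and the rule of discarding low-confidence tiles before averaging are assumptions; the abstract only states that thresholding and soft voting are combined.

```python
import numpy as np

def wsi_label(tile_probs: np.ndarray, conf_threshold: float = 0.5) -> int:
    """Aggregate tile-level class probabilities into one WSI-level label.

    tile_probs: (n_tiles, n_classes) softmax outputs of the tile classifier.
    The thresholding rule here (dropping low-confidence tiles before
    averaging) is an assumed interpretation, not the dissertation's exact rule.
    """
    keep = tile_probs.max(axis=1) >= conf_threshold  # confident tiles only
    votes = tile_probs[keep] if keep.any() else tile_probs
    # Soft voting: average the remaining probabilities, then pick the class.
    return int(np.argmax(votes.mean(axis=0)))

# Usage: three tiles, three RMS subtypes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.5, 0.3]])
print(wsi_label(probs))  # -> 0
```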
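The tile-level feature extraction for the WSFM can be sketched as follows, assuming ImageNet-pretrained weights and 299x299 RGB tiles (the tile size and pooling choice are assumptions, not stated in the abstract).

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input

# ImageNet-pretrained InceptionV3 used as a fixed feature extractor;
# pooling="avg" yields one 2048-dimensional vector per tile.
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def tile_features(tiles: np.ndarray) -> np.ndarray:
    """tiles: (n_tiles, 299, 299, 3) RGB tile images cut from one WSI."""
    return extractor.predict(preprocess_input(tiles.astype("float32")))
```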
Next, we reduce the dimensionality of the features by applying principal component analysis (PCA) to obtain a low-dimensional feature representation for each tile. We store the tile features as channel information, replacing each tile with its PCA-extracted features in place to form the WSFM. The WSFM captures the information of the entire tissue in the WSI while preserving the adjacency of the tiles (a construction sketch appears below). Using the WSFMs as input, we then build a Siamese survival convolutional neural network (SSCNN), which overcomes the small-dataset problem pervasive in existing methods. The SSCNN uses multivariate clinical features along with the WSFM to predict a survival score. We propose a novel modified pairwise ranking loss with a bounded inverse term to train the SSCNN (a generic stand-in for this loss is sketched below). The proposed method needs no pixel-level annotations, which are a notorious bottleneck for such studies, and can be easily adapted to any tumor because it is agnostic to other model development parameters, such as the number of clusters. Experimental results on two different tumors, RMS and glioblastoma multiforme (GBM) brain cancer, validate the success of the proposed SSCNN compared with other state-of-the-art survival predictors.

Finally, we established a deep learning model (DLM) pipeline to assess tumor viability using WSIs of the primary tumors and corresponding lung sections of 130 mice with breast cancer. We developed an InceptionResNetV3 convolutional neural network (CNN) model to detect viable tumor, necrotic tumor, and normal mammary tissue in the primary tumor WSIs. We then trained another CNN model by fine-tuning the first to identify metastatic tumor and normal tissue in the lung sections. Using the predictions from the respective models, we created a tumor viability heatmap for each WSI and quantified the tumor viability in it. We measured the intraclass correlation between the manual tumor viability quantification and the DLM and obtained a coefficient above 0.97 (this agreement computation is sketched below). By providing the clinically relevant outcome parameter of tumor viability, this novel DLM promises to become a standard tool in the assessment of animal tumor models.
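Returning to the survival pipeline, a minimal sketch of the WSFM construction follows, assuming the tile features are ordered row-major on the slide's tile grid. The number of retained components is an illustrative choice, and in practice the PCA would likely be fitted across the whole training cohort rather than per slide.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_wsfm(tile_feats: np.ndarray, grid_rows: int, grid_cols: int,
               n_components: int = 3) -> np.ndarray:
    """Compress tile features with PCA and arrange them on the tile grid.

    tile_feats: (grid_rows * grid_cols, 2048) features in row-major tile order.
    Returns a (grid_rows, grid_cols, n_components) whole slide feature map;
    n_components=3 is illustrative, not the dissertation's value.
    """
    reduced = PCA(n_components=n_components).fit_transform(tile_feats)
    # Placing each tile's reduced feature vector back at its grid position
    # preserves the spatial adjacency of tiles in the original WSI.
    return reduced.reshape(grid_rows, grid_cols, n_components)
```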
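The exact form of the modified pairwise ranking loss with a bounded inverse term is not given in the abstract, so the following is a generic hinge-style pairwise ranking loss for survival scores, a plain stand-in rather than the dissertation's formulation.

```python
import tensorflow as tf

def pairwise_ranking_loss(risk_i: tf.Tensor, risk_j: tf.Tensor,
                          margin: float = 1.0) -> tf.Tensor:
    """Generic pairwise ranking loss for a Siamese survival network.

    Assumes patient i experienced the event before patient j, so the
    predicted risk score risk_i should exceed risk_j by at least `margin`.
    This hinge form is a stand-in; the dissertation's modified loss with a
    bounded inverse term is not specified in the abstract.
    """
    return tf.reduce_mean(tf.nn.relu(margin - (risk_i - risk_j)))
```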
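Lastly, for the viability study, the agreement between the manual readings and the DLM could be checked with an intraclass correlation, for example via pingouin; the data layout, column names, and values here are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical layout: one viability estimate per WSI from each "rater"
# (the manual reading and the DLM); numbers are made up for illustration.
df = pd.DataFrame({
    "wsi":       ["s1", "s2", "s3", "s1", "s2", "s3"],
    "rater":     ["manual"] * 3 + ["dlm"] * 3,
    "viability": [0.82, 0.40, 0.65, 0.80, 0.43, 0.63],
})
icc = pg.intraclass_corr(data=df, targets="wsi", raters="rater",
                         ratings="viability")
print(icc[["Type", "ICC"]])  # report the ICC variant appropriate to the design
```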