Browsing by Author "Daescu, Ovidiu"
Now showing 1 - 13 of 13
Item: Approximating the Geometric Edit Distance (2019-05)
Li, Xinyi; Fox, Kyle; Daescu, Ovidiu; Raichel, Benjamin

Edit distance is a measure of similarity between two sequences, such as strings, point sequences, or polygonal curves. Because many matching problems from a variety of areas, such as signal analysis and bioinformatics, need to be solved in a geometric space, the geometric edit distance (GED) has been studied. In this thesis, we focus on approximating the GED between two point sequences. Previous work has proved that there is no O(n^{2−δ}) (δ > 0) exact algorithm unless SETH fails. We present a randomized O(n log² n) time O(√n)-approximation algorithm. We then generalize our result to give a randomized α-approximation algorithm for any α ∈ [1, √n], running in time Õ(n²/α²). Both algorithms are Monte Carlo and return approximately optimal solutions with high probability.

Item: City Guarding and Path Checking: Some Steps Towards Smart Cities (2020-07-09)
Malik, Hemant; Daescu, Ovidiu

With drones and other small unmanned aerial vehicles starting to receive permission to fly within city limits, monitoring the aerial space of big cities is becoming a critical problem that has yet to be addressed. While video cameras are readily available in most cities, their purpose is to guard the streets at ground level. Guarding the aerial space of a city with video cameras is a problem that has so far been largely ignored, even in a limited way, let alone in all three dimensions. In this dissertation, we address several issues that lay a necessary foundation for drone surveillance: 1. City Guarding with Limited Field of View, and 2. Path Checking in ℝ². In the first problem, we present bounds on the number of cameras needed to guard a city's aerial space (roofs, walls, and ground) using cameras with a 180° range of vision (the region in front of the guard), which is common for most commercial cameras. Each camera is placed at the top corner of a building.
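The 180° field-of-view constraint above has a simple geometric core: such a camera sees exactly the closed half-plane in front of it. A minimal sketch (illustrative only; it ignores occlusion by buildings, which the dissertation's bounds must handle):

```python
# Toy half-plane visibility test for a 180-degree camera. A point is in the
# camera's field of view when the vector from the camera to the point has a
# non-negative dot product with the camera's facing direction. Real guarding
# additionally requires an unobstructed line of sight, omitted here.

def in_field_of_view(camera, facing, point):
    """True if `point` lies in the closed half-plane in front of `camera`."""
    dx, dy = point[0] - camera[0], point[1] - camera[1]
    return dx * facing[0] + dy * facing[1] >= 0

# A camera at the origin facing along +x sees the right half-plane.
assert in_field_of_view((0, 0), (1, 0), (5, 3))
assert not in_field_of_view((0, 0), (1, 0), (-1, 0))
```

The same dot-product test extends to 3D cameras mounted at building corners by using three-component vectors.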
We considered the following cases: 1. all buildings are vertical and have a rectangular base, and 2. all buildings are vertical and have an orthogonal base. For each case, we further considered two sub-cases: 1. buildings have an axis-aligned ground base, and 2. buildings have an arbitrary orientation. Unlike previous studies on guarding polygons with holes, a key subproblem we encounter is to guard a simply shaped polygon with holes by placing guards only at the vertices of the holes. We further address the following path-checking problem: given a set S of m disjoint simple polygons in the plane, with a total of n vertices, preprocess them so that for a query consisting of a positive constant c and a simple polygonal path π with k vertices, from a point u to a point v in free space, where k is much smaller than n, one can quickly decide whether π has clearance at least c (that is, no polygonal obstacle lies within distance c of π). To do so, we show how to solve the following related problem: given a set S of m simple polygons in ℝ², preprocess S into a data structure so that the polygon in S closest to a query line segment s can be reported quickly.

Item: Computational Methods for Histopathological Whole Slide Image Analysis of Osteosarcoma (2018-05)
Arunachalam, Harish Babu; 0000-0001-8143-4107 (Arunachalam, HB); Daescu, Ovidiu

Computational image analysis methods have been successfully implemented in many tumor studies to assist pathologists and medical professionals in making informed decisions. Osteosarcoma is one of the most common types of bone cancer in children. Currently, to estimate a patient's response to cancer treatment, pathologists manually evaluate Hematoxylin and Eosin (H&E) stained glass slides. The slides are carefully prepared after a surgical resection to calculate the percentage of tumor necrosis, a useful biomarker.
This process is very time consuming and subject to observer bias, which can affect subsequent treatment procedures. Digital image analysis automates the process, saves time, and provides a more accurate evaluation. However, the size and format of the digital slide images, in conjunction with the heterogeneity of the osteosarcoma tissue regions, make the analysis a challenging task. This research on osteosarcoma focuses on developing image-analysis and machine-learning techniques to successfully predict tumor necrosis in histopathology image datasets (digitized glass slides). The methods use whole slide images (WSIs), high-resolution images consisting of more than 10⁹ pixels and supporting up to 40X magnification. A comprehensive analysis is carried out for efficient necrosis identification by (1) using image processing methods to generate features, (2) performing a comparative evaluation of feature sets, (3) identifying the best automated learner, (4) comparatively evaluating classification approaches, and (5) testing the impact of an extended feature set on learner accuracy. Image tiles at a suitable magnification are generated from the WSIs and normalized to remove color variations. They are segmented to compute color, shape, density, and texture features. The features are grouped into two categories: (1) expert-guided, and (2) automated-tool generated. Expert-guided features represent the properties pathologists observe while evaluating glass slides, and automated-tool generated features represent mainly texture-based properties. A comparative evaluation is performed to understand the significance of each feature category. Both groups of features are combined and used as the input set to train and validate 13 machine-learning models. The best learner was a Support Vector Machine, which was used to perform a comparative evaluation between a three-class and a two-class classification problem.
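Per-tile feature generation of the kind described above can be illustrated with a toy example: a normalized intensity histogram computed from a tile. This is a hypothetical, minimal stand-in; the actual pipeline computes far richer color, shape, density, and texture features from segmented tiles.

```python
# Illustrative sketch: one simple tile feature, a normalized intensity
# histogram. `tile` is a nested list of grayscale pixel values in [0, 256).

def tile_histogram(tile, bins=4, max_val=256):
    counts = [0] * bins
    n = 0
    for row in tile:
        for v in row:
            counts[v * bins // max_val] += 1  # map pixel to its bin
            n += 1
    return [c / n for c in counts]  # normalize so the features sum to 1

tile = [[0, 64], [128, 255]]
hist = tile_histogram(tile)
assert abs(sum(hist) - 1.0) < 1e-9
```

Vectors like this, concatenated across feature families, form the input set fed to the candidate learners.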
An extended feature set is also generated by isolating sub-components of tissue from the image tiles and computing texture properties. A data-visualization step combines the classification results into a tumor-prediction map, from which the percentage of tumor necrosis in a WSI is computed. The results of the above steps lead to the design of a Necrosis Detection and Analysis Software tool, intended to perform end-to-end image analysis of osteosarcoma WSIs and to be used by pathologists in a clinical setting. Two more applications were created as part of this research: an image-tile annotation software and a gross-image annotation software, which help pathologists create datasets for automated learners and compute gross-map areas, respectively. The novel contributions of this research include (1) an automated image-analysis pipeline for osteosarcoma, (2) the creation of tumor-prediction maps from image tiles, (3) the design of an end-to-end necrosis detection tool, and (4) image-tile annotation and gross-image area-computation tools. The outcomes of this research will play a vital role in building novel automated methods for osteosarcoma and will save pathologists valuable time by shortening the time-consuming tumor necrosis estimation process.

Item: Computer Aided Diagnosis Systems for Digital Analysis of Osteosarcoma and Skin Cancer (2019-12)
Mishra, Rashika; 0000-0002-1812-1252 (Mishra, R); Daescu, Ovidiu

Computer-aided detection/diagnosis (CAD) systems assist medical professionals in the interpretation of medical images. Many imaging modalities, such as X-ray, MRI, and ultrasound, already have diagnostic systems that process digital images for regions of interest and compute diagnostic patterns to provide supporting information in the decision-making process for a possible diagnosis. The development of a CAD system is an interdisciplinary process combining computer vision algorithms and models with medical domain knowledge.
Although such systems exist for radiological digital images, the advent of diagnosis systems in other modalities, such as histology and dermoscopy, is very recent. This dissertation focuses on the development of CAD systems for these modalities and includes (1) NAS: Deep Learning-Based Necrosis Assessment System for Osteosarcoma Histology Images, and (2) AlgoDerm: Deep Learning Framework for Skin Lesion Analysis and Tracking. Osteosarcoma is a type of bone cancer in children, with an estimated 400-900 new cases each year in the United States. The current treatment plan for osteosarcoma involves a histopathology analysis after ten weeks of chemotherapy. Pathologists manually evaluate Haematoxylin and Eosin (H&E) stained glass slides to estimate the percentage of tumor necrosis. Determining the extent of tumor necrosis in a patient case provides useful information for treatment outcome and prognosis. The manual process is time-consuming and subject to observer bias, which can affect subsequent treatment procedures. The dissertation proposes a computational diagnosis framework with a custom deep learning model for digital histology images that can support the tumor analysis process, thereby saving time and providing a more objective evaluation. The proposed models achieve high accuracy in the necrosis estimation task and produce a tumor guide map distinguishing viable and necrotic tumor for pathologists. Skin cancer is one of the most prominent skin diseases, with 1 in 5 Americans diagnosed with skin cancer in their lifetime. In the past few years, various computer-aided diagnosis systems have been proposed to facilitate accurate diagnosis of skin cancer, but most are limited to diagnosis from dermoscopic images in a clinical setting, with only a few considering cell phone or clinical photos. Smartphones equipped with applications for analyzing skin lesions can help detect skin cancer early and increase the survival rate.
The second part of this dissertation proposes a cell phone application in which the patient takes pictures of skin lesions at home and has the images interpreted by a specialized CAD system. Such an application aims to save time and money, with only patients who need further investigation through biopsy or cross-examination having to visit a medical office. Successful implementation and validation of this application can also increase the availability of such systems and extend the reach of dermatologists to remote areas. The proposed application shows high accuracy for both dermoscopic and digital images. The research presented in this dissertation focuses on the development of methods for effective representation and classification of regions of interest in histology and dermoscopy datasets using state-of-the-art techniques.

Item: Fully Automated Brain Surgery Planning with Computational Geometry (2019-08)
Yociss, Megan; Daescu, Ovidiu

An application for computer-assisted surgery planning is presented. The application requires a volumetric image annotated with (1) a set of labels representing different structural or functional regions in the image, and (2) one numeric weight per label that provides information about the relative safety of travelling through that region. The application completes a number of preprocessing steps and uses a geometry-based algorithm to generate a list of safe paths through the domain. The paths can then be manually verified using the application's visualization software. The surgery planning application is available at https://github.com/myociss/freesurgery, with simple instructions for installation and use at the same link. This thesis proposes a fully automated brain surgery planning pipeline, including automatic labeling; however, the surgery planning software provided is applicable to any multilabel volumetric data in the required format.
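The safe-path idea above, where each region's weight scales the cost of travelling through it, can be sketched as a weighted shortest-path search. The graph, node names, and weights below are hypothetical stand-ins for the labeled tetrahedral mesh the software actually operates on.

```python
import heapq

# Illustrative sketch: Dijkstra's algorithm where each edge costs its length
# multiplied by the safety weight of the region it crosses (higher weight =
# less safe), so the cheapest path is the "safest" one.

def safest_path(graph, start, goal):
    """graph: {node: [(neighbor, length, region_weight), ...]}"""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, length, weight in graph[u]:
            nd = d + length * weight
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1], dist[goal]

graph = {
    "entry": [("risky", 1.0, 5.0), ("safe", 2.0, 1.0)],
    "risky": [("target", 1.0, 5.0)],
    "safe": [("target", 2.0, 1.0)],
    "target": [],
}
path, cost = safest_path(graph, "entry", "target")
assert path == ["entry", "safe", "target"] and cost == 4.0
```

The longer but lower-weight route wins, which is exactly the trade-off a surgical planner wants to encode.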
The surgery planning software depends on a hybrid C++/Python library to find paths, available at https://github.com/myociss/pathfinder. This library can also be installed independently for use by any Python application that requires the computation of safe paths through a three-dimensional domain represented by a weighted tetrahedral mesh.

Item: Geometric Algorithms for Trajectory Planning and Facility Location Optimization Problems (August 2023)
Teo, Ka Yaw; Dingal, Polimyr; Daescu, Ovidiu; Du, Ding-Zhu; Bereg, Sergey; Guo, Xiaohu; Fox, Emily Kyle

Geometry has long functioned as a bridge between abstract and real-world problems. In the field of computational geometry, we design algorithms and data structures to solve computational problems efficiently by exploiting their intrinsic geometric properties. This dissertation showcases our algorithmic contributions in utilizing computational geometry to solve problems in two distinct subject areas: (i) robot motion planning and (ii) facility location theory. First, we describe geometric algorithms for computing feasible trajectories for an articulated two-segment robotic probe subject to specific motion constraints. We examine generalizations of the trajectory planning problem, in both two and three dimensions, with a fixed or variable end segment. Our algorithmic solutions are non-trivial and exact, as opposed to the approximations and heuristics often employed for complex motion planning problems involving robots with restrictions and high degrees of freedom. The development of these algorithms is primarily driven by the need for precise planning in minimally invasive robotic surgeries. Second, we analyze several variations of facility location optimization problems from a geometric perspective. These problems involve finding the optimal location for a facility, either a line segment or a point, based on its distances from a set of demand points in fixed dimensions.
We study variations such as the discrete and center median line segments, the continuous median line segment, and the medoid (i.e., a discrete point). To solve these problems, we develop new geometric algorithms that are efficient and either exact or approximate with a relative performance guarantee. These optimization problems are considered fundamental in location science and are an integral part of many industrial and data-science applications.

Item: Multimodal Medical Image Analysis Using Machine Learning (2022-05-01)
Agarwal, Saloni; Daescu, Ovidiu; You, Seung M.; Prabhakaran, Balakrishnan; Natarajan, Sriraam; Iyer, Rishabh

With extensive collections of data and evolved medical diagnostic imaging, including digitized histopathology images, computer-aided detection for medical assessment has become feasible. Clinicians and medical professionals can use automated computational models to detect regions of interest and aid in diagnosis. Such models can provide a second opinion at times of uncertainty or be used independently to reduce the load on medical healthcare providers for difficult and time-consuming tasks. The research presented in this dissertation focuses on developing automated systems comprising detection, classification, survival prediction, segmentation, and quantification tasks, using machine learning and deep learning algorithms, for three medical problems. We use multiview images, images combined with clinical information, and multisite images to solve these problems, overcoming the underlying challenges, including limited data and a lack of region annotations. In the first problem, we develop a solution to automatically assess craniosynostosis, a skull deformity. An automatic craniosynostosis detector can diagnose the malformation early, particularly helping care providers with limited craniofacial expertise. We analyze 2D multiview images of healthy controls and infants with craniosynostosis to identify the disease using computer-based classifiers.
First, we develop traditional machine learning (ML) classifiers with feature extraction from the multiview images, and next, we build a Convolutional Neural Network (CNN) based classifier. The ML model has an accuracy of 91.7%, and the CNN model has an accuracy of 90.6%. The ML model performs slightly better than the CNN model, probably owing to the strength of the designed ML features in differentiating craniosynostosis subtypes and the small image dataset available for model development. In the second problem, we classify a common type of cancer occurring in the soft tissue of children, Rhabdomyosarcoma (RMS), into the correct subtype. The subtypes respond to different treatments. Due to the slight differences in the appearance of histopathology images, manual classification is tedious and requires high expertise. We present a machine learning-based pipeline to automatically classify Rhabdomyosarcoma into its three major subtypes using whole slide images (WSIs). We train the model based only on the knowledge of the class associated with each WSI; no manual annotations are used for model development, whereas most related approaches to tumor classification need manual regional or nuclear annotation of the WSI. We first divide a WSI into tiles and predict the class of each tile, then convert the tile-level predictions to WSI-level predictions using thresholding and soft voting. We achieved 94.87% WSI tumor subtype classification accuracy on a large and diverse test dataset. Unlike related work, which uses 20X or 10X magnification for best results, we achieved this accuracy at 5X magnification of the WSI. The benefit of our approach is that training and testing are computationally faster due to the lower image resolution. Next, we address survival prediction by developing a novel survival predictor. Our proposed method comprises two steps: first, the extraction of a whole slide feature map (WSFM), which is then used to build the survival predictor.
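The tile-to-WSI aggregation described above (soft voting with a threshold) can be sketched in a few lines. The class names and the threshold value here are hypothetical, chosen only to illustrate the mechanism.

```python
# Illustrative sketch: soft voting averages per-tile class probabilities,
# and the WSI label is the argmax, accepted only if it clears a confidence
# threshold.

def wsi_label(tile_probs, classes, threshold=0.5):
    n = len(tile_probs)
    avg = [sum(p[i] for p in tile_probs) / n for i in range(len(classes))]
    best = max(range(len(classes)), key=lambda i: avg[i])
    return classes[best] if avg[best] >= threshold else "uncertain"

classes = ["embryonal", "alveolar", "spindle"]
tiles = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
assert wsi_label(tiles, classes) == "embryonal"
```

Averaging probabilities rather than hard tile votes lets confident tiles outweigh ambiguous ones, which is the point of soft voting.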
We divide a WSI into small tile images, then extract features for each tile using a pretrained InceptionV3 model. Next, we reduce the dimension of the features by applying Principal Component Analysis (PCA), obtaining a low-dimensional feature representation for the tiles. We store the tile features as channel information, replacing each tile in place with its PCA-extracted features to form the WSFM. The WSFM carries information about the entire tissue in the WSI and also preserves the adjacency information of the tiles. Using the WSFMs as input, we then build a Siamese survival convolutional neural network (SSCNN), which overcomes the small-dataset problem pervasive in existing methods. The SSCNN uses multivariate clinical features along with the WSFM to predict a survival score. We propose a novel modified pairwise ranking loss with a bounded inverse term to train the SSCNN. The proposed method does not need pixel-level annotations, a notorious bottleneck for such studies, and can be easily adapted to any tumor, being agnostic to other model development parameters such as the number of clusters. Experimental results on two different tumors, RMS and Glioblastoma multiforme (GBM) brain cancer, validate the success of the proposed SSCNN compared to other state-of-the-art survival predictors. Finally, we established a deep learning model (DLM) pipeline to assess tumor viability using WSIs of primary tumors and corresponding lung sections for 130 mice with breast cancer. We developed an InceptionResNetV3 convolutional neural network (CNN) model for detecting viable tumor, necrotic tumor, and normal mammary tissue in the primary tumor WSIs. Then, we trained another CNN model, by fine-tuning the first, to identify metastatic tumor and normal tissue in the lung sections. We created a tumor viability heatmap for each WSI using the predictions from the respective model and quantified the tumor viability in each WSI.
We measured the intraclass correlation between the manual tumor viability quantification and the DLM and obtained a correlation above 0.97. By providing the clinically relevant outcome parameter of tumor viability, this novel DLM promises to become a standard tool in the assessment of animal tumor models.

Item: On the Geometric Separability of Bichromatic Point Sets (2017-08)
Armaselu, Bogdan Andrei; 172508805 (Daescu, O); Daescu, Ovidiu

Consider two sets of points in two- or three-dimensional space: a set R of n "red" points and a set B of m "blue" points. A separator of the point sets R and B is a geometric locus enclosing all red points that contains the fewest possible blue points. In this dissertation, we study the separability of these two point sets using various separators, such as circles, axis-aligned rectangles, and arbitrarily oriented rectangles. If there are infinitely many separators, we consider optimality criteria such as minimizing the radius (for circles) and maximizing the area (for rectangles). We first give an overview of computational geometry and the work related to geometric separability. Then, we study the circular separation problem and present three dynamic data structures that allow insertions and deletions of blue points, as well as reporting an optimal circle after such an insertion or deletion. The first is a unified data structure that supports both insertions and deletions and has near-linear query and update time. The other two data structures have logarithmic query time and near-quadratic update time; one allows only insertions and the other supports only deletions of blue points. These are the first algorithms for the dynamic circular separation problem. After that, we introduce the rectangular separation problem and focus on the axis-aligned case (that is, the target rectangle has to be axis-aligned).
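Scoring a single candidate separator reduces to containment tests: the rectangle must contain every red point, and its quality is the number of blue points it contains. A brute-force sketch (illustrative only; the dissertation's algorithms avoid enumerating candidates this naively):

```python
# Toy evaluation of one axis-aligned candidate rectangle for the red/blue
# separation problem: feasible iff all red points are inside; the score is
# the count of blue points inside (fewer is better).

def contains(rect, p):
    (x1, y1), (x2, y2) = rect  # lower-left and upper-right corners
    return x1 <= p[0] <= x2 and y1 <= p[1] <= y2

def blue_count(rect, red, blue):
    if not all(contains(rect, r) for r in red):
        return None  # infeasible: some red point lies outside
    return sum(1 for b in blue if contains(rect, b))

red = [(1, 1), (3, 2)]
blue = [(2, 1), (5, 5)]
assert blue_count(((0, 0), (4, 3)), red, blue) == 1
assert blue_count(((2, 0), (4, 3)), red, blue) is None
```

An optimal separator minimizes this count; the area-maximization criterion then breaks ties among rectangles with the same count.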
We prove that the number of optimal solutions can be Ω(n) in the worst case, present an algorithm with near-linear running time to find one optimal solution, and then prove a matching lower bound for finding one optimal solution. We also introduce a number of extensions of the rectangular separation problem. Specifically, we consider the case where a fixed number of blue points are allowed inside the separating rectangle and the case where the blue points are replaced by axis-aligned rectangles. Finally, we conclude by discussing ongoing work and possible future directions for geometric separability.

Item: Thin-film Schottky Diodes on Softening Polymer Substrates for Radio-frequency Bioelectronics (2021-12-01)
Guerrero Ruiz, Edgar; Daescu, Ovidiu; Voit, Walter E.; Cogan, Stuart; Young, Chadwin D.; Grasse, Dane

The next generation of implantable electronics for biomedical applications must include features that minimize the impact of the chronic inflammatory response, to improve operational lifetime and minimize discomfort in the body. For this reason, eliminating the mechanical mismatch and the effects that wires have on biological tissue could improve the state of present-day implantable electronics. Wireless devices could serve longer lifetimes and reduce the likelihood of complications that often arise in tethered electronics post-implantation. Moreover, by employing flexible substrates for wireless bioelectronics, the inflammatory response could be further mitigated and conformability to biological tissue enhanced. Yet the design and fabrication of wireless electronic components on flexible substrates remains limited, and the flexibility of the devices is often sacrificed in exchange for the performance of rigid, silicon-based devices. Schottky diodes are rectifying electronic components crucial in the development of implantable wireless technology for biomedical applications.
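The rectifying behaviour that makes such diodes useful in RF circuits can be illustrated with the standard ideal diode equation, I = Is·(exp(V/(n·Vt)) − 1). The parameter values below are hypothetical, not measurements from this work.

```python
import math

# Illustrative sketch of diode rectification via the ideal diode equation.
# i_s: saturation current, n: ideality factor, vt: thermal voltage (~26 mV
# at room temperature). All values here are made up for illustration.

def diode_current(v, i_s=1e-8, n=1.05, vt=0.02585):
    return i_s * (math.exp(v / (n * vt)) - 1.0)

# Forward bias conducts; reverse bias blocks almost all current.
assert diode_current(0.3) > 1e-4
assert abs(diode_current(-0.3)) < 1e-7
assert diode_current(0.0) == 0.0
```

This strong asymmetry between forward and reverse current is what lets a diode rectify an RF signal into usable DC, the role the fabricated devices play in wireless bioelectronics.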
In this work, we developed Schottky diodes on novel stimuli-responsive flexible substrates. By incorporating Schottky diode technology on novel flexible substrates that soften in response to temperature and moisture, we explore the tradeoffs between flexibility, conformability, and performance in electronic components for wireless technology. This work could pave the way for the next generation of soft, flexible electronics for biomedical applications.

Item: Towards AI and Hardware Synergy (December 2023)
Arunachalam, Ayush; Daescu, Ovidiu; Basu, Kanad; Banerjee, Parag; Prasad, Shalini; Hansen, John H.L.; Tamil, Lakshman

This research explores the symbiotic relationship between Artificial Intelligence (AI) and hardware. In this dissertation, we consider two research thrusts: (i) AI for hardware and (ii) hardware for AI. In recent years, there has been widespread adoption of custom hardware-based AI solutions to solve a plethora of real-world problems. For instance, researchers have proposed incorporating AI into numerous mission-critical applications, especially in high-assurance environments. To this end, in the first research thrust, we focus on developing AI techniques catered to hardware. Specifically, we propose novel low-latency, high-fidelity AI workloads to ensure the reliability of automotive hardware. The second research thrust, on the other hand, concerns specialized hardware for AI. Despite the ubiquitous use of AI solutions in real-world applications such as facial recognition and autonomous vehicles, their deployment on hardware is often inefficient, especially on resource-constrained platforms. Therefore, the second research thrust aims to facilitate the efficient deployment of AI workloads on dedicated hardware platforms. To this end, we have formulated two main problems in this research, which are explained in detail below.
The first aspect of this dissertation focuses on ensuring the Functional Safety (FuSa) of automotive systems. With the increasing prevalence of safety-critical applications in the automotive domain, it is imperative to ensure the FuSa of circuits and components within the associated systems, which are predominantly Analog and Mixed-Signal (AMS) circuits. However, existing AI-based AMS FuSa violation detection solutions are limited by predefined feature inputs and lack a rationale for determining which signals to monitor for anomaly detection. To address these challenges, we propose a novel unsupervised Machine Learning (ML)-based framework for early anomaly detection in automotive AMS circuits. Our approach comprises the injection of anomalies into automotive AMS circuits to generate a comprehensive anomaly model, together with novel centroid selection and time-series methodologies for expedited, high-fidelity anomaly detection. The proposed anomaly detection framework furnishes up to 100% detection accuracy and reduces the associated latency by 5× (compared to the non-time-series approach). Subsequently, we augment our existing solution via novel feature and signal selection techniques, as well as an Explainable AI (XAI) framework for enhanced user interpretability and transparency. We achieve up to a 7.2% improvement in detection accuracy and a 2.3× reduction in detection latency over our prior approach. Following this, we perform anomaly abstraction to study the impact of anomalies across multiple abstraction levels in automotive systems, achieving high-fidelity anomaly detection (up to 100% detection accuracy) in both component-level and block-level implementations. Moreover, since we aim to deploy our AI solution on-chip for in-field applications, it is imperative to enable efficient resource utilization for real-world applicability. This necessitates real-time and low-power AI workload deployment, which we describe in our second research thrust.
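The centroid-based flavor of unsupervised detection mentioned above can be caricatured in a few lines: model "normal" windows by their centroid and flag any window whose distance exceeds a threshold learned from normal data alone. This is an illustrative sketch, not the dissertation's framework, which adds anomaly injection, signal selection, and time-series methods.

```python
# Toy unsupervised anomaly detector: fit a centroid and a distance
# threshold on normal windows only, then flag windows that fall outside.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def fit(normal_windows):
    dim = len(normal_windows[0])
    n = len(normal_windows)
    centroid = [sum(w[i] for w in normal_windows) / n for i in range(dim)]
    threshold = max(dist(w, centroid) for w in normal_windows) * 1.1  # 10% slack
    return centroid, threshold

def is_anomalous(window, centroid, threshold):
    return dist(window, centroid) > threshold

normal = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
centroid, thr = fit(normal)
assert not is_anomalous([1.0, 2.0], centroid, thr)
assert is_anomalous([5.0, 9.0], centroid, thr)
```

No anomalous examples are needed at training time, which is the defining property of the unsupervised setting.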
The second aspect of this dissertation pertains to Application-Specific Integrated Circuits (ASICs), such as Deep Neural Network (DNN) hardware accelerators, where we optimize the energy efficiency of DNN inference. The proliferation of DNNs in recent years has led to their widespread application to a myriad of real-world challenges. However, due to their significant computational and power requirements, specialized hardware platforms, such as DNN accelerators, have been developed. Despite these advancements, DNN inference execution is associated with energy bottlenecks in these resource-constrained accelerators. To address these issues, we first propose a novel low-power hardware-based memory compression solution catered to commercial DNN accelerators. Our approach, which optimizes the memory subsystem of deep learning systems, involves hardware-based post-quantization weight trimming, followed by dictionary-based compression, and subsequent decompression by a low-power hardware engine during inference in the accelerator. Our technique furnishes up to a 28571× reduction in memory footprint, while incurring negligible area and power decompression overheads of around 0.02% and 0.002%, respectively. Following this, we propose a novel sensor compression solution designed to optimize the energy efficiency of DNN sensor subsystems. The proposed approach employs two steps: subsampling, followed by supersampling of the sensor images through interpolation. Furthermore, we develop a fault injection framework to assess the resilience of DNNs (with compressed sensor inputs) to bit-flip faults in the DNN accelerator memory. Our solution furnishes up to 62.1% energy savings with a marginal performance degradation of 0.83%.
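The trim-then-dictionary idea behind the memory compression described above can be sketched as follows. The quantization step size and weight values are hypothetical; the real solution does this in low-power hardware.

```python
# Illustrative sketch: trim weights to a coarse grid (post-quantization
# trimming), then replace them with indices into a small codebook
# (dictionary-based compression). Decompression is a codebook lookup.

def compress(weights, step=0.25):
    trimmed = [round(w / step) * step for w in weights]  # snap to grid
    codebook = sorted(set(trimmed))                      # few unique values
    indices = [codebook.index(w) for w in trimmed]       # small-int indices
    return codebook, indices

def decompress(codebook, indices):
    return [codebook[i] for i in indices]

weights = [0.26, 0.24, 0.74, 0.77, 0.26, 0.49]
codebook, indices = compress(weights)
assert decompress(codebook, indices) == [0.25, 0.25, 0.75, 0.75, 0.25, 0.5]
assert len(codebook) == 3  # many weights share few codebook entries
```

The memory win comes from storing one codebook plus narrow indices instead of full-width weights; the approximation error is bounded by half the trimming step.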
Furthermore, from our results, we infer that DNN accelerators suffer up to a 21.56% loss in classification accuracy for compressed sensor inputs, rendering them highly vulnerable to bit-flip faults manifested in their memory blocks. Therefore, by optimizing both the memory and sensor subsystems, we seek to enhance the overall efficiency and performance of deep learning systems, particularly in resource-constrained environments. In conclusion, this dissertation proposes pioneering approaches to achieving synergy between AI and hardware, with the objective of improving the performance, safety, and power efficiency of systems situated at the confluence of the two. This research describes novel approaches to the challenges of automotive AMS functional safety and low-power DNN implementation in resource-constrained IoT edge devices, offers valuable contributions to the AI and hardware research communities, and can foster the development of more robust and efficient systems.

Item: Uncertain Inputs for Convex Hulls and Clustering (2022-05-01)
Huang, Hongyao; Raichel, Benjamin; Ryu, Ill; Bereg, Sergey; Daescu, Ovidiu; Fox, Kyle

Geometric algorithms and inputs have received increasing attention with the explosion of data and the computing challenges that arise from real-world applications. Real-world data is often uncertain in nature, in either the location or the existence of the data points. However, many classical computational geometry algorithms assume inputs to be precise. The inherent presence of uncertainty in real data thus motivates the further exploration of classical geometric problems modeled with uncertain inputs. This dissertation considers two of the most fundamental computational geometry problems, namely convex hulls and clustering, when the inputs are uncertain.
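For reference, the classical precise-input convex hull that these uncertain variants generalize can be computed in O(n log n) time with Andrew's monotone chain:

```python
# Andrew's monotone chain convex hull: sort points, then build the lower
# and upper chains, popping vertices that would make a non-left turn.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # hull vertices in counter-clockwise order

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
assert sorted(hull) == [(0, 0), (0, 2), (2, 0), (2, 2)]  # interior point dropped
```

In the uncertain setting studied here, each input is a region (e.g., a segment) rather than a point, and one asks, for instance, for the realization whose hull has the fewest vertices, which is what makes the problem hard.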
We consider two different ways to model uncertainty: (i) uncertainty in location, where an uncertain point set is a collection of compact regions in the plane, and (ii) a probabilistic framework modeling the existence of each point in the input point set. First, we study the complexity of the convex hull when the uncertain input points are modeled as a set of compact subsets, namely line segments. Here we seek the realization of the points whose convex hull has the fewest vertices. Next, we explore the classic k-center clustering problem when the uncertain input points are a set of convex objects, for which we present several results. Finally, the last part of this dissertation concerns the k-center clustering problem with probabilistic centers, where each cluster center has a probability of failure. In presenting geometric properties, algorithms, and hardness results for convex hulls and clustering, this dissertation aims to give a better understanding of fundamental geometric problems with uncertain inputs.

Item: Unsupervised Driving Anomaly Detection in Naturalistic Driving Scenarios (August 2022)
Qiu, Yuning; Busso, Carlos; Daescu, Ovidiu; Hansen, John H.L.; Kehtarnavaz, Nasser; Misu, Teruhisa

New developments in advanced driver assistance systems (ADAS) can help drivers deal with risky driving maneuvers, preventing potential hazard scenarios. A key challenge in these systems is determining when to intervene. While there are situations where the need for intervention or feedback is clear (e.g., lane departure), it is often difficult to identify scenarios that deviate from normal driving conditions. These scenarios can arise from driver errors, the presence of pedestrians or bicycles, or maneuvers by other vehicles. We formulate this problem as driving anomaly detection, where the goal is to automatically identify cases that require intervention.
We aim to create unsupervised multimodal solutions that do not depend on predefined rules or on hyperplanes learned from labeled data describing a few target events. This model should recognize anomalous driving scenarios even if similar scenarios were never observed in the training data. Toward this goal, this dissertation focuses on three main transformative goals: (a) to build robust unsupervised methods for driving anomaly detection, (b) to make the approach scalable so multiple modalities can be easily added, and (c) to make the approach interpretable so it is intuitive to understand why a given segment is detected as anomalous. Our first aim is to build robust unsupervised methods for driving anomaly detection. We address this goal by proposing a novel conditional generative adversarial network (GAN) where the models are constrained by the signals previously observed. The difference between the discriminator scores for the predicted and actual signals is used as a metric for detecting driving anomalies. Our model considers (1) physiological signals from the driver and (2) vehicle information obtained from the controller area network (CAN) bus sensor. The original model was implemented with fully connected layers and hand-crafted features from the physiological and CAN-Bus signals. This model was improved with two important changes. First, we explore an end-to-end solution that extracts feature representations directly from the data using convolutional neural networks (CNNs). This model also leverages temporal information using long short-term memory (LSTM) units. Second, we improve the anomaly score using a triplet-loss function to further contrast the predicted and actual signals. The triplet-loss function creates an unsupervised framework that rewards predictions closer to the actual signals and penalizes predictions deviating from the expected signals.
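The scoring idea described above — rewarding predictions that track the observed signals and flagging segments where they diverge — can be illustrated with a minimal NumPy sketch. This is a hedged toy example, not the dissertation's GAN implementation; all function names and the margin value are illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors: pull the
    positive (prediction close to the actual signal) toward the
    anchor and push the negative (deviating prediction) away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def anomaly_score(actual, predicted):
    """Score a driving segment by how far the model's prediction of
    the signals deviates from what was actually observed."""
    return float(np.linalg.norm(actual - predicted))

# Toy example: a segment whose signals match the prediction well
# (normal driving) versus one that deviates sharply (candidate anomaly).
actual = np.array([0.9, 1.1, 1.0])
normal_pred = np.array([1.0, 1.0, 1.0])
odd_pred = np.array([3.0, -2.0, 4.0])
assert anomaly_score(actual, normal_pred) < anomaly_score(actual, odd_pred)
```

In the actual system the embeddings come from the CNN-LSTM feature extractors and the score is computed in the discriminator's space; the sketch only shows the contrast-and-threshold principle.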
This approach maximizes the discriminative power of the feature embeddings to detect anomalies, leading to measurable improvements over the results of our previous approach implemented with fully connected layers. The second aim is to make the driving anomaly detection approach scalable so multiple modalities can be easily added. This is important because individual modalities have limitations. For example, by considering only the vehicle CAN-Bus data and the driver's physiological data, our proposed approach can only detect abnormal driving scenarios when the driver reacts to the driving environment. If a driver fails to notice an abnormal driving scenario, these signals will not change and our driving anomaly scores will fail to capture the event. A model should be scalable, so we can incorporate other modalities that, for example, describe environmental information. Our proposed approach trains a conditional GAN on latent features extracted from each modality by independently pre-trained models. An attention mechanism combines the latent representations from the modalities. The entire framework is trained with the triplet-loss function to generate effective representations that discriminate normal from abnormal driving segments. This approach is implemented with five different modalities (the vehicle's CAN-Bus signals, the driver's physiological signals, and the distances to nearby pedestrians, vehicles, and bicycles), achieving improved performance over alternative approaches. The third aim is to make the approach interpretable so it is possible to understand why a given segment is detected as anomalous. We address this goal with two alternative approaches. The first is an example-based query algorithm that combines the aforementioned attention-based conditional GAN model with the multi-label k-nearest neighbors (ML-KNN) algorithm.
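The attention-based fusion of per-modality latent features can be sketched as a softmax-weighted combination of embedding vectors. This is an illustrative simplification under stated assumptions (fixed relevance scores, tiny embedding dimension); the dissertation's attention weights are learned jointly with the rest of the network.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(latents, scores):
    """Combine per-modality latent vectors into a single representation,
    weighting each modality by a softmax over its relevance score.
    `latents`: (num_modalities, dim); `scores`: (num_modalities,)."""
    weights = softmax(np.asarray(scores, dtype=float))
    return weights @ np.asarray(latents, dtype=float)

# Five modalities as in the dissertation (CAN-Bus, physiology, and
# pedestrian/vehicle/bicycle distances), each embedded in 4-D here.
latents = np.random.default_rng(0).normal(size=(5, 4))
fused = attention_fuse(latents, scores=[2.0, 0.5, 0.1, 0.1, 0.1])
assert fused.shape == (4,)
```

The key design property is that adding a sixth modality only appends one row of latents and one score, which is what makes the attention formulation scalable.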
Our approach relies on a few manually labeled driving segments that are efficiently used as anchors to retrieve the causes of driving anomalies in a given driving segment. These anchors are projected into the embedding created by the unsupervised driving anomaly detection system, providing an ideal space in which to compare an anomalous driving segment detected by the system with the anchors. The second alternative framework is an unsupervised approach based on contrastive multiview coding (CMC) that captures the correlations between representations extracted from different modalities. The approach learns a more discriminative representation space for unsupervised driving anomaly detection. We use CMC to train our model to extract view-invariant factors by maximizing the mutual information between representations from the same segment and increasing the distance between views from unrelated segments. The approach is efficient, scalable, and interpretable, as the distances in the contrastive embedding for each view can be used to understand potential causes of the detected anomalies. The proposed solutions are trained and evaluated with 130 hours of naturalistic data manually annotated with driving events. The results demonstrate the benefits of the proposed solutions.
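The CMC objective described above — pulling together views of the same segment and pushing apart views from unrelated segments — is typically realized with an InfoNCE-style contrastive loss. The following NumPy sketch is a hedged illustration of that loss on random vectors, not the dissertation's training code; dimensions, temperature, and names are assumptions.

```python
import numpy as np

def info_nce(view_a, view_b, temperature=0.1):
    """Contrastive (InfoNCE-style) loss between two views of the same
    batch of segments: row i of each matrix encodes one segment.
    Matching rows are positives; all other pairings are negatives."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positives sit on the diagonal

# Correctly aligned views should incur a lower loss than views that
# have been paired with unrelated segments.
rng = np.random.default_rng(1)
v = rng.normal(size=(8, 16))
aligned = info_nce(v, v)
shuffled = info_nce(v, v[::-1])
assert aligned < shuffled
```

Because the loss operates on the normalized embeddings themselves, per-view distances in that space remain available afterward, which is what makes the detected anomalies inspectable.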
Collectively, these advances represent transformative contributions toward building scalable, interpretable, and discriminative algorithms to identify anomalous driving events.

Item Viable and Necrotic Tumor Assessment from Whole Slide Images of Osteosarcoma Using Machine-Learning and Deep-Learning Models(Public Library of Science) Arunachalam, Harish Babu; Mishra, Rashika; Daescu, Ovidiu; Cederberg, K.; Rakheja, D.; Sengupta, A.; Leonard, D.; Hallac, R.; Leavey, P.; 0000-0001-8143-4107 (Arunachalam, HB); 0000-0002-0278-4174 (Daescu, O); 172508805 (Daescu, O)

Pathological estimation of tumor necrosis after chemotherapy is essential for patients with osteosarcoma. This study reports the first fully automated tool to assess viable and necrotic tumor in osteosarcoma, employing advances in histopathology digitization and automated learning. We selected 40 digitized whole slide images representing the heterogeneity of osteosarcoma and chemotherapy response. With the goal of labeling the diverse regions of the digitized tissue as viable tumor, necrotic tumor, or non-tumor, we trained 13 machine-learning models and selected the top-performing one (a support vector machine) based on reported accuracy. We also developed a deep-learning architecture and trained it on the same data set. We computed the receiver operating characteristic for discriminating non-tumor from tumor, followed by conditional discrimination of necrotic from viable tumor, and found our models to perform exceptionally well. We then used the trained models to identify regions of interest on image tiles generated from test whole slide images. The classification output is visualized as a tumor-prediction map, displaying the extent of viable and necrotic tumor in the slide image. Thus, we lay the foundation for a complete tumor assessment pipeline, from original histology images to tumor-prediction map generation.
The proposed pipeline can also be adapted for other tumor types. ©2019 The Authors
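The tile-to-map step in the pipeline above — turning per-tile classifier outputs into a slide-level tumor-prediction map — can be sketched in a few lines. This is an illustrative toy with stubbed classifier probabilities, not the study's code; the label names follow the paper, but the grid, functions, and values are hypothetical.

```python
import numpy as np

# The three region labels used in the study.
LABELS = ["non-tumor", "necrotic", "viable"]

def prediction_map(tile_probs):
    """Turn per-tile class probabilities of shape (rows, cols, 3)
    into a label map by taking the most probable class per tile,
    mirroring how per-tile outputs become a slide overlay."""
    return np.argmax(tile_probs, axis=-1)

def tumor_extent(label_map):
    """Fraction of tiles assigned to each label, summarizing the
    extent of viable and necrotic tumor across the slide."""
    return {LABELS[k]: float(np.mean(label_map == k)) for k in range(3)}

# A 2x2 grid of tiles with stubbed classifier probabilities.
probs = np.array([[[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]],
                  [[0.1, 0.2, 0.7], [0.1, 0.1, 0.8]]])
m = prediction_map(probs)
assert tumor_extent(m)["viable"] == 0.5
```

In practice the per-tile probabilities would come from the trained SVM or deep network, and the label map would be rendered as a color overlay on the whole slide image.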