Browsing by Author "Tamil, Lakshman"
Item An Element Management System for a Telehealth Remote Patient Monitoring Environment (2019-12) Thomas, John Wesley; 0000-0001-6258-8071 (Thomas, JW); Tamil, Lakshman
One of the major service areas of telemedicine, also referred to as telehealth, is remote patient monitoring. Among the concerns raised by the medical equipment used by remote patients are the management of initialization, usage, safety, security, calibration, and reliability. It is well documented that, due to the unique usage patterns and data sensitivity of remote telehealth devices, traditional network management protocols may not be the most efficient choice throughout the entire management plane of the telemedicine environment. Many examples of remote management are not tailored to the unique challenges posed by medical data and networked medical equipment, and the majority of solutions available today are proprietary and lack open standardization. This dissertation proposes a structured element management system for remote telemedicine devices, named the Telehealth Element Management System (TEMS), which alleviates many of the above-mentioned concerns while fully leveraging open network management standards and supporting the overall growth of the telemedicine market. The dissertation is organized as follows: Introduction and Literature Review, followed by the TEMS Architecture, Functionality, and Implementation.
A summary of the dissertation is provided in the Conclusion.

Item An Internet of Things Platform for Improved Water Management Using Underground Soil Moisture Sensing (2021-05-01) Arjona Angarita, Ricardo Javier; Fumagalli, Andrea; Fei, Baofei; Razo-Razo, Miguel; Tacca, Marco; Tamil, Lakshman; Faragó, András
Efficient use of water resources is becoming of paramount importance in agriculture due to their scarcity and the less predictable availability caused by climate change. The profitability of traditional farming methods in meeting the food demands of an increasing population has been negatively affected, requiring efficient irrigation systems and water management practices enabled by technology. In this thesis, a cost-effective Internet of Things (IoT) platform that incorporates underground soil moisture sensing is presented, with the aim of increasing the penetration of applied technologies in the farming market. The platform features a Sub-1 GHz IEEE 802.15.4g-based wireless sensor network concentrator (WSNC) with an LTE backhaul that provides Internet connectivity toward a cloud server in rural areas. The WSNC connects sensor nodes to the collector node over a wireless link in a star topology. Each sensor node is enhanced with a helical antenna designed specifically for underground operation, along with a power amplifier to compensate for signal attenuation in the soil-air path to the WSNC. Based on the number of collectors and physical layers supported, the implemented WSNC offers three configurations: Single Collector (SC), Multi Collector (MC), and MC - Multi Rate (MR). The SC-WSNC supports a total of 50 sensor nodes, whereas the MC-WSNC can support up to 200 devices by hosting several independent Wireless Sensor Networks (WSNs), each operating on its own frequency channel.
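The goal of spreading sensor nodes evenly across the MC-WSNC's collectors can be illustrated with a minimal least-loaded assignment sketch. The function, channel names, and capacity below are hypothetical; the thesis's actual load balancing and handover logic is more involved:

```python
# Illustrative least-loaded assignment of sensor nodes to collectors.
# Hypothetical sketch only; not the thesis's actual algorithm.

def assign_nodes(node_ids, collectors, capacity=50):
    """Assign each node to the collector currently holding the fewest nodes."""
    load = {c: [] for c in collectors}
    for node in node_ids:
        target = min(load, key=lambda c: len(load[c]))
        if len(load[target]) >= capacity:
            raise RuntimeError("all collectors are full")
        load[target].append(node)
    return load

# 200 nodes spread across four collectors on distinct channels
placement = assign_nodes(range(200), ["ch1", "ch2", "ch3", "ch4"])
```

A real deployment would also rebalance at runtime, handing sensors over between collectors as nodes join or leave.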
To improve system performance, a load balancing algorithm and a sensor handover mechanism are developed for the MC-WSNC to uniformly distribute the aggregated sensor nodes across the available collectors. The MR capability added to the MC-WSNC and the sensor nodes dynamically optimizes the energy consumption and radio link margin of the sensor nodes for improved battery lifetime and connection reliability. The SC-WSNC has been experimentally evaluated in terms of coverage range in aboveground and underground scenarios, with a detailed end-to-end delay characterization using state-of-the-art tools in every network segment. The results reveal the limitations of the system in covering large farming areas, due to both the high attenuation in the combined physical media and the limited number of sensor nodes that can be attached to one collector. In contrast, the MC-WSNC is evaluated using a testbed consisting of up to four co-located collectors and fifty sensor nodes. The performance evaluation is carried out under race conditions in the WSNs to emulate highly dense networks with different network sizes and channel gaps. The experimental results show that the MC-WSNC proportionally scales up the capacity of the network and reduces both the energy consumption and the packet error rate of the sensor nodes. The MR feature, implemented as a physical-layer switch at the sensor nodes, further reduces overall network power consumption and increases network throughput while accounting for varying radio link conditions.

Item Building Resiliency Into 5G Open-source and Disaggregated Architecture (December 2023) Ramanathan, Shunmugapriya 1982-; Kantarcioglu, Murat; Fumagalli, Andrea; Tamil, Lakshman; Razo-Razo, Miguel; Tacca, Marco
Today, in the Internet era, communication service providers face tremendous constraints from increasing capital expenditures and operating expenses compared with much slower income growth.
Cloud Radio Access Network (C-RAN) architecture has emerged as a potential candidate for the future wireless network. It highlights the notions of the service cloud and service-oriented resource scheduling and management, thereby facilitating the use of both Network Functions Virtualization and Software-Defined Networking (NFV-SDN) technologies. The transport network reliability of the disaggregated C-RAN components is paramount to ensuring reliable data communication. Our first contribution focuses on providing transport network resiliency for the C-RAN architecture using a programmable optical software-defined network testbed. The testbed supports C-RAN functionalities by offering fronthaul, midhaul, and backhaul transport capabilities with increased reliability. The C-RAN components are further disaggregated and run as either virtual machines (VMs) or containers in a virtualized environment. To ensure load balancing and fault tolerance of the C-RAN components, our programmable optical testbed with SDN capabilities supports live migration of C-RAN functions among data centers. OS container-based virtualization enables faster application instantiation than hypervisor-based VMs because of its smaller footprint. However, in the context of the mobile network protocol stack, open-source container migration software has yet to be developed to its full extent. Our second and third contributions focus on the live migration of containerized core network and RAN central unit virtual functions. The live migration is made feasible through our proof-of-concept implementation of open-source container migration software. In the C-RAN architecture, the Next Generation NodeB (gNB) functions are decoupled into three entities, namely the Radio Unit (RU), Distributed Unit (DU), and Central Unit (CU). These entities will likely be virtualized and distributed in micro and macro data centers.
The virtualized CUs (vCUs) are decoupled further into a virtualized CU Control-Plane (vCU-CP) and a virtualized CU User-Plane (vCU-UP) to optimize the location of the RAN functions for 5G vertical use-case scenarios and performance requirements. The vCU-CP handles signaling functionality, such as connection establishment and handover. All the 5G Core Network control-plane modules have a single point of contact with the vCU-CP. Therefore, a study of vCU-CP resiliency is important to avoid this single point of failure, which is the focus of our fourth contribution. Our proof-of-concept guaranteed the fronthaul reliability of the 5G transport network, and during VNF live migration it ensured end-user service continuity without permanent UE interruption. In addition, the temporary downtime experienced during live migration is lowered by more than 50% when using our container migration prototype compared with traditional VM solutions.

Item Development of Indium Gallium Zinc Oxide Thin Film Transistors on a Softening Shape Memory Polymer for Implantable Neural Interfaces Devices (2019-12) Rodriguez Lopez, Ovidio; Tamil, Lakshman; Voit, Walter
The continuous improvement of active electronic devices has led to several innovations in semiconductor materials, novel deposition methods, and improved microfabrication techniques. Likewise, the implementation of thin-film technology has revolutionized the semiconductor industry. For instance, the field of flexible electronics has utilized novel thin-film electronic components for the fabrication of flexible displays, radio frequency identification (RFID) tags, and solar cells. Moreover, flexible electronics have sparked great interest in the field of bioelectronics for the fabrication of high-spatial-resolution implantable devices for neural interfaces.
This incorporation of thin-film technology can potentially enable stimulation and recording of nervous system activity through novel, minimally invasive, conformal devices. To achieve this, flexible electronic circuits must offer high performance, reliability, and stability, and must be resilient to mechanical stress and the conditions of the human body. Furthermore, the choice of substrate is critical, since it directly affects the final properties of the active devices; substrates that are mechanically and biologically compliant are preferred. For this reason, novel softening materials such as thiol-ene polymers are considered in this research. This work centers on the development of Indium-Gallium-Zinc-Oxide (IGZO) thin-film transistors (TFTs) using a thiol-ene softening polymer as the substrate. Functional IGZO-TFTs were fabricated on top of 50 µm of a thiol-ene/acrylate shape memory polymer (SMP) and electrically characterized. Hafnium oxide (HfO₂) deposited at 100°C by atomic layer deposition was used as the gate dielectric, and gold (Au) as the contacts. The devices were exposed to oxygen, vacuum, and forming gas (FG) environments at 250°C to analyze the effects of these atmospheres on the IGZO-TFTs. Improved electrical performance was observed after exposure to FG, with a significant change in mobility from 0.01 to 30 cm² V⁻¹ s⁻¹ and a reduction in the threshold voltage shift (∆Vₜₕ), which translates into increased stability. The effects of vacuum and oxygen were also analyzed and compared. Furthermore, a time-dependent dielectric breakdown (TDDB) analysis was performed to estimate the lifetime of the transistors, yielding a prediction of 10 years at an operational range below 5 V. Additionally, the TFTs were encapsulated with 5 µm of SMP and exposed to simulated in vivo conditions.
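TDDB lifetime predictions of the kind above are typically obtained by stressing devices at elevated voltages and extrapolating to the operating voltage. As a hedged illustration only (the stress points, model, and constants below are invented, not the dissertation's data), a simple voltage-driven exponential model looks like:

```python
import math

# Illustrative V-model TDDB extrapolation: TTF = t0 * exp(-gamma * V).
# All numbers below are made up for illustration.

def fit_v_model(v1, t1, v2, t2):
    """Fit gamma and t0 from two (stress voltage, time-to-breakdown) points."""
    gamma = math.log(t1 / t2) / (v2 - v1)
    t0 = t1 * math.exp(gamma * v1)
    return t0, gamma

# Hypothetical accelerated-stress measurements: 100 s at 8 V, 1 s at 10 V
t0, gamma = fit_v_model(8.0, 1e2, 10.0, 1e0)
ttf_5v = t0 * math.exp(-gamma * 5.0)  # extrapolated lifetime at 5 V operation
```

With these toy numbers the extrapolated lifetime at 5 V is 1e5 seconds; real analyses fit many devices per stress condition and report a statistical lifetime.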
Up to 10⁴ bending cycles were performed on the IGZO-TFTs with a bending radius of 5 mm, and the devices were then soaked in PBS solution at 37°C for one week to determine their resilience and reliability. The encapsulated IGZO-TFTs survived the PBS environment and demonstrated resilience to mechanical deformation with only small changes in their electronic properties. The results of this research contribute to the development of complex circuitry based on thin-film devices using mechanically adaptive polymers as a flexible substrate, and enable the production of multichannel implantable bioelectronic devices.

Item Efficient Machine Learning Algorithms for One- and Two-dimensional Biomedical Signals (December 2023) Tiryaki, Erhan 1989-; Zalila-Wenkstern, Rym; Tamil, Lakshman; Nourani, Mehrdad; Fumagalli, Andrea; Brown, Katherine
Data analysis plays a crucial role in healthcare when it comes to diagnosing and detecting illnesses and medical conditions. Thanks to advancements in computing and machine learning, healthcare professionals can leverage this technology to their advantage. A vast amount of biomedical data is available, ranging from patient records to medical imaging, genomic sequencing, and clinical trial results. By analyzing this data with the help of machine learning, healthcare professionals can gain valuable insights that could lead to more accurate diagnoses, better treatment options, and improved health outcomes for patients; any discovery made through such analysis has the potential to enhance quality of life. This dissertation presents a successful method for detecting life-threatening ventricular arrhythmias, namely ventricular tachycardia, ventricular fibrillation, and ventricular flutter, using machine learning algorithms. The method leverages various statistical features and is capable of detecting these arrhythmias over different ECG signal durations.
Our method can efficiently differentiate ventricular tachycardia/fibrillation/flutter (VTFL) from normal sinus rhythm (NSR), with an accuracy, recall, and positive predictive value of 98.21%, 95.57%, and 98.61%, respectively. The discriminatory power of the same algorithm between VTFL and non-VTFL, as characterized by accuracy, recall, and positive predictive value, is 98.12%, 93.29%, and 97.2%, respectively. This dissertation also proposes a novel way to identify ST segment depression from a single ECG lead. The proposed method transforms one-dimensional ECG signals into two-dimensional images to efficiently detect ST segment depression, a key indicator of myocardial ischemia. The generalized (subject-independent) algorithm, analyzed using a convolutional neural network with 8, 16, and 32 filters in its consecutive convolutional layers, yielded, for ECG segments of 40 beats, a sensitivity, specificity, and accuracy of 91.18%, 98.79%, and 95.51%, respectively. A personalized (subject-dependent) algorithm built on the same CNN architecture yielded a sensitivity, specificity, and precision of 97.04%, 99.72%, and 98.90%, respectively. In addition to one-dimensional ECG signals, this dissertation explores the use of ultrasound images in combination with Generative Adversarial Networks (GANs) for data augmentation to enhance breast cancer detection. We achieved an accuracy of 90.2%, higher than any result reported for a single model. By employing a GAN-based approach to augment the data, the detection process can be considerably improved, resulting in greater performance when identifying the disease in ultrasound breast images.
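The one-dimensional-to-two-dimensional transformation mentioned above can take many forms; as a deliberately minimal, hypothetical sketch (not the dissertation's exact mapping), a signal can simply be chunked into fixed-width rows of an image grid:

```python
def signal_to_image(samples, width):
    """Chunk a 1-D signal into rows of `width` samples, zero-padding the tail."""
    rows = []
    for start in range(0, len(samples), width):
        row = samples[start:start + width]
        row += [0] * (width - len(row))  # zero-pad the final, short row
        rows.append(row)
    return rows

# 10 samples laid out as a 3x4 grid
image = signal_to_image(list(range(10)), 4)
```

In practice a beat-aligned layout (one heartbeat per row) gives the 2-D network more meaningful structure than plain chunking.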
This advancement is valuable for timely diagnosis and treatment and has the potential to positively impact patient outcomes.

Item Explainable AI Algorithms for Classification Tasks With Mixed Data (December 2022) Wang, Huaduo; Hayenga, Heather; Gupta, Gopal; Tamil, Lakshman; Salazar, Elmer; Nourani, Mehrdad; Khan, Latifur
With the great power of machine learning techniques, numerous applications have been created that have become an integral part of modern life. However, the decision-making processes of many of these machine-learning-based applications are being questioned and criticized due to their opacity to users, especially for critical tasks such as disease diagnosis, loan applications, and industrial robots. This opacity is the result of using statistical machine learning approaches that generate models that can be viewed as solutions to optimization problems that minimize loss or maximize likelihood. Explainable Artificial Intelligence (XAI) models, or Explainable Machine Learning (XML) models, are machine-learned models whose decision-making or prediction-making process human users can understand. The main goals of XAI are to: (1) generate highly accurate models that are comprehensible to human users; and (2) explain a model's decision-making process to humans so that they can easily understand it, develop trust in it, and diagnose any potential problems. This dissertation presents the FOLD family of new explainable AI algorithms for classification tasks that efficiently handle mixed (numerical and categorical) data without extra effort, i.e., without resorting to any special data encoding. These algorithms generate a set of default rules, represented as a stratified logic program, that serves as the predictive model. Due to their symbolic, logic-based nature, these models can be easily understood and modified by humans.
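To give a feel for why default rules are human-readable, here is a toy, hand-written rule with one exception, in the spirit of a default rule in a stratified program. The feature names are invented and this is not the output of any FOLD algorithm:

```python
# Toy default rule: "predict high_risk if cholesterol > 240,
# unless the patient is on statins." Illustrative only; not FOLD output.

def predict(record):
    if record.get("cholesterol", 0) > 240 and not record.get("on_statins", False):
        return "high_risk"
    return "low_risk"  # default class

label_a = predict({"cholesterol": 260})
label_b = predict({"cholesterol": 260, "on_statins": True})
```

A clinician can read, audit, and edit such a rule directly, which is the explainability argument the dissertation makes for logic-based models.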
These new algorithms are competitive in predictive performance with state-of-the-art machine learning algorithms such as XGBoost and Multi-Layer Perceptrons (MLPs), while being an order of magnitude faster in execution. The FOLD-R++ algorithm is designed for binary classification problems, FOLD-RM for multi-category classification problems, and FOLD-LTR for ranking. FOLD-SE is a further improvement over these algorithms that leads to scalable explainability: regardless of the size of the data, the generated model is represented by a small number of rules, resulting in improved human interpretability and explainability, while maintaining excellent predictive performance. The rest of this thesis presents the FOLD family of algorithms and compares and contrasts them with state-of-the-art machine learning algorithms.

Item Exploiting Instance Similarity in Applications of Deep Learning in Bioinformatics (2021-08-01) Eslami Manoochehri, Hafez Eslami; Nourani, Mehrdad; Rugg, Elizabeth; Gupta, Gopal; Kehtarnavaz, Nasser; Tamil, Lakshman
Predicting the state or function of a biological organism from gross observation is a recurrent challenge in biology. Because biological systems are highly complex, identifying rules and principles from observations is extremely hard. In recent years, machine learning has been employed to deal with the complexity of biological data. Machine learning techniques often rely on hand-crafted features; however, designing and leveraging such features is not always straightforward. Over the last decade, deep learning has emerged as a new area of machine learning, revolutionizing our understanding of biology. Unfortunately, the sparsity of training data in some domains has restricted the use of deep learning. One way to overcome this limitation is to utilize instance similarities.
In many applications, however, incorporating similarity information into deep learning models is not straightforward. Utilizing instance similarity in biology has been the main motivation of this research. In this dissertation, we exploit the similarity of instances in deep learning in two main applications: (a) drug-target interaction prediction, and (b) learning morphological similarity in histopathology images. In the first application, we introduce a deep learning framework that predicts drug-target interactions by learning topological features from a bipartite drug-target interaction graph. Furthermore, we exploit drug and protein similarity information by extending our framework to learn from a semi-bipartite graph. We show that our approach achieves state-of-the-art performance in predicting new drug-target interactions. In the second application, we propose a novel deep metric learning methodology that learns morphological similarities in histopathology images. Thanks to a new task and metric learning design, our approach performs without requiring any labeled data. We demonstrate that our approach learns more general-purpose features than discriminative approaches and therefore performs better in downstream tasks. We show that our framework can serve as a backbone for several tasks, e.g., image retrieval, identification of morphological and biological correlations, and transfer learning. Furthermore, we show our approach can also be used to reduce the batch effect in the histopathology domain.
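Deep metric learning commonly optimizes a margin-based objective over (anchor, positive, negative) triplets of embeddings. The plain-Python triplet loss below is one standard such objective, shown purely as an illustration; the dissertation's unlabeled task and loss design may differ:

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet margin loss over plain-Python embedding vectors."""
    def dist(a, b):
        # Euclidean distance between two embeddings
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

# Positive already much closer than negative: loss is zero
loss_easy = triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 1.0])
# Negative too close relative to the margin: loss is positive
loss_hard = triplet_loss([0.0, 0.0], [1.0, 0.0], [0.5, 0.0])
```

Minimizing such a loss pulls morphologically similar images together in embedding space and pushes dissimilar ones apart.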
We also show that the strength and unsupervised nature of our approach make it a powerful tool for biological exploration and discovery.

Item Exploring Machine Learning for Automated Diagnosis in the Presence of Missing and Corrupted Data (December 2023) Apalak, Merve 1991-; Hamlen, Kevin; Kiasaleh, Kamran; Panahi, Issa M.S.; Nourani, Mehrdad; Tamil, Lakshman
Advances in electronic health records (EHRs) and machine learning (ML) algorithms have brought a new perspective to the biomedical sciences and medical practice, enabling and improving research on automated diagnostics, data-driven disease categorization, and personalized treatment. Researchers and healthcare providers have welcomed these recent advancements; however, the transition into practice happens gradually due to the challenges in the field. Even though patients' records are stored as EHRs in hospital systems, the data must be thoroughly analyzed, processed, and annotated before being used for prediction problems. Each chapter of this dissertation highlights an obstacle that emerges when applying learning models to clinical data, particularly in the application domain of sepsis prediction. The first part of the thesis proposes a strategy to alleviate the poor prediction performance caused by irregularly spaced and incompletely observed databases. The proposed method employs Conditional Generative Adversarial Networks (GANs), with Long Short-Term Memory (LSTM) networks serving as both the generator and the discriminator, conditioned on class labels. Experimental results show that the proposed framework profitably identifies long-term temporal dependencies, exploits the missing-data patterns, and delivers highly notable performance.
The second part of this dissertation focuses on non-invasive, computationally efficient, and continuous patient monitoring in intensive care units (ICUs) using single-lead electrocardiogram (ECG) signals for the early prediction of sepsis. We develop a continuous early sepsis detection algorithm utilizing two databases: the Medical Information Mart for Intensive Care (MIMIC-III) Clinical Database and the MIMIC Waveform Database. We follow a systematic approach to selecting ECG segments of superior quality recorded in highly dynamic ICU environments. Moreover, since we approach early sepsis prediction as a supervised time-series classification problem, we evaluate model performance by implementing Temporal Convolutional Networks (TCNs). We find that heart rate variability (HRV) decelerates considerably for sepsis patients, and that the HRV characteristics of adults can be a valuable indicator for continuous sepsis monitoring in an ICU. Finally, this research contributes to the field of early sepsis detection by providing an annotated continuous waveform database, derived from the MIMIC Waveform Database, which is made accessible to the public.

Item Fast Inference and Learning on Hybrid Relational Probabilistic Graphical Models (2022-08-01) Chen, Yuqiao; Natarajan, Sriraam; Tamil, Lakshman; Ruozzi, Nicholas; Gogate, Vibhav; Kersting, Kristian
Probabilistic Relational Models (PRMs) combine the power of Probabilistic Graphical Models for modeling structured data with the capability of first-order logic to represent relationships at the class level. This type of model has shown success in modeling large data, especially enterprise data stored in relational databases. Efficient reasoning and model learning for PRMs are critical for applying them to large-scale tasks. With recent advances in lifted inference, symmetries in PRMs can be exploited to perform efficient reasoning and learning.
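A textbook illustration of how symmetry helps: if the joint weight of n exchangeable binary variables depends only on how many of them are true, a partition-function sum over 2^n assignments collapses to n + 1 binomial-weighted terms. This sketches the counting idea behind lifted inference in general, not the specific schemes developed in this work:

```python
import math

# n exchangeable binary variables whose joint weight w(k) depends only on
# the count k of "true" variables: the sum over 2**n assignments collapses
# to n + 1 terms -- the essence of counting-based lifted inference.

def partition_grounded(n, w):
    """Naive partition function: enumerate all 2**n assignments."""
    total = 0.0
    for assignment in range(2 ** n):
        k = bin(assignment).count("1")  # number of variables set to true
        total += w(k)
    return total

def partition_lifted(n, w):
    """Same quantity via binomial counting: n + 1 terms instead of 2**n."""
    return sum(math.comb(n, k) * w(k) for k in range(n + 1))

w = lambda k: 2.0 ** k  # toy weight function
assert abs(partition_grounded(10, w) - partition_lifted(10, w)) < 1e-9
```

For this toy weight the sum is (1 + 2)^n by the binomial theorem; the lifted version computes it in linear rather than exponential time.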
However, although most real-world applications involve both discrete and continuous (hybrid) features, most existing lifted inference methods are restricted to discrete models or to continuous models with restrictive assumptions, limiting their applicability and ease of use. To extend the applicability of PRMs, we need lifted inference methods and model learning algorithms that are suitable for hybrid data. In this work, we develop approximate lifted inference schemes, based on particle sampling and variational inference, that can perform inference on arbitrary hybrid-domain Markov Random Field (MRF) models. We also introduce Relational Neural Markov Random Field (RN-MRF) models that handle complex relational features with the help of both neural potential functions and expert-defined relational rules. Finally, we propose a maximum pseudo-likelihood estimation-based learning algorithm with importance sampling for training RN-MRF models. The key advantage of our inference and learning approaches is that they make minimal assumptions about the data distribution and are flexible enough to be applied to a variety of real-world applications. We demonstrate empirically that our inference methods and proposed learning model are efficient and outperform existing approaches in a variety of settings.

Item Information Theory Based Classification Method and its Application In Atrial Fibrillation Detection (2019-12) Fonseka, Sebastian Pradeep; Tamil, Lakshman
A novel mutual information-based classification (MIC) method is introduced. MIC selects the features that carry the highest mutual information from the set of all available features. At test time, MIC combines the test subject with the training data and searches for the class assignment of the test subject that maximizes the mutual information of the combined data. The proposed MIC method is tested on the detection of atrial fibrillation (AF).
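The mutual information at the heart of MIC can be computed directly from empirical counts when features are discrete. The following is a minimal estimator sketch, not the dissertation's exact implementation:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical I(X; Y) in nats for two equal-length discrete sequences."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))           # joint counts
    px, py = Counter(xs), Counter(ys)    # marginal counts
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels = [0, 0, 1, 1]
mi_informative = mutual_information([0, 0, 1, 1], labels)  # feature tracks labels
mi_useless = mutual_information([0, 1, 0, 1], labels)      # feature independent of labels
```

Ranking features by such a score and keeping the top few is one way to realize the "features selected according to mutual information" step.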
Features are extracted from a single-lead ECG signal using known methods. The numerical results presented show that MIC can perform significantly better than support vector machines (SVMs): accuracies over 90% are reported using MIC with only a few features selected according to mutual information.

Item Machine Learning Techniques for Automated Detection of Cardiac Arrhythmias (2020-06-03) Kalidas, Vignesh; Tamil, Lakshman
Cardiac arrhythmias are cardiac abnormalities that arise as a consequence of irregularities in the electrical conduction system of the heart. In this dissertation, a comprehensive set of machine learning techniques, complemented by logical analysis, is presented for the accurate detection of fifteen different cardiac arrhythmias, both ventricular and supraventricular. These include, along with normal sinus rhythm: (1) ventricular fibrillation (VF), (2) ventricular tachycardia (VT), (3) premature ventricular complexes (PVC), (4-6) ventricular bigeminy/trigeminy/quadrigeminy, (7) ventricular couplets, (8) atrial fibrillation, (9) supraventricular ectopic beats (SVEB), (10-12) supraventricular bigeminy/trigeminy/quadrigeminy, (13) supraventricular couplets, (14) supraventricular tachycardia, and (15) bradycardia. Information from single-lead electrocardiogram (ECG) signals is used to create a rich set of arrhythmia-specific features that aid in the development of highly accurate arrhythmia detection models. The ECG is a waveform representation of the heart's electrical activity, and cardiac arrhythmias often manifest as morphological variations on the ECG. Prior to any arrhythmia analysis, the incoming ECG signal is preprocessed to remove low-frequency and high-frequency artifacts using Stationary Wavelet Transforms and Denoising Convolutional Autoencoders.
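As a deliberately simplistic stand-in for the wavelet- and autoencoder-based denoising described above (shown only to convey the idea of suppressing high-frequency spikes, not the dissertation's method), a moving-average smoother can be written as:

```python
def moving_average(signal, window=5):
    """Centered moving-average smoother; a crude stand-in for real ECG denoising."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))  # average over the clipped window
    return out

# A lone spike is spread out and attenuated
smoothed = moving_average([0, 0, 10, 0, 0], window=5)
```

Real ECG pipelines use transforms that preserve the sharp QRS complex while removing noise, which a plain moving average cannot do.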
This is complemented by signal quality assessment using Convolutional Neural Networks, in which ECG segments corrupted by high-grade motion artifacts are identified and excluded from further arrhythmia analysis. Following this, detection of ventricular fibrillation and sustained ventricular tachycardia is implemented using a Random Forest classifier. Next, beat detection using a combination of Convolutional Autoencoders and adaptive thresholding is carried out to accurately locate R-peaks, which is key to robust arrhythmia analysis. Subsequently, algorithms for the detection of PVC-beat-based ventricular arrhythmias are implemented using semi-supervised autoencoders combined with Random Forests and logical analysis. This is followed by atrial fibrillation detection using Markov models in conjunction with Random Forests. Finally, logical sequence analysis techniques are applied to detect additional SVEB-based supraventricular arrhythmias. The algorithms presented in this dissertation achieve a sensitivity of 98.85%, a positive predictive value (PPV) of 95.77%, and an F-score of 96.82% in detecting ventricular fibrillation/sustained ventricular tachycardia episodes on records from the MIT-BIH Malignant Ventricular Ectopy Database and the American Heart Association Database. For R-peak detection, 99.63% sensitivity, 99.88% PPV, and a 99.75% F-score are achieved on the MIT-BIH Arrhythmia Database (MITDB) records. The PVC detection algorithm achieves sensitivity, PPV, and F-score values of 93.17%, 94.41%, and 93.78% on the MITDB records. Similarly, the SVEB detection algorithm achieves sensitivity, PPV, and F-score values of 92.11%, 83.77%, and 87.74% on the MITDB records. For atrial fibrillation detection, a sensitivity of 96.88%, a PPV of 98.87%, and an F-score of 97.86% are obtained on the MIT-BIH Atrial Fibrillation records.
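Several of the figures above are tied together by the definition of the F-score as the harmonic mean of sensitivity (recall) and PPV (precision); a quick check reproduces, for example, the atrial fibrillation and R-peak numbers quoted:

```python
def f_score(sensitivity, ppv):
    """F1 score: harmonic mean of sensitivity (recall) and PPV (precision)."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)

af_f = f_score(96.88, 98.87)     # atrial fibrillation: ~97.86
rpeak_f = f_score(99.63, 99.88)  # R-peak detection: ~99.75
```

The VF/VT figure does not match the harmonic mean of its quoted sensitivity and PPV exactly, consistent with it being aggregated across the two databases rather than computed from the pooled pair.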
The aforementioned algorithms are demonstrated by deploying them on a cloud platform, AutoECG, a web service that facilitates online arrhythmia detection by analyzing ECGs uploaded by authorized users. AutoECG is device-agnostic and can process ECG data of varying durations (30 s to 24 hours). Following ECG analysis, the AutoECG software generates an arrhythmia summary report for further review by qualified medical practitioners, affirming the translational nature of the research presented in this dissertation.

Item Machine Learning Techniques for Improved Disease Detection in Breast and Gastrointestinal Tissues (2021-11-17) Cogan, Timothy Curtis; Tamil, Lakshman
We have developed high-performing deep learning architectures and preprocessing pipelines for identifying abnormalities, diseases, anatomical landmarks, and physiological characteristics of breast and gastrointestinal tissues. These algorithms have the potential to improve the accuracy, speed, cost, and accessibility of medical image screening for individuals around the world. For breast tissue, we have developed a region-proposal network for identifying and localizing malignancies; a one-class classifier, used in an intelligent image filtering pipeline, for determining whether an image is truly a mammogram; and lastly DualViewNet, a convolutional neural network built upon MobileNetV2, for identifying breast tissue density in mammograms and quantifying the usefulness of different mammogram views in determining breast density. The malignancy identification network achieved 0.951 AUC when tested on BI-RADS 1, 5, and 6 mammograms from the INbreast dataset, while the one-class classifier had only 2 misclassifications out of 410 mammograms and 2 misclassifications out of 1,640 non-mammograms.
DualViewNet showed the best performance among all compared architectures, with a macro-average AUC of 0.8970 and a macro-average 95% confidence interval of 0.8239-0.9450, and demonstrated a preference for MLO over CC views in 1,187 out of 1,323 breasts. For gastrointestinal tissue, we fine-tuned Inception-v4, Inception-ResNet-v2, and NASNet on images sent through a custom data pipeline, achieving state-of-the-art results on images taken from the Kvasir database. The resulting accuracies achieved with these models were 0.9845, 0.9848, and 0.9735, respectively. In addition, Inception-v4 achieved an average of 0.938 precision, 0.939 recall, 0.991 specificity, 0.938 F1 score, and 0.929 Matthews correlation coefficient (MCC). Bootstrapping provided NASNet, the worst-performing model, with a lower bound of 0.9723 accuracy on the 95% confidence interval. In addition, we built a cloud-based deployment environment for remotely analyzing and screening mammograms from anywhere in the world, as well as a client-side annotation tool for generating new training data. These results are presented in detail in the following chapters.

Item Rate and Performance Enhancement of LDPC Coded Schemes (December 2022) Hassan, Rana A; Fonseka, John; Bereg, Sergey; Al-Dhahir, Naofal; Minn, Hlaing; Tamil, Lakshman
In the first part of the dissertation, a novel collection of punctured codes decoding (CPCD) technique, which considers a code as a collection of its punctured codes, is proposed. Two forms of CPCD are discussed: serial CPCD, which decodes each punctured code serially, and parallel CPCD, which decodes each punctured code in parallel. In contrast to other modifications of low-density parity-check (LDPC) decoding documented in the literature, the proposed CPCD technique views an LDPC code as a collection of punctured LDPC codes, where all punctured codes are derived from the original LDPC code by removing different portions of its parity bits.
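The view of an LDPC code as a collection of punctured codes rests on dropping different subsets of parity positions from the mother codeword. A toy sketch of puncturing a bit vector (illustrative only; not an actual LDPC codec):

```python
def puncture(codeword, drop_positions):
    """Derive a punctured codeword by removing selected parity positions."""
    drop = set(drop_positions)
    return [bit for i, bit in enumerate(codeword) if i not in drop]

# Toy 6-bit codeword: 3 information bits followed by 3 parity bits.
# Dropping different parity subsets yields different punctured codes.
punctured = puncture([1, 0, 1, 1, 0, 1], drop_positions=[4, 5])
```

In CPCD, each such punctured code is decoded separately, with extrinsic information exchanged among them across iterations.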
The CPCD technique decodes each punctured code separately and exchanges the extrinsic information obtained from that decoding among all other punctured codes for their decoding. Hence, as the iterations increase, the information obtained in the decoding of the punctured codes improves, making CPCD perform better than standard decoding. LDPC codes have received significant interest in a variety of communication systems due to their superior performance and reasonable decoding complexity. Numerical results demonstrate that CPCD can significantly improve the performance, or significantly increase the code rate, of LDPC codes. It is demonstrated that both serial and parallel CPCD have about the same decoding complexity as standard sum-product algorithm (SPA) decoding. It is also demonstrated that while serial CPCD has about the same decoding delay as standard SPA decoding, parallel CPCD can decrease the decoding delay, albeit at the expense of processing power. Furthermore, it is demonstrated that similar improvements in performance and decoding delay can be achieved by applying CPCD to longer codes with higher-order modulation. Specifically, it is shown that parallel CPCD with two parallel concatenated codes (D = 2) achieves a 0.3-1 dB gain over standard SPA decoding of the LDPC code of length 1944 employed in the WiFi standard with QPSK, 16-QAM, or 64-QAM modulation, while simultaneously reducing the decoding delay by about 50%. It is also shown that the CPCD technique can similarly improve the performance of the LDPC code employed in the 5G NR standard at its highest code rate by about 0.6 dB while reducing the decoding delay by about 50%. In the second part of the dissertation, a novel implicit transmission with bit flipping (ITBF) technique is introduced to transmit one coded stream implicitly while transmitting another coded stream explicitly over a channel. ITBF flips a set of chosen parity bits of the explicitly transmitted stream according to an implicit stream.
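The flipping step of ITBF can be sketched as a simple XOR embedding. In the actual scheme the receiver recovers the flips through decoding; here, for illustration only, the decoded explicit codeword is assumed known and the flip positions, codeword size, and bit values are hypothetical:

```python
import random

random.seed(1)

n = 24                               # toy codeword: 12 info + 12 parity bits
explicit = [random.randint(0, 1) for _ in range(n)]
flip_positions = [12, 15, 18]        # chosen parity-bit positions (hypothetical)
implicit = [1, 0, 1]                 # implicit stream to embed

def embed(codeword, positions, bits):
    """Flip the chosen parity bits of the explicit stream per the implicit stream."""
    out = list(codeword)
    for pos, b in zip(positions, bits):
        out[pos] ^= b                # flip the parity bit iff the implicit bit is 1
    return out

def extract(received, decoded, positions):
    """Recover implicit bits by comparing the received word with the decoded codeword."""
    return [received[p] ^ decoded[p] for p in positions]

tx = embed(explicit, flip_positions, implicit)
recovered = extract(tx, explicit, flip_positions)
print(recovered)
```

The implicit stream costs no extra channel uses: only parity bits of the already-transmitted explicit codeword are perturbed.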
Numerical results show that ITBF can transmit an implicit stream at a rate of up to 13.19% of the explicit stream without significantly sacrificing performance or increasing the decoding complexity or the decoding delay. ITBF is combined with CPCD to form ITCD schemes that can further increase the rate of transmission on the implicit stream. It is demonstrated with the LDPC code in the WiFi standard that ITCD can transmit an implicit stream at up to 25% of the rate of the explicit stream.

Item Real-time Assessment of Obstructive Sleep Apnea Using Deep Learning (2021-05-14) Sonawane, Akshay Bhagwan; Tamil, Lakshman
Sleep quality assessments provide various measures to gauge the severity of sleep apnea. At present, sleep quality testing is inconvenient for patients in terms of both cost and comfort. Evaluation methods like the polysomnography test require many sensing resources. Our research proposes an inexpensive and automated system based on a single-lead electrocardiogram (ECG) signal and a one-dimensional convolutional neural network (CNN) classifier. We use only a single-channel ECG to measure the heart signal and deliver it to a 1D-CNN to classify apneic events. This method provides an alternative to cumbersome and expensive polysomnography (PSG) and to scoring by the Rechtschaffen and Kales visual method. In addition, we propose an Android application that uses a deep neural network model we have trained for real-time assessment of obstructive sleep apnea.

Item Real-Time QRS Detector Using Stationary Wavelet Transform for Automated ECG Analysis (Institute of Electrical and Electronics Engineers Inc.) Kalidas, Vignesh; Tamil, Lakshman; 0000-0003-4523-9376 (Tamil, L)
In this paper, we propose an online QRS detector algorithm using Stationary Wavelet Transforms (SWT) for real-time beat detection from single-lead electrocardiogram (ECG) signals.
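The single-lead-ECG-to-1D-CNN pipeline of the sleep-apnea work above can be illustrated with a toy forward pass. The actual network architecture is not specified in the abstract, so the layer sizes, kernel widths, and random weights below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-lead "ECG" segment: 6 seconds at 100 Hz (sizes are hypothetical).
ecg = rng.standard_normal(600)

# One 1-D convolution layer -> ReLU -> global average pooling -> sigmoid.
kernel = rng.standard_normal((8, 16))      # 8 filters, width-16 kernels
bias = rng.standard_normal(8)

feature_maps = np.array([
    np.convolve(ecg, k[::-1], mode="valid") + b for k, b in zip(kernel, bias)
])
activated = np.maximum(feature_maps, 0.0)  # ReLU
pooled = activated.mean(axis=1)            # global average pooling per filter

w_out = rng.standard_normal(8)             # dense output layer
logit = pooled @ w_out
p_apnea = 1.0 / (1.0 + np.exp(-logit))     # probability of an apneic event
print(round(float(p_apnea), 3))
```

A trained model replaces the random weights; the same structure (convolutions over the raw single-channel signal followed by pooling and a classifier head) is what makes a 1D-CNN suitable for beat-to-beat ECG data.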
The Daubechies 3 ('db3') wavelet is chosen as the mother wavelet for SWT analysis. The information from the first ten seconds of the ECG signal is used as a learning template by the algorithm to initialize thresholds for beat detection. These thresholds are then modified every three seconds, thereby quickly adapting to changes in heart rate and signal quality. Hence, false beat detections are vastly suppressed in this approach, while true beats are identified with a high degree of accuracy. Our algorithm yields a sensitivity (SE) of 99.88% and a positive predictive value (PPV) of 99.84% on the MIT-BIH Arrhythmia Database, an SE of 99.80% and PPV of 99.91% on the AHA database, and an SE of 99.97% and PPV of 99.90% on the QT database.

Item Semi-Supervised Learning with Label Confidence for Automatic Knee Osteoarthritis Severity Assessment (2022-05-01) Wang, Yifan; Zhou, Dian; Sarac, Kamil; Liu, Jin; Nourani, Mehrdad; Tamil, Lakshman
Knee osteoarthritis (OA) is a chronic disease that considerably reduces patients' quality of life. Preventive therapies require early detection and lifetime monitoring of OA progression. In the clinical environment, the severity of OA is classified by the Kellgren and Lawrence (KL) grading system, ranging from KL-0 to KL-4. Recently, deep learning methods were applied to OA severity assessment to improve accuracy and efficiency. Researchers fine-tuned convolutional neural networks (CNNs) on the OA dataset and built end-to-end approaches. However, this task is still challenging due to the ambiguity between adjacent grades, especially in early-stage OA. Low-confidence samples, which are less representative than the typical ones, undermine the training process. Targeting the uncertainty in the OA dataset, we propose a novel learning scheme that dynamically separates the data into two sets according to their reliability. In addition, we design a hybrid loss function to help the CNN learn from the two sets accordingly.
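One common instance of such a confidence-split objective is full cross-entropy on the reliable set and a down-weighted loss on the low-confidence set. The dissertation's exact hybrid loss is not given in this abstract, so the split rule and weights below are illustrative:

```python
import numpy as np

def hybrid_loss(probs, labels, confidence, tau=0.7, low_weight=0.3):
    """Cross-entropy that trusts reliable samples fully and down-weights
    low-confidence ones.  The threshold tau and weight are illustrative."""
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    confidence = np.asarray(confidence)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    weights = np.where(confidence >= tau, 1.0, low_weight)
    return float(np.mean(weights * ce))

# Five KL grades (KL-0 .. KL-4); two reliable samples, one ambiguous one.
probs = [[0.7, 0.1, 0.1, 0.05, 0.05],
         [0.1, 0.6, 0.2, 0.05, 0.05],
         [0.3, 0.3, 0.3, 0.05, 0.05]]
labels = [0, 1, 2]
confidence = [0.9, 0.8, 0.4]
print(round(hybrid_loss(probs, labels, confidence), 4))
```

Because the confidence split is recomputed during training, samples migrate between the two sets as the network's estimates sharpen, which is what enables the automatic data refining validated later in the abstract.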
With the proposed approach, we emphasize the typical samples and control the impact of low-confidence cases. Experiments are conducted in a five-fold manner on the five-class task and the early-stage OA task. Our method achieves a mean accuracy of 70.13% on the five-class OA assessment task, which outperforms all other state-of-the-art methods. Although early-stage OA detection still benefits from human intervention in lesion-region selection, our approach achieves superior performance on the KL-0 vs. KL-2 task. Moreover, we design an experiment to validate large-scale automatic data refining during training. The result verifies our approach's ability to characterize low-confidence samples. The dataset used in this paper was obtained from the Osteoarthritis Initiative.

Item Signal Processing Algorithms for Smartphone-Based Hearing Aid Platform; Applications and Clinical Testing (2022-05-01) Tokgoz, Serkan; Panahi, Issa M.S.; Hoyt, Kenneth; Thibodeau, Linda M.; Kiasaleh, Kamran; Tamil, Lakshman
Digital signal processing algorithms are widely utilized in hearing aid applications to improve the quality of speech. The signal processing pipeline for speech involves several crucial components that improve listening for hearing-impaired people. This thesis covers the development of novel methods that can be used in the speech processing pipeline, along with their clinical testing. Each chapter of the dissertation focuses on a component of the speech processing pipeline for a smartphone-based hearing aid setup. The first algorithm discussed is speech source localization (SSL), which identifies the direction of the talker of interest using multiple microphones. A speaker identification method is proposed as an assistive system to the pipeline, and it can be used to boost the overall system's performance. A clinical testing system is developed to evaluate the new signal processing algorithms.
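A classic building block behind multi-microphone source localization of the kind mentioned above is estimating the time difference of arrival (TDOA) between two microphones via cross-correlation. The dissertation's actual SSL algorithm is not detailed in this abstract; the two-microphone setup, sample rate, and delay below are a generic illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 16_000                        # sample rate in Hz (illustrative)
sig = rng.standard_normal(2048)    # talker signal reaching microphone 1

true_delay = 5                     # samples by which mic 2 lags mic 1
mic1 = sig
mic2 = np.concatenate([np.zeros(true_delay), sig[:-true_delay]])

# Cross-correlate the two channels and pick the lag of maximum correlation.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)
tdoa = lag / fs                    # time difference of arrival in seconds
print(lag, tdoa)
```

Given the microphone spacing, the TDOA maps to an arrival angle, giving the direction of the talker of interest.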
An approach is developed to use multiple microphones of an iPhone simultaneously for real-time, low-latency audio applications. Real-time integration of several signal processing modules that appear in digital hearing aids is developed as smartphone apps. Subjective evaluations are conducted for the proposed methods, showing noticeable improvements over state-of-the-art methods. Additionally, the implementation of the proposed methods on smartphones and computers is explained.

Item Speech Perception of Hearing-impaired Listeners in Challenging Listening Environments and Personalization of Hearing Assistive Devices via Inverse Reinforcement Learning (2022-05-01) Akbarzadeh, Sara; Kehtarnavaz, Nasser; Tan, Chin-Tuan; Auciello, Orlando; Lobarinas, Edward; Tamil, Lakshman
Listening to speech in the presence of noise has always been a challenge, especially for individuals with hearing impairment. Many aspects need to be considered in the design and fitting of hearing assistive devices to provide users with a more preferred hearing experience, increase the perceived quality of speech, and decrease listening effort. This dissertation focuses on this topic in two major research thrusts. In the first research thrust, the speech perception of hearing-impaired listeners has been studied in challenging hearing environments. Behavioral and electrophysiological experiments have been designed to evaluate the effect of speech and noise levels on the perceived quality of speech and on selective auditory attention in normal-hearing and hearing-impaired listeners. The perception of degraded speech in normal-hearing and hearing-impaired listeners has been measured, and the differences between the hearing patterns of these groups have been described. It has been shown that to achieve an optimal hearing experience, the listener's hearing situation should be taken into account.
In the second research thrust, the maximum likelihood inverse reinforcement learning approach has been followed to develop an algorithm that personalizes the hearing aid fitting in an online manner. The results of experiments conducted on subjects with hearing loss demonstrate that the developed personalized setting outperforms the standard prescriptive setting.

Item Tissue Characterization Using H-scan Ultrasound Imaging (2022-12-01) Tai, Haowei; Hoyt, Kenneth; Griffith, D. Todd; Brown, Katherine; Hansen, John H.L.; Tamil, Lakshman
Breast cancer is the second leading cause of mortality among women and affects more women than any other type of cancer. Around 43,600 women in the U.S. died in 2021 from breast cancer. Clinical studies have demonstrated that an early neoadjuvant response is a better predictor of a patient's recurrence-free survival than pathological complete response. Therefore, mammography, ultrasound (US), and magnetic resonance imaging (MRI) have been widely used to determine tumor response by tracking changes in tumor size using guidelines provided by the Response Evaluation Criteria in Solid Tumors (RECIST). However, measurable changes in tumor size may not be detectable until after multiple cycles of chemotherapy. In the interim, high cost and unnecessary patient toxicity may be incurred for therapy regimens. Further, intratumor heterogeneity poses a fundamental treatment challenge because different tumor subregions might have different drug sensitivities. This implies that some therapeutic strategies might not be effective against the whole tumor. Therefore, the use of noninvasive US for quantitative tissue characterization has become an exciting research prospect. Herein, the challenge is to find hidden patterns in the US data that reveal more information about tissue function and pathology than can be seen in conventional US images.
Circumventing some of the limitations associated with traditional tissue characterization approaches, a new modality has been proposed for US classification of acoustic scatterers, such as cancer cells. Termed H-scan US imaging, this technique relies on matching a model that describes US image formation to the mathematics of a class of Gaussian-weighted Hermite polynomials. In short, it reveals the local frequency dependence of different-sized scatterers in soft tissue. In this dissertation we demonstrate that: (1) application of a novel frequency-dependent attenuation correction technique improves the sensitivity of H-scan US imaging to subtle changes at tissue depth; (2) a 3-D H-scan imaging technique can capture data from the entire tumor burden, visualize heterogeneous tissue patterns, and fundamentally improve tissue characterization and treatment response determination; and (3) volumetric H-scan US imaging can visualize breast cancer changes during response to drug treatment, including apoptotic activity, which is a hallmark feature of effective anticancer therapy. Our overarching hypothesis is that volumetric H-scan US imaging can detect early response to chemotherapy in breast cancer tumors and provide vital prognostic data on treatment response and tumor progression. Consequently, this would provide a new and safe approach to exploring the tumor response to chemotherapy as early as possible, maximize effective therapy for the individual patient, reduce morbidity, and constrain the escalating health care costs associated with overtreatment.

Item Toward Highly Reliable, Smart and Open Optical Networks (December 2022) Zhang, Tianliang; Fumagalli, Andrea; Moldovan, Dan I.; Tamil, Lakshman; Hu, Yang; Razo-Razo, Miguel
As more software and applications requiring greater bandwidth are developed and become popular, the need for reliable and high-speed communication networks grows.
The optical transport network is uniquely positioned to deliver the required speed and capacity. The development of software-defined optical networking technology has dramatically improved the operability, flexibility, and effectiveness of optical networks. Through modeling and algorithmic optimization of network resource management, and through monitoring of network signal transmission quality, it improves network awareness, reduces the network failure rate, and improves network operation efficiency. At the same time, the open optical network proposal breaks the barriers between different vendors, unifies the standards of the interfaces, and greatly accelerates the development of optical networks. Open optical networks also accelerate the optimization and management of network resources, providing prerequisites for the development of more applications, technologies, and algorithms. In this dissertation, analytical models for the network blocking probability under the first-fit algorithm are introduced. The effect of the optical signal quality-of-transmission margin on the performance of optical networks is explored, and a method using a neural network to model the WSS filtering penalty is proposed. An open optical network architecture, OpenROADM, is introduced, and the applications developed on this open optical network platform are described. Moreover, an emulator that can work with the OpenROADM simulator to provide a realistic optical network environment is developed.
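The first-fit policy referenced above can be sketched in a few lines: each new lightpath takes the lowest-indexed wavelength that is free on every link of its route, and a call is blocked when no such wavelength exists. The two-link topology, wavelength count, and demand sequence below are illustrative only:

```python
W = 4                                            # wavelengths per link (illustrative)
links = {("A", "B"): set(), ("B", "C"): set()}   # occupied wavelengths per link

def first_fit(route):
    """Assign the first wavelength free on all links of the route, or None (blocked)."""
    for w in range(W):
        if all(w not in links[l] for l in route):
            for l in route:
                links[l].add(w)                  # reserve the wavelength end to end
            return w
    return None                                  # no common free wavelength: blocked

route_ab = [("A", "B")]
route_abc = [("A", "B"), ("B", "C")]

print(first_fit(route_ab))    # 0
print(first_fit(route_abc))   # 1 (wavelength 0 is busy on A-B)
print(first_fit(route_ab))    # 2
```

Blocking probability models of the kind introduced in the dissertation characterize how often `first_fit` returns `None` as a function of offered load and wavelength count.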