Vidyasagar, Mathukumalli
https://hdl.handle.net/10735.1/5613

dc.title: A Tutorial Introduction to Compressed Sensing
dc.identifier.uri: https://hdl.handle.net/10735.1/8856
dc.contributor.author: Vidyasagar, Mathukumalli
dc.description.abstract: In this half-day tutorial, the author will present an introduction to the field of compressed sensing. Compressed sensing refers to the recovery of high-dimensional but low-complexity objects from a small number of linear measurements. The most popular applications of compressed sensing are (i) the recovery of high-dimensional but sparse vectors, when the locations of the nonzero components are unknown, and (ii) the recovery of high-dimensional but low-rank matrices. This half-day tutorial will cover some of the most recent results on both problems. Until recently, both problems were addressed through the method of random projections. However, recent research has focused on deterministic methods for constructing the measurement operators, especially the use of binary measurement matrices. The recent approaches often require fewer measurements and are also orders of magnitude faster. In this tutorial the theoretical methods will be presented, and their application will be illustrated through MATLAB code that will be made freely available by the author.
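The abstract above describes recovering a high-dimensional sparse vector from a few linear measurements. As a dependency-light illustration (not the tutorial's own MATLAB code), the sketch below uses orthogonal matching pursuit, a standard greedy recovery algorithm; the measurement matrix and signal are made-up toy data chosen so the recovery is exact.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x.

    Each step picks the column most correlated with the residual,
    then re-fits by least squares on the selected support.
    """
    m, n = A.shape
    support = []
    residual = y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Deterministic toy instance: 6 columns in R^4, signal supported on {0, 5}.
s = 1.0 / np.sqrt(2.0)
A = np.array([
    [1, 0, 0, 0, s, 0],
    [0, 1, 0, 0, s, 0],
    [0, 0, 1, 0, 0, s],
    [0, 0, 0, 1, 0, s],
], dtype=float)
x_true = np.array([2.0, 0, 0, 0, 0, 3.0])
y = A @ x_true
x_hat = omp(A, y, k=2)
```

With noiseless measurements and the correct support identified, the final least-squares fit recovers `x_true` exactly.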
dc.description: Due to copyright restrictions and/or publisher's policy full text access from Treasures at UT Dallas is limited to current UTD affiliates (use the provided Link to Article).
dc.date.issued: 2019-01-09

dc.title: An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory
dc.identifier.uri: https://hdl.handle.net/10735.1/8732
dc.contributor.author: Ahsen, Mehmet Eren; Vidyasagar, Mathukumalli
dc.description.abstract: In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. It is shown that the Vapnik–Chervonenkis (VC) dimension of the set of half-spaces in ℝⁿ generated by k-sparse vectors is bounded below by k([lg(n/k)]+1) and above by [2k lg(en)]. By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a k-sparse vector with O(k lg n) measurements, given only the signs of the measurement vector. This result holds for all probability measures on ℝⁿ. The theory is also applicable to the case of noisy labels, where the signs of the measurements are flipped with some unknown probability.
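The two VC-dimension bounds quoted in the abstract are easy to evaluate numerically. A small sketch, assuming the abstract's square brackets denote floor in the lower bound and ceiling in the upper bound (that reading is an assumption on my part):

```python
import math

def vc_bounds(n, k):
    """Evaluate the abstract's bounds on the VC dimension of half-spaces
    in R^n generated by k-sparse vectors:
      lower: k * (floor(lg(n/k)) + 1)
      upper: ceil(2k * lg(e * n))
    Interpreting the brackets as floor/ceiling is an assumption.
    """
    lower = k * (math.floor(math.log2(n / k)) + 1)
    upper = math.ceil(2 * k * math.log2(math.e * n))
    return lower, upper

# Example: n = 1000 ambient dimensions, sparsity k = 10.
lower, upper = vc_bounds(n=1000, k=10)
```

Both bounds grow like k lg n, which is what yields the O(k lg n) measurement count via PAC sample-complexity results.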
dc.date.issued: 2019-01-01

dc.title: Compressed Sensing with Binary Matrices: New Bounds on the Number of Measurements
dc.identifier.uri: https://hdl.handle.net/10735.1/7454
dc.contributor.author: Lotfi, Mahsa; Vidyasagar, Mathukumalli
dc.description.abstract: In this paper we study the problem of compressed sensing using binary measurement matrices. New bounds are derived for the number of measurements that suffice to achieve robust sparse recovery, and the number of measurements needed to achieve sparse recovery. In particular, by interpreting any binary measurement matrix as the biadjacency matrix of an unbalanced bipartite graph, we derive new lower bounds on the number of measurements required by any graph of girth six or larger, in order to satisfy a sufficient condition for sparse recovery. It is shown that the optimal choices for the girth of the graph associated with the measurement matrix are six and eight. Some interesting open problems that arise from our results are pointed out. The proofs of the results presented here are omitted. The reader is directed to (M. Lotfi and M. Vidyasagar, “Compressed sensing using binary matrices of nearly optimal dimensions,” arXiv:1808.03001, 2018) for stronger results than are presented here, as well as their proofs. © 2019 IEEE.
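The abstract views a binary measurement matrix as the biadjacency matrix of a bipartite graph and asks for girth six or more, i.e. no 4-cycles, which is equivalent to any two columns sharing at most one common row. The classical array-code construction over Z_p gives such matrices; it is offered here purely as an illustration and is not claimed to be the specific family studied in the paper.

```python
import numpy as np

def array_matrix(p, l):
    """Binary (l*p) x p**2 matrix from the array-code construction
    (p prime, l <= p): column (a, b) has a 1 in row (i, (a*i + b) mod p)
    for i = 0..l-1.  Two distinct columns solve (a - a')*i = b' - b mod p
    in at most one i, so they share at most one row: the bipartite graph
    has no 4-cycles, i.e. girth at least six.
    """
    H = np.zeros((l * p, p * p), dtype=int)
    for a in range(p):
        for b in range(p):
            col = a * p + b
            for i in range(l):
                H[i * p + (a * i + b) % p, col] = 1
    return H

H = array_matrix(p=5, l=3)
# Gram matrix of the columns: diagonal = column weight l,
# off-diagonal entries = pairwise row overlaps (all <= 1).
overlaps = H.T @ H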
dc.description: Due to copyright restrictions and/or publisher's policy full text access from Treasures at UT Dallas is limited to current UTD affiliates (use the provided Link to Article).
dc.date.issued: 2019-01-09

dc.title: Sparse Feature Selection for Classification and Prediction of Metastasis in Endometrial Cancer
dc.identifier.uri: https://hdl.handle.net/10735.1/5828
dc.contributor.author: Ahsen, Mehmet Eren; Boren, Todd P.; Singh, Nitin K.; Misganaw, Burook; Mutch, David G.; Moore, Kathleen N.; Backes, Floor J.; McCourt, Carolyn K.; Lea, Jayanthi S.; Miller, David S.; White, Michael A.; Vidyasagar, Mathukumalli
dc.description.abstract: Background: Metastasis via pelvic and/or para-aortic lymph nodes is a major risk factor for endometrial cancer. Lymph-node resection ameliorates risk but is associated with significant co-morbidities. Incidence in patients with stage I disease is 4–22%, but no mechanism exists to accurately predict it. Therefore, national guidelines for primary staging surgery include pelvic and para-aortic lymph node dissection for all patients whose tumor exceeds 2 cm in diameter. We sought to identify a robust molecular signature that can accurately classify risk of lymph node metastasis in endometrial cancer patients. 86 tumors, matched for age and race and evenly distributed between lymph node-positive and lymph node-negative cases, were selected as a training cohort. Genomic micro-RNA expression was profiled for each sample to serve as the predictive feature matrix. An independent set of 28 tumor samples was collected and similarly characterized to serve as a test cohort.
Results: A feature selection algorithm was designed for applications where the number of samples is far smaller than the number of measured features per sample. A predictive miRNA expression signature was developed using this algorithm, which was then used to predict the metastatic status of the independent test cohort. A weighted classifier, using 18 micro-RNAs, achieved 100% accuracy on the training cohort. When applied to the testing cohort, the classifier correctly predicted 90% of node-positive cases, and 80% of node-negative cases (FDR = 6.25%).
Conclusion: These results indicate that evaluating the quantitative sparse-feature classifier proposed here in clinical trials may significantly improve the prediction of lymphatic metastases in endometrial cancer patients.
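The abstract describes feature selection when samples are far fewer than features. A generic way to get such sparsity is an ℓ1 penalty, which drives most weights exactly to zero; the sketch below uses ISTA for the lasso as a stand-in and is not the paper's specific algorithm. The orthonormal toy design makes the solution exactly a soft-thresholded least-squares fit, so the result is checkable by hand.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_lasso(X, y, lam, step=1.0, iters=200):
    """ISTA for the lasso: min_w 0.5*||Xw - y||^2 + lam*||w||_1.
    The l1 penalty zeroes out weak features, selecting a small subset
    even when features far outnumber samples.  Generic sparse-classifier
    sketch, not the algorithm of the paper.
    """
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w = soft_threshold(w - step * X.T @ (X @ w - y), step * lam)
    return w

# Orthonormal feature columns => exact solution w_j = soft(x_j . y, lam).
X = np.eye(4)[:, :3]               # 3 orthonormal features, 4 samples
beta = np.array([3.0, 0.4, 0.0])   # only feature 0 is strongly informative
y = X @ beta
w = ista_lasso(X, y, lam=0.5)
```

Here the weak feature (weight 0.4 < lam) is discarded and the strong one survives, shrunk to 2.5 — the selection behavior the abstract relies on.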
dc.description: Includes supplementary material
dc.date.issued: 2017-03-27

dc.title: Exploiting Ordinal Class Structure in Multiclass Classification: Application to Ovarian Cancer
dc.identifier.uri: https://hdl.handle.net/10735.1/5614
dc.contributor.author: Misganaw, Burook; Vidyasagar, Mathukumalli
dc.description.abstract: In multiclass machine learning problems, one needs to distinguish between nominal labels, which have no natural ordering, and ordinal labels, which are ordered. Ordinal labels are pervasive in biology, and some examples are given here. In this note, we point out the importance of making use of the order information when it is inherent to the problem. We demonstrate that algorithms that use this additional information outperform those that do not, on a case study of assigning one of four labels to ovarian cancer patients on the basis of their progression-free survival time. As an aside, it is also pointed out that algorithms that make use of ordering information require fewer data normalizations. This aspect is important in biological applications, where data are plagued by variations in platforms and protocols, batch effects, and so on.
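One standard way to exploit ordinal structure, sketched below, is the Frank–Hall reduction: a label y in {0, …, K−1} becomes K−1 binary threshold questions ("is y > t?"), so the ordering is built into the targets. This generic reduction illustrates the idea; the paper's own ordinal algorithm may differ.

```python
def ordinal_encode(label, n_classes):
    """Frank-Hall style encoding: ordinal label y in {0..K-1} becomes
    K-1 binary targets [y > 0, y > 1, ..., y > K-2], one per threshold,
    so the class ordering is preserved in the targets."""
    return [1 if label > t else 0 for t in range(n_classes - 1)]

def ordinal_decode(bits):
    """Predicted label = number of thresholds exceeded."""
    return sum(bits)

# Four ordered classes (e.g. four progression-free-survival bins).
codes = [ordinal_encode(y, 4) for y in range(4)]
```

A nominal one-vs-rest scheme would treat the four bins as unrelated; here adjacent bins share most of their binary targets, which is how the ordering is exploited.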