A General Framework of Non-convex Models for Sparse Recovery With Applications

Date

December 2021

Abstract

Thanks to the latest developments in science and technology, large data sets have become increasingly common, giving rise to an emerging field called compressive sensing (CS), which is concerned with acquiring and processing sparse signals. In this thesis, we first propose a general framework to estimate the sparse coefficients of generalized polynomial chaos (gPC) expansions used in uncertainty quantification (UQ). In particular, we aim to identify a rotation matrix such that the gPC expansion of a set of random variables has a sparser representation after the rotation. However, this rotational approach alters the underlying linear system to be solved, which makes finding the sparse coefficients more difficult than in the case without rotation. To resolve this issue, we examine several popular non-convex regularizations in CS that empirically perform better than the classic ℓ1 approach, all of which can be minimized by the alternating direction method of multipliers (ADMM). Numerical examples show the superior performance of the proposed combination of rotation and non-convex sparsity-promoting regularizations over both the approach without rotation and the approach with rotation but using the ℓ1 norm. Through the UQ study, we observe that the ℓ1 − ℓ2 regularization often performs the most satisfactorily among those considered.

We then apply the ℓ1 − ℓ2 regularization to synthetic aperture radar (SAR) imaging, based on a mathematical model, derived from Maxwell's equations, of how electromagnetic waves are scattered in space. Specifically, we derive an efficient sensing matrix for SAR and examine the effectiveness of the ℓ1 − ℓ2 regularization in promoting sparsity of the scattered signals. Experimental results demonstrate that ℓ1 − ℓ2 can enhance the resolution of the reconstructed image over the classic ℓ1 approach.

Finally, motivated by the conjugate gradient method and adaptive momentum in the optimization literature, we propose a novel algorithmic improvement. The proposed algorithm applies to general minimization problems, although our numerical experiments are limited to ℓ1 and ℓ1 − ℓ2 regularization with a least-squares data fidelity term; they showcase faster convergence of the proposed algorithm over traditional methods. We also establish the convergence of our algorithm for a quadratic problem.
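The following is a minimal sketch, not the implementation used in the thesis, of how ADMM can be applied to the ℓ1 − ℓ2 regularized least-squares problem described above. It assumes a least-squares data fidelity term, a random Gaussian sensing matrix for the toy example, and illustrative parameter values (lam, rho, n_iter); the z-update uses a closed-form proximal operator for the ℓ1 − ℓ2 penalty known from the literature (Lou and Yan, 2018).

# A minimal sketch (not the thesis implementation) of ADMM applied to
#     min_x  0.5*||A x - b||_2^2 + lam*(||x||_1 - ||x||_2)
# Parameter values (lam, rho, n_iter) and the test setup are illustrative.
import numpy as np

def prox_l1_minus_l2(y, alpha):
    """Proximal operator of alpha*(||.||_1 - ||.||_2) evaluated at y."""
    max_abs = np.max(np.abs(y))
    if max_abs == 0.0:
        return np.zeros_like(y)
    if max_abs > alpha:
        # Soft-threshold, then rescale by (||z||_2 + alpha)/||z||_2.
        z = np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)
        return z * (np.linalg.norm(z) + alpha) / np.linalg.norm(z)
    # Otherwise a one-sparse solution at the largest-magnitude entry.
    x = np.zeros_like(y)
    i = int(np.argmax(np.abs(y)))
    x[i] = max_abs * np.sign(y[i])
    return x

def admm_l1_l2(A, b, lam=0.1, rho=1.0, n_iter=500):
    """ADMM for 0.5*||Ax - b||^2 + lam*(||x||_1 - ||x||_2); returns the sparse copy z."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Pre-factor the x-update system (A^T A + rho I) x = A^T b + rho (z - u).
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    for _ in range(n_iter):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = prox_l1_minus_l2(x + u, lam / rho)               # z-update
        u = u + x - z                                        # dual update
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m, k = 200, 80, 8                      # ambient dim, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    b = A @ x_true
    x_hat = admm_l1_l2(A, b, lam=1e-3)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Pre-factoring A^T A + rho*I once keeps each x-update to two triangular solves, which is the standard way to make ADMM efficient when the data fidelity term is least squares.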

Keywords

Mathematics
