Minimizing the Difference of L1 and L2 norms with Applications

Series
Applied and Computational Mathematics Seminar
Time
Monday, January 29, 2018 - 1:55pm for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Prof. Yifei Lou – University of Texas at Dallas – https://sites.google.com/site/louyifei/Home
Organizer
Sung Ha Kang
A fundamental problem in compressive sensing (CS) is to reconstruct a sparse signal from a number of linear measurements far smaller than the physical dimension of the signal. Currently, CS theory favors incoherent systems, in which any two measurements are as uncorrelated as possible. In reality, however, many problems are coherent, and conventional methods such as L1 minimization do not work well in that regime. In this talk, I will present a novel non-convex approach that promotes sparsity by minimizing the difference of the L1 and L2 norms, denoted L1-L2. In addition to theoretical aspects of the L1-L2 approach, I will discuss two minimization algorithms: one based on the difference-of-convex (DC) function methodology, and the other on a proximal operator, which makes some L1 algorithms (e.g., ADMM) applicable to L1-L2. Experiments demonstrate that L1-L2 consistently improves on L1 and outperforms Lp (0 &lt; p &lt; 1) for highly coherent matrices. Some applications will be discussed, including super-resolution, image processing, and low-rank approximation.
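To illustrate the difference-of-convex idea mentioned in the abstract, the following is a minimal sketch (not the speaker's exact algorithm) of a DC iteration for the simple denoising model min_x 0.5*||x - b||^2 + lam*(||x||_1 - ||x||_2). At each step the concave term -lam*||x||_2 is linearized at the current iterate, leaving a convex L1 subproblem that soft-thresholding solves in closed form. The function names and the choice of the denoising model are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Elementwise proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_l1_minus_l2(b, lam, iters=100):
    """Illustrative DC-algorithm sketch for
         min_x 0.5*||x - b||^2 + lam*(||x||_1 - ||x||_2).
    Linearize -lam*||x||_2 at the current iterate via a subgradient
    q = x/||x||_2, then solve the remaining convex L1 subproblem
    by soft-thresholding b + lam*q."""
    x = b.copy()
    for _ in range(iters):
        nrm = np.linalg.norm(x)
        q = x / nrm if nrm > 0 else np.zeros_like(x)  # subgradient of ||x||_2
        x = soft_threshold(b + lam * q, lam)
    return x
```

For a nearly 1-sparse input such as b = [3.0, 0.1, -0.05] with lam = 0.5, plain soft-thresholding (L1) shrinks the large entry to 2.5, whereas the L1-L2 iteration returns it unbiased at 3.0, reflecting that L1-L2 vanishes on exactly 1-sparse vectors.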