Model selection with low complexity priors

Wed, 09/11/2016 - 11:30

Speaker: 
Dr Mohammad Golbabaee
Affiliation: 
University of Edinburgh
Synopsis: 

Regularization plays a key role in solving ill-posed inverse problems, where the number of measurements is smaller than the ambient dimension of the signal to be estimated. In this setting, the general approach is to solve a regularized optimization problem that combines a data-fidelity (fitting) term with a regularization penalty promoting the intrinsic low-dimensional structure of the signal.
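
For concreteness, a generic instance of such a problem (notation assumed here for this summary, not taken from the abstract) reads

\[ \min_{x}\; \tfrac{1}{2}\,\| y - \Phi x \|_2^2 + \lambda\, J(x), \]

where y denotes the partial measurements, \Phi the measurement operator, J a convex regularizer promoting the low-dimensional model (for instance the L1 norm for sparsity), and \lambda > 0 balances data fidelity against the prior.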

In this talk I present a general framework for capturing this low-dimensional structure using Partly Smooth Gauge (PSG) functions. This class of convex regularizers encompasses many popular examples, such as the L1 norm (sparsity), the mixed L1-L2 norm (group sparsity), and the L-infinity norm (anti-sparsity). We show that the set of PSG functions is closed under addition and under pre-composition by a linear operator, which allows the framework to cover mixed regularizations (e.g. sparse PCA, elastic net) as well as so-called analysis-type priors (e.g. total variation, fused Lasso). Our main result is a unified sharp analysis of exact and robust recovery, from partial measurements, of the low-dimensional subspace model associated with the signal.
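
As an illustration of the sparsity case mentioned above (not the analysis developed in the talk), the L1-regularized instance of the generic problem can be solved with a simple proximal gradient (ISTA) iteration; the names below (Phi, y, lam) are placeholders assumed for this sketch.

# Illustrative sketch, not the speaker's method: proximal gradient (ISTA) for
# the L1-regularized least-squares problem min_x 0.5*||y - Phi x||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1; shrinks entries towards zero (promotes sparsity).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(Phi, y, lam, n_iters=500):
    x = np.zeros(Phi.shape[1])
    L = np.linalg.norm(Phi, 2) ** 2            # Lipschitz constant of the smooth part
    for _ in range(n_iters):
        grad = Phi.T @ (Phi @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy compressed-sensing setup: fewer measurements (m = 50) than unknowns (n = 200).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 1.0
y = Phi @ x_true
x_hat = ista(Phi, y, lam=0.1)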

Biography: 

Mohammad Golbabaee received his PhD degree (2012) in Computer, Communication and Information Sciences from the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, under the direction of Prof. Pierre Vandergheynst. His thesis focused on compressive sampling and source separation strategies for multichannel data. In 2013, he worked as a CNRS researcher at the Applied Mathematics Research Center (CEREMADE), Université Paris-Dauphine, France, under the direction of Prof. Gabriel Peyré. In 2014, he joined the DSP team at Rice University, Houston, Texas, as an SNSF postdoctoral fellow under the direction of Prof. Richard Baraniuk. He is currently with the Institute for Digital Communications, University of Edinburgh, working with Prof. Mike Davies on the projects “compressive quantitative MR imaging” and “randomized methods for large-scale optimization”.

His research interests include signal and image processing, machine learning, low-dimensional signal models, compressed sensing, source separation, and optimization algorithms for large-scale machine learning, both theoretical and applied, with applications in medical and biological imaging (magnetic resonance fingerprinting, mass spectroscopy), remote sensing (hyperspectral imaging), multi-array cameras, and low-power/low-complexity sensor networks.
