Categories
Paper

New preprint

We uploaded our new preprint:

"A theory of optimal convex regularization for low-dimensional recovery", Y. Traonmilin, R. Gribonval and S. Vaiter

Abstract: We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint. Given a model, we ask ourselves what is the “best” convex regularizer to perform its recovery. To answer this question, we define an optimal regularizer as a function that maximizes a compliance measure with respect to the model. We introduce and study several notions of compliance. We give analytical expressions for compliance measures based on the best-known recovery guarantees with the restricted isometry property. These expressions allow us to show the optimality of the ℓ1-norm for sparse recovery and of the nuclear norm for low-rank matrix recovery for these compliance measures. We also investigate the construction of an optimal convex regularizer using the example of sparsity in levels.
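As a toy illustration of the recovery setup described in the abstract (our own minimal sketch, not code from the paper): a sparse vector is recovered from under-determined Gaussian measurements by minimizing the ℓ1-norm subject to an exact data fit constraint, written as a linear program and solved with SciPy.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, s = 40, 20, 3                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                            # noiseless under-determined measurements

# Basis pursuit: min ||x||_1  s.t.  Ax = y,
# rewritten as an LP in the split variables (x+, x-), both nonnegative.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

For this sparsity level and number of Gaussian measurements, the ℓ1 minimizer typically coincides with the true sparse vector, which is the kind of guarantee the compliance measures in the paper quantify.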


New preprint

The final version of our work on sketched image denoising is available as a preprint: “Compressive learning for patch-based image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.


New preprint

We have uploaded a new preprint, “Sketched learning for image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.

Abstract: The Expected Patch Log-Likelihood algorithm (EPLL) and its extensions have shown good performance for image denoising. It estimates a Gaussian mixture model (GMM) from a training database of image patches and uses the GMM as a prior for denoising. In this work, we adapt the sketching framework to carry out the compressive estimation of Gaussian mixture models with low-rank covariances for image patches. With this method, we estimate models from a compressive representation of the training data with a learning cost that does not depend on the number of items in the database. Our method adds another dimension reduction technique (low-rank modeling of covariances) to the existing sketching methods in order to reduce the dimension of model parameters and to add flexibility to the modeling. We test our model on synthetic data and on real large-scale data for patch-based image denoising. We show that we can produce denoising performance close to that of models estimated from the original training database, opening the way for the study of denoising strategies using huge patch databases.
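To give a flavour of the sketching idea (a generic illustration under our own toy setup, not the code of the paper): the whole training set is compressed into a single vector of generalized moments — here empirical averages of random Fourier features — whose size is independent of the number of patches, and which can be computed in one streaming pass.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, m = 16, 10_000, 64                  # patch dim, dataset size, sketch size

X = rng.standard_normal((N, d))           # stand-in for a database of image patches
Omega = rng.standard_normal((m, d))       # random frequency vectors

def sketch(X, Omega):
    # Empirical average of random Fourier features: a single m-dimensional
    # vector summarizing the whole dataset, computed in one pass.
    return np.exp(1j * X @ Omega.T).mean(axis=0)

z = sketch(X, Omega)

# Sketches of disjoint chunks combine by (weighted) averaging, so the
# compression is streamable and parallelizable over the database.
z1 = sketch(X[: N // 2], Omega)
z2 = sketch(X[N // 2 :], Omega)
print(np.allclose(z, (z1 + z2) / 2))
```

Model parameters (here, a GMM with low-rank covariances) are then fitted to the sketch rather than to the raw patch database, which is what makes the learning cost independent of the number of items.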


New preprint

We have uploaded a new preprint on non-convex methods for linear inverse problems with low-dimensional models, with Jean-François Aujol and Arthur Leclaire: “The basins of attraction of the global minimizers of non-convex inverse problems with low-dimensional models in infinite dimension”.

Abstract: “Non-convex methods for linear inverse problems with low-dimensional models have emerged as an alternative to convex techniques. We propose a theoretical framework where both finite dimensional and infinite dimensional linear inverse problems can be studied. We show how the size of the basins of attraction of the minimizers of such problems is linked with the number of available measurements. This framework recovers known results about low-rank matrix estimation and off-the-grid sparse spike estimation, and it provides new results for Gaussian mixture estimation from linear measurements.”
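As a rough illustration of the non-convex approach the abstract refers to (a generic finite-dimensional low-rank example of our own, not the algorithm of the paper): projected gradient descent alternates a gradient step on the data fit term with a projection onto the low-dimensional model — here the set of rank-r matrices, projected via a truncated SVD.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 10, 2, 80                       # matrix size, rank, number of measurements

# Random rank-r ground truth and Gaussian linear measurements of it
U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
X_true = U @ V.T
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
y = A @ X_true.ravel()

def proj_rank(X, r):
    # Projection onto the rank-r model set via truncated SVD
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Projected gradient descent on 0.5 * ||A vec(X) - y||^2,
# with a conservative step size for stability
X = np.zeros((n, n))
for _ in range(1000):
    grad = (A.T @ (A @ X.ravel() - y)).reshape(n, n)
    X = proj_rank(X - 0.15 * grad, r)

print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

The question studied in the paper is, informally, how large the region of initializations from which such an iteration reaches the global minimizer is, and how that size depends on the number of measurements m.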


Accepted!

Our paper “Projected gradient descent for non-convex sparse spike estimation” was accepted for publication in IEEE Signal Processing Letters.


Accepted!

Our paper “The basins of attraction of the global minimizers of the non-convex sparse spikes estimation problem” was accepted for publication in Inverse Problems.