The final version of our work on sketched image denoising is available as a preprint: “Compressive learning for patch-based image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.
Author: ytraonmilin
Habilitation à diriger des recherches
My “Habilitation à diriger des recherches” thesis defense will take place on October 6th at 10am at the Institut de Mathématiques de Bordeaux. It can be followed online at the following link:
https://streaming.math.u-bordeaux.fr/soutenance-yann-traonmilin/
A. Baldanza at ORASIS 2021
Our student A. Baldanza (Rematch, IMB) will be presenting his work “Découpage automatique de vidéos de sport amateur par détection de personnes et analyse de contenu colorimétrique” (automatic cutting of amateur sport videos by person detection and colorimetric content analysis), by A. Baldanza, J-F Aujol, Y. Traonmilin and F. Alary, at the ORASIS 2021 conference (13th-17th September 2021).
New preprint
We have uploaded a new preprint “Sketched learning for image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.
Abstract: The Expected Patch Log-Likelihood algorithm (EPLL) and its extensions have shown good performance for image denoising. EPLL estimates a Gaussian mixture model (GMM) from a training database of image patches and uses the GMM as a prior for denoising. In this work, we adapt the sketching framework to carry out the compressive estimation of Gaussian mixture models with low-rank covariances for image patches. With this method, we estimate models from a compressive representation of the training data with a learning cost that does not depend on the number of items in the database. Our method adds another dimension reduction technique (low-rank modeling of covariances) to the existing sketching methods in order to reduce the dimension of model parameters and to add flexibility to the modeling. We test our model on synthetic data and real large-scale data for patch-based image denoising. We show that we can produce denoising performance close to that of models estimated from the original training database, opening the way for the study of denoising strategies using huge patch databases.
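To illustrate the core idea of sketching described above — compressing the whole training set into a fixed-size vector of averaged random Fourier features, then estimating model parameters from that vector alone — here is a minimal toy sketch in Python. This is not the paper's algorithm (no GMM, no low-rank covariances): it fits only the mean of a single Gaussian with identity covariance by matching its characteristic function to the empirical sketch, and all dimensions and parameter choices are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n, m = 2, 10000, 50  # data dim, database size, sketch size (m << n)
mu_true = np.array([1.5, -0.5])
X = rng.normal(size=(n, d)) + mu_true  # training "database" ~ N(mu_true, I)

# Sketch: average of random Fourier features over the whole database.
# Its size (m complex numbers) does not depend on n.
W = 0.3 * rng.normal(size=(m, d))      # random frequencies
z = np.exp(1j * X @ W.T).mean(axis=0)  # empirical sketch

def sketch_of_model(mu):
    # Characteristic function of N(mu, I) evaluated at the frequencies W
    return np.exp(1j * W @ mu - 0.5 * np.sum(W**2, axis=1))

def loss(mu):
    # Fit the model sketch to the empirical sketch; the database X
    # is no longer needed at this stage.
    return np.sum(np.abs(z - sketch_of_model(mu)) ** 2)

res = minimize(loss, x0=np.zeros(d))
# res.x is a close approximation of mu_true
```

The point of the toy example is that once `z` is computed, learning only touches `m` numbers, so the learning cost is independent of the database size `n` — the property the abstract highlights.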
Talk at iTWIST 2020
I will give a talk about our work on “Projected gradient descent for non-convex sparse spike estimation” at iTWIST 2020 in Nantes.
New preprint
We have uploaded a new preprint on non-convex methods for linear inverse problems with low-dimensional models, with Jean-François Aujol and Arthur Leclaire: “The basins of attraction of the global minimizers of non-convex inverse problems with low-dimensional models in infinite dimension”
Abstract: “Non-convex methods for linear inverse problems with low-dimensional models have emerged as an alternative to convex techniques. We propose a theoretical framework where both finite dimensional and infinite dimensional linear inverse problems can be studied. We show how the size of the basins of attraction of the minimizers of such problems is linked with the number of available measurements. This framework recovers known results about low-rank matrix estimation and off-the-grid sparse spike estimation, and it provides new results for Gaussian mixture estimation from linear measurements.”
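A minimal finite-dimensional illustration of the setting above — not the paper's infinite-dimensional framework — is projected gradient descent on a non-convex low-dimensional model set, here the set of k-sparse vectors (iterative hard thresholding). With enough Gaussian measurements and an initialization inside the basin of attraction (the origin, in this toy example), the iterates converge to the global minimizer. All dimensions and parameters below are hypothetical choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 100, 40, 3  # ambient dimension, number of measurements, sparsity

# k-sparse ground truth and Gaussian measurement operator
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true  # noiseless linear measurements

def project_sparse(x, k):
    """Projection onto the (non-convex) set of k-sparse vectors:
    keep the k largest entries in magnitude, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

# Projected gradient descent on 0.5 * ||y - A x||^2
step = 1.0 / np.linalg.norm(A, ord=2) ** 2  # conservative step size
x = np.zeros(n)  # initialization, assumed inside the basin of attraction
for _ in range(1000):
    x = project_sparse(x + step * A.T @ (y - A @ x), k)
```

With m = 40 measurements the descent recovers `x_true`; shrinking m (or starting far from the origin) makes the iteration fall into spurious fixed points, which is the basin-size/measurement-count trade-off the abstract refers to.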
My young researcher project was accepted for funding by the Agence Nationale de la Recherche. The project EFFIREG « Efficient Regularization of High-Dimensional Inverse Problems for Data Processing » is starting soon. Internship and PhD position offers coming soon!
Talk at GDR MIA thematic day
I will be presenting my latest work on non-convex methods at the GDR MIA thematic day in Toulouse: Non-convex sparse optimization, ENSEEIHT, Toulouse, 9th October 2020.
Our paper « Projected gradient descent for non-convex sparse spike estimation » was accepted for publication in IEEE Signal Processing Letters.
Our paper « The basins of attraction of the global minimizers of the non-convex sparse spikes estimation problem » was accepted for publication in Inverse Problems.