Categories
Paper Talk

SampTA Paper

Our paper “Disentangled latent representations of images with atomic autoencoders” will be presented at the SampTA conference by A. Newson.

Abstract: “We present the atomic autoencoder architecture, which decomposes an image as the sum of elementary parts that are parametrized by simple separate blocks of latent codes. We show that this simple architecture is induced by the definition of a general atomic low-dimensional model of the considered data. We also highlight the fact that the atomic autoencoder achieves disentangled low-dimensional representations under minimal hypotheses. Experiments show that its implementation with deep neural networks successfully learns disentangled representations on two different examples: images constructed with simple parametric curves and images of filtered off-the-grid spikes.”
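The decomposition described in the abstract can be pictured with a minimal structural sketch. Everything below is a hypothetical toy (a shared linear decoder, made-up sizes), not the paper's deep network: each small block of the latent code is decoded into one elementary image part, and the image is the sum of the parts.

```python
import numpy as np

# Toy "atomic" decoder: K latent blocks, each mapped by the SAME simple
# decoder to one elementary part; the reconstruction is the sum of the parts.
rng = np.random.default_rng(0)
H = W = 16       # toy image size
K = 3            # number of atoms, i.e. latent blocks
d = 4            # dimension of one latent block

D = rng.standard_normal((d, H * W)) * 0.1   # shared toy decoder weights

def decode_atom(z_k):
    """Map one latent block to one elementary image part."""
    return np.tanh(z_k @ D).reshape(H, W)

def atomic_decode(z):
    """z has shape (K, d); the reconstruction is the sum of K decoded parts."""
    parts = np.stack([decode_atom(z[k]) for k in range(K)])
    return parts.sum(axis=0), parts

z = rng.standard_normal((K, d))
image, parts = atomic_decode(z)

# Changing one latent block changes only its own part and leaves the other
# atoms untouched -- the disentanglement property the abstract refers to.
z2 = z.copy()
z2[0] += 1.0
image2, parts2 = atomic_decode(z2)
```

The point of the structure is that each block of the latent code can only influence its own atom, which is what makes the representation disentangled by construction.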

Categories
Paper

New preprint

We uploaded the following preprint on the geometry of non-convex sparse spike estimation:

On strong basins of attraction for non-convex sparse spike estimation: upper and lower bounds, Y. Traonmilin, J.F. Aujol, A. Leclaire and P.J. Bénard. (EFFIREG)

Abstract: “In this article, we study the size of strong basins of attraction for the non-convex sparse spike estimation problem. We first extend previous results to obtain a lower bound on the size of sets where gradient descent converges with a linear rate to the minimum of the non-convex objective functional. We then give an upper bound showing that the dependency of the lower bound on the number of measurements accurately reflects the true size of basins of attraction for random Gaussian Fourier measurements. These theoretical results are confirmed by experiments.”
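A basin of attraction can be illustrated on a much simpler problem than the one studied in the preprint. In this hypothetical toy, a single filtered spike is observed and gradient descent is run on the non-convex least-squares objective F(t) = 0.5·‖φ(t) − y‖², where φ samples a Gaussian bump at position t: an initialization close enough to the true position converges, a far one stalls on the flat part of the landscape.

```python
import numpy as np

# Toy 1D spike-position estimation (illustration only, not the paper's setup).
x = np.linspace(-1.0, 1.0, 200)
sigma = 0.1

def phi(t):
    """A Gaussian bump centered at t, sampled on the grid x."""
    return np.exp(-((x - t) ** 2) / (2.0 * sigma**2))

def grad_F(t, y):
    """Gradient of F(t) = 0.5 * ||phi(t) - y||^2 with respect to t."""
    residual = phi(t) - y
    dphi_dt = phi(t) * (x - t) / sigma**2   # derivative of the bump w.r.t. t
    return residual @ dphi_dt

t_true = 0.3
y = phi(t_true)

t = 0.2                 # initialization inside the basin of attraction
for _ in range(100):
    t -= 1e-3 * grad_F(t, y)

t_far = -0.6            # too far: the bumps barely overlap, descent stalls
for _ in range(100):
    t_far -= 1e-3 * grad_F(t_far, y)

print(abs(t - t_true), abs(t_far - t_true))
```

The close initialization converges (with a linear rate near the minimum), while the far one hardly moves, which is the behaviour that the lower and upper bounds of the preprint quantify.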

Categories
Uncategorized

Compressive learning of deep regularization for denoising

Hui Shi will be presenting “Compressive learning of deep regularization for denoising” at SSVM 2023.

Abstract: “Solving ill-posed inverse problems can be done accurately if a regularizer well adapted to the nature of the data is available. Such a regularizer can be systematically linked to the distribution of the data itself through the maximum a posteriori Bayesian framework. Recently, regularizers designed with the help of deep neural networks have achieved impressive success. Such regularizers are typically learned from voluminous training data. To reduce the computational burden of this task, we propose to adapt the compressive learning framework to the learning of regularizers parametrized by deep neural networks (DNNs). Our work shows the feasibility of batchless learning of regularizers from a compressed dataset. To achieve this, we propose an approximation of the compression operator that can be calculated explicitly for the task of learning a regularizer by DNN. We show that the proposed regularizer is capable of modeling complex regularity priors and can be used to solve the denoising inverse problem.”
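The compression step behind compressive learning can be sketched generically. The snippet below is a hedged toy, not the paper's operator (the paper proposes an approximation adapted to DNN regularizers): a dataset of N patches is summarized by a single m-dimensional sketch, the empirical average of random Fourier features, whose size does not depend on N.

```python
import numpy as np

# Generic random-Fourier-feature sketch of a dataset (toy sizes).
rng = np.random.default_rng(0)
N, d, m = 10_000, 8, 64                 # N samples in R^d, sketch size m
Omega = rng.standard_normal((m, d))     # random frequencies, drawn once

def sketch(X):
    """Empirical average of the features exp(i <omega_j, x>), j = 1..m."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

X1 = rng.standard_normal((N, d))        # stand-in for a dataset of patches
X2 = rng.standard_normal((N, d))        # an independent dataset, same law

s1, s2 = sketch(X1), sketch(X2)
print(np.linalg.norm(s1 - s2))
```

Two datasets drawn from the same distribution produce nearly the same sketch: the sketch captures the distribution rather than the samples, which is what allows a model (here, a regularizer) to be learned from the sketch alone, without revisiting the data.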

Categories
Event

Workshop on “Imaging inverse problems – regularization, low dimensional models and applications”

I am organizing a workshop on Imaging inverse problems – regularization, low dimensional models and applications on March 23rd. Check https://gdr-mia.math.cnrs.fr/events/journee_problemes_inverses2023/ if you want to participate.

Categories
Paper

New preprint

A preprint of the work on off-the-grid super-resolution by our student P.J. Bénard is available.

Fast off-the-grid sparse recovery with over-parametrized projected gradient descent, P.J. Bénard, Y. Traonmilin and J.F. Aujol

Abstract: “We consider the problem of recovering off-the-grid spikes from Fourier measurements. Successful methods such as sliding Frank-Wolfe and continuous orthogonal matching pursuit (OMP) iteratively add spikes to the solution and then perform a descent on all parameters at each iteration, which is costly when the number of spikes is large. In 2D, it was shown that performing a projected gradient descent (PGD) from a gridded over-parametrized initialization was faster than continuous orthogonal matching pursuit. In this paper, we propose an off-the-grid over-parametrized initialization of the PGD based on OMP that avoids grids entirely and gives faster results in 3D.”
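The over-parametrization idea can be illustrated in a hypothetical 1D toy, much simpler than the paper's 3D setting with projected descent and an OMP-based initialization: a signal made of 2 filtered spikes is fitted with more particles (8) than true spikes, by plain gradient descent on all amplitudes and positions at once. The positions stay continuous throughout; no grid is involved.

```python
import numpy as np

# Over-parametrized descent on a toy 1D spike-fitting problem.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)
sigma = 0.1

def bump(t):
    return np.exp(-((x - t) ** 2) / (2.0 * sigma**2))

# Ground truth: 2 spikes.  Model: 8 particles (a_k, t_k), randomly initialized.
y = 1.0 * bump(-0.3) - 0.5 * bump(0.4)
a = 0.1 * rng.standard_normal(8)
t = rng.uniform(-1.0, 1.0, 8)

def loss(a, t):
    model = sum(a_k * bump(t_k) for a_k, t_k in zip(a, t))
    return 0.5 * np.mean((model - y) ** 2)

loss_init = loss(a, t)
step = 0.05
for _ in range(3000):
    model = sum(a_k * bump(t_k) for a_k, t_k in zip(a, t))
    r = model - y
    grad_a = np.array([np.mean(r * bump(t_k)) for t_k in t])
    grad_t = np.array([a_k * np.mean(r * bump(t_k) * (x - t_k) / sigma**2)
                       for a_k, t_k in zip(a, t)])
    a -= step * grad_a
    t -= step * grad_t

print(loss_init, loss(a, t))
```

Redundant particles are free to shrink their amplitudes toward zero while the useful ones drift to the true positions, which is why over-parametrization enlarges the set of initializations from which the descent makes progress.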

Categories
Paper

New preprint

Our student Axel Baldanza uploaded a new preprint: “Piecewise linear prediction model for action tracking in sports”.

Abstract: “Recent tracking methods in professional team sports reach very high accuracy by tracking the ball and players. However, it remains difficult for these methods to perform accurate real-time tracking in amateur acquisition conditions, where the vertical position or orientation of the camera is not controlled and cameras use heterogeneous sensors. This article presents a method for tracking interesting content in an amateur sport game by analyzing player displacements. Defining the optical flow of the foreground in the image as the player motions, we propose a piecewise linear supervised learning model for predicting the camera global motion needed to follow the action.”
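A piecewise linear predictor of the kind mentioned in the abstract can be sketched as a linear spline. This is a hypothetical toy, not the paper's model: the camera motion is predicted from a scalar summary u of the foreground optical flow, using a linear model with hinge features max(u − knot, 0), one extra slope per knot.

```python
import numpy as np

# Linear-spline (piecewise linear) regression fitted by least squares.
knots = np.array([-0.5, 0.0, 0.5])      # breakpoints between linear pieces

def features(u):
    """[1, u, max(u - knot_1, 0), ...]: one extra slope per knot."""
    u = np.asarray(u, dtype=float)
    return np.column_stack([np.ones_like(u), u,
                            *[np.maximum(u - k, 0.0) for k in knots]])

# Toy training data: the target motion |u| is itself piecewise linear with a
# breakpoint at 0, so the model class contains it and the fit is exact.
u_train = np.linspace(-1.0, 1.0, 101)
y_train = np.abs(u_train)

w, *_ = np.linalg.lstsq(features(u_train), y_train, rcond=None)
pred = features(u_train) @ w
print(np.max(np.abs(pred - y_train)))   # essentially zero (machine precision)
```

Each interval between knots gets its own slope, so the model can switch regimes, for example reacting differently to small and large player motions, while remaining a single linear least-squares problem to train.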

Categories
Paper

New preprint

We uploaded our new preprint:

“A theory of optimal convex regularization for low-dimensional recovery”, Y. Traonmilin, R. Gribonval and S. Vaiter

Abstract: “We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data fit constraint. Given a model, we ask ourselves what the ‘best’ convex regularizer is to perform its recovery. To answer this question, we define an optimal regularizer as a function that maximizes a compliance measure with respect to the model. We introduce and study several notions of compliance. We give analytical expressions for compliance measures based on the best-known recovery guarantees with the restricted isometry property. These expressions make it possible to show the optimality of the ℓ1-norm for sparse recovery and of the nuclear norm for low-rank matrix recovery for these compliance measures. We also investigate the construction of an optimal convex regularizer using the example of sparsity in levels.”

Categories
Talk

Talk at FNRS Contact Group on “Wavelets and Applications” Workshop

I will be giving a talk on the basins of attraction of non-convex methods at the

2021 FNRS Contact Group on “Wavelets and Applications” Workshop

on the 14th of December.

Categories
Event

Minisymposium at SIAM IS 22

With Luca Calatroni and Paul Escande, we are organizing a two-part minisymposium at SIAM IS 22:

Non-Convex Optimization Methods for Inverse Problems in Imaging: From Theory to Applications

Come join us on Thursday, March 24, 2022!

Abstract:

Over the past decade, there has been growing interest in the imaging community in non-convex sparse optimization methods. These approaches are now ubiquitous in a plethora of real-world applications, and many recent theoretical contributions have proven their success. With respect to sparse regularization models, non-convexity arises naturally when dealing with efficient approximations of the ℓ0 pseudo-norm and/or with joint optimization problems where non-convexity is the by-product of a cross-regularization term and/or non-convex data models. The objective of this minisymposium is to gather experts in the field of non-convex regularization methods for inverse imaging problems to provide an overview of the field, ranging from recent theoretical results to the design of numerical optimization methods that can be used effectively in a variety of applications.

With respect to theoretical developments, this minisymposium will focus on convergence guarantees and derivation of convergence rates of non-convex methods and their relation to the specific structure of imaging problems and low-dimensional models. A selection of contributions dealing with the design of efficient algorithms for new non-convex formulations of imaging problems will then be presented. Finally, some presentations on the actual use of these methods in real applications such as microscopic imaging, medical imaging and sparse signal recovery will be given.

Categories
Paper

New preprint

The final version of our work on sketched image denoising is available as a preprint: “Compressive learning for patch-based image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.