Improved Solution to the ℓ0 Regularized Optimization Problem via Dictionary-Reduced Initial Guess


Publisher

Institute of Electrical and Electronics Engineers

Full-text access restricted to the PUCP community

Abstract

The ℓ0 regularized optimization (ℓ0-RO) problem is a nonconvex problem central to several applications such as sparse coding, dictionary learning, and compressed sensing. Iterative algorithms for the ℓ0-RO problem are only known to have local or subsequence convergence properties, i.e., the solution may be trapped at a saddle point or in an inferior local solution. Inspired by techniques used to improve the alternating optimization (AO) of nonconvex functions, we propose a simple yet effective two-step iterative method to improve the solution to the ℓ0-RO problem. Given an initial solution, we first find the vanilla solution to ℓ0-RO via a descent method (in particular, Nesterov's accelerated gradient descent), and then estimate a new initial solution by using a scaled version of the dictionary involved in the ℓ0-RO problem, considering only a reduced number of its atoms. Our proposed algorithm is empirically demonstrated to have the best tradeoff between accuracy and computation time when compared to state-of-the-art algorithms. Furthermore, due to its structure, our proposed algorithm can be directly applied to the convolutional formulation of ℓ0-RO.
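The two-step scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the "vanilla" solver is taken to be accelerated proximal gradient descent with the hard-thresholding proximal operator of the ℓ0 penalty (consistent with the Nesterov-type descent the abstract names), while the atom-selection rule, the scaling factor `scale`, and the fraction of atoms kept `keep` are assumptions made here for illustration, since the abstract does not specify them.

```python
import numpy as np

def hard_threshold(v, thr):
    # Proximal operator of c*||x||_0: zero out entries with magnitude below thr = sqrt(2c).
    out = v.copy()
    out[np.abs(v) < thr] = 0.0
    return out

def l0_apg(D, y, lam, x0, iters=200):
    # "Vanilla" l0-RO solver for min_x 0.5*||Dx - y||^2 + lam*||x||_0,
    # via proximal gradient descent with Nesterov momentum.
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    thr = np.sqrt(2.0 * lam / L)           # hard-threshold level for step size 1/L
    x, z, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        grad = D.T @ (D @ z - y)
        x_new = hard_threshold(z - grad / L, thr)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

def two_step(D, y, lam, x0, keep=0.5, scale=0.9):
    # Step 1: vanilla solution from the given initial guess.
    x1 = l0_apg(D, y, lam, x0)
    # Step 2 (hypothetical rules): keep the atoms most correlated with y,
    # scale the reduced dictionary, and solve the reduced problem to
    # build a new initial guess.
    idx = np.argsort(-np.abs(D.T @ y))[: int(keep * D.shape[1])]
    x_new0 = np.zeros_like(x1)
    x_new0[idx] = l0_apg(scale * D[:, idx], y, lam, x1[idx])
    # Re-run the vanilla solver from the improved initial guess.
    return l0_apg(D, y, lam, x_new0)
```

On a small synthetic sparse-coding instance (random Gaussian dictionary, sparse ground truth), the restarted solve typically reaches a solution with a much smaller residual than the all-zeros initialization would suggest, which is the effect the abstract attributes to the dictionary-reduced initial guess.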

Keywords

Computer science, Convergence (mathematics), Gradient descent, Algorithm, Optimization problem, Iterative method, Mathematics, Combinatorics, Artificial intelligence, Artificial neural network
