Fine-tuning adaptive stochastic optimizers: determining the optimal hyperparameter ϵ via gradient magnitude histogram analysis

dc.contributor.affiliationPontificia Universidad Católica del Perú. Departamento de Ingeniería Eléctrica
dc.contributor.authorSilva, G.
dc.contributor.authorRodríguez, P.
dc.date.accessioned2026-03-13T16:59:04Z
dc.date.issued2024
dc.description.abstractStochastic optimizers play a crucial role in the successful training of deep neural network models. To achieve optimal model performance, designers must carefully select both model and optimizer hyperparameters. However, this process is frequently demanding in terms of computational resources and processing time. While it is a well-established practice to tune the entire set of optimizer hyperparameters for peak performance, there is still a lack of clarity regarding the individual influence of hyperparameters mislabeled as “low priority”, including the safeguard factor ϵ and the decay rate β, in leading adaptive stochastic optimizers such as the Adam optimizer. In this manuscript, we introduce a new framework based on the empirical probability density function of the loss gradient magnitude, termed the “gradient magnitude histogram”, for a thorough analysis of adaptive stochastic optimizers and the safeguard hyperparameter ϵ. This framework reveals and justifies valuable relationships and dependencies among hyperparameters in connection to optimal performance across diverse tasks, such as classification, language modeling and machine translation. Furthermore, we propose a novel algorithm using gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard hyperparameter ϵ, surpassing the conventional trial-and-error methodology by establishing a worst-case search space that is two times narrower.
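For intuition about the abstract's central object, the minimal PyTorch sketch below computes a gradient magnitude histogram, i.e., the empirical distribution of per-element gradient magnitudes |g_i| for one mini-batch. This is an illustrative sketch, not the authors' released code; the model, data, bin count, and log-scale clamp are all placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder model and mini-batch (stand-ins, not from the paper).
model = nn.Linear(100, 10)
x = torch.randn(32, 100)
y = torch.randint(0, 10, (32,))

# One backward pass to populate the gradients of the loss.
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# Flatten every parameter gradient into one vector of magnitudes |g_i|.
grad_mags = torch.cat([p.grad.abs().flatten()
                       for p in model.parameters() if p.grad is not None])

# Histogram on a log10 scale, since gradient magnitudes typically span many
# orders of magnitude; the clamp avoids log10(0) for exactly-zero entries.
log_mags = torch.log10(grad_mags.clamp_min(1e-30))
hist = torch.histogram(log_mags, bins=50)
print(hist.hist)       # bin counts (the empirical histogram)
print(hist.bin_edges)  # bin edges in log10(|g_i|)
```

In the paper's framing, ϵ is meaningful relative to the scale where this histogram's mass concentrates, which is what motivates using it to bound the ϵ search space.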
dc.description.sponsorshipFunding: This manuscript is supported by Consejo Nacional de Ciencia, Tecnología e Innovación Tecnológica (CONCYTEC), and Fondo Nacional de Desarrollo Científico, Tecnológico y de Innovación Tecnológica (FONDECYT), under contract No. 174-2020-FONDECYT "Doctoral Programs in Peruvian Universities", and by the Army Research Office (ARO) under Grant W911NF-22-1-0296.
dc.identifier.doihttps://doi.org/10.1007/s00521-024-10302-2
dc.identifier.urihttp://hdl.handle.net/20.500.14657/206150
dc.language.isoeng
dc.publisherSpringer Science and Business Media Deutschland GmbH
dc.relation.ispartofurn:issn:0941-0643
dc.rightsinfo:eu-repo/semantics/closedAccess
dc.sourceNeural Computing and Applications; Vol. 36, No. 35 (2024)
dc.subjectHyperparameter
dc.subjectFine-tuning
dc.subjectStochastic optimizers
dc.subjectDeep neural network
dc.subject.ocdehttps://purl.org/pe-repo/ocde/ford#2.02.01
dc.titleFine-tuning adaptive stochastic optimizers: determining the optimal hyperparameter ϵ via gradient magnitude histogram analysis
dc.typehttp://purl.org/coar/resource_type/c_6501
dc.type.otherArticle
dc.type.versionhttps://vocabularies.coar-repositories.org/version_types/c_970fb48d4fbd8a85/
