Researchers from Meta AI and Samsung Introduce Two New AI Methods, Prodigy and Resetting, for Learning Rate Adaptation that Improve upon the Adaptation Rate of the State-of-the-Art D-Adaptation Method

Modern machine learning relies heavily on optimization to tackle challenging problems in areas as varied as computer vision, natural language processing, and reinforcement learning. Whether an optimizer converges quickly to a high-quality solution depends largely on the learning rate chosen, and applications with numerous agents, each running its own optimizer, make learning-rate tuning even more burdensome. Hand-tuned optimizers can perform well, but tuning them typically demands expert skill and laborious trial and error. Therefore, in recent years, "parameter-free" adaptive learning rate methods, such as the D-Adaptation approach, have gained popularity for learning-rate-free optimization.

The research team from Samsung AI Center and Meta AI introduces two modifications of the D-Adaptation method, called Prodigy and Resetting, that improve its worst-case non-asymptotic convergence rate, leading to faster convergence and better optimization outputs.

Both modifications tweak the adaptive learning-rate rule to improve the algorithm's convergence speed and solution quality. To verify that the proposed adjustments are well-founded, the authors establish a lower bound that holds for any approach adapting to the distance-to-solution constant D, and they show that, relative to other methods with exponentially bounded iterate growth, the enhanced approaches are worst-case optimal up to constant factors. Extensive tests then demonstrate that the improved D-Adaptation methods adjust the learning rate rapidly, resulting in superior convergence rates and optimization outcomes.
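For orientation: the benchmark that parameter-free methods of this type chase is gradient descent with an optimally tuned step size, which attains a rate of O(GD/√n) after n steps on convex, G-Lipschitz problems, where D is the initial distance to the solution. As we read the paper's analysis, the price of not knowing D is an extra logarithmic factor in D/d_0 (with d_0 the initial distance estimate), and the new methods shrink that overhead to its square root; schematically, in the paper's notation:

  tuned gradient descent:           \mathcal{O}\big( GD/\sqrt{n} \big)
  Prodigy / resetting (improved):   \mathcal{O}\big( GD\sqrt{\log(D/d_0)}/\sqrt{n} \big)
  original D-Adaptation:            \mathcal{O}\big( GD\log(D/d_0)/\sqrt{n} \big)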


The team's key idea is to modify D-Adaptation's error term using Adagrad-like step sizes. The method can then confidently take larger steps while keeping the main error term intact, allowing the improved algorithm to converge more quickly. Because the algorithm slows down when the denominator in the step size grows too large, the researchers additionally place increasing weights on the gradients in that denominator.
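To make the mechanics concrete, below is a minimal sketch, in plain NumPy, of a Prodigy-style loop: a weighted Adagrad-like denominator plus a monotonically growing estimate d of the unknown distance-to-solution constant D. This is a simplified illustration of the idea described above, not the authors' exact algorithm; the d**2 weighting, the constants, and the helper name prodigy_sketch are our own choices, and the paper should be consulted for the precise weights and error terms.

import numpy as np

def prodigy_sketch(grad, x0, steps=1000, d0=1e-6, eps=1e-12):
    # Illustrative Prodigy-style loop (simplified; not the paper's exact method).
    # grad: function returning a (sub)gradient at a point; x0: starting point.
    x = x0.astype(float).copy()
    d = d0                # running lower estimate of D = ||x0 - x*||
    G2 = 0.0              # Adagrad-like sum of d_i^2 * ||g_i||^2
    numer = 0.0           # running sum of d_i^2 * <g_i, x0 - x_i>
    s = np.zeros_like(x)  # running sum of d_i^2 * g_i
    for _ in range(steps):
        g = grad(x)
        G2 += d**2 * g.dot(g)
        numer += d**2 * g.dot(x0 - x)
        s += d**2 * g
        # For any nonnegative weights, convexity gives
        #   D >= sum_i w_i <g_i, x0 - x_i> / ||sum_i w_i g_i||,
        # so the estimate below can only grow toward the true D.
        d = max(d, numer / (np.linalg.norm(s) + eps))
        # Larger d means larger steps; the accumulated
        # gradient denominator tempers them over time.
        x -= d**2 * g / (np.sqrt(G2) + eps)
    return x

# Tiny usage example on f(x) = 0.5 * ||x - 3||^2, so grad(x) = x - 3:
x_final = prodigy_sketch(lambda x: x - 3.0, x0=np.zeros(5), steps=2000)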

In their empirical investigation, the researchers applied the proposed techniques to convex logistic regression and deep learning problems. Across multiple studies, Prodigy adapted faster than any previously known approach, while D-Adaptation with resetting reached the same theoretical rate as Prodigy using a far simpler theory than either Prodigy or the original D-Adaptation. In addition, the proposed methods often outperform the D-Adaptation algorithm and can achieve test accuracy on par with hand-tuned Adam.
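For readers who want to try this, the sketch below shows what learning-rate-free training looks like in practice. It assumes the authors' released prodigyopt package (pip install prodigyopt) and uses a synthetic logistic-regression task purely for illustration; per the package's guidance, lr=1.0 is left at its default rather than swept.

import torch
from prodigyopt import Prodigy  # assumes the authors' released package

torch.manual_seed(0)
X = torch.randn(512, 20)               # synthetic features
y = (X @ torch.randn(20) > 0).float()  # synthetic binary labels

model = torch.nn.Linear(20, 1)
loss_fn = torch.nn.BCEWithLogitsLoss()
# No learning-rate sweep: lr=1.0 acts as a multiplier on the adaptive step.
optimizer = Prodigy(model.parameters(), lr=1.0)

for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")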

In summary, two newly proposed methods improve upon the state-of-the-art D-Adaptation approach to learning-rate adaptation. Extensive experimental evidence shows that Prodigy, a weighted variant of D-Adaptation, is more adaptive than existing approaches, while the second method, D-Adaptation with resetting, matches Prodigy's theoretical rate with a far less complex analysis.


Check out the Paper.




Dhanshree Shenwai is a computer science engineer with solid experience at FinTech companies covering the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier in today's evolving world.


