
Accelerated Gradient Descent via Long Steps

Benjamin Grimmer, Kevin Shu, Alex L. Wang
Sep 2023
Abstract
Recently, Grimmer [1] showed that for smooth convex optimization, periodically utilizing longer steps improves gradient descent's state-of-the-art O(1/T) convergence guarantee by constant factors, and conjectured that an accelerated rate strictly faster than O(1/T) could be possible. Here we prove such a big-O gain, establishing gradient descent's first accelerated convergence rate in this setting. Namely, we prove an O(1/T^{1.02449}) rate for smooth convex minimization by utilizing a nonconstant, nonperiodic sequence of increasingly large stepsizes. It remains open whether one can achieve the O(1/T^{1.178}) rate conjectured by Das Gupta et al. [2] or the optimal gradient method rate of O(1/T^2). Our theory also yields big-O convergence rate accelerations from long steps for strongly convex optimization, similar to but somewhat weaker than those concurrently developed by Altschuler and Parrilo [3].
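
To make the setting concrete, below is a minimal sketch of gradient descent with a prescribed stepsize schedule, x_{k+1} = x_k - (h_k/L) grad f(x_k), run on a smooth convex quadratic. The function gradient_descent_long_steps, the test problem, and the placeholder schedule (which mixes short steps with occasional steps larger than 2/L) are illustrative assumptions only; the specific nonconstant, nonperiodic schedule that yields the O(1/T^{1.02449}) rate is given in the paper and is not reproduced here.

import numpy as np

def gradient_descent_long_steps(grad_f, x0, stepsizes, L):
    """Run gradient descent x_{k+1} = x_k - (h_k / L) * grad_f(x_k).

    stepsizes is a sequence of normalized stepsizes h_k; the accelerated
    rate in the paper uses a particular nonconstant, nonperiodic schedule
    with increasingly large steps (not reproduced here).
    """
    x = np.asarray(x0, dtype=float)
    for h in stepsizes:
        x = x - (h / L) * grad_f(x)
    return x

# Illustrative use on the smooth convex quadratic f(x) = 0.5 * ||A x - b||^2.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.norm(A.T @ A, 2)        # smoothness constant of f
grad_f = lambda x: A.T @ (A @ x - b)  # gradient of f

# Placeholder schedule for demonstration only (NOT the paper's schedule):
# every fourth step takes a "long" step h = 3 > 2.
T = 63
schedule = [1.5 if (k + 1) % 4 else 3.0 for k in range(T)]
x_T = gradient_descent_long_steps(grad_f, x0=np.zeros(2), stepsizes=schedule, L=L)

Note that individual long steps with h > 2 can increase the objective; the point of the analysis is that a suitably chosen sequence of such steps still yields faster worst-case convergence over the whole trajectory.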