
Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions


Basic Information

DOI:
--
Publication date:
2024
Journal:
Impact factor:
--
Corresponding author:
Aryan Mokhtari
CAS ranking:
Document type:
--
Authors: Devyani Maladkar; Ruichen Jiang; Aryan Mokhtari
Research field: --
MeSH terms: --
Keywords: --
Source link: PubMed detail page

Abstract

Adaptive gradient methods are arguably the most successful optimization algorithms for neural network training. While it is well-known that adaptive gradient methods can achieve better dimensional dependence than stochastic gradient descent (SGD) under favorable geometry for stochastic convex optimization, the theoretical justification for their success in stochastic non-convex optimization remains elusive. In this paper, we aim to close this gap by analyzing the convergence rates of AdaGrad measured by the $\ell_1$-norm of the gradient. Specifically, when the objective has $L$-Lipschitz gradient and the stochastic gradient variance is bounded by $\sigma^2$, we prove a worst-case convergence rate of $\tilde{\mathcal{O}}(\frac{\sqrt{d}L}{\sqrt{T}} + \frac{\sqrt{d}\sigma}{T^{1/4}})$, where $d$ is the dimension of the problem. We also present a lower bound of $\Omega(\frac{\sqrt{d}}{\sqrt{T}})$ for minimizing the gradient $\ell_1$-norm in the deterministic setting, showing the tightness of our upper bound in the noiseless case. Moreover, under more fine-grained assumptions on the smoothness structure of the objective and the gradient noise and under favorable gradient $\ell_1/\ell_2$ geometry, we show that AdaGrad can potentially shave a factor of $\sqrt{d}$ compared to SGD. To the best of our knowledge, this is the first result for adaptive gradient methods that demonstrates a provable gain over SGD in the non-convex setting.
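To make the algorithm under analysis concrete, the sketch below implements diagonal (coordinate-wise) AdaGrad and tracks the $\ell_1$-norm of the sampled gradients, the progress measure used in the abstract. It is only an illustration under assumed defaults: the constant step size `eta`, the damping term `eps`, the noisy quadratic test objective, and the function names are hypothetical choices, not the exact variant or constants analyzed in the paper.

```python
import numpy as np


def adagrad(grad_fn, x0, T, eta=0.1, eps=1e-8, rng=None):
    """Minimal sketch of diagonal (coordinate-wise) AdaGrad.

    `grad_fn(x, rng)` returns a stochastic gradient at `x`. The step-size
    constant `eta` and damping term `eps` are illustrative defaults, not
    the schedule or constants analyzed in the paper.
    """
    x = np.asarray(x0, dtype=float).copy()
    accum = np.zeros_like(x)  # per-coordinate sum of squared gradients
    best_l1 = np.inf
    for _ in range(T):
        g = grad_fn(x, rng)
        accum += g * g
        x -= eta * g / (np.sqrt(accum) + eps)  # coordinate-wise adaptive step
        # l1-norm of the sampled gradient: a noisy proxy for the measure above
        best_l1 = min(best_l1, np.abs(g).sum())
    return x, best_l1


if __name__ == "__main__":
    # Hypothetical test problem: a noisy quadratic whose gradient is A @ x + noise.
    rng = np.random.default_rng(0)
    d, sigma = 10, 0.1
    A = np.diag(np.linspace(1.0, 5.0, d))
    grad_fn = lambda x, r: A @ x + sigma * r.standard_normal(d)
    x_final, best_l1 = adagrad(grad_fn, x0=np.ones(d), T=5000, rng=rng)
    print(f"smallest observed stochastic-gradient l1-norm: {best_l1:.4f}")
```

The per-coordinate scaling by accumulated squared gradients is what makes the step adaptive: coordinates whose gradients have historically been small receive proportionally larger effective steps, which is the usual intuition behind the favorable $\ell_1/\ell_2$ geometry discussed in the abstract.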


Aryan Mokhtari
Mailing address:
--
Affiliation:
--
Email:
--