Adaptive gradient methods are arguably the most successful optimization algorithms for neural network training. While it is well known that adaptive gradient methods can achieve better dimensional dependence than stochastic gradient descent (SGD) under favorable geometry for stochastic convex optimization, the theoretical justification for their success in stochastic non-convex optimization remains elusive. In this paper, we aim to close this gap by analyzing the convergence rates of AdaGrad measured by the $\ell_1$-norm of the gradient. Specifically, when the objective has an $L$-Lipschitz gradient and the stochastic gradient variance is bounded by $\sigma^2$, we prove a worst-case convergence rate of $\tilde{\mathcal{O}}\big(\frac{\sqrt{d}L}{\sqrt{T}} + \frac{\sqrt{d}\sigma}{T^{1/4}}\big)$, where $d$ is the dimension of the problem. We also present a lower bound of $\Omega(\frac{\sqrt{d}}{\sqrt{T}})$ for minimizing the gradient $\ell_1$-norm in the deterministic setting, showing the tightness of our upper bound in the noiseless case. Moreover, under more fine-grained assumptions on the smoothness structure of the objective and the gradient noise, and under favorable gradient $\ell_1/\ell_2$ geometry, we show that AdaGrad can potentially shave a factor of $\sqrt{d}$ compared to SGD. To the best of our knowledge, this is the first result for adaptive gradient methods that demonstrates a provable gain over SGD in the non-convex setting.
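For reference, the analysis above concerns the coordinate-wise (diagonal) form of AdaGrad. A standard statement of that update, with step size $\eta > 0$, damping constant $\epsilon \ge 0$, and stochastic gradient $g_t$ as assumed notation (these symbols are not fixed by the abstract), is
$$v_{t,i} = v_{t-1,i} + g_{t,i}^2, \qquad x_{t+1,i} = x_{t,i} - \frac{\eta}{\sqrt{v_{t,i}} + \epsilon}\, g_{t,i}, \qquad i = 1,\dots,d,$$
where $v_{0,i} = 0$ and $g_t$ is an unbiased stochastic gradient of the objective at $x_t$; the per-coordinate step sizes $\eta/(\sqrt{v_{t,i}}+\epsilon)$ are what the $\ell_1$-norm guarantees above are tailored to.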