AI Paper English F.o.R.

A blog for reading papers on artificial intelligence (AI) by applying the Frame of Reference (F.o.R.) method from the textbook 英語リーディング教本.

AMSGrad

AMSGrad | Abstract, Sentence 5

Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with “long-term memory” of past gradients, and propose new variants of the ADAM algorithm which not only fix the convergence issues but often also l…
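The proposed variant is AMSGrad, whose key change is to normalize by the running maximum of the second-moment estimate rather than the current moving average. Below is a minimal NumPy sketch of that idea (my own illustration in standard Adam notation; the helper name is hypothetical, and bias correction is omitted):

```python
import numpy as np

def amsgrad_step(x, g, m, v, v_hat, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AMSGrad-style update (illustrative sketch, not the authors' code)."""
    m = beta1 * m + (1 - beta1) * g          # EMA of gradients (short-term memory)
    v = beta2 * v + (1 - beta2) * g * g      # EMA of squared gradients
    v_hat = np.maximum(v_hat, v)             # the fix: a non-decreasing "long-term memory"
    x = x - lr * m / (np.sqrt(v_hat) + eps)  # normalize by the max, not the current EMA
    return x, m, v, v_hat
```

Because v_hat never decreases, a large gradient seen even once keeps suppressing the step size in that coordinate, which is exactly the "long-term memory" of past gradients the abstract refers to.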

AMSGrad | Abstract, Sentence 4

We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Sashank J. Reddi, et al., "On the …
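The "explicit example" is a one-dimensional online convex problem. Reproduced here from memory, so treat the constants as indicative rather than exact:

```latex
f_t(x) =
\begin{cases}
  C x & \text{if } t \bmod 3 = 1, \\
  -x  & \text{otherwise},
\end{cases}
\qquad x \in [-1, 1], \quad C > 2.
```

Over each three-step cycle the losses sum to (C - 2)x, so the optimum is x = -1; yet Adam's exponential average quickly forgets the rare large gradient C, and the iterates drift toward x = +1, the worst point of the domain.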

AMSGrad | Abstract, Sentence 3

We show that one cause for such failures is the exponential moving average used in the algorithms. Sashank J. Reddi, et al., "On the Convergence of Adam and Beyond" https://arxiv.org/abs/1904.09237 A "long-term memory" that keeps useful gradients from being forgotten…
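To see why the exponential moving average is the culprit: the contribution of any single squared gradient to the average decays geometrically, so a rare but informative gradient is quickly forgotten. A small self-contained sketch (my own illustration, not from the paper):

```python
# How fast an EMA of squared gradients forgets one large gradient.
beta2 = 0.99
v = 0.0
grads = [10.0] + [0.1] * 199          # one large, informative gradient, then many small ones
for t, g in enumerate(grads, start=1):
    v = beta2 * v + (1 - beta2) * g * g
    if t in (1, 50, 100, 200):
        print(f"t={t:3d}  v={v:.4f}")
# The big gradient's contribution shrinks by a factor of beta2 each step,
# so v soon reflects only the recent small gradients.
```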

AMSGrad | Abstract, Sentence 2

In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). Sashank J. Reddi, et al., "On the Con…

AMSGrad | Abstract, Sentence 1

Several recently proposed stochastic optimization methods that have been successfully used in training deep networks such as RMSPROP, ADAM, ADADELTA, NADAM are based on using gradient updates scaled by square roots of exponential moving av…
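For reference, the scaled update this sentence describes takes, in Adam's case, the following standard form (standard textbook notation, not quoted from the paper; bias correction omitted):

```latex
\begin{aligned}
m_t     &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \\
v_t     &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^{2}, \\
x_{t+1} &= x_t - \frac{\alpha\, m_t}{\sqrt{v_t} + \epsilon}.
\end{aligned}
```

Here v_t is the exponential moving average of squared past gradients, and its square root scales the gradient step; this shared structure is what the sentence attributes to RMSPROP, ADAM, ADADELTA, and NADAM.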