List of articles for the year starting 2020-01-01
We first observe the influence of the non-linear activations functions. Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks" http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf デ…
Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the…
All these experimental results were obtained with new initialization or training mechanisms. Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks" http://proceedings.mlr.press/v9/gloro…
Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs les…
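The entries above quote Glorot and Bengio's paper on why deep networks trained poorly from random initialization before 2006, and how new initialization schemes helped. As a rough illustration (not part of the original articles), here is a minimal NumPy sketch of the normalized "Xavier" initialization that grew out of that work; the layer sizes are hypothetical.

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Normalized ("Xavier"/Glorot) initialization: W ~ U[-limit, limit] with
    limit = sqrt(6 / (fan_in + fan_out)), chosen to keep activation and
    gradient variances roughly constant across layers."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Example: weights for an illustrative 784 -> 256 -> 10 feedforward net.
W1 = xavier_uniform(784, 256)
W2 = xavier_uniform(256, 10)
print(W1.std(), W2.std())  # spread shrinks as the layers get wider
```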
As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either. Phillip Isola, et al., "Image-to-Image Translation with Conditio…
Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease …
We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adv…
This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adversarial Networks" https://arxi…
These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adversarial Networks" https://arxiv.org/a…
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adversarial Networks" https://arxiv.org/abs/1611.07004…
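The pix2pix entries above describe conditional adversarial networks that learn the loss function alongside the mapping. As a hedged sketch (not taken from the articles), the combined objective reported in the paper, an adversarial term plus an L1 reconstruction term, might be written like this in PyTorch; the tensors stand in for discriminator logits and images, and the weighting follows the paper's reported lambda = 100.

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(disc_fake_logits, fake_img, target_img, lambda_l1=100.0):
    """Generator objective: fool the discriminator (adversarial term) while
    staying close to the ground-truth image in L1."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    l1 = F.l1_loss(fake_img, target_img)
    return adv + lambda_l1 * l1

def pix2pix_discriminator_loss(disc_real_logits, disc_fake_logits):
    """Discriminator objective: classify (input, target) pairs as real and
    (input, generated) pairs as fake."""
    real = F.binary_cross_entropy_with_logits(
        disc_real_logits, torch.ones_like(disc_real_logits))
    fake = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return 0.5 * (real + fake)
```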
May the Force be with you. Star Wars This time it is not an AI-related paper but a side feature, from my favorite movie, Star Wars. Regarding the famous Star Wars line "May the Force be with you.", using the 英語リーディング教本 textbook's Frame of Reference (F.o.R.…
Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. David Silver, et al., "Mastering the game of Go without human knowledge"
This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. David Silver, et al., "Mastering the game of Go without human knowledge"
AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. David Silver, et al., "Mastering the game of Go without human knowledge"
Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. David Silver, et al., "Mastering the game of Go without human knowledge"
These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. David Silver, et al., "Mastering the game of Go without human knowledge"
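The AlphaGo Zero entries above describe the self-play setup in which the network is trained to predict its own move selections and the eventual winner. As a rough, hedged sketch of that training target (the paper's loss combines a squared error on the value, a cross-entropy on the policy, and L2 regularization), the PyTorch fragment below illustrates the idea; the tensor shapes and the regularization constant c are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def alphago_zero_loss(policy_logits, value, search_probs, outcome, params, c=1e-4):
    """Self-play training objective in the spirit of AlphaGo Zero:
    push the value head toward the game outcome z (squared error),
    push the policy head toward the MCTS visit distribution pi
    (cross-entropy), and add L2 weight regularization."""
    value_loss = F.mse_loss(value.squeeze(-1), outcome)
    policy_loss = -(search_probs * F.log_softmax(policy_logits, dim=-1)).sum(dim=-1).mean()
    l2 = c * sum((p ** 2).sum() for p in params)
    return value_loss + policy_loss + l2
```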