We first observe the influence of the non-linear activation functions. Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks" http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf
Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better understand these recent relative successes and help design better algorithms in the…
All these experimental results were obtained with new initialization or training mechanisms. Xavier Glorot and Yoshua Bengio, "Understanding the difficulty of training deep feedforward neural networks" http://proceedings.mlr.press/v9/gloro…
Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to successfully train them, with experimental results showing the superiority of deeper vs les…
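The "new initialization mechanism" this paper proposes is now widely known as Xavier (Glorot) initialization. A minimal numpy sketch of the uniform variant follows; the layer sizes and function name are illustrative, not from the paper:

```python
import numpy as np

def xavier_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform initialization: draw weights from
    U[-a, a] with a = sqrt(6 / (fan_in + fan_out)), chosen so the
    variance of activations and gradients stays roughly constant
    from layer to layer."""
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Example 256 -> 128 layer; the empirical weight variance should be
# close to the theoretical 2 / (fan_in + fan_out).
W = xavier_uniform(256, 128)
print(W.shape, float(W.var()))
```

The variance of U[-a, a] is a²/3 = 2/(fan_in + fan_out), which is exactly the compromise the paper derives between keeping forward activations and backward gradients well-scaled.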
As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either. Phillip Isola, et al., "Image-to-Image Translation with Conditio…
Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease …
We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adv…
This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adversarial Networks" https://arxi…
These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adversarial Networks" https://arxiv.org/a…
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. Phillip Isola, et al., "Image-to-Image Translation with Conditional Adversarial Networks" https://arxiv.org/abs/1611.07004…
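The "learned loss" the quotes above describe is the conditional-GAN term, which pix2pix combines with a λ-weighted L1 reconstruction term (λ = 100 in the paper). A minimal numpy sketch of the generator's objective, with illustrative function and argument names:

```python
import numpy as np

def pix2pix_generator_loss(d_fake, y_fake, y_real, lam=100.0, eps=1e-12):
    """Generator objective in the spirit of pix2pix: fool the
    discriminator (cGAN term) while staying close to the target
    image in L1 (lambda-weighted).
    d_fake: discriminator probabilities in (0, 1) for generated images;
    y_fake / y_real: generated and ground-truth images as arrays."""
    gan_term = -np.mean(np.log(d_fake + eps))   # non-saturating GAN loss
    l1_term = np.mean(np.abs(y_fake - y_real))  # L1 reconstruction
    return gan_term + lam * l1_term
```

The L1 term keeps low-frequency structure correct, while the adversarial term supplies the "learned" part of the loss that would otherwise be hand-engineered per task.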
May the Force be with you. Star Wars This time it is not an AI-related paper but a bonus entry, from my favorite movie series, Star Wars. On the famous Star Wars line "May the Force be with you.", using the Frame of Reference (F.o.R.…
Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. David Silver, et al., "Mastering the game of Go without human knowledge"
This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. David Silver, et al., "Mastering the game of Go without human knowledge"
AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. David Silver, et al., "Mastering the game of Go without human knowledge"
Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. David Silver, et al., "Mastering the game of Go without human knowledge"
These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. David Silver, et al., "Mastering the game of Go without human knowledge"
The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. David Silver, et al., "Mastering the game of Go without human knowledge" (Nature) The Go AI Al…
Recently, AlphaGo became the first program to defeat a world champion in the game of Go. David Silver, et al., "Mastering the game of Go without human knowledge" The Go AI AlphaGo…
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. David Silver, et al., "Mastering the game of Go without human knowledge"
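The self-play training the quotes describe optimizes a single combined loss: the value head is regressed toward the game outcome z, the policy head toward the MCTS visit-count distribution π, plus L2 regularization. A numpy sketch of that per-position loss (function name is illustrative):

```python
import numpy as np

def alphago_zero_loss(z, v, pi, p, theta=None, c=1e-4, eps=1e-12):
    """Per-position AlphaGo Zero training loss:
        (z - v)^2  -  pi . log p  +  c * ||theta||^2
    z: game outcome (+1 win / -1 loss) from the current player's view,
    v: value-head prediction, pi: MCTS visit-count policy target,
    p: policy-head probabilities, theta: optional weight vector."""
    value_loss = (z - v) ** 2                 # mean-squared value error
    policy_loss = -np.dot(pi, np.log(p + eps))  # cross-entropy to MCTS policy
    reg = c * np.sum(theta ** 2) if theta is not None else 0.0
    return value_loss + policy_loss + reg
```

Minimizing the cross-entropy term is what makes "AlphaGo becomes its own teacher" concrete: the network is pulled toward the stronger move distribution that its own tree search produced.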
This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away. David Silver, et al., "Mastering the game of Go with deep neural …
Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. David Silver, et al., "Mastering the game of Go with deep neural network…
We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. David Silver, et al., "Mastering the game of Go with deep neural networks and tree search" https://www.nature.com/articles/nature…
Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. David Silver, et al., "Mastering the game of Go with deep neu…
These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. David Silver, et al., "Mastering the game of Go with deep neural networks and…
Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves. David Silver, et al., "Mastering the game of Go with deep neural networks and tree search" https:…
The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. David Silver, et al., "Mastering the ga…
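The way AlphaGo's search "combines Monte Carlo simulation with value and policy networks" is the PUCT selection rule: at each node, pick the move maximizing Q(s,a) plus an exploration bonus driven by the policy network's prior. A simplified numpy sketch (the `max(total, 1)` guard at an unvisited node is my small deviation from the paper's formula):

```python
import numpy as np

def puct_select(q, n_visits, prior, c_puct=1.0):
    """Simplified PUCT move selection in the style of AlphaGo's tree
    search: maximize Q(s,a) + u(s,a) with
        u = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a)),
    so the policy prior dominates early and decays as a move is
    visited more often.
    q: mean action values; n_visits: visit counts; prior: policy probs."""
    total = np.sum(n_visits)
    # max(total, 1) keeps the bonus non-zero before any visits.
    u = c_puct * prior * np.sqrt(max(float(total), 1.0)) / (1.0 + n_visits)
    return int(np.argmax(q + u))
```

With equal Q-values, the rule steers simulations toward the least-visited moves among those the policy network considers plausible, which is why the networks sharply reduce the effective search space.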
Fast R-CNN is implemented in Python and C++ (using Caffe) and is available under the open-source MIT License at https://github.com/rbgirshick/fast-rcnn. Ross Girshick, "Fast R-CNN" https://arxiv.org/abs/1504.08083 In the object-detection task,…
Compared to SPPnet, Fast R-CNN trains VGG16 3× faster, tests 10× faster, and is more accurate. Ross Girshick, "Fast R-CNN" https://arxiv.org/abs/1504.08083 In the object-detection task, by reusing feature maps it achieved a speedup over R-CNN: "Fast …
Fast R-CNN trains the very deep VGG16 network 9× faster than R-CNN, is 213× faster at test-time, and achieves a higher mAP on PASCAL VOC 2012. Ross Girshick, "Fast R-CNN" https://arxiv.org/abs/1504.08083 In the object-detection task, the feature map…
Compared to previous work, Fast R-CNN employs several innovations to improve training and testing speed while also increasing detection accuracy. Ross Girshick, "Fast R-CNN" https://arxiv.org/abs/1504.08083 In the object-detection task, the feature map…
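The feature-map reuse the commentary mentions hinges on RoI pooling: every proposal is cropped from one shared convolutional feature map and max-pooled to a fixed size, so the expensive convolutions run once per image instead of once per proposal. A simplified single-channel numpy sketch (real RoI pooling operates on multi-channel maps with scaled coordinates):

```python
import numpy as np

def roi_max_pool(feature_map, roi, out_size=(2, 2)):
    """Simplified RoI max pooling in the spirit of Fast R-CNN: crop a
    region from a shared feature map and max-pool it into a fixed
    out_size grid, so proposals of any size yield equal-sized features.
    feature_map: (H, W) array; roi: (y0, x0, y1, x1) in feature coords."""
    y0, x0, y1, x1 = roi
    crop = feature_map[y0:y1, x0:x1]
    oh, ow = out_size
    # Split the crop into an oh x ow grid of roughly equal bins.
    h_edges = np.linspace(0, crop.shape[0], oh + 1).astype(int)
    w_edges = np.linspace(0, crop.shape[1], ow + 1).astype(int)
    out = np.empty(out_size)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = crop[h_edges[i]:h_edges[i + 1],
                             w_edges[j]:w_edges[j + 1]].max()
    return out
```

The fixed-size output is what lets a single fully connected head score every proposal, which is the source of the training and test-time speedups quoted above.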