Article list for 2019-05-04 (1 day)
State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1…
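The "region proposal" hypotheses mentioned above are built, in the Region Proposal Network of Faster R-CNN, from a fixed grid of reference boxes ("anchors") at several scales and aspect ratios. A minimal numpy sketch of that anchor scheme (an illustrative assumption; the function name and defaults are mine, not the paper's code):

```python
import numpy as np

# Sketch of RPN-style anchor generation: k = len(scales) * len(ratios)
# reference boxes are placed at every position of a feature map whose
# stride maps it back to image coordinates.

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return (feat_h * feat_w * k, 4) anchors as (x1, y1, x2, y2)."""
    base = []
    for s in scales:
        for r in ratios:
            # keep anchor area near s*s while varying the aspect ratio
            w = s * np.sqrt(1.0 / r)
            h = s * np.sqrt(r)
            base.append([-w / 2, -h / 2, w / 2, h / 2])
    base = np.array(base)                       # (k, 4) centered boxes

    # shift the k base anchors to every feature-map position
    xs = (np.arange(feat_w) + 0.5) * stride
    ys = (np.arange(feat_h) + 0.5) * stride
    cx, cy = np.meshgrid(xs, ys)
    shifts = np.stack([cx, cy, cx, cy], axis=-1).reshape(-1, 1, 4)
    return (shifts + base).reshape(-1, 4)

anchors = make_anchors(2, 3)
print(anchors.shape)   # (54, 4): 2 * 3 positions, 9 anchors each
```

Each anchor is then scored and regressed by the network; the 9-anchors-per-position setting matches the paper's default of 3 scales times 3 ratios.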
Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Netw…
Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Alec Radford, et al., "…
We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Alec Radford, et al.,…
In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks"…
Comparatively, unsupervised learning with CNNs has received less attention. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 Using CNNs in GANs…
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Netw…
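One of the DCGAN architectural constraints is to upsample in the generator with fractionally-strided (transposed) convolutions rather than pooling or resizing. A minimal 1-D numpy sketch of the idea (my illustrative assumption, not the paper's code): insert zeros between input samples, then apply an ordinary convolution, which grows the spatial size by the stride factor.

```python
import numpy as np

# Toy 1-D transposed convolution: zero-insertion upsampling followed by
# a plain convolution. Real DCGANs do the 2-D learned-kernel version.

def transposed_conv1d(x, kernel, stride=2):
    # insert (stride - 1) zeros between input samples
    up = np.zeros(len(x) * stride)
    up[::stride] = x
    # ordinary convolution over the upsampled signal
    return np.convolve(up, kernel, mode="same")

x = np.array([1.0, 2.0, 3.0, 4.0])
y = transposed_conv1d(x, np.array([0.5, 1.0, 0.5]))
print(len(x), "->", len(y))   # 4 -> 8: spatial size doubled
```

Because the kernel is learned, the generator can learn its own upsampling filter instead of a fixed bilinear one, which is the point of the constraint.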
Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Training two models to compete…
There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Training two models to compete…
In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Training two models to compete with each other…
In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.…
This framework corresponds to a minimax two-player game. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Notable for training two models to compete with each other, the GAN paper "Generative Adversarial Netwo…
The training procedure for G is to maximize the probability of D making a mistake. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Notable for training two models to compete with each other, the GAN paper "Ge…
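The claims quoted above ("a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere") follow from the closed form of the optimal discriminator, D*(x) = p_data(x) / (p_data(x) + p_g(x)). A small numerical check on 1-D Gaussians (an illustrative sketch, not the paper's experiments):

```python
import numpy as np

# At the GAN equilibrium p_g = p_data, the optimal discriminator
# D*(x) = p_data / (p_data + p_g) is 1/2 everywhere and the value of the
# minimax game V(D*, G) = E_data[log D*] + E_g[log(1 - D*)] is -log 4.

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

xs = np.linspace(-5, 5, 1001)
p_data = gaussian_pdf(xs, 0.0, 1.0)

# Unconverged generator (wrong mean): D* deviates from 1/2
p_g_bad = gaussian_pdf(xs, 2.0, 1.0)
d_bad = p_data / (p_data + p_g_bad)

# Converged generator (matches the data distribution): D* = 1/2
p_g_good = gaussian_pdf(xs, 0.0, 1.0)
d_good = p_data / (p_data + p_g_good)

# Estimate V(D*, G) by numerical integration over the grid
dx = xs[1] - xs[0]
v_good = (np.sum(p_data * np.log(d_good) * dx)
          + np.sum(p_g_good * np.log(1 - d_good) * dx))

print(np.allclose(d_good, 0.5))                    # True
print(np.isclose(v_good, -np.log(4), atol=1e-3))   # True
```

The -log 4 value is exactly the minimum of the minimax objective, which is why training G to fool D drives p_g toward p_data.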