GAN
From the abstract of the GAN paper, "Generative Adversarial Networks", whose defining feature is that two models are trained to compete with each other:

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.

Ian J. Goodfellow, et al., "Generative Adversarial Networks", https://arxiv.org/abs/1406.2661
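For reference, the minimax two-player game mentioned in the abstract is defined in the paper through a value function V(D, G), commonly written as

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]

D is trained to distinguish real samples from generated ones, while G is trained so that D misclassifies its outputs.

As a rough illustration of the training procedure (G and D as multilayer perceptrons trained alternately with backpropagation, with no Markov chains or inference networks), here is a minimal sketch in PyTorch. The layer sizes, optimizer settings, and placeholder random data are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal GAN training sketch (assumed shapes/hyperparameters, placeholder data).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images scaled to [0, 1]

# Generator G: maps noise z to a fake sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Sigmoid(),
)
# Discriminator D: outputs the probability that its input is a real sample.
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

# Placeholder data; swap in real training samples.
real_batches = [torch.rand(128, data_dim) for _ in range(100)]

for real in real_batches:
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Train D: push D(real) toward 1 and D(G(z)) toward 0.
    fake = G(torch.randn(n, latent_dim)).detach()  # no gradient into G here
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Train G: maximize the probability that D mistakes fakes for real
    # (the non-saturating variant the paper suggests in practice).
    loss_g = bce(D(G(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```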