AI Paper English F.o.R.

A blog that reads artificial intelligence (AI) papers by applying the Frame of Reference (F.o.R.) from 英語リーディング教本 (the English Reading Textbook).

List of articles from the one-year period starting 2019-01-01

VAE | Abstract, Sentence 6


Theoretical advantages are reflected in experimental results. Diederik P Kingma, et al., "Auto-Encoding Variational Bayes" https://arxiv.org/abs/1312.6114 Unlike an ordinary autoencoder (Autoencoder), the observed data are assumed to have been generated according to some probability distribution…

VAE | Abstract, Sentence 5


Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable po…

VAE | Abstract, Sentence 4


First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Diederik P Kingma, et al., "Auto-Encoding Variationa…
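The reparameterization mentioned in this sentence can be sketched in a few lines. This is our own minimal NumPy illustration, not code from the paper; the function and variable names are ours.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Draw z ~ N(mu, sigma^2) as a deterministic function of (mu, log_var)
    plus external noise eps ~ N(0, I). Because the randomness is isolated
    in eps, gradients can flow through mu and log_var, so the lower bound
    can be optimized with standard stochastic gradient methods."""
    eps = rng.standard_normal(np.shape(mu))   # noise, independent of the parameters
    sigma = np.exp(0.5 * log_var)
    return mu + sigma * eps

rng = np.random.default_rng(0)
z = reparameterize(np.zeros(3), np.zeros(3), rng)  # one sample from N(0, I)
```

In a VAE, mu and log_var would be the outputs of the recognition model for a datapoint.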

VAE | Abstract, Sentence 3

Our contributions is two-fold. Diederik P Kingma, et al., "Auto-Encoding Variational Bayes" https://arxiv.org/abs/1312.6114 Unlike an ordinary autoencoder (Autoencoder), the variational autoencoder assumes that the observed data were generated according to some probability distribution…

VAE | Abstract, Sentence 2


We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Diederik P Kingma, et al., "Auto-Encoding Variation…

VAE | Abstract, Sentence 1

How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? Diederik P Kingma, et al., "Auto-Encoding Va…

LeNet | Abstract, Paragraph 4, Sentence 3

It is deployed commercially and reads several million cheques per day. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf The convolutional neural network that Yann LeCun et al. presented in 1998…

LeNet | Abstract, Paragraph 4, Sentence 2

It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" …

LeNet | Abstract, Paragraph 4, Sentence 1

A Graph Transformer Network for reading a bank cheque is also described. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf The convolutional neural network that Yann LeCun et al. presented in 1998…

LeNet | Abstract, Paragraph 3, Sentence 2

Experiments demonstrate the advantage of global training, and the flexibility of Graph Transformer Networks. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf 1…

LeNet | Abstract, Paragraph 3, Sentence 1

Two systems for online handwriting recognition are described. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf The convolutional neural network that Yann LeCun et al. presented in 1998…

LeNet | Abstract, Paragraph 2, Sentence 2

A new learning paradigm, called Graph Transformer Networks (GTN), allows such multimodule systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Yann LeCun, et al., "Gradient-Based Lea…

LeNet | Abstract, Paragraph 2, Sentence 1

Real-life document recognition systems are composed of multiple modules including field extraction, segmentation recognition, and language modeling. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.…

LeNet | Abstract, Paragraph 1, Sentence 4

Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://ya…

LeNet | Abstract, Paragraph 1, Sentence 3

This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.l…

LeNet | Abstract, Paragraph 1, Sentence 2

Given an appropriate network architecture, Gradient-Based Learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters, with minimal preprocessing. Yan…

LeNet | Abstract, Paragraph 1, Sentence 1

Multilayer Neural Networks trained with the backpropagation algorithm constitute the best example of a successful Gradient-Based Learning technique. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.…

Deep Learning | Abstract, Sentence 4

Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech. Yann LeCun, Yoshua Bengio & Geoffrey Hinton, "Deep …

Deep Learning | Abstract, Sentence 3

Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the repres…
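The role of backpropagation described here can be illustrated with a toy two-layer model. This is our own hand-derived chain-rule sketch (all numbers and names are made up), not code from the article.

```python
# Toy illustration: the chain rule (backpropagation) tells each internal
# parameter how to change so that the output error shrinks, layer by layer.
x, y_true = 1.5, 3.0
w1, w2 = 0.5, 0.5              # internal parameters of two "layers"

for _ in range(200):
    h = w1 * x                 # layer-1 representation
    y = w2 * h                 # layer-2 output
    err = y - y_true           # gradient of 0.5 * (y - y_true)**2 w.r.t. y
    grad_w2 = err * h          # chain rule, layer 2
    grad_w1 = err * w2 * x     # chain rule, propagated back to layer 1
    w1 -= 0.05 * grad_w1       # gradient-descent parameter updates
    w2 -= 0.05 * grad_w2
```

After the loop, the model output w2 * (w1 * x) is close to y_true.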

Deep Learning | Abstract, Sentence 2

These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Yann LeCun, Yoshua Bengio & Geoffrey Hinton, "Deep…

Deep Learning | Abstract, Sentence 1

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. Yann LeCun, Yoshua Bengio & Geoffrey Hinton, "Deep Learning" https://www.nature…

Adversarial Examples | Abstract, Sentence 6

Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset. Ian J. Goodfellow, "Explaining and Harnessing Adversarial Examples" https://arxiv.org/abs/1412.6572 Neural…

Adversarial Examples | Abstract, Sentence 5

Moreover, this view yields a simple and fast method of generating adversarial examples. Ian J. Goodfellow, "Explaining and Harnessing Adversarial Examples" https://arxiv.org/abs/1412.6572 Inputs that fool a neural network, known as Adversa…
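The "simple and fast method" here is the fast gradient sign method (FGSM): shift the input by epsilon times the sign of the loss gradient with respect to the input. A minimal NumPy sketch; the gradient values and epsilon below are made up for illustration.

```python
import numpy as np

def fgsm_perturbation(grad_wrt_input, epsilon):
    """FGSM step: move each input coordinate by +/- epsilon in the
    direction that increases the loss (the sign of the gradient)."""
    return epsilon * np.sign(grad_wrt_input)

x = np.array([0.2, -0.4, 0.9])        # an input example
grad = np.array([0.03, -0.8, 0.0])    # hypothetical dLoss/dx
x_adv = x + fgsm_perturbation(grad, epsilon=0.1)
# each coordinate moves by at most epsilon, regardless of gradient magnitude
```

Because only the sign of the gradient is used, one gradient evaluation suffices, which is what makes the method fast.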

Adversarial Examples | Abstract, Sentence 4

This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Ian J. Goodfellow, "Explaining and Harnessing…

Adversarial Examples | Abstract, Sentence 3

We argue instead that the primary cause of neural networks’ vulnerability to adversarial perturbation is their linear nature. Ian J. Goodfellow, "Explaining and Harnessing Adversarial Examples" https://arxiv.org/abs/1412.6572 Neural net…

Adversarial Examples | Abstract, Sentence 2

Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. Ian J. Goodfellow, "Explaining and Harnessing Adversarial Examples" https://arxiv.org/abs/1412.6572 Inputs that fool a neural network, known as Adversari…

Adversarial Examples | Abstract, Sentence 1

Several machine learning models, including neural networks, consistently misclassify adversarial examples—inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed inpu…

AMSGrad | Abstract, Sentence 5

Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with “long-term memory” of past gradients, and propose new variants of the ADAM algorithm which not only fix the convergence issues but often also l…

AMSGrad | Abstract, Sentence 4

We provide an explicit example of a simple convex optimization setting where ADAM does not converge to the optimal solution, and describe the precise problems with the previous analysis of ADAM algorithm. Sashank J. Reddi, et al., "On the …

AMSGrad | Abstract, Sentence 3

We show that one cause for such failures is the exponential moving average used in the algorithms. Sashank J. Reddi, et al., "On the Convergence of Adam and Beyond" https://arxiv.org/abs/1904.09237 To keep useful gradients from being forgotten, long-term…
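In AMSGrad, the "long-term memory" that replaces this forgetful exponential moving average is a running maximum of the second-moment estimate, so the effective step size can never grow back after a large, informative gradient. A simplified NumPy sketch (hyperparameters and names are ours; bias correction is omitted):

```python
import numpy as np

def amsgrad_step(param, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad update. Identical to Adam except v_hat = max(v_hat, v):
    the running max is the long-term memory of past squared gradients."""
    m, v, v_hat = state
    m = b1 * m + (1 - b1) * grad        # first-moment estimate (as in Adam)
    v = b2 * v + (1 - b2) * grad**2     # second-moment estimate (as in Adam)
    v_hat = np.maximum(v_hat, v)        # AMSGrad: the denominator never shrinks
    param = param - lr * m / (np.sqrt(v_hat) + eps)
    return param, (m, v, v_hat)

w = np.array([1.0])
state = (np.zeros(1), np.zeros(1), np.zeros(1))
for _ in range(100):
    w, state = amsgrad_step(w, 2.0 * w, state)   # minimize f(w) = w**2
```

Replacing `np.maximum(v_hat, v)` with `v` recovers (uncorrected) Adam, which is exactly the forgetting behavior the paper analyzes.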