AI Paper English F.o.R.

This blog reads papers on artificial intelligence (AI) by applying the Frame of Reference (F.o.R.) method from 英語リーディング教本 (the English Reading Textbook).

Article list for the one-month period starting 2019-08-01

ZFNet | Abstract, Sentence 6

We also perform an ablation study to discover the performance contribution from different model layers. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" https://arxiv.org/abs/1311.2901 By visualizing CNNs and…
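As a quick illustration of what an ablation study involves (a hypothetical sketch, not code from the paper: build_model, train, evaluate, and the layer names are all assumed helpers), one retrains the model with one component removed at a time and compares the resulting scores:

```python
# Hypothetical ablation loop: retrain with one layer removed at a time
# and record how the score changes. All helpers here are assumptions.
def ablation_study(layer_names, build_model, train, evaluate):
    results = {}
    for removed in layer_names:
        model = build_model(exclude=removed)  # same model minus one layer
        train(model)
        results[removed] = evaluate(model)    # e.g. top-1 accuracy on ImageNet
    return results
```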

ZFNet | Abstract, Sentence 5

Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Net…

ZFNet | Abstract, Sentence 4

We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" http…

ZFNet | Abstract, Sentence 3

In this paper we address both issues. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" https://arxiv.org/abs/1311.2901 By visualizing CNNs, identifying AlexNet's weaknesses, and improving on them, it won the 2013 ILSVRC…

ZFNet | Abstract, Sentence 2

However there is no clear understanding of why they perform so well, or how they might be improved. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" https://arxiv.org/abs/1311.2901 By visualizing CNNs, Ale…

ZFNet | Abstract, Sentence 1

Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al., 2012). Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" h…

Dropout | Abstract, Sentence 10

We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.…

Dropout | Abstract, Sentence 9

This significantly reduces overfitting and gives major improvements over other regularization methods. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastav…

Dropout | Abstract, Sentence 8

At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent…
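A minimal sketch of the approximation this sentence describes, assuming a keep probability p (p = 0.5 here is an example value, not stated in the excerpt): the weights learned under dropout are scaled by p at test time, so one "unthinned" network mimics the averaged predictions of the thinned ones.

```python
import numpy as np

# Test-time approximation: if each unit was kept with probability p
# during training, multiplying the learned weights by p lets a single
# unthinned network approximate the average over all thinned networks.
p = 0.5                                # keep probability (assumed example)
W_trained = np.random.randn(256, 128)  # stand-in for weights learned with dropout
W_test = p * W_trained                 # the single network with smaller weights
```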

Dropout | Abstract, Sentence 7

During training, dropout samples from an exponential number of different “thinned” networks. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/sriva…

Dropout | Abstract, Sentence 6

This prevents units from co-adapting too much. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf By training while randomly removing nodes…

Dropout | Abstract, Sentence 5

The key idea is to randomly drop units (along with their connections) from the neural network during training. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/s…
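A minimal NumPy sketch of this key idea (illustrative only, not the paper's implementation; the keep probability p = 0.5 is an assumed example): zeroing a random subset of units on each training pass is exactly what samples one "thinned" network.

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True):
    """Keep each unit with probability p during training; zero the rest."""
    if not training:
        # At test time the full network is used with scaled weights
        # (see the sketch under Sentence 8 above).
        return x
    mask = (np.random.rand(*x.shape) < p).astype(x.dtype)
    return x * mask  # dropped units (and their connections) contribute nothing

h = np.random.randn(4, 8)       # toy layer activations
h_thinned = dropout_forward(h)  # one sampled "thinned" network
```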

Dropout | Abstract, Sentence 4

Dropout is a technique for addressing this problem. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf By randomly removing nodes while…

Dropout | Abstract, Sentence 3

Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks f…

Dropout | Abstract, Sentence 2

However, overfitting is a serious problem in such networks. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf By randomly removing…

Dropout | Abstract, Sentence 1

Deep neural nets with a large number of parameters are very powerful machine learning systems. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/sri…

VAE | Abstract, Sentence 6


Theoretical advantages are reflected in experimental results. Diederik P Kingma, et al., "Auto-Encoding Variational Bayes" https://arxiv.org/abs/1312.6114 Unlike an ordinary autoencoder (Autoencoder), it assumes that the observed data were generated according to some probability distribution…

VAE | Abstract, Sentence 5


Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable po…

VAE | Abstract, Sentence 4


First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Diederik P Kingma, et al., "Auto-Encoding Variationa…
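For the common Gaussian case, the reparameterization referred to here can be sketched in a few lines (NumPy for illustration; the paper itself is framework-agnostic): writing z = mu + sigma * eps with eps ~ N(0, I) moves the randomness outside the parameters, so the lower-bound estimator becomes differentiable in mu and sigma and amenable to standard stochastic gradient methods.

```python
import numpy as np

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = np.random.randn(*mu.shape)         # noise independent of the parameters
    return mu + np.exp(0.5 * log_var) * eps  # deterministic, differentiable transform

mu = np.zeros(3)       # toy variational mean
log_var = np.zeros(3)  # log sigma^2 = 0, i.e. sigma = 1
z = reparameterize(mu, log_var)
```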

VAE | Abstract, Sentence 3

Our contributions is two-fold. Diederik P Kingma, et al., "Auto-Encoding Variational Bayes" https://arxiv.org/abs/1312.6114 Unlike an ordinary autoencoder (Autoencoder), a variational autoencoder assumes that the observed data were generated according to some probability distribution…

VAE | Abstract, Sentence 2


We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Diederik P Kingma, et al., "Auto-Encoding Variation…

VAE | Abstract, Sentence 1

How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? Diederik P Kingma, et al., "Auto-Encoding Va…

LeNet | Abstract, Paragraph 4, Sentence 3

It is deployed commercially and reads several million cheques per day. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf Published by Yann LeCun et al. in 1998, the convolutional…

LeNet | Abstract, Paragraph 4, Sentence 2

It uses Convolutional Neural Network character recognizers combined with global training techniques to provide record accuracy on business and personal cheques. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" …

LeNet | Abstract, Paragraph 4, Sentence 1

A Graph Transformer Network for reading a bank cheque is also described. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf Published by Yann LeCun et al. in 1998, the convolutional…

LeNet | Abstract, Paragraph 3, Sentence 2

Experiments demonstrate the advantage of global training, and the flexibility of Graph Transformer Networks. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf 1…

LeNet | Abstract, Paragraph 3, Sentence 1

Two systems for online handwriting recognition are described. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf Published by Yann LeCun et al. in 1998, the convolutional neural…

LeNet | Abstract, Paragraph 2, Sentence 2

A new learning paradigm, called Graph Transformer Networks (GTN), allows such multimodule systems to be trained globally using Gradient-Based methods so as to minimize an overall performance measure. Yann LeCun, et al., "Gradient-Based Lea…

LeNet | Abstract, Paragraph 2, Sentence 1

Real-life document recognition systems are composed of multiple modules including field extraction, segmentation, recognition, and language modeling. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://yann.…

LeNet | Abstract, Paragraph 1, Sentence 4

Convolutional Neural Networks, which are specifically designed to deal with the variability of 2D shapes, are shown to outperform all other techniques. Yann LeCun, et al., "Gradient-Based Learning Applied to Document Recognition" http://ya…