AI Paper English F.o.R.

A blog for reading papers on artificial intelligence (AI) by applying the Frame of Reference (F.o.R.) from 英語リーディング教本 (the "English Reading Textbook").

Article archive for the one-month period starting 2019-06-01

R-GCN | Abstract, Sentence 6

We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstratin…
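
The truncated sentence says that a DistMult-style factorization model improves when paired with a graph encoder. For context, here is a minimal NumPy sketch of the DistMult scoring function itself (the toy embeddings are illustrative, not from the paper's code); in the R-GCN setup, the subject and object embeddings would come from the graph-convolutional encoder:

```python
import numpy as np

def distmult_score(e_s, r, e_o):
    """DistMult scores a triple (subject, relation, object) as a
    trilinear product: sum_i e_s[i] * r[i] * e_o[i]."""
    return np.sum(e_s * r * e_o)

# Toy example with random 4-dimensional embeddings.
rng = np.random.default_rng(0)
e_s, r, e_o = rng.normal(size=(3, 4))
print(distmult_score(e_s, r, e_o))
```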

R-GCN | Abstract, Sentence 5

We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. Michael Schlichtkrull, et al., "Modeling Relational Data with Graph Convolutional Networks" https://arxiv.org/abs/1703.06103 Graphs that represent relational structure…

R-GCN | Abstract, Sentence 4

R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. Michael Schlichtkrull, et al., "Modeling…
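
For reference, the mechanism behind "highly multi-relational data" is the paper's propagation rule (its Eq. 2), which learns a separate weight matrix per relation:

```latex
h_i^{(l+1)} = \sigma\left( \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_i^r}
\frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} + W_0^{(l)} h_i^{(l)} \right)
```

Here N_i^r is the set of neighbors of node i under relation r, c_{i,r} is a normalization constant (e.g. |N_i^r|), and the per-relation weight W_r is what distinguishes R-GCNs from ordinary GCNs.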

R-GCN | Abstract, Sentence 3

We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (rec…

R-GCN | Abstract, Sentence 2

Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. Michael Schlichtkrull, et al., "Modeling Relational Data with Graph Convolutional Networks" https://…

R-GCN | Abstract, Sentence 1

Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Michael Schlichtkrull, et al., "Modeling Relational Data with Graph Convolutional Networks" https://arxiv.org/abs/1703.06103 …

Grad-CAM++ | Abstract, Sentence 6

Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ provides promising human-interpretable visual explanations for a given CNN architecture across multiple tasks including c…

Grad-CAM++ | Abstract, Sentence 5

We provide a mathematical derivation for the proposed method, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate …
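
The sentence is cut off, but the weighting it describes is given in the Grad-CAM++ paper as follows (reproduced here for context; note the second- and third-order partial derivatives in the coefficients):

```latex
w_k^c = \sum_i \sum_j \alpha_{ij}^{kc} \,
        \mathrm{ReLU}\!\left( \frac{\partial Y^c}{\partial A_{ij}^k} \right),
\qquad
\alpha_{ij}^{kc} =
\frac{\frac{\partial^2 Y^c}{(\partial A_{ij}^k)^2}}
     {2 \frac{\partial^2 Y^c}{(\partial A_{ij}^k)^2}
      + \sum_a \sum_b A_{ab}^k \frac{\partial^3 Y^c}{(\partial A_{ij}^k)^3}}
```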

Grad-CAM++ | Abstract, Sentence 4

Building on a recently proposed method called Grad-CAM, we propose a generalized method called Grad-CAM++ that can provide better visual explanations of CNN model predictions, in terms of better object localization as well as explaining oc…

Grad-CAM++ | Abstract, Sentence 3

There has been a significant recent interest in developing explainable deep learning models, and this paper is an effort in this direction. Aditya Chattopadhyay, et al., "Grad-CAM++: Improved Visual Explanations for Deep Convolutional Netw…

Grad-CAM++ | Abstract, Sentence 2

However, these deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. Aditya Chattopadhyay, et al., "Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks" h…

Grad-CAM++ | Abstract, Sentence 1

Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision problems. Aditya Chattopadhyay, et al., "Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks" https:…

Grad-CAM | Abstract, Sentence 9

Video of the demo can be found at youtu.be/COjUB9Izk6E. Ramprasaath R. Selvaraju, et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization" https://arxiv.org/abs/1610.02391 The rationale for a decision, corresponding to the output class…

Grad-CAM | Abstract, Sentence 8

Our code is available at https://github.com/ramprs/grad-cam/ and a demo is available on CloudCV. Ramprasaath R. Selvaraju, et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization" https://arxiv.org/abs/161…

Grad-CAM | Abstract, Sentence 7

Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a ‘stronger’ deep n…

Grad-CAM | Abstract, Sentence 6

For image captioning and VQA, our visualizations show even non-attention based models can localize inputs. Ramprasaath R. Selvaraju, et al., "Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization" https://arxiv.o…

Grad-CAM | Abstract, Sentence 5

In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c…

Grad-CAM | Abstract, Sentence 4

We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResN…

Grad-CAM | Abstract, Sentence 3

Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal in…

Grad-CAM | Abstract, Sentence 2

Our approach – Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say logits for ‘dog’ or even a caption), flowing into the final convolutional layer to produce a coarse localization map highli…
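
This sentence is the whole Grad-CAM recipe in one line. Here is a minimal NumPy sketch of that computation, assuming the feature maps and the gradients of the target score with respect to them have already been extracted from the network (array shapes are illustrative):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps: (K, H, W) activations A^k of the final conv layer.
    gradients: (K, H, W) gradients dY^c/dA^k for the target concept c.
    Returns the (H, W) coarse localization map L^c."""
    # Global-average-pool the gradients: one importance weight per map.
    weights = gradients.mean(axis=(1, 2))              # alpha_k^c, shape (K,)
    # Weighted sum of the feature maps over the channel axis.
    cam = np.tensordot(weights, feature_maps, axes=1)  # shape (H, W)
    # ReLU keeps only features with a positive influence on the class.
    return np.maximum(cam, 0)
```

The resulting map is then typically upsampled to the input resolution and overlaid on the image.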

Grad-CAM | Abstract, Sentence 1

We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent. Ramprasaath R. Selvaraju, et al., "Grad-CAM: Visual Explanations …

Batch Normalization | Abstract, Sentence 8

Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters. Sergey Ioffe, et al.,…

Batch Normalization | Abstract, Sentence 7

Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Sergey Ioffe, et al., "Batch Normalization: …

Batch Normalization | Abstract, Sentence 6

It also acts as a regularizer, in some cases eliminating the need for Dropout. Sergey Ioffe, et al., "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" https://arxiv.org/abs/1502.03167 Internal covariate…

Batch Normalization | Abstract, Sentence 5

Batch Normalization allows us to use much higher learning rates and be less careful about initialization. Sergey Ioffe, et al., "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" https://arxiv.or…

Batch Normalization | Abstract, Sentence 4

Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Sergey Ioffe, et al., "Batch Normalization: Accelerating Deep Network Training by Reduc…
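
A minimal NumPy sketch of the per-mini-batch transform this sentence refers to (training-time forward pass only; gamma and beta are the learned scale and shift, and eps guards against division by zero):

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """x: (batch, features) mini-batch of inputs to a layer.
    Normalizes each feature with the mini-batch statistics,
    then applies the learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # normalize
    return gamma * x_hat + beta            # scale and shift
```

At test time the mini-batch statistics are replaced by population estimates accumulated during training.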

89.5% of the English sentences in AI papers are readable with knowledge from 英語リーディング教本 (F.o.R.)

As of today, I have completed syntactic analysis of 200 English sentences from the Abstracts of 28 AI-related papers, so I tallied the results. The outcome: 89.5% of the English sentences in AI-related papers could be read with the knowledge from 英語リーディング教本 (F.o.R.). 英語リーディング教本 and 英語構文のエッセンス…

Batch Normalization | Abstract, Sentence 3

We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Sergey Ioffe, et al., "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" https://arxi…

Batch Normalization | Abstract, Sentence 2

This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. Sergey Ioffe, et al., "Batch Normalization: Accelerating Dee…

Batch Normalization | Abstract, Sentence 1

Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. Sergey Ioffe, et al., "Batch Normalization: Accelerating Deep N…