AI Paper English F.o.R.

A blog that reads papers on artificial intelligence (AI) by applying the Frame of Reference (F.o.R.) method from 英語リーディング教本 (a Japanese textbook on reading English).

Archive of articles for the month starting 2019-05-01

YOLO | Abstract, Paragraph 2, Sentence 1

Our unified architecture is extremely fast. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task, by simplifying the CNN architecture, … Faster R-CNN…

YOLO | Abstract, Paragraph 1, Sentence 5

Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task…

YOLO | Abstract, Paragraph 1, Sentence 4

A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task…

YOLO | Abstract, Paragraph 1, Sentence 3

Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 …
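
As a sketch of what "regression to spatially separated bounding boxes and class probabilities" means here: YOLO divides the image into an S×S grid and regresses, per cell, B boxes (x, y, w, h, confidence) plus C class probabilities, so the network output is a single S×S×(B·5+C) tensor (7×7×30 with the paper's S=7, B=2, C=20). The decoding below follows the paper's output layout; the function name and threshold are illustrative, not from the paper.

```python
import numpy as np

# Illustrative decoding of YOLO's output tensor (paper values: S=7, B=2, C=20).
S, B, C = 7, 2, 20

def decode(output: np.ndarray, conf_thresh: float = 0.25):
    """output: (S, S, B*5 + C) tensor regressed by the network in one pass."""
    assert output.shape == (S, S, B * 5 + C)
    detections = []
    for row in range(S):
        for col in range(S):
            cell = output[row, col]
            class_probs = cell[B * 5:]  # C conditional class probabilities
            for b in range(B):
                x, y, w, h, conf = cell[b * 5:(b + 1) * 5]
                # Class-specific score = P(class | object) * box confidence
                scores = class_probs * conf
                cls = int(np.argmax(scores))
                if scores[cls] > conf_thresh:
                    # (x, y) are offsets within the cell; (w, h) are image-relative
                    cx, cy = (col + x) / S, (row + y) / S
                    detections.append((cx, cy, w, h, cls, float(scores[cls])))
    return detections
```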

YOLO | Abstract, Paragraph 1, Sentence 2

Prior work on object detection repurposes classifiers to perform detection. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task, by simplifying the CNN architecture…

YOLO | Abstract, Paragraph 1, Sentence 1

We present YOLO, a new approach to object detection. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task, by simplifying the CNN architecture, … Faster R-CNN…

Faster R-CNN | Abstract, Sentence 9

Code has been made publicly available. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 In the object detection task, the two networks (the RPN and Fast R-CNN) share feature…

Faster R-CNN | Abstract, Sentence 8

In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …

Faster R-CNN | Abstract, Sentence 7

For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …

Faster R-CNN | Abstract, Sentence 6

We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with "attention" mechanisms, the RPN component tells the unified network where to look. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …
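
A rough sketch of the merged forward pass, with PyTorch/torchvision assumed (the paper predates them): one backbone computes full-image features once, the RPN proposes regions ("where to look"), and the Fast R-CNN head classifies pooled features for each proposal. Here `top_proposals` is a hypothetical helper (decode anchor deltas, then non-maximum suppression), and `spatial_scale=1/16` assumes a VGG-16 conv5 stride.

```python
import torch
from torchvision.ops import roi_pool

def detect(image, backbone, rpn_head, detector_head):
    features = backbone(image)                  # shared full-image conv features
    scores, deltas = rpn_head(features)         # RPN "attention": where to look
    proposals = top_proposals(scores, deltas)   # hypothetical helper: decode + NMS
    pooled = roi_pool(features, [proposals],    # crop features once per proposal
                      output_size=(7, 7), spatial_scale=1 / 16)
    return detector_head(pooled)                # Fast R-CNN head scores each region
```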

Faster R-CNN | Abstract, Sentence 5

The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …

Faster R-CNN | Abstract, Sentence 4

An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …
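
A minimal PyTorch sketch (the module name is mine) of how a fully convolutional RPN head predicts at every position: a 3×3 conv slides over the shared feature map, and two sibling 1×1 convs emit 2k objectness scores and 4k box offsets for k anchors per position, as described in the paper.

```python
import torch
import torch.nn as nn

class RPNHead(nn.Module):
    """Sketch of the RPN head: fully convolutional, so it emits predictions
    for k anchors at every spatial position of the feature map."""
    def __init__(self, in_channels: int = 512, k: int = 9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 512, kernel_size=3, padding=1)
        self.cls = nn.Conv2d(512, 2 * k, kernel_size=1)  # object / not-object per anchor
        self.reg = nn.Conv2d(512, 4 * k, kernel_size=1)  # box deltas per anchor

    def forward(self, features: torch.Tensor):
        h = torch.relu(self.conv(features))
        return self.cls(h), self.reg(h)

# On e.g. a 512 x 38 x 50 VGG-16 feature map, one pass yields
# 2k x 38 x 50 scores and 4k x 38 x 50 box offsets.
scores, deltas = RPNHead()(torch.randn(1, 512, 38, 50))
```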

Faster R-CNN | Abstract, Sentence 3

In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …

Faster R-CNN | Abstract, Sentence 2

Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …

Faster R-CNN | Abstract, Sentence 1

State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 …

DCGAN | Abstract, Sentence 6

Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 …

DCGAN | Abstract, Sentence 5

Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 …

DCGAN | Abstract, Sentence 4

We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 …
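
The "architectural constraints" this sentence refers to are spelled out in the paper: replace pooling with strided (and fractionally strided) convolutions, use batch normalization in both networks, remove fully connected hidden layers, and use ReLU in the generator (Tanh at the output) with LeakyReLU in the discriminator. A hedged PyTorch sketch of a DCGAN-style generator for 64×64 images (channel widths are my choice, not a verbatim reproduction of the paper's figure):

```python
import torch.nn as nn

# DCGAN-style generator: no pooling, no FC hidden layers, batchnorm + ReLU,
# fractionally strided convolutions upsample a 100-d noise vector to 3 x 64 x 64.
def make_generator(z_dim: int = 100, ch: int = 64) -> nn.Sequential:
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),  # 1x1 -> 4x4
        nn.BatchNorm2d(ch * 8), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False), # 4x4 -> 8x8
        nn.BatchNorm2d(ch * 4), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False), # 8x8 -> 16x16
        nn.BatchNorm2d(ch * 2), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),     # 16x16 -> 32x32
        nn.BatchNorm2d(ch), nn.ReLU(True),
        nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False),          # 32x32 -> 64x64
        nn.Tanh(),  # the paper uses Tanh at the generator output
    )
```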

DCGAN | Abstract, Sentence 3

In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 …

DCGAN | Abstract, Sentence 2

Comparatively, unsupervised learning with CNNs has received less attention. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 By applying CNNs to GANs…

DCGAN | Abstract, Sentence 1

In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Alec Radford, et al., "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" https://arxiv.org/abs/1511.06434 …

GAN | Abstract, Sentence 7

Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Training two models to compete…

GAN | Abstract, Sentence 6

There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Training two models to compete…

GAN | Abstract, Sentence 5

In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 Training two models so that they compete with each other…
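
A minimal, hedged PyTorch sketch of this setup (PyTorch itself is an assumption; the paper predates it): G and D are plain MLPs, and because the whole objective is differentiable, one optimizer step per network is ordinary backpropagation. Layer widths, learning rates, and the 784-dimensional data are illustrative.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.SGD(G.parameters(), lr=1e-3)
opt_d = torch.optim.SGD(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def train_step(real: torch.Tensor):  # real: (batch, 784), scaled to [-1, 1]
    batch = real.size(0)
    # Discriminator step: label real samples 1, generated samples 0.
    fake = G(torch.randn(batch, 100)).detach()
    loss_d = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # Generator step: push D toward mistakes (non-saturating form from the paper).
    loss_g = bce(D(G(torch.randn(batch, 100))), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```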

GAN | Abstract, Sentence 4

In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 …
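
As a short worked restatement of where the 1/2 comes from (the paper derives this in its Proposition 1):

```latex
% For a fixed generator G, maximizing V(D, G) pointwise gives the
% optimal discriminator:
D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)}
% At the unique equilibrium the generator recovers the data
% distribution, p_g = p_{\text{data}}, so
D^*_G(x) = \frac{p_{\text{data}}(x)}{2\,p_{\text{data}}(x)} = \frac{1}{2}
\quad \text{everywhere.}
```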

GAN | Abstract, Sentence 3

This framework corresponds to a minimax two-player game. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 From the GAN paper "Generative Adversarial Networks," notable for training two models to compete with each other…
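
For reference, the minimax game this sentence names is played over the value function given in the paper (its Eq. 1): D maximizes it by classifying correctly, G minimizes it by fooling D.

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```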

GAN | Abstract, Sentence 2

The training procedure for G is to maximize the probability of D making a mistake. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 From the GAN paper "Generative Adversarial Networks," notable for training two models to compete with each other…

GAN | Abstract, Sentence 1

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. Ian J. Goodfellow, et al., "Generative Adversarial Networks" https://arxiv.org/abs/1406.2661 …

DQN | Abstract, Sentence 4

We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them. Volodymyr Mnih, et al., "Playing Atari with Deep Reinforcement Learning" https://arxiv.org/abs/1312.5602 Applying deep learning to reinforcement learning…

DQN | Abstract, Sentence 3

We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. Volodymyr Mnih, et al., "Playing Atari with Deep Reinforcement Learning" https://arxiv.org/abs/1312.5602 …
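
For context on the method these sentences refer to: the paper trains a convolutional network Q(s, a; θ) with a variant of Q-learning, minimizing the squared error to a bootstrapped target. Simplified from the paper's Algorithm 1 (there, the expectation is over minibatches of transitions (φ_j, a_j, r_j, φ_{j+1}) drawn from an experience replay memory):

```latex
y_j =
\begin{cases}
  r_j & \text{if } \phi_{j+1} \text{ is terminal} \\
  r_j + \gamma \max_{a'} Q(\phi_{j+1}, a'; \theta) & \text{otherwise}
\end{cases}
\qquad
L(\theta) = \mathbb{E}\bigl[\,(y_j - Q(\phi_j, a_j; \theta))^2\,\bigr]
```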