AI Paper English F.o.R.

A blog that reads artificial intelligence (AI) papers by applying the Frame of Reference (F.o.R.) method from the textbook 英語リーディング教本 (English Reading Textbook).

Archive of posts for 2019-05-05 (1 day)

SSD | Abstract, Sentence 4

SSD

Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 Object…
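The idea of tying box size to feature-map resolution can be sketched numerically. Below is a minimal illustration (not the authors' code) of how per-layer box scales might be spaced, roughly following the linear scale rule described in the SSD paper; the `s_min`/`s_max` defaults are the values the paper suggests, used here as assumptions:

```python
# A sketch of SSD-style per-layer box scales: coarser feature maps get larger
# scales, so each resolution naturally handles a different object size.

def ssd_scales(num_maps, s_min=0.2, s_max=0.9):
    """Linearly spaced box scales, one per feature map, from s_min to s_max."""
    if num_maps == 1:
        return [s_min]
    step = (s_max - s_min) / (num_maps - 1)
    return [s_min + step * k for k in range(num_maps)]

scales = ssd_scales(6)
print([round(s, 2) for s in scales])  # smallest scale 0.2, largest 0.9
```

With six feature maps, the finest map detects the smallest objects (scale 0.2) and the coarsest the largest (scale 0.9).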

SSD | Abstract, Sentence 3

SSD

At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https…

SSD | Abstract, Sentence 2

SSD

Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/a…
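The "default boxes over different aspect ratios and scales per feature map location" can be sketched as a small enumeration. This is an illustrative toy, not the paper's exact configuration: the aspect-ratio set and the scale value here are assumptions.

```python
import math

# A sketch of SSD-style default boxes: at each feature-map cell, boxes of
# several aspect ratios are laid out around the cell center. All coordinates
# are in [0, 1] relative image units.

def default_boxes(fmap_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    """Return (cx, cy, w, h) default boxes for an fmap_size x fmap_size map."""
    boxes = []
    for i in range(fmap_size):
        for j in range(fmap_size):
            cx, cy = (j + 0.5) / fmap_size, (i + 0.5) / fmap_size
            for ar in aspect_ratios:
                # Wider boxes for ar > 1, taller boxes for ar < 1; area is kept
                # constant at scale**2.
                boxes.append((cx, cy, scale * math.sqrt(ar), scale / math.sqrt(ar)))
    return boxes

boxes = default_boxes(fmap_size=4, scale=0.3)
print(len(boxes))  # 4 * 4 cells * 3 aspect ratios = 48 boxes
```

The detector then classifies and refines every one of these boxes, which is what "discretizes the output space" means in practice.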

SSD | Abstract, Sentence 1

SSD

We present a method for detecting objects in images using a single deep neural network. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 For the object detection task, faster than YOLO and yet on par with Faster R-CNN…

YOLO | Abstract, Paragraph 2, Sentence 6

It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/15…

YOLO | Abstract, Paragraph 2, Sentence 5

Finally, YOLO learns very general representations of objects. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 For the object detection task, by simplifying the CNN architecture…

YOLO | Abstract, Paragraph 2, Sentence 4

Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv…

YOLO | Abstract, Paragraph 2, Sentence 3

A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" …

YOLO | Abstract, Paragraph 2, Sentence 2

Our base YOLO model processes images in real-time at 45 frames per second. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 For the object detection task, by simplifying the CNN…

YOLO | Abstract, Paragraph 2, Sentence 1

Our unified architecture is extremely fast. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 For the object detection task, by simplifying the CNN architecture, Faster R-CNN…

YOLO | Abstract, Paragraph 1, Sentence 5

Since the whole detection pipeline is a single network, it can be optimized end-to-end directly on detection performance. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 Obj…

YOLO | Abstract, Paragraph 1, Sentence 4

A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 Object detection…

YOLO | Abstract, Paragraph 1, Sentence 3

Instead, we frame object detection as a regression problem to spatially separated bounding boxes and associated class probabilities. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/150…
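Framing detection as regression means the whole prediction is one fixed-size tensor rather than a pipeline of classifier calls. A tiny sketch of that output size, using the S=7 grid, B=2 boxes per cell, and C=20 classes that the YOLO paper reports for PASCAL VOC:

```python
# YOLO's output as a single regression target: an S x S grid where each cell
# predicts B boxes (x, y, w, h, confidence) plus C class probabilities.

def yolo_output_size(S=7, B=2, C=20):
    """Number of real values the network regresses per image."""
    return S * S * (B * 5 + C)

print(yolo_output_size())  # 7 * 7 * (2*5 + 20) = 1470 values per image
```

Because every box and class score comes out of this one tensor in one forward pass, the network can be trained end-to-end on detection loss directly.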

YOLO | Abstract, Paragraph 1, Sentence 2

Prior work on object detection repurposes classifiers to perform detection. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 For the object detection task, by simplifying the CNN…

YOLO | Abstract, Paragraph 1, Sentence 1

We present YOLO, a new approach to object detection. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 For the object detection task, by simplifying the CNN architecture, Faste…

Faster R-CNN | Abstract, Sentence 9

Code has been made publicly available. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/abs/1506.01497 For the object detection task, with two networks (RPN and Fast R-CNN), feature…

Faster R-CNN | Abstract, Sentence 8

In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https…

Faster R-CNN | Abstract, Sentence 7

For the very deep VGG-16 model, our detection system has a frame rate of 5fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals…

Faster R-CNN | Abstract, Sentence 6

We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features—using the recently popular terminology of neural networks with “attention” mechanisms, the RPN component tells the unified network where to l…

Faster R-CNN | Abstract, Sentence 5

The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv.org/a…

Faster R-CNN | Abstract, Sentence 4

An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" https://arxiv…
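"Predicts object bounds and objectness scores at each position" can be made concrete by counting output channels. Assuming k anchors per sliding-window position (the paper uses 3 scales and 3 aspect ratios, so k = 9), the RPN head emits 2k objectness scores and 4k box-regression offsets; the helper name below is hypothetical:

```python
# A sketch of the RPN head's per-position output sizes: for k anchors, the
# classification branch scores object vs. background (2k values) and the
# regression branch predicts 4 box offsets per anchor (4k values).

def rpn_head_channels(num_scales=3, num_ratios=3):
    k = num_scales * num_ratios  # k = 9 anchors in the paper's default setup
    return {"anchors": k, "cls_scores": 2 * k, "bbox_deltas": 4 * k}

print(rpn_head_channels())  # {'anchors': 9, 'cls_scores': 18, 'bbox_deltas': 36}
```

Because these outputs are produced by convolutions over the whole feature map at once, the same counts apply at every spatial position simultaneously.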

Faster R-CNN | Abstract, Sentence 3

In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Ob…

Faster R-CNN | Abstract, Sentence 2

Advances like SPPnet and Fast R-CNN have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. Shaoqing Ren, et al., "Faster R-CNN: Towards Real-Time Object Detection with Region Propos…