AI Paper English F.o.R.

A blog for reading papers on artificial intelligence (AI) by applying the Frame of Reference (F.o.R.) method from the textbook "Eigo Reading Kyohon".

List of articles from the one-month period starting 2019-05-01

FCN | Abstract, Sentence 7

Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20% relative improvement to 62.2% mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 …

FCN | Abstract, Sentence 6

We then define a novel architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 …
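
The deep/shallow fusion this sentence describes is essentially a skip connection between class-score maps at two resolutions. A rough sketch of the idea (assuming PyTorch; the module name `SkipFusion` and all channel sizes are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

# Semantic information from a deep, coarse layer is upsampled and fused
# with appearance information from a shallow, fine layer before the
# per-pixel class prediction.
class SkipFusion(nn.Module):
    def __init__(self, deep_channels, shallow_channels, num_classes):
        super().__init__()
        # 1x1 convs map both feature maps to per-class score maps.
        self.score_deep = nn.Conv2d(deep_channels, num_classes, kernel_size=1)
        self.score_shallow = nn.Conv2d(shallow_channels, num_classes, kernel_size=1)
        # Learned upsampling (transposed conv) doubles the coarse resolution.
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=2, padding=1)

    def forward(self, deep, shallow):
        coarse = self.upsample(self.score_deep(deep))  # coarse semantics, upsampled
        fine = self.score_shallow(shallow)             # fine appearance detail
        return coarse + fine                           # element-wise fusion

# Toy usage: a deep 7x7 score map fused with a shallow 14x14 feature map.
fuse = SkipFusion(deep_channels=512, shallow_channels=256, num_classes=21)
deep = torch.randn(1, 512, 7, 7)
shallow = torch.randn(1, 256, 14, 14)
print(fuse(deep, shallow).shape)  # torch.Size([1, 21, 14, 14])
```

In the paper, applying this kind of fusion at successively shallower layers is what produces the FCN-16s and FCN-8s variants.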

FCN | Abstract, Sentence 5

We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 …
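
Transferring learned representations here means initializing the segmentation model with classification weights and continuing training on the new task. A minimal sketch, assuming PyTorch and a recent torchvision (with the `weights` API); the layer choices and learning rates are illustrative only, not the paper's recipe:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse the convolutional features of a pretrained classifier (here VGG-16);
# only the new segmentation head starts from scratch, and the whole model is
# then fine-tuned on the segmentation task.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
head = nn.Conv2d(512, 21, kernel_size=1)  # 21 = 20 PASCAL VOC classes + background

optimizer = torch.optim.SGD(
    [{"params": backbone.parameters(), "lr": 1e-4},  # gentle updates for pretrained weights
     {"params": head.parameters(), "lr": 1e-3}],     # larger steps for the new head
    lr=1e-4, momentum=0.9,
)
```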

FCN | Abstract, Sentence 4

We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 …

FCN | Abstract, Sentence 3

Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 …
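
What makes arbitrary input sizes possible is that every layer is convolutional, so no layer hard-codes a spatial dimension. A toy illustration (assuming PyTorch; the 21 output channels stand in for PASCAL VOC's 20 classes plus background):

```python
import torch
import torch.nn as nn

# A fully convolutional network: with no fully connected layer, nothing
# fixes the spatial size, so any input yields a correspondingly sized
# grid of per-pixel class scores in a single forward pass.
fcn = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 1/2 resolution
    nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                      # 1/4 resolution
    nn.Conv2d(128, 21, kernel_size=1),    # 1x1 conv replaces the fc classifier
)

for size in (64, 96, 160):                # arbitrary input sizes
    out = fcn(torch.randn(1, 3, size, size))
    print(size, "->", tuple(out.shape))   # output grid scales with the input
```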

FCN | Abstract, Sentence 2

We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 …

FCN | Abstract, Sentence 1

Convolutional networks are powerful visual models that yield hierarchies of features. Jonathan Long, et al., "Fully Convolutional Networks for Semantic Segmentation" https://arxiv.org/abs/1411.4038 A CNN that uses no fully connected layers and can take images of any size…

U-Net | Abstract, Sentence 8

The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 …

U-Net | Abstract, Sentence 7

Segmentation of a 512x512 image takes less than a second on a recent GPU. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 Even in cases where a large amount of data cannot be prepared, as with medical images…

U-Net | Abstract, Sentence 6

Moreover, the network is fast. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 Even in cases where a large amount of data cannot be prepared, as with medical images, accurate segmentation…

U-Net | Abstract, Sentence 5

Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 …

U-Net | Abstract, Sentence 4

We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 …

U-Net | Abstract, Sentence 3

The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 …
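
The contracting/expanding structure can be sketched in a few lines. This one-level toy version (assuming PyTorch; `TinyUNet` and its sizes are illustrative and far shallower than the paper's network) shows the two paths and the skip connection that links them:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # two 3x3 convs, as in each stage of the U-Net paths
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    # One level only: a contracting step that captures context and a
    # symmetric expanding step whose skip connection restores localization.
    def __init__(self, num_classes=2):
        super().__init__()
        self.down = block(1, 64)
        self.pool = nn.MaxPool2d(2)                          # contracting path
        self.bottom = block(64, 128)
        self.up = nn.ConvTranspose2d(128, 64, 2, stride=2)   # expanding path
        self.fuse = block(128, 64)                           # 128 = 64 (skip) + 64 (up)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        skip = self.down(x)
        x = self.bottom(self.pool(skip))
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)  # concatenate high-resolution skip features
        return self.head(self.fuse(x))

net = TinyUNet()
print(net(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])
```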

U-Net | Abstract, Sentence 2

In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 …
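
"Strong use of data augmentation" means generating many plausible random variants of each annotated image. The paper emphasizes elastic deformations; as a simpler sketch using stock torchvision transforms (note that for segmentation the same random transform must also be applied to the label mask):

```python
from torchvision import transforms

# Each annotated sample yields a different random variant every epoch,
# so a handful of labeled images goes much further during training.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),
])
# Usage: tensor = augment(pil_image), called fresh for every training sample.
```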

U-Net | Abstract, Sentence 1

There is large consent that successful training of deep networks requires many thousand annotated training samples. Olaf Ronneberger, et al., "U-Net: Convolutional Networks for Biomedical Image Segmentation" https://arxiv.org/abs/1505.04597 …

SSD | Abstract, Sentence 10

Code is available at: https://github.com/weiliu89/caffe/tree/ssd . Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 In the object detection task, SSD achieves higher speed than YOLO and accuracy on par with Faster R-CNN…

SSD | Abstract, Sentence 9

Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 In the object detection task, SSD achieves higher speed than YOLO and…

SSD | Abstract, Sentence 8

For 300 × 300 input, SSD achieves 74.3% mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for 512 × 512 input, SSD achieves 76.9% mAP, outperforming a comparable state-of-the-art Faster R-CNN model. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 …

SSD | Abstract, Sentence 7

Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 …

SSD | Abstract, Sentence 6

This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 In the object detection task, SSD achieves higher speed than YOLO…

SSD | Abstract, Sentence 5

SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 …

SSD | Abstract, Sentence 4

Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 In the object detection task…
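
A sketch of combining predictions across feature maps of different resolutions (assuming PyTorch; the three map shapes, channel counts, and box counts are illustrative, not SSD's exact configuration). Large maps cover small objects; small maps cover large ones:

```python
import torch
import torch.nn as nn

# Each feature map gets its own small conv head; per-location outputs are
# flattened and concatenated into one list of per-box predictions.
num_classes, boxes_per_loc = 21, 4
heads = nn.ModuleList([
    nn.Conv2d(c, boxes_per_loc * (num_classes + 4), 3, padding=1)
    for c in (512, 1024, 256)              # channels of three example feature maps
])

feature_maps = [torch.randn(1, 512, 38, 38),
                torch.randn(1, 1024, 19, 19),
                torch.randn(1, 256, 10, 10)]

preds = [h(f).permute(0, 2, 3, 1).reshape(1, -1, num_classes + 4)
         for h, f in zip(heads, feature_maps)]
all_preds = torch.cat(preds, dim=1)        # one row per default box
print(all_preds.shape)                     # (1, total_boxes, num_classes + 4)
```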

SSD | Abstract, Sentence 3

At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 …
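
The "adjustments to the box" are typically regressed as center offsets plus log-scale width/height factors relative to each default box. A hedged sketch of one common decoding convention (SSD itself additionally scales the offsets by fixed variance constants, omitted here for clarity):

```python
import torch

# default_boxes, offsets: (N, 4) as (cx, cy, w, h) and (dcx, dcy, dw, dh)
def decode(default_boxes, offsets):
    cxcy = default_boxes[:, :2] + offsets[:, :2] * default_boxes[:, 2:]  # shift center
    wh = default_boxes[:, 2:] * torch.exp(offsets[:, 2:])                # rescale size
    return torch.cat([cxcy, wh], dim=1)

defaults = torch.tensor([[0.50, 0.50, 0.20, 0.20]])
offsets = torch.tensor([[0.10, -0.05, 0.00, 0.30]])
print(decode(defaults, offsets))  # adjusted box, better matched to the object shape
```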

SSD | Abstract, Sentence 2

Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 …
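
Discretizing the output space means tiling every feature-map cell with a fixed set of boxes. A minimal sketch of generating such a grid (assuming NumPy; the scale and aspect ratios are illustrative values, not the paper's configuration):

```python
import itertools
import numpy as np

# At every cell of one feature map, place boxes of several aspect ratios
# sharing a single scale; coordinates are normalized to [0, 1].
def default_boxes(grid_size, scale, aspect_ratios=(1.0, 2.0, 0.5)):
    boxes = []
    for i, j in itertools.product(range(grid_size), repeat=2):
        cx, cy = (j + 0.5) / grid_size, (i + 0.5) / grid_size  # cell center
        for ar in aspect_ratios:
            boxes.append([cx, cy, scale * np.sqrt(ar), scale / np.sqrt(ar)])
    return np.array(boxes)  # (grid*grid*len(aspect_ratios), 4) as cx, cy, w, h

boxes = default_boxes(grid_size=8, scale=0.2)
print(boxes.shape)  # (192, 4): 8*8 locations x 3 aspect ratios
```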

SSD | Abstract, Sentence 1

We present a method for detecting objects in images using a single deep neural network. Wei Liu, et al., "SSD: Single Shot MultiBox Detector" https://arxiv.org/abs/1512.02325 In the object detection task, SSD achieves higher speed than YOLO and accuracy on par with Faster R-CNN…

YOLO | Abstract, Paragraph 2, Sentence 6

It outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to other domains like artwork. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 …

YOLO | Abstract, Paragraph 2, Sentence 5

Finally, YOLO learns very general representations of objects. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task, by simplifying the CNN architecture…

YOLO | Abstract, Paragraph 2, Sentence 4

Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on background. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 …

YOLO | Abstract, Paragraph 2, Sentence 3

A smaller version of the network, Fast YOLO, processes an astounding 155 frames per second while still achieving double the mAP of other real-time detectors. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 …

YOLO | Abstract, Paragraph 2, Sentence 2

Our base YOLO model processes images in real-time at 45 frames per second. Joseph Redmon, et al., "You Only Look Once: Unified, Real-Time Object Detection" https://arxiv.org/abs/1506.02640 In the object detection task, by simplifying the CNN architecture…