AI Paper English F.o.R.

A blog that reads artificial intelligence (AI) papers in English by applying the Frame of Reference (F.o.R.) from the English Reading Textbook (英語リーディング教本).

Articles from the one-year period starting 2019-01-01

Haar-like | Abstract, Sentence 3

The first is the introduction of a new image representation called the “Integral Image” which allows the features used by our detector to be computed very quickly. Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Casca…
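The quoted sentence names the paper's central trick. As a rough illustration (not the authors' code), an integral image can be built with two cumulative sums, after which the sum over any rectangle takes four array lookups regardless of its size:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[0:y+1, 0:x+1]; two cumulative sums suffice.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    # Sum over img[top:bottom+1, left:right+1] from four lookups.
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
```

A two-rectangle Haar-like feature is then just the difference of two such box sums, which is what lets the detector evaluate many features per window so quickly.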

Haar-like | Abstract, Sentence 2

This work is distinguished by three key contributions. Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features" https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf Deep learning…

Haar-like | Abstract, Sentence 1

This paper describes a machine learning approach for visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. Paul Viola and Michael Jones, "Rapid Object Detection using a Boosted …

SIFT | Abstract, Sentence 6

This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance. David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints" https://www.cs.ubc.ca/~lowe/paper…

SIFT | Abstract, Sentence 5

The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally per…
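The first stage of that pipeline, matching each feature to its nearest neighbor in a database, is usually paired with Lowe's distance-ratio test. A minimal sketch with random stand-in descriptors (brute-force search, not the paper's fast approximate nearest-neighbor algorithm, and omitting the Hough clustering step):

```python
import numpy as np

def ratio_test_matches(query, database, ratio=0.8):
    # Keep a match only when the nearest database descriptor is
    # clearly closer than the second nearest (the ratio test).
    matches = []
    for i, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)
        nearest, second = np.argsort(dists)[:2]
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, int(nearest)))
    return matches

rng = np.random.default_rng(0)
database = rng.normal(size=(100, 128))                    # stand-in for stored 128-d descriptors
query = database[:3] + 0.01 * rng.normal(size=(3, 128))   # noisy views of known features
```

Ambiguous features, whose two nearest candidates are nearly equidistant, are simply discarded rather than matched incorrectly.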

SIFT | Abstract, Sentence 4

This paper also describes an approach to using these features for object recognition. David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints" https://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf Not deep learning, but a 2004…

SIFT | Abstract, Sentence 3

The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. David G. Lowe, "Distinctive Image Features from Scale-Invariant K…

SIFT | Abstract, Sentence 2

The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. David G. Lowe, "Dist…

SIFT | Abstract, Sentence 1

This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. David G. Lowe, "Distinctive Image Features from Scale-Invar…

HOG | Abstract, Sentence 4

The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds. Navneet D…

HOG | Abstract, Sentence 3

We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descrip…
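As an illustration of the first two stages listed there (fine-scale gradients and fine orientation binning), here is a minimal sketch of one HOG cell's histogram; the 8×8 cell, 9 unsigned bins, and the omission of block normalization are simplifications, not the paper's exact settings:

```python
import numpy as np

def cell_histogram(patch, n_bins=9):
    # Central-difference gradients; unsigned orientation in [0, 180);
    # each pixel votes into its bin with its gradient magnitude.
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((orientation * n_bins / 180.0).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist

# A pure horizontal ramp: all gradient energy lies at orientation 0.
ramp = np.tile(np.arange(8.0), (8, 1))
hist = cell_histogram(ramp)
```

In the full descriptor, cell histograms are grouped into overlapping blocks and contrast-normalized, the stage the sentence above singles out as critical.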

HOG | Abstract, Sentence 2

After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. Navneet Dalal and Bill…

HOG | Abstract, Sentence 1

We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. Navneet Dalal and Bill Triggs, "Histograms of Oriented Gradients for Human Detection" https://lear.inrial…

ZFNet | Abstract, Sentence 7

We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets. Matthew D Zeiler, Rob Fergus, "Vis…

ZFNet | Abstract, Sentence 6

We also perform an ablation study to discover the performance contribution from different model layers. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" https://arxiv.org/abs/1311.2901 Visualizing CNNs and…

ZFNet | Abstract, Sentence 5

Used in a diagnostic role, these visualizations allow us to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Net…

ZFNet | Abstract, Sentence 4

We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" http…

ZFNet | Abstract, Sentence 3

In this paper we address both issues. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" https://arxiv.org/abs/1311.2901 By visualizing CNNs and fixing the weaknesses this exposed in AlexNet, it won the 2013 ILSVRC…

ZFNet | Abstract, Sentence 2

However there is no clear understanding of why they perform so well, or how they might be improved. Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" https://arxiv.org/abs/1311.2901 Visualizing CNNs and Ale…

ZFNet | Abstract, Sentence 1

Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark (Krizhevsky et al., 2012). Matthew D Zeiler, Rob Fergus, "Visualizing and Understanding Convolutional Networks" h…

Dropout | Abstract, Sentence 10

We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.…

Dropout | Abstract, Sentence 9

This significantly reduces overfitting and gives major improvements over other regularization methods. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastav…

Dropout | Abstract, Sentence 8

At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent…
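The approximation this sentence describes can be checked numerically: averaging one linear unit's output over many sampled dropout masks converges to the unthinned output with activations scaled by the keep probability. A toy Monte-Carlo check, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)   # activations feeding one linear unit
p_keep = 0.5               # probability of retaining each unit

# Average the unit's input over many sampled "thinned" networks...
thinned = [x * (rng.random(x.shape) < p_keep) for _ in range(10000)]
averaged = np.mean(thinned, axis=0)

# ...which approaches the single unthinned network with scaled activations.
scaled = p_keep * x
```

This is why a single forward pass with rescaled weights stands in for an ensemble of exponentially many thinned networks at test time.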

Dropout | Abstract, Sentence 7

During training, dropout samples from an exponential number of different “thinned” networks. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/sriva…

Dropout | Abstract, Sentence 6

This prevents units from co-adapting too much. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf Training while randomly deleting nodes…

Dropout | Abstract, Sentence 5

The key idea is to randomly drop units (along with their connections) from the neural network during training. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/s…

Dropout | Abstract, Sentence 4

Dropout is a technique for addressing this problem. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf While randomly deleting nodes…

Dropout | Abstract, Sentence 3

Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks f…

Dropout | Abstract, Sentence 2

However, overfitting is a serious problem in such networks. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf Randomly deleting nodes…

Dropout | Abstract, Sentence 1

Deep neural nets with a large number of parameters are very powerful machine learning systems. Nitish Srivastava, et al., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting" http://jmlr.org/papers/volume15/srivastava14a/sri…