Im2Text: Describing Images Using 1 Million Captioned Photographs

Vicente Ordonez, Girish Kulkarni, Tamara L. Berg

Stony Brook University

Abstract

We develop and demonstrate automatic image description methods using a large captioned photo collection. One contribution is our technique for the automatic collection of this new dataset: performing a huge number of Flickr queries and then filtering the noisy results down to 1 million images with associated visually relevant captions. Such a collection allows us to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results. We also develop methods that incorporate many state-of-the-art, but fairly noisy, estimates of image content to produce even more pleasing results. Finally, we introduce a new objective performance measure for image captioning.
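To make the non-parametric approach concrete, the following Matlab sketch illustrates global-feature nearest-neighbor caption transfer. The feature matrix, caption array, and compute_gist helper are hypothetical stand-ins, not the released implementation; see the code under Resources for the actual baseline.

% Minimal sketch of non-parametric caption transfer (hypothetical names).
% features : N x D matrix of global descriptors (e.g. GIST + TinyImage)
%            for the 1 million captioned images.
% captions : N x 1 cell array of the associated Flickr captions.
query = compute_gist(imread('myimage.jpg'));  % hypothetical feature extractor

% Euclidean distance from the query descriptor to every database image.
dists = sqrt(sum(bsxfun(@minus, features, query).^2, 2));

% Transfer the captions of the k nearest images as candidate descriptions.
k = 5;
[~, idx] = sort(dists, 'ascend');
candidates = captions(idx(1:k));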

Paper

Vicente Ordonez, Girish Kulkarni, Tamara L. Berg. Im2Text: Describing Images Using 1 Million Captioned Photographs. Neural Information Processing Systems (NIPS), 2011. [Paper (PDF)] [BibTeX] [Poster (PDF)]

Resources

Data, containing 1 million image URLs + captions and a script to crawl the images (43MB). Alternatively, if you are interested in the image collection for research purposes, please email vicenteor@rice.edu.
Data-JSON, JSON file containing 1 million image URLs + captions (Preferred) (48MB). This version additionally includes user-ids so that train/test splits can be built without overlapping user collections; a sketch of such a split appears after the usage example below.
Matlab code implementing our baseline method, including precomputed GIST + TinyImage features (4.4GB).
Search tool: search for images using text queries (e.g., "cat under a tree", "dog walking on the beach").
% Sample usage: generate candidate descriptions for an image.
captions = im2text('myimage.jpg');
% captions is a cell array of candidate image descriptions.
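For the JSON release, the user-ids make it straightforward to build train/test splits with no user overlap. The sketch below shows one way to do this in Matlab; the file name dataset.json and the field names url, caption, and user_id are assumptions about the release format and may differ from the actual files.

% Sketch of a user-disjoint train/test split (field names assumed).
records = jsondecode(fileread('dataset.json'));  % requires MATLAB R2016b+

% Group records by user so that no user appears in both splits.
[~, ~, user_idx] = unique({records.user_id});    % assumes string user-ids
rng(0);                                          % reproducible split
perm = randperm(max(user_idx));
n_test = round(0.1 * max(user_idx));             % hold out 10% of users
is_test = ismember(user_idx, perm(1:n_test));

test_set  = records(is_test);
train_set = records(~is_test);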