The segmented and annotated IAPR TC-12 benchmark

Full text for this resource is not available from the Research Repository.

Escalante, H, Hernández, C, Gonzalez, J, López-López, A, Montes, M, Morales, E, Enrique Sucar, L, Villaseñor, L and Grubinger, Michael (2010) The segmented and annotated IAPR TC-12 benchmark. Computer Vision and Image Understanding, 114 (4). pp. 419-428. ISSN 1077-3142


Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet the lack of a standardized evaluation platform tailored to the needs of AIA has hindered effective evaluation of its methods, especially for region-based AIA. Therefore, in this paper, we introduce the segmented and annotated IAPR TC-12 benchmark: an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.
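The abstract does not specify how the proposed soft measure is defined, so the sketch below is only a generic illustration of the idea of "soft" annotation evaluation: instead of strict exact-match scoring, a predicted region label earns partial credit when it is semantically close to the ground-truth label (here, an ancestor in a toy label hierarchy). The hierarchy, the `decay` weighting, and all function names are illustrative assumptions, not the measure from the paper.

```python
# Toy child -> parent vocabulary; purely illustrative, NOT the IAPR TC-12 hierarchy.
PARENT = {
    "husky": "dog",
    "dog": "animal",
    "cat": "animal",
    "animal": "entity",
    "car": "entity",
}

def ancestors(label):
    """Return the chain [label, parent, grandparent, ...] up to the root."""
    chain = [label]
    while chain[-1] in PARENT:
        chain.append(PARENT[chain[-1]])
    return chain

def soft_score(predicted, truth, decay=0.5):
    """1.0 for an exact match; decay**k if `predicted` is the k-th ancestor
    of `truth` in the hierarchy; 0.0 for unrelated labels."""
    chain = ancestors(truth)
    if predicted in chain:
        return decay ** chain.index(predicted)
    return 0.0

def soft_accuracy(pred_labels, true_labels, decay=0.5):
    """Mean per-region soft score over the regions of one segmented image."""
    scores = [soft_score(p, t, decay) for p, t in zip(pred_labels, true_labels)]
    return sum(scores) / len(scores)
```

For example, predicting "dog" for a region whose ground truth is "husky" scores 0.5 rather than 0, so a region-based annotator is rewarded for near-misses that a hard exact-match metric would count as complete failures.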


Item type Article
DOI 10.1016/j.cviu.2009.03.008
Subjects Historical > FOR Classification > 0801 Artificial Intelligence and Image Processing
Historical > Faculty/School/Research Centre/Department > School of Engineering and Science
Keywords ResPubID19195, data set creation, ground truth collection, evaluation metrics, automatic image annotation, image retrieval
Citations in Scopus 218
