Research Repository

The segmented and annotated IAPR TC-12 benchmark

Escalante, H., Hernández, C., Gonzalez, J., López-López, A., Montes, M., Morales, E., Sucar, L. E., Villaseñor, L. and Grubinger, M. (2010) The segmented and annotated IAPR TC-12 benchmark. Computer Vision and Image Understanding, 114 (4). pp. 419-428. ISSN 1077-3142

Full text for this resource is not available from the Research Repository.


Automatic image annotation (AIA), a highly popular topic in the field of information retrieval research, has experienced significant progress within the last decade. Yet the lack of a standardized evaluation platform tailored to the needs of AIA has hindered effective evaluation of its methods, especially for region-based AIA. Therefore, in this paper we introduce the segmented and annotated IAPR TC-12 benchmark: an extended resource for the evaluation of AIA methods as well as the analysis of their impact on multimedia information retrieval. We describe the methodology adopted for the manual segmentation and annotation of images, and present statistics for the extended collection. The extended collection is publicly available and can be used to evaluate a variety of tasks in addition to image annotation. We also propose a soft measure for the evaluation of annotation performance and identify future research areas in which this extended test collection is likely to make a contribution.

Item Type: Article
Uncontrolled Keywords: ResPubID19195, data set creation, ground truth collection, evaluation metrics, automatic image annotation, image retrieval
Subjects: Current > FOR Classification > 0801 Artificial Intelligence and Image Processing
Historical > Faculty/School/Research Centre/Department > School of Engineering and Science
Depositing User: VUIR
Date Deposited: 07 Oct 2011 00:29
Last Modified: 07 Feb 2017 10:32
Citations in Scopus: 124
