This zip file contains the databases used in the video annotation experiments of the paper [1]. In particular, two datasets are provided: the Louvre dataset and the Madrid dataset.

Both datasets are provided as links to the videos and images needed to perform the experiments. The structure of each folder is as follows:

- File categories.txt: contains the list of categories involved in the annotation problem. Each row corresponds to a category, and the row number is that category's numeric identifier.
- videosDB: This folder refers to the query videos and contains:

	- links.txt: Links to the videos used as queries. Each row number is the numeric identifier of the corresponding video.
	- labels.txt: A file containing the ground-truth annotations of the categories present in each video. Each row number points to a particular video, and the vector of numbers on that row lists the categories present in it, referring to the rows of file categories.txt.

- referencesDB: This folder refers to the reference image DB (automatically downloaded from Flickr) and contains:

	- links.txt: Links to the images used as references. Each row number is the numeric identifier of the corresponding image.
	- labels.txt: A file containing the category associated with each image. Each row number points to a particular image, and the number on that row is the image's category, referring to the rows of file categories.txt.
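As an illustration of how the files above fit together, the following sketch loads a links.txt and a labels.txt into Python dictionaries keyed by row number. It assumes the formats implied by the descriptions above (one URL per row in links.txt; whitespace-separated category numbers per row in labels.txt); the function names and paths are illustrative, not part of the dataset.

```python
def load_links(path):
    """Return {row_number: url}, with rows numbered from 1.

    Assumes one link per row, as described for links.txt.
    """
    with open(path) as f:
        return {i: line.strip()
                for i, line in enumerate(f, start=1) if line.strip()}


def load_labels(path):
    """Return {row_number: [category numbers]}, with rows numbered from 1.

    Assumes whitespace-separated category numbers per row, as in
    videosDB/labels.txt (for referencesDB/labels.txt each row would
    hold a single number).
    """
    with open(path) as f:
        return {i: [int(c) for c in line.split()]
                for i, line in enumerate(f, start=1) if line.strip()}
```

The row numbers then serve as the join key: the categories returned for video i in videosDB/labels.txt index directly into the rows of categories.txt.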


If you have any problems using the datasets, or if any of the links are broken, do not hesitate to contact Iván González (igonzalez@tsc.uc3m.es).


[1] Iván González-Díaz, Tomás Martínez Cortes, Ascensión Gallardo Antolín and Fernando Díaz-de-María, "Temporal Segmentation and Keyframe Selection Methods for User-Generated Video Retrieval and Annotation", submitted to IEEE Transactions on Expert Systems.



