
 

Databases:

ViCoCoS-3D (2016):

VideoConference Common Scenes in 3D.

The ViCoCoS-3D database contains freely available sequences with typical videoconferencing content, intended to support the development and performance evaluation of processing algorithms and systems for videoconferencing services.

ViCoCoS-3D database overview

Go to the site

LASIESTA database (2016):

Fully annotated sequences to test moving object detection and tracking algorithms.

LASIESTA is composed of many real indoor and outdoor sequences organized into different categories, each covering a specific challenge.

In contrast to other databases, it is fully annotated at both pixel-level and object-level. Therefore, it is suitable not only for strategies exclusively focused on the detection of moving objects but also for those that integrate tracking algorithms in their detection approaches.
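As an illustration of how pixel-level annotations of this kind are typically used, the sketch below compares a binary detection mask against a binary ground-truth mask and reports precision and recall. This is only a generic example; the mask layout and the exact evaluation protocol used with LASIESTA are assumptions here, not part of the database documentation.

```python
import numpy as np

def precision_recall(detected: np.ndarray, ground_truth: np.ndarray):
    """Compare a binary detection mask against a binary ground-truth mask.

    Both arrays hold booleans (True = foreground pixel). Returns
    (precision, recall); each is defined as 1.0 when its denominator is zero.
    """
    tp = np.logical_and(detected, ground_truth).sum()   # true positives
    fp = np.logical_and(detected, ~ground_truth).sum()  # false positives
    fn = np.logical_and(~detected, ground_truth).sum()  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Toy 4x4 frame: the ground truth has a 2x2 object; the detector finds
# 3 of its 4 pixels plus 1 false positive.
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True
det = np.zeros((4, 4), dtype=bool)
det[1:3, 1] = True
det[1, 2] = True
det[3, 3] = True  # false positive

p, r = precision_recall(det, gt)
print(round(p, 2), round(r, 2))  # → 0.75 0.75
```

Object-level annotations additionally let the same comparison be restricted to each labeled object identifier, which is what makes the database usable for tracking-based detectors as well.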

Additionally, it contains sequences recorded with both static and moving cameras, and it also provides information about moving objects that remain temporarily static.

Go to the site

Hand gesture database (2015):

The hand gesture database is composed of high-resolution color images acquired with the Senz3D sensor.

This database comprises 5 different hand gestures: 2 static and 3 dynamic, each performed by 6 different subjects.

HandGesture database overview

Go to the site

HRRFaceD database (2014):

Face database composed of high-resolution images acquired with the Microsoft Kinect 2 (second generation).

This database contains a set of high-resolution range images acquired with a latest-generation range/depth camera: the Microsoft Kinect 2 (second generation). It covers the faces of 18 people, captured from different poses (frontal, lateral, etc.); some of the faces have been acquired both with and without glasses.

HRRFaceD database overview

Go to the site

Lab database (2012):

Set of 6 sequences to test moving object detection strategies.

The Lab database is composed of 6 sequences with some images labeled at pixel level, which can be used to evaluate the quality of the results provided by moving object detection strategies.

Lab database overview

Go to the site

Vehicle image database (2012):

More than 7000 images of vehicles and roads.

Set of 3425 images of vehicle rears taken from different points of view, and 3900 images extracted from road sequences not containing vehicles.

Vehicle image database overview

Go to the site

Software:

TSLAB (2015):

Tool for Semiautomatic LABeling

TSLAB is an advanced, user-friendly tool for fast labeling of moving objects in video sequences. It allows creating three kinds of labels for each moving object: moving region, shadow, and occluded area. Moreover, it assigns global identifiers at object level that allow tracking labeled objects along the sequences. Additionally, TSLAB provides information about moving objects that are temporarily static.

A very friendly graphical user interface makes it easy to create labels manually. Additionally, this interface includes some advanced semiautomatic tools that significantly simplify the labeling tasks and drastically reduce the time required to perform them.

 

Go to the site

Supplementary material:

Real-time nonparametric background subtraction with tracking-based foreground update (2017)

 

This site contains the software and the results corresponding to the moving object detection strategy proposed in [*].

[*] D. Berjón, C. Cuevas, F. Morán, and N. García, "Real-time nonparametric background subtraction with tracking-based foreground update", Pattern Recognition, vol. xx, no. x, pp. xxx-xxx, 2017 (under review).
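For readers unfamiliar with the general technique, the sketch below shows a minimal, deliberately naive background subtraction scheme: the background of each pixel is modeled as the median of the last few frames, and pixels that deviate from it are flagged as foreground. This is not the nonparametric method of [*]; the frame format, threshold, and history length are arbitrary assumptions made for illustration.

```python
import numpy as np

def background_subtraction(frames, threshold=25, history=10):
    """Naive background subtraction: model each pixel's background as the
    median of the last `history` frames and flag pixels whose absolute
    difference exceeds `threshold` as foreground.

    `frames` is an iterable of 2-D uint8 grayscale arrays. Yields one
    boolean foreground mask per frame once the history buffer is full.
    """
    buffer = []
    for frame in frames:
        if len(buffer) >= history:
            background = np.median(buffer, axis=0)
            yield np.abs(frame.astype(np.int16) - background) > threshold
            buffer.pop(0)  # drop the oldest frame
        buffer.append(frame)

# Synthetic sequence: a static 8x8 scene (gray level 50) with a bright
# 2x2 "object" sliding one pixel to the right in each frame.
scene = np.full((8, 8), 50, dtype=np.uint8)
frames = []
for x in range(12):
    f = scene.copy()
    f[3:5, x % 6:x % 6 + 2] = 200  # moving object
    frames.append(f)

masks = list(background_subtraction(frames, threshold=25, history=10))
print(len(masks), masks[0].sum())  # → 2 4 (one mask per frame after warm-up)
```

A tracking-based foreground update, as in the title of [*], additionally feeds information about tracked objects back into the model so that slow or stopping objects are not absorbed into the background.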

Go to the site

Augmented reality tool for the situational awareness improvement of UAV operators (2017)

 



This site contains the supplementary material corresponding to the strategy proposed in [*].




[*] S. Ruano, C. Cuevas, G. Gallego, and N. García, "Augmented reality tool for situational awareness improvement of UAV operators", Sensors, vol. 17, no. 2, article ID 297, 16 pages, 2017 (doi: 10.3390/s17020297).

 

 

Go to the site

Detection of stationary foreground objects using multiple nonparametric background-foreground models on a Finite State Machine (2017)

 



This site contains the results corresponding to the strategy proposed in [*].




[*] C. Cuevas, R. Martínez, D. Berjón, and N. García, "Detection of stationary foreground objects using multiple nonparametric background-foreground models on a Finite State Machine", IEEE Transactions on Image Processing, vol. 26, no. 3, pp. 1127-1142, 2017 (doi: 10.1109/TIP.2016.2642779).
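To illustrate the finite-state-machine idea named in the title, here is a toy per-pixel state machine with BACKGROUND, MOVING, and STATIC states. The actual FSM of [*] combines multiple nonparametric background-foreground models and is far more elaborate; the states, counter, and threshold below are simplified assumptions, not the paper's design.

```python
from enum import Enum

class PixelState(Enum):
    BACKGROUND = 0
    MOVING = 1
    STATIC = 2  # stationary foreground object

def update_pixel(is_foreground, counter, static_after=3):
    """Return the new (state, counter) for one pixel.

    `counter` counts consecutive foreground detections; once it reaches
    `static_after`, the pixel is classified as STATIC (stationary
    foreground) rather than MOVING. A background detection resets it.
    """
    if not is_foreground:
        return PixelState.BACKGROUND, 0
    counter += 1
    if counter >= static_after:
        return PixelState.STATIC, counter
    return PixelState.MOVING, counter

# A pixel detected as foreground for five consecutive frames, then background:
counter, history = 0, []
for fg in [True, True, True, True, True, False]:
    state, counter = update_pixel(fg, counter)
    history.append(state.name)
print(history)
# → ['MOVING', 'MOVING', 'STATIC', 'STATIC', 'STATIC', 'BACKGROUND']
```

The point of distinguishing STATIC from MOVING is exactly the use case of abandoned-object or stopped-vehicle detection: a stationary foreground object must not be absorbed into the background model.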

 

Go to the site

Camera localization using trajectories and maps (2014)

 

This site contains an exhaustive description of the test data used to evaluate the camera positioning system proposed in [*] since, due to page-length limitations, the manuscript itself can only include graphical descriptions for part of the evaluated settings.

[*] R. Mohedano, A. Cavallaro, and N. García, "Camera localization using trajectories and maps", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 4, pp. 684-697, Apr. 2014.

Go to the site

Grupo de Tratamiento de Imágenes (GTI), E.T.S.Ing. Telecomunicación
Universidad Politécnica de Madrid (UPM)
Av. Complutense nº 30, "Ciudad Universitaria". 28040 - Madrid (Spain). Tel: +34 913367353. Fax: +34 913367353