TSLAB: Tool for Semiautomatic LABeling

TSLAB is an advanced and user-friendly tool for the fast labeling of moving objects in video sequences. It allows three kinds of labels to be created for each moving object: moving region, shadow, and occluded area. Moreover, it assigns global identifiers at the object level, so that labeled objects can be tracked along the sequences. Additionally, TSLAB provides information about moving objects that are temporarily static.
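
As a rough illustration only (this is not TSLAB's actual output format, which is described in the user manual and in the Sensors paper cited below), the information attached to each labeled object can be pictured as a small Python record; every name in this sketch is hypothetical:

    from dataclasses import dataclass, field
    from typing import List, Optional
    import numpy as np

    @dataclass
    class ObjectLabel:
        """Hypothetical per-frame annotation for a single moving object."""
        object_id: int                        # global identifier, constant along the sequence
        moving_region: np.ndarray             # boolean mask (H x W) of the moving region
        shadow: Optional[np.ndarray] = None   # boolean mask of the cast shadow, if labeled
        occluded_area: Optional[np.ndarray] = None  # boolean mask of the occluded part, if labeled
        temporarily_static: bool = False      # True while the object remains stopped in the scene

    @dataclass
    class FrameGroundTruth:
        """Hypothetical container for all object labels in one frame."""
        frame_index: int
        objects: List[ObjectLabel] = field(default_factory=list)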

A very friendly graphical user interface makes it possible to create labels manually in a very simple way. Additionally, this interface includes several advanced semiautomatic tools (motion detection [1]-[4], automatic contour, active contours [5], and deblurring; see Sections 4.6 to 4.9 of the user manual) that significantly simplify the labeling tasks and drastically reduce the time required to perform them.
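The semiautomatic motion detection tool builds on background subtraction strategies such as [1]-[4]. As a generic, purely illustrative sketch of this kind of detection (not the detector actually shipped with TSLAB), a mixture-of-Gaussians subtractor in the spirit of [3] can be run with OpenCV; the input file name below is hypothetical:

    import cv2

    # Generic mixture-of-Gaussians background subtraction, in the spirit of [3].
    # This is only an illustrative sketch, not TSLAB's own detector.
    cap = cv2.VideoCapture("sequence.avi")  # hypothetical input sequence
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)       # 255 = foreground, 127 = shadow, 0 = background
        foreground = (mask == 255)           # candidate moving-region mask
        shadow = (mask == 127)               # candidate shadow mask
        # A labeling tool can offer masks like these as an initial proposal
        # that the user then refines manually.

    cap.release()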

For questions about this software, please contact Carlos Cuevas at ccr@gti.ssr.upm.es.

Citation:

C. Cuevas, E. M. Yáñez, and N. García, "Tool for semiautomatic labeling of moving objects in video sequences: TSLAB", Sensors, vol. 15, no. 7, pp. 15159-15178, Jul. 2015 (doi: 10.3390/s150715159).

Download:

User manual (descriptions and demos):

1 - Installation
2 - Input/output
3 - Navigation window
3.1 - Top menu
3.1.1 - File: Loading sequences and ground-truth data
3.1.2 - Options
3.1.3 - View
3.1.4 - Help
3.2 - Image selection
3.3 - Label selection
3.4 - Layer selection
3.5 - Label properties
3.6 - Display area
3.7 - Keyboard shortcuts
4 - Labeling window
4.1 - Display area: Drawing contours
4.2 - Mode
4.3 - Scroll
4.4 - Draw and save
4.5 - Visualization
4.6 - Motion detection
4.7 - Automatic contour
4.8 - Active contours
4.9 - Deblurring
4.10 - Keyboard shortcuts

References:

[1] B. Lo and S. Velastin, "Automatic congestion detection system for underground platforms", IEEE Int. Symp. Intelligent Multimedia, Video and Speech Processing, pp. 158-161, 2001 (doi: 10.1109/ISIMP.2001.925356).

[2] C. R. Wren, A. Azarbayejani, T. Darrell, and A. P. Pentland, "Pfinder: Real-time tracking of the human body", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 780-785, 1997 (doi: 10.1109/34.598236).

[3] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking", IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, pp. 246-252, 1999 (doi: 10.1109/CVPR.1999.784637).

[4] C. Cuevas and N. García, "Improved background modeling for real-time spatio-temporal non-parametric moving object detection strategies", Image and Vision Computing, vol. 31, no. 9, pp. 616-630, Sep. 2013 (doi: 10.1016/j.imavis.2013.06.003).

[5] E. M. Yáñez, C. Cuevas, and N. García, "A Combined Active Contours Method for Segmentation using Localization and Multiresolution", IEEE Int. Conf. on Image Processing, ICIP 2013, Melbourne, Australia, pp. 1257-1261, 15-18 Sep. 2013 (doi: 10.1109/ICIP.2013.6738259).

Grupo de Tratamiento de Imágenes (GTI), E.T.S. Ing. Telecomunicación
Universidad Politécnica de Madrid (UPM)
Av. Complutense nº 30, "Ciudad Universitaria". 28040 - Madrid (Spain). Tel: +34 913367353. Fax: +34 913367353