Extraction and classification of diving clips from continuous video footage


The recording of video data has become a common component of athlete training programmes. However, the manual analysis of the obtained footage is time-consuming and requires domain-specific knowledge. In order to automate this kind of task, most previous work has focused on just one of the following sub-problems: 1) temporally cropping events/actions of interest from continuous video; 2) tracking the object of interest; and 3) classifying the events/actions of interest. In contrast, this paper provides a complete solution to the overall action monitoring task in the context of a challenging real-world exemplar, diving classification. The model is required to learn the temporal boundaries of a dive, even though the subject is small and other divers and bystanders may be in view, and must also be sensitive to subtle changes in body pose in order to classify the dive. We propose effective techniques which work in tandem and can be easily generalized to video footage from other sports.
© Copyright 2017 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Published by IEEE. All rights reserved.
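The abstract describes a pipeline that first crops dive clips temporally from continuous footage and then classifies them. The snippet below is only a minimal illustrative sketch of that two-stage idea, not the authors' model: it assumes a hypothetical per-frame "dive in progress" score and per-frame feature vectors, thresholds the score track to obtain clip boundaries, and applies a placeholder pooled-feature classifier.

```python
import numpy as np

def extract_segments(frame_scores, threshold=0.5, min_length=8):
    """Temporally crop candidate dive clips from a continuous score track.

    `frame_scores` is a hypothetical per-frame probability that a dive is in
    progress (produced by any frame-level detector); contiguous runs above the
    threshold are returned as (start, end) frame indices.
    """
    active = frame_scores >= threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                      # segment opens
        elif not flag and start is not None:
            if i - start >= min_length:    # discard spurious short runs
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_length:
        segments.append((start, len(active)))
    return segments

def classify_clip(clip_features):
    """Placeholder dive classifier: average-pools per-frame features over time
    and returns the index of the strongest dimension. A real system would use
    a learned, pose-sensitive model instead."""
    pooled = clip_features.mean(axis=0)
    return int(np.argmax(pooled))

# Toy usage: 100 frames with 16-dim per-frame features and synthetic scores.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.uniform(0.0, 0.3, 40),   # no dive
                         rng.uniform(0.7, 1.0, 30),   # dive in progress
                         rng.uniform(0.0, 0.3, 30)])  # no dive
features = rng.normal(size=(100, 16))
for start, end in extract_segments(scores):
    label = classify_clip(features[start:end])
    print(f"dive clip frames [{start}, {end}) -> class {label}")
```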

Bibliographic details
Keywords:
Notations: Natural sciences and technology; technical sports
Tagging: Pattern recognition
Published in: IEEE/CVF Conference on Computer Vision and Pattern Recognition
Language: English
Published: Honolulu: IEEE, 2017
Online access: http://openaccess.thecvf.com//openaccess/content_cvpr_2017_workshops/w2/papers/Nibali_Extraction_and_Classification_CVPR_2017_paper.pdf
Pages: 38-48
Document types: Conference proceedings, conference report
Level: high