Extraction and classification of diving clips from continuous video footage

The recording of video data has become a common component of athlete training programmes. However, manual analysis of the resulting footage is time-consuming and requires domain-specific knowledge. To automate this kind of task, most previous work has focused on just one of the following sub-problems: 1) temporally cropping events/actions of interest from continuous video; 2) tracking the object of interest; and 3) classifying the events/actions of interest. In contrast, this paper provides a complete solution to the overall action monitoring task in the context of a challenging real-world exemplar: diving classification. The model must learn the temporal boundaries of a dive, even though the subject is small and other divers and bystanders may be in view, and it must also be sensitive to subtle changes in body pose in order to classify the dive. We propose effective techniques that work in tandem and can be easily generalized to video footage from other sports.
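As an illustration of the first sub-problem above (temporally cropping events of interest), the sketch below groups consecutive frames whose per-frame "dive-ness" scores exceed a threshold into candidate segments. It is a minimal, hypothetical example and not the authors' method: the paper learns dive boundaries with a trained model, whereas here the scores are simply supplied by the caller so that the grouping logic stays self-contained and runnable. The names Segment and temporal_segments are invented for this sketch.

# Illustrative sketch only (not the authors' method): sub-problem 1,
# temporal cropping, reduced to grouping runs of high-scoring frames
# into candidate dive segments. Per-frame scores are supplied by the
# caller; in the paper they would come from a learned model.

from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class Segment:
    start: int  # index of the first frame in the candidate dive
    end: int    # index of the last frame (inclusive)


def temporal_segments(scores: Sequence[float],
                      threshold: float = 0.5,
                      min_len: int = 8) -> List[Segment]:
    """Group runs of frames with score >= threshold into segments,
    discarding runs shorter than min_len frames."""
    segments: List[Segment] = []
    start = None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # a candidate dive begins
        elif s < threshold and start is not None:
            if i - start >= min_len:       # keep only sufficiently long runs
                segments.append(Segment(start, i - 1))
            start = None
    if start is not None and len(scores) - start >= min_len:
        segments.append(Segment(start, len(scores) - 1))  # run reaches the end
    return segments


if __name__ == "__main__":
    # Synthetic scores: quiet pool, one 20-frame dive, quiet pool again.
    scores = [0.1] * 30 + [0.9] * 20 + [0.2] * 30
    for seg in temporal_segments(scores):
        print(f"candidate dive: frames {seg.start}-{seg.end}")

The remaining sub-problems (tracking the diver to obtain a spatial crop, then classifying the cropped clip) would consume each detected segment in turn; they are omitted here because they rely on trained components that this sketch does not assume.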
© 2017 IEEE. Published by IEEE. All rights reserved.

Bibliographic Details
Subjects:
Notations: technical and natural sciences; technical sports
Tagging: pattern recognition
Published in: IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Language: English
Published: Honolulu: IEEE, 2017
Online Access: http://openaccess.thecvf.com//openaccess/content_cvpr_2017_workshops/w2/papers/Nibali_Extraction_and_Classification_CVPR_2017_paper.pdf
Pages: 38-48
Document type: conference proceedings
Level: advanced