Extracting highlights from a badminton video combine transfer learning with players' velocity
We present a novel method for extracting highlights from badminton videos. First, we classify the different camera views of a badminton video for video segmentation by building a classification model based on transfer learning, achieving high-precision, real-time segmentation. Second, using the YOLOv3 object-detection model, we locate the players in each video segment and compute their average velocity to extract highlights. Segments with a higher average player velocity reflect the intense scenes of a badminton game, so we treat them as candidate highlights. We extract highlights by ranking badminton video segments by the players' average velocity, which saves users the time of watching the entire video to enjoy its highlights. We evaluate the proposed method indirectly by verifying whether a selected segment contains objective cues such as an excited response from the audience and positive commentary from the narrators.
© Copyright 2020 Computer Animation and Social Agents. Published by Springer. All rights reserved.
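The ranking step described in the abstract can be illustrated with a minimal sketch. It assumes player bounding-box centres per frame have already been obtained from a YOLOv3 detector and grouped per segment; the data layout, function names, and frame rate below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of velocity-based highlight ranking: score each segment by the
# players' average speed and keep the fastest segments as highlights.
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (x, y) centre of a player's bounding box in one frame


def average_velocity(track: List[Point], fps: float = 25.0) -> float:
    """Mean speed (pixels per second) of one player's track within a segment."""
    if len(track) < 2:
        return 0.0
    total = 0.0
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total * fps / (len(track) - 1)


def rank_segments(segments: Dict[str, List[List[Point]]],
                  top_k: int = 3) -> List[Tuple[str, float]]:
    """Rank segments by the players' average velocity; the top-k segments
    are treated as candidate highlights."""
    scores = []
    for name, player_tracks in segments.items():
        if not player_tracks:
            continue
        speed = sum(average_velocity(t) for t in player_tracks) / len(player_tracks)
        scores.append((name, speed))
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]


if __name__ == "__main__":
    # Toy example: two segments, each with two (hypothetical) player tracks.
    demo = {
        "rally_01": [[(10, 10), (40, 50), (90, 80)], [(200, 40), (180, 90), (150, 130)]],
        "rally_02": [[(10, 10), (12, 11), (14, 13)], [(200, 40), (201, 42), (203, 44)]],
    }
    print(rank_segments(demo, top_k=1))  # rally_01 scores higher (faster movement)
```

In this sketch, a segment's score is the mean of its players' average speeds; the paper's actual computation may differ in detail (e.g. in how detections are associated across frames).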
| Subjects: | |
|---|---|
| Notations: | sport games; technical and natural sciences |
| Published in: | Computer Animation and Social Agents |
| Language: | English |
| Published: | Cham: Springer, 2020 |
| Series: | Communications in Computer and Information Science, 1300 |
| Online Access: | https://doi.org/10.1007/978-3-030-63426-1_9 |
| Pages: | 82-91 |
| Document types: | congress proceedings |
| Level: | advanced |