A unified deep framework for joint 3D pose estimation and action recognition from a single RGB camera


We present a deep learning-based multitask framework for joint 3D human pose estimation and action recognition from a single RGB camera. The approach proceeds in two stages. In the first, a real-time 2D pose detector determines the precise pixel locations of key body joints. A two-stream deep neural network is then designed and trained to map the detected 2D keypoints to 3D poses. In the second stage, the Efficient Neural Architecture Search (ENAS) algorithm is deployed to find an optimal network architecture, which models the spatio-temporal evolution of the estimated 3D poses via an image-based intermediate representation and performs action recognition. Experiments on the Human3.6M, MSR Action3D, and SBU Kinect Interaction datasets verify the effectiveness of the proposed method on the targeted tasks. Moreover, we show that the method requires a low computational budget for training and inference. In particular, the experimental results show that, using a monocular RGB sensor, we can develop a 3D pose estimation and human action recognition approach that reaches the performance of RGB-depth sensors. This opens up many opportunities for leveraging RGB cameras (which are much cheaper than depth cameras and extensively deployed in private and public places) to build intelligent recognition systems.
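The image-based intermediate representation mentioned above can be illustrated with a minimal sketch: a 3D pose sequence of shape (frames × joints × 3) is normalized per coordinate axis and quantized to 8 bits, so that frames become image rows, joints become columns, and the x/y/z axes map to RGB channels. Note that this is a generic pose-to-image encoding written for illustration; the function name and the exact normalization are assumptions, not necessarily the authors' precise scheme.

```python
import numpy as np

def pose_sequence_to_image(poses):
    """Encode a 3D pose sequence as a pseudo-image (illustrative sketch).

    poses: array of shape (T, J, 3) -- T frames, J joints, (x, y, z) coords.
    Returns a uint8 array of shape (T, J, 3): rows are frames, columns are
    joints, and the three coordinate axes map to the RGB channels.
    """
    poses = np.asarray(poses, dtype=np.float64)
    lo = poses.min(axis=(0, 1), keepdims=True)        # per-axis minimum
    hi = poses.max(axis=(0, 1), keepdims=True)        # per-axis maximum
    norm = (poses - lo) / np.maximum(hi - lo, 1e-8)   # scale each axis to [0, 1]
    return (norm * 255).astype(np.uint8)              # quantize to 8 bits

# Example: 40 frames of a hypothetical 17-joint skeleton.
rng = np.random.default_rng(0)
seq = rng.normal(size=(40, 17, 3))
img = pose_sequence_to_image(seq)
```

The resulting fixed-size image can then be fed to an ordinary 2D convolutional classifier, which is what makes an architecture-search method such as ENAS applicable to the recognition stage.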
© Copyright 2020 Sensors. All rights reserved.

Bibliographic details
Subject headings:
Notations: Science and Technology
Tags: pose, pattern recognition, camera, neural networks
Published in: Sensors
Language: English
Published: 2020
Online access: https://doi.org/10.3390/s20071825
Volume: 20
Issue: 7
Pages: 1825
Document type: Article
Level: high