K-Multiscope: combining multiple Kinect sensors into a common 3D coordinate system

We present a method for combining data from multiple Kinect motion-capture sensors into a common coordinate system. Kinect sensors offer a cheaper, though potentially less accurate, alternative for full-body motion tracking. By incorporating multiple sensors into a multiscopic system, we address accuracy and recognition flaws caused by individual sensor conditions, such as occlusions and space limitations, and increase the overall accuracy of skeletal data tracking. We merge data from multiple Kinects using a custom calibration algorithm called K-Multiscope. K-Multiscope generates an affine transform for each available sensor and thereby combines their data into a common 3D coordinate system. We have incorporated this algorithm into Kuatro, a skeletal data pipeline designed previously to simplify live motion capture for use in music interaction experiences and installations. In closing, we present Liminal Space, a live duet performance for cello and dance that uses the Kuatro system to transform dance movements into music.
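The abstract describes estimating one affine transform per sensor so that all skeletal data land in a shared 3D frame. The paper's own calibration procedure is not reproduced here, but the core step can be sketched as a least-squares fit of a 3×4 affine matrix from corresponding 3D points (e.g., the same tracked joint seen by a reference sensor and by the sensor being calibrated). The function names and the NumPy-based approach below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding 3D points, N >= 4 and not
    coplanar. Returns a (3, 4) matrix A such that dst ~= [src | 1] @ A.T.
    (Illustrative sketch; not the calibration routine from the paper.)
    """
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords, (N, 4)
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solves src_h @ X ~= dst
    return X.T                                        # (3, 4) affine matrix

def apply_affine(A, pts):
    """Apply a (3, 4) affine matrix to (N, 3) points."""
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return pts_h @ A.T
```

In a multi-sensor setup, one sensor would serve as the reference frame; each other sensor gets its own fitted matrix, and its joint positions are passed through `apply_affine` before the streams are merged.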
© Copyright 2019 Proceedings of the 6th International Conference on Movement and Computing. Published by ACM. All rights reserved.

Bibliographic Details
Subjects:
Notations: technical and natural sciences
Tagging: Kinect sensor network
Published in: Proceedings of the 6th International Conference on Movement and Computing
Language: English
Published: New York: ACM, 2019
Series: MOCO '19
Online Access: https://doi.org/10.1145/3347122.3347124
Pages: Article 2
Document type: article
Level: advanced