K-Multiscope: combining multiple Kinect sensors into a common 3D coordinate system
We present a method for combining data from multiple Kinect motion-capture sensors into a common coordinate system. Kinect sensors offer a cheap, though potentially less accurate, alternative for full-body motion tracking. By incorporating multiple sensors into a multiscopic system, we address accuracy and recognition flaws caused by individual sensor conditions, such as occlusions and space limitations, and increase the overall accuracy of skeletal data tracking. We merge data from multiple Kinects using a custom calibration algorithm called K-Multiscope, which generates an affine transform for each available sensor and thus combines their data into a common 3D coordinate system. We have incorporated this algorithm into Kuatro, a skeletal data pipeline designed earlier to simplify live motion capture for use in music interaction experiences and installations. In closing, we present Liminal Space, a live duet performance for cello and dance that uses the Kuatro system to transform dance movements into music.
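The abstract does not reproduce the calibration procedure itself. Purely as an illustration of the general idea, a per-sensor affine transform mapping one sensor's joint positions into a reference coordinate frame can be estimated from corresponding 3D points by linear least squares; the function names below are hypothetical and this sketch is not the published K-Multiscope algorithm.

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 3D affine transform mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding joint positions observed by
    two sensors (N >= 4, points not coplanar). Returns a 4x4 homogeneous
    matrix T such that dst ~ (T @ [src; 1])[:3]. Illustrative sketch only.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = src.shape[0]
    # Homogeneous source coordinates: (N, 4).
    src_h = np.hstack([src, np.ones((n, 1))])
    # Solve src_h @ A = dst for A (4x3) in the least-squares sense.
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    T = np.eye(4)
    T[:3, :] = A.T  # top 3x4 block holds rotation/scale/shear + translation
    return T

def apply_affine_3d(T, pts):
    """Apply a 4x4 homogeneous affine transform to (N, 3) points."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return (pts_h @ T.T)[:, :3]
```

In a multi-sensor setup, fitting one such transform per sensor against a shared reference (e.g. joints seen simultaneously by all sensors) would bring every skeleton stream into the same coordinate system.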
© Copyright 2019 Proceedings of the 6th International Conference on Movement and Computing. Published by ACM. All rights reserved.
| Keywords: | |
|---|---|
| Notations: | Natural sciences and technology |
| Tags: | Kinect sensor network |
| Published in: | Proceedings of the 6th International Conference on Movement and Computing |
| Language: | English |
| Published: | New York: ACM, 2019 |
| Series: | MOCO '19 |
| Online access: | https://doi.org/10.1145/3347122.3347124 |
| Pages: | Article 2 |
| Document type: | Article |
| Level: | high |