Learning camera control in dynamic scenes from limited demonstrations

In this work, we present a strategy for camera control in dynamic scenes with multiple people (sports teams). We learn a generic model of the player dynamics offline in simulation; because this dynamics model is agnostic to the environment's actions and rewards, the same model can be reused for different camera control policies. We cast the user-specific control objective as an inverse reinforcement learning problem: from only a few sparse demonstrations of a user's camera control policy, we learn a reward function that captures the expert's intention and drives camera motion in an ongoing dynamic scene. Key to our approach is a low-dimensional representation of the scene dynamics, which makes it possible to learn the reward function from a small number of examples. The learned reward function is used in combination with a visual model predictive controller (MPC). We show the effectiveness of our method on simulated and real soccer matches.
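The pipeline thus decouples a reusable learned dynamics model from a user-specific reward learned by inverse reinforcement learning. The sketch below illustrates how such a pair might be combined in a sampling-based MPC loop; it is a minimal illustration under stated assumptions, not the paper's implementation. The `dynamics` and `reward` functions are hypothetical placeholders standing in for the learned models, and the cross-entropy-method planner is one common choice for this style of controller, assumed here for concreteness.

```python
import numpy as np

# Hypothetical stand-ins for the paper's learned components:
#   dynamics(state, action) -> next_state : generic scene/camera dynamics,
#       trained offline in simulation and reused across users.
#   reward(state)           -> float      : user-specific reward, learned via
#       IRL from a few camera-control demonstrations.
def dynamics(state, action):
    return state + 0.1 * action          # placeholder linear dynamics

def reward(state):
    return -np.sum(state ** 2)           # placeholder: prefer states near origin

def mpc_plan(state, horizon=10, n_samples=256, n_iters=5, action_dim=3):
    """Sampling-based MPC (cross-entropy method): find the action sequence
    that maximizes the learned reward under the learned dynamics model."""
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(n_iters):
        # Sample candidate action sequences around the current plan.
        seqs = mean + std * np.random.randn(n_samples, horizon, action_dim)
        returns = np.empty(n_samples)
        for i, seq in enumerate(seqs):
            s, total = state.copy(), 0.0
            for a in seq:                 # roll out with the dynamics model
                s = dynamics(s, a)
                total += reward(s)
            returns[i] = total
        # Refit the sampling distribution to the top-scoring (elite) sequences.
        elites = seqs[np.argsort(returns)[-n_samples // 10:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean[0]                        # execute only the first action

# Receding-horizon control of the camera state.
state = np.random.randn(3)
for t in range(5):
    action = mpc_plan(state)
    state = dynamics(state, action)
```

In such a receding-horizon scheme, only the first action of the optimized sequence is executed before replanning, which keeps the controller responsive to the ongoing dynamic scene.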
© 2022 Computer Graphics Forum, Wiley. All rights reserved.

Bibliographic Details
Subjects: technical and natural sciences; sport; games
Published in: Computer Graphics Forum
Language: English
Published: 2022
Online Access: https://doi.org/10.1111/cgf.14444
Volume: 41
Issue: 1
Pages: 427-437
Document type: article
Level: intermediate