Learning bicycle stunts

We present a general approach for simulating and controlling a human character riding a bicycle. The two main components of our system are offline learning and online simulation. We simulate the bicycle and the rider as an articulated rigid-body system. The rider is controlled by a policy that is optimized through offline learning. We apply policy search to learn optimal policies, which are parameterized with splines or neural networks for different bicycle maneuvers. We use Neuroevolution of Augmenting Topologies (NEAT) to optimize both the parameterization and the parameters of our policies. The learned controllers are robust enough to withstand large perturbations and to allow interactive user control. The rider not only learns to steer and balance in normal riding situations, but also learns to perform a wide variety of stunts, including the wheelie, endo, bunny hop, front-wheel pivot, and back hop.
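The policy-search idea summarized above can be illustrated with a toy sketch. This is not the authors' system: the dynamics model (a simplified inverted pendulum standing in for balance control), the linear policy, and the hill-climbing optimizer are all illustrative assumptions, far simpler than the spline/NEAT policies the paper describes.

```python
import math
import random

def rollout(params, steps=200, dt=0.05):
    """Evaluate a linear balance policy on a toy inverted-pendulum task.

    State is (angle, angular velocity); the action is a linear function of
    the state. Reward accumulates while the pendulum stays near upright.
    """
    angle, vel = 0.1, 0.0  # start slightly off balance
    total = 0.0
    for _ in range(steps):
        action = params[0] * angle + params[1] * vel
        # Crude dynamics: gravity destabilizes the upright pose.
        acc = 9.8 * math.sin(angle) + action
        vel += acc * dt
        angle += vel * dt
        if abs(angle) > math.pi / 2:
            break  # fell over; episode ends early
        total += 1.0 - abs(angle)
    return total

def policy_search(iterations=500, noise=0.5, seed=0):
    """Hill-climbing policy search: perturb parameters, keep improvements."""
    rng = random.Random(seed)
    best = [0.0, 0.0]
    best_reward = rollout(best)
    for _ in range(iterations):
        candidate = [p + rng.gauss(0.0, noise) for p in best]
        reward = rollout(candidate)
        if reward > best_reward:
            best, best_reward = candidate, reward
    return best, best_reward

params, reward = policy_search()
```

The same evaluate-perturb-select loop underlies more sophisticated policy search; NEAT additionally mutates the network topology, not just the parameters, which this sketch does not attempt.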
© 2014 ACM. All rights reserved. Published in ACM Transactions on Graphics (TOG).

Bibliographic Details
Subjects:
Notations: biological and medical sciences; technical and natural sciences
Tagging: Augmented Reality
Published in: ACM Transactions on Graphics (TOG)
Language: English
Published: 2014
Online Access: http://doi.org/10.1145/2601097.2601121
Volume: 33
Issue: 4
Pages: Article 50, pp. 1-12
Document types: article
Level: advanced