Real-time planning and control for simulated bipedal locomotion

Bibliographic Details
Main Author: Coros, Stelian
Language: English
Published: University of British Columbia, 2010
Online Access: http://hdl.handle.net/2429/30307
Description
Summary: Understanding and reproducing the processes that give rise to purposeful human and animal motions has long been of interest in the fields of character animation, robotics and biomechanics. However, despite the grace and agility with which many living creatures effortlessly perform skilled motions, modeling motor control has proven to be a difficult problem. Building on recent advances, this thesis presents several approaches to creating control policies that allow physically simulated characters to demonstrate skill and purpose as they interact with their virtual environments.

We begin by introducing a synthesis-analysis-synthesis framework that enables physically simulated characters to navigate environments with significant stepping constraints. First, an offline optimization method is used to compute control solutions for randomly generated example problems. Second, the example motions and their underlying control patterns are analyzed to build a low-dimensional step-to-step model of the dynamics. Third, the dynamics model is exploited by a planner to solve new instances of the task in real time.

We then present a method for precomputing robust task-based control policies for physically simulated characters. This allows our characters to complete higher-level locomotion tasks, such as walking in a user-specified direction, while interacting with the environment in significant ways. As input, the method assumes an abstract action vocabulary consisting of balance-aware locomotion controllers. A constrained state-exploration phase is first used to define a dynamics model as well as a finite volume of character states over which the control policy will be defined. An optimized control policy is then computed using reinforcement learning.

Lastly, we describe a control strategy for walking that generalizes well across gait parameters, motion styles, character proportions, and a variety of skills. The controller requires no character-specific or motion-specific tuning, is robust to disturbances, and is simple to compute. The method integrates tracking using proportional-derivative control, foot-placement adjustments using an inverted pendulum model, and Jacobian transpose control for gravity compensation and fine-level velocity tuning. We demonstrate a variety of walking-related skills, such as picking up objects placed at any height, lifting, pulling, pushing and walking with heavy crates, stepping over and ducking under obstacles, and climbing stairs.
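
For illustration, the following is a minimal Python sketch of how the three ingredients of the final walking strategy could be combined at one control step: proportional-derivative tracking of target joint angles, an inverted-pendulum rule for foot placement, and Jacobian-transpose mapping of a virtual force into joint torques. The function names, gain values, and the specific capture-point formula used here are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    def pd_tracking_torques(q, qdot, q_target, kp, kd):
        # Proportional-derivative tracking of target joint angles.
        return kp * (q_target - q) - kd * qdot

    def ip_foot_placement(com_pos, com_vel, ground_height, g=9.81):
        # Inverted-pendulum foot placement: step toward the point where the
        # pendulum would come to rest (assumed capture-point rule d = v*sqrt(h/g)).
        h = max(com_pos[2] - ground_height, 1e-3)   # pendulum (COM) height
        return com_pos[:2] + com_vel[:2] * np.sqrt(h / g)

    def jacobian_transpose_torques(J, virtual_force):
        # Map a virtual force applied at the COM (gravity compensation or a
        # velocity-tuning force) into joint torques via tau = J^T f.
        return J.T @ virtual_force

    # Illustrative use with placeholder quantities (12 actuated joints assumed):
    q = np.zeros(12); qdot = np.zeros(12); q_target = np.zeros(12)
    tau = pd_tracking_torques(q, qdot, q_target, kp=300.0, kd=30.0)

    J_com = np.zeros((3, 12))                      # COM Jacobian of the stance chain
    f_virtual = np.array([0.0, 0.0, 70.0 * 9.81])  # e.g. gravity compensation for a 70 kg character
    tau += jacobian_transpose_torques(J_com, f_virtual)

    com_pos = np.array([0.0, 0.0, 0.9]); com_vel = np.array([0.5, 0.0, 0.0])
    target_step = ip_foot_placement(com_pos, com_vel, ground_height=0.0)

In this sketch the PD and Jacobian-transpose terms are simply summed into a joint-torque vector, while the foot-placement rule would feed the swing-leg target; how these pieces are scheduled over the gait cycle is not shown.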