LEADER |
01641 am a22001813u 4500 |
001 |
126615 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Smith, Kevin A
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
|e contributor
|
700 |
1 |
0 |
|a Allen, Kelsey Rebecca
|e author
|
700 |
1 |
0 |
|a Tenenbaum, Joshua B
|e author
|
245 |
0 |
0 |
|a End-to-end differentiable physics for learning and control
|
260 |
|
|
|b Curran Associates Inc,
|c 2020-08-17T15:06:34Z.
|
856 |
|
|
|z Get fulltext
|u https://hdl.handle.net/1721.1/126615
|
520 |
|
|
|a © 2018 Curran Associates Inc. All rights reserved. We present a differentiable physics engine that can be integrated as a module in deep neural networks for end-to-end learning. As a result, structured physics knowledge can be embedded into larger systems, allowing them, for example, to match observations by performing precise simulations, while achieving high sample efficiency. Specifically, in this paper we demonstrate how to perform backpropagation analytically through a physical simulator defined via a linear complementarity problem. Unlike traditional finite difference methods, such gradients can be computed analytically, which allows for greater flexibility of the engine. Through experiments in diverse domains, we highlight the system's ability to learn physical parameters from data, efficiently match and simulate observed visual behavior, and readily enable control via gradient-based planning methods. Code for the engine and experiments is included with the paper.
|
546 |
|
|
|a en
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t 32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
|