End-to-end differentiable physics for learning and control

© 2018 Curran Associates Inc. All rights reserved. We present a differentiable physics engine that can be integrated as a module in deep neural networks for end-to-end learning. As a result, structured physics knowledge can be embedded into larger systems, allowing them, for example, to match observations by performing precise simulations, while achieving high sample efficiency. Specifically, in this paper we demonstrate how to perform backpropagation analytically through a physical simulator defined via a linear complementarity problem. Unlike traditional finite difference methods, such gradients can be computed analytically, which allows for greater flexibility of the engine. Through experiments in diverse domains, we highlight the system's ability to learn physical parameters from data, efficiently match and simulate observed visual behavior, and readily enable control via gradient-based planning methods. Code for the engine and experiments is included with the paper.
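
A note on the method named in the abstract, given here as a minimal sketch in generic notation (the symbols below are illustrative and are not taken from the paper or its released code). A linear complementarity problem (LCP) in an unknown vector z with data (M, q) asks for

    z \ge 0, \qquad Mz + q \ge 0, \qquad z^\top (Mz + q) = 0;

LCPs of this form commonly arise from contact and friction constraints in rigid-body simulation. Writing the conditions that hold at a solution z* as a residual G(z*, \theta) = 0, where \theta collects the physical parameters (for example masses, friction coefficients, or applied forces) entering M and q, the implicit function theorem gives

    \frac{\partial z^*}{\partial \theta} = -\left(\frac{\partial G}{\partial z}\right)^{-1} \frac{\partial G}{\partial \theta},

so the gradient of a downstream loss with respect to \theta can be obtained analytically with a single linear solve per simulation step, rather than by perturbing each parameter and re-simulating as a finite-difference scheme would.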

Bibliographic Details
Main Authors: Smith, Kevin A; Allen, Kelsey Rebecca; Tenenbaum, Joshua B
Other Authors: Massachusetts Institute of Technology. Department of Brain and Cognitive Sciences
Format: Article
Language: English
Published: Curran Associates Inc 2020
Online Access: https://hdl.handle.net/1721.1/126615
Collection: MIT
Record ID: mit-1721.1/126615
Institution: Massachusetts Institute of Technology
Citation: Belbute-Peres, Filipe de A. et al. “End-to-end differentiable physics for learning and control.” Paper presented at the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Dec 3-8 2018, Curran Associates Inc. © 2018 The Author(s).
Conference: 32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
Series: Neural Information Processing Systems (NIPS)
Date issued: 2018-12
Type: Article (http://purl.org/eprint/type/ConferencePaper)
Conference paper URL: https://papers.nips.cc/paper/7948-end-to-end-differentiable-physics-for-learning-and-control
File format: application/pdf
Rights: Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use.