Efficient Model Learning from Joint-Action Demonstrations for Human-Robot Collaborative Tasks
We present a framework for automatically learning human user models from joint-action demonstrations that enables a robot to compute a robust policy for a collaborative task with a human. First, the demonstrated action sequences are clustered into different human types using an unsupervised learning...
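The abstract's first step, grouping demonstrated joint-action sequences into distinct human types, can be illustrated with a minimal sketch. The encoding and algorithm below are assumptions for illustration only: action sequences are summarized as bag-of-actions feature vectors and clustered with scikit-learn's KMeans, which may differ from the unsupervised learning method the paper actually uses.

```python
# Minimal sketch (illustrative assumptions, not the paper's algorithm):
# cluster demonstrated action sequences into "human types" with k-means
# over normalized bag-of-actions feature vectors.
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical demonstration data: each demonstration is a sequence of
# discrete human actions observed during the collaborative task.
ACTIONS = ["reach_left", "reach_right", "wait", "handover"]
demonstrations = [
    ["reach_left", "wait", "handover", "reach_left"],
    ["reach_left", "reach_left", "handover", "wait"],
    ["reach_right", "handover", "reach_right", "wait"],
    ["reach_right", "reach_right", "wait", "handover"],
]

def encode(sequence):
    """Encode an action sequence as a normalized bag-of-actions vector."""
    counts = Counter(sequence)
    vec = np.array([counts[a] for a in ACTIONS], dtype=float)
    return vec / vec.sum()

X = np.stack([encode(seq) for seq in demonstrations])

# Cluster the demonstrations; each cluster label stands in for one human type.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for seq, label in zip(demonstrations, kmeans.labels_):
    print(f"type {label}: {seq}")
```

In the paper's framework the inferred type would then condition the robot's planning model; here the labels merely partition the demonstrations.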
Main Authors: Shah, Julie A.; Nikolaidis, Stefanos; Ramakrishnan, Ramya; Gu, Keren
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English (en_US)
Published: Institute of Electrical and Electronics Engineers (IEEE), 2017
Online Access: http://hdl.handle.net/1721.1/107887 https://orcid.org/0000-0003-1338-8107 https://orcid.org/0000-0001-8239-5963
Similar Items
- Human-robot collaboration in manufacturing: Quantitative evaluation of predictable, convergent joint action
  by: Nikolaidis, Stefanos, et al.
  Published: (2015)
- Developing an Adaptive Robotic Assistant for Close Proximity Human-Robot Collaboration in Space
  by: Lasota, Przemyslaw Andrzej, et al.
  Published: (2018)
- Human-robot cross-training: Computational formulation, modeling and evaluation of a human team training strategy
  by: Nikolaidis, Stefanos, et al.
  Published: (2013)
- Human-Robot Interactive Planning using Cross-Training: A Human Team Training Approach
  by: Nikolaidis, Stefanos, et al.
  Published: (2018)
- Towards Interpretable Explanations for Transfer Learning in Sequential Tasks
  by: Ramakrishnan, Ramya, et al.
  Published: (2017)