Vision-based Proprioceptive and Force Sensing for Soft Robotic Actuator

Developing reliable control strategies for soft robots requires advances in soft robot perception. Because of their near-infinite degrees of freedom, obtaining useful sensory feedback from soft robots remains a long-standing challenge. Moreover, sensorization methods must be co-developed with more robust approaches to soft robotic actuation. However, current soft robotic sensors suffer from significant performance limitations, and available materials and manufacturing techniques complicate the design of sensorized soft robots. To address these needs, we introduce a vision-based method for sensorizing robust, electrically driven soft robotic actuators constructed from a new class of architected materials. Specifically, we position cameras within the hollow interiors of actuators based on handed shearing auxetics (HSA) to record their deformation. Using external motion capture data as ground truth, we train a convolutional neural network (CNN) that maps the visual feedback to the pose of the actuator’s tip. Our model predicts tip pose with sub-millimeter accuracy from only six minutes of training data, while remaining lightweight, with 300,000 parameters and an inference time of 18 milliseconds per frame on a single-board computer. We also develop a model that additionally predicts the horizontal tip force acting on the actuator, and we demonstrate its ability to generalize to previously unseen forces. Overall, our methods present a reliable vision-based approach to designing sensorized soft robots built from electrically actuated, architected materials.

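The record contains no code and does not specify the network architecture, so the sketch below is only an illustration of the kind of lightweight CNN the abstract describes, not the thesis's actual implementation. Everything not stated in the abstract is an assumption: the 64x64 RGB input frame from the internal camera, the 7-D pose output (3-D tip position plus a unit quaternion), the scalar horizontal-force output, and all layer sizes and names (e.g. TipPoseForceNet). The layer dimensions are chosen only so the parameter count lands near the 300,000 quoted above.

import torch
import torch.nn as nn

class TipPoseForceNet(nn.Module):
    """Hypothetical lightweight CNN: internal-camera frame -> tip pose + force."""

    def __init__(self):
        super().__init__()
        # Four stride-2 stages shrink a 64x64 frame to 4x4 feature maps.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                      # 64 channels * 4 * 4 = 1024 features
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
        )
        self.pose_head = nn.Linear(256, 7)     # tip position (x, y, z) + quaternion
        self.force_head = nn.Linear(256, 1)    # horizontal tip force (assumed scalar)

    def forward(self, frame):
        features = self.encoder(frame)
        return self.pose_head(features), self.force_head(features)

if __name__ == "__main__":
    net = TipPoseForceNet()
    print(sum(p.numel() for p in net.parameters()))  # ~325,000, near the quoted 300,000
    pose, force = net(torch.randn(1, 3, 64, 64))     # batch of one camera frame
    print(pose.shape, force.shape)                   # (1, 7) and (1, 1)

In the setup the abstract describes, such a network would be trained by supervised regression against externally captured motion-capture poses; the loss function and training details are not given in the record.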

Bibliographic Details
Main Author: Zhang, Annan
Other Authors: Rus, Daniela; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Thesis (S.M.)
Published: Massachusetts Institute of Technology, 2022
Rights: In Copyright - Educational Use Permitted; Copyright MIT (http://rightsstatements.org/page/InC-EDU/1.0/)
Online Access: https://hdl.handle.net/1721.1/144766
https://orcid.org/0000-0001-6664-9417