Intrinsically Motivated Exploration of Learned Goal Spaces
Finding algorithms that allow agents to discover a wide variety of skills efficiently and autonomously remains a challenge of Artificial Intelligence. Intrinsically Motivated Goal Exploration Processes (IMGEPs) have been shown to enable real-world robots to learn repertoires of policies producing a...
Main Authors: | Adrien Laversanne-Finot, Alexandre Péré, Pierre-Yves Oudeyer
---|---|
Format: | Article
Language: | English
Published: | Frontiers Media S.A., 2021-01-01
Series: | Frontiers in Neurorobotics
Subjects: | sensorimotor development; unsupervised learning; representation learning; goal space learning; intrinsic motivation; goal exploration
Online Access: | https://www.frontiersin.org/articles/10.3389/fnbot.2020.555271/full
_version_ | 1818573498361053184 |
---|---|
author | Adrien Laversanne-Finot; Alexandre Péré; Pierre-Yves Oudeyer
author_facet | Adrien Laversanne-Finot; Alexandre Péré; Pierre-Yves Oudeyer
author_sort | Adrien Laversanne-Finot |
collection | DOAJ |
description | Finding algorithms that allow agents to discover a wide variety of skills efficiently and autonomously remains a challenge of Artificial Intelligence. Intrinsically Motivated Goal Exploration Processes (IMGEPs) have been shown to enable real-world robots to learn repertoires of policies producing a wide range of diverse effects. They work by enabling agents to autonomously sample goals that they then try to achieve. In practice, this strategy leads to an efficient exploration of complex environments with high-dimensional continuous actions. Until recently, it was necessary to provide the agents with an engineered goal space containing relevant features of the environment. In this article, we show that the goal space can be learned using deep representation learning algorithms, effectively reducing the burden of designing goal spaces. Our results pave the way to autonomous learning agents that build a representation of the world on their own and use it to explore efficiently. We present experiments in two environments using population-based IMGEPs. The first experiments are performed in a simple yet challenging simulated environment. A second set of experiments then tests the applicability of these principles on a real-world robotic setup, where a 6-joint robotic arm learns to manipulate a ball inside an arena by choosing goals in a space learned from its past experience. |
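The description above gives the gist of the population-based IMGEP loop studied in the article: sample a goal in a learned latent space, reuse the stored policy whose past outcome was closest to that goal, perturb it, and record the new outcome. The sketch below is a minimal illustration of that loop, not the authors' implementation; the `env` and `encoder` interfaces (`sample_random_policy`, `rollout`, `latent_dim`), the uniform goal range, and the Gaussian perturbation scale are all hypothetical placeholders, and the encoder is assumed to have been trained beforehand (e.g., with a VAE-style representation learner, one of the deep representation learning options the abstract alludes to).

```python
import numpy as np

def imgep_explore(env, encoder, n_bootstrap=100, n_iterations=1000, noise=0.05):
    """Population-based IMGEP sketch: sample goals in a learned latent space,
    reuse the policy whose past outcome was closest, and perturb it."""
    archive = []  # repertoire of (policy parameters, latent outcome) pairs

    # Bootstrap: random policies provide initial outcomes to embed.
    for _ in range(n_bootstrap):
        theta = env.sample_random_policy()
        z = encoder(env.rollout(theta))  # e.g., encode the final camera image
        archive.append((theta, z))

    for _ in range(n_iterations):
        # 1. Sample a goal uniformly in the learned goal space.
        goal = np.random.uniform(-1.0, 1.0, size=encoder.latent_dim)
        # 2. Retrieve the stored policy whose outcome is nearest to the goal.
        theta_near, _ = min(archive, key=lambda pair: np.linalg.norm(pair[1] - goal))
        # 3. Perturb its parameters and roll out the new policy.
        theta_new = theta_near + np.random.normal(0.0, noise, size=theta_near.shape)
        z_new = encoder(env.rollout(theta_new))
        # 4. Grow the repertoire with the newly discovered (policy, outcome) pair.
        archive.append((theta_new, z_new))

    return archive
```

The design point the sketch preserves is that goals are drawn from the learned latent space rather than from an engineered feature space, so the quality of exploration hinges on the quality of the learned representation.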
first_indexed | 2024-12-15T00:12:08Z |
format | Article |
id | doaj.art-cb16cf5fd3a149c681e0e603608dfd6c |
institution | Directory Open Access Journal |
issn | 1662-5218 |
language | English |
last_indexed | 2024-12-15T00:12:08Z |
publishDate | 2021-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Neurorobotics |
spelling | doaj.art-cb16cf5fd3a149c681e0e603608dfd6c; 2022-12-21T22:42:33Z; eng; Frontiers Media S.A.; Frontiers in Neurorobotics; ISSN 1662-5218; 2021-01-01; vol. 14; DOI 10.3389/fnbot.2020.555271; article 555271; Intrinsically Motivated Exploration of Learned Goal Spaces; Adrien Laversanne-Finot; Alexandre Péré; Pierre-Yves Oudeyer; https://www.frontiersin.org/articles/10.3389/fnbot.2020.555271/full; sensorimotor development; unsupervised learning; representation learning; goal space learning; intrinsic motivation; goal exploration |
title | Intrinsically Motivated Exploration of Learned Goal Spaces |
title_sort | intrinsically motivated exploration of learned goal spaces |
topic | sensorimotor development; unsupervised learning; representation learning; goal space learning; intrinsic motivation; goal exploration |
url | https://www.frontiersin.org/articles/10.3389/fnbot.2020.555271/full |