Evolutionary transfer learning for complex multi-agent reinforcement learning systems

Multi-agent systems (MAS) are computerized systems composed of multiple interacting, autonomous agents that solve problems within a common environment of interest. Because the behavioral strategies of agents in conventional MAS are usually defined manually in advance, the development of intelligent agents capable of adapting to dynamic environments has attracted increasing attention in recent decades. Reinforcement learning (RL) has accordingly been introduced into MAS as the paradigm by which individual agents learn through trial-and-error interactions with a dynamic environment. Owing to its generality and ease of use, research on RL is expanding rapidly, and a wide variety of approaches has been proposed to exploit its benefits and applicability. More recently, transfer learning (TL) has been proposed as a machine learning paradigm that leverages valuable knowledge from related, well-studied problem domains to enhance problem-solving in the target domains of interest. TL has been used successfully to enhance RL tasks, and many TL methodologies, such as instance and feature transfer, have been explored; TL has also recently been considered for enhancing multi-agent RL methods.

In the context of computational intelligence, the science of memetics, and memetic computation in particular, has become an increasingly active topic of research. Much of the existing work in memetic computation extends classical evolutionary algorithms, in which a meme is treated as an individual learning procedure or local search operator within a population-based search algorithm. Recent research shows, however, that memetic computation can become more meme-centric, with memes emerging as units of domain information or knowledge building blocks useful for problem-solving. The intrinsic parallelism of natural evolution in such meme-centric computing may derive its strength from the simultaneous exploration of differing regions of a common problem domain and from exploitative social interactions among the multiple agent learners.

This dissertation therefore presents a new study of a meme-centric evolutionary knowledge transfer paradigm for problem-solving in multi-agent reinforcement learning systems. Specifically, the thesis proposes an evolutionary transfer learning framework (eTL) comprising a series of meme-inspired evolutionary knowledge representation and transfer mechanisms; the framework enhances agents' learning capabilities while addressing the limitations (e.g., blind reliance) of existing knowledge transfer frameworks. Subsequently, a novel transfer learning framework with predictive capabilities (eTL-P) is proposed to cope with the challenges that arise in complex multi-agent systems where agents have differing or even competing objectives. eTL-P endows agents with the ability to interact with competitive opponents, model those opponents, and predict their behaviors accordingly. Further, to reduce the complexity of the opponent candidate models, a Top-K model selection method is proposed for selecting a smaller yet highly representative candidate model set from the entire model space. Finally, a summary of future work on the evolutionary knowledge transfer paradigm is presented.
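
To make the abstract's central idea more concrete, the sketch below illustrates one simple way that "memes" could be treated as transferable units of knowledge between reinforcement learning agents: each agent learns a tabular Q-function, and weaker agents periodically blend in the Q-values of the best-performing peer. This is only a hypothetical illustration of the general mechanism, not the eTL framework described in the thesis; the QAgent class, the transfer_meme blending rule, and the total-reward fitness measure are assumptions introduced for the example.

# Hypothetical sketch: treating a learned Q-table as a transferable "meme"
# between agents. Illustration of the general idea only, not the eTL framework.
import random
from collections import defaultdict

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)      # the agent's learned knowledge ("meme")
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.total_reward = 0.0          # crude fitness measure for imitation

    def act(self, state):
        # epsilon-greedy action selection
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # standard one-step Q-learning update
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
        self.total_reward += reward

def transfer_meme(source, target, blend=0.5):
    # Imitation-style knowledge transfer: blend the source agent's Q-values
    # into the target agent's table rather than relying on them blindly.
    for key, value in source.q.items():
        target.q[key] = (1.0 - blend) * target.q[key] + blend * value

# Usage: after some environment interaction (loop omitted), let weaker agents
# imitate the best-performing peer.
agents = [QAgent(actions=[0, 1, 2, 3]) for _ in range(4)]
best = max(agents, key=lambda ag: ag.total_reward)
for ag in agents:
    if ag is not best:
        transfer_meme(best, ag)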

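The Top-K model selection mentioned for eTL-P is likewise described only at a high level in the abstract. The sketch below shows one plausible reading of the idea: score every candidate opponent model against the opponent's observed behavior and keep only the K best-scoring models. The average log-likelihood scoring rule, the model interface, and the toy rock-paper-scissors usage are assumptions for illustration and are not taken from the thesis.

# Hypothetical sketch of Top-K opponent-model selection: keep the K candidate
# models that best explain the opponent's observed actions. The scoring rule
# (average log-likelihood) is an assumption, not the thesis's criterion.
import math
import heapq

def log_likelihood(model, history):
    # `model(state)` is assumed to return a dict of action probabilities;
    # `history` is a list of (state, action) pairs observed from the opponent.
    eps = 1e-12
    total = sum(math.log(model(state).get(action, 0.0) + eps)
                for state, action in history)
    return total / max(len(history), 1)

def top_k_models(candidate_models, history, k):
    # Return the k candidate models that score highest on the observed history.
    return heapq.nlargest(k, candidate_models,
                          key=lambda m: log_likelihood(m, history))

# Usage with two toy models of a rock-paper-scissors opponent:
always_rock = lambda state: {"rock": 1.0}
uniform = lambda state: {"rock": 1 / 3, "paper": 1 / 3, "scissors": 1 / 3}
observed = [("s0", "rock"), ("s0", "rock"), ("s0", "paper")]
selected = top_k_models([always_rock, uniform], observed, k=1)   # -> [uniform]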

Bibliographic Details
Main Author: Hou, Yaqing
Other Authors: Ong Yew Soon
Affiliations: Interdisciplinary Graduate School (IGS); Multi-plAtform Game Innovation Centre, Game Lab
Format: Thesis
Degree: Doctor of Philosophy (IGS)
Language: English
Published: 2017
Institution: Nanyang Technological University
Subjects: DRNTU::Engineering::Computer science and engineering::Computer applications
Online Access: http://hdl.handle.net/10356/72997
DOI: 10.32657/10356/72997
Citation: Hou, Y. (2017). Evolutionary transfer learning for complex multi-agent reinforcement learning systems. Doctoral thesis, Nanyang Technological University, Singapore.
Physical Description: 120 p. (application/pdf)
Date Deposited: 2017-12-18
Record ID: ntu-10356/72997