En route to automated extraction and transfer of knowledge in multitask optimization : an evolutionary perspective

It is conventional wisdom that real-world problems seldom occur in isolation. The motivation for this work, inspired by the observation that humans rarely tackle every problem from scratch, is to improve optimization performance through adaptive knowledge transfer across related problems. The...


Bibliographic Details
Main Author: Bali, Kavitesh Kumar
Other Authors: Ong Yew Soon
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University 2021
Subjects: Engineering::Computer science and engineering
Online Access:https://hdl.handle.net/10356/152658
author Bali, Kavitesh Kumar
author2 Ong Yew Soon
collection NTU
description It is conventional wisdom that real-world problems seldom occur in isolation. The motivation for this work, inspired by the observation that humans rarely tackle every problem from scratch, is to improve optimization performance through adaptive knowledge transfer across related problems. The scope for spontaneous transfers under the simultaneous occurrence of multiple problems unveils the benefits of multitasking. Multitask optimization has recently demonstrated competence in solving multiple (related) optimization tasks concurrently. Notably, in the presence of underlying relationships between problems, the transfer of high-quality solutions across them has been shown to yield superior performance, as the cost of re-exploring overlapping regions of the search space is reduced. However, in the absence of any prior knowledge about the inter-task synergies (as is often the case in general black-box optimization), the threat of predominantly negative transfer prevails. Susceptibility to negative inter-task interactions can in fact be detrimental, often impeding overall convergence behavior. To allay such fears, this thesis presents viable solutions for the automated extraction and transfer of (fruitful) knowledge such that the deleterious effects of otherwise negative inter-task exchanges are suppressed. To this end, an in-depth theoretical analysis is first conducted to unveil the primary caveats concerning the global convergence characteristics of the present-day multitasking evolutionary optimization framework. Next, a novel evolutionary computation framework is proposed that enables online learning and exploitation of the similarities (and discrepancies) between distinct tasks in multitask settings via probabilistic mixture models. The proposed method rests on principled theoretical arguments that seek to minimize the tendency of harmful interactions between tasks through a purely data-driven learning of the relationships among them. As a proof of concept, the method is first validated experimentally on a wide range of synthetic discrete and continuous single-objective benchmarks. Thereafter, the concepts underlying the proposed method are extended to the domain of multi-objective optimization (an omnipresent scenario in our daily lives). Notably, this work is among the first to utilize probabilistic modeling to capture inter-task relationships between multi-objective optimization tasks in the context of evolutionary multitasking. Empirical studies on a series of benchmark test functions show that the method is able to decipher, and adapt to, the degree of similarity between distinct multi-objective optimization tasks on the fly. Finally, the practicality of the proposed methods is substantiated on various real-world case studies, including reinforcement learning, multi-fidelity optimization, and evolutionary deep learning. These practical studies not only provide insights into the behavior of the methods when several (many) complex tasks occur at once, but also underscore the benefits of omnidirectional knowledge exchange, intentional as well as unintentional problem-solving capabilities, and knowledge transfer from low-fidelity optimization tasks that substantially reduces the cost of (otherwise expensive) high-fidelity optimization.
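To illustrate the data-driven learning of inter-task relationships described above, the following is a minimal Python sketch, not the thesis's actual algorithm: it assumes each task's parent population can be summarized by an independent Gaussian model over a common unified search space, and it fits a single scalar mixture weight on a candidate source task's model by maximizing the likelihood of the target task's offspring. All names (fit_transfer_weight, gaussian_log_density) and the SciPy-based likelihood maximization are illustrative assumptions; a learned weight near zero suggests transfer from that source should be suppressed, while a larger weight suggests exploitable similarity.

import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def gaussian_log_density(samples, mean, std):
    # Log-density of each sample under an independent (diagonal) Gaussian model.
    return norm.logpdf(samples, loc=mean, scale=std).sum(axis=1)

def fit_transfer_weight(offspring_t, pop_t, pop_s, eps=1e-12):
    # Fit the mixture weight w in [0, 1] placed on the source task's model so that
    # (1 - w) * p_target + w * p_source best explains the target task's offspring.
    mu_t, sd_t = pop_t.mean(axis=0), pop_t.std(axis=0) + 1e-6
    mu_s, sd_s = pop_s.mean(axis=0), pop_s.std(axis=0) + 1e-6
    p_t = np.exp(gaussian_log_density(offspring_t, mu_t, sd_t))
    p_s = np.exp(gaussian_log_density(offspring_t, mu_s, sd_s))

    def neg_log_likelihood(w):
        return -np.log((1.0 - w) * p_t + w * p_s + eps).sum()

    result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0), method="bounded")
    return result.x

# Synthetic demonstration (hypothetical data): offspring for the target task are a
# 50/50 blend of two regions of a 10-dimensional unified search space.
rng = np.random.default_rng(seed=0)
offspring = np.vstack([rng.normal(0.5, 0.1, size=(25, 10)),
                       rng.normal(0.3, 0.1, size=(25, 10))])
pop_target = rng.normal(0.5, 0.1, size=(50, 10))      # target task's own population
pop_related = rng.normal(0.3, 0.1, size=(50, 10))     # source overlapping with offspring
pop_unrelated = rng.normal(-0.5, 0.1, size=(50, 10))  # source far from all offspring

# A related source is expected to earn a substantial weight; an unrelated one, near zero.
print(fit_transfer_weight(offspring, pop_target, pop_related))
print(fit_transfer_weight(offspring, pop_target, pop_unrelated))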
format Thesis-Doctor of Philosophy
id ntu-10356/152658
institution Nanyang Technological University
language English
publishDate 2021
publisher Nanyang Technological University
record_format dspace
spelling ntu-10356/152658 2021-10-05T07:44:18Z En route to automated extraction and transfer of knowledge in multitask optimization : an evolutionary perspective Bali, Kavitesh Kumar Ong Yew Soon School of Computer Science and Engineering A*star SIMTech-NTU Joint Laboratory Tan Puay Siew ASYSOng@ntu.edu.sg Engineering::Computer science and engineering Doctor of Philosophy 2021-09-09T01:23:19Z 2021-09-09T01:23:19Z 2021 Thesis-Doctor of Philosophy Bali, K. K. (2021). En route to automated extraction and transfer of knowledge in multitask optimization : an evolutionary perspective. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/152658 10.32657/10356/152658 en This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). application/pdf Nanyang Technological University
title En route to automated extraction and transfer of knowledge in multitask optimization : an evolutionary perspective
topic Engineering::Computer science and engineering
url https://hdl.handle.net/10356/152658