ConnectomeNet: A Unified Deep Neural Network Modeling Framework for Multi-Task Learning
Despite recent advances in deep neural networks (DNNs), multi-task learning has not been able to utilize DNNs thoroughly. The current method of DNN design for a single task requires considerable skill in deciding many architecture parameters a priori, before training begins, and extending it to multi-task learning makes it even more challenging. Inspired by findings from neuroscience, we propose a unified DNN modeling framework called ConnectomeNet that encompasses the best principles of contemporary DNN designs and unifies them with transfer, curriculum, and adaptive structural learning, all in the context of multi-task learning. Specifically, ConnectomeNet iteratively assembles connectome neuron units into a high-level topology represented as a general directed acyclic graph. As a result, ConnectomeNet enables non-trivial automatic sharing of neurons across multiple tasks and learns to adapt its topology economically for a new task. Extensive experiments, including an ablation study, show that ConnectomeNet outperforms state-of-the-art multi-task learning methods, for example in the degree of catastrophic forgetting from sequential learning. In terms of normalized accuracy, which measures the degree of catastrophic forgetting, our proposed method reaches 100%, surpassing mean-IMM (89.0%) and DEN (99.97%).
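This record does not include the authors' implementation, but the abstract's core idea, neuron units wired as a general directed acyclic graph and shared across tasks with task-specific outputs, can be illustrated with a minimal sketch. Everything below (the class names, the toy topology, the dimensions) is a hypothetical PyTorch-style reconstruction for illustration, not the paper's code:

```python
# Hypothetical sketch (not the authors' code): neuron units wired as a DAG and
# shared across tasks, with one task-specific head per task.
import torch
import torch.nn as nn


class Unit(nn.Module):
    """One neuron unit: sums its predecessors' outputs, then a linear layer + ReLU."""

    def __init__(self, dim):
        super().__init__()
        self.fc = nn.Linear(dim, dim)

    def forward(self, inputs):
        # inputs: list of (batch, dim) tensors from predecessor nodes in the DAG
        return torch.relu(self.fc(torch.stack(inputs).sum(dim=0)))


class DAGMultiTaskNet(nn.Module):
    """Units form a general DAG shared by all tasks; each task owns only a head."""

    def __init__(self, in_dim, dim, edges, task_classes):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)  # node -1: projected network input
        self.edges = edges                  # unit id -> list of predecessor node ids
        self.units = nn.ModuleList(Unit(dim) for _ in edges)
        self.heads = nn.ModuleList(nn.Linear(dim, c) for c in task_classes)

    def forward(self, x, task_id):
        out = {-1: torch.relu(self.proj(x))}
        for u in range(len(self.units)):    # unit ids assumed topologically ordered
            out[u] = self.units[u]([out[p] for p in self.edges[u]])
        return self.heads[task_id](out[len(self.units) - 1])


# Toy topology: unit 1 reads both the input node (-1) and unit 0 (a skip edge),
# and unit 2 aggregates units 0 and 1 before the task heads.
edges = {0: [-1], 1: [-1, 0], 2: [0, 1]}
net = DAGMultiTaskNet(in_dim=32, dim=64, edges=edges, task_classes=[10, 5])
logits = net(torch.randn(8, 32), task_id=0)  # shared units, task-0 head
print(logits.shape)                          # torch.Size([8, 10])
```

One common reading of the abstract's "normalized accuracy" is each old task's accuracy after sequential training divided by its accuracy right after that task was learned, so 100% indicates no measurable forgetting; under that reading, ConnectomeNet (100%) retains marginally more than DEN (99.97%) and substantially more than mean-IMM (89.0%).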
| Main Authors: | Heechul Lim, Kang-Wook Chon, Min-Soo Kim |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Access |
| Subjects: | Adaptive learning; dynamic network expansion; multi-task learning |
| Online Access: | https://ieeexplore.ieee.org/document/10076453/ |
| author | Heechul Lim, Kang-Wook Chon, Min-Soo Kim |
|---|---|
| collection | DOAJ |
| description | Despite recent advances in deep neural networks (DNNs), multi-task learning has not been able to utilize DNNs thoroughly. The current method of DNN design for a single task requires considerable skill in deciding many architecture parameters a priori, before training begins, and extending it to multi-task learning makes it even more challenging. Inspired by findings from neuroscience, we propose a unified DNN modeling framework called ConnectomeNet that encompasses the best principles of contemporary DNN designs and unifies them with transfer, curriculum, and adaptive structural learning, all in the context of multi-task learning. Specifically, ConnectomeNet iteratively assembles connectome neuron units into a high-level topology represented as a general directed acyclic graph. As a result, ConnectomeNet enables non-trivial automatic sharing of neurons across multiple tasks and learns to adapt its topology economically for a new task. Extensive experiments, including an ablation study, show that ConnectomeNet outperforms state-of-the-art multi-task learning methods, for example in the degree of catastrophic forgetting from sequential learning. In terms of normalized accuracy, which measures the degree of catastrophic forgetting, our proposed method reaches 100%, surpassing mean-IMM (89.0%) and DEN (99.97%). |
| format | Article |
| id | doaj.art-f45ff330ae1447c2b2b7a45528c2e4c3 |
| institution | Directory Open Access Journal |
| issn | 2169-3536 |
| language | English |
| publishDate | 2023-01-01 |
| publisher | IEEE |
| record_format | Article |
| series | IEEE Access |
| doi | 10.1109/ACCESS.2023.3258975 |
| citation | IEEE Access, vol. 11, pp. 34297-34308, 2023 (document 10076453) |
| author_details | Heechul Lim (https://orcid.org/0000-0002-3281-3191), Department of Information and Communication Engineering, Daegu Gyeongbuk Institute of Science and Technology, Daegu, Republic of Korea; Kang-Wook Chon, School of Computer Engineering, Korea University of Technology and Education (KOREATECH), Cheonan, Republic of Korea; Min-Soo Kim (https://orcid.org/0000-0002-5065-0226), School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea |
| title | ConnectomeNet: A Unified Deep Neural Network Modeling Framework for Multi-Task Learning |
| topic | Adaptive learning; dynamic network expansion; multi-task learning |
| url | https://ieeexplore.ieee.org/document/10076453/ |