Dynamically growing neural network architecture for lifelong deep learning on the edge

Conventional deep learning models are trained once and deployed. However, models deployed in agents operating in dynamic environments need to constantly acquire new knowledge while preventing catastrophic forgetting of previous knowledge. This ability is commonly referred to as lifelong learning. In this paper, we address the performance and resource challenges for realizing lifelong learning on edge devices. We propose an FPGA-based architecture for a Self-Organizing Neural Network (SONN) that, in combination with a Convolutional Neural Network (CNN), can perform class-incremental lifelong learning for object classification. The proposed SONN architecture is capable of performing unsupervised learning on input features from the CNN by dynamically growing neurons and connections. To meet the tight constraints of edge computing, we introduce efficient scheduling methods to maximize resource reuse and parallelism, as well as approximate computing strategies. Experiments based on the CORe50 dataset for continuous object recognition from video sequences demonstrated that the proposed FPGA architecture significantly outperforms CPU- and GPU-based implementations.

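To make the growth mechanism described in the abstract concrete, the following Python sketch shows one common way a self-organizing network can grow prototype neurons from fixed CNN feature vectors for class-incremental learning. This is only an illustrative sketch, not the paper's SONN algorithm or FPGA design; the class name GrowingSONN, the distance threshold, and the update rule are assumptions made for illustration.

```python
import numpy as np

class GrowingSONN:
    """Illustrative prototype-growing classifier (not the paper's design).

    Each neuron stores a prototype vector in CNN-feature space plus a class
    label. Inputs that are far from every existing prototype, or that collide
    with a prototype of another class, trigger the creation of a new neuron,
    so the network grows as new classes arrive instead of overwriting old ones.
    """

    def __init__(self, feature_dim, grow_threshold=5.0, lr=0.1):
        self.grow_threshold = grow_threshold        # distance beyond which a new neuron is grown (assumed)
        self.lr = lr                                # step size for adapting the winning neuron (assumed)
        self.weights = np.empty((0, feature_dim))   # one row per neuron
        self.labels = []                            # class label per neuron

    def partial_fit(self, feature, label):
        """Present one CNN feature vector: adapt the winning neuron or grow a new one."""
        feature = np.asarray(feature, dtype=float)
        if len(self.labels) == 0:
            self._grow(feature, label)
            return
        dists = np.linalg.norm(self.weights - feature, axis=1)
        winner = int(np.argmin(dists))
        if dists[winner] > self.grow_threshold or self.labels[winner] != label:
            self._grow(feature, label)   # novel input: add a neuron rather than overwrite
        else:
            # pull the winning prototype toward the input (unsupervised adaptation)
            self.weights[winner] += self.lr * (feature - self.weights[winner])

    def predict(self, feature):
        """Classify by the label of the nearest prototype neuron."""
        dists = np.linalg.norm(self.weights - np.asarray(feature, dtype=float), axis=1)
        return self.labels[int(np.argmin(dists))]

    def _grow(self, feature, label):
        self.weights = np.vstack([self.weights, feature[np.newaxis, :]])
        self.labels.append(label)


# Toy usage with random vectors standing in for CNN features (synthetic data):
sonn = GrowingSONN(feature_dim=256)
for class_id in range(3):
    for _ in range(10):
        sonn.partial_fit(np.random.randn(256) + class_id * 10.0, class_id)
print(sonn.predict(np.random.randn(256) + 20.0), len(sonn.labels))
```

Because new classes add neurons rather than modify existing ones, previously learned prototypes are left untouched, which is one simple way to mitigate catastrophic forgetting in a class-incremental setting.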

Bibliographic Details
Main Authors: Piyasena, Duvindu; Thathsara, Miyuru; Kanagarajah, Sathursan; Lam, Siew-Kei; Wu, Meiqing
Other Authors: School of Computer Science and Engineering
Format: Conference Paper
Language: English
Published: 2021
Subjects: Embedded System; Engineering::Computer science and engineering; Deep Learning; Lifelong Learning
Online Access: https://hdl.handle.net/10356/146242
Conference: 2020 30th International Conference on Field-Programmable Logic and Applications (FPL)
Institution: Nanyang Technological University
Citation: Piyasena, D., Thathsara, M., Kanagarajah, S., Lam, S.-K., & Wu, M. (2020). Dynamically growing neural network architecture for lifelong deep learning on the edge. Proceedings of the 2020 30th International Conference on Field-Programmable Logic and Applications (FPL), 262-268. doi:10.1109/FPL50879.2020.00051
DOI: 10.1109/FPL50879.2020.00051
Scopus ID: 2-s2.0-85095564958
ISBN: 9781728199023
Pages: 262-268
Funding: This work was supported in part by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) Programme with the Technical University of Munich at TUMCREATE.
Rights: © 2020 IEEE. All rights reserved.