A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications
Inspired by the computational efficiency of the biological brain, spiking neural networks (SNNs) emulate biological neural networks, neural codes, dynamics, and circuitry. SNNs show great potential for the implementation of unsupervised learning using in-memory computing. Here, we report an algorithmic optimization that improves the energy efficiency of online learning with SNNs on emerging non-volatile memory (eNVM) devices. We develop a pruning method for SNNs that exploits the output firing characteristics of neurons. Unlike previous approaches in the literature, which prune networks that are already trained, our method is applied during network training, preventing unnecessary updates of network parameters. This algorithmic optimization complements the energy efficiency of eNVM technology, which offers a unique in-memory computing platform for parallelizing neural network operations. Our SNN maintains ~90% classification accuracy on the MNIST dataset with up to ~75% pruning, significantly reducing the number of weight updates. The SNN and pruning scheme developed in this work can pave the way toward eNVM-based neuro-inspired systems for energy-efficient online learning in low-power applications.
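The abstract describes the pruning criterion only as "exploiting the output firing characteristics of neurons," so the sketch below is a minimal illustration under stated assumptions, not the authors' exact algorithm: it assumes a firing-count ranking to choose which output neurons to soft-prune, and a simplified STDP-style update that is skipped for pruned synapses. All names, sizes, and constants (`soft_prune`, `stdp_step`, `n_out = 400`, `prune_frac = 0.75`) are illustrative.

```python
import numpy as np

# Minimal sketch of soft-pruning during unsupervised SNN training.
# Criterion and learning rule are illustrative assumptions, not the
# paper's exact formulation.

rng = np.random.default_rng(0)
n_in, n_out = 784, 400                     # e.g. MNIST pixels -> excitatory neurons
W = rng.uniform(0.0, 0.3, (n_in, n_out))   # synaptic weights (eNVM conductances)
trainable = np.ones(n_out, dtype=bool)     # False = soft-pruned output neuron

def soft_prune(spike_counts, prune_frac=0.75, w_low=0.01):
    """Soft-prune the least-active output neurons: park their input
    weights at a low value and stop updating them from here on."""
    k = int(prune_frac * spike_counts.size)
    idle = np.argsort(spike_counts)[:k]    # neurons that fired least
    trainable[idle] = False
    W[:, idle] = w_low

def stdp_step(pre_trace, post_spikes, lr=0.01):
    """Simplified STDP-like update, skipped entirely for pruned columns.
    Every skipped update is one eNVM programming operation saved."""
    cols = trainable & (post_spikes > 0)
    W[:, cols] += lr * np.outer(pre_trace, post_spikes)[:, cols]
    np.clip(W, 0.0, 1.0, out=W)
```

The "soft" aspect matters for crossbar hardware: pruned synapses are parked at a low weight rather than physically removed, so the array layout is untouched and the saving comes from the programming pulses that are never issued for frozen weights.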
Main Authors: | Yuhan Shi, Leon Nguyen, Sangheon Oh, Xin Liu, Duygu Kuzum |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2019-04-01 |
Series: | Frontiers in Neuroscience |
Subjects: | spiking neural networks, unsupervised learning, handwriting recognition, pruning, in-memory computing, emerging non-volatile memory |
Online Access: | https://www.frontiersin.org/article/10.3389/fnins.2019.00405/full |
DOI: | 10.3389/fnins.2019.00405 |
ISSN: | 1662-453X |
Record ID: | doaj.art-c10d61247b1247b09175743483e1bf8e (DOAJ) |
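The abstract's claim that eNVM offers "a unique in-memory computing platform" for parallelizing neural network operations refers to crossbar arrays evaluating a full vector-matrix product in one analog step. A toy software model of that operation, with assumed names and shapes (`crossbar_integrate`, `G`) and not the paper's hardware implementation, might look like:

```python
import numpy as np

# Toy model of the weighted-sum step an eNVM crossbar computes in place:
# input spikes drive the rows, stored conductances act as the weights, and
# each column current delivers one output neuron's total synaptic input.
# This only illustrates the parallelism, not the paper's hardware.

def crossbar_integrate(spikes: np.ndarray, G: np.ndarray) -> np.ndarray:
    """spikes: (n_in,) 0/1 vector for one time step; G: (n_in, n_out)
    conductance matrix. In hardware, all n_in * n_out multiply-accumulate
    operations happen concurrently via Ohm's and Kirchhoff's laws."""
    return spikes @ G

# Example: 784 input lines (MNIST pixels) feeding 400 output neurons.
rng = np.random.default_rng(1)
currents = crossbar_integrate(rng.integers(0, 2, 784),
                              rng.uniform(0.0, 1.0, (784, 400)))
```

In this picture, the read (inference) side is already parallel and cheap; the training cost is dominated by device writes, which is why skipping weight updates for soft-pruned neurons, as the paper proposes, translates directly into energy savings.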