An Analysis on the Architecture and the Size of Quantized Hardware Neural Networks Based on Memristors
We performed a set of simulation experiments on hardware neural networks (NNs) to analyze how the number of synapses affects network accuracy for different NN architectures and datasets. A technology based on 4-kbit 1T1R ReRAM arrays, with resistive switching devices built on HfO₂ dielectrics, is taken as a reference. Fully dense (FdNN) and convolutional neural networks (CNNs) were considered, varying the NN size in terms of the number of synapses and hidden-layer neurons. CNNs work better when the number of synapses available is limited. With quantized synaptic weights, NN accuracy decreases significantly as the number of synapses is reduced, so a trade-off between the number of synapses and accuracy has to be struck. Consequently, the CNN architecture must be carefully designed; in particular, different datasets need architectures matched to their complexity to achieve good results. Given the number of variables involved in optimizing a hardware NN implementation, a specific solution has to be worked out in each case in terms of synaptic weight levels, NN architecture, etc.
Main Authors: Rocio Romero-Zaliz, Antonio Cantudo, Eduardo Perez, Francisco Jimenez-Molinos, Christian Wenger, Juan Bautista Roldan
Format: Article
Language: English
Published: MDPI AG, 2021-12-01
Series: Electronics
Subjects: memristor; multilevel operation; hardware neural network; deep neural network; convolutional neural network; network architecture
Online Access: https://www.mdpi.com/2079-9292/10/24/3141
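The central knob discussed in the abstract, quantized synaptic weights mapped onto a small number of memristor conductance states, can be sketched as follows. This is a minimal illustrative uniform quantizer; the level count and the linear mapping are assumptions for illustration, not the calibration used in the paper for the HfO₂ devices:

```python
import numpy as np

def quantize_weights(w, n_levels=8):
    """Map continuous weights onto n_levels evenly spaced values,
    mimicking the discrete conductance states of a multilevel
    memristive synapse (uniform quantizer; illustrative only)."""
    w = np.asarray(w, dtype=float)
    w_min, w_max = w.min(), w.max()
    if w_max == w_min:                      # constant array: nothing to quantize
        return w.copy()
    levels = np.linspace(w_min, w_max, n_levels)
    # Snap each weight to the index of the nearest level
    idx = np.round((w - w_min) / (w_max - w_min) * (n_levels - 1)).astype(int)
    return levels[idx]

# Example: 4 quantization levels over [-1, 1]
w = np.array([-1.0, -0.4, 0.1, 1.0])
print(quantize_weights(w, n_levels=4))
```

With `n_levels=4` the example weights collapse onto the four values −1, −1/3, 1/3, and 1; lowering `n_levels` models coarser multilevel ReRAM cells, which is what drives the accuracy loss the abstract reports for small networks.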
_version_ | 1797505185636941824 |
author | Rocio Romero-Zaliz; Antonio Cantudo; Eduardo Perez; Francisco Jimenez-Molinos; Christian Wenger; Juan Bautista Roldan |
author_sort | Rocio Romero-Zaliz |
collection | DOAJ |
description | We performed a set of simulation experiments on hardware neural networks (NNs) to analyze how the number of synapses affects network accuracy for different NN architectures and datasets. A technology based on 4-kbit 1T1R ReRAM arrays, with resistive switching devices built on HfO₂ dielectrics, is taken as a reference. Fully dense (FdNN) and convolutional neural networks (CNNs) were considered, varying the NN size in terms of the number of synapses and hidden-layer neurons. CNNs work better when the number of synapses available is limited. With quantized synaptic weights, NN accuracy decreases significantly as the number of synapses is reduced, so a trade-off between the number of synapses and accuracy has to be struck. Consequently, the CNN architecture must be carefully designed; in particular, different datasets need architectures matched to their complexity to achieve good results. Given the number of variables involved in optimizing a hardware NN implementation, a specific solution has to be worked out in each case in terms of synaptic weight levels, NN architecture, etc. |
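The abstract's claim that CNNs do better under a tight synapse budget comes down to weight sharing: a convolutional kernel reuses the same few weights across all spatial positions, while a dense layer needs one weight per input-output pair. A rough parameter count makes the gap concrete (the layer sizes below are illustrative assumptions, not the architectures evaluated in the paper):

```python
def fd_synapses(layer_sizes):
    """Number of weights (synapses) in a fully dense network given its
    layer widths, e.g. [784, 32, 10] for a small MNIST-like classifier
    (biases ignored)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def conv_synapses(in_ch, out_ch, k):
    """Weights in one k x k convolutional layer: shared across image
    positions, so the count is independent of the image size."""
    return in_ch * out_ch * k * k

# A dense 784 -> 32 -> 10 network needs 784*32 + 32*10 = 25408 weights,
# while a conv layer with 8 filters of size 3x3 on 1 input channel needs 72.
print(fd_synapses([784, 32, 10]))   # 25408
print(conv_synapses(1, 8, 3))       # 72
```

The convolutional feature extractor spends orders of magnitude fewer synapses than a dense front end of comparable reach, which is why a hardware budget of a few thousand 1T1R cells favors CNN-style architectures.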
first_indexed | 2024-03-10T04:15:00Z |
format | Article |
id | doaj.art-6959d5ea548c4d5d8cd97b98060566d0 |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
last_indexed | 2024-03-10T04:15:00Z |
publishDate | 2021-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Electronics |
spelling | doaj.art-6959d5ea548c4d5d8cd97b98060566d0 (2023-11-23T08:02:41Z); Electronics (MDPI AG), ISSN 2079-9292, 2021-12-01, vol. 10, iss. 24, art. 3141, doi:10.3390/electronics10243141. Affiliations: Rocio Romero-Zaliz (Andalusian Research Institute on Data Science and Computational Intelligence (DaSCI), University of Granada, 18071 Granada, Spain); Antonio Cantudo (Departamento de Electrónica y Tecnología de Computadores, Universidad de Granada, 18071 Granada, Spain); Eduardo Perez (IHP-Leibniz-Institut für Innovative Mikroelektronik, 15236 Frankfurt an der Oder, Germany); Francisco Jimenez-Molinos (Departamento de Electrónica y Tecnología de Computadores, Universidad de Granada, 18071 Granada, Spain); Christian Wenger (IHP-Leibniz-Institut für Innovative Mikroelektronik, 15236 Frankfurt an der Oder, Germany); Juan Bautista Roldan (Departamento de Electrónica y Tecnología de Computadores, Universidad de Granada, 18071 Granada, Spain) |
title | An Analysis on the Architecture and the Size of Quantized Hardware Neural Networks Based on Memristors |
topic | memristor; multilevel operation; hardware neural network; deep neural network; convolutional neural network; network architecture |
url | https://www.mdpi.com/2079-9292/10/24/3141 |