In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM
Recently, numerous studies have investigated computing-in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energy efficiency, and non-volatility, spin-orbit torque magnetic random access memory (SOT-MRAM) has received substantial attention...
Main Authors: | Jun-Ying Huang, Jing-Lin Syu, Yao-Tung Tsou, Sy-Yen Kuo, Ching-Ray Chang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-04-01 |
Series: | Electronics |
Subjects: | convolution neural network; computing in memory; processing in memory; distributed arithmetic; MRAM; SOT-MRAM |
Online Access: | https://www.mdpi.com/2079-9292/11/8/1245 |
_version_ | 1797434855677493248 |
---|---|
author | Jun-Ying Huang; Jing-Lin Syu; Yao-Tung Tsou; Sy-Yen Kuo; Ching-Ray Chang |
author_sort | Jun-Ying Huang |
collection | DOAJ |
description | Recently, numerous studies have investigated computing-in-memory (CIM) architectures for neural networks to overcome memory bottlenecks. Because of its low delay, high energy efficiency, and non-volatility, spin-orbit torque magnetic random access memory (SOT-MRAM) has received substantial attention. However, previous studies relied on dedicated calculation circuits to support complex computations, leading to substantial energy consumption. Our research therefore proposes a new CIM architecture with small peripheral circuits; this architecture achieves higher performance than other CIM architectures when processing convolutional neural networks (CNNs). We include a distributed arithmetic (DA) algorithm that improves the efficiency of the CIM calculation method by reducing the excessive read/write operations and execution steps of CIM-based CNN calculation circuits. Furthermore, our method uses SOT-MRAM to increase calculation speed and reduce power consumption. Compared with the CIM-based CNN arithmetic circuits of previous studies, our method achieves shorter clock periods and reduces read operations by up to 43.3% without the need for additional circuits. |
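The distributed arithmetic (DA) technique named in the abstract trades multipliers for table lookups: a fixed-coefficient dot product is evaluated one input bit-plane per step, with each cross-input bit pattern indexing a precomputed table of partial coefficient sums that is shift-accumulated into the result. This maps naturally onto CIM, where each step becomes a memory read. The sketch below illustrates only the generic DA scheme; the function name, unsigned fixed-point inputs, and table layout are illustrative assumptions, not details from the paper:

```python
def da_dot(coeffs, xs, bits=8):
    """Distributed-arithmetic dot product of fixed integer coefficients
    with unsigned fixed-point inputs xs, each `bits` wide.

    Precomputes a lookup table indexed by the bit-slice pattern taken
    across all inputs, then shift-accumulates one bit-plane per step.
    """
    n = len(coeffs)
    # LUT: for each n-bit pattern, the sum of coefficients whose input
    # contributes a 1 in that bit-plane.
    lut = [sum(c for k, c in enumerate(coeffs) if (pattern >> k) & 1)
           for pattern in range(1 << n)]
    acc = 0
    for b in range(bits):                       # one bit-plane per step
        pattern = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
        acc += lut[pattern] << b                # shift-and-accumulate
    return acc

# Matches a direct multiply-accumulate of the same operands.
assert da_dot([3, 5, 7], [10, 20, 30]) == 3 * 10 + 5 * 20 + 7 * 30
```

The LUT grows as 2^n in the number of inputs, so hardware DA designs typically split long dot products into small groups; the abstract's read-count savings come from replacing per-multiplication accesses with one lookup per bit-plane.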
first_indexed | 2024-03-09T10:38:54Z |
format | Article |
id | doaj.art-b284cded863a48e1826d1a15d12f6a27 |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
last_indexed | 2024-03-09T10:38:54Z |
publishDate | 2022-04-01 |
publisher | MDPI AG |
record_format | Article |
series | Electronics |
spelling | doaj.art-b284cded863a48e1826d1a15d12f6a27. Electronics, vol. 11, no. 8, art. 1245, published 2022-04-01 by MDPI AG (ISSN 2079-9292). DOI: 10.3390/electronics11081245. Title: In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM. Authors: Jun-Ying Huang (Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan); Jing-Lin Syu (Department of Communications Engineering, Feng Chia University, Taichung 407, Taiwan); Yao-Tung Tsou (Department of Communications Engineering, Feng Chia University, Taichung 407, Taiwan); Sy-Yen Kuo (Department of Electrical Engineering, National Taiwan University, Taipei 106, Taiwan); Ching-Ray Chang (Quantum Information Center, Chung Yuan Christian University, Taoyuan 320, Taiwan). Keywords: convolution neural network; computing in memory; processing in memory; distributed arithmetic; MRAM; SOT-MRAM. Online access: https://www.mdpi.com/2079-9292/11/8/1245 |
title | In-Memory Computing Architecture for a Convolutional Neural Network Based on Spin Orbit Torque MRAM |
topic | convolution neural network; computing in memory; processing in memory; distributed arithmetic; MRAM; SOT-MRAM |
url | https://www.mdpi.com/2079-9292/11/8/1245 |