SHE-MTJ based ReLU-max pooling functions for on-chip training of neural networks

Bibliographic Details
Main Authors: Venkatesh Vadde, Bhaskaran Muralidharan, Abhishek Sharma
Format: Article
Language: English
Published: AIP Publishing LLC, 2024-02-01
Series: AIP Advances
Online Access:http://dx.doi.org/10.1063/9.0000685
Description
Summary: We present a detailed investigation of various routes to optimize the power consumption of spintronic devices implementing the rectified linear activation (ReLU) and max-pooling functions. We examine the influence of various spin Hall effect (SHE) layers and their input resistances on the power consumption of the ReLU-max pooling functions, and we also assess the impact of the thermal stability factor of the free ferromagnet (FM) layer on the power consumption and accuracy of the device. The design for ReLU-max pooling relies on continuous rotation of magnetization, which is accomplished by applying an orthogonal spin current to the free-FM layer. We also demonstrate a non-trivial power-resistance relation, wherein the power consumption decreases with an increase in SHE resistance. We utilize a hybrid spintronic-CMOS simulation platform that combines the Keldysh non-equilibrium Green's function (NEGF) formalism with the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation and the HSPICE circuit simulator to evaluate our network. Our design consumes 0.343 μW of power for ReLU emulation and 17.86 μW for the ReLU-max pooling network implementation at a thermal stability factor of 4.58, while maintaining reliable results. We validate the efficiency of our design by implementing a convolutional neural network that classifies the handwritten-MNIST and fashion-MNIST datasets. This implementation illustrates that the classification accuracies achieved are on par with those attained using ideal software ReLU-max pooling functions, with an energy consumption of 167.31 pJ per sample.
ISSN: 2158-3226
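
For reference, the free-layer magnetization dynamics named in the summary are typically modeled by the Landau-Lifshitz-Gilbert-Slonczewski (LLGS) equation. The expression below is the standard textbook form, not reproduced from the paper; sign and prefactor conventions vary between references.

\[
\frac{d\hat{m}}{dt}
= -\gamma\,\hat{m} \times \vec{H}_{\mathrm{eff}}
+ \alpha\,\hat{m} \times \frac{d\hat{m}}{dt}
+ \frac{\hbar\,\gamma\, J_s}{2 e\, \mu_0 M_s\, t_{\mathrm{FM}}}\,\hat{m} \times \left(\hat{m} \times \hat{\sigma}\right)
\]

Here \(\hat{m}\) is the free-layer magnetization unit vector, \(\gamma\) the gyromagnetic ratio, \(\vec{H}_{\mathrm{eff}}\) the effective field, \(\alpha\) the Gilbert damping, \(J_s\) the (orthogonal) spin-current density, \(M_s\) the saturation magnetization, \(t_{\mathrm{FM}}\) the free-layer thickness, and \(\hat{\sigma}\) the spin-polarization direction.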
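The software baseline against which the device-level ReLU-max pooling is compared can be pictured as an ordinary convolutional network. The sketch below is a minimal, hypothetical architecture (not the one reported in the paper) that simply marks where the ReLU and max-pooling stages the SHE-MTJ circuits would replace sit in an MNIST/fashion-MNIST classifier.

```python
# Hypothetical CNN sketch: illustrates the software ReLU + max-pooling baseline
# that the SHE-MTJ device functions emulate. Architecture and layer sizes are
# illustrative only, not taken from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 28x28 grayscale (fashion-)MNIST input
    nn.ReLU(),                                   # stage emulated by the SHE-MTJ ReLU
    nn.MaxPool2d(2),                             # stage emulated by the SHE-MTJ max pooling
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # 10 output classes
)

x = torch.randn(1, 1, 28, 28)                    # dummy MNIST-sized sample
logits = model(x)
print(logits.shape)                              # torch.Size([1, 10])
```

In the paper's evaluation, the ReLU and max-pooling stages of such a network are realized by the spintronic circuits, and the reported accuracies are compared against this kind of ideal software implementation.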