SHE-MTJ based ReLU-max pooling functions for on-chip training of neural networks
We present a detailed investigation of various routes to optimizing the power consumption of spintronic devices that implement the rectified linear activation (ReLU) and max-pooling functions. We examine the influence of various spin Hall effect layers and their input resistances on the power...
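For context, the two operations realized by the spintronic circuits are the standard ReLU and max-pooling functions. The short NumPy sketch below is only an illustrative software reference with an assumed non-overlapping 2x2 pooling window; it is not drawn from the article itself.

```python
import numpy as np

def relu(x):
    # Rectified linear activation: keep positive values, clamp negatives to zero
    return np.maximum(0.0, x)

def max_pool_2x2(feature_map):
    # Max pooling over non-overlapping 2x2 windows of a 2D feature map
    # (window size is an illustrative assumption, not specified here)
    h, w = feature_map.shape
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

# Example: ReLU followed by 2x2 max pooling on a small feature map
x = np.array([[1.0, -2.0, 3.0, -4.0],
              [5.0, 6.0, -7.0, 8.0]])
print(relu(x))                  # [[1. 0. 3. 0.] [5. 6. 0. 8.]]
print(max_pool_2x2(relu(x)))    # [[6. 8.]]
```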
| Main Authors: | Venkatesh Vadde, Bhaskaran Muralidharan, Abhishek Sharma |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | AIP Publishing LLC, 2024-02-01 |
| Series: | AIP Advances |
| Online Access: | http://dx.doi.org/10.1063/9.0000685 |
Similar Items
- Reliably Learning the ReLU
  by: Goel, S, et al.
  Published: (2017)
- Locally linear attributes of ReLU neural networks
  by: Ben Sattelberg, et al.
  Published: (2023-11-01)
- Integrating geometries of ReLU feedforward neural networks
  by: Yajing Liu, et al.
  Published: (2023-11-01)
- Reductions of ReLU neural networks to linear neural networks and their applications
  by: Le, Thien
  Published: (2022)
- Training a Two-Layer ReLU Network Analytically
  by: Adrian Barbu
  Published: (2023-04-01)