Enhancing On-Device DNN Inference Performance With a Reduced Retention-Time MRAM-Based Memory Architecture
As applications using deep neural networks (DNNs) are increasingly deployed on mobile devices, researchers are exploring various methods to achieve low energy consumption and high performance. Recently, advances in STT-MRAM have shown promise in offering non-volatility, high performance, and low energy consumption...
Main Authors:
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10752558/