A Survey of Near-Data Processing Architectures for Neural Networks
Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von-Neumann architecture. As data movement operations and energy consumption become key bottlenecks in the design of computing systems, the interest in unconventional approaches such as Near-Data Processing (NDP), machine learning, and especially neural network (NN)-based accelerators has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked, are promising for efficiently architecting NDP-based accelerators for NN due to their capabilities to work as both high-density/low-energy storage and in/near-memory computation/search engine. In this paper, we present a survey of techniques for designing NDP architectures for NN. By classifying the techniques based on the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend the adoption of NDP architectures for future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning.
Main Authors: | Mehdi Hassanpour, Marc Riera, Antonio González |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-01-01 |
Series: | Machine Learning and Knowledge Extraction |
Subjects: | machine learning; deep neural networks; near-data processing; near-memory-processing; processing-in-memory; conventional memory technology |
Online Access: | https://www.mdpi.com/2504-4990/4/1/4 |
author | Mehdi Hassanpour; Marc Riera; Antonio González |
collection | DOAJ |
description | Data-intensive workloads and applications, such as machine learning (ML), are fundamentally limited by traditional computing systems based on the von-Neumann architecture. As data movement operations and energy consumption become key bottlenecks in the design of computing systems, the interest in unconventional approaches such as Near-Data Processing (NDP), machine learning, and especially neural network (NN)-based accelerators has grown significantly. Emerging memory technologies, such as ReRAM and 3D-stacked, are promising for efficiently architecting NDP-based accelerators for NN due to their capabilities to work as both high-density/low-energy storage and in/near-memory computation/search engine. In this paper, we present a survey of techniques for designing NDP architectures for NN. By classifying the techniques based on the memory technology employed, we underscore their similarities and differences. Finally, we discuss open challenges and future perspectives that need to be explored in order to improve and extend the adoption of NDP architectures for future computing platforms. This paper will be valuable for computer architects, chip designers, and researchers in the area of machine learning. |
format | Article |
id | doaj.art-f81356e16834462ebc62a7aee654221e |
institution | Directory Open Access Journal |
issn | 2504-4990 |
language | English |
publishDate | 2022-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Machine Learning and Knowledge Extraction |
spelling | Mehdi Hassanpour, Marc Riera, Antonio González (Department of Computer Architecture, Universitat Politècnica de Catalunya (UPC), 08034 Barcelona, Spain). "A Survey of Near-Data Processing Architectures for Neural Networks." Machine Learning and Knowledge Extraction (MDPI AG), Vol. 4, Iss. 1, pp. 66–102, 2022-01-01. ISSN 2504-4990. doi:10.3390/make4010004. https://www.mdpi.com/2504-4990/4/1/4 |
title | A Survey of Near-Data Processing Architectures for Neural Networks |
topic | machine learning deep neural networks near-data processing near-memory-processing processing-in-memory conventional memory technology |
url | https://www.mdpi.com/2504-4990/4/1/4 |