Towards energy-efficient neural network calculations


Bibliographic Details
Main Authors: E.S. Noskova, I.E. Zakharov, Y.N. Shkandybin, S.G. Rykovanov
Format: Article
Language: English
Published: Samara National Research University, 2022-02-01
Series: Компьютерная оптика (Computer Optics)
Online Access: https://computeroptics.ru/eng/KO/Annot/KO46-1/460118e.html
Description
Summary: Nowadays, the problem of creating high-performance and energy-efficient hardware for Artificial Intelligence tasks is very acute. The most popular solution to this problem is the use of Deep Learning Accelerators, such as GPUs and Tensor Processing Units, to run neural networks. Recently, NVIDIA announced the NVDLA project, which allows one to design neural network accelerators based on open-source code. This work describes the full cycle of creating a prototype NVDLA accelerator, as well as testing the resulting solution by running the ResNet-50 neural network on it. Finally, the performance and power efficiency of the prototype NVDLA accelerator are assessed against a GPU and a CPU; the results show the superiority of NVDLA in many characteristics.
ISSN: 0134-2452, 2412-6179
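
The summary mentions assessing the performance and power efficiency of the prototype NVDLA accelerator against a GPU and a CPU. The sketch below is a minimal illustration of how such a comparison can be expressed (throughput in images per second, energy efficiency in images per joule); the device names and numbers are placeholder assumptions for illustration only, not measurements or code from the paper.

```python
# Illustrative sketch: deriving throughput and energy efficiency from
# measured latency and average power. All values are hypothetical.
from dataclasses import dataclass


@dataclass
class Measurement:
    name: str          # device label
    batch_size: int    # images processed per inference call
    latency_s: float   # measured time per call, seconds
    avg_power_w: float # average power during the run, watts

    @property
    def throughput_ips(self) -> float:
        """Images processed per second."""
        return self.batch_size / self.latency_s

    @property
    def efficiency_ipj(self) -> float:
        """Images per joule = throughput divided by power draw."""
        return self.throughput_ips / self.avg_power_w


if __name__ == "__main__":
    # Placeholder numbers, not results reported in the article.
    runs = [
        Measurement("NVDLA prototype (hypothetical)", 1, 0.020, 3.0),
        Measurement("GPU (hypothetical)", 32, 0.050, 250.0),
        Measurement("CPU (hypothetical)", 1, 0.080, 95.0),
    ]
    for m in runs:
        print(f"{m.name}: {m.throughput_ips:.1f} img/s, "
              f"{m.efficiency_ipj:.3f} img/J")
```

Images per joule is a common way to normalize such comparisons, since raw throughput alone favors high-power devices.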