Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models
Main Authors: | Vuk Vranjkovic, Predrag Teodorovic, Rastislav Struharik |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-04-01 |
Series: | Electronics |
Subjects: | decision trees; support vector machines; artificial neural networks; hardware accelerator architectures; edge computing; sparse predictive models |
Online Access: | https://www.mdpi.com/2079-9292/11/8/1178 |
_version_ | 1797446782338203648 |
---|---|
author | Vuk Vranjkovic; Predrag Teodorovic; Rastislav Struharik 
author_facet | Vuk Vranjkovic; Predrag Teodorovic; Rastislav Struharik 
author_sort | Vuk Vranjkovic |
collection | DOAJ |
description | This study presents a universal reconfigurable hardware accelerator for efficient processing of sparse decision trees, artificial neural networks and support vector machines. The main idea is to develop a hardware accelerator that is able to directly process sparse machine learning models, resulting in shorter inference times and lower power consumption compared to existing solutions. To the authors' best knowledge, this is the first hardware accelerator of this type. Additionally, this is the first accelerator capable of processing sparse machine learning models of different types. Besides the hardware accelerator itself, algorithms for the induction of sparse decision trees and the pruning of support vector machines and artificial neural networks are presented. Such sparse machine learning classifiers are attractive since they require significantly less memory for storing model parameters. This results in reduced data movement between the accelerator and the DRAM memory, as well as a reduced number of operations required to process input instances, leading to faster and more energy-efficient processing. This could be of significant interest in edge-based applications with severely constrained memory, computation resources and power budgets. The performance of the algorithms and the developed hardware accelerator is demonstrated using standard benchmark datasets from the UCI Machine Learning Repository. The results of the experimental study reveal that the proposed algorithms and the presented hardware accelerator are superior to some of the existing solutions. Throughput is increased by up to 2 times for decision trees, 2.3 times for support vector machines and 38 times for artificial neural networks. When processing latency is considered, the maximum performance improvement is even higher: up to a 4.4 times reduction for decision trees, an 84.1 times reduction for support vector machines and a 22.2 times reduction for artificial neural networks. Finally, since it supports sparse classifiers, the proposed hardware accelerator significantly reduces the energy spent on DRAM data transfers: by 50.16% for decision trees, 93.65% for support vector machines and as much as 93.75% for artificial neural networks. |
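The description argues that sparse models need less parameter memory, which in turn reduces DRAM traffic and energy. As a minimal illustration only (not the paper's pruning algorithms or on-chip storage format), the following Python sketch prunes a small weight matrix by magnitude and stores the survivors in a CSR-style layout; the matrix size, threshold, and function names are illustrative assumptions.

```python
import numpy as np

def prune(weights, threshold):
    """Magnitude pruning: zero out weights with |w| below the threshold."""
    return np.where(np.abs(weights) >= threshold, weights, np.float32(0.0))

def to_csr(dense):
    """Convert a dense matrix to CSR arrays: values, column indices, row pointers."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # cumulative nonzero count per row
    return (np.array(values, dtype=np.float32),
            np.array(col_idx, dtype=np.int32),
            np.array(row_ptr, dtype=np.int32))

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
w_pruned = prune(w, threshold=1.5)  # keeps only large-magnitude weights

vals, cols, ptrs = to_csr(w_pruned)
dense_bytes = w.nbytes
csr_bytes = vals.nbytes + cols.nbytes + ptrs.nbytes
print(f"dense: {dense_bytes} B, CSR: {csr_bytes} B")
```

At this sparsity level the CSR footprint is a fraction of the dense one, which is the mechanism behind the reduced DRAM transfers the abstract reports; the accelerator in the article operates on such compressed representations directly rather than re-densifying them.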
first_indexed | 2024-03-09T13:45:32Z |
format | Article |
id | doaj.art-b97a11fd5e474255bd5aef4d098dbfed |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
last_indexed | 2024-03-09T13:45:32Z |
publishDate | 2022-04-01 |
publisher | MDPI AG |
record_format | Article |
series | Electronics |
spelling | doaj.art-b97a11fd5e474255bd5aef4d098dbfed 2023-11-30T21:01:26Z eng MDPI AG Electronics 2079-9292 2022-04-01 11 8 1178 10.3390/electronics11081178 Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models Vuk Vranjkovic; Predrag Teodorovic; Rastislav Struharik (Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia) https://www.mdpi.com/2079-9292/11/8/1178 decision trees; support vector machines; artificial neural networks; hardware accelerator architectures; edge computing; sparse predictive models |
spellingShingle | Vuk Vranjkovic; Predrag Teodorovic; Rastislav Struharik; Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models; Electronics; decision trees; support vector machines; artificial neural networks; hardware accelerator architectures; edge computing; sparse predictive models 
title | Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models |
title_full | Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models |
title_fullStr | Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models |
title_full_unstemmed | Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models |
title_short | Universal Reconfigurable Hardware Accelerator for Sparse Machine Learning Predictive Models |
title_sort | universal reconfigurable hardware accelerator for sparse machine learning predictive models |
topic | decision trees; support vector machines; artificial neural networks; hardware accelerator architectures; edge computing; sparse predictive models 
url | https://www.mdpi.com/2079-9292/11/8/1178 |
work_keys_str_mv | AT vukvranjkovic universalreconfigurablehardwareacceleratorforsparsemachinelearningpredictivemodels AT predragteodorovic universalreconfigurablehardwareacceleratorforsparsemachinelearningpredictivemodels AT rastislavstruharik universalreconfigurablehardwareacceleratorforsparsemachinelearningpredictivemodels |