Optimization of neural networks through high level synthesis

With the increasing popularity of machine learning, coupled with growing computing power, the field of machine learning algorithms has become a very dynamic and fast-moving one. The effectiveness of these applications has led to concerted efforts to embed them into other systems. However, a drawback of machine learning algorithms is their large computational and space complexity, which demands substantial power and physical resources to run. In embedded systems, these demands pose a problem, as size and performance are key constraints. Optimizing such solutions traditionally requires engineering at the Register Transfer Level (RTL), which is time-consuming and error-prone. In many implementations, it may be preferable to accept a solution that performs well enough, rather than one optimized down to the last bit through RTL design. In this report, we implement a small-scale machine learning model, a Convolutional Neural Network (CNN) trained offline in Python, on a Field-Programmable Gate Array (FPGA), the Zedboard. The report explores combinations of compiler directives, or pragmas, which are interpreted by the High-Level Synthesis (HLS) compiler. Through these directives, the designer can influence how the solution is implemented and can improve its space and computational complexity.

Bibliographic Details
Main Author: Liem, Jonathan Zhuan Kim
Other Authors: Smitha Kavallur Pisharath Gopi
Format: Final Year Project (FYP)
Language: English
Published: 2018
Subjects: DRNTU::Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access: http://hdl.handle.net/10356/76135
Institution: Nanyang Technological University
School: School of Computer Science and Engineering
Degree: Bachelor of Engineering (Computer Engineering)
Extent: 58 p.