An evolvable block-based neural network architecture for embedded hardware

Evolvable neural networks are a more recent architecture that differs from conventional artificial neural networks (ANNs) in that it allows changes in structure and design to cope with dynamic operating environments. Block-based neural networks (BbNNs) provide a more unified solution...


Bibliographic Details
Main Author: Paramasivam, Vishnu
Format: Thesis
Language: English
Published: 2013
Subjects: QA75 Electronic computers. Computer science; TK Electrical engineering. Electronics Nuclear engineering
Online Access: http://eprints.utm.my/33763/5/VishnuParamsivamPFKE2013.pdf
_version_ 1796857251079651328
author Paramasivam, Vishnu
author_facet Paramasivam, Vishnu
author_sort Paramasivam, Vishnu
collection ePrints
description Evolvable neural networks are a more recent architecture that differs from conventional artificial neural networks (ANNs) in that it allows changes in structure and design to cope with dynamic operating environments. Block-based neural networks (BbNNs) provide a more unified solution to the two fundamental problems of ANNs: simultaneous optimization of structure, and viable implementation in reconfigurable embedded hardware such as field programmable gate arrays (FPGAs), owing to their modular structure. However, BbNNs still have several outstanding issues to be resolved for an effective implementation. An efficient hardware design can only be obtained with proper design consideration. To date, no previous work has been reported on BbNNs configured in recurrent mode for complex case studies, even though this is theoretically possible. Existing BbNN models do not explicitly specify or model the latency of the system, determine how it affects the system, or show how it can be optimized. Also, current methods of training BbNNs using a genetic algorithm (GA) are slow, especially with large training datasets. This thesis presents an improved BbNN model, proposes a state-of-the-art simulation and co-design environment for it, and implements it on a hardware platform for improved speed and performance. It has a novel architecture with deterministic outputs that can evolve and operate in both feedforward and recurrent modes. The BbNN is redesigned for optimal system latency to achieve higher performance, and supports on-chip training for multi-objective optimization using a multi-population parallel genetic algorithm. All the proposed algorithms lead to an efficient and scalable hardware implementation. The viability of the resulting BbNN system-on-chip (SoC) is proven with real-time performance analysis of real-world case studies, where performance improvements of up to 410× are observed. The hardware logic utilization is minimized with the help of theoretical analysis and design considerations. A case study requiring the use of a recurrent-mode BbNN is also presented. All case studies tested with the BbNN give equivalent or better classification accuracies compared to those reported in previous works, but with optimized latency values. For example, the proposed BbNN solution achieves a classification accuracy of 99.41% for the heart arrhythmia case study, an improvement over previous work. The validity of the proposed BbNN model is thus verified.
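
The abstract describes the BbNN only at a high level. Below is a minimal, hypothetical sketch of the general idea as it is commonly described in the block-based neural network literature: a grid of small 2-input/2-output blocks evaluated column by column in feedforward mode, with all block weights encoded in a chromosome and tuned by a simple genetic algorithm. The names (Block, BbNN, evolve), the block wiring, and every parameter are illustrative assumptions; this does not reproduce the thesis's architecture, its recurrent mode, latency optimization, or its multi-population parallel GA.

```python
# Hypothetical sketch of a feedforward block-based neural network (BbNN)
# trained by a simple genetic algorithm.  All structure and parameters are
# illustrative assumptions, not the design proposed in the thesis.
import math
import random


def act(x):
    """Sigmoid activation used inside each block."""
    return 1.0 / (1.0 + math.exp(-x))


class Block:
    """A 2-input / 2-output block: a tiny fully connected layer."""
    def __init__(self, weights):
        # weights: 6 values = 2x2 weight matrix (w[0..3]) + 2 biases (w[4..5])
        self.w = weights

    def forward(self, a, b):
        o1 = act(self.w[0] * a + self.w[1] * b + self.w[4])
        o2 = act(self.w[2] * a + self.w[3] * b + self.w[5])
        return o1, o2


class BbNN:
    """Blocks arranged in a grid; signals flow left-to-right (feedforward)."""
    def __init__(self, rows, cols, chromosome):
        self.rows, self.cols = rows, cols
        self.grid = [[Block(chromosome[(r * cols + c) * 6:(r * cols + c) * 6 + 6])
                      for c in range(cols)] for r in range(rows)]

    def forward(self, inputs):
        # One input signal per row; each column transforms the row signals.
        signals = list(inputs)
        for c in range(self.cols):
            nxt = []
            for r in range(self.rows):
                a = signals[r]
                b = signals[(r + 1) % self.rows]   # simple inter-row coupling
                o1, _ = self.grid[r][c].forward(a, b)
                nxt.append(o1)
            signals = nxt
        return signals[0]  # single scalar output


def fitness(chrom, data, rows=2, cols=2):
    """Classification accuracy of the network encoded by the chromosome."""
    net = BbNN(rows, cols, chrom)
    correct = sum(1 for x, y in data if (net.forward(x) > 0.5) == y)
    return correct / len(data)


def evolve(data, rows=2, cols=2, pop=30, gens=50):
    """Very small GA: truncation selection, one-point crossover, point mutation."""
    n_genes = rows * cols * 6
    population = [[random.uniform(-2, 2) for _ in range(n_genes)] for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda c: -fitness(c, data, rows, cols))
        parents = ranked[:pop // 2]
        children = []
        while len(children) < pop - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(n_genes)
            child = p1[:cut] + p2[cut:]
            child[random.randrange(n_genes)] += random.gauss(0, 0.3)  # mutation
            children.append(child)
        population = parents + children
    return max(population, key=lambda c: fitness(c, data, rows, cols))


if __name__ == "__main__":
    # Toy binary task: does the sum of two inputs exceed 1.0?
    data = [((a, b), a + b > 1.0)
            for a in (0.0, 0.25, 0.5, 0.75, 1.0)
            for b in (0.0, 0.25, 0.5, 0.75, 1.0)]
    best = evolve(data)
    print("best accuracy:", fitness(best, data))
```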
first_indexed 2024-03-05T18:54:54Z
format Thesis
id utm.eprints-33763
institution Universiti Teknologi Malaysia - ePrints
language English
last_indexed 2024-03-05T18:54:54Z
publishDate 2013
record_format dspace
spelling utm.eprints-33763 2017-07-17T07:07:10Z http://eprints.utm.my/33763/ An evolvable block-based neural network architecture for embedded hardware Paramasivam, Vishnu QA75 Electronic computers. Computer science TK Electrical engineering. Electronics Nuclear engineering Evolvable neural networks are a more recent architecture that differs from conventional artificial neural networks (ANNs) in that it allows changes in structure and design to cope with dynamic operating environments. Block-based neural networks (BbNNs) provide a more unified solution to the two fundamental problems of ANNs: simultaneous optimization of structure, and viable implementation in reconfigurable embedded hardware such as field programmable gate arrays (FPGAs), owing to their modular structure. However, BbNNs still have several outstanding issues to be resolved for an effective implementation. An efficient hardware design can only be obtained with proper design consideration. To date, no previous work has been reported on BbNNs configured in recurrent mode for complex case studies, even though this is theoretically possible. Existing BbNN models do not explicitly specify or model the latency of the system, determine how it affects the system, or show how it can be optimized. Also, current methods of training BbNNs using a genetic algorithm (GA) are slow, especially with large training datasets. This thesis presents an improved BbNN model, proposes a state-of-the-art simulation and co-design environment for it, and implements it on a hardware platform for improved speed and performance. It has a novel architecture with deterministic outputs that can evolve and operate in both feedforward and recurrent modes. The BbNN is redesigned for optimal system latency to achieve higher performance, and supports on-chip training for multi-objective optimization using a multi-population parallel genetic algorithm. All the proposed algorithms lead to an efficient and scalable hardware implementation. The viability of the resulting BbNN system-on-chip (SoC) is proven with real-time performance analysis of real-world case studies, where performance improvements of up to 410× are observed. The hardware logic utilization is minimized with the help of theoretical analysis and design considerations. A case study requiring the use of a recurrent-mode BbNN is also presented. All case studies tested with the BbNN give equivalent or better classification accuracies compared to those reported in previous works, but with optimized latency values. For example, the proposed BbNN solution achieves a classification accuracy of 99.41% for the heart arrhythmia case study, an improvement over previous work. The validity of the proposed BbNN model is thus verified. 2013-06 Thesis NonPeerReviewed application/pdf en http://eprints.utm.my/33763/5/VishnuParamsivamPFKE2013.pdf Paramasivam, Vishnu (2013) An evolvable block-based neural network architecture for embedded hardware. PhD thesis, Universiti Teknologi Malaysia, Faculty of Electrical Engineering. http://dms.library.utm.my:8080/vital/access/manager/Repository/vital:69931?site_name=Restricted Repository
spellingShingle QA75 Electronic computers. Computer science
TK Electrical engineering. Electronics Nuclear engineering
Paramasivam, Vishnu
An evolvable block-based neural network architecture for embedded hardware
title An evolvable block-based neural network architecture for embedded hardware
title_full An evolvable block-based neural network architecture for embedded hardware
title_fullStr An evolvable block-based neural network architecture for embedded hardware
title_full_unstemmed An evolvable block-based neural network architecture for embedded hardware
title_short An evolvable block-based neural network architecture for embedded hardware
title_sort evolvable block based neural network architecture for embedded hardware
topic QA75 Electronic computers. Computer science
TK Electrical engineering. Electronics Nuclear engineering
url http://eprints.utm.my/33763/5/VishnuParamsivamPFKE2013.pdf
work_keys_str_mv AT paramasivamvishnu anevolvableblockbasedneuralnetworkarchitectureforembeddedhardware
AT paramasivamvishnu evolvableblockbasedneuralnetworkarchitectureforembeddedhardware