DeepQGHO: Quantized Greedy Hyperparameter Optimization in Deep Neural Networks for On-the-Fly Learning
On-the-fly learning is unavoidable for applications that demand instantaneous deep neural network (DNN) training, or where transferring data to a central system for training is costly. Hyperparameter optimization plays a significant role in the performance and reliability of deep learning. Many hyp...
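The abstract above is truncated, so the record does not specify the authors' algorithm. As a rough illustration of the general idea named in the title — a greedy search over a quantized (discretized) hyperparameter space — here is a minimal sketch; the grid values and the toy objective are assumptions for demonstration only, not taken from the paper.

```python
# Illustrative sketch only: NOT the authors' DeepQGHO algorithm, just a
# generic coordinate-wise greedy search over quantized hyperparameter grids.

def greedy_quantized_search(grids, objective):
    """Greedily tune one hyperparameter at a time over discrete grids.

    grids: dict mapping hyperparameter name -> list of quantized candidate values
    objective: callable(config) -> score to maximize (e.g. validation accuracy)
    """
    # Start from the first quantized value of each hyperparameter.
    config = {name: values[0] for name, values in grids.items()}
    best = objective(config)
    # One greedy pass: for each hyperparameter, keep the candidate value
    # that maximizes the objective while the others are held fixed.
    for name, values in grids.items():
        for v in values:
            trial = dict(config, **{name: v})
            score = objective(trial)
            if score > best:
                best, config = score, trial
    return config, best

# Toy stand-in for validation accuracy (peaks at lr=0.01, batch_size=32).
def toy_objective(cfg):
    return -abs(cfg["lr"] - 0.01) - abs(cfg["batch_size"] - 32) / 100

grids = {"lr": [0.1, 0.01, 0.001], "batch_size": [16, 32, 64]}
best_cfg, best_score = greedy_quantized_search(grids, toy_objective)
# best_cfg -> {"lr": 0.01, "batch_size": 32}
```

Quantizing each continuous range to a short list of candidates is what keeps the search cheap enough for on-device, on-the-fly settings: the greedy pass evaluates only the sum of the grid sizes, rather than their product as an exhaustive grid search would.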
Main Authors: Anjir Ahmed Chowdhury, Md Abir Hossen, Md Ali Azam, Md Hafizur Rahman
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9676610/
Similar Items
- The Effects of Weight Quantization on Online Federated Learning for the IoT: A Case Study
  by: Nil Llisterri Gimenez, et al.
  Published: (2024-01-01)
- Evaluation of a Machine Learning Algorithm to Classify Ultrasonic Transducer Misalignment and Deployment Using TinyML
  by: Des Brennan, et al.
  Published: (2024-01-01)
- A TinyML Deep Learning Approach for Indoor Tracking of Assets
  by: Diego Avellaneda, et al.
  Published: (2023-01-01)
- A leap into the future: Towards an augmented reality learning environment in ski-jumping
  by: Lukas Schulthess, et al.
  Published: (2024-02-01)
- DDD TinyML: A TinyML-Based Driver Drowsiness Detection Model Using Deep Learning
  by: Norah N. Alajlan, et al.
  Published: (2023-06-01)