PARAMETRIC FLATTEN-T SWISH: AN ADAPTIVE NONLINEAR ACTIVATION FUNCTION FOR DEEP LEARNING
The activation function is a key component in deep learning that performs non-linear mappings between inputs and outputs. The Rectified Linear Unit (ReLU) has been the most popular activation function across the deep learning community. However, ReLU contains several shortcomings that can result in in...
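The record describes an adaptive (parametric) variant of the Flatten-T Swish activation. As a rough illustration only, the sketch below assumes the Flatten-T Swish form f(x) = x·sigmoid(x) + T for x ≥ 0 and f(x) = T otherwise, and makes the threshold T a learnable parameter; the class name `ParametricFlattenTSwish` and the initial value `t_init` are illustrative choices, not details taken from the article itself.

```python
# Minimal sketch (assumption): Flatten-T Swish with a learnable threshold T,
# i.e. f(x) = x * sigmoid(x) + T for x >= 0, and f(x) = T for x < 0.
import torch
import torch.nn as nn


class ParametricFlattenTSwish(nn.Module):
    def __init__(self, t_init: float = -0.20):  # t_init is an illustrative default
        super().__init__()
        # T is a trainable parameter, updated together with the network weights.
        self.t = nn.Parameter(torch.tensor(t_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Swish-like response for non-negative inputs, flat threshold T elsewhere.
        return torch.where(x >= 0, x * torch.sigmoid(x) + self.t, self.t)


if __name__ == "__main__":
    act = ParametricFlattenTSwish()
    x = torch.linspace(-3.0, 3.0, steps=7)
    print(act(x))
```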
Main Authors: | Hock Hung Chieng, Noorhaniza Wahid, Pauline Ong |
---|---|
Format: | Article |
Language: | English |
Published: | UUM Press, 2020-11-01 |
Series: | Journal of ICT |
Online Access: | https://e-journal.uum.edu.my/index.php/jict/article/view/12398 |
Similar Items
- Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
  by: Hock Hung Chieng, et al.
  Published: (2018-07-01)
- Optimization of Microchannels and Application of Basic Activation Functions of Deep Neural Network for Accuracy Analysis of Microfluidic Parameter Data
  by: Feroz Ahmed, et al.
  Published: (2022-08-01)
- Parametric flatten-t swish: an adaptive nonlinear activation function for deep learning
  by: Hock, Hung Chieng, et al.
  Published: (2021)
- PARAMETRIC FLATTEN-T SWISH: AN ADAPTIVE NONLINEAR ACTIVATION FUNCTION FOR DEEP LEARNING
  by: Hock Chieng, et al.
  Published: (2020-10-01)
- Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
  by: Hock, Hung Chieng, et al.
  Published: (2018)