PARAMETRIC FLATTEN-T SWISH: AN ADAPTIVE NONLINEAR ACTIVATION FUNCTION FOR DEEP LEARNING
The activation function is a key component in deep learning that performs non-linear mappings between inputs and outputs. The Rectified Linear Unit (ReLU) has been the most popular activation function across the deep learning community. However, ReLU contains several shortcomings that can result in inefficient...
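The abstract positions Parametric Flatten-T Swish (PFTS) as an adaptive alternative to ReLU. As a rough illustration only: the authors' earlier Flatten-T Swish paper (listed under Similar Items) defines FTS(x) = x·sigmoid(x) + T for x ≥ 0 and FTS(x) = T otherwise, with a fixed threshold T = −0.20. A minimal PyTorch sketch of a parametric variant with a learnable T might look like the following; the class name `PFTS`, the initial value, and the per-layer parameter sharing are assumptions, not the paper's verified implementation.

```python
import torch
import torch.nn as nn


class PFTS(nn.Module):
    """Sketch of Parametric Flatten-T Swish (PFTS).

    Assumes the Flatten-T Swish form from the authors' 2018 paper:
        FTS(x) = x * sigmoid(x) + T   if x >= 0
        FTS(x) = T                    otherwise
    with the threshold T made learnable (the "parametric" part).
    """

    def __init__(self, init_t: float = -0.20):
        super().__init__()
        # -0.20 is the fixed T of the original Flatten-T Swish; reusing it
        # as the initial value of the learnable T here is an assumption.
        self.t = nn.Parameter(torch.tensor(init_t))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Swish-like curve for non-negative inputs; a flat, trainable
        # threshold T (instead of ReLU's hard zero) for negative inputs.
        return torch.where(x >= 0, x * torch.sigmoid(x) + self.t, self.t)
```

Because T is typically negative, negative inputs map to a small learned constant rather than zero, which speaks to the "negative cancellation" shortcoming of ReLU that the abstract raises. A per-channel T is another plausible parameterisation; the record does not state the paper's exact sharing scheme.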
| Main Authors: | Hock Hung Chieng, Noorhaniza Wahid, Pauline Ong |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | UUM Press, 2020-10-01 |
| Series: | Journal of ICT |
| Online Access: | https://www.scienceopen.com/document?vid=4423025f-cc63-457c-9a28-5fdfb8de0723 |
Similar Items

- Parametric flatten-t swish: an adaptive nonlinear activation function for deep learning
  by: Hock, Hung Chieng, et al.
  Published: (2021)
- PARAMETRIC FLATTEN-T SWISH: AN ADAPTIVE NONLINEAR ACTIVATION FUNCTION FOR DEEP LEARNING
  by: Hock Hung Chieng, et al.
  Published: (2020-11-01)
- Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
  by: Hock, Hung Chieng, et al.
  Published: (2018)
- Flatten-T Swish: a thresholded ReLU-Swish-like activation function for deep learning
  by: Hock Hung Chieng, et al.
  Published: (2018-07-01)
- Breast Cancer Diagnosis Using YOLO-Based Multiscale Parallel CNN and Flattened Threshold Swish
  by: Ahmed Dhahi Mohammed, et al.
  Published: (2024-03-01)