Parametric flatten-t swish: an adaptive nonlinear activation function for deep learning

The activation function is a key component in deep learning that performs non-linear mappings between inputs and outputs. The Rectified Linear Unit (ReLU) has been the most popular activation function across the deep learning community. However, ReLU contains several shortcomings that can result in ine...
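For orientation, a minimal sketch of what a parametric Flatten-T Swish layer might look like is given below. It assumes the Flatten-T Swish form from the authors' earlier work, x·sigmoid(x) + T for x ≥ 0 and T otherwise, with the threshold T made a trainable parameter; the initial value -0.20 and the PyTorch framing are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn


class ParametricFlattenTSwish(nn.Module):
    """Sketch of a parametric Flatten-T Swish (PFTS) activation.

    Assumes f(x) = x * sigmoid(x) + T for x >= 0, and T for x < 0,
    with T learned during training (initial value is an assumption).
    """

    def __init__(self, t_init: float = -0.20):
        super().__init__()
        # T is a trainable scalar shared across all units of the layer.
        self.t = nn.Parameter(torch.tensor(t_init))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Swish-like branch for non-negative inputs, flat branch at T otherwise;
        # torch.where broadcasts the 0-dim parameter over the input shape.
        return torch.where(x >= 0, x * torch.sigmoid(x) + self.t, self.t)


if __name__ == "__main__":
    act = ParametricFlattenTSwish()
    print(act(torch.linspace(-3, 3, 7)))
```

Because T participates in backpropagation, each layer can adapt the negative-region offset of the activation during training rather than fixing it by hand.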


Bibliographic Details
Main Authors: Hock, Hung Chieng, Wahid, Noorhaniza, Ong, Pauline
Format: Article
Language: English
Published: Universiti Utara Malaysia 2021
Online Access: https://repo.uum.edu.my/id/eprint/28125/1/document%20%284%29.pdf
https://doi.org/10.32890/jict.20.1.2021.9267