Improving the Performance of Convolutional Neural Networks by Fusing Low-Level Features With Different Scales in the Preceding Stage

Bibliographic Details
Main Authors: Xiaohong Yu, Wei Long, Yanyan Li, Xiaoqiu Shi, Lin Gao
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/9420729/
Description
Summary: The width of convolutional neural networks (CNNs) is crucial for improving performance. Many wide CNNs use a convolutional layer to fuse multiscale features or fuse the preceding features into subsequent features. However, these CNNs rarely use blocks, which consist of a series of successive convolutional layers, to fuse multiscale features. In this paper, we propose an approach for improving performance by fusing the low-level features extracted from different blocks. We utilize five different convolutions, namely $3\times 3$, $5\times 5$, $7\times 7$, $5\times 3 \cup 3\times 5$ and $7\times 3 \cup 3\times 7$, to generate five low-level features, and we design two fusion strategies: low-level feature fusion (L-Fusion) and high-level feature fusion (H-Fusion). Experimental results show that L-Fusion is more helpful for improving the performance of CNNs, and that the $5\times 5$ convolution is more suitable for multiscale feature fusion. We summarize this conclusion as a strategy that fuses multiscale features in the preceding stage of CNNs. Furthermore, based on this strategy, we propose a new architecture that perceives the input of CNNs with two self-governed blocks. Finally, we modify five off-the-shelf networks, DenseNet-BC (depth = 40), ALL-CNN-C (depth = 9), Darknet 19 (depth = 19), ResNet 18 (depth = 18) and ResNet 50 (depth = 50), by applying the proposed architecture to verify the conclusion, and the updated networks deliver more competitive results.
ISSN: 2169-3536
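
The summary above describes the approach only at a high level. The following is a minimal sketch, assuming PyTorch, of what a preceding-stage L-Fusion stem might look like: two hypothetical "self-governed" blocks (stacks of successive convolutions, one 3x3 and one 5x5) process the input in parallel, and their low-level features are concatenated along the channel axis before the main network. All module names, channel counts, depths, and the use of concatenation as the fusion operation are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SelfGovernedBlock(nn.Module):
    """A small stack of successive convolutions sharing one kernel size."""
    def __init__(self, in_ch, out_ch, kernel_size, depth=2):
        super().__init__()
        pad = kernel_size // 2  # keep spatial size unchanged
        layers = []
        ch = in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, out_ch, kernel_size, padding=pad),
                       nn.BatchNorm2d(out_ch),
                       nn.ReLU(inplace=True)]
            ch = out_ch
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)

class LFusionStem(nn.Module):
    """Fuse low-level features from two parallel blocks (L-Fusion sketch)."""
    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.branch3 = SelfGovernedBlock(in_ch, out_ch, kernel_size=3)
        self.branch5 = SelfGovernedBlock(in_ch, out_ch, kernel_size=5)

    def forward(self, x):
        # Concatenate multiscale low-level features along the channel axis.
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)

if __name__ == "__main__":
    stem = LFusionStem()
    y = stem(torch.randn(1, 3, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])

Under these assumptions, the fused 64-channel feature map would then be fed to an off-the-shelf backbone (e.g., ResNet 18 with its original stem replaced), which is the kind of modification the abstract reports for the five evaluated networks.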