Salient Dual Activations Aggregation for Ground-Based Cloud Classification in Weather Station Networks


Bibliographic Details
Main Authors: Zhong Zhang, Donghong Li, Shuang Liu
Format: Article
Language: English
Published: IEEE 2018-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/8487026/
Description
Summary: Since the appearance of clouds is highly variable, ground-based cloud classification remains in urgent need of development in weather station networks. Many existing methods resort to convolutional neural networks to improve classification accuracy. However, these methods extract features from only one convolutional layer, making it difficult to obtain complete information about ground-based cloud images. To address this limitation, in this paper we propose a novel method named salient dual activations aggregation (SDA²) that extracts ground-based cloud features from different convolutional layers, simultaneously learning structural, textural, and high-level semantic information for ground-based cloud representation. Specifically, a salient patch selection strategy is first applied to select salient vectors from one shallow convolutional layer. Corresponding weights are then learned from one deep convolutional layer. After obtaining a set of salient vectors with various weights, we aggregate them into a representative vector for each ground-based cloud image by explicitly modeling the relationships among the salient vectors. The proposed SDA² is validated on three ground-based cloud databases, and the experimental results prove its effectiveness. In particular, we obtain promising classification accuracies of 91.24% on the MOC_e database, 91.15% on the IAP_e database, and 88.73% on the CAMS_e database.
ISSN: 2169-3536
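The pipeline described in the summary (select salient vectors from a shallow convolutional layer, weight them using a deep layer, then aggregate into one representative vector) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, the use of L2 activation energy as the saliency criterion, and the softmax weighting are all illustrative assumptions, and the deep-layer saliency map is assumed to be pre-resized to the shallow layer's spatial grid.

```python
import numpy as np

def salient_dual_aggregation(shallow, deep_saliency, k=16):
    """Illustrative sketch of salient selection + weighted aggregation.

    shallow:       (H, W, C) activations from a shallow conv layer
    deep_saliency: (H, W) per-location scores derived from a deep layer,
                   assumed already resized to the shallow layer's grid
    k:             number of salient vectors to keep (hypothetical choice)
    """
    H, W, C = shallow.shape
    vecs = shallow.reshape(H * W, C)           # one vector per spatial location
    energy = np.linalg.norm(vecs, axis=1)      # saliency proxy: L2 activation energy
    idx = np.argsort(energy)[-k:]              # keep the k most salient positions
    salient = vecs[idx]                        # (k, C) salient vectors
    w = deep_saliency.reshape(H * W)[idx]      # weights read off the deep layer
    w = np.exp(w) / np.exp(w).sum()            # softmax-normalized weights
    rep = (w[:, None] * salient).sum(axis=0)   # weighted aggregation into one vector
    return rep / (np.linalg.norm(rep) + 1e-12) # L2-normalized representation

rng = np.random.default_rng(0)
shallow = rng.standard_normal((14, 14, 64))        # stand-in shallow feature map
deep_saliency = rng.standard_normal((14, 14))      # stand-in deep-layer scores
rep = salient_dual_aggregation(shallow, deep_saliency, k=16)
print(rep.shape)  # (64,)
```

The resulting per-image vector could then be fed to any standard classifier; the paper's own aggregation additionally models relationships among the salient vectors, which this sketch omits.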