Segmentation-based multi-pixel cloud optical thickness retrieval using a convolutional neural network
<p>We introduce a new machine learning approach to retrieve cloud optical thickness (COT) fields from visible passive imagery. In contrast to the heritage independent pixel approximation (IPA), our convolutional neural network (CNN) retrieval takes the spatial context of a pixel into account and thereby reduces artifacts arising from net horizontal photon transfer, which is commonly known as independent pixel (IP) bias. The CNN maps radiance fields acquired by imaging radiometers at a single wavelength channel to COT fields. It is trained with a low-complexity and therefore fast U-Net architecture with which the mapping is implemented as a segmentation problem with 36 COT classes. As a training data set, we use a single radiance channel (600 nm) generated from a 3D radiative transfer model using large eddy simulations (LESs) from the Sulu Sea. We study the CNN model under various conditions based on different permutations of cloud aspect ratio and morphology, and we use appropriate cloud morphology metrics to measure the performance of the retrievals. Additionally, we test the general applicability of the CNN on a new geographic location with LES data from the equatorial Atlantic. Results indicate that the CNN is broadly successful in overcoming the IP bias and outperforms IPA retrievals across all morphologies. Over the Atlantic, the CNN tends to overestimate the COT but shows promise in regions with high cloud fractions and high optical thicknesses, despite being outside the general training envelope. This work is intended to be used as a baseline for future implementations of the CNN that can enable generalization to different regions, scales, wavelengths, and sun-sensor geometries with limited training.</p>
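The abstract describes posing the COT retrieval as a segmentation problem with 36 discrete COT classes, so that a U-Net can emit a per-pixel class map. The sketch below illustrates how such a class encoding might look; the bin edges, the 0–100 COT range, and the logarithmic spacing are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical encoding of continuous COT into 36 classes for a
# segmentation-style retrieval. The paper does not specify these bin
# edges; class 0 here covers near-clear pixels (COT < 0.1) and
# classes 1..35 span COT on an assumed logarithmic grid up to 100.
N_CLASSES = 36
EDGES = np.concatenate(([0.0], np.logspace(np.log10(0.1), np.log10(100.0), N_CLASSES)))

def cot_to_class(cot):
    """Map a COT value or field to integer class labels 0..35."""
    cot = np.asarray(cot, dtype=float)
    # np.digitize returns the number of interior edges below each value;
    # clip so values beyond the last edge fall into the top class.
    return np.clip(np.digitize(cot, EDGES[1:]), 0, N_CLASSES - 1)

def class_to_cot(labels):
    """Invert class labels to representative COT values (bin midpoints)."""
    centers = 0.5 * (EDGES[:-1] + EDGES[1:])
    return centers[np.asarray(labels)]
```

Discretizing the target this way is what turns a regression problem into a segmentation one: the network can then be trained with a standard per-pixel cross-entropy loss, at the cost of quantization error set by the bin widths.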
Main Authors: | V. Nataraja, S. Schmidt, H. Chen, T. Yamaguchi, J. Kazil, G. Feingold, K. Wolf, H. Iwabuchi |
---|---|
Format: | Article |
Language: | English |
Published: | Copernicus Publications, 2022-09-01 |
Series: | Atmospheric Measurement Techniques |
Online Access: | https://amt.copernicus.org/articles/15/5181/2022/amt-15-5181-2022.pdf |
author | V. Nataraja, S. Schmidt, H. Chen, T. Yamaguchi, J. Kazil, G. Feingold, K. Wolf, H. Iwabuchi |
author_sort | V. Nataraja |
collection | DOAJ |
description | <p>We introduce a new machine learning approach to retrieve cloud optical thickness (COT) fields from visible passive imagery. In contrast to the heritage independent pixel approximation (IPA), our convolutional neural network (CNN) retrieval takes the spatial context of a pixel into account and thereby reduces artifacts arising from net horizontal photon transfer, which is commonly known as independent pixel (IP) bias. The CNN maps radiance fields acquired by imaging radiometers at a single wavelength channel to COT fields. It is trained with a low-complexity and therefore fast U-Net architecture with which the mapping is implemented as a segmentation problem with 36 COT classes. As a training data set, we use a single radiance channel (600 nm) generated from a 3D radiative transfer model using large eddy simulations (LESs) from the Sulu Sea. We study the CNN model under various conditions based on different permutations of cloud aspect ratio and morphology, and we use appropriate cloud morphology metrics to measure the performance of the retrievals. Additionally, we test the general applicability of the CNN on a new geographic location with LES data from the equatorial Atlantic. Results indicate that the CNN is broadly successful in overcoming the IP bias and outperforms IPA retrievals across all morphologies. Over the Atlantic, the CNN tends to overestimate the COT but shows promise in regions with high cloud fractions and high optical thicknesses, despite being outside the general training envelope. This work is intended to be used as a baseline for future implementations of the CNN that can enable generalization to different regions, scales, wavelengths, and sun-sensor geometries with limited training.</p> |
first_indexed | 2024-04-12T18:57:50Z |
format | Article |
id | doaj.art-851777fe33f74341baa053dab3ec14e0 |
institution | Directory Open Access Journal |
issn | 1867-1381, 1867-8548 |
language | English |
last_indexed | 2024-04-12T18:57:50Z |
publishDate | 2022-09-01 |
publisher | Copernicus Publications |
record_format | Article |
series | Atmospheric Measurement Techniques |
spelling | doaj.art-851777fe33f74341baa053dab3ec14e0; indexed 2022-12-22T03:20:15Z; eng; Copernicus Publications; Atmospheric Measurement Techniques; ISSN 1867-1381, 1867-8548; published 2022-09-01; vol. 15, pp. 5181–5205; doi:10.5194/amt-15-5181-2022; Segmentation-based multi-pixel cloud optical thickness retrieval using a convolutional neural network. Authors (affiliation indices): V. Nataraja (0), S. Schmidt (1, 2), H. Chen (3, 4), T. Yamaguchi (5, 6), J. Kazil (7, 8), G. Feingold (9), K. Wolf (10), H. Iwabuchi (11). Affiliations 0, 1, 3, 10: Laboratory for Atmospheric and Space Physics (LASP), University of Colorado, Boulder, CO 80303, USA; 2, 4: Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, CO 80303, USA; 5, 7, 9: Cooperative Institute for Research in Environmental Sciences (CIRES), University of Colorado Boulder, CO 80309, USA; 6, 8: National Oceanic and Atmospheric Administration (NOAA), Chemical Sciences Laboratory, Boulder, CO 80305, USA; 11: Center for Atmospheric and Oceanic Studies, Graduate School of Science, Tohoku University, Sendai, Miyagi 980-8578, Japan. https://amt.copernicus.org/articles/15/5181/2022/amt-15-5181-2022.pdf |
title | Segmentation-based multi-pixel cloud optical thickness retrieval using a convolutional neural network |
title_sort | segmentation based multi pixel cloud optical thickness retrieval using a convolutional neural network |
url | https://amt.copernicus.org/articles/15/5181/2022/amt-15-5181-2022.pdf |