A Deep Dive of Autoencoder Models on Low-Contrast Aquatic Images

Public aquariums and similar institutions often use video as a method to monitor the behavior, health, and status of aquatic organisms in their environments. This video footage takes up a sizeable amount of storage space, motivating the use of autoencoders to reduce its file size for efficient storage. The autoencoder neural network is an emerging technique that extracts a latent space from an input source to reduce the image size for storage, and then reconstructs the source within an acceptable loss range for use. To meet an aquarium’s practical needs, the autoencoder must have easily maintainable code, consume little power, be easy to adopt, and not require substantial memory or processing power. Conventional configurations of autoencoders often deliver results that go beyond an aquarium’s needs at the cost of being too complex for its infrastructure to handle, and few take low-contrast sources into consideration. Thus, in this instance, “keeping it simple” is the ideal approach to the autoencoder’s model design. This paper proposes a practical approach tailored to an aquarium’s specific needs through the configuration of autoencoder parameters. It first explores the differences between two of the most widely applied autoencoder approaches, the Multilayer Perceptron (MLP) and Convolutional Neural Networks (CNN), to identify the more appropriate one. It finds that while both approaches (with proper configuration and image preprocessing) can reduce both the dimensionality and the visual noise of low-contrast images gathered from aquatic video footage, the CNN approach is more suitable for an aquarium’s architecture. As an unexpected finding of the experiments conducted, the paper also discovered that by manipulating the formula for the MLP approach, the autoencoder could generate a denoised differential image containing sharper and more desirable visual information for an aquarium’s operation. Lastly, the paper found that proper image preprocessing prior to applying the autoencoder led to better model convergence and prediction results, as demonstrated both visually and numerically in the experiments. The paper concludes that by combining the MLP approach’s denoising effect, the CNN approach’s ability to manage memory consumption, and proper image preprocessing, an aquarium’s specific practical needs can be adeptly fulfilled.

Bibliographic Details
Main Authors: Rich C. Lee, Ing-Yi Chen
Author Affiliation: Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 23741, Taiwan
Format: Article
Language: English
Published: MDPI AG, 2021-07-01
Series: Sensors, Vol. 21, Iss. 15, Article 4966
ISSN: 1424-8220
DOI: 10.3390/s21154966
Subjects: autoencoder; deep learning; computer vision; image recognition
Online Access:https://www.mdpi.com/1424-8220/21/15/4966
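
The abstract compares MLP- and CNN-based autoencoders for compressing and denoising low-contrast aquarium frames. As a rough illustration of what those two approaches look like in practice, the sketch below builds a small fully connected and a small convolutional autoencoder in Keras; the 128×128 grayscale input, the 64-dimensional latent space, and all layer sizes are illustrative assumptions, not the configurations reported in the article.

```python
# Minimal sketch (not the authors' exact architectures): a fully connected (MLP)
# autoencoder and a convolutional (CNN) autoencoder of the kind the paper compares.
# Frame size, latent size, and layer widths are assumed for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_H, IMG_W = 128, 128          # assumed frame size after preprocessing
LATENT_DIM = 64                  # assumed size of the compressed latent space

def build_mlp_autoencoder():
    """Flattens each frame and compresses it with dense layers."""
    inputs = layers.Input(shape=(IMG_H, IMG_W, 1))
    x = layers.Flatten()(inputs)
    x = layers.Dense(512, activation="relu")(x)
    latent = layers.Dense(LATENT_DIM, activation="relu")(x)   # stored representation
    x = layers.Dense(512, activation="relu")(latent)
    x = layers.Dense(IMG_H * IMG_W, activation="sigmoid")(x)
    outputs = layers.Reshape((IMG_H, IMG_W, 1))(x)
    return models.Model(inputs, outputs, name="mlp_autoencoder")

def build_cnn_autoencoder():
    """Uses strided convolutions to compress, and transposed convolutions to reconstruct."""
    inputs = layers.Input(shape=(IMG_H, IMG_W, 1))
    x = layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)    # compact latent feature map
    x = layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    return models.Model(inputs, outputs, name="cnn_autoencoder")

for model in (build_mlp_autoencoder(), build_cnn_autoencoder()):
    model.compile(optimizer="adam", loss="mse")   # pixel-wise reconstruction loss
    model.summary()
```

In a sketch like this, the convolutional encoder's weight-sharing keeps its parameter count far below the MLP's dense layers, which is the usual reason a CNN autoencoder is friendlier to a limited memory budget, consistent with the abstract's conclusion that the CNN approach suits an aquarium's architecture better.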
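The abstract also credits image preprocessing and an MLP-derived "differential image" for sharper results, without specifying the exact operations in this record. A minimal sketch, assuming OpenCV-style CLAHE contrast enhancement as one common choice for low-contrast frames and taking the differential image simply as the absolute difference between a frame and its reconstruction, might look like this:

```python
# Illustrative preprocessing and differential-image step. The record does not
# spell out the paper's preprocessing pipeline; CLAHE (adaptive histogram
# equalization) is shown only as a typical choice for low-contrast frames, and
# the absolute input-vs-reconstruction difference is one simple way to obtain
# a "differential" image, not necessarily the formula used by the authors.
import cv2
import numpy as np

def preprocess(frame_gray: np.ndarray) -> np.ndarray:
    """Boost local contrast of a uint8 grayscale frame and scale it to [0, 1]."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(frame_gray)             # uint8, same shape as input
    resized = cv2.resize(equalized, (128, 128))     # match the assumed model input
    return resized.astype(np.float32) / 255.0

def differential_image(model, frame: np.ndarray) -> np.ndarray:
    """Absolute difference between a preprocessed frame and its reconstruction."""
    x = frame[np.newaxis, ..., np.newaxis]          # add batch and channel axes: (1, H, W, 1)
    reconstruction = model.predict(x, verbose=0)[0, ..., 0]
    return np.abs(frame - reconstruction)
```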