Recreating Lunar Environments by Fusion of Multimodal Data Using Machine Learning Models


Bibliographic Details
Main Authors: Ana C. Castillo, Jesus A. Marroquin-Escobedo, Santiago Gonzalez-Irigoyen, Marlene Martinez-Santoyo, Rafaela Villalpando-Hernandez, Cesar Vargas-Rosales
Format: Article
Language: English
Published: MDPI AG 2022-11-01
Series: Engineering Proceedings
Online Access: https://www.mdpi.com/2673-4591/27/1/54
Description
Summary: The latest satellite infrastructure for data processing, transmission, and reception can be improved by upgrading the tools used to handle the very large volumes of data produced by the many different sensors carried on space missions. To work toward a better data-processing technique, this paper examines multimodal data fusion using machine learning algorithms and discusses how machine learning models can recreate environments from heterogeneous, multimodal data sets. For models based on Convolutional Neural Networks (CNNs) in particular, the main difficulty is the large number of training samples the network requires in order to avoid overfitting and underfitting.
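The multimodal fusion idea summarized above can be sketched, purely for illustration, as a simple late-fusion pipeline: features extracted from an image-like modality (here a single hand-rolled 2D convolution standing in for a CNN layer) are concatenated with features from a second sensor modality before any shared prediction stage. All names, shapes, and the random "filter" are hypothetical and not taken from the paper.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' 2D convolution (illustrative stand-in for one CNN layer)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def fuse_modalities(image, spectrum, kernel):
    """Late fusion: flatten convolved image features, then concatenate the other modality."""
    img_feat = conv2d_valid(image, kernel).ravel()
    return np.concatenate([img_feat, spectrum])

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))   # hypothetical camera patch
spectrum = rng.standard_normal(5)     # hypothetical spectrometer reading
kernel = rng.standard_normal((3, 3))  # one convolutional filter (random here, learned in practice)

fused = fuse_modalities(image, spectrum, kernel)
print(fused.shape)  # (41,): 6*6 convolution outputs plus 5 spectral features
```

In a real CNN-based fusion model the filter weights and any downstream layers would be learned, which is exactly where the paper's concern about needing many training samples to avoid over- and underfitting arises.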
ISSN: 2673-4591