MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior
A textured urban 3D mesh is an important part of 3D real scene technology. Semantically segmenting an urban 3D mesh is a key task in the photogrammetry and remote sensing field. However, due to the irregular structure of a 3D mesh and redundant texture information, it is a challenging issue to obtain high and robust semantic segmentation results for an urban 3D mesh...
Main Authors: | Guangyun Zhang, Rongting Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-11-01 |
Series: | Remote Sensing |
Subjects: | 3D real scene; urban 3D mesh; semantic segmentation; sparse prior; low intrinsic dimension; convolutional neural network |
Online Access: | https://www.mdpi.com/2072-4292/15/22/5324 |
_version_ | 1797457925726273536 |
author | Guangyun Zhang; Rongting Zhang |
author_facet | Guangyun Zhang; Rongting Zhang |
author_sort | Guangyun Zhang |
collection | DOAJ |
description | A textured urban 3D mesh is an important part of 3D real scene technology. Semantically segmenting an urban 3D mesh is a key task in the photogrammetry and remote sensing field. However, due to the irregular structure of a 3D mesh and redundant texture information, it is a challenging issue to obtain high and robust semantic segmentation results for an urban 3D mesh. To address this issue, we propose a semantic urban 3D mesh segmentation network (MeshNet) with sparse prior (SP), named MeshNet-SP. MeshNet-SP consists of a differentiable sparse coding (DSC) subnetwork and a semantic feature extraction (SFE) subnetwork. The DSC subnetwork learns low-intrinsic-dimensional features from raw texture information, which increases the effectiveness and robustness of semantic urban 3D mesh segmentation. The SFE subnetwork produces high-level semantic features from the combination of features containing the geometric features of a mesh and the low-intrinsic-dimensional features of texture information. The proposed method is evaluated on the SUM dataset. The results of ablation experiments demonstrate that the low-intrinsic-dimensional feature is the key to achieving high and robust semantic segmentation results. The comparison results show that the proposed method can achieve competitive accuracies, and the maximum increase can reach 34.5%, 35.4%, and 31.8% in mR, mF1, and mIoU, respectively. |
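The abstract describes a differentiable sparse coding (DSC) subnetwork that maps raw texture information to low-intrinsic-dimensional sparse features. The record does not give the formulation, but a standard way to realize differentiable sparse coding is to unroll the ISTA algorithm (soft-thresholded gradient steps on an l1-regularized reconstruction objective), whose iterations are differentiable and can be embedded in a network. The sketch below is an illustrative assumption, not the paper's actual DSC subnetwork; all function names and the dictionary `D` are hypothetical.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: shrinks values toward zero,
    # zeroing entries with magnitude below lam (this induces sparsity).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista_sparse_code(X, D, lam=0.1, n_iter=50):
    """Sparse-code rows of X over dictionary D via ISTA.

    Minimizes 0.5 * ||X - Z @ D||_F^2 + lam * ||Z||_1 over Z.

    X : (n_samples, n_features)  raw per-face texture features
    D : (n_atoms, n_features)    dictionary (n_atoms < n_features
                                 gives a low-dimensional code)
    Returns Z : (n_samples, n_atoms) sparse codes.
    """
    # Step size 1/L, where L is the Lipschitz constant of the gradient
    # (squared spectral norm of D).
    L = np.linalg.norm(D, 2) ** 2
    Z = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_iter):
        grad = (Z @ D - X) @ D.T          # gradient of the quadratic term
        Z = soft_threshold(Z - grad / L, lam / L)
    return Z
```

Because every step is composed of matrix products and a soft threshold, the whole loop is differentiable with respect to `D`, which is what allows a "learned" dictionary to be trained end to end with the downstream segmentation loss (LISTA-style unrolling). The number of iterations and `lam` here are arbitrary illustration values.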
first_indexed | 2024-03-09T16:29:46Z |
format | Article |
id | doaj.art-c98a4a77e28c41ddae4b9c5cba8167bc |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
last_indexed | 2024-03-09T16:29:46Z |
publishDate | 2023-11-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | doaj.art-c98a4a77e28c41ddae4b9c5cba8167bc | 2023-11-24T15:04:24Z | eng | MDPI AG | Remote Sensing | 2072-4292 | 2023-11-01 | Vol. 15, Iss. 22, Art. 5324 | 10.3390/rs15225324 | MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior | Guangyun Zhang; Rongting Zhang (School of Geomatics Science and Technology, Nanjing Tech University, Nanjing 211800, China) | https://www.mdpi.com/2072-4292/15/22/5324 |
spellingShingle | Guangyun Zhang; Rongting Zhang; MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior; Remote Sensing; 3D real scene; urban 3D mesh; semantic segmentation; sparse prior; low intrinsic dimension; convolutional neural network |
title | MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior |
title_full | MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior |
title_fullStr | MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior |
title_full_unstemmed | MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior |
title_short | MeshNet-SP: A Semantic Urban 3D Mesh Segmentation Network with Sparse Prior |
title_sort | meshnet sp a semantic urban 3d mesh segmentation network with sparse prior |
topic | 3D real scene; urban 3D mesh; semantic segmentation; sparse prior; low intrinsic dimension; convolutional neural network |
url | https://www.mdpi.com/2072-4292/15/22/5324 |
work_keys_str_mv | AT guangyunzhang meshnetspasemanticurban3dmeshsegmentationnetworkwithsparseprior AT rongtingzhang meshnetspasemanticurban3dmeshsegmentationnetworkwithsparseprior |