Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation


Bibliographic Details
Main Authors: Guansheng Xing, Ziming Zhu
Format: Article
Language: English
Published: MDPI AG, 2021-10-01
Series: Sensors
Subjects: lane and road marker segmentation, mask cropping, optical flow estimation, semantic video segmentation, temporal consistency
Online Access: https://www.mdpi.com/1424-8220/21/21/7156
_version_ 1797511788560908288
author Guansheng Xing
Ziming Zhu
author_facet Guansheng Xing
Ziming Zhu
author_sort Guansheng Xing
collection DOAJ
description Lane and road marker segmentation is crucial in autonomous driving, and many related methods have been proposed in this field. However, most of them are based on single-frame prediction, which causes unstable results between frames, and the existing multi-frame semantic segmentation methods suffer from error accumulation and are not fast enough. We therefore propose a deep learning algorithm that exploits the continuity of adjacent image frames, combining image sequence processing with an end-to-end trainable multi-input single-output network that jointly segments lanes and road markers. To emphasize the likely target locations in adjacent frames and to refine the segmentation of the current frame, we explicitly consider temporal consistency between frames: we expand the segmentation region of the previous frame, warp the past prediction using the optical flow between adjacent frames, and feed the result to the network as an additional input during both training and inference, thereby strengthening the network’s attention to the target area of the past frame. We segment lanes and road markers on the Baidu Apolloscape lanemark segmentation dataset and the CULane dataset, and present benchmarks for different networks. The experimental results show that this method accelerates video lane and road marker segmentation by 2.5 times and increases accuracy by 1.4%, while reducing temporal consistency by at most 2.2%.
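The description above outlines the core mechanism: the previous frame's segmentation mask is expanded (the mask-cropping prior) and propagated into the current frame via optical flow, then fed to the network as an extra input. A minimal NumPy sketch of those two steps, assuming a dense backward flow field storing a per-pixel (dx, dy) displacement and nearest-neighbour sampling; the function names and dilation radius are illustrative, not taken from the paper:

```python
import numpy as np

def warp_mask_with_flow(prev_mask: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp a boolean mask into the current frame.

    flow[y, x] = (dx, dy) points from a current-frame pixel back to its
    source location in the previous frame; sampling is nearest-neighbour.
    """
    h, w = prev_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return prev_mask[src_y, src_x]

def expand_mask(mask: np.ndarray, radius: int = 2) -> np.ndarray:
    """Dilate a boolean mask by `radius` pixels, so the prior still covers
    the target under small motion-estimation errors."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # OR in a copy of the mask shifted by (dy, dx)
            shifted = np.zeros_like(mask)
            shifted[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
                mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            out |= shifted
    return out

# Toy example: one lane pixel, scene shifted one pixel between frames.
prev = np.zeros((6, 6), dtype=bool)
prev[2, 3] = True
flow = np.zeros((6, 6, 2), dtype=np.float32)
flow[..., 0] = 1.0                      # every current pixel maps 1 px right
prior = expand_mask(warp_mask_with_flow(prev, flow), radius=1)
# `prior` would then accompany the current RGB frame as an extra network input.
```

In practice the flow field would come from an off-the-shelf dense optical-flow estimator, and the warped, expanded mask is supplied alongside the image during both training and inference, as the abstract describes.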
first_indexed 2024-03-10T05:53:00Z
format Article
id doaj.art-80c5503b8e5542fcbbb02eaee20d2f47
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-10T05:53:00Z
publishDate 2021-10-01
publisher MDPI AG
record_format Article
series Sensors
spelling Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
citation Sensors 21(21): 7156 (2021-10-01)
doi 10.3390/s21217156
author_affiliation Guansheng Xing: School of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
author_affiliation Ziming Zhu: School of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China
spellingShingle Guansheng Xing
Ziming Zhu
Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
Sensors
lane and road marker segmentation
mask cropping
optical flow estimation
semantic video segmentation
temporal consistency
title Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_full Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_fullStr Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_full_unstemmed Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_short Lane and Road Marker Semantic Video Segmentation Using Mask Cropping and Optical Flow Estimation
title_sort lane and road marker semantic video segmentation using mask cropping and optical flow estimation
topic lane and road marker segmentation
mask cropping
optical flow estimation
semantic video segmentation
temporal consistency
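The "temporal consistency" topic above refers to frame-to-frame stability of the predicted masks. A hedged sketch of one common way to score it, as the IoU between the flow-warped previous prediction and the current prediction (a generic proxy; the metric actually used in the paper may be defined differently):

```python
import numpy as np

def warp_nn(mask: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Nearest-neighbour backward warp: current pixel (y, x) reads the
    previous frame at (y + dy, x + dx), with flow[y, x] = (dx, dy)."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return mask[sy, sx]

def temporal_consistency(prev_pred, cur_pred, flow):
    """IoU between the flow-warped previous mask and the current mask.
    1.0 means perfectly stable predictions; lower values indicate flicker."""
    warped = warp_nn(prev_pred, flow)
    inter = np.logical_and(warped, cur_pred).sum()
    union = np.logical_or(warped, cur_pred).sum()
    return inter / union if union else 1.0
```

Averaged over all consecutive frame pairs of a clip, this gives a single stability score per video, which is how a "2.2% reduction in temporal consistency" can be traded against a 2.5x speed-up.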
url https://www.mdpi.com/1424-8220/21/21/7156
work_keys_str_mv AT guanshengxing laneandroadmarkersemanticvideosegmentationusingmaskcroppingandopticalflowestimation
AT zimingzhu laneandroadmarkersemanticvideosegmentationusingmaskcroppingandopticalflowestimation