Summary: | Efficient and safe navigation of Unmanned Ground Vehicles (UGVs) in unstructured off-road environments remains a significant challenge due to diverse terrains and highly dynamic scenarios. Traversability analysis holds great promise for addressing this problem by perceiving the surroundings and building traversability maps. In this dissertation, we introduce a novel traversability analysis algorithm based on multi-modal information fusion. The algorithm integrates data from LiDAR and cameras to build a comprehensive understanding of the environment: point clouds are used to extract geometric features such as flat surfaces, slopes, and depressions, while semantic segmentation of camera images identifies terrain types as well as dynamic objects. These two sources of information are fused to generate a real-time traversability map that is insensitive to dynamic objects. To verify its performance and effectiveness, the algorithm was deployed on a UGV, where the MMDeploy toolbox is used to accelerate the inference of the segmentation model and keep the data fusion frequency at around 10 Hz. Extensive experiments conducted on a campus with diverse terrains and dynamic objects demonstrated the algorithm's robustness in such environments.
|