Active Exploration for Obstacle Detection on a Mobile Humanoid Robot

Bibliographic Details
Main Authors: Luca Nobile, Marco Randazzo, Michele Colledanchise, Luca Monorchio, Wilson Villa, Francesco Puja, Lorenzo Natale
Format: Article
Language: English
Published: MDPI AG, 2021-08-01
Series: Actuators
Online Access: https://www.mdpi.com/2076-0825/10/9/205
Description
Summary: Conventional approaches to robot navigation in unstructured environments rely on information acquired from the LiDAR mounted on the robot base to detect and avoid obstacles. This approach fails to detect obstacles that are too small, or that are invisible because they are outside the LiDAR’s field of view. A possible strategy is to integrate information from other sensors. In this paper, we explore the possibility of using depth information from a movable RGB-D camera mounted on the head of the robot and investigate, in particular, active control strategies to effectively scan the environment. Existing works combine RGB-D and 2D LiDAR data passively, by fusing the current point cloud from the RGB-D camera with the occupancy grid computed from the 2D LiDAR data while the robot follows a given path. In contrast, we propose an optimization strategy that actively changes the position of the robot’s head, where the camera is mounted, at each point of the given navigation path; thus, we can fully exploit the RGB-D camera to detect, and hence avoid, obstacles undetected by the 2D LiDAR, such as overhanging obstacles or obstacles in blind spots. We validate our approach both in simulation, to gather statistically significant data, and in real environments, to show the applicability of our method to real robots. The platform used is the humanoid robot R1.
ISSN: 2076-0825
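The core idea the abstract describes, choosing a head orientation at each waypoint so the RGB-D camera covers regions the base-mounted 2D LiDAR cannot see, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' implementation: the function names (coverage_gain, plan_head_angles), the camera field of view, the pan limits, and the greedy per-waypoint scoring are all assumed simplifications of the optimization the paper proposes.

```python
import numpy as np

def coverage_gain(pan, robot_xy, robot_yaw, unseen_cells,
                  fov=np.radians(58), max_range=4.0):
    """Count still-unobserved grid cells (world x, y coordinates) that a
    candidate head pan angle would bring into the camera's horizontal FOV."""
    if unseen_cells.size == 0:
        return 0
    rel = unseen_cells - robot_xy                        # vectors robot -> cells
    dist = np.linalg.norm(rel, axis=1)
    bearing = np.arctan2(rel[:, 1], rel[:, 0]) - (robot_yaw + pan)
    bearing = (bearing + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi]
    return int(np.sum((dist < max_range) & (np.abs(bearing) < fov / 2.0)))

def plan_head_angles(path, unseen_cells, pan_limits=(-1.2, 1.2), n_candidates=25):
    """For each (x, y, yaw) waypoint on the navigation path, greedily pick
    the pan angle that views the most cells the 2D LiDAR has not covered."""
    candidates = np.linspace(pan_limits[0], pan_limits[1], n_candidates)
    plan = []
    for x, y, yaw in path:
        gains = [coverage_gain(p, np.array([x, y]), yaw, unseen_cells)
                 for p in candidates]
        plan.append(float(candidates[int(np.argmax(gains))]))
    return plan

# Toy usage: a straight path with unobserved cells off to the robot's left;
# the planner should steer the head toward positive pan angles.
path = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
unseen = np.array([[1.5, 1.0], [2.0, 1.2], [2.5, 0.8]])
print(plan_head_angles(path, unseen))
```

A real system would update the set of unobserved cells as depth frames arrive and would weight cells by their collision risk along the planned path; the greedy selection above only conveys the flavor of actively steering the camera rather than fusing sensor data passively.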