A Monocular Pointing Pose Estimator for Gestural Instruction of a Mobile Robot

Bibliographic Details
Main Authors: Andrea Scheidig, Christian Martin, Steffen Mueller, Horst-Michael Gross, Jan Richarz
Format: Article
Language: English
Published: SAGE Publishing 2008-11-01
Series: International Journal of Advanced Robotic Systems
Online Access: http://www.intechopen.com/articles/show/title/a_monocular_pointing_pose_estimator_for_gestural_instruction_of_a_mobile_robot
Description
Summary: We present an important aspect of our human-robot communication interface, which is being developed in the context of our long-term research framework PERSES dealing with highly interactive mobile companion robots. Based on a multi-modal people detection and tracking system, we present a hierarchical neural architecture that estimates a target point on the floor indicated by a pointing pose, thus enabling a user to navigate a mobile robot to a specific target position in his local surroundings by means of pointing. In this context, we were especially interested in determining whether it is possible to accomplish such a target point estimator using only monocular images from low-cost cameras. The estimator has been implemented and experimentally investigated on our mobile robotic assistant HOROS. Although only monocular image data of relatively poor quality were utilized, the estimator achieves good estimation performance, with an accuracy better than that of a human viewer on the same data. The achieved recognition results demonstrate that it is in fact possible to realize user-independent pointing direction estimation using monocular images only, but further efforts are necessary to improve the robustness of this approach for everyday application.
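To make the underlying task concrete, the following is a minimal geometric sketch, not the paper's hierarchical neural architecture: assuming the 3D positions of the pointing person's head and hand are somehow available (in the paper they must instead be inferred from monocular images), the indicated target is the intersection of the head-to-hand ray with the floor plane. The function name and the robot-centric coordinate frame are illustrative assumptions.

```python
# Hypothetical geometric baseline for pointing-target estimation.
# Assumes 3D head and hand positions in a robot-centric frame (metres),
# z axis pointing up, floor at z = 0. This is NOT the authors' method,
# which estimates the target from monocular images with a neural network.
import numpy as np

def pointing_target_on_floor(head: np.ndarray, hand: np.ndarray):
    """Intersect the ray from `head` through `hand` with the plane z = 0.

    Returns the (x, y) floor coordinates of the indicated target point,
    or None if the pointing ray does not hit the floor.
    """
    direction = hand - head
    if direction[2] >= 0.0:       # ray points level or upwards: no floor hit
        return None
    t = -head[2] / direction[2]   # parameter where head + t * direction has z == 0
    if t <= 0.0:
        return None
    target = head + t * direction
    return float(target[0]), float(target[1])

# Example: head 1.6 m above the floor, hand 0.4 m in front and 0.4 m lower.
head = np.array([0.0, 0.0, 1.6])
hand = np.array([0.4, 0.1, 1.2])
print(pointing_target_on_floor(head, hand))  # floor point a few metres ahead
```

The sketch highlights why monocular estimation is hard: the 3D head and hand positions assumed here are exactly what a single low-cost camera cannot measure directly, which motivates the learned estimator investigated in the article.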
ISSN: 1729-8806, 1729-8814