Next best view planning for object recognition in mobile robotics


Detailed description

Bibliographic details
Main authors: McGreavy, C, Kunze, L, Hawes, N
Format: Conference item
Published: CEUR Workshop Proceedings, 2017
Other bibliographic details
Summary: Recognising objects in everyday human environments is a challenging task for autonomous mobile robots. However, actively planning the views from which an object might be perceived can significantly improve overall task performance. In this paper we design, develop, and evaluate an approach for next best view planning. Our view planning approach is based on online aspect graphs and selects the next best view after an initial object candidate has been identified. The approach has two steps. First, we analyse the visibility of the object candidate from a set of candidate views that are reachable by the robot. Second, we analyse the visibility of object features by projecting the model of the most likely object into the scene. Experimental results on a mobile robot platform show that our approach (i) is effective at finding a next view that leads to recognition of an object in 82.5% of cases, (ii) accounts for visual occlusions in 85% of trials, and (iii) disambiguates between objects that share a similar set of features. Overall, we believe the proposed approach provides a general methodology applicable to a range of tasks beyond object recognition, such as inspection, reconstruction, and task outcome classification.
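The first step of the two-step scheme, scoring reachable candidate views by how much of the object candidate they can see, can be illustrated with a minimal sketch. The sketch below is a toy under stated assumptions, not the authors' online aspect-graph implementation: the function names (ray_is_free, next_best_view), the sphere-obstacle ray march, and the example geometry are all hypothetical.

```python
import numpy as np

# Toy sketch of step one (visibility analysis of candidate views).
# All names and the sphere-obstacle occlusion test are illustrative
# assumptions, not the paper's aspect-graph implementation.

def ray_is_free(origin, target, obstacles, radius=0.2, step=0.05):
    """March from origin towards target; the ray is considered blocked
    if any sample comes within `radius` of an obstacle centre."""
    origin = np.asarray(origin, dtype=float)
    target = np.asarray(target, dtype=float)
    direction = target - origin
    length = np.linalg.norm(direction)
    for t in np.arange(step, length - step, step):
        point = origin + direction * (t / length)
        if any(np.linalg.norm(point - np.asarray(o)) < radius
               for o in obstacles):
            return False
    return True

def next_best_view(candidate_views, object_points, obstacles):
    """Score each reachable candidate view by the fraction of
    object-candidate surface points it can see; return the best view."""
    def visible_fraction(view):
        hits = sum(ray_is_free(view, p, obstacles) for p in object_points)
        return hits / len(object_points)
    return max(candidate_views, key=visible_fraction)

# Usage: an occluder blocks the first viewpoint's line of sight to the
# object, so the planner prefers the second viewpoint.
views = [(2.0, 0.0, 1.0), (0.0, 2.0, 1.0)]
obj = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.0, 0.1, 1.0)]
occluders = [(1.0, 0.0, 1.0)]
print(next_best_view(views, obj, occluders))  # -> (0.0, 2.0, 1.0)
```

The paper's second step, projecting the model of the most likely object into the scene to check feature visibility, would replace the simple point-visibility score here with a feature-level one; the overall select-the-argmax structure stays the same.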