Video-rate recognition and localization for wearable cameras
Using simultaneous localization and mapping to determine the 3D surroundings and pose of a wearable or hand-held camera provides the geometrical foundation for several capabilities of value to an autonomous wearable vision system. The one explored here is the ability to incorporate recognized objects into the map of the surroundings and refer to them. Established methods for feature cluster recognition are used to identify and localize known planar objects, and their geometry is incorporated into the map of the surrounds using a minimalist representation. Continued measurement of these mapped objects improves both the accuracy of estimated maps and the robustness of the tracking system. In the context of wearable (or hand-held) vision, the system's ability to enhance generated maps with known objects increases the map's value to human operators, and also enables meaningful automatic annotation of the user's surroundings.
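The description above outlines the approach without implementation detail. Purely as an illustration, the sketch below shows one way a recognized planar object of known metric size might be folded into a SLAM map with a minimal parameterization (one corner plus two in-plane axes) and then re-measured by reprojecting its corners. The class and function names (`PlanarObject`, `add_object_to_map`, `corner_reprojection_residual`) and this particular parameterization are assumptions made for the sketch, not the authors' published method.

```python
# Illustrative sketch only (not the BMVC 2007 implementation): folding a
# recognized planar object of known metric size into a SLAM map, and
# re-measuring it via corner reprojection.
import numpy as np


class PlanarObject:
    """Hypothetical minimal map entry: one corner plus two in-plane axes."""

    def __init__(self, name, origin, x_axis, y_axis, width, height):
        self.name = name
        self.origin = origin    # world-frame position of one corner
        self.x_axis = x_axis    # unit vector along the object's width
        self.y_axis = y_axis    # unit vector along the object's height
        self.width = width      # metric extent, known from the stored template
        self.height = height

    def corners(self):
        """Recover the four world-frame corners from the minimal parameters."""
        o = self.origin
        u = self.x_axis * self.width
        v = self.y_axis * self.height
        return np.stack([o, o + u, o + u + v, o + v])


def add_object_to_map(slam_map, name, T_wc, T_co, width, height):
    """Insert a recognized object, given the current camera pose T_wc
    (world <- camera) and the recognizer's estimate of the object pose in
    the camera frame T_co (camera <- object), e.g. from a homography."""
    T_wo = T_wc @ T_co
    R, t = T_wo[:3, :3], T_wo[:3, 3]
    obj = PlanarObject(name, t, R[:, 0], R[:, 1], width, height)
    slam_map.append(obj)
    return obj


def corner_reprojection_residual(obj, T_cw, K, measured_px):
    """Continued measurement: reproject the mapped object's corners with the
    current camera estimate and compare against fresh image detections."""
    pts_c = (T_cw[:3, :3] @ obj.corners().T).T + T_cw[:3, 3]
    proj = (K @ pts_c.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return (measured_px - proj).ravel()
```

In an EKF- or keyframe-based back end, a residual of this kind would drive the joint refinement of camera pose and object parameters that the description credits with improving map accuracy and tracking robustness.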
| Main Authors: | Castle, R; Gawley, D; Klein, G; Murray, D |
| --- | --- |
| Format: | Conference item |
| Published: | British Machine Vision Association, BMVA, 2007 |
| Record ID: | oxford-uuid:76d37d07-1788-464e-abf2-6b687d3c56ca |
| Institution: | University of Oxford |