From visual query to visual portrayal
In this paper we show how online images can be automatically exploited for scene visualization and reconstruction starting from a mere visual query provided by the user. A visual query is used to retrieve images of a landmark using a visual search engine. These images are used to reconstruct robust 3D features and camera poses in projective space. Novel views are then rendered, corresponding to a virtual camera flying smoothly through the projective space, by triangulation of the projected points in the output view. We introduce a method to fuse the rendered novel views from all input images at each virtual viewpoint by computing their intrinsic images and illumination. This approach allows us to remove occlusions and maintain consistent, controlled illumination throughout the rendered sequence. We demonstrate the performance of our prototype system on two landmark structures.
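The "triangulation of the projected points" step mentioned in the abstract can be illustrated with a minimal linear (DLT) triangulation sketch, assuming NumPy; the function name and toy camera matrices below are illustrative, not taken from the paper:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point (homogeneous) from 2D observations x1, x2
    under 3x4 camera matrices P1, P2, via the linear DLT system A X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector for the
    # smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X / X[3]  # dehomogenize

# Toy check: two axis-aligned cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])           # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-1.0, 0, 0]]).T])  # translated 1 unit in x
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate_dlt(P1, P2, x1, x2))  # ≈ [0, 0, 5, 1]
```

In the paper's setting the camera matrices live in projective space (recovered without metric calibration), but the same linear triangulation applies.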
Main Authors: | Shahrokni, A; Mei, C; Torr, P; Reid, I |
---|---|
Format: | Conference item |
Language: | English |
Published: | British Machine Vision Association, 2008 |
author | Shahrokni, A; Mei, C; Torr, P; Reid, I |
collection | OXFORD |
description | In this paper we show how online images can be automatically exploited for scene visualization and reconstruction starting from a mere visual query provided by the user. A visual query is used to retrieve images of a landmark using a visual search engine. These images are used to reconstruct robust 3D features and camera poses in projective space. Novel views are then rendered, corresponding to a virtual camera flying smoothly through the projective space, by triangulation of the projected points in the output view. We introduce a method to fuse the rendered novel views from all input images at each virtual viewpoint by computing their intrinsic images and illumination. This approach allows us to remove occlusions and maintain consistent, controlled illumination throughout the rendered sequence. We demonstrate the performance of our prototype system on two landmark structures. |
format | Conference item |
id | oxford-uuid:0c8c0989-2da0-48a3-88ac-cb51000724a1 |
institution | University of Oxford |
language | English |
publishDate | 2008 |
publisher | British Machine Vision Association |
title | From visual query to visual portrayal |