Discovering place-informative scenes and objects using social media photos
Understanding the visual discrepancy and heterogeneity of different places is of great interest to architectural design, urban design and tourism planning. However, previous studies have been limited by the lack of adequate data and efficient methods to quantify the visual aspects of a place. This work proposes a data-driven framework to explore place-informative scenes and objects by employing a deep convolutional neural network to learn and measure the visual knowledge of place appearance automatically from a massive dataset of photos and imagery. Based on the proposed framework, we compare the visual similarity and visual distinctiveness of 18 cities worldwide using millions of geo-tagged photos obtained from social media. As a result, we identify the visual cues that distinguish each city from the others: other than landmarks, a large amount of historical architecture, religious sites, unique urban scenes and some unusual natural landscapes have been identified as the most place-informative elements. In terms of city-informative objects, taking vehicles as an example, we find that taxis, police cars and ambulances are the most place-informative objects. The results of this work are inspiring for various fields, providing insights on what large-scale geo-tagged data can achieve in understanding place formalization and urban design.
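The record does not include the authors' implementation, but the general idea the abstract describes can be illustrated with a minimal sketch: fine-tune an off-the-shelf CNN to predict which city a geo-tagged photo comes from, then rank photos by the classifier's confidence so that the highest-scoring scenes serve as candidates for place-informative imagery. The directory layout (`photos/<city>/`), the ResNet-18 backbone and every hyper-parameter below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' released code): city classification on
# geo-tagged photos, then confidence-based ranking of place-informative scenes.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Geo-tagged photos grouped into one folder per city (hypothetical path).
tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("photos", transform=tfm)
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

# Pre-trained backbone with a new head sized to the number of cities.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One fine-tuning pass; the paper's actual training schedule is not specified here.
model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Score each photo: high softmax confidence for its own city suggests a
# place-informative scene; low confidence suggests a generic, city-agnostic one.
model.eval()
scores = []
with torch.no_grad():
    for i in range(len(dataset)):
        image, label = dataset[i]
        prob = torch.softmax(model(image.unsqueeze(0).to(device)), dim=1)[0, label]
        scores.append((dataset.samples[i][0], prob.item()))

scores.sort(key=lambda s: s[1], reverse=True)
print("Most place-informative photos:", scores[:10])
```

Ranking by per-city softmax confidence is only a simple stand-in for however the paper measures place-informativeness; swapping in a different backbone or scoring rule leaves the overall pipeline unchanged.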
Main Authors: | Fan Zhang, Bolei Zhou, Carlo Ratti, Yu Liu |
---|---|
Format: | Article |
Language: | English |
Published: | The Royal Society, 2019-03-01 |
Series: | Royal Society Open Science |
Subjects: | city similarity, city streetscape, deep learning, street-level imagery |
Online Access: | https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.181375 |
_version_ | 1819085967829499904 |
---|---|
author | Fan Zhang, Bolei Zhou, Carlo Ratti, Yu Liu
author_facet | Fan Zhang, Bolei Zhou, Carlo Ratti, Yu Liu
author_sort | Fan Zhang |
collection | DOAJ |
description | Understanding the visual discrepancy and heterogeneity of different places is of great interest to architectural design, urban design and tourism planning. However, previous studies have been limited by the lack of adequate data and efficient methods to quantify the visual aspects of a place. This work proposes a data-driven framework to explore place-informative scenes and objects by employing a deep convolutional neural network to learn and measure the visual knowledge of place appearance automatically from a massive dataset of photos and imagery. Based on the proposed framework, we compare the visual similarity and visual distinctiveness of 18 cities worldwide using millions of geo-tagged photos obtained from social media. As a result, we identify the visual cues that distinguish each city from the others: other than landmarks, a large amount of historical architecture, religious sites, unique urban scenes and some unusual natural landscapes have been identified as the most place-informative elements. In terms of city-informative objects, taking vehicles as an example, we find that taxis, police cars and ambulances are the most place-informative objects. The results of this work are inspiring for various fields, providing insights on what large-scale geo-tagged data can achieve in understanding place formalization and urban design.
first_indexed | 2024-12-21T21:12:46Z |
format | Article |
id | doaj.art-8f95bcb044524809a047e5ed9afb02e5 |
institution | Directory Open Access Journal |
issn | 2054-5703 |
language | English |
last_indexed | 2024-12-21T21:12:46Z |
publishDate | 2019-03-01 |
publisher | The Royal Society |
record_format | Article |
series | Royal Society Open Science |
title | Discovering place-informative scenes and objects using social media photos |
topic | city similarity city streetscape deep learning street-level imagery |
url | https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.181375 |