Distributed Visual Crowdsensing Framework for Area Coverage in Resource Constrained Environments

Bibliographic Details
Main Authors: Moad Mowafi, Fahed Awad, Fida’a Al-Quran
Format: Article
Language: English
Published: MDPI AG 2022-07-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/22/15/5467
Description
Summary: Visual crowdsensing applications that use the built-in cameras of smartphones have recently attracted researchers' interest. In disaster recovery applications, the challenge is to make the most of limited resources while acquiring the most helpful images from the public. Proposed solutions must adequately address several constraints, including limited bandwidth, limited energy, and interrupted communication links with the command center or server. Data redundancy is a further key challenge in visual crowdsensing: in distributed systems, photo sharing replicates data and inflates the amount stored on each sensor node. The benefit is that if any single node can reach the server, more photos of the target region become available to it; methods that recognize and remove redundant data therefore yield lower transmission costs and lower overall energy consumption.

To handle interrupted communication with the server and the restricted resources of the sensor nodes, this paper proposes a distributed visual crowdsensing system for full-view area coverage. The target area is divided into virtual sub-regions, each represented by a set of boundary points of interest. Then, based on the criteria for full-view area coverage, a data structure scheme is developed that represents each photo by a set of features extracted from its geometric context parameters. Finally, data redundancy removal algorithms built on the proposed clustering scheme eliminate duplicate photos. As a result, each sensor node can filter redundant photos in a distributed environment without requiring high computational complexity, extensive resources, or global awareness of all photos from all sensor nodes in the target area.

Compared with the most recent state of the art, the proposed method improves the added value of the delivered photos by more than 38%. It also requires less data to be transferred, both between sensor nodes and between sensor nodes and the command center: the overall reduction in traffic exceeds 20%, and the overall savings in energy consumption exceed 25%. Because existing approaches require a considerable amount of photo exchange, transmitting photos in the proposed system, whether between sensor nodes or to the command center, consumes less energy. The proposed technique thus effectively transfers only the most valuable photos.
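To make the partitioning step concrete, the following is a minimal Python sketch, not the authors' implementation, of dividing a target area into virtual sub-regions, each represented by boundary points of interest. The rectangular area, square grid cells, and the four-corner representation of each sub-region are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: float
    y: float

def partition_area(width: float, height: float, cell: float) -> list[list[Point]]:
    """Divide a width x height target area into a grid of virtual
    sub-regions of side `cell`, returning each sub-region as the list
    of boundary points of interest that represent it (here, its four
    corners; the paper's exact boundary-point set may differ)."""
    cols, rows = math.ceil(width / cell), math.ceil(height / cell)
    sub_regions = []
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * cell, r * cell
            x1, y1 = min(x0 + cell, width), min(y0 + cell, height)
            sub_regions.append([Point(x0, y0), Point(x1, y0),
                                Point(x1, y1), Point(x0, y1)])
    return sub_regions
```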
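Similarly, the feature-extraction step can be sketched as follows, assuming a flat 2-D camera model in which a photo's geometric context parameters are the camera location, orientation, field-of-view angle, and sensing range. The discretization of facing directions into a fixed number of sectors is an illustrative assumption, not necessarily the paper's exact full-view coverage encoding.

```python
import math

def covers(cam, fov, rng, p) -> bool:
    """True if point p = (px, py) lies inside the sensing sector of
    camera cam = (x, y, orientation)."""
    dx, dy = p[0] - cam[0], p[1] - cam[1]
    if math.hypot(dx, dy) > rng:
        return False
    # Signed angular offset of p from the camera's orientation, in [-pi, pi).
    diff = (math.atan2(dy, dx) - cam[2] + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov / 2

def photo_features(cam, fov, rng, points, n_sectors=8):
    """Encode a photo as the set of (point index, direction sector) pairs
    it contributes toward full-view coverage: for every boundary point
    the photo covers, record which of n_sectors discretized facing
    directions (from the point toward the camera) it captures."""
    feats = set()
    width = 2 * math.pi / n_sectors
    for i, (px, py) in enumerate(points):
        if covers(cam, fov, rng, (px, py)):
            facing = math.atan2(cam[1] - py, cam[0] - px) % (2 * math.pi)
            feats.add((i, int(facing / width)))
    return feats
```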
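Finally, redundancy removal over such feature sets might look like the sketch below; the greedy marginal-gain rule here is an illustrative stand-in for the paper's clustering-based algorithms.

```python
def filter_redundant(photos: list[set]) -> list[int]:
    """Keep a photo only if it adds at least one (point, sector) feature
    not already contributed by the photos retained so far; return the
    indices of the retained photos."""
    covered: set = set()
    kept = []
    # Visit photos with larger feature sets first so fewer are retained.
    for idx in sorted(range(len(photos)), key=lambda i: -len(photos[i])):
        gain = photos[idx] - covered
        if gain:
            kept.append(idx)
            covered |= gain
    return kept
```

Under this representation, a photo is redundant exactly when every (point, sector) pair it contributes is already supplied by the retained set, which is what would let each sensor node filter locally without global knowledge of all photos in the target area.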
ISSN: 1424-8220