Methodology for Large-Scale Camera Positioning to Enable Intelligent Self-Configuration
The development of a self-configuring method for efficiently locating moving targets indoors could enable extraordinary advances in the control of industrial automatic production equipment. Being interactively connected, the cameras that constitute a network represent a promising visual system for wireless positioning, with the ultimate goal of replacing or enhancing conventional sensors. Developing a highly efficient algorithm for the collaborating cameras in the network is of particular interest. This paper presents an intelligent positioning system that is capable of integrating, through self-configuration, visual information obtained by a large number of cameras. An extended Kalman filter is used to predict the position, velocity, acceleration, and jerk (the third derivative of position) of the moving target. As a result, the camera-network-based visual positioning system is capable of locating a moving target with high precision: relative errors for the positional parameters are all smaller than 10%, and relative errors for the linear velocities (<i>v<sub>x</sub></i>, <i>v<sub>y</sub></i>) are also kept to an acceptable level, i.e., lower than 20%. This demonstrates the outstanding potential of the visual positioning system to assist the automation industry, including wireless intelligent control, high-precision indoor positioning, and navigation.
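As a rough illustration of the filtering step the abstract describes (not the authors' implementation: the paper's EKF formulation, camera measurement model, self-configuration logic, and noise parameters are not reproduced here, so the state layout, sampling interval, noise values, and the assumption that each camera delivers a direct position reading are all placeholders), a minimal constant-jerk Kalman filter for one axis might look like this:

```python
# Minimal sketch, NOT the paper's implementation: a constant-jerk Kalman
# filter tracking one axis of a moving target from noisy camera position
# readings. With direct position measurements the update is linear; the
# "extended" filter in the paper would instead linearize a nonlinear
# camera (pixel-coordinate) measurement model at each step.
import numpy as np

dt = 0.1  # assumed sampling interval in seconds

# State x = [position, velocity, acceleration, jerk]^T.
F = np.array([
    [1.0, dt,  0.5 * dt**2, dt**3 / 6.0],
    [0.0, 1.0, dt,          0.5 * dt**2],
    [0.0, 0.0, 1.0,         dt         ],
    [0.0, 0.0, 0.0,         1.0        ],
])
H = np.array([[1.0, 0.0, 0.0, 0.0]])    # cameras observe position only (assumption)
Q = np.diag([1e-4, 1e-3, 1e-2, 1e-1])   # assumed process-noise covariance
R = np.array([[0.05 ** 2]])             # assumed measurement noise (5 cm std)

x = np.zeros((4, 1))                    # initial state estimate
P = np.eye(4)                           # initial state covariance


def predict(x, P):
    """Propagate the state and covariance one time step."""
    return F @ x, F @ P @ F.T + Q


def update(x, P, z):
    """Fuse one camera position reading z (metres) into the estimate."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P


# One time step: predict once, then sequentially fuse the readings of
# every camera in the network that currently sees the target.
x, P = predict(x, P)
for z_cam in (1.02, 1.05, 1.11):        # synthetic readings from three cameras
    x, P = update(x, P, np.array([[z_cam]]))
print("estimated [position, velocity, acceleration, jerk]:", x.ravel())
```

In the full three-dimensional case the same structure is repeated per axis (or stacked into one larger state), and H becomes the Jacobian of each camera's projection model evaluated at the current estimate, which is what makes the filter "extended".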
Main Authors: | Yingfeng Wu, Weiwei Zhao, Jifa Zhang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2022-08-01 |
Series: | Sensors |
Subjects: | large-scale positioning and navigation; intelligent self-configuration; collaborative visual network; extended Kalman filter |
Online Access: | https://www.mdpi.com/1424-8220/22/15/5806 |
author | Yingfeng Wu, Weiwei Zhao, Jifa Zhang |
collection | DOAJ |
description | The development of a self-configuring method for efficiently locating moving targets indoors could enable extraordinary advances in the control of industrial automatic production equipment. Being interactively connected, the cameras that constitute a network represent a promising visual system for wireless positioning, with the ultimate goal of replacing or enhancing conventional sensors. Developing a highly efficient algorithm for the collaborating cameras in the network is of particular interest. This paper presents an intelligent positioning system that is capable of integrating, through self-configuration, visual information obtained by a large number of cameras. An extended Kalman filter is used to predict the position, velocity, acceleration, and jerk (the third derivative of position) of the moving target. As a result, the camera-network-based visual positioning system is capable of locating a moving target with high precision: relative errors for the positional parameters are all smaller than 10%, and relative errors for the linear velocities (<i>v<sub>x</sub></i>, <i>v<sub>y</sub></i>) are also kept to an acceptable level, i.e., lower than 20%. This demonstrates the outstanding potential of the visual positioning system to assist the automation industry, including wireless intelligent control, high-precision indoor positioning, and navigation. |
format | Article |
id | doaj.art-abdfe1f5fc6c439095b64e1e33d7bcff |
institution | Directory Open Access Journal |
issn | 1424-8220 |
language | English |
publishDate | 2022-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | Yingfeng Wu, Weiwei Zhao, Jifa Zhang (School of Mechanical and Electronic Engineering, Wuhan University of Technology, 122 Luoshi Road, Wuhan 430070, China). Methodology for Large-Scale Camera Positioning to Enable Intelligent Self-Configuration. Sensors, vol. 22, no. 15, article 5806, MDPI AG, 2022-08-01, ISSN 1424-8220, doi:10.3390/s22155806. https://www.mdpi.com/1424-8220/22/15/5806 |
title | Methodology for Large-Scale Camera Positioning to Enable Intelligent Self-Configuration |
topic | large-scale positioning and navigation; intelligent self-configuration; collaborative visual network; extended Kalman filter |
url | https://www.mdpi.com/1424-8220/22/15/5806 |