Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance


Bibliographic Details
Main Authors: Sergio-Uriel Maya-Martínez, Amadeo-José Argüelles-Cruz, Zobeida-Jezabel Guzmán-Zavaleta, Miguel-de-Jesús Ramírez-Cadena
Format: Article
Language: English
Published: Frontiers Media S.A. 2023-03-01
Series: Frontiers in Robotics and AI
Subjects: Tiny YOLOv3; deep learning; visual impaired; image processing; graphic processing unit; pedestrian detection
Online Access: https://www.frontiersin.org/articles/10.3389/frobt.2023.1052509/full
collection DOAJ
description Introduction: Wearable assistive devices for the visually impaired based on video cameras represent a rapidly evolving challenge, where one of the main problems is finding computer vision algorithms that can be implemented on low-cost embedded devices. Objectives and Methods: This work presents a Tiny You Only Look Once (Tiny-YOLOv3) architecture for pedestrian detection that can be implemented in low-cost wearable devices as an alternative for the development of assistive technologies for the visually impaired. Results: The recall of the proposed refined model improves on the original model by 71% when working with four anchor boxes and by 66% with six anchor boxes. The accuracy achieved on the same data set increases by 14% and 25%, respectively, and the F1 score by 57% and 55%. The average accuracy of the models improves by 87% and 99%. The number of correctly detected objects was 3,098 and 2,892 for four and six anchor boxes, respectively, 77% and 65% better than the original model, which correctly detected 1,743 objects. Discussion: Finally, the model was optimized for the Jetson Nano embedded system, a case study for low-power embedded devices, and for a desktop computer. In both cases, the graphics processing unit (GPU) and central processing unit (CPU) were tested, and a documented comparison of solutions aimed at serving visually impaired people was performed. Conclusion: In the desktop tests with an RTX 2070S graphics card, image processing took about 2.8 ms; the Jetson Nano board could process an image in about 110 ms, offering the opportunity to generate alert notifications in support of visually impaired mobility.
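For context on the figures quoted in the description: recall, precision, and F1 for a detector are standard functions of true-positive (TP), false-positive (FP), and false-negative (FN) counts. A minimal sketch in Python; the TP count below is the 3,098 correct detections reported for the four-anchor model, while the FP and FN counts are illustrative placeholders, not figures from the paper:

```python
def detection_metrics(tp, fp, fn):
    """Standard object-detection metrics from raw detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0  # fraction of detections that are correct
    recall = tp / (tp + fn) if tp + fn else 0.0     # fraction of ground-truth objects found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# TP from the abstract (four-anchor model); FP/FN are placeholder values.
p, r, f1 = detection_metrics(tp=3098, fp=500, fn=700)
```

The percentage improvements in the abstract compare these per-model metrics against the original Tiny-YOLOv3 baseline, which detected 1,743 objects correctly.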
id doaj.art-dd6be0c38e18419c855f3fc0bbb6ff81
institution Directory Open Access Journal
issn 2296-9144
affiliations Sergio-Uriel Maya-Martínez: Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico
Amadeo-José Argüelles-Cruz: Centro de Investigación en Computación, Instituto Politécnico Nacional, Mexico City, Mexico
Zobeida-Jezabel Guzmán-Zavaleta: Universidad de las Américas Puebla, Puebla, Mexico
Miguel-de-Jesús Ramírez-Cadena: School of Engineering and Science, Tecnológico de Monterrey, Mexico City, Mexico
title Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
topic Tiny YOLOv3
deep learning
visual impaired
image processing
graphic processing unit
pedestrian detection