StairNet: visual recognition of stairs for human–robot locomotion


Bibliographic Details
Main Authors: Andrew Garrett Kurbis, Dmytro Kuzmenko, Bogdan Ivanyuk-Skulskiy, Alex Mihailidis, Brokoslaw Laschowski
Format: Article
Language: English
Published: BMC, 2024-02-01
Series: BioMedical Engineering OnLine
Subjects: Computer vision; Deep learning; Wearable robotics; Prosthetics; Exoskeletons
Online Access: https://doi.org/10.1186/s12938-024-01216-0
Description
Abstract: Human–robot walking with prosthetic legs and exoskeletons, especially over complex terrains, such as stairs, remains a significant challenge. Egocentric vision has the unique potential to detect the walking environment prior to physical interactions, which can improve transitions to and from stairs. This motivated us to develop the StairNet initiative to support the development of new deep learning models for visual perception of real-world stair environments. In this study, we present a comprehensive overview of the StairNet initiative and key research to date. First, we summarize the development of our large-scale data set with over 515,000 manually labeled images. We then provide a summary and detailed comparison of the performances achieved with different algorithms (i.e., 2D and 3D CNN, hybrid CNN and LSTM, and ViT networks), training methods (i.e., supervised learning with and without temporal data, and semi-supervised learning with unlabeled images), and deployment methods (i.e., mobile and embedded computing), using the StairNet data set. Finally, we discuss the challenges and future directions. To date, our StairNet models have consistently achieved high classification accuracy (i.e., up to 98.8%) with different designs, offering trade-offs between model accuracy and size. When deployed on mobile devices with GPU and NPU accelerators, our deep learning models achieved inference times as fast as 2.8 ms. In comparison, when deployed on our custom-designed CPU-powered smart glasses, our models yielded slower inference times of 1.5 s, presenting a trade-off between human-centered design and performance. Overall, the results of numerous experiments presented herein provide consistent evidence that StairNet can be an effective platform to develop and study new deep learning models for visual perception of human–robot walking environments, with an emphasis on stair recognition. This research aims to support the development of next-generation vision-based control systems for robotic prosthetic legs, exoskeletons, and other mobility assistive technologies.
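The description mentions semi-supervised learning with unlabeled images as one of the training methods compared on the StairNet data set. A common form of this is pseudo-labeling: a model trained on the labeled images predicts labels for unlabeled ones, and only high-confidence predictions are added to the training pool. The sketch below illustrates that selection step in general terms; the function names, the confidence threshold, and the toy "classifier" are hypothetical illustrations, not the authors' StairNet code.

```python
# Hypothetical sketch of pseudo-labeling for semi-supervised learning:
# keep only unlabeled images whose predicted class probability clears
# a confidence threshold. The toy classifier and feature names are
# illustrative only.

def pseudo_label(classifier, unlabeled, threshold=0.9):
    """Return (image, label) pairs for confident predictions only."""
    accepted = []
    for image in unlabeled:
        probs = classifier(image)  # mapping of class name -> probability
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= threshold:
            accepted.append((image, label))
    return accepted

# Toy stand-in for a trained model: "images" are dicts of fake features.
def toy_classifier(image):
    p_stairs = min(1.0, image["vertical_edges"] / 10.0)
    return {"stairs": p_stairs, "level_ground": 1.0 - p_stairs}

unlabeled = [{"vertical_edges": 10}, {"vertical_edges": 2}]
confident = pseudo_label(toy_classifier, unlabeled, threshold=0.9)
print(confident)
```

Here the second image is rejected because its top class probability (0.8) falls below the 0.9 threshold; in practice the accepted pairs would be merged into the labeled set for further supervised training.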
ISSN: 1475-925X
Author affiliations:
Andrew Garrett Kurbis: Institute of Biomedical Engineering, University of Toronto
Dmytro Kuzmenko: Department of Mathematics, National University of Kyiv-Mohyla Academy
Bogdan Ivanyuk-Skulskiy: Department of Mathematics, National University of Kyiv-Mohyla Academy
Alex Mihailidis: Institute of Biomedical Engineering, University of Toronto
Brokoslaw Laschowski: Robotics Institute, University of Toronto