Egocentric vision-based detection of surfaces: towards context-aware free-living digital biomarkers for gait and fall risk assessment


Bibliographic Details
Main Authors: Mina Nouredanesh, Alan Godfrey, Dylan Powell, James Tung
Format: Article
Language: English
Published: BMC 2022-07-01
Series: Journal of NeuroEngineering and Rehabilitation
Subjects: Free-living digital biomarkers; Egocentric vision; Free-living gait analysis; Wearable sensors; Terrain type identification; Deep convolutional neural networks
Online Access:https://doi.org/10.1186/s12984-022-01022-6
collection DOAJ
description Abstract
Background: Falls in older adults are a critical public health problem. As a means to assess fall risk, free-living digital biomarkers (FLDBs), including spatiotemporal gait measures drawn from wearable inertial measurement unit (IMU) data, have been investigated to identify those at high risk. Although gait-related FLDBs can be impacted by intrinsic (e.g., gait impairment) and/or environmental (e.g., walking surfaces) factors, their respective impacts have not been differentiated by the majority of free-living fall risk assessment methods. This may lead to ambiguous interpretation of the resulting FLDBs and, therefore, less precise intervention strategies to prevent falls.
Methods: With the aim of improving the interpretability of gait-related FLDBs and investigating the impact of environment on older adults’ gait, a vision-based framework was proposed to automatically detect the most common level walking surfaces. Using a belt-mounted camera and IMUs worn by fallers and non-fallers (mean age 73.6 yrs), a unique dataset, Multimodal Ambulatory Gait and Fall Risk Assessment in the Wild (MAGFRA-W), was acquired. The frames and image patches attributed to nine participants’ gait were annotated: (a) outdoor terrains: pavement (asphalt, cement, outdoor bricks/tiles), gravel, grass/foliage, soil, snow/slush; and (b) indoor terrains: high-friction materials (e.g., carpet, laminated floor), wood, and tiles. A series of ConvNets were developed: EgoPlaceNet categorizes frames into indoor and outdoor, and EgoTerrainNet (with outdoor and indoor versions) detects the enclosed terrain type in patches. To improve the framework’s generalizability, an independent training dataset with 9,424 samples was curated from databases including GTOS and MINC-2500 and used to fine-tune pretrained models (e.g., MobileNetV2).
Results: EgoPlaceNet detected outdoor and indoor scenes in MAGFRA-W with 97.36% and 95.59% accuracy (leave-one-subject-out), respectively. EgoTerrainNet-Indoor and -Outdoor achieved high detection accuracies for pavement (87.63%), foliage (91.24%), gravel (95.12%), and high-friction materials (95.02%), indicating the models’ high generalizability.
Conclusions: Encouraging results suggest that the integration of wearable cameras and deep learning approaches can provide objective contextual information in an automated manner, towards context-aware FLDBs for gait and fall risk assessment in the wild.
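The two-stage design described in the abstract — EgoPlaceNet first labeling a frame indoor or outdoor, then routing its patches to the matching EgoTerrainNet — can be sketched as a simple cascade. The function below is a hypothetical illustration under assumed interfaces, not the authors' implementation; the names `place_net`, `terrain_net_indoor`, and `terrain_net_outdoor` are placeholders for the trained models.

```python
# Hypothetical sketch of the two-stage cascade from the abstract:
# a place classifier first labels a frame "indoor" or "outdoor", then the
# matching terrain classifier labels each image patch from that frame.
# All model objects here are stand-ins, not the paper's trained networks.

OUTDOOR_TERRAINS = ("pavement", "gravel", "grass/foliage", "soil", "snow/slush")
INDOOR_TERRAINS = ("high-friction", "wood", "tiles")

def classify_frame(frame, patches, place_net, terrain_net_indoor, terrain_net_outdoor):
    """Return the frame's place label and one terrain label per patch.

    place_net:          callable, frame -> "indoor" or "outdoor"
    terrain_net_*:      callable, patch -> terrain class string
    """
    place = place_net(frame)
    terrain_net = terrain_net_indoor if place == "indoor" else terrain_net_outdoor
    return place, [terrain_net(p) for p in patches]
```

Routing patches through a place-specific model keeps each terrain classifier's label space small (five outdoor versus three indoor classes), which is one plausible motivation for splitting EgoTerrainNet into indoor and outdoor versions.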
ISSN: 1743-0003
Author affiliations: Mina Nouredanesh and James Tung, Department of Mechanical and Mechatronics Engineering, University of Waterloo; Alan Godfrey and Dylan Powell, Department of Computer & Information Sciences, Northumbria University