Exercise quantification from single camera view markerless 3D pose estimation
Sports physiotherapists and coaches are tasked with evaluating the movement quality of athletes across the spectrum of ability and experience. However, the accuracy of visual observation is low and existing technology outside of expensive lab-based solutions has limited adoption, leading to an unmet...
Main Authors: | Clara Mercadal-Baudart, Chao-Jung Liu, Garreth Farrell, Molly Boyne, Jorge González Escribano, Aljosa Smolic, Ciaran Simms |
Format: | Article |
Language: | English |
Published: | Elsevier, 2024-03-01 |
Series: | Heliyon |
Subjects: | Pose estimation; Motion capture; Sports biomechanics; Injury biomechanics; Computer vision; Markerless |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2405844024036272 |
author | Clara Mercadal-Baudart; Chao-Jung Liu; Garreth Farrell; Molly Boyne; Jorge González Escribano; Aljosa Smolic; Ciaran Simms |
author_sort | Clara Mercadal-Baudart |
collection | DOAJ |
description | Sports physiotherapists and coaches are tasked with evaluating the movement quality of athletes across the spectrum of ability and experience. However, the accuracy of visual observation is low and existing technology outside of expensive lab-based solutions has limited adoption, leading to an unmet need for an efficient and accurate means to measure static and dynamic joint angles during movement, converted to movement metrics useable by practitioners. This paper proposes a set of pose landmarks for computing frequently used joint angles as metrics of interest to sports physiotherapists and coaches in assessing common strength-building human exercise movements. It then proposes a set of rules for computing these metrics for a range of common exercises (single and double drop jumps and counter-movement jumps, deadlifts and various squats) from anatomical key-points detected using video, and evaluates the accuracy of these using a published 3D human pose model trained with ground truth data derived from VICON motion capture of common rehabilitation exercises. Results show a set of mathematically defined metrics which are derived from the chosen pose landmarks, and which are sufficient to compute the metrics for each of the exercises under consideration. Comparison to ground truth data showed that root mean square angle errors were within 10° for all exercises for the following metrics: shin angle, knee varus/valgus and left/right flexion, hip flexion and pelvic tilt, trunk angle, spinal flexion lower/upper/mid and rib flare. Larger errors (though still all within 15°) were observed for shoulder flexion and ASIS asymmetry in some exercises, notably front squats and drop-jumps. In conclusion, the contribution of this paper is that a set of sufficient key-points and associated metrics for exercise assessment from 3D human pose have been uniquely defined. Further, we found generally very good accuracy of the Strided Transformer 3D pose model in predicting these metrics for the chosen set of exercises from a single mobile device camera, when trained on a suitable set of functional exercises recorded using a VICON motion capture system. Future assessment of generalization is needed. |
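The abstract describes computing joint angles from detected 3D key-points and reporting root-mean-square angle errors against VICON ground truth. A minimal sketch of both computations is below; the function names, the hip–knee–ankle landmark choice, and the straight-leg convention (180° at full extension) are illustrative assumptions, not the paper's actual landmark set or metric definitions.

```python
import math

def angle_deg(a, b, c):
    """Angle in degrees at vertex b of the three 3D key-points a-b-c.

    Key-points are (x, y, z) tuples, e.g. hip-knee-ankle for knee flexion.
    Illustrative only: the paper defines its own landmarks and metric rules.
    """
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard acos against floating-point round-off.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def rmse_deg(predicted, ground_truth):
    """Root mean square error between two equal-length angle series (degrees)."""
    n = len(predicted)
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(predicted, ground_truth)) / n)

# A straight leg gives 180 degrees at the knee under this convention.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.0, 0.0, 0.0)
print(angle_deg(hip, knee, ankle))  # 180.0
```

An evaluation like the one reported (e.g. "within 10°") would then reduce to comparing `rmse_deg` of the model's per-frame angles against the motion-capture angles for each metric and exercise.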
first_indexed | 2024-04-24T13:49:49Z |
format | Article |
id | doaj.art-db1f31c3777945c690d02393d6bca70a |
institution | Directory Open Access Journal |
issn | 2405-8440 |
language | English |
last_indexed | 2024-04-24T13:49:49Z |
publishDate | 2024-03-01 |
publisher | Elsevier |
record_format | Article |
series | Heliyon |
spelling | Exercise quantification from single camera view markerless 3D pose estimation. Heliyon, 10(6):e27596, Elsevier, 2024-03-01 (eng; ISSN 2405-8440). Authors and affiliations: Clara Mercadal-Baudart (Trinity College Dublin, Ireland; corresponding author), Chao-Jung Liu (Trinity College Dublin, Ireland), Garreth Farrell (Leinster Rugby, Ireland), Molly Boyne (Trinity College Dublin, Ireland), Jorge González Escribano (Trinity College Dublin, Ireland), Aljosa Smolic (Lucerne University of Applied Sciences and Arts, Switzerland), Ciaran Simms (Trinity College Dublin, Ireland). |
title | Exercise quantification from single camera view markerless 3D pose estimation |
topic | Pose estimation Motion capture Sports biomechanics Injury biomechanics Computer vision Markerless |
url | http://www.sciencedirect.com/science/article/pii/S2405844024036272 |