DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning


Bibliographic Details
Main Authors: Paul Ritsche, Oliver Faude, Martino Franchi, Taija Finni, Olivier Seynnes, Neil Cronin
Format: Article
Language: English
Published: Bern Open Publishing 2023-02-01
Series: Current Issues in Sport Science
Subjects:
Online Access: https://ciss-journal.org/article/view/9385
_version_ 1797905376558972928
author Paul Ritsche
Oliver Faude
Martino Franchi
Taija Finni
Olivier Seynnes
Neil Cronin
author_facet Paul Ritsche
Oliver Faude
Martino Franchi
Taija Finni
Olivier Seynnes
Neil Cronin
author_sort Paul Ritsche
collection DOAJ
description B-mode ultrasound is commonly used to image musculoskeletal tissues, but one major bottleneck is data analysis. Manual analysis is commonly used to assess muscle thickness, pennation angle and fascicle length in muscle ultrasonography images. However, manual analysis is subjective, laborious and requires considerable experience. We provide an openly available algorithm (DL_Track) to automatically analyze muscle architectural parameters in ultrasonography images or videos of human lower limb muscles. We trained two different neural networks (a classic U-net [Ronneberger et al., 2015] and a U-net with a pretrained VGG16 encoder [Simonyan & Zisserman, 2015]): one to detect muscle fascicles and another to detect muscle aponeuroses, using a set of labelled musculoskeletal ultrasound images. We included images of the vastus lateralis, gastrocnemius medialis, tibialis anterior and soleus acquired with four different ultrasound devices. In total, we included 310 images for the fascicle model and 570 images for the aponeurosis model, which we augmented to about 1,700 images per set. Each dataset was randomly split into training and test sets using a common 80/20 train/test split. We selected the best performing model based on intersection-over-union and loss metrics calculated during model training. We compared neural network predictions on an unseen test set of 35 images to those obtained via manual analysis and two existing semi-automated/automated analysis approaches (SMA and Ultratrack). Across the 35 unseen images, the mean differences between DL_Track and manual analysis were -2.4 mm for fascicle length (95% compatibility interval (CI) = -3.7 to -1.2), 0.6° for pennation angle (-0.2 to 1.4), and -0.6 mm for muscle thickness (-1.2 to 0.002).
The corresponding mean differences between DL_Track and SMA were 5.2 mm for fascicle length (1.3 to 9.0), -1.4° for pennation angle (-2.6 to -0.4) and -0.9 mm for muscle thickness (-1.5 to -0.3). ICC values between DL_Track and Ultratrack were 0.19 (0.00 to 0.35) for medial gastrocnemius passive contraction, 0.79 (0.77 to 0.81) for medial gastrocnemius maximal voluntary contraction, 0.88 (0.87 to 0.89) for calf raise, 0.67 (0.07 to 0.86) for medial gastrocnemius during walking, 0.80 (0.79 to 0.82) for tibialis anterior during passive plantar- and dorsiflexion, and 0.85 (0.83 to 0.86) for tibialis anterior maximal voluntary contraction. Our method is fully automated and can estimate fascicle length, pennation angle and muscle thickness from single images or videos of multiple superficial muscles. For single images, the method produced results in agreement with those from SMA and manual analysis. Similarly, for videos, the results overlapped with those produced by Ultratrack. In contrast to Ultratrack, DL_Track analyzes each frame independently of the previous frames, which might explain the observed variability.
References:
Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. arXiv. https://doi.org/10.48550/arXiv.1505.04597
Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. arXiv. https://doi.org/10.48550/arXiv.1409.1556
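The best performing model was selected using intersection-over-union (IoU) between predicted and labelled segmentation masks. As a minimal illustration only (not the authors' implementation, which operates on the networks' output tensors), IoU for a pair of binary masks can be sketched as:

```python
def iou(pred, truth):
    """Intersection-over-union (Jaccard index) of two binary masks,
    given as equally sized nested lists of 0/1 values.

    Returns intersection / union; defined as 1.0 when both masks are
    empty (no foreground pixels in either mask).
    """
    inter = union = 0
    for pred_row, truth_row in zip(pred, truth):
        for p, t in zip(pred_row, truth_row):
            inter += 1 if (p and t) else 0   # pixel foreground in both masks
            union += 1 if (p or t) else 0    # pixel foreground in either mask
    return inter / union if union else 1.0


# Example: prediction covers one extra pixel beyond the ground truth,
# so intersection = 1 pixel, union = 2 pixels.
print(iou([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # → 0.5
```

An IoU of 1.0 means the predicted fascicle or aponeurosis mask exactly matches the manual label; values near 0 indicate little overlap.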
first_indexed 2024-04-10T10:04:15Z
format Article
id doaj.art-27451c357ba4450db16993578cd14c75
institution Directory Open Access Journal
issn 2414-6641
language English
last_indexed 2024-04-10T10:04:15Z
publishDate 2023-02-01
publisher Bern Open Publishing
record_format Article
series Current Issues in Sport Science
spelling doaj.art-27451c357ba4450db16993578cd14c75 2023-02-16T03:15:14Z eng Bern Open Publishing Current Issues in Sport Science 2414-6641 2023-02-01 8 2 10.36950/2023.2ciss088
DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
Paul Ritsche (Department of Sport, Exercise and Health, University of Basel, Switzerland)
Oliver Faude (Department of Sport, Exercise and Health, University of Basel, Switzerland)
Martino Franchi (Department of Biomedical Sciences, University of Padova, Italy)
Taija Finni (Faculty of Sport and Health Sciences, University of Jyväskylä, Finland)
Olivier Seynnes (Department for Physical Performance, Norwegian School of Sport Sciences, Norway)
Neil Cronin (Faculty of Sport and Health Sciences, University of Jyväskylä, Finland & School of Sport & Exercise, University of Gloucestershire, United Kingdom)
https://ciss-journal.org/article/view/9385
ultrasound; U-net; convolutional neural network; muscle architecture
spellingShingle Paul Ritsche
Oliver Faude
Martino Franchi
Taija Finni
Olivier Seynnes
Neil Cronin
DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
Current Issues in Sport Science
ultrasound
U-net
convolutional neural network
muscle architecture
title DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
title_full DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
title_fullStr DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
title_full_unstemmed DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
title_short DL_Track - Automated analysis of muscle architecture from B-mode ultrasonography images using deep learning
title_sort dl track automated analysis of muscle architecture from b mode ultrasonography images using deep learning
topic ultrasound
U-net
convolutional neural network
muscle architecture
url https://ciss-journal.org/article/view/9385
work_keys_str_mv AT paulritsche dltrackautomatedanalysisofmusclearchitecturefrombmodeultrasonographyimagesusingdeeplearning
AT oliverfaude dltrackautomatedanalysisofmusclearchitecturefrombmodeultrasonographyimagesusingdeeplearning
AT martinofranchi dltrackautomatedanalysisofmusclearchitecturefrombmodeultrasonographyimagesusingdeeplearning
AT taijafinni dltrackautomatedanalysisofmusclearchitecturefrombmodeultrasonographyimagesusingdeeplearning
AT olivierseynnes dltrackautomatedanalysisofmusclearchitecturefrombmodeultrasonographyimagesusingdeeplearning
AT neilcronin dltrackautomatedanalysisofmusclearchitecturefrombmodeultrasonographyimagesusingdeeplearning