A comparison of point-tracking algorithms in ultrasound videos from the upper limb
Abstract: Tracking points in ultrasound (US) videos can be especially useful to characterize tissues in motion. Tracking algorithms that analyze successive video frames, such as variations of Optical Flow and Lucas–Kanade (LK), exploit frame-to-frame temporal information to track regions of interest. In contrast, convolutional neural-network (CNN) models process each video frame independently of neighboring frames. In this paper, we show that frame-to-frame trackers accumulate error over time. We propose three interpolation-like methods to combat error accumulation and show that all three methods reduce tracking errors in frame-to-frame trackers. On the neural-network end, we show that a CNN-based tracker, DeepLabCut (DLC), outperforms all four frame-to-frame trackers when tracking tissues in motion. DLC is more accurate than the frame-to-frame trackers and less sensitive to variations in types of tissue movement. The only caveat found with DLC comes from its non-temporal tracking strategy, leading to jitter between consecutive frames. Overall, when tracking points in videos of moving tissue, we recommend using DLC when prioritizing accuracy and robustness across movements in videos, and using LK with the proposed error-correction methods for small movements when tracking jitter is unacceptable.
Main Authors: | Magana-Salgado, Uriel; Namburi, Praneeth; Feigin-Almon, Micha; Pallares-Lopez, Roger; Anthony, Brian |
---|---|
Other Authors: | Massachusetts Institute of Technology. Department of Mechanical Engineering; Massachusetts Institute of Technology. Institute for Medical Engineering & Science |
Format: | Article |
Language: | English |
Published: | BioMed Central, 2023 |
Online Access: | https://hdl.handle.net/1721.1/150827 |
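The abstract's central technical claim contrasts two tracking strategies: frame-to-frame trackers such as LK accumulate error over time (each new position builds on the previous noisy estimate), while per-frame trackers such as DLC keep error bounded but jitter between consecutive frames. A minimal NumPy simulation can illustrate both effects; the noise model and values below are illustrative assumptions, not the paper's code or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames = 500
true_path = np.cumsum(np.ones(n_frames))    # ground-truth point position per frame (pixels)
est_noise = rng.normal(0.0, 0.1, n_frames)  # error of each individual position estimate

# Frame-to-frame tracker (e.g. LK): each position is the previous estimate
# plus a noisy displacement, so per-estimate errors accumulate like a random walk.
f2f_est = np.cumsum(1.0 + est_noise)

# Per-frame tracker (e.g. a CNN like DLC): each frame is localized
# independently, so every estimate carries only its own error.
per_frame_est = true_path + est_noise

f2f_err = np.abs(f2f_est - true_path)
per_frame_err = np.abs(per_frame_est - true_path)

# Drift: the frame-to-frame error grows with time; the per-frame error does not.
print("mean f2f error:", f2f_err.mean(), "mean per-frame error:", per_frame_err.mean())

# Jitter: the per-frame estimate's step-to-step variation is larger, since
# consecutive estimates carry independent errors rather than a shared offset.
print("f2f step std:", np.diff(f2f_est).std(), "per-frame step std:", np.diff(per_frame_est).std())
```

In this toy model the frame-to-frame tracker drifts while the per-frame tracker stays near the true path but fluctuates more between frames, mirroring the trade-off the abstract describes between LK (with error correction) and DLC.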
Citation: | BioMedical Engineering OnLine. 2023 May 24;22(1):52 |
DOI: | https://doi.org/10.1186/s12938-023-01105-y |
License: | Creative Commons Attribution, http://creativecommons.org/licenses/by/4.0/ (The Author(s)) |