Robust visual tracking based on watershed regions
Robust visual tracking is a very challenging problem, especially when the target undergoes large appearance variations. In this study, the authors propose an efficient and effective tracker based on watershed regions. As middle-level visual cues, watershed regions contain more semantic information th...
Main Authors: | Wangsheng Yu, Xiaohua Tian, Zhiqiang Hou, Yufei Zha |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2014-12-01 |
Series: | IET Computer Vision |
Subjects: | robust visual tracking; watershed regions; high-level appearance model; low-level feature model; middle-level visual cues; semantic information |
Online Access: | https://doi.org/10.1049/iet-cvi.2013.0250 |
_version_ | 1827817238032285696 |
---|---|
author | Wangsheng Yu; Xiaohua Tian; Zhiqiang Hou; Yufei Zha
author_facet | Wangsheng Yu; Xiaohua Tian; Zhiqiang Hou; Yufei Zha
author_sort | Wangsheng Yu |
collection | DOAJ |
description | Robust visual tracking is a very challenging problem, especially when the target undergoes large appearance variations. In this study, the authors propose an efficient and effective tracker based on watershed regions. As middle-level visual cues, watershed regions contain more semantic information than low-level features and reflect more structural information than a high-level model. First, the authors manually select the target template in the initial frame and predict the target candidate in the next frame using motion prediction. Then, the authors utilise a marker-based watershed algorithm to obtain the watershed regions of the target template and the candidate template, and describe each region with multiple features. Next, the authors calculate the nearest neighbour in feature space to match the watershed regions and construct an affine relation from the target template to the candidate template. Finally, the authors solve the affine relation to calculate the final tracking result and update the template for subsequent tracking. The authors test their tracker on challenging sequences with appearance variations ranging from illumination change, partial occlusion and pose change to background clutter, and compare it with several state-of-the-art trackers. Experimental results indicate that the proposed tracker is robust to large appearance variations and outperforms the state-of-the-art trackers in most situations. |
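The pipeline described above (marker-based watershed segmentation, per-region features, nearest-neighbour region matching, and an affine relation solved from the matches) can be sketched in Python with OpenCV. This is a minimal illustrative sketch, not the authors' implementation: the marker scheme, the feature vector (mean colour plus normalised centroid), and the use of cv2.estimateAffine2D with RANSAC are assumptions made here for demonstration.

```python
# Sketch of a watershed-region tracking step (illustrative assumptions, not the paper's exact method).
import cv2
import numpy as np

def watershed_regions(patch):
    """Segment a BGR patch into watershed regions; return an int32 label map."""
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Assumed marker scheme: sure foreground from the distance transform,
    # sure background from dilation, unknown band in between.
    sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
    dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # background becomes 1, components 2..N
    markers[unknown == 255] = 0      # 0 marks the band to be flooded
    return cv2.watershed(patch, markers)   # region boundaries become -1

def region_features(patch, markers):
    """Describe each region with mean colour and a normalised centroid."""
    h, w = markers.shape
    feats, centres = [], []
    for label in np.unique(markers):
        if label <= 1:               # skip boundaries (-1) and background (1)
            continue
        mask = (markers == label)
        ys, xs = np.nonzero(mask)
        mean_bgr = patch[mask].mean(axis=0)
        cx, cy = xs.mean(), ys.mean()
        feats.append(np.hstack([mean_bgr / 255.0, cx / w, cy / h]))
        centres.append([cx, cy])
    return np.array(feats, dtype=np.float32), np.array(centres, dtype=np.float32)

def track(template, candidate):
    """Match watershed regions by nearest neighbour and fit an affine relation."""
    ft, ct = region_features(template, watershed_regions(template))
    fc, cc = region_features(candidate, watershed_regions(candidate))
    if len(ft) < 3 or len(fc) < 3:
        return None
    # Nearest neighbour in feature space: template region -> candidate region.
    dists = np.linalg.norm(ft[:, None, :] - fc[None, :, :], axis=2)
    nn = dists.argmin(axis=1)
    # Affine relation from template region centroids to their matches.
    A, _ = cv2.estimateAffine2D(ct, cc[nn], method=cv2.RANSAC)
    return A  # 2x3 matrix mapping template coordinates into the candidate
```

Applying the returned 2x3 matrix to the template's bounding-box corners would give the tracking result for the frame, after which the candidate patch can replace the template, mirroring the update step described in the abstract.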
first_indexed | 2024-03-12T00:31:36Z |
format | Article |
id | doaj.art-c7bce025736649c0943ef6ec4bff06d2 |
institution | Directory Open Access Journal |
issn | 1751-9632 1751-9640 |
language | English |
last_indexed | 2024-03-12T00:31:36Z |
publishDate | 2014-12-01 |
publisher | Wiley |
record_format | Article |
series | IET Computer Vision |
spelling | doaj.art-c7bce025736649c0943ef6ec4bff06d2 2023-09-15T10:15:58Z eng Wiley IET Computer Vision 1751-9632; 1751-9640. 2014-12-01, vol. 8, no. 6, pp. 588-600, 10.1049/iet-cvi.2013.0250. Robust visual tracking based on watershed regions. Wangsheng Yu, Xiaohua Tian, Zhiqiang Hou (Information and Navigation College, Air Force Engineering University, Xi'an, People's Republic of China); Yufei Zha (Aeronautics and Astronautics Engineering College, Air Force Engineering University, Xi'an, People's Republic of China). https://doi.org/10.1049/iet-cvi.2013.0250 robust visual tracking; watershed regions; high-level appearance model; low-level feature model; middle-level visual cues; semantic information |
spellingShingle | Wangsheng Yu Xiaohua Tian Zhiqiang Hou Yufei Zha Robust visual tracking based on watershed regions IET Computer Vision robust visual tracking watershed regions high-level appearance model low-level feature model middle-level visual cues semantic information |
title | Robust visual tracking based on watershed regions |
title_full | Robust visual tracking based on watershed regions |
title_fullStr | Robust visual tracking based on watershed regions |
title_full_unstemmed | Robust visual tracking based on watershed regions |
title_short | Robust visual tracking based on watershed regions |
title_sort | robust visual tracking based on watershed regions |
topic | robust visual tracking; watershed regions; high-level appearance model; low-level feature model; middle-level visual cues; semantic information |
url | https://doi.org/10.1049/iet-cvi.2013.0250 |
work_keys_str_mv | AT wangshengyu robustvisualtrackingbasedonwatershedregions AT xiaohuatian robustvisualtrackingbasedonwatershedregions AT zhiqianghou robustvisualtrackingbasedonwatershedregions AT yufeizha robustvisualtrackingbasedonwatershedregions |