Coarse-to-fine online learning for hand segmentation in egocentric video
Abstract Hand segmentation is one of the most fundamental and crucial steps for egocentric human-computer interaction. The egocentric view brings new challenges to hand segmentation, such as unpredictable environmental conditions. The performance of traditional hand segmentation methods depends on abundant manually labeled training data. However, these approaches fail to capture the full properties of egocentric human-computer interaction because they neglect the user-specific context: it is only necessary to build a personalized hand model of the active user. Based on this observation, we propose an online-learning hand segmentation approach that requires no manually labeled training data. Our approach consists of top-down classifications and bottom-up optimizations. More specifically, we divide the segmentation task into three parts: a frame-level hand detector, which detects the presence of the interactive hand using motion saliency and initializes hand masks for online learning; a superpixel-level hand classifier, which coarsely segments hand regions and selects stable samples for the next level; and a pixel-level hand classifier, which produces a fine-grained hand segmentation. Based on the pixel-level classification result, we update the hand appearance model and optimize the upper-layer classifier and detector. This online-learning strategy makes our approach robust to varying illumination conditions and hand appearances. Experimental results demonstrate the robustness of our approach.
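The abstract outlines a three-level, top-down classification loop with bottom-up model updates. Below is a minimal sketch of that coarse-to-fine online-learning cycle. All function names, thresholds, and the toy 1-D "pixel intensity" data are hypothetical illustrations, not the authors' implementation:

```python
# Sketch of the coarse-to-fine online-learning loop from the abstract.
# Hypothetical stand-ins: real systems would use motion saliency maps,
# SLIC-style superpixels, and a learned appearance model over color features.
import random

random.seed(0)

def motion_saliency(frame):
    # Frame-level detection: a stand-in score for how salient hand motion is.
    return sum(frame) / len(frame)

def classify_superpixels(frame, model):
    # Superpixel-level: coarse hand/background labels plus a distance score
    # used later to pick "stable" samples.
    return [(px, abs(px - model["hand_mean"]) < model["tol"],
             abs(px - model["hand_mean"])) for px in frame]

def classify_pixels(samples, model):
    # Pixel-level: fine-grained decision, keeping only stable samples.
    return [px for px, is_hand, dist in samples
            if is_hand and dist < model["tol"] / 2]

def update_model(model, hand_pixels):
    # Bottom-up optimization: refresh the hand appearance model online.
    if hand_pixels:
        model["hand_mean"] = sum(hand_pixels) / len(hand_pixels)
    return model

model = {"hand_mean": 0.5, "tol": 0.2}
# Simulated video stream: each "frame" is a list of pixel intensities
# drawn around a true hand appearance of 0.55.
for _ in range(30):
    frame = [random.gauss(0.55, 0.05) for _ in range(100)]
    if motion_saliency(frame) < 0.3:
        continue  # no interactive hand detected in this frame
    coarse = classify_superpixels(frame, model)     # coarse segmentation
    hand_pixels = classify_pixels(coarse, model)    # fine segmentation
    model = update_model(model, hand_pixels)        # online update

print(round(model["hand_mean"], 2))
```

The point of the structure is that each level feeds the next with progressively more reliable samples, so the appearance model adapts to the active user without any manual labels.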
Main Authors: | Ying Zhao, Zhiwei Luo, Changqin Quan |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2018-04-01 |
Series: | EURASIP Journal on Image and Video Processing |
Subjects: | Hand detection; Hand segmentation; Egocentric; Unsupervised online learning |
Online Access: | http://link.springer.com/article/10.1186/s13640-018-0262-1 |
_version_ | 1818672322704310272 |
---|---|
author | Ying Zhao Zhiwei Luo Changqin Quan |
collection | DOAJ |
description | Abstract Hand segmentation is one of the most fundamental and crucial steps for egocentric human-computer interaction. The egocentric view brings new challenges to hand segmentation, such as unpredictable environmental conditions. The performance of traditional hand segmentation methods depends on abundant manually labeled training data. However, these approaches fail to capture the full properties of egocentric human-computer interaction because they neglect the user-specific context: it is only necessary to build a personalized hand model of the active user. Based on this observation, we propose an online-learning hand segmentation approach that requires no manually labeled training data. Our approach consists of top-down classifications and bottom-up optimizations. More specifically, we divide the segmentation task into three parts: a frame-level hand detector, which detects the presence of the interactive hand using motion saliency and initializes hand masks for online learning; a superpixel-level hand classifier, which coarsely segments hand regions and selects stable samples for the next level; and a pixel-level hand classifier, which produces a fine-grained hand segmentation. Based on the pixel-level classification result, we update the hand appearance model and optimize the upper-layer classifier and detector. This online-learning strategy makes our approach robust to varying illumination conditions and hand appearances. Experimental results demonstrate the robustness of our approach. |
first_indexed | 2024-12-17T07:38:04Z |
format | Article |
id | doaj.art-53b8ca8513d7499084a97f24ce1c8f5d |
institution | Directory Open Access Journal |
issn | 1687-5281 |
language | English |
last_indexed | 2024-12-17T07:38:04Z |
publishDate | 2018-04-01 |
publisher | SpringerOpen |
record_format | Article |
series | EURASIP Journal on Image and Video Processing |
spelling | doaj.art-53b8ca8513d7499084a97f24ce1c8f5d (2022-12-21T21:58:16Z, eng). SpringerOpen, EURASIP Journal on Image and Video Processing, ISSN 1687-5281, 2018-04-01, doi:10.1186/s13640-018-0262-1. Coarse-to-fine online learning for hand segmentation in egocentric video. Ying Zhao (Ricoh Software Research Center (Beijing) Co., Ltd); Zhiwei Luo (Graduate School of System Informatics, Kobe University); Changqin Quan (Graduate School of System Informatics, Kobe University). http://link.springer.com/article/10.1186/s13640-018-0262-1 |
title | Coarse-to-fine online learning for hand segmentation in egocentric video |
topic | Hand detection Hand segmentation Egocentric Unsupervised online learning |
url | http://link.springer.com/article/10.1186/s13640-018-0262-1 |