Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems
The development of information technology has added many conveniences to our lives. However, it also requires us to deal with many kinds of information, which can be a difficult task for elderly people or those who are unfamiliar with information devices. A technology that recognizes each p...
Main Authors: | Shun Chiba, Tomo Miyazaki, Yoshihiro Sugaya, Shinichiro Omachi |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2018-08-01 |
Series: | Journal of Sensor and Actuator Networks |
Subjects: | activity recognition; eye tracker; fisheye camera; viewpoint information |
Online Access: | http://www.mdpi.com/2224-2708/7/3/31 |
_version_ | 1818359563086200832 |
author | Shun Chiba; Tomo Miyazaki; Yoshihiro Sugaya; Shinichiro Omachi
author_sort | Shun Chiba |
collection | DOAJ |
description | The development of information technology has added many conveniences to our lives. However, it also requires us to deal with many kinds of information, which can be a difficult task for elderly people or those who are unfamiliar with information devices. A technology that recognizes each person’s activity and provides appropriate support based on that activity could be useful for such people. In this paper, we propose a novel fine-grained activity recognition method for user support systems that focuses on identifying the text at which a user is gazing, based on the idea that the content of this text is related to the user’s activity. Because the meaning of a text depends on where it is located, we propose the simultaneous use of a wearable device and a fixed camera. To obtain the global location of the gazed text, we perform image matching using the local features of the images obtained by these two devices. We then generate a feature vector based on this location information and the content of the text. To show the effectiveness of the proposed approach, we performed activity recognition experiments with six subjects in a laboratory environment. |
first_indexed | 2024-12-13T20:46:53Z |
format | Article |
id | doaj.art-6b2054cbd18a433eb13aae69b07b549a |
institution | Directory Open Access Journal |
issn | 2224-2708 |
language | English |
last_indexed | 2024-12-13T20:46:53Z |
publishDate | 2018-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Journal of Sensor and Actuator Networks |
spelling | doaj.art-6b2054cbd18a433eb13aae69b07b549a; 2022-12-21T23:31:58Z; eng; MDPI AG; Journal of Sensor and Actuator Networks; ISSN 2224-2708; 2018-08-01; Vol. 7, Iss. 3, Art. 31; DOI 10.3390/jsan7030031; jsan7030031; Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems; Shun Chiba, Tomo Miyazaki, Yoshihiro Sugaya, Shinichiro Omachi (all: Graduate School of Engineering, Tohoku University, Aoba 6-6-05, Aramaki, Aoba-ku, Sendai 980-8579, Japan); http://www.mdpi.com/2224-2708/7/3/31; activity recognition; eye tracker; fisheye camera; viewpoint information
title | Activity Recognition Using Gazed Text and Viewpoint Information for User Support Systems |
topic | activity recognition; eye tracker; fisheye camera; viewpoint information
url | http://www.mdpi.com/2224-2708/7/3/31 |