FaceSense: sensing face touch with an ear-worn system
Face touch is an unconscious human habit. Frequent touching of sensitive/mucosal facial zones (eyes, nose, and mouth) increases health risks by passing pathogens into the body and spreading diseases. Furthermore, accurate monitoring of face touch is critical for behavioral intervention. Existing monitoring systems only capture objects approaching the face, rather than detecting actual touches. As such, these systems are prone to false positives when a hand or object moves near the face (e.g., picking up a phone). We present FaceSense, an ear-worn system capable of identifying actual touches and differentiating touches on sensitive/mucosal areas from those on other facial areas. Following a multimodal approach, FaceSense integrates low-resolution thermal images and physiological signals. Thermal sensors capture the infrared signal emitted by an approaching hand, while physiological sensors monitor impedance changes caused by skin deformation during a touch. Processed thermal and physiological signals are fed into a deep learning model (TouchNet) to detect touches and identify the facial zone of the touch. We fabricated prototypes using off-the-shelf hardware and conducted experiments with 14 participants while they performed various daily activities (e.g., drinking, talking). Results show a macro-F1-score of 83.4% for touch detection with leave-one-user-out cross-validation and a macro-F1-score of 90.1% for touch zone identification with a personalized model.
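The evaluation protocol named in the abstract, leave-one-user-out cross-validation scored with macro-F1, can be sketched as follows. This is a minimal illustration on synthetic data: the feature vectors, the classifier, and the learnable class offset are all placeholders, not a reproduction of FaceSense's fused thermal/physiological features or the TouchNet model.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 14 participants, 40 samples each, with an
# 8-dimensional feature vector (placeholder for fused thermal +
# physiological features) and a binary touch / no-touch label.
n_users, n_samples, n_feats = 14, 40, 8
X = rng.normal(size=(n_users * n_samples, n_feats))
y = rng.integers(0, 2, size=n_users * n_samples)
groups = np.repeat(np.arange(n_users), n_samples)
# Give the positive class a learnable signature so the toy
# classifier beats chance.
X[y == 1, 0] += 2.0

# Leave-one-user-out: each fold trains on 13 participants and
# tests on the held-out one, so the score reflects generalization
# to an unseen wearer.
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    # Macro-F1 averages per-class F1 scores, weighting the rarer
    # class (actual touches) equally with the majority class.
    scores.append(f1_score(y[test_idx], pred, average="macro"))

print(f"macro-F1, leave-one-user-out ({len(scores)} folds): "
      f"{np.mean(scores):.3f}")
```

Macro-averaging matters here because touch events are typically much rarer than non-touch frames; plain accuracy would reward a classifier that never predicts a touch.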
Main authors: | Kakaraparthi, V; Shao, Q; Carver, CJ; Pham, T; Bui, N; Nguyen, P; Zhou, X; Vu, T |
---|---|
Format: | Journal article |
Language: | English |
Published: | Association for Computing Machinery, 2021 |
---|---|
author | Kakaraparthi, V Shao, Q Carver, CJ Pham, T Bui, N Nguyen, P Zhou, X Vu, T |
collection | OXFORD |
description | Face touch is an unconscious human habit. Frequent touching of sensitive/mucosal facial zones (eyes, nose, and mouth) increases health risks by passing pathogens into the body and spreading diseases. Furthermore, accurate monitoring of face touch is critical for behavioral intervention. Existing monitoring systems only capture objects approaching the face, rather than detecting actual touches. As such, these systems are prone to false positives when a hand or object moves near the face (e.g., picking up a phone). We present FaceSense, an ear-worn system capable of identifying actual touches and differentiating touches on sensitive/mucosal areas from those on other facial areas. Following a multimodal approach, FaceSense integrates low-resolution thermal images and physiological signals. Thermal sensors capture the infrared signal emitted by an approaching hand, while physiological sensors monitor impedance changes caused by skin deformation during a touch. Processed thermal and physiological signals are fed into a deep learning model (TouchNet) to detect touches and identify the facial zone of the touch. We fabricated prototypes using off-the-shelf hardware and conducted experiments with 14 participants while they performed various daily activities (e.g., drinking, talking). Results show a macro-F1-score of 83.4% for touch detection with leave-one-user-out cross-validation and a macro-F1-score of 90.1% for touch zone identification with a personalized model. |
format | Journal article |
id | oxford-uuid:71d1232e-d65c-415e-a167-6206a58a04e2 |
institution | University of Oxford |
language | English |
publishDate | 2021 |
publisher | Association for Computing Machinery |
record_format | dspace |
title | FaceSense: sensing face touch with an ear-worn system |