Model-Agnostic Structural Transfer Learning for Cross-Domain Autonomous Activity Recognition


Bibliographic Details
Main Authors: Parastoo Alinia, Asiful Arefeen, Zhila Esna Ashari, Seyed Iman Mirzadeh, Hassan Ghasemzadeh
Format: Article
Language: English
Published: MDPI AG 2023-07-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/23/14/6337
Description
Summary: Activity recognition using data collected with smart devices such as mobile and wearable sensors has become a critical component of many emerging applications ranging from behavioral medicine to gaming. However, an unprecedented increase in the diversity of smart devices in the internet-of-things era has limited the adoption of activity recognition models for use across different devices. This lack of cross-domain adaptation is particularly notable across sensors of different modalities, where the mapping of the sensor data at the traditional feature level is highly challenging. To address this challenge, we propose <i>ActiLabel</i>, a combinatorial framework that learns structural similarities among the events that occur in a target domain and those of a source domain and identifies an optimal mapping between the two domains at their structural level. The structural similarities are captured through a graph model, referred to as the <i>dependency graph</i>, which abstracts details of activity patterns in low-level signal and feature space. The activity labels are then autonomously learned in the target domain by finding an optimal tiered mapping between the dependency graphs. We carry out an extensive set of experiments on three large datasets collected with wearable sensors involving human subjects. The results demonstrate the superiority of ActiLabel over state-of-the-art transfer learning and deep learning methods.
In particular, ActiLabel outperforms such algorithms by average F1-score margins of 36.3%, 32.7%, and 9.1% for cross-modality, cross-location, and cross-subject activity recognition, respectively.
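To give a flavor of the structural-mapping idea the abstract describes, the toy sketch below is a heavily simplified, hypothetical illustration (not the authors' implementation): each domain's event sequence is turned into a small dependency graph from event transitions, and a node bijection between the target and source graphs is chosen by brute force to minimize a simple degree-mismatch cost, so that source labels can propagate to target events through the mapping. The event sequences, the transition-based graph construction, and the degree-based cost are all assumptions made for illustration only.

```python
from itertools import permutations


def build_dependency_graph(events):
    """Build an undirected adjacency map from consecutive event
    transitions (self-loops ignored). A crude stand-in for the
    dependency graph described in the abstract."""
    graph = {e: set() for e in events}
    for a, b in zip(events, events[1:]):
        if a != b:
            graph[a].add(b)
            graph[b].add(a)
    return graph


def structural_mapping(target_graph, source_graph):
    """Brute-force the target->source node bijection that minimizes
    total node-degree mismatch. Feasible only for tiny graphs; the
    paper's tiered mapping is a far more scalable formulation."""
    t_nodes, s_nodes = list(target_graph), list(source_graph)
    best, best_cost = None, float("inf")
    for perm in permutations(s_nodes):
        cost = sum(abs(len(target_graph[t]) - len(source_graph[s]))
                   for t, s in zip(t_nodes, perm))
        if cost < best_cost:
            best, best_cost = dict(zip(t_nodes, perm)), cost
    return best


# Hypothetical example: the labeled source events and the unlabeled
# target events share the same transition structure.
source = ["walk", "walk", "run", "walk", "sit"]   # labeled source domain
target = ["A", "A", "B", "A", "C"]                # unlabeled target domain
mapping = structural_mapping(build_dependency_graph(target),
                             build_dependency_graph(source))
# "A" is the hub node in both graphs, so it maps to "walk".
```

The key design point this sketch mirrors is that the mapping is computed over graph structure rather than raw feature values, which is what allows label transfer across sensor modalities whose feature spaces are not directly comparable.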
ISSN:1424-8220