Cross-modal learning from visual information for activity recognition on inertial sensors
The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for human activity recognition (HAR) from wearable inertial sensor data. Labeled data is scarce, as sensor data collection is expensive and its annotation is time-consuming an...
| Main Author: | Tong, EGC |
| --- | --- |
| Other Authors: | Lane, ND |
| Format: | Thesis |
| Language: | English |
| Published: | 2023 |
| Subjects: | |
Similar Items
- Resilience of Machine Learning Models in Anxiety Detection: Assessing the Impact of Gaussian Noise on Wearable Sensors
  by: Abdulrahman Alkurdi, et al.
  Published: (2024-12-01)
- Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors
  by: Ang Ji, et al.
  Published: (2023-07-01)
- Model-Agnostic Structural Transfer Learning for Cross-Domain Autonomous Activity Recognition
  by: Parastoo Alinia, et al.
  Published: (2023-07-01)
- Extending Anxiety Detection from Multimodal Wearables in Controlled Conditions to Real-World Environments
  by: Abdulrahman Alkurdi, et al.
  Published: (2025-02-01)
- A Review of Deep Transfer Learning and Recent Advancements
  by: Mohammadreza Iman, et al.
  Published: (2023-03-01)