Cross-modal learning from visual information for activity recognition on inertial sensors
The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for human activity recognition (HAR) from wearable inertial sensor data. Labeled data are scarce because sensor data collection is expensive and annotation is time-consuming an...
| Field | Value |
|---|---|
| First author | Tong, EGC |
| Other authors | Lane, ND |
| Format | Thesis |
| Language | English |
| Published | 2023 |
| Subjects | |
Similar Items
- Resilience of Machine Learning Models in Anxiety Detection: Assessing the Impact of Gaussian Noise on Wearable Sensors
  Authors: Abdulrahman Alkurdi, et al.
  Published: (2024-12-01)
- Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors
  Authors: Ang Ji, et al.
  Published: (2023-07-01)
- Model-Agnostic Structural Transfer Learning for Cross-Domain Autonomous Activity Recognition
  Authors: Parastoo Alinia, et al.
  Published: (2023-07-01)
- Extending Anxiety Detection from Multimodal Wearables in Controlled Conditions to Real-World Environments
  Authors: Abdulrahman Alkurdi, et al.
  Published: (2025-02-01)
- A Review of Deep Transfer Learning and Recent Advancements
  Authors: Mohammadreza Iman, et al.
  Published: (2023-03-01)