Cross-modal learning from visual information for activity recognition on inertial sensors
The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for human activity recognition (HAR) from wearable inertial sensor data. Labeled data is scarce as sensor data collection is expensive, and their annotation is time-consuming an...
| Main Author: | Tong, EGC |
| --- | --- |
| Other Authors: | Lane, ND |
| Format: | Thesis |
| Language: | English |
| Published: | 2023 |
| Subjects: | |
Similar Items
- Resilience of Machine Learning Models in Anxiety Detection: Assessing the Impact of Gaussian Noise on Wearable Sensors
  by: Abdulrahman Alkurdi, et al.
  Published: (2024-12-01)
- Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors
  by: Ang Ji, et al.
  Published: (2023-07-01)
- Model-Agnostic Structural Transfer Learning for Cross-Domain Autonomous Activity Recognition
  by: Parastoo Alinia, et al.
  Published: (2023-07-01)
- Extending Anxiety Detection from Multimodal Wearables in Controlled Conditions to Real-World Environments
  by: Abdulrahman Alkurdi, et al.
  Published: (2025-02-01)
- A Review of Deep Transfer Learning and Recent Advancements
  by: Mohammadreza Iman, et al.
  Published: (2023-03-01)