Cross-modal learning from visual information for activity recognition on inertial sensors
The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for human activity recognition (HAR) from wearable inertial sensor data. Labeled data is scarce as sensor data collection is expensive, and their annotation is time-consuming an…
Main author: Tong, EGC
Other authors: Lane, ND
Format: Thesis
Language: English
Published: 2023
Similar documents
- Resilience of Machine Learning Models in Anxiety Detection: Assessing the Impact of Gaussian Noise on Wearable Sensors
  by: Abdulrahman Alkurdi, et al.
  Published: (2024-12-01)
- Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors
  by: Ang Ji, et al.
  Published: (2023-07-01)
- Model-Agnostic Structural Transfer Learning for Cross-Domain Autonomous Activity Recognition
  by: Parastoo Alinia, et al.
  Published: (2023-07-01)
- Extending Anxiety Detection from Multimodal Wearables in Controlled Conditions to Real-World Environments
  by: Abdulrahman Alkurdi, et al.
  Published: (2025-02-01)
- A Review of Deep Transfer Learning and Recent Advancements
  by: Mohammadreza Iman, et al.
  Published: (2023-03-01)