Cross-modal learning from visual information for activity recognition on inertial sensors
<p>The lack of large-scale, labeled datasets impedes progress in developing robust and generalized predictive models for human activity recognition (HAR) from wearable inertial sensor data. Labeled data is scarce as sensor data collection is expensive, and its annotation is time-consuming an...
Main author: Tong, EGC
Other authors: Lane, ND
Format: Thesis
Language: English
Published: 2023
Subjects:
Similar Items
- Resilience of Machine Learning Models in Anxiety Detection: Assessing the Impact of Gaussian Noise on Wearable Sensors
  by: Abdulrahman Alkurdi, et al.
  Published: (2024-12-01)
- Dataglove for Sign Language Recognition of People with Hearing and Speech Impairment via Wearable Inertial Sensors
  by: Ang Ji, et al.
  Published: (2023-07-01)
- Model-Agnostic Structural Transfer Learning for Cross-Domain Autonomous Activity Recognition
  by: Parastoo Alinia, et al.
  Published: (2023-07-01)
- Extending Anxiety Detection from Multimodal Wearables in Controlled Conditions to Real-World Environments
  by: Abdulrahman Alkurdi, et al.
  Published: (2025-02-01)
- A Review of Deep Transfer Learning and Recent Advancements
  by: Mohammadreza Iman, et al.
  Published: (2023-03-01)