Utility of Crowdsourced User Experiments for Measuring the Central Tendency of User Performance: A Case of Error-Rate Model Evaluation in a Pointing Task
The use of crowdsourcing to recruit numerous participants has been recognized as beneficial in the human-computer interaction (HCI) field, e.g., for designing user interfaces and validating user performance models. In this work, we investigate its effectiveness for evaluating an error-rate prediction...
Main Author: Shota Yamanaka
Format: Article
Language: English
Published: Frontiers Media S.A., 2022-03-01
Series: Frontiers in Artificial Intelligence
Online Access: https://www.frontiersin.org/articles/10.3389/frai.2022.798892/full
Similar Items
- Evaluation of Computer-Based Target Achievement Tests for Myoelectric Control
  by: Jacob Gusman, et al. Published: (2017-01-01)
- What Makes a UI Simple? Difficulty and Complexity in Tasks Engaging Visual-Spatial Working Memory
  by: Maxim Bakaev, et al. Published: (2021-01-01)
- From Idea Crowdsourcing to Managing User Knowledge
  by: Risto Rajala, et al. Published: (2013-12-01)
- Modeling Angle-Based Pointing Tasks in Augmented Reality Interfaces
  by: Sichen Jin, et al. Published: (2020-01-01)
- Crowdsourced Evaluation of Robot Programming Environments: Methodology and Application
  by: Daria Piacun, et al. Published: (2021-11-01)