Operator matching during visually aided teleoperation
Two contrasting models are proposed to account for an operator's performance during an insertion task using a teleoperated robot arm in which, in addition to haptic feedback, visual guidance is provided via a computer-generated display of the workspace. In the first model, the operator's internal aim is formulated as one of maximising the amount of information available, and in the second as one of minimising variance. Experimental measurements of the times to complete such a task are made, with various degrees of noise added to the pose of objects and different smoothing applied before generating the display. The observations appear inconsistent with the first performance model, suggesting that the operator prefers to use partial information rapidly, rather than to suffer the delay associated with extracting full information. The observations are more consistent with the operator as a minimiser of variance, an idea used successfully (albeit embodied in a different controller) by others in the modelling of human eye and arm trajectories, and in the prediction of the empirical Fitts' law found in reaching and touching tasks. It is found that under relatively high noise, the operator performs best when pose is low-pass-filtered with a cut-off frequency comparable with the natural frequency at which the operator interacts with the environment. © 2004 Elsevier B.V. All rights reserved.
Main Authors: Thompson, R; McAree, P; Daniel, R; Murray, D
Format: Journal article
Language: English
Published: 2005
author | Thompson, R McAree, P Daniel, R Murray, D |
collection | OXFORD |
description | Two contrasting models are proposed to account for an operator's performance during an insertion task using a teleoperated robot arm in which, in addition to haptic feedback, visual guidance is provided via a computer-generated display of the workspace. In the first model, the operator's internal aim is formulated as one of maximising the amount of information available, and in the second as one of minimising variance. Experimental measurements of the times to complete such a task are made, with various degrees of noise added to the pose of objects and different smoothing applied before generating the display. The observations appear inconsistent with the first performance model, suggesting that the operator prefers to use partial information rapidly, rather than to suffer the delay associated with extracting full information. The observations are more consistent with the operator as a minimiser of variance, an idea used successfully (albeit embodied in a different controller) by others in the modelling of human eye and arm trajectories, and in the prediction of the empirical Fitts' law found in reaching and touching tasks. It is found that under relatively high noise, the operator performs best when pose is low-pass-filtered with a cut-off frequency comparable with the natural frequency at which the operator interacts with the environment. © 2004 Elsevier B.V. All rights reserved. |
id | oxford-uuid:0c7e9454-166b-4ce4-9221-28d1ad738621 |
institution | University of Oxford |