Decoding reach-to-grasp from EEG using classifiers trained with data from the contralateral limb

Bibliographic Details
Main Authors: Kevin Hooks (Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL, United States), Refaat El-Said (College of Medicine, University of Central Florida, Orlando, FL, United States), Qiushi Fu (Mechanical and Aerospace Engineering and Biionix Cluster, University of Central Florida, Orlando, FL, United States)
Format: Article
Language: English
Published: Frontiers Media S.A., 2023-11-01
Series: Frontiers in Human Neuroscience
ISSN: 1662-5161
DOI: 10.3389/fnhum.2023.1302647
Subjects: electroencephalography; brain-machine interface; decoding; reaching; grasping; visuomotor transformation
Online Access: https://www.frontiersin.org/articles/10.3389/fnhum.2023.1302647/full

Description
Fundamental to human movement is the ability to interact with objects in our environment. How one reaches for an object depends on the object’s shape and the intended interaction it affords, e.g., grasp and transport. Extensive research has revealed that the motor intention of reach-to-grasp can be decoded from cortical activity using EEG signals. The goal of the present study was to determine the extent to which the information encoded in EEG signals is shared between the two limbs, enabling cross-hand decoding. We performed an experiment in which human subjects (n = 10) interacted with a novel object with multiple affordances using either the right or the left hand. The object had two vertical handles attached to a horizontal base. A visual cue instructed which action (lift or touch) to perform and which handle (left or right) to use on each trial. EEG was recorded and processed from bilateral frontal-central-parietal regions (30 channels). We trained LDA classifiers on data from trials performed by one limb and tested classification accuracy on data from trials performed by the contralateral limb. We found that the type of hand-object interaction could be decoded with approximately 59% and 69% peak accuracy in the planning and execution stages, respectively. Interestingly, the decoding accuracy for reaching direction depended on how the EEG channels in the testing dataset were spatially mirrored, and on whether directions were labeled in extrinsic (object-centered) or intrinsic (body-centered) coordinates.
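
The record contains no code, but the cross-hand decoding scheme the abstract describes is straightforward to sketch. Below is a minimal, hypothetical Python illustration (NumPy + scikit-learn) of training an LDA classifier on trials from one hand and testing it on spatially mirrored trials from the other. The MIRROR_PAIRS mapping, array shapes, function names, and synthetic data are assumptions for illustration only, not the authors' implementation.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical left<->right channel pairing for a 30-channel
# frontal-central-parietal montage (e.g., channel 0 = C3, channel 29 = C4).
# The true pairing depends on the study's electrode layout.
MIRROR_PAIRS = [(0, 29), (1, 28), (2, 27)]  # ...extend to all lateral pairs

def mirror_channels(X, pairs):
    """Swap homologous left/right channels across the midline.
    X: array of shape (trials, channels, time_features)."""
    Xm = X.copy()
    for left, right in pairs:
        Xm[:, left] = X[:, right]
        Xm[:, right] = X[:, left]
    return Xm

def cross_hand_accuracy(X_train, y_train, X_test, y_test, mirror=True):
    """Train an LDA decoder on trials from one hand, test on the other.
    Optionally mirror the test channels first, as in the mirrored-channel
    condition described in the abstract."""
    if mirror:
        X_test = mirror_channels(X_test, MIRROR_PAIRS)
    clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
    clf.fit(X_train.reshape(len(X_train), -1), y_train)
    return clf.score(X_test.reshape(len(X_test), -1), y_test)

# Example with synthetic data: 80 right-hand training trials and
# 80 left-hand test trials, 30 channels, 50 temporal features each.
rng = np.random.default_rng(0)
X_right, y_right = rng.standard_normal((80, 30, 50)), rng.integers(0, 2, 80)
X_left, y_left = rng.standard_normal((80, 30, 50)), rng.integers(0, 2, 80)
print(cross_hand_accuracy(X_right, y_right, X_left, y_left))

Note that the extrinsic versus intrinsic labeling question the abstract raises would show up here not in the features but in the labels: relabeling a "left handle" reach as an "ipsilateral handle" reach amounts to flipping the direction classes in y_test rather than mirroring X_test.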