Decoding Visual Motions from EEG Using Attention-Based RNN
The main objective of this paper is to use deep neural networks to decode the electroencephalography (EEG) signals evoked when individuals perceive four types of motion stimuli (contraction, expansion, rotation, and translation). Methods for single-trial and multi-trial EEG classification are both i...
Main Authors: Dongxu Yang, Yadong Liu, Zongtan Zhou, Yang Yu, Xinbin Liang
Format: Article
Language: English
Published: MDPI AG, 2020-08-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/10/16/5662
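The record above names an attention-based RNN decoder for four motion classes but gives no architectural details. Purely as a hypothetical illustration of that idea (not the authors' actual model), the sketch below shows a GRU encoder with additive attention pooling for 4-class EEG classification in PyTorch; channel count, hidden size, and epoch length are invented placeholders.

```python
# Hypothetical sketch: GRU encoder + additive attention pooling for 4-class
# EEG decoding (contraction, expansion, rotation, translation).
# All hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class AttentionRNNClassifier(nn.Module):
    def __init__(self, n_channels=64, hidden_size=128, n_classes=4):
        super().__init__()
        # EEG input: (batch, time, channels); the GRU reads the time axis.
        self.rnn = nn.GRU(n_channels, hidden_size, batch_first=True,
                          bidirectional=True)
        # Additive attention scores one weight per time step.
        self.attn = nn.Linear(2 * hidden_size, 1)
        self.fc = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, x):                       # x: (batch, time, channels)
        h, _ = self.rnn(x)                      # h: (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        context = (w * h).sum(dim=1)            # weighted sum: (batch, 2*hidden)
        return self.fc(context)                 # class logits

# Example: a batch of 8 single-trial epochs, 500 samples x 64 channels.
logits = AttentionRNNClassifier()(torch.randn(8, 500, 64))
print(logits.shape)  # torch.Size([8, 4])
```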
Similar Items
- Decoding covert visual attention to motion direction using graph theory features of EEG signals and quadratic discriminant analysis
  by: Zeinab Rezaei, et al.
  Published: (2024-12-01)
- EEG-based Incongruency Decoding in AR with sLDA, SVM, and EEGNet
  by: Wimmer Michael, et al.
  Published: (2024-10-01)
- AttentionCARE: replicability of a BCI for the clinical application of augmented reality-guided EEG-based attention modification for adolescents at high risk for depression
  by: Richard Gall, et al.
  Published: (2024-07-01)
- Enhanced System Robustness of Asynchronous BCI in Augmented Reality Using Steady-State Motion Visual Evoked Potential
  by: Aravind Ravi, et al.
  Published: (2022-01-01)
- Evaluating the Feasibility of Visual Imagery for an EEG-Based Brain–Computer Interface
  by: Justin Kilmarx, et al.
  Published: (2024-01-01)