Extraction and classification of tempo stimuli from electroencephalography recordings using convolutional recurrent attention model

Bibliographic Details
Main Authors: Gi Yong Lee, Min‐Soo Kim, Hyoung‐Gook Kim
Format: Article
Language: English
Published: Electronics and Telecommunications Research Institute (ETRI) 2021-11-01
Series: ETRI Journal
Online Access: https://doi.org/10.4218/etrij.2021-0174
Description
Summary: Electroencephalography (EEG) recordings taken during the perception of music tempo contain information from which the tempo of a music piece can be estimated. If this tempo-stimulus information can be extracted from EEG recordings and classified, it can be used effectively to construct a music-based brain–computer interface. This study proposes a novel convolutional recurrent attention model (CRAM) to extract and classify features corresponding to tempo stimuli from EEG recordings of listeners who concentrated on the tempo of music pieces. The proposed CRAM comprises six modules: network inputs, a two-dimensional convolutional bidirectional gated recurrent unit-based sample encoder, sample-level intuitive attention, a segment encoder, segment-level intuitive attention, and a softmax layer. Together, these modules effectively model spatiotemporal features and improve the classification accuracy of tempo stimuli. To evaluate the proposed method's performance, we conducted experiments on two benchmark datasets. The proposed method achieves promising results, outperforming recent methods.
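This record gives no implementation details beyond the module names, but the sample- and segment-level attention modules suggest softmax-weighted pooling over encoder outputs, a common pattern in attention-based EEG classifiers. A minimal NumPy sketch of that pooling idea follows; the shapes, parameter names, and tanh scoring function are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(h, w, b, u):
    """Collapse a sequence of encoder outputs h (T, D) into one vector.

    Each time step is scored, scores are normalized with softmax,
    and the outputs are combined as a weighted sum (the context vector).
    """
    scores = np.tanh(h @ w + b) @ u      # (T,) one score per time step
    alpha = softmax(scores)              # attention weights, sum to 1
    return alpha @ h, alpha              # context (D,), weights (T,)

# Illustrative dimensions: 8 time steps of 4-dim encoder features.
rng = np.random.default_rng(0)
T, D = 8, 4
h = rng.standard_normal((T, D))          # stand-in for encoder outputs
w = rng.standard_normal((D, D))
b = np.zeros(D)
u = rng.standard_normal(D)
context, alpha = attention_pool(h, w, b, u)
```

In a two-level scheme such as the one named in the abstract, a pool like this would first summarize samples within each segment, and a second attention layer would then summarize the segment vectors before the softmax classifier.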
ISSN: 1225-6463