Summary: | Motor imagery (MI) is an important part of brain-computer interface (BCI) research: it can decode a subject's intention and help remodel the neural system of stroke patients. Accurate decoding of electroencephalography- (EEG-) based motor imagery has therefore received considerable attention, especially in research on rehabilitation training. We propose a novel multifrequency brain network-based deep learning framework for motor imagery decoding. First, a multifrequency brain network is constructed from the multichannel MI-related EEG signals, with each layer corresponding to a specific brain frequency band. This structure matches the activity profile of the brain well, combining channel and multifrequency information. The filter bank common spatial pattern (FBCSP) algorithm then filters the MI-related EEG signals in the spatial domain to extract features. Further, a multilayer convolutional network is designed to extract and exploit the topology of the multifrequency brain network and thus distinguish different MI tasks accurately. We evaluate our framework on the public BCI Competition IV dataset 2a and the public BCI Competition III dataset IIIa and achieve state-of-the-art results on the former: the average accuracy is 83.83% with a kappa value of 0.784 on BCI Competition IV dataset 2a, and the accuracy is 89.45% with a kappa value of 0.859 on BCI Competition III dataset IIIa. These results demonstrate that our framework can effectively classify different MI tasks from multichannel EEG signals and has great potential for remodeling the neural system of stroke patients.
|
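To make the FBCSP feature-extraction step mentioned in the summary concrete, the sketch below illustrates the general technique; it is not the authors' implementation, and the band edges, filter order, and number of CSP filter pairs are assumed for illustration. EEG epochs are band-pass filtered into several sub-bands, CSP spatial filters are learned per band, and log-variance features are concatenated.

```python
# Illustrative FBCSP-style feature extraction (two-class case); assumptions noted above.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh

def bandpass(epochs, low, high, fs, order=4):
    """Band-pass filter epochs of shape (n_trials, n_channels, n_samples)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

def csp_filters(epochs, labels, n_pairs=2):
    """Learn CSP spatial filters for a two-class problem."""
    covs = []
    for c in np.unique(labels):
        x = epochs[labels == c]
        # Average trace-normalized spatial covariance per class.
        covs.append(np.mean([e @ e.T / np.trace(e @ e.T) for e in x], axis=0))
    # Generalized eigenvalue problem; extreme eigenvectors best separate the classes.
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T

def fbcsp_features(epochs, labels, fs,
                   bands=((4, 8), (8, 12), (12, 16), (16, 30))):
    """Concatenate log-variance CSP features over all frequency bands."""
    feats = []
    for low, high in bands:
        x = bandpass(epochs, low, high, fs)
        w = csp_filters(x, labels)
        z = np.einsum("fc,tcs->tfs", w, x)        # spatially filtered trials
        feats.append(np.log(np.var(z, axis=-1)))  # log-variance per CSP filter
    return np.hstack(feats)
```

For the four-class MI datasets cited in the summary, this two-class formulation would typically be extended with a one-versus-rest or pairwise scheme before the features are passed to the downstream classifier.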