Indexed in:
Abstract:
Research on brain-computer interfaces (BCIs) aims to identify which limb a subject uses to generate motor imagery (MI) by decoding physiological brain signals. The features extracted by traditional electroencephalography (EEG)-based decoding methods are limited in type and quantity, which restricts the decoding performance of classification methods. To address this problem, we proposed a complementary feature fusion network (CFFNet) based on EEG and functional near-infrared spectroscopy (fNIRS) signals, since the two modalities carry complementary information. The CFFNet method integrates a feature extraction block, a feature selection block, and a complementary feature fusion block, and performs shared-specific feature learning for complementary feature fusion, which enables it to effectively exploit the shared and specific information of each modality. This approach and representative MI recognition methods were evaluated on an open multimodal dataset. Our method achieved an average accuracy of 76.45% in intra-subject experiments, significantly higher than that of single-modal classification methods and slightly higher than that of representative multi-modal BCI methods. Comprehensive experimental results verify the effectiveness of the proposed method, which offers a novel perspective for multi-modal decoding. © 2024 ACM.
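For orientation, the following is a minimal PyTorch sketch of a CFFNet-style two-branch EEG/fNIRS classifier with feature extraction, gated feature selection, and shared-specific fusion. The abstract does not specify the architecture, so all layer choices, channel counts, sizes, and names (`Branch`, `CFFNetSketch`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; layer designs and hyperparameters are assumed.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """Per-modality feature extraction plus a gated feature selection step (assumed design)."""

    def __init__(self, in_ch: int, feat_dim: int = 64):
        super().__init__()
        self.extract = nn.Sequential(                # feature extraction block (assumed 1-D CNN)
            nn.Conv1d(in_ch, feat_dim, kernel_size=7, padding=3),
            nn.BatchNorm1d(feat_dim),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # collapse the time axis
        )
        self.select = nn.Linear(feat_dim, feat_dim)  # feature selection block (assumed soft gating)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.extract(x).squeeze(-1)              # (batch, feat_dim)
        gate = torch.sigmoid(self.select(f))         # per-feature selection weights
        return f * gate


class CFFNetSketch(nn.Module):
    """Shared-specific complementary fusion of EEG and fNIRS features (illustrative)."""

    def __init__(self, eeg_ch: int = 30, fnirs_ch: int = 36,
                 feat_dim: int = 64, n_classes: int = 2):
        super().__init__()
        self.eeg_branch = Branch(eeg_ch, feat_dim)
        self.fnirs_branch = Branch(fnirs_ch, feat_dim)
        self.shared = nn.Linear(feat_dim, feat_dim)      # shared projection applied to both modalities
        self.spec_eeg = nn.Linear(feat_dim, feat_dim)    # EEG-specific projection
        self.spec_fnirs = nn.Linear(feat_dim, feat_dim)  # fNIRS-specific projection
        self.classifier = nn.Linear(4 * feat_dim, n_classes)

    def forward(self, eeg: torch.Tensor, fnirs: torch.Tensor) -> torch.Tensor:
        fe, ff = self.eeg_branch(eeg), self.fnirs_branch(fnirs)
        fused = torch.cat([self.shared(fe), self.shared(ff),
                           self.spec_eeg(fe), self.spec_fnirs(ff)], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = CFFNetSketch()
    eeg = torch.randn(8, 30, 400)    # (batch, EEG channels, time samples) -- assumed shapes
    fnirs = torch.randn(8, 36, 100)  # (batch, fNIRS channels, time samples) -- assumed shapes
    print(model(eeg, fnirs).shape)   # torch.Size([8, 2])
```

Concatenating the shared and modality-specific projections before the classifier is one simple way to expose both kinds of information to the decision layer; the paper's actual fusion block may differ.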
Keywords:
Corresponding author:
Email address:
Source:
Year: 2024
Pages: 278-282
Language: English
Department: