Indexed in:
Abstract:
In BCI rehabilitation systems, decoding motor imagery tasks (MI-tasks) with dipoles in the source domain has gradually become a new research focus. For complex multiclass MI-tasks, the number of activated dipoles is large, and the activation area, activation time and intensity differ across subjects. Identifying a smaller, subject-specific set of dipoles is therefore very important. There are two main methods of dipole selection: one based on physiological functional partition theory and the other based on human experience. However, the number of dipoles selected by both methods is still large and contains redundant information, and the selected dipoles are identical in number and position across subjects, which is not necessarily ideal for distinguishing different MI-tasks. In this paper, a data-driven method is used to preliminarily select fully activated dipoles with large amplitudes; the obtained dipoles are then refined using the continuous wavelet transform (CWT) so that they best reflect the differences among the multiclass MI-tasks, yielding a subject-based dipole selection method named PRDS. PRDS is further used to decode multiclass MI-tasks: representative dipoles are selected, their wavelet coefficient power is calculated and fed into a one-vs.-one common spatial pattern (OVO-CSP) algorithm for feature extraction, and the resulting features are classified by a support vector machine. We denote this decoding method D-CWTCSP; it enhances the spatial resolution and makes full use of time-frequency-spatial domain information. Experiments are carried out on a public dataset with nine subjects and four classes of MI-tasks, and the proposed D-CWTCSP is compared with related methods in sensor space and brain-source space in terms of decoding accuracy, standard deviation, recall rate and kappa value. The experimental results show that D-CWTCSP reaches an average decoding accuracy of 82.66% across the nine subjects, an improvement of 8-20% over the compared methods, demonstrating its superiority in decoding accuracy. (C) 2020 Elsevier B.V. All rights reserved.
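The abstract outlines the D-CWTCSP pipeline: CWT power of selected dipole time courses, OVO-CSP feature extraction, and SVM classification. The following is a minimal illustrative sketch of such a pipeline, not the authors' implementation. It assumes the subject-specific dipole time courses have already been selected (e.g., by a PRDS-like step); the array shapes, wavelet, scales, and all other parameter values are assumptions chosen only for illustration, using PyWavelets, MNE's CSP and scikit-learn.

```python
# Sketch of a CWT-power + one-vs.-one CSP + SVM decoding pipeline.
# All shapes and parameters below are illustrative assumptions, not the
# settings used in the paper.
import numpy as np
import pywt
from itertools import combinations
from mne.decoding import CSP
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 12, 250))  # 80 trials, 12 selected dipoles, 1 s at 250 Hz (synthetic)
y = rng.integers(0, 4, size=80)         # four MI classes (synthetic labels)

def cwt_power(trials, scales=np.arange(4, 32), wavelet="morl"):
    """Wavelet-coefficient power of each dipole time course, averaged over scales."""
    out = []
    for trial in trials:
        coeffs, _ = pywt.cwt(trial, scales, wavelet, axis=-1)  # (n_scales, n_dipoles, n_samples)
        out.append((coeffs ** 2).mean(axis=0))                 # power time course per dipole
    return np.stack(out)                                       # (n_trials, n_dipoles, n_samples)

P = cwt_power(X)

def ovo_csp_features(P, y, n_components=4):
    """One-vs.-one CSP: fit CSP filters for every class pair, concatenate log-variance features."""
    feats = []
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        csp = CSP(n_components=n_components, log=True)
        csp.fit(P[mask], y[mask])
        feats.append(csp.transform(P))
    return np.hstack(feats)

F = ovo_csp_features(P, y)
clf = SVC(kernel="rbf", C=1.0)
print(cross_val_score(clf, F, y, cv=5).mean())  # around chance level on this random data
```

For brevity the sketch fits the pairwise CSP filters once on all trials; in a real evaluation they would be refitted inside each cross-validation fold to avoid information leakage.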
Keywords:
Corresponding author information:
E-mail address: