Your search:
Scholar name: 李明爱 (Li Ming-ai)
Abstract:
Domain adaptation, an important branch of transfer learning, can be applied to cope with data insufficiency and high subject variability in motor imagery electroencephalogram (MI-EEG) based brain-computer interfaces. Existing methods generally focus on aligning data and feature distributions; however, aligning each source domain with the informative samples of the target domain, and seeking the most appropriate source domains to enhance classification, has not been considered. In this paper, we propose a dual alignment-based multi-source domain adaptation framework, denoted DAMSDAF. Based on the continuous wavelet transform, all channels of the MI-EEG signals are converted respectively, and the generated time-frequency spectrum images are stitched together to construct the multi-source domains and the target domain. Then, the informative samples close to the decision boundary are found in the target domain by using entropy, and they are employed to align and reassign each source domain with normalized mutual information. Furthermore, a multi-branch deep network (MBDN) is designed, and the maximum mean discrepancy is embedded in each branch to realign the branch-specific feature distribution. Each branch is trained separately on one aligned source domain, and the single-branch transfer accuracies are arranged in descending order and utilized for the weighted prediction of the MBDN; the most suitable number of source domains, those with the top weights, can thus be determined automatically. Extensive experiments are conducted on 3 public MI-EEG datasets. DAMSDAF achieves classification accuracies of 92.56%, 69.45% and 89.57%, and statistical analysis is performed with the kappa value and the t-test.
Experimental results show that DAMSDAF significantly improves the transfer effect compared to present methods, indicating that dual alignment can fully exploit differently weighted samples, and even source domains, at different levels, while realizing optimal selection of multi-source domains.
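Two ingredients named above, entropy-based selection of informative target samples and a maximum-mean-discrepancy penalty per branch, can be sketched as follows (a minimal NumPy illustration under our own assumptions, not the paper's implementation; `prediction_entropy`, `mmd_rbf`, and the toy softmax outputs are hypothetical):

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy of per-sample softmax outputs; high entropy marks
    samples near the decision boundary (the 'informative' samples)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel, the kind of
    feature-alignment term embedded in each branch of a network."""
    def k(A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Hypothetical softmax outputs for 100 target samples over 4 MI classes:
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=100)
H = prediction_entropy(probs)
informative = np.where(H > np.median(H))[0]   # samples kept for alignment
```

Identical distributions give an MMD of zero, so minimizing this term pulls a branch's source features toward the target features.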
Keywords:
Transfer learning; Weighted alignment; Maximum mean discrepancy; Motor imagery; Domain adaptation
Citation:
Copy and paste one of the preformatted citation styles, or use one of the links to import into reference-management software.
GB/T 7714: Xu, Dong-qin, Li, Ming-ai. A dual alignment-based multi-source domain adaptation framework for motor imagery EEG classification [J]. APPLIED INTELLIGENCE, 2022, 53(9): 10766-10788.
MLA: Xu, Dong-qin, et al. "A dual alignment-based multi-source domain adaptation framework for motor imagery EEG classification." APPLIED INTELLIGENCE 53.9 (2022): 10766-10788.
APA: Xu, Dong-qin, Li, Ming-ai. A dual alignment-based multi-source domain adaptation framework for motor imagery EEG classification. APPLIED INTELLIGENCE, 2022, 53(9), 10766-10788.
Abstract:
Electrooculogram (EOG) is an unavoidable main interference in electroencephalogram (EEG) acquisition and directly affects the analysis and application of EEG. Second-order blind identification (SOBI), a blind source separation (BSS) method, has been used to remove ocular artifacts (OA) from contaminated EEG. However, SOBI assumes the source signals to be stationary, which is inappropriate for nonstationary EEG signals and yields undesirable separation results. In addition, current discriminators of ocular artifacts, such as correlation coefficients and sample entropy, do not take the fuzzy characteristics of EEG into account, which leads to inaccurate judgment of OA. In this paper, a novel OA removal method is proposed based on the combination of the discrete wavelet transform (DWT) and SOBI, denoted DWSOBI. DWT is applied to each channel of contaminated EEG to obtain more stable multi-scale wavelet coefficients; the wavelet coefficients in the same layer are then selected to construct a wavelet coefficient matrix, which is further separated by SOBI to obtain estimates of the source signals, whose fuzzy entropies are calculated and employed to automatically identify and remove OA. Many experiments are conducted on a public database, and two performance indexes are adopted to measure the elimination effect of OA. The experimental results show that DWSOBI achieves more adaptive and accurate performance for four kinds of OA from three subjects and is superior to commonly used methods.
Keywords:
Automation; Blind source separation; Discrete wavelet transforms; Electroencephalography; Entropy; Intelligent computing; Signal analysis; Signal reconstruction
Citation:
GB/T 7714: Li, Mingai, Liu, Fan, Sun, Yanjun, et al. An Automatic Removal Method of Ocular Artifacts in EEG [C]. 2021: 362-371.
MLA: Li, Mingai, et al. "An Automatic Removal Method of Ocular Artifacts in EEG." (2021): 362-371.
APA: Li, Mingai, Liu, Fan, Sun, Yanjun, Wei, Lina. An Automatic Removal Method of Ocular Artifacts in EEG. (2021): 362-371.
Abstract:
This invention discloses a method for computing time-frequency-energy-based symbolic transfer entropy and brain-network features. First, the acquired motor imagery EEG signals (MI-EEG) are preprocessed with common average referencing. Then, a continuous wavelet transform is applied to the MI-EEG of each channel to obtain its time-frequency-energy matrix, and the time-energy sequences corresponding to each frequency within the bands closely related to motor imagery are concatenated in turn, yielding a one-dimensional time-frequency energy sequence for that channel. Next, the symbolic transfer entropy between the time-frequency energy sequences of every pair of channels is computed to build a brain connectivity matrix, whose elements are optimized with a Pearson feature-selection algorithm. Finally, the degree and betweenness centrality of the brain functional network are computed to form feature vectors for MI-EEG classification. The results show that the invention can effectively extract the frequency-domain and nonlinear features of MI-EEG and has clear advantages over traditional feature-extraction methods based on brain functional networks.
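The pairwise symbolic transfer entropy step can be sketched as below (a hedged NumPy illustration using ordinal-pattern symbolization, one common choice; the patent does not specify the symbolization scheme, and all names are hypothetical):

```python
import numpy as np
from collections import Counter

def symbolize(x, m=3):
    """Map each length-m window of x to an ordinal-pattern symbol."""
    pats = [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]
    codes = {p: i for i, p in enumerate(sorted(set(pats)))}
    return np.array([codes[p] for p in pats])

def symbolic_transfer_entropy(x, y, m=3):
    """Plug-in estimate of STE(X -> Y) in bits from joint symbol counts;
    equals the empirical conditional mutual information I(Y_next; X | Y)."""
    sx, sy = symbolize(x, m), symbolize(y, m)
    n = len(sy) - 1
    c_xyz = Counter(zip(sy[1:], sy[:-1], sx[:-1]))   # (y_next, y, x)
    c_yx = Counter(zip(sy[:-1], sx[:-1]))            # (y, x)
    c_yy = Counter(zip(sy[1:], sy[:-1]))             # (y_next, y)
    c_y = Counter(sy[:-1])                           # (y,)
    te = 0.0
    for (y1, y0, x0), c in c_xyz.items():
        te += (c / n) * np.log2((c / c_yx[(y0, x0)]) / (c_yy[(y1, y0)] / c_y[y0]))
    return te
```

Filling an N x N matrix with `symbolic_transfer_entropy(ch_i, ch_j)` for every channel pair yields the (directed) connectivity matrix the patent describes.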
Citation:
GB/T 7714: 李明爱, 张圆圆, 刘有军, et al. 一种基于时频能量的符号传递熵及脑网络特征计算方法: CN202110058776.9 [P]. 2021-01-16.
MLA: 李明爱, et al. "一种基于时频能量的符号传递熵及脑网络特征计算方法": CN202110058776.9. 2021-01-16.
APA: 李明爱, 张圆圆, 刘有军, 杨金福. 一种基于时频能量的符号传递熵及脑网络特征计算方法: CN202110058776.9. 2021-01-16.
Abstract:
This invention discloses a dipole imaging and recognition method. The standardized low-resolution electromagnetic tomography (sLORETA) algorithm inversely maps the band-pass-filtered scalp EEG signals onto the cerebral cortex. The four motor imagery tasks are split into two binary tasks; the dipole amplitude difference between the two classes of each binary task is computed, the common period with clear differences is selected as the time of interest (TOI), and the union of the regions activated by each task within the TOI gives the region of interest (ROI), from which the dipole coordinates and amplitudes are extracted. For each discrete time point, the dipole coordinates are translated, scaled, and rounded, and the dipole amplitudes are assigned to the corresponding coordinate points, building a 2D dipole image; the 2D dipole images are then stacked along the time dimension into a 2D image sequence. Finally, a sliding-time-window method augments the data into 3D dipole feature data, which is fed into a 3D convolutional neural network (3DCNN) for classification.
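The translate/scale/round mapping from dipole coordinates to a 2D image, and the stacking into an image sequence, can be sketched as follows (a minimal NumPy illustration; grid size and function names are our own assumptions):

```python
import numpy as np

def dipoles_to_image(coords, amps, grid=32):
    """Translate, scale, and round dipole coordinates into a grid x grid
    image, writing each dipole's amplitude at its integer pixel."""
    xy = coords - coords.min(axis=0)              # translate to the origin
    xy = xy / max(xy.max(), 1e-12) * (grid - 1)   # scale into the grid
    ij = np.round(xy).astype(int)                 # round to pixel indices
    img = np.zeros((grid, grid))
    img[ij[:, 0], ij[:, 1]] = amps                # assign amplitudes
    return img

def image_sequence(coord_seq, amp_seq, grid=32):
    """Stack the per-timepoint dipole images along the time axis."""
    return np.stack([dipoles_to_image(c, a, grid)
                     for c, a in zip(coord_seq, amp_seq)])
```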
Citation:
GB/T 7714: 李明爱, 刘斌, 刘有军, et al. 偶极子成像与识别方法: CN202110058762.7 [P]. 2021-01-16.
MLA: 李明爱, et al. "偶极子成像与识别方法": CN202110058762.7. 2021-01-16.
APA: 李明爱, 刘斌, 刘有军, 孙炎珺. 偶极子成像与识别方法: CN202110058762.7. 2021-01-16.
Abstract:
This invention discloses a motor imagery task decoding method based on 4D data representation and a 3DCNN. The raw motor imagery EEG (MI-EEG) signals are baseline-corrected and band-pass filtered; the preprocessed MI-EEG is mapped from the low-dimensional scalp space to the high-dimensional cortical space to obtain dipole source estimates. Combining dipole coordinate-system conversion, interpolation, and volume down-sampling, 3D dipole amplitude matrices are constructed. A sliding window is set within the TOI, and the 3D dipole amplitude matrices at the sampling times inside the window are stacked in sampling order into a 4D dipole feature matrix. A three-module cascaded 3D convolutional neural network (3M3DCNN) is designed to extract and recognize the composite features of the 4DDFM, which contain 3D spatial position information and 1D temporal information, realizing motor imagery task decoding. The invention avoids the substantial information loss caused by ROI selection, dispenses with complex steps such as time-frequency analysis, and effectively improves EEG recognition.
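The sliding-window stacking of 3D amplitude matrices into 4D samples can be sketched as follows (a minimal NumPy illustration; window and step sizes are hypothetical):

```python
import numpy as np

def sliding_windows(frames, win, step):
    """Slide a window along a time-ordered sequence of 3D dipole
    amplitude matrices, stacking each window into one 4D sample;
    overlapping windows also serve as data augmentation."""
    n = (len(frames) - win) // step + 1
    return np.stack([frames[i * step:i * step + win] for i in range(n)])
```

For example, 10 timepoints of 4 x 4 x 4 amplitude matrices with `win=4, step=2` yield four overlapping 4D samples of shape (4, 4, 4, 4), ready for a 3DCNN.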
Citation:
GB/T 7714: 李明爱, 阮秭威, 刘有军, et al. 基于4D数据表达和3DCNN的运动想象任务解码方法: CN202110058756.1 [P]. 2021-01-16.
MLA: 李明爱, et al. "基于4D数据表达和3DCNN的运动想象任务解码方法": CN202110058756.1. 2021-01-16.
APA: 李明爱, 阮秭威, 刘有军, 杨金福, 孙炎珺. 基于4D数据表达和3DCNN的运动想象任务解码方法: CN202110058756.1. 2021-01-16.
Abstract:
Sleep staging is one of the important methods for the diagnosis and treatment of sleep diseases. However, it is laborious and time-consuming; computer-assisted sleep staging is therefore necessary. Most existing sleep-staging research using hand-engineered features relies on prior knowledge of sleep analysis, and usually a single-channel electroencephalogram (EEG) is used for the sleep staging task. Prior knowledge is not always available, however, and a single-channel EEG signal cannot fully represent the patient's sleeping physiological state. To tackle these two problems, we propose an automatic sleep-staging network model based on data adaptation and multimodal feature fusion using EEG and electrooculogram (EOG) signals. A 3D-CNN is used to extract the time-frequency features of the EEG at different time scales, and an LSTM is used to learn the frequency evolution of the EOG. The nonlinear relationship between the high-level features of the EEG and EOG is fitted by a deep probabilistic network. Experiments on SLEEP-EDF and a private dataset show that the proposed model achieves state-of-the-art performance. Moreover, the predictions accord with the expert diagnoses.
Keywords:
sleep stage classification; deep learning; HHT; multimodal physiological signals; fusion networks
Citation:
GB/T 7714: Duan, Lijuan, Li, Mengying, Wang, Changming, et al. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion [J]. FRONTIERS IN HUMAN NEUROSCIENCE, 2021, 15.
MLA: Duan, Lijuan, et al. "A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion." FRONTIERS IN HUMAN NEUROSCIENCE 15 (2021).
APA: Duan, Lijuan, Li, Mengying, Wang, Changming, Qiao, Yuanhua, Wang, Zeyu, Sha, Sha, et al. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion. FRONTIERS IN HUMAN NEUROSCIENCE, 2021, 15.
Abstract:
Deep neural networks are a promising way to recognize motor imagery electroencephalography (MI-EEG), which is often used as the source signal of a rehabilitation system; the core issues are the data representation and the matched neural network. MI-EEG images are one of the main representations; however, all the measured data of a trial are usually integrated into one image, causing information loss, especially in the time dimension, and the network architecture might not fully extract the features over the alpha and beta frequency bands, which are closely related to MI. In this paper, we propose a key band imaging method (KBIM). A short-time Fourier transform is applied to each electrode of the MI-EEG signal to generate a time-frequency image; the parts corresponding to the alpha and beta bands are intercepted, fused, and further arranged into the EEG electrode map by nearest-neighbor interpolation, forming two key band image sequences. In addition, a hybrid deep neural network, the parallel multimodule convolutional neural network and long short-term memory network (PMMCL), is designed to extract and fuse the spatial-spectral and temporal features of the two key band image sequences for automatic classification of MI-EEG signals. Extensive experiments are conducted on two public datasets, and the accuracies after 10-fold cross-validation are 97.42% and 77.33%, respectively. Statistical analysis shows superb discrimination ability for multiclass MI-EEG as well. The results demonstrate that KBIM preserves the integrity of the feature information and matches PMMCL well.
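Intercepting the alpha/beta parts of a per-electrode time-frequency image can be sketched with a naive STFT restricted to one band (a minimal NumPy illustration; the sampling rate, window length, and function name are our own assumptions, not the paper's settings):

```python
import numpy as np

def stft_band_image(sig, fs, band, nperseg=64, step=32):
    """Naive short-time Fourier magnitude restricted to one frequency
    band, i.e. the 'intercepted' alpha (8-13 Hz) or beta (13-30 Hz) part
    of one electrode's time-frequency image."""
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    keep = (freqs >= band[0]) & (freqs <= band[1])   # band interception
    win = np.hanning(nperseg)
    cols = [np.abs(np.fft.rfft(sig[s:s + nperseg] * win))[keep]
            for s in range(0, len(sig) - nperseg + 1, step)]
    return np.array(cols).T   # rows: band frequency bins, cols: time frames
```

Doing this per electrode and arranging the results on the electrode map (e.g. by nearest-neighbor interpolation) would give the key band image sequences the abstract describes.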
Keywords:
convolutional neural network; data representation; image sequence; long short-term memory; Motor imagery electroencephalography
Citation:
GB/T 7714: Li, Ming-Ai, Peng, Wei-Min, Yang, Jin-Fu. Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG [J]. IEEE ACCESS, 2021, 9: 86994-87006.
MLA: Li, Ming-Ai, et al. "Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG." IEEE ACCESS 9 (2021): 86994-87006.
APA: Li, Ming-Ai, Peng, Wei-Min, Yang, Jin-Fu. Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG. IEEE ACCESS, 2021, 9, 86994-87006.
Abstract:
A motor imagery EEG (MI-EEG) signal is often selected as the driving signal in an active brain-computer interface (BCI) system, and recognizing MI-EEG images via convolutional neural networks (CNN) has become popular, which raises the problems of maintaining the integrity of the time-frequency-space information in MI-EEG images and of exploring the feature fusion mechanism in the CNN. However, information is excessively compressed in present MI-EEG images, and a sequential CNN is unfavorable for the comprehensive utilization of local features. In this paper, a multidimensional MI-EEG imaging method is proposed, based on time-frequency analysis and the Clough-Tocher (CT) interpolation algorithm. The time-frequency matrix of each electrode is generated via the continuous wavelet transform (WT), and the relevant section of frequency is extracted and divided into nine submatrices, whose longitudinal sums and lengths are calculated along the frequency and time directions successively to produce a 3 x 3 feature matrix for each electrode. The feature matrix of each electrode is then interpolated to coincide with its corresponding coordinates, yielding a WT-based multidimensional image, called WTMI. Meanwhile, a multilevel and multiscale feature fusion convolutional neural network (MLMSFFCNN) is designed for WTMI, which has dense information, a low signal-to-noise ratio, and a strong spatial distribution. Extensive experiments are conducted on the BCI Competition IV 2a and 2b datasets, and accuracies of 92.95% and 97.03% are obtained with 10-fold cross-validation, respectively, exceeding those of state-of-the-art imaging methods. The kappa values and p values demonstrate that our method has lower class skew and error costs.
The experimental results demonstrate that WTMI can fully represent the time-frequency-space features of MI-EEG and that MLMSFFCNN is beneficial for improving the collection of multiscale features and the fusion recognition of general and abstract features for WTMI.
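The reduction of an electrode's time-frequency matrix to a 3 x 3 feature matrix can be sketched as a 3 x 3 grid of submatrices with one aggregate per block (a hedged illustration: the abstract's "sums and lengths along frequency then time" is only loosely specified, so a single per-block aggregation stands in for it, and `feature_3x3` is a hypothetical name):

```python
import numpy as np

def feature_3x3(tf, agg=np.mean):
    """Split a (frequency x time) matrix into a 3 x 3 grid of
    submatrices and aggregate each block into one scalar."""
    rows = np.array_split(tf, 3, axis=0)          # split along frequency
    return np.array([[agg(b) for b in np.array_split(r, 3, axis=1)]
                     for r in rows])              # then along time
```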
Keywords:
Wavelet transform; Brain-computer interface; Machine learning; MI-EEG imaging method; Convolutional neural network
Citation:
GB/T 7714: Li, Ming-ai, Han, Jian-fu, Yang, Jin-fu. Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN [J]. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2021, 59(10): 2037-2050.
MLA: Li, Ming-ai, et al. "Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN." MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING 59.10 (2021): 2037-2050.
APA: Li, Ming-ai, Han, Jian-fu, Yang, Jin-fu. Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN. MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING, 2021, 59(10), 2037-2050.
Abstract:
Objective. Motor imagery electroencephalography (MI-EEG) produces one of the most commonly used biosignals in intelligent rehabilitation systems. The newly developed 3D convolutional neural network (3DCNN) is gaining increasing attention for its ability to recognize MI tasks. The key to successfully identifying movement intention is whether the data representation can faithfully reflect the cortical activity induced by MI. However, present data representations, often generated from partial source signals with time-frequency analysis, contain incomplete information. It would therefore be beneficial to explore a new type of data representation using raw spatiotemporal dipole information, along with a matching 3DCNN. Approach. Based on EEG source imaging and a 3DCNN, a novel decoding method for identifying MI tasks is proposed, called ESICNND. MI-EEG is mapped to the cerebral cortex by the standardized low-resolution electromagnetic tomography algorithm, and the optimal sampling points of the dipoles are selected as the time of interest to best reveal the difference between any two MI tasks. Then, the initial subject coordinate system is converted to a magnetic resonance imaging coordinate system, followed by dipole interpolation and volume down-sampling; the resulting 3D dipole amplitude matrices are merged at the selected sampling points to obtain 4D dipole feature matrices (4DDFMs). These matrices are augmented by the sliding-window technique and input into a 3DCNN with a cascading architecture of three modules (3M3DCNN) to perform the extraction and classification of comprehensive features. Main results. Experiments are carried out on two public datasets; the average ten-fold CV classification accuracies reach 88.73% and 96.25%, respectively, and the statistical analysis demonstrates outstanding consistency and stability. Significance.
The 4DDFMs reveal the variation of cortical activation in a 3D spatial cube along a temporal dimension and match the 3M3DCNN well, making full use of the high-resolution spatiotemporal information from all dipoles.
Keywords:
EEG source imaging; 4D dipole feature matrix; motor imagery EEG; convolutional neural network; data representation; time of interest
Citation:
GB/T 7714: Li, Ming-ai, Ruan, Zi-wei. A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks [J]. JOURNAL OF NEURAL ENGINEERING, 2021, 18(4).
MLA: Li, Ming-ai, et al. "A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks." JOURNAL OF NEURAL ENGINEERING 18.4 (2021).
APA: Li, Ming-ai, Ruan, Zi-wei. A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks. JOURNAL OF NEURAL ENGINEERING, 2021, 18(4).
Abstract:
Brain-computer interface (BCI) technology can help disabled people recover neural function through rehabilitation systems driven by motor imagery electroencephalogram (MI-EEG). However, it is difficult to acquire a large amount of usable EEG data; transfer learning provides an effective remedy, and source-domain selection is one of its key issues. In this study, we develop a novel parameter transfer learning method based on the VGG-16 convolutional neural network (CNN) for MI classification. First, the MI-EEG signals are augmented with the sliding-window method, and the short-time Fourier transform (STFT) is applied to obtain time-frequency spectrum images (TFSI). Then, the VGG-16 CNN, which is divided into five blocks, is pre-trained with the TFSI of the source domain. The parameters of the pre-trained CNN are transferred to the target network through a new transfer strategy: data from some subjects in the target domain are used to fine-tune the five blocks in turn. Finally, the fine-tuned CNN is used for MI classification of the remaining subjects in the target domain. This work is evaluated on a public dataset, and the best classification accuracy of this study is 96.59%. The results show that a source domain highly correlated with the target domain outperforms one with low correlation, and that the proposed transfer strategy is efficient.
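The block-wise fine-tuning strategy can be sketched as a schedule of trainable block sets (a purely illustrative sketch: the abstract only says the five blocks are fine-tuned "in turn", so the cumulative unfreezing order here is our assumption, and `finetune_schedule` is a hypothetical name):

```python
def finetune_schedule(blocks):
    """Yield the trainable block set at each fine-tuning stage: the
    pre-trained blocks are unfrozen cumulatively and tuned in turn
    on data from part of the target-domain subjects."""
    for i in range(len(blocks)):
        yield blocks[:i + 1]          # blocks trainable at stage i

stages = [list(s) for s in
          finetune_schedule(["block1", "block2", "block3", "block4", "block5"])]
```

In a real framework, each stage would set `requires_grad` (or the equivalent) only on the listed blocks before running a short fine-tuning pass.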
Keywords:
Motor Imagery; Transfer learning; Convolutional neural network; Brain computer interface; Transfer strategy
Citation:
GB/T 7714: Li, Ming-Ai, Xu, Dong-Qin. A Transfer Learning Method based on VGG-16 Convolutional Neural Network for MI Classification [J]. PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021: 5430-5435.
MLA: Li, Ming-Ai, et al. "A Transfer Learning Method based on VGG-16 Convolutional Neural Network for MI Classification." PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021) (2021): 5430-5435.
APA: Li, Ming-Ai, Xu, Dong-Qin. A Transfer Learning Method based on VGG-16 Convolutional Neural Network for MI Classification. PROCEEDINGS OF THE 33RD CHINESE CONTROL AND DECISION CONFERENCE (CCDC 2021), 2021, 5430-5435.