
Your search:

Author name: Li Mingai (李明爱)


An Automatic Removal Method of Ocular Artifacts in EEG EI
Conference paper | 2021, 1185, 362-371 | 5th International Conference on Intelligent Computing, Communication and Devices, ICCD 2019

Abstract:

Electrooculogram (EOG) is an unavoidable main source of interference in electroencephalogram (EEG) acquisition, which directly affects the analysis and application of EEG. Second-order blind identification (SOBI), a blind source separation (BSS) method, has been used to remove the ocular artifacts (OA) of contaminated EEG. However, SOBI assumes that the source signals are stationary and is therefore not appropriate for nonstationary EEG signals, yielding undesirable separation results. In addition, the current criteria for discriminating ocular artifacts, such as correlation coefficients and sample entropy, do not take into account the fuzzy characteristics of EEG, which leads to inaccurate judgement of OA. In this paper, a novel OA removal method is proposed based on the combination of discrete wavelet transform (DWT) and SOBI and denoted as DWSOBI. DWT is used to analyze each channel of contaminated EEG to obtain more stable multi-scale wavelet coefficients; then, the wavelet coefficients in the same layer are selected to construct a wavelet coefficient matrix, which is further separated by SOBI to obtain estimates of the source signals, whose fuzzy entropies are calculated and employed to realize the automatic identification and removal of OA. Based on a public database, extensive experiments are conducted and two performance indexes are adopted to measure the OA elimination effect. The experimental results show that DWSOBI achieves more adaptive and accurate performance for four kinds of OA from three subjects and is superior to commonly used methods. © 2021, The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
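
As a rough illustration of the pipeline sketched in this abstract, the following Python snippet decomposes each channel with a DWT, separates the same-layer coefficient matrix with a blind source separation step, and suppresses components with low fuzzy entropy. It is a minimal sketch, not the authors' implementation: FastICA stands in for SOBI (which is not available in scikit-learn), and the rejection criterion and threshold are illustrative placeholders.

```python
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def fuzzy_entropy(x, m=2, r=0.2):
    """Rough fuzzy entropy of a 1-D signal (exponential membership, width r * std)."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(order):
        # mean-removed template vectors of the given length (fuzzy variant)
        templates = np.array([x[i:i + order] - x[i:i + order].mean()
                              for i in range(len(x) - order)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        sim = np.exp(-(d ** 2) / r)                      # fuzzy similarity degrees
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (len(templates) * (len(templates) - 1))
    return -np.log(phi(m + 1) / phi(m))

def remove_ocular_artifacts(eeg, wavelet="db4", level=4, fe_threshold=0.4):
    # eeg: (channels, samples)
    # 1) DWT of each channel; collect the same-layer (approximation) coefficients
    coeffs = [pywt.wavedec(ch, wavelet, level=level) for ch in eeg]
    layer = np.array([c[0] for c in coeffs])             # (channels, layer_length)
    # 2) blind source separation of the coefficient matrix (FastICA as a SOBI stand-in)
    ica = FastICA(n_components=layer.shape[0], random_state=0)
    sources = ica.fit_transform(layer.T).T               # (components, layer_length)
    # 3) treat low-fuzzy-entropy components as ocular artifacts and zero them
    fe = np.array([fuzzy_entropy(s) for s in sources])
    sources[fe < fe_threshold] = 0.0
    cleaned_layer = ica.inverse_transform(sources.T).T
    # 4) rebuild each channel from its corrected coefficients
    for c, new_approx in zip(coeffs, cleaned_layer):
        c[0] = new_approx
    return np.array([pywt.waverec(c, wavelet)[:eeg.shape[1]] for c in coeffs])
```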

Keywords:

Automation; Blind source separation; Discrete wavelet transforms; Electroencephalography; Entropy; Intelligent computing; Signal analysis; Signal reconstruction

Citations:

GB/T 7714 Li, Mingai , Liu, Fan , Sun, Yanjun et al. An Automatic Removal Method of Ocular Artifacts in EEG [C] . 2021 : 362-371 .
MLA Li, Mingai et al. "An Automatic Removal Method of Ocular Artifacts in EEG" . (2021) : 362-371 .
APA Li, Mingai , Liu, Fan , Sun, Yanjun , Wei, Lina . An Automatic Removal Method of Ocular Artifacts in EEG . (2021) : 362-371 .
Symbolic transfer entropy and brain network feature computation method based on time-frequency energy (一种基于时频能量的符号传递熵及脑网络特征计算方法) incoPat
Patent | 2021-01-16 | CN202110058776.9

Abstract:

The invention discloses a method for computing symbolic transfer entropy and brain network features based on time-frequency energy. First, the acquired motor imagery EEG signals (MI-EEG) are preprocessed with a common average reference. Then, a continuous wavelet transform is applied to the MI-EEG of each channel to obtain its time-frequency-energy matrix, and the time-energy sequences corresponding to the frequencies within the band closely related to motor imagery are concatenated in turn to obtain a one-dimensional time-frequency energy sequence for that channel. Next, the symbolic transfer entropy between the time-frequency energy sequences of any two channels is computed to construct a brain connectivity matrix, whose elements are optimized with a Pearson feature selection algorithm. Finally, the degree and betweenness centrality of the brain functional network are computed to form the feature vector used for MI-EEG classification. The results show that the invention effectively extracts the frequency-domain and nonlinear features of MI-EEG and has clear advantages over traditional feature extraction methods based on brain functional networks.
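
A minimal sketch of the per-channel time-frequency energy sequence described above, assuming a (channels, samples) array sampled at fs Hz and an 8-30 Hz motor-imagery band; the 'morl' wavelet and scale grid are illustrative choices, and the subsequent pairwise symbolic transfer entropy step is not shown.

```python
import numpy as np
import pywt

def tf_energy_sequences(eeg, fs=250, band=(8.0, 30.0), wavelet="morl"):
    scales = np.arange(1, 128)
    sequences = []
    for ch in eeg:                                        # eeg: (channels, samples)
        coef, freqs = pywt.cwt(ch, scales, wavelet, sampling_period=1.0 / fs)
        energy = np.abs(coef) ** 2                        # time-frequency-energy matrix
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        # concatenate the time-energy rows of all in-band frequencies into one 1-D sequence
        sequences.append(energy[in_band].reshape(-1))
    return np.array(sequences)                            # (channels, n_band_freqs * samples)

# Symbolic transfer entropy would then be computed between every pair of channel
# sequences to fill the brain connectivity matrix (not shown here).
```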

Citations:

GB/T 7714 李明爱 , 张圆圆 , 刘有军 et al. 一种基于时频能量的符号传递熵及脑网络特征计算方法 : CN202110058776.9[P]. | 2021-01-16 .
MLA 李明爱 et al. "一种基于时频能量的符号传递熵及脑网络特征计算方法" : CN202110058776.9. | 2021-01-16 .
APA 李明爱 , 张圆圆 , 刘有军 , 杨金福 . 一种基于时频能量的符号传递熵及脑网络特征计算方法 : CN202110058776.9. | 2021-01-16 .
Dipole imaging and recognition method (偶极子成像与识别方法) incoPat
Patent | 2021-01-16 | CN202110058762.7

Abstract:

The invention discloses a dipole imaging and recognition method. The standardized low-resolution brain electromagnetic tomography (sLORETA) algorithm is used to inverse-transform the band-pass-filtered scalp EEG signals onto the cerebral cortex. The four motor imagery tasks are divided into two binary tasks; the dipole amplitude difference between each pair of tasks is computed, and the common period with obvious differences is selected as the time of interest (TOI). The regions activated by each task within the TOI are merged by union to obtain the region of interest (ROI), and the coordinates and amplitudes of the dipoles within the ROI are extracted. For each discrete time point, the dipole coordinates are translated, scaled, and rounded, and the dipole amplitudes are assigned to the corresponding coordinate points to construct a two-dimensional dipole image; the 2-D dipole images are then stacked along the time dimension into a 2-D image sequence. Finally, a sliding-time-window method is used for data augmentation to obtain three-dimensional dipole feature data, which are fed into a three-dimensional convolutional neural network (3DCNN) for classification.
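
The 2-D dipole imaging step (translate, scale, round, and assign amplitudes to grid points) can be illustrated with the following sketch; the grid size and normalization are assumptions, not values from the patent.

```python
import numpy as np

def dipole_image(coords, amps, size=32):
    # coords: (n_dipoles, 2), amps: (n_dipoles,) for one time point
    xy = coords - coords.min(axis=0)                  # translate into the positive quadrant
    xy = xy / (xy.max() + 1e-12) * (size - 1)         # scale into the grid
    xy = np.rint(xy).astype(int)                      # round to integer pixel coordinates
    img = np.zeros((size, size))
    for (x, y), a in zip(xy, amps):
        img[y, x] += a                                # assign the amplitude to its pixel
    return img

# Stacking one image per time point gives the 2-D image sequence described in the patent:
# seq = np.stack([dipole_image(coords, amps_t) for amps_t in amplitude_time_series])
```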

Citations:

GB/T 7714 李明爱 , 刘斌 , 刘有军 et al. 偶极子成像与识别方法 : CN202110058762.7[P]. | 2021-01-16 .
MLA 李明爱 et al. "偶极子成像与识别方法" : CN202110058762.7. | 2021-01-16 .
APA 李明爱 , 刘斌 , 刘有军 , 孙炎珺 . 偶极子成像与识别方法 : CN202110058762.7. | 2021-01-16 .
Fuzzy support vector machine with joint optimization of genetic algorithm and fuzzy c-means. PubMed
Journal article | 2021 | Technology and health care : official journal of the European Society for Engineering and Medicine

Abstract:

Motor imagery electroencephalogram (MI-EEG) plays an important role in the field of neurorehabilitation, and the fuzzy support vector machine (FSVM) is one of the most widely used classifiers. Specifically, a fuzzy c-means (FCM) algorithm is used for membership calculation to deal with classification problems involving outliers or noise. However, FCM is sensitive to its initial value and easily falls into local optima. The joint optimization of a genetic algorithm (GA) and FCM is proposed to enhance the robustness of fuzzy memberships to initial cluster centers, yielding an improved FSVM (GF-FSVM). The features of each channel of MI-EEG are extracted by the improved refined composite multivariate multiscale fuzzy entropy and fused to form a feature vector for a trial. Then, GA is employed to optimize the initial cluster centers of FCM, and the fuzzy membership degrees are calculated through an iterative process and further applied to classify two-class MI-EEGs. Extensive experiments are conducted on two publicly available datasets; the average recognition accuracies reach 99.89% and 98.81%, and the corresponding kappa values are 0.9978 and 0.9762, respectively. The optimized cluster centers of FCM via GA are almost overlapping, showing great stability, and GF-FSVM obtains higher classification accuracies and higher consistency as well.
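
A minimal sketch in the spirit of GF-FSVM: fuzzy c-means memberships are computed and used as per-sample weights for an SVM via scikit-learn's sample_weight. The GA optimization of the initial FCM centers is omitted here (a fixed random initialization is used instead), and all constants are illustrative rather than the authors' settings.

```python
import numpy as np
from sklearn.svm import SVC

def fcm_memberships(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u = u / u.sum(axis=1, keepdims=True)          # fuzzy membership matrix (n, c)
        centers = (u.T ** m @ X) / (u.T ** m).sum(axis=1, keepdims=True)
    return u

def train_fuzzy_svm(X, y):
    u = fcm_memberships(X)
    weights = u.max(axis=1)        # largest cluster membership as a simple typicality score
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, y, sample_weight=weights)
    return clf
```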

Keywords:

fuzzy c-means; fuzzy support vector machine; genetic algorithm; joint optimization; motor imagery electroencephalogram

Citations:

GB/T 7714 Li Ming-Ai , Wang Ruo-Tu , Wei Li-Na . Fuzzy support vector machine with joint optimization of genetic algorithm and fuzzy c-means. [J]. | Technology and health care : official journal of the European Society for Engineering and Medicine , 2021 .
MLA Li Ming-Ai et al. "Fuzzy support vector machine with joint optimization of genetic algorithm and fuzzy c-means." . | Technology and health care : official journal of the European Society for Engineering and Medicine (2021) .
APA Li Ming-Ai , Wang Ruo-Tu , Wei Li-Na . Fuzzy support vector machine with joint optimization of genetic algorithm and fuzzy c-means. . | Technology and health care : official journal of the European Society for Engineering and Medicine , 2021 .
Motor imagery task decoding method based on 4D data representation and 3DCNN (基于4D数据表达和3DCNN的运动想象任务解码方法) incoPat
Patent | 2021-01-16 | CN202110058756.1

Abstract:

The invention discloses a motor imagery task decoding method based on 4D data representation and a 3DCNN. The raw motor imagery EEG signals (MI-EEG) are baseline-corrected and band-pass filtered; the preprocessed MI-EEG signals are then mapped from the low-dimensional scalp space to the high-dimensional cortical space to obtain dipole source estimates. Combining operations such as dipole coordinate-system conversion, interpolation, and volume down-sampling, 3D dipole amplitude matrices are constructed. A sliding window is set within the TOI, and the 3D dipole amplitude matrices at the sampling times inside the window are stacked in sampling order into 4D dipole feature matrices. A three-dimensional convolutional neural network with a three-module cascading structure (3M3DCNN) is designed to extract and recognize the composite features of the 4DDFM, which contain 3D spatial position information and 1D temporal information, thereby decoding the motor imagery tasks. The invention avoids the massive information loss caused by ROI selection, dispenses with complex steps such as time-frequency analysis, and effectively improves EEG recognition performance.
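
The sliding-window step that turns a time-ordered sequence of 3-D dipole amplitude matrices into 4-D samples can be sketched as follows; the window length and stride are illustrative values, not those in the patent.

```python
import numpy as np

def sliding_4d_samples(volumes, window=16, stride=4):
    # volumes: (T, X, Y, Z) sequence of 3-D dipole amplitude matrices
    samples = []
    for start in range(0, volumes.shape[0] - window + 1, stride):
        samples.append(volumes[start:start + window])   # (window, X, Y, Z): one 4-D sample
    return np.stack(samples)                            # (n_windows, window, X, Y, Z)
```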

Citations:

GB/T 7714 李明爱 , 阮秭威 , 刘有军 et al. 基于4D数据表达和3DCNN的运动想象任务解码方法 : CN202110058756.1[P]. | 2021-01-16 .
MLA 李明爱 et al. "基于4D数据表达和3DCNN的运动想象任务解码方法" : CN202110058756.1. | 2021-01-16 .
APA 李明爱 , 阮秭威 , 刘有军 , 杨金福 , 孙炎珺 . 基于4D数据表达和3DCNN的运动想象任务解码方法 : CN202110058756.1. | 2021-01-16 .
A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion SCIE
Journal article | 2021, 15 | FRONTIERS IN HUMAN NEUROSCIENCE
Citations in the WoS Core Collection: 5

Abstract:

Sleep staging is an important tool for the diagnosis and treatment of sleep diseases. However, manual staging is laborious and time-consuming; therefore, computer-assisted sleep staging is necessary. Most existing sleep staging studies that use hand-engineered features rely on prior knowledge of sleep analysis, and usually only single-channel electroencephalogram (EEG) is used for the sleep staging task. Prior knowledge is not always available, and a single-channel EEG signal cannot fully represent the patient's physiological state during sleep. To tackle these two problems, we propose an automatic sleep staging network model based on data adaptation and multimodal feature fusion using EEG and electrooculogram (EOG) signals. A 3D-CNN is used to extract the time-frequency features of EEG at different time scales, and an LSTM is used to learn the frequency evolution of EOG. The nonlinear relationship between the high-level features of EEG and EOG is fitted by a deep probabilistic network. Experiments on SLEEP-EDF and a private dataset show that the proposed model achieves state-of-the-art performance. Moreover, the prediction results are in accordance with expert diagnosis.
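
A rough PyTorch skeleton of the multimodal idea described above: a small 3-D CNN branch for EEG time-frequency inputs, an LSTM branch for EOG sequences, and a fusion classifier. Layer sizes, input shapes, and the plain MLP head (standing in for the deep probabilistic fusion network) are assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class SleepStagingNet(nn.Module):
    def __init__(self, n_classes=5, eog_features=4):
        super().__init__()
        self.cnn3d = nn.Sequential(                    # EEG branch, input (B, 1, D, H, W)
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),     # -> (B, 16)
        )
        self.lstm = nn.LSTM(input_size=eog_features, hidden_size=32, batch_first=True)
        self.head = nn.Sequential(nn.Linear(16 + 32, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, eeg_tf, eog_seq):
        eeg_feat = self.cnn3d(eeg_tf)
        _, (h, _) = self.lstm(eog_seq)                 # last hidden state as the EOG feature
        return self.head(torch.cat([eeg_feat, h[-1]], dim=1))
```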

Keywords:

deep learning; fusion networks; HHT; multimodal physiological signals; sleep stage classification

Citations:

GB/T 7714 Duan, Lijuan , Li, Mengying , Wang, Changming et al. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion [J]. | FRONTIERS IN HUMAN NEUROSCIENCE , 2021 , 15 .
MLA Duan, Lijuan et al. "A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion" . | FRONTIERS IN HUMAN NEUROSCIENCE 15 (2021) .
APA Duan, Lijuan , Li, Mengying , Wang, Changming , Qiao, Yuanhua , Wang, Zeyu , Sha, Sha et al. A Novel Sleep Staging Network Based on Data Adaptation and Multimodal Fusion . | FRONTIERS IN HUMAN NEUROSCIENCE , 2021 , 15 .
Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN SCIE
Journal article | 2021, 59 (10), 2037-2050 | MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING
Citations in the WoS Core Collection: 26

Abstract:

A motor imagery EEG (MI-EEG) signal is often selected as the driving signal in an active brain-computer interface (BCI) system, and recognizing MI-EEG images via a convolutional neural network (CNN) has become a popular research direction, which raises the problems of maintaining the integrity of the time-frequency-space information in MI-EEG images and of exploring the feature fusion mechanism in the CNN. However, information is excessively compressed in present MI-EEG images, and a sequential CNN is unfavorable for the comprehensive utilization of local features. In this paper, a multidimensional MI-EEG imaging method is proposed based on time-frequency analysis and the Clough-Tocher (CT) interpolation algorithm. The time-frequency matrix of each electrode is generated via continuous wavelet transform (WT), and the relevant frequency section is extracted and divided into nine submatrices, whose longitudinal sums and lengths are calculated along the frequency and time directions successively to produce a 3 x 3 feature matrix for each electrode. Then, the feature matrix of each electrode is interpolated at its corresponding coordinates, thereby yielding a WT-based multidimensional image, called WTMI. Meanwhile, a multilevel and multiscale feature fusion convolutional neural network (MLMSFFCNN) is designed for WTMI, which has dense information, a low signal-to-noise ratio, and a strong spatial distribution. Extensive experiments are conducted on the BCI Competition IV 2a and 2b datasets, and accuracies of 92.95% and 97.03% are obtained with 10-fold cross-validation, respectively, which exceed those of state-of-the-art imaging methods. The kappa values and p values demonstrate that our method has lower class skew and error costs. The experimental results demonstrate that WTMI can fully represent the time-frequency-space features of MI-EEG and that MLMSFFCNN is beneficial for improving the collection of multiscale features and the fusion recognition of general and abstract features for WTMI.
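
Two steps of this imaging pipeline can be sketched as follows: a 3 x 3 reduction of each electrode's band-limited time-frequency matrix (here simplified to mean energy per submatrix, standing in for the paper's sum/length statistics) and Clough-Tocher interpolation of one scalar per electrode onto a 2-D head grid using SciPy. Electrode coordinates and the grid size are assumed.

```python
import numpy as np
from scipy.interpolate import CloughTocher2DInterpolator

def electrode_feature_3x3(tf_matrix):
    """Reduce a (freqs, times) time-frequency matrix to a 3 x 3 feature matrix."""
    f_bins = np.array_split(np.arange(tf_matrix.shape[0]), 3)
    t_bins = np.array_split(np.arange(tf_matrix.shape[1]), 3)
    return np.array([[tf_matrix[np.ix_(fb, tb)].mean() for tb in t_bins] for fb in f_bins])

def interpolate_plane(feature_per_electrode, electrode_xy, grid=64):
    """Interpolate one scalar per electrode onto a 2-D grid (one plane of the image)."""
    interp = CloughTocher2DInterpolator(electrode_xy, feature_per_electrode, fill_value=0.0)
    xs = np.linspace(electrode_xy[:, 0].min(), electrode_xy[:, 0].max(), grid)
    ys = np.linspace(electrode_xy[:, 1].min(), electrode_xy[:, 1].max(), grid)
    gx, gy = np.meshgrid(xs, ys)
    return interp(gx, gy)
```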

Keywords:

Brain-computer interface; Convolutional neural network; Machine learning; MI-EEG imaging method; Wavelet transform

Citations:

GB/T 7714 Li, Ming-ai , Han, Jian-fu , Yang, Jin-fu . Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN [J]. | MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING , 2021 , 59 (10) : 2037-2050 .
MLA Li, Ming-ai et al. "Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN" . | MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING 59 . 10 (2021) : 2037-2050 .
APA Li, Ming-ai , Han, Jian-fu , Yang, Jin-fu . Automatic feature extraction and fusion recognition of motor imagery EEG using multilevel multiscale CNN . | MEDICAL & BIOLOGICAL ENGINEERING & COMPUTING , 2021 , 59 (10) , 2037-2050 .
A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks SCIE
Journal article | 2021, 18 (4) | JOURNAL OF NEURAL ENGINEERING
Citations in the WoS Core Collection: 14

Abstract:

Objective. Motor imagery electroencephalography (MI-EEG) produces one of the most commonly used biosignals in intelligent rehabilitation systems. The newly developed 3D convolutional neural network (3DCNN) is gaining increasing attention for its ability to recognize MI tasks. The key to successful identification of movement intention is whether the data representation can faithfully reflect the cortical activity induced by MI. However, the present data representation, which is often generated from partial source signals with time-frequency analysis, contains incomplete information. Therefore, it would be beneficial to explore a new type of data representation using raw spatiotemporal dipole information as well as the possible development of a matching 3DCNN. Approach. Based on EEG source imaging and a 3DCNN, a novel decoding method for identifying MI tasks is proposed, called ESICNND. MI-EEG is mapped to the cerebral cortex by the standardized low resolution electromagnetic tomography algorithm, and the optimal sampling points of the dipoles are selected as the time of interest to best reveal the difference between any two MI tasks. Then, the initial subject coordinate system is converted to a magnetic resonance imaging coordinate system, followed by dipole interpolation and volume down-sampling; the resulting 3D dipole amplitude matrices are merged at the selected sampling points to obtain 4D dipole feature matrices (4DDFMs). These matrices are augmented by sliding window technology and input into a 3DCNN with a cascading architecture of three modules (3M3DCNN) to perform the extraction and classification of comprehensive features. Main results. Experiments are carried out on two public datasets; the average ten-fold CV classification accuracies reach 88.73% and 96.25%, respectively, and the statistical analysis demonstrates outstanding consistency and stability. Significance. The 4DDFMs reveal the variation of cortical activation in a 3D spatial cube with a temporal dimension and match the 3M3DCNN well, making full use of the high-resolution spatiotemporal information from all dipoles.

Keywords:

4D dipole feature matrix; convolutional neural network; data representation; EEG source imaging; motor imagery EEG; time of interest

Citations:

GB/T 7714 Li, Ming-ai , Ruan, Zi-wei . A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks [J]. | JOURNAL OF NEURAL ENGINEERING , 2021 , 18 (4) .
MLA Li, Ming-ai et al. "A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks" . | JOURNAL OF NEURAL ENGINEERING 18 . 4 (2021) .
APA Li, Ming-ai , Ruan, Zi-wei . A novel decoding method for motor imagery tasks with 4D data representation and 3D convolutional neural networks . | JOURNAL OF NEURAL ENGINEERING , 2021 , 18 (4) .
Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG SCIE
Journal article | 2021, 9, 86994-87006 | IEEE ACCESS
Citations in the WoS Core Collection: 3

Abstract:

Deep neural networks are a promising method to recognize motor imagery electroencephalography (MI-EEG), which is often used as the source signal of a rehabilitation system; the core issues are the data representation and the matched neural network. MI-EEG imaging is one of the main data representations; however, all the measured data of a trial are usually integrated into one image, causing information loss, especially in the time dimension, and the network architecture might not fully extract the features over the alpha and beta frequency bands, which are closely related to MI. In this paper, we propose a key band imaging method (KBIM). A short-time Fourier transform is applied to each electrode of the MI-EEG signal to generate a time-frequency image, and the parts corresponding to the alpha and beta bands are intercepted, fused, and further arranged into the EEG electrode map by the nearest-neighbor interpolation method, forming two key band image sequences. In addition, a hybrid deep neural network named the parallel multimodule convolutional neural network and long short-term memory network (PMMCL) is designed for the extraction and fusion of the spatial-spectral and temporal features of the two key band image sequences to realize automatic classification of MI-EEG signals. Extensive experiments are conducted on two public datasets, and the accuracies after 10-fold cross-validation are 97.42% and 77.33%, respectively. Statistical analysis also shows strong discrimination ability for multiclass MI-EEG. The results demonstrate that KBIM preserves the integrity of the feature information and matches well with PMMCL.
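
A minimal sketch of the key-band imaging idea: an STFT per electrode, keeping only the alpha (8-13 Hz) and beta (13-30 Hz) portions of the spectrogram. The sampling rate, window length, and band edges are illustrative assumptions; the electrode-map arrangement and the PMMCL network are not shown.

```python
import numpy as np
from scipy.signal import stft

def key_band_images(eeg, fs=250, nperseg=128):
    alpha_imgs, beta_imgs = [], []
    for ch in eeg:                                     # eeg: (channels, samples)
        f, t, Z = stft(ch, fs=fs, nperseg=nperseg)
        power = np.abs(Z)                              # time-frequency image of this electrode
        alpha_imgs.append(power[(f >= 8) & (f < 13)])
        beta_imgs.append(power[(f >= 13) & (f <= 30)])
    return np.array(alpha_imgs), np.array(beta_imgs)   # (channels, band_bins, time_frames)
```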

Keywords:

convolutional neural network; data representation; image sequence; long short-term memory; motor imagery electroencephalography

Citations:

GB/T 7714 Li, Ming-Ai , Peng, Wei-Min , Yang, Jin-Fu . Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG [J]. | IEEE ACCESS , 2021 , 9 : 86994-87006 .
MLA Li, Ming-Ai et al. "Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG" . | IEEE ACCESS 9 (2021) : 86994-87006 .
APA Li, Ming-Ai , Peng, Wei-Min , Yang, Jin-Fu . Key Band Image Sequences and A Hybrid Deep Neural Network for Recognition of Motor Imagery EEG . | IEEE ACCESS , 2021 , 9 , 86994-87006 .
A lightweight network with attention decoder for real-time semantic segmentation SCIE
Journal article | 2021 | VISUAL COMPUTER
Citations in the WoS Core Collection: 13

Abstract:

As an important task in scene understanding, semantic segmentation requires a large amount of computation to achieve high performance. In recent years, with the rise of autonomous systems, it has become crucial to strike a trade-off between accuracy and speed. In this paper, we propose a novel asymmetric encoder-decoder network structure to address this problem. In the encoder, we design a Separable Asymmetric Module, which combines depth-wise separable asymmetric convolution with dilated convolution to greatly reduce the computation cost while maintaining accuracy. In the decoder, an attention mechanism is used to further improve segmentation performance. Experimental results on the CityScapes and CamVid datasets show that the proposed method achieves a better balance between segmentation precision and speed than state-of-the-art semantic segmentation methods. Specifically, our model obtains mean IoU of 72.5% and 66.3% on the CityScapes and CamVid test datasets, respectively, with fewer than 1M parameters.
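
An illustrative PyTorch sketch of a depth-wise separable asymmetric convolution block with dilation, in the spirit of the Separable Asymmetric Module described above; the channel count, dilation rate, and residual connection are placeholders rather than the published design.

```python
import torch
import torch.nn as nn

class SeparableAsymmetricModule(nn.Module):
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.block = nn.Sequential(
            # depth-wise 3x1 and 1x3 asymmetric convolutions (groups=channels), dilated
            nn.Conv2d(channels, channels, (3, 1), padding=(dilation, 0),
                      dilation=(dilation, 1), groups=channels, bias=False),
            nn.Conv2d(channels, channels, (1, 3), padding=(0, dilation),
                      dilation=(1, dilation), groups=channels, bias=False),
            # point-wise 1x1 convolution mixes information across channels
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return x + self.block(x)     # residual add keeps channels and spatial size unchanged
```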

Keywords:

Attention mechanism; Depth-wise separable asymmetric convolution; Dilated convolution; Encoder–decoder structure; Semantic segmentation

Citations:

GB/T 7714 Wang, Kang , Yang, Jinfu , Yuan, Shuai et al. A lightweight network with attention decoder for real-time semantic segmentation [J]. | VISUAL COMPUTER , 2021 .
MLA Wang, Kang et al. "A lightweight network with attention decoder for real-time semantic segmentation" . | VISUAL COMPUTER (2021) .
APA Wang, Kang , Yang, Jinfu , Yuan, Shuai , Li, Mingai . A lightweight network with attention decoder for real-time semantic segmentation . | VISUAL COMPUTER , 2021 .