
Authors:

Mu, Zhibo | Zheng, Shuang | Wang, Quanmin

Indexed in:

CPCI-S | EI | Scopus

Abstract:

The ACL-RoBERTa-CNN text classification model addresses two problems: Chinese text features are sparse and long and short texts are mixed, which makes word-vector feature extraction difficult; and traditional neural networks use a single convolution kernel size and carry redundant parameters. The model applies contrastive learning to learn a uniformly distributed vector representation, which has a regularizing effect on the embedding space. The same sentence is fed through dropout twice to form a "positive pair", replacing traditional data augmentation. The contrastively trained RoBERTa pre-trained model then produces word vectors, which are passed to a CNN layer in which convolution kernels of different sizes capture information about words of different lengths in each sample; finally, a Softmax classifier classifies the extracted features. Experimental results on two public datasets show that ACL-RoBERTa-CNN outperforms TextCNN, TextRNN, LSTM-ATT, RoBERTa-LSTM, RoBERTa-CNN, and other deep learning text classification models.
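The two ideas the abstract describes (dropout-based "positive pairs" for contrastive pre-training, and multi-size convolution kernels over encoder outputs) can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' code: the class name TextCNNHead, the kernel sizes (2, 3, 4), the temperature 0.05, and the stand-in linear encoder are all hypothetical; a real run would use the contrastively trained RoBERTa to produce the token states.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNNHead(nn.Module):
    """Convolution kernels of several sizes over token embeddings, max-pooled
    and concatenated, then a linear layer feeding a softmax classifier."""
    def __init__(self, hidden=768, num_classes=10, kernel_sizes=(2, 3, 4), channels=128):
        super().__init__()
        self.convs = nn.ModuleList(nn.Conv1d(hidden, channels, k) for k in kernel_sizes)
        self.fc = nn.Linear(channels * len(kernel_sizes), num_classes)

    def forward(self, token_states):              # (batch, seq_len, hidden)
        x = token_states.transpose(1, 2)          # Conv1d expects (batch, hidden, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))  # class logits (softmax in the loss)

def dropout_pair_contrastive_loss(encoder, sentence_batch, temperature=0.05):
    """Encode the same batch twice; active dropout makes the two views differ,
    so each sentence's second view is its positive and in-batch rows are negatives."""
    z1 = F.normalize(encoder(sentence_batch), dim=-1)
    z2 = F.normalize(encoder(sentence_batch), dim=-1)
    sim = z1 @ z2.T / temperature                 # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))            # positives sit on the diagonal
    return F.cross_entropy(sim, targets)

# Toy usage with stand-in tensors in place of RoBERTa outputs (assumption).
encoder = nn.Sequential(nn.Linear(768, 768), nn.Dropout(0.1))  # dropout active in train mode
sentences = torch.randn(8, 768)                   # pretend sentence embeddings
print(dropout_pair_contrastive_loss(encoder, sentences))

head = TextCNNHead()
tokens = torch.randn(8, 32, 768)                  # pretend RoBERTa token states
print(head(tokens).shape)                         # torch.Size([8, 10])
```

The different kernel widths play the role the abstract assigns them: a width-2 kernel responds to short word patterns, wider kernels to longer ones, so texts of mixed length contribute features at several scales before pooling.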

Keywords:

CNN; Text Classification; Contrastive learning; RoBERTa

Author affiliations:

  • [ 1 ] [Mu, Zhibo]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 2 ] [Zheng, Shuang]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
  • [ 3 ] [Wang, Quanmin]Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China


Source:

2021 INTERNATIONAL CONFERENCE ON BIG DATA ENGINEERING AND EDUCATION (BDEE 2021)

Year: 2021

Pages: 193-197

Citation counts:

WoS Core Collection citations: 3

Scopus citations: 5

ESI Highly Cited Paper listings: 0

