Indexed in:
Abstract:
Aiming at the problems of sparse Chinese text features and the mixing of long and short texts, which make word-vector features difficult to extract, as well as the single convolution kernel and redundant parameters of traditional neural networks, the ACL-RoBERTa-CNN text classification model uses contrastive learning to learn a uniformly distributed vector representation, which regularizes the embedding space. The same sentence is passed through dropout twice to form "positive pairs", replacing traditional data augmentation. The contrastively trained RoBERTa pre-trained model is then used to produce word vectors, which are fed to a CNN layer where convolution kernels of different sizes capture information about words of different lengths in each sample; finally, a Softmax classifier classifies the extracted features. Experimental results on two public datasets show that the classification performance of ACL-RoBERTa-CNN is better than that of TextCNN, TextRNN, LSTM-ATT, RoBERTa-LSTM, RoBERTa-CNN, and other deep learning text classification models.
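The multi-kernel CNN head described in the abstract (convolution kernels of several sizes over the word vectors, max-over-time pooling, then Softmax) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the kernel sizes (2, 3, 4), filter count, sequence length, and the 768-dimensional vectors standing in for RoBERTa outputs are all assumptions, and the weights are random rather than trained.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def conv_max_pool(x, kernel_size, num_filters, rng):
    """1-D convolution over word vectors with ReLU and max-over-time pooling.

    x: (seq_len, dim) matrix of word vectors.
    Returns a (num_filters,) feature vector for this kernel size.
    """
    seq_len, dim = x.shape
    # Random filters for illustration; in the real model these are learned.
    w = rng.standard_normal((num_filters, kernel_size, dim)) * 0.1
    feats = np.empty((seq_len - kernel_size + 1, num_filters))
    for i in range(seq_len - kernel_size + 1):
        window = x[i:i + kernel_size]                              # (kernel_size, dim)
        feats[i] = np.maximum((w * window).sum(axis=(1, 2)), 0.0)  # ReLU activation
    return feats.max(axis=0)                                       # max-over-time pooling

rng = np.random.default_rng(0)
x = rng.standard_normal((20, 768))   # stand-in for RoBERTa token vectors (assumed dims)

# Kernels of different sizes capture word spans of different lengths.
pooled = np.concatenate([conv_max_pool(x, k, 4, rng) for k in (2, 3, 4)])

num_classes = 5                      # assumed label count for illustration
W = rng.standard_normal((num_classes, pooled.size)) * 0.01
probs = softmax(W @ pooled)          # class probability distribution
```

Concatenating the pooled outputs of the different kernel sizes is what lets the model combine short- and long-span word features before the Softmax layer, which is the mechanism the abstract credits for handling mixed-length Chinese text.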
Keywords:
Corresponding author:
Email address:
Source:
2021 INTERNATIONAL CONFERENCE ON BIG DATA ENGINEERING AND EDUCATION (BDEE 2021)
Year: 2021
Pages: 193-197
Department: