Authors:

Zhang, Li | Hu, Yuxuan

Indexed in:

EI

Abstract:

A fine-tuning method was introduced with BERT, a pre-trained model widely used in NLP. Both BERT and GPT hold that a standard fine-tuning setup should keep the difference between the pre-trained architecture and the final downstream architecture minimal, and that a task-specific model will harm the result. In this paper, we propose a two-stream model that uses the hidden states pre-trained in BERT. To verify the validity of the method, we use sentiment analysis, a simple text classification task in natural language processing. Experiments on Yelp-review-polarity show that, using the same training data as other fine-tuning methods, we can reduce the error by 0.21%. With the same setup, we can reduce the error on Amazon-review-polarity by 0.13%. © 2021 IEEE.
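
The abstract does not describe the two-stream architecture in detail, so the following is only a minimal sketch of the kind of setup it names: a downstream sentiment (polarity) classifier built on BERT's pre-trained hidden states. The choice of the two streams (the [CLS] state and a masked mean over all token states), the checkpoint bert-base-uncased, and the class name TwoStreamClassifier are assumptions made for illustration, not the authors' actual model.

```python
# A minimal sketch, NOT the authors' exact model: the two branches below
# (CLS pooling vs. masked mean pooling of BERT hidden states) are an
# assumed reading of "two-stream", used only to illustrate fine-tuning
# BERT hidden states for binary polarity classification.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class TwoStreamClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # Both streams are views of the same pre-trained hidden states,
        # concatenated before the classification head.
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state                       # (B, T, H)
        cls_stream = h[:, 0]                            # [CLS] token state
        mask = attention_mask.unsqueeze(-1).float()
        mean_stream = (h * mask).sum(1) / mask.sum(1)   # masked mean pooling
        return self.classifier(torch.cat([cls_stream, mean_stream], dim=-1))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TwoStreamClassifier()
batch = tokenizer(["great food", "terrible service"],
                  padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])  # shape (2, 2)
```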

Keywords:

Classification (of information); Power electronics; Sentiment analysis; Tuning

Author affiliations:

  • [ 1 ] [Zhang, Li] Software Institute, Beijing University of Technology, Beijing, China
  • [ 2 ] [Hu, Yuxuan] Software Institute, Beijing University of Technology, Beijing, China

Corresponding author information:

Email address:



Source:

Year: 2021

Pages: 905-908

Language: English

Citations:

WoS Core Collection citations: 0

Scopus citations: 3

ESI Highly Cited Papers listed: 0

Wanfang citations:

Chinese citations:

Views in the last 30 days: 2

Affiliated department:
