Indexed by:
Abstract:
A fine-tuning method was introduced with BERT, a pre-trained model widely used in NLP. Both BERT and GPT hold that a standard fine-tuning model should keep the difference between the pre-trained architecture and the final downstream architecture minimal, and that a task-specific model will harm the results. In this paper, we present a two-stream model that uses the hidden states pre-trained in BERT. To verify the effectiveness of the method, we use sentiment analysis, a very simple text classification task in natural language processing, to evaluate the results. Experiments on Yelp-review-polarity show that, with the same training data as other fine-tuning methods, we can reduce the error by 0.21%. With the same setup, we can reduce the error on Amazon-review-polarity by 0.13%. © 2021 IEEE.
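The abstract does not detail the two-stream architecture, so the following is only a minimal sketch of one plausible reading: one stream taken from the final-layer [CLS] vector and one from a mean-pooled intermediate hidden layer, fused by concatenation. The model name, the intermediate layer index, and the fusion choice are illustrative assumptions, not the authors' exact design.

# Hypothetical sketch of a two-stream classifier over BERT hidden states.
# Stream choices and fusion are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class TwoStreamBertClassifier(nn.Module):
    def __init__(self, num_labels: int = 2, intermediate_layer: int = 8):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.intermediate_layer = intermediate_layer  # assumed stream source
        hidden = self.bert.config.hidden_size
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids,
                            attention_mask=attention_mask,
                            output_hidden_states=True)
        # Stream 1: [CLS] representation from the final layer.
        cls_stream = outputs.last_hidden_state[:, 0]
        # Stream 2: mean-pooled token states from an intermediate layer.
        mid = outputs.hidden_states[self.intermediate_layer]
        mask = attention_mask.unsqueeze(-1).float()
        mean_stream = (mid * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
        # Fuse the two streams and classify (binary polarity labels).
        return self.classifier(torch.cat([cls_stream, mean_stream], dim=-1))

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TwoStreamBertClassifier()
batch = tokenizer(["The food was great!"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])

Fine-tuning such a model end to end on Yelp-review-polarity or Amazon-review-polarity would follow the usual cross-entropy setup; the sketch only shows how pre-trained hidden states can feed two streams.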
Keywords:
Corresponding author:
Email address:
Source:
Year: 2021
Pages: 905-908
Language: English
Affiliated department: