Indexed in:
Abstract:
Text comprehension and information retrieval are two essential tasks that can be reinforced by modeling the semantic similarity of sentences and phrases. However, traditional LSTM-based methods for processing input sentences suffer from general problems: their semantic vectors cannot fully represent the entire input sequence, and the information contained in the early input is diluted or overwritten by later information. The longer the input sequence, the more serious this phenomenon becomes. To address these problems, we propose a new method based on self-attention. It can incorporate the weights of special words and highlight the comparison of similarity between key words. Unlike normal self-attention, which can only incorporate the weights of the key words into the original sentences and describe position information through position encoding, our method emphasizes the similarity comparison of key words. Our experiments show that the new method improves the performance of the model.
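For illustration only, below is a minimal sketch of the general pipeline the abstract describes: token embeddings receive position information through position encoding, are re-weighted by self-attention, and are pooled into sentence vectors whose similarity is then compared. The sinusoidal encoding, scaled dot-product attention, mean pooling, and cosine similarity are common choices assumed here for concreteness; all names and dimensions are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def position_encoding(seq_len, d_model):
    """Sinusoidal position encoding, a standard way to inject word order."""
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
    i = np.arange(d_model)[None, :]              # (1, d_model)
    angle = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def self_attention(x):
    """Scaled dot-product self-attention; returns re-weighted token vectors."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ x                           # (seq_len, d)

def sentence_vector(embeddings):
    """Add position information, apply self-attention, then mean-pool."""
    x = embeddings + position_encoding(*embeddings.shape)
    return self_attention(x).mean(axis=0)

def similarity(emb_a, emb_b):
    """Cosine similarity between two attention-weighted sentence vectors."""
    a, b = sentence_vector(emb_a), sentence_vector(emb_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage: two 5-token sentences with 16-dimensional random embeddings.
rng = np.random.default_rng(0)
print(similarity(rng.normal(size=(5, 16)), rng.normal(size=(5, 16))))
```

In this sketch the attention weights play the role the abstract assigns to key-word weighting: tokens that are strongly related to the rest of the sentence contribute more to the pooled sentence vector before similarity is computed.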
Keywords:
Corresponding author information:
Email address:
Source:
PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON NETWORK INFRASTRUCTURE AND DIGITAL CONTENT (IEEE IC-NIDC)
ISSN: 2374-0272
Year: 2018
Pages: 16-19
Language: English
Affiliated department: