Abstract:
Text comprehension and information retrieval are two essential tasks that can be reinforced by modeling the semantic similarity of sentences and phrases. However, traditional LSTM-based methods for processing input sentences suffer from general problems: the semantic vectors cannot fully represent the entire input sequence, and the information contained in the earlier input is diluted or overwritten by later information. The longer the input sequence, the more serious this phenomenon becomes. To address these problems, we propose a new method based on self-attention. It incorporates the weights of salient words and highlights the comparison of similarity between keywords, whereas standard self-attention can only incorporate keyword weights into the raw sentences and describe positional information through position encoding. Our experiments show that the new method improves the performance of the model.
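The abstract does not give the exact formulation, but the following is a minimal sketch of the general idea it describes: scaled dot-product self-attention over token embeddings with sinusoidal position encoding, plus a hypothetical per-token keyword_weights bias that shifts attention toward key words. The function names and the log-weight biasing scheme are illustrative assumptions, not the paper's implementation.

import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position encoding, as in "Attention Is All You Need"."""
    pos = np.arange(seq_len)[:, None]              # (seq_len, 1)
    i = np.arange(d_model)[None, :]                # (1, d_model)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def self_attention(x, keyword_weights=None):
    """Scaled dot-product self-attention over x: (seq_len, d_model).

    keyword_weights is a hypothetical positive salience vector (seq_len,)
    that biases attention toward key words, mirroring the abstract's idea
    of incorporating the weights of special words.
    """
    d = x.shape[-1]
    scores = (x @ x.T) / np.sqrt(d)                # (seq_len, seq_len)
    if keyword_weights is not None:
        # Adding log-weights before softmax multiplies each token's
        # attention weight by its salience.
        scores = scores + np.log(keyword_weights)[None, :]
    scores -= scores.max(axis=-1, keepdims=True)   # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

# Toy usage: 5 tokens, 8-dim embeddings, position encoding added to inputs.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8)) + positional_encoding(5, 8)
kw = np.array([1.0, 1.0, 3.0, 1.0, 1.0])  # token 2 is treated as a key word
print(self_attention(x, kw).shape)         # (5, 8)

Biasing the pre-softmax scores (rather than the outputs) keeps the attention distribution normalized while still upweighting key tokens, which matches the stated goal of highlighting similarity comparisons between keywords.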
Source:
PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON NETWORK INFRASTRUCTURE AND DIGITAL CONTENT (IEEE IC-NIDC)
ISSN: 2374-0272
Year: 2018
Pages: 16-19
Language: English
WoS CC Cited Count: 4
ESI Highly Cited Papers on the List: 0