
Author:

Xiao, Yinlong | Ji, Zongcheng | Li, Jianqiang | Han, Mei

Indexed by:

EI; Scopus; SCIE

Abstract:

Integrating lexical knowledge in Chinese named entity recognition (NER) has been proven effective. Among the existing methods, Flat-LAttice Transformer (FLAT) has achieved great success in both performance and efficiency. FLAT performs lexical enhancement for each sentence by constructing a flat lattice (i.e., a sequence of tokens including the characters in a sentence and the matched words in a lexicon) and calculating self-attention with a fully-connected structure. However, the different interactions between tokens, which can bring different aspects of semantic information for Chinese NER, cannot be well captured by self-attention with a fully-connected structure. In this paper, we propose a novel Multi-View Transformer (MVT) to effectively capture the different interactions between tokens. We first define four views to capture four different token interaction structures. We then construct a view-aware visible matrix for each view according to the corresponding structure and introduce a view-aware dot-product attention for each view to limit the attention scope by incorporating the corresponding visible matrix. Finally, we design three different MVT variants to fuse the multi-view features at different levels of the Transformer architecture. Experimental results conducted on four public Chinese NER datasets show the effectiveness of the proposed method. Specifically, on the most challenging dataset Weibo, which is in an informal text style, MVT outperforms FLAT in F1 score by 2.56%, and when combined with BERT, MVT outperforms FLAT in F1 score by 3.03%.
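The core mechanism the abstract describes is a dot-product attention whose scope is limited by a per-view visible matrix: token i may only attend to token j when the view's visibility structure allows it. The following is a minimal NumPy sketch of that idea, not the paper's exact formulation; the function name, shapes, and the single-head setting are illustrative assumptions.

```python
import numpy as np

def view_aware_attention(Q, K, V, visible):
    """Scaled dot-product attention restricted by a 0/1 visible matrix.

    Q, K, V: (n, d) arrays for n tokens (characters plus matched lexicon
    words in a flat lattice). visible: (n, n) array where visible[i, j] = 1
    means token i may attend to token j under this view.
    Illustrative sketch only; shapes and naming are assumptions.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Mask out pairs the view declares invisible before the softmax.
    scores = np.where(visible.astype(bool), scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Running this with a different visible matrix per view yields one feature set per view; the paper's three MVT variants then differ in where these multi-view features are fused inside the Transformer.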

Keyword:

Chinese NER; lexicon-based Chinese NER; multi-view Transformer; Transformer; Transformers; Context modeling; Speech processing; Semantics; Urban areas; Bridges; Rivers

Author Community:

  • [ 1 ] [Xiao, Yinlong] Ping An Technol, Beijing 100124, Peoples R China
  • [ 2 ] [Ji, Zongcheng] Ping An Technol, Beijing 100124, Peoples R China
  • [ 3 ] [Xiao, Yinlong] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 4 ] [Li, Jianqiang] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 5 ] [Han, Mei] PAII Inc, Palo Alto, CA 94306 USA

Reprint Author's Address:

  • [Ji, Zongcheng] Ping An Technol, Beijing 100027, Peoples R China



Source :

IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING

ISSN: 2329-9290

Year: 2024

Volume: 32

Page: 3656-3668

Impact Factor: 5.400 (JCR@2022)

SCOPUS Cited Count: 4

ESI Highly Cited Papers on the List: 0

