
Authors:

Chen, Mengyao | Li, Yong | Li, Runqi

Indexed in:

EI, Scopus

Abstract:

In neural machine translation (NMT), recurrent neural networks, in particular long short-term memory (LSTM) networks and gated recurrent units (GRUs), were long regarded as the state of the art for sequence modeling and transduction problems such as language modeling and machine translation. A recurrent neural network processes the input sequence token by token, strictly from left to right or from right to left, one word at a time; this prevents parallel computation and results in slow running speed. With the rapid development of NMT network architectures, recurrent networks have been effectively displaced by convolutional networks and self-attention. Convolutional neural networks replaced recurrent networks because convolutions can be computed in parallel. The Transformer model replaces the LSTM with a purely self-attentional structure, abandoning the traditional encoder-decoder designs that must incorporate convolutional or recurrent components and relying only on the self-attention mechanism. Although the Transformer's biggest innovation is full self-attention, several other factors contribute as well, such as multi-head attention and residual connections. Our model flexibly combines several common building blocks of the Transformer architecture with a recurrent neural network. Borrowing the framework of the Transformer architecture without using full self-attention, experiments show that the recurrent model can come very close to the Transformer's performance. Our model achieves 26.7 BLEU on the WMT 2014 English-to-German translation task and 37.8 BLEU on the WMT 2014 English-to-French translation task.
These two scores alone are very close to those of the Transformer architecture with full attention, so even when a recurrent neural network is used in place of full self-attention, the model performs well on these data sets. © 2019 IOP Publishing Ltd. All rights reserved.
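The abstract describes wiring the Transformer's non-attention building blocks (residual connections, layer normalization, and the position-wise feed-forward sublayer) around a recurrent sublayer instead of self-attention. Below is a minimal NumPy sketch of that idea; the single-layer GRU sublayer, the dimensions, and the random initialization are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize each position over the feature dimension, as in Transformer sublayers.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gru_layer(x, Wz, Uz, Wr, Ur, Wh, Uh):
    # Minimal unidirectional GRU over a sequence x of shape (T, d).
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    T, d = x.shape
    h = np.zeros(d)
    out = np.zeros_like(x)
    for t in range(T):
        z = sigmoid(x[t] @ Wz + h @ Uz)          # update gate
        r = sigmoid(x[t] @ Wr + h @ Ur)          # reset gate
        h_cand = np.tanh(x[t] @ Wh + (r * h) @ Uh)
        h = (1 - z) * h + z * h_cand
        out[t] = h
    return out

def recurrent_transformer_block(x, params):
    # Transformer-style sublayer wiring (residual + layer norm),
    # but with a GRU substituted for the self-attention sublayer.
    h = layer_norm(x + gru_layer(x, *params["gru"]))
    # Position-wise feed-forward sublayer, as in the Transformer.
    W1, W2 = params["ffn"]
    ff = np.maximum(0.0, h @ W1) @ W2            # ReLU feed-forward
    return layer_norm(h + ff)

rng = np.random.default_rng(0)
d, d_ff, T = 8, 16, 5
params = {
    "gru": [rng.normal(scale=0.1, size=(d, d)) for _ in range(6)],
    "ffn": (rng.normal(scale=0.1, size=(d, d_ff)),
            rng.normal(scale=0.1, size=(d_ff, d))),
}
x = rng.normal(size=(T, d))
y = recurrent_transformer_block(x, params)
print(y.shape)  # (5, 8)
```

Because the GRU runs sequentially over time while the feed-forward sublayer is position-wise, only the attention-like sublayer loses parallelism; the residual/normalization wiring is unchanged from the Transformer.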

Keywords:

Brain; Computational linguistics; Computer aided language translation; Convolution; Convolutional neural networks; Intelligent computing; Memory architecture; Modeling languages; Network architecture; Recurrent neural networks; Signal processing

Author affiliations:

  • [ 1 ] [Chen, Mengyao] Institute of Information, Beijing University of Technology, No. 100 Pingleyuan, Beijing, China
  • [ 2 ] [Li, Yong] Institute of Information, Beijing University of Technology, No. 100 Pingleyuan, Beijing, China
  • [ 3 ] [Li, Runqi] Institute of Information, Beijing University of Technology, No. 100 Pingleyuan, Beijing, China

Corresponding author:

Email address:

Source:

ISSN: 1742-6588

Year: 2019

Issue: 5

Volume: 1237

Language: English

Citation counts:

WoS Core Collection citations: 0

Scopus citations: 6

ESI highly cited papers listed: 0

Wanfang citations:

Chinese-language citations:

Views in last 30 days: 3

Affiliated department:
