Indexed in:
Abstract:
Attention mechanisms have achieved remarkable success in image captioning under neural encoder-decoder frameworks. However, existing methods introduce attention into the language model, e.g., the LSTM (long short-term memory), in a straightforward way: the attention module is attached to the LSTM outside its core hidden layer, and the current attention is independent of previous ones. In this paper, by exploring the inner relationship between the attention mechanism and the gates of the LSTM, we propose a new attention-gated LSTM model (AGL) that introduces dynamic attention into the language model. In this method, visual attention is incorporated into the output gate of the LSTM and propagates along with the sequential cell state. The attention in AGL thus acquires a dynamic character: the currently focused visual region can give remote guidance to later states. Quantitative and qualitative experiments conducted on the MS COCO dataset demonstrate the advantage of the proposed method. © 2019 IEEE.
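To make the mechanism concrete, below is a minimal numpy sketch of one step of an attention-gated LSTM cell as the abstract describes it: visual attention over image regions is computed from the previous hidden state, and the attended context enters the output gate, so the focus influences the cell state carried to later steps. All weight names (`Wa`, `Wi`, `Wf`, `Wg`, `Wo`, `Wz`), dimensions, and the bilinear attention form are hypothetical illustrations, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def agl_step(x, h_prev, c_prev, V, p):
    """One step of an attention-gated LSTM cell (illustrative sketch).

    x: word embedding (dx,); h_prev, c_prev: hidden/cell state (dh,);
    V: K visual region features (K, dv); p: hypothetical weight dict.
    """
    # Bilinear attention over image regions, conditioned on h_prev.
    alpha = softmax(V @ p["Wa"] @ h_prev)      # attention weights, (K,)
    z = alpha @ V                              # attended visual context, (dv,)

    hx = np.concatenate([h_prev, x])
    i = sigmoid(p["Wi"] @ hx)                  # input gate
    f = sigmoid(p["Wf"] @ hx)                  # forget gate
    g = np.tanh(p["Wg"] @ hx)                  # candidate cell update
    # The attended context modulates the output gate, so the current
    # visual focus propagates along the cell state to later steps.
    o = sigmoid(p["Wo"] @ hx + p["Wz"] @ z)    # attention-gated output gate
    c = f * c_prev + i * g
    h = o * np.tanh(c)
    return h, c, alpha

# Toy dimensions for a single step.
dx, dh, dv, K = 8, 16, 12, 5
p = {
    "Wa": rng.standard_normal((dv, dh)) * 0.1,
    "Wi": rng.standard_normal((dh, dh + dx)) * 0.1,
    "Wf": rng.standard_normal((dh, dh + dx)) * 0.1,
    "Wg": rng.standard_normal((dh, dh + dx)) * 0.1,
    "Wo": rng.standard_normal((dh, dh + dx)) * 0.1,
    "Wz": rng.standard_normal((dh, dv)) * 0.1,
}
h, c, alpha = agl_step(rng.standard_normal(dx), np.zeros(dh), np.zeros(dh),
                       rng.standard_normal((K, dv)), p)
```

The key contrast with standard visual attention is the placement: instead of concatenating the context with the LSTM input or mixing it into the hidden state afterwards, the context here reshapes the output gate itself, which is what lets the focused region steer what the cell state exposes in subsequent steps.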
Keywords:
Corresponding author information:
Email address: