
Authors:

Jia, Xibin (Jia, Xibin.) (Scholar: 贾熹滨) | Li, Xiaobo (Li, Xiaobo.) | Jin, Ya (Jin, Ya.) | Miao, Jun (Miao, Jun.)

Indexed in:

EI Scopus SCIE

Abstract:

Deep neural networks have achieved significant success in learning representations that traditionally relied on hand-crafted features, especially for complex objects. Over the decades, this learning paradigm has attracted thousands of researchers and has been widely used in speech, visual, and text recognition. One deep network, the multi-layer extreme learning machine (ML-ELM), achieves good performance in representation learning while inheriting the advantages of the extreme learning machine (ELM): fast learning and a strong approximation capability. However, as with most deep networks, the ML-ELM's performance largely depends on the probability distribution of the training data. In this paper, we propose an improved ML-ELM that enhances the contributions of local significant regions at the input end, following the idea of the selective attention mechanism. To avoid exploring the complex principles of the attention system and to focus on clarifying our local regional enhancement idea, the paper selects only two typical attention regions. One is the geometric central region, which normally attracts human attention due to the focal attention mechanism. The other is the task-driven region of interest, with face recognition as an example. Comprehensive experiments are conducted on three public datasets: MNIST, NORB, and ORL. The comparison results demonstrate that our proposed region-enhanced ML-ELM (RE-ML-ELM) improves important-feature learning by utilizing a priori knowledge of attention, and achieves a higher recognition rate than both the standard ML-ELM and the basic ELM. Moreover, because it benefits from the non-iterative parameter training shared by other ELMs, our algorithm outperforms most state-of-the-art deep networks, such as the deep belief network (DBN), in training efficiency.
Furthermore, because of its deep structure with fewer hidden nodes per layer, our proposed RE-ML-ELM achieves training efficiency comparable to that of the ML-ELM and trains faster than the basic ELM, which, as a wide single-layer network, normally needs more hidden nodes to reach recognition accuracy similar to that of deep networks. By combining a priori knowledge of the human selective attention system with data-driven learning, our proposed region-enhanced ML-ELM improves image classification performance. We believe that deliberately combining psychological knowledge with data-driven learning algorithms has the potential to improve their cognitive computing ability.
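The abstract describes two ingredients: scaling up a local significant region (e.g., the geometric centre) at the input end, and the ELM's non-iterative training, where output weights are solved in closed form via a pseudoinverse. The sketch below illustrates both on a basic single-layer ELM; it is a minimal illustration of the idea, not the authors' implementation, and the enhancement gain, region size, and synthetic data are illustrative assumptions.

```python
import numpy as np

def enhance_central_region(images, side, gain=2.0):
    """Scale the central patch of each flattened side-by-side image.

    The gain factor and quarter-margin region are illustrative choices,
    not values taken from the paper.
    """
    X = images.reshape(-1, side, side).astype(float).copy()
    q = side // 4
    X[:, q:side - q, q:side - q] *= gain   # boost the central region
    return X.reshape(len(X), -1)

def train_elm(X, T, n_hidden, rng):
    """Basic ELM: random fixed hidden layer, least-squares output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    beta = np.linalg.pinv(H) @ T           # closed-form, non-iterative solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Tiny synthetic demo: two classes of 8x8 "images" whose difference
# lies in the central region, so enhancing that region helps.
rng = np.random.default_rng(0)
side, n = 8, 100
base = rng.normal(0.0, 0.1, (2 * n, side, side))
base[n:, 3:5, 3:5] += 1.0                  # class 1: brighter centre
X = enhance_central_region(base.reshape(2 * n, -1), side)
T = np.vstack([np.tile([1.0, 0.0], (n, 1)),
               np.tile([0.0, 1.0], (n, 1))])  # one-hot targets
W, b, beta = train_elm(X, T, n_hidden=50, rng=rng)
acc = (predict_elm(X, W, b, beta).argmax(1) == T.argmax(1)).mean()
print(f"training accuracy: {acc:.2f}")
```

The closed-form `pinv` solve is what gives ELM-family networks their training-speed advantage over iteratively trained deep networks such as the DBN mentioned above; the RE-ML-ELM of the paper stacks such layers and applies the regional enhancement at the input.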

Keywords:

Local significant region; Region-enhanced ML-ELM; Image classification; Selective attention mechanism

Author affiliations:

  • [ 1 ] [Jia, Xibin]Beijing Univ Technol, Fac Informat Technol, Beijing Municipal Key Lab Multimedia & Intelligen, Beijing 100124, Peoples R China
  • [ 2 ] [Li, Xiaobo]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 3 ] [Jin, Ya]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
  • [ 4 ] [Miao, Jun]Beijing Informat Sci & Technol Univ, Sch Comp Sci, Beijing Key Lab Internet Culture & Digital Dissem, Beijing 100101, Peoples R China

Corresponding author:

  • [Miao, Jun]Beijing Informat Sci & Technol Univ, Sch Comp Sci, Beijing Key Lab Internet Culture & Digital Dissem, Beijing 100101, Peoples R China

Source:

COGNITIVE COMPUTATION

ISSN: 1866-9956

Year: 2019

Issue: 1

Volume: 11

Pages: 101-109

Impact Factor: 5.400 (JCR@2022)

ESI discipline: COMPUTER SCIENCE

ESI highly cited threshold: 58

Citations:

WoS Core Collection citations: 7

Scopus citations: 8

ESI highly cited papers listed: 0

