
Authors:

Qiao, Junfei (乔俊飞) | Li, Sanyi | Han, Honggui (韩红桂) | Wang, Dianhui

Indexed in:

EI, Scopus, SCIE

Abstract:

Feedforward neural networks (FNNs) with a single hidden layer have been widely applied in data modeling due to their universal approximation capability for nonlinear maps. However, this theoretical result provides no guideline for determining the architecture of the model in practice. Thus, research on the self-organization of FNNs is useful and critical for effective data modeling. This paper proposes a hybrid constructing and pruning strategy (HCPS) to address this problem, in which mutual information (MI) and sensitivity analysis (SA) are employed to measure, respectively, the amount of shared information between neurons at the hidden layer and the contribution rate of each hidden neuron. HCPS merges hidden neurons when their MI value becomes too high, deletes hidden neurons when their contribution rates are sufficiently small, and splits hidden neurons when their contribution rates are very large. For each pattern fed into the model as a training sample, the weights of the neural network are updated so that the model's output remains unchanged during structural adjustment. HCPS aims to obtain a compact model by eliminating redundant neurons without degrading the instant modeling performance, which is associated with the model's generalization ability. The proposed algorithm is evaluated on several benchmark data sets, including classification problems, a nonlinear system identification problem, a time-series prediction problem, and a real-world application to PM2.5 prediction. Simulation results and comparisons demonstrate that the proposed method performs favorably and improves on existing work in terms of modeling performance. (C) 2017 Elsevier B.V. All rights reserved.
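The merge/delete/split rule summarized in the abstract can be sketched as a decision step over per-neuron statistics. This is a minimal illustrative sketch, not the paper's implementation: the threshold names and values (`mi_merge`, `contrib_delete`, `contrib_split`) are hypothetical assumptions, and the MI and contribution values are presumed to be precomputed by the MI and sensitivity-analysis steps the paper describes.

```python
# Hypothetical sketch of the HCPS structural-adjustment decision described in
# the abstract: merge hidden neurons whose mutual information (MI) is too
# high, delete neurons whose contribution rate is too small, and split
# neurons whose contribution rate is very large. Thresholds are illustrative
# assumptions, not values from the paper.

def hcps_actions(pairwise_mi, contributions,
                 mi_merge=0.9, contrib_delete=0.05, contrib_split=0.5):
    """Return a list of (action, neuron indices) structural adjustments.

    pairwise_mi   -- dict {(i, j): mi} of normalized MI between hidden
                     neurons i < j (assumed precomputed)
    contributions -- per-neuron contribution rates from sensitivity analysis
                     (assumed normalized to sum to 1)
    """
    actions = []
    merged = set()
    # Merge the most redundant (highest-MI) neuron pairs first; each neuron
    # takes part in at most one merge per adjustment step.
    for (i, j), mi in sorted(pairwise_mi.items(), key=lambda kv: -kv[1]):
        if mi > mi_merge and i not in merged and j not in merged:
            actions.append(("merge", (i, j)))
            merged.update((i, j))
    for k, c in enumerate(contributions):
        if k in merged:
            continue
        if c < contrib_delete:
            actions.append(("delete", (k,)))   # negligible contribution
        elif c > contrib_split:
            actions.append(("split", (k,)))    # overloaded neuron
    return actions
```

In the paper, each such adjustment is accompanied by a weight update that keeps the network's output unchanged; that compensation step is omitted here.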

Keywords:

Feedforward neural network; Mutual information; Self-organization structure; Sensitivity analysis

Author affiliations:

  • [ 1 ] [Qiao, Junfei]Beijing Univ Technol, Coll Elect Informat & Control Engn, Beijing 100124, Peoples R China
  • [ 2 ] [Li, Sanyi]Beijing Univ Technol, Coll Elect Informat & Control Engn, Beijing 100124, Peoples R China
  • [ 3 ] [Han, Honggui]Beijing Univ Technol, Coll Elect Informat & Control Engn, Beijing 100124, Peoples R China
  • [ 4 ] [Wang, Dianhui]Beijing Univ Technol, Coll Elect Informat & Control Engn, Beijing 100124, Peoples R China
  • [ 5 ] [Qiao, Junfei]Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 6 ] [Li, Sanyi]Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 7 ] [Han, Honggui]Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
  • [ 8 ] [Wang, Dianhui]La Trobe Univ, Dept Comp Sci & Informat Technol, Melbourne, Vic 3083, Australia

Corresponding author:

  • [Wang, Dianhui]La Trobe Univ, Dept Comp Sci & Informat Technol, Melbourne, Vic 3083, Australia

Source:

NEUROCOMPUTING

ISSN: 0925-2312

Year: 2017

Volume: 262

Pages: 28-40

Impact factor: 6.000 (JCR@2022)

ESI discipline: COMPUTER SCIENCE

ESI highly-cited threshold: 102

CAS journal ranking: Tier 2

Citation counts:

WoS Core Collection citations: 22

Scopus citations: 27

ESI highly-cited papers listed: 0
