Indexed in:
Abstract:
To overcome the drawbacks of the gradient descent (GD) algorithm, namely slow convergence and a tendency to fall into local minima, this paper proposes an adaptive optimum steepest descent (AOSD) learning algorithm for the recurrent radial basis function (RRBF) neural network. Unlike the traditional GD algorithm, the AOSD learning algorithm incorporates an adaptive learning rate, which accelerates the convergence of training and improves the network's performance in nonlinear system modeling. Several comparisons show that the proposed RRBF network converges faster and achieves better prediction performance. © 2017 Technical Committee on Control Theory, CAA.
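The abstract only states that an adaptive learning rate is combined with steepest descent; the AOSD update rule itself is not given here. The sketch below is therefore a minimal illustration of the general idea, assuming a common heuristic (grow the rate when the error decreases, shrink it when the error increases). The function names and the adaptation constants are assumptions, not the paper's method.

```python
# Minimal sketch of steepest descent with an adaptive learning rate.
# The adaptation rule (accept and enlarge the rate when the error drops,
# reject and reduce the rate otherwise) is assumed for illustration only;
# it is not the AOSD rule from the paper, which the abstract does not specify.
import numpy as np

def adaptive_steepest_descent(loss, grad, w0, lr=0.1, grow=1.05, shrink=0.5,
                              max_iter=1000, tol=1e-8):
    w = np.asarray(w0, dtype=float)
    e = loss(w)
    for _ in range(max_iter):
        g = grad(w)
        if np.linalg.norm(g) < tol:        # stop when the gradient vanishes
            break
        w_new = w - lr * g                 # steepest-descent step
        e_new = loss(w_new)
        if e_new < e:                      # step helped: accept it, enlarge the rate
            w, e, lr = w_new, e_new, lr * grow
        else:                              # step overshot: keep w, shrink the rate
            lr *= shrink
    return w, e

# Toy quadratic example (placeholder problem, not the paper's benchmark):
if __name__ == "__main__":
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    b = np.array([1.0, -2.0])
    loss = lambda w: 0.5 * w @ A @ w - b @ w
    grad = lambda w: A @ w - b
    w_opt, e_opt = adaptive_steepest_descent(loss, grad, w0=[0.0, 0.0])
    print(w_opt, e_opt)
```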
Keywords:
Corresponding author information:
Email address: