Abstract:
In the field of machine learning, a core technique of artificial intelligence, reinforcement learning is a class of strategies that focuses on learning through the interaction between a machine and its environment. As an important branch of reinforcement learning, the adaptive critic technique is closely related to dynamic programming and optimization design. To effectively solve optimal control problems of complex dynamical systems, the adaptive dynamic programming approach was proposed by combining the adaptive critic and dynamic programming with artificial neural networks, and it has attracted extensive attention. In particular, great progress has been made on robust adaptive critic control design under uncertainties and disturbances, and the approach is now regarded as a necessary avenue toward constructing intelligent learning systems and achieving true brain-like intelligence. This paper presents a comprehensive survey of learning-based robust adaptive critic control theory and methods, including self-learning robust stabilization, adaptive trajectory tracking, event-driven robust control, and adaptive H∞ control design. It covers a general analysis of adaptive critic systems in terms of stability, convergence, optimality, and robustness. In addition, considering emerging techniques such as artificial intelligence, big data, deep learning, and knowledge automation, it also discusses future prospects of robust adaptive critic control. Copyright © 2019 Acta Automatica Sinica. All rights reserved.
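To make the adaptive critic idea concrete, the following is a minimal illustrative sketch, not taken from the surveyed work: a tabular "critic" learns the value function of a small chain environment by temporal-difference updates, and the "actor" is the greedy one-step policy induced by the learned values. The environment, learning rate, and all other parameters here are assumptions chosen purely for illustration; practical adaptive dynamic programming replaces the table with a neural-network approximator.

```python
import numpy as np

n_states = 5            # states 0..4; state 4 is the goal (illustrative toy problem)
gamma = 0.9             # discount factor
alpha = 0.5             # critic learning rate
V = np.zeros(n_states)  # critic: estimated value of each state

def step(s, a):
    """Environment model: action a in {-1, +1} moves along the chain.
    Reward 1.0 on reaching the goal state, 0 otherwise."""
    s_next = min(max(s + a, 0), n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

rng = np.random.default_rng(0)
for episode in range(200):
    s = 0
    while s != n_states - 1:
        a = rng.choice([-1, 1])          # explore with random actions
        s_next, r = step(s, a)
        # critic update: TD(0) step toward the one-step bootstrap target
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

def actor(s):
    """Actor: greedy one-step lookahead under the learned critic values."""
    best_a, best_q = None, -np.inf
    for a in (-1, 1):
        s_next, r = step(s, a)
        q = r + gamma * V[s_next]
        if q > best_q:
            best_a, best_q = a, q
    return best_a

print("values:", np.round(V, 3))
print("policy:", [actor(s) for s in range(n_states - 1)])
```

After training, the critic assigns higher values to states nearer the goal, and the induced actor moves toward it; this critic-then-actor structure is the core loop that adaptive dynamic programming scales up with function approximation.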