Indexed in:
Abstract:
In this paper, a neural-network-based policy learning method is established to solve robust stabilization for a class of continuous-time nonlinear systems with both internal dynamic uncertainties and input matrix uncertainties. First, the robust stabilization problem is converted into an optimal control problem by choosing an appropriate cost function and proving system stability. Then, in order to solve the Hamilton-Jacobi-Bellman equation, a policy iteration algorithm is employed by constructing and training a critic neural network. The approximate optimal control policy can be obtained by this algorithm, and the solution of the robust stabilization problem can be derived as well. Finally, a numerical example and an experimental simulation are provided to verify the effectiveness of the proposed strategy.
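The abstract describes a critic-network policy iteration scheme for approximately solving the Hamilton-Jacobi-Bellman equation. Below is a minimal sketch of that general idea on an assumed toy scalar system; the dynamics f and g, the polynomial critic features, and the cost weights are illustrative placeholders and are not taken from the paper.

```python
# Minimal sketch of critic-based policy iteration for a continuous-time
# affine nonlinear system x_dot = f(x) + g(x)*u.  The scalar dynamics f, g,
# the polynomial critic features, and all weights below are illustrative
# assumptions for demonstration and are not taken from the paper.
import numpy as np

f = lambda x: -x + 0.25 * np.sin(x)        # assumed internal dynamics
g = lambda x: 1.0                          # assumed input gain
Q = lambda x: x**2                         # state penalty (stand-in for the
                                           # "appropriate cost function")
R = 1.0                                    # control penalty

# Critic: V(x) ~ w^T [x^2, x^4]; phi_grad gives the feature gradient dphi/dx.
phi_grad = lambda x: np.array([2.0 * x, 4.0 * x**3])

def improve(w):
    """Policy improvement step: u(x) = -(1/2) R^{-1} g(x) dV/dx."""
    return lambda x: -0.5 / R * g(x) * (phi_grad(x) @ w)

def evaluate(u, xs):
    """Policy evaluation: least-squares fit of the critic weights so that the
    HJB residual  dV/dx (f + g u) + Q + R u^2  is approximately zero."""
    A = np.array([phi_grad(x) * (f(x) + g(x) * u(x)) for x in xs])
    b = np.array([-(Q(x) + R * u(x) ** 2) for x in xs])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

xs = np.linspace(-2.0, 2.0, 41)            # sampled states for the fit
u = lambda x: 0.0                          # initial admissible policy (f alone is stable here)
w = np.zeros(2)
for _ in range(20):                        # policy iteration loop
    w_new = evaluate(u, xs)
    if np.linalg.norm(w_new - w) < 1e-8:   # stop when the critic has converged
        break
    w, u = w_new, improve(w_new)

print("critic weights:", w)
print("approximate optimal control at x=1:", improve(w)(1.0))
```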
Keywords:
Corresponding author:
Source:
2020 CHINESE AUTOMATION CONGRESS (CAC 2020)
ISSN: 2688-092X
Year: 2020
Pages: 987-992
Language: English
Affiliated department: