Indexed in:
Abstract:
Local path planning and obstacle avoidance in complex environments are two challenging problems in the research of intelligent robots. In this study, we develop a novel approach grounded in deep distributional reinforcement learning to address these challenges. Within this methodology, agents instantiated by deep neural networks perceive real-time local environmental information through sensor data, handling the inherent stochasticity of local path planning tasks in complex environments. End-to-end training is facilitated via distributional reinforcement learning algorithms and reward functions informed by heuristic knowledge. Optimal actions for path planning are determined through return value distributions. Finally, the simulation results show that the success rate of the proposed distributional algorithm is 98% in a random environment and 94% in a dynamic environment, demonstrating that the algorithm has better generalization and flexibility than its non-distributional counterpart.
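The abstract states that actions are selected from return value distributions rather than single expected values. As an illustration only, not the paper's actual architecture or code, the sketch below shows how a QR-DQN-style agent could represent per-action return distributions as quantiles and pick the action whose distribution has the highest mean; all layer sizes, dimensions, and names are assumptions introduced for this example.

```python
import torch
import torch.nn as nn

class QuantileQNetwork(nn.Module):
    """Maps a local sensor observation to a return distribution
    (a fixed set of quantiles) for each discrete action.
    Sizes are illustrative, not taken from the paper."""
    def __init__(self, obs_dim: int, n_actions: int, n_quantiles: int = 51):
        super().__init__()
        self.n_actions = n_actions
        self.n_quantiles = n_quantiles
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions * n_quantiles),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Shape: (batch, n_actions, n_quantiles)
        return self.net(obs).view(-1, self.n_actions, self.n_quantiles)

def select_action(net: QuantileQNetwork, obs: torch.Tensor) -> int:
    """Choose the action whose return distribution has the largest mean."""
    with torch.no_grad():
        quantiles = net(obs.unsqueeze(0))   # (1, n_actions, n_quantiles)
        q_values = quantiles.mean(dim=-1)   # expected return per action
        return int(q_values.argmax(dim=-1).item())
```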
Keywords:
Corresponding author information:
Email address:
Source:
NEUROCOMPUTING
ISSN: 0925-2312
Year: 2024
Volume: 599
Impact Factor: 6.000 (JCR@2022)
Affiliated department: