
Authors:

Xu, Dongwei | Zhang, Biao | Qiu, Qingwei | Li, Haijian | Guo, Haifeng | Wang, Baojie

Indexed in:

EI; Scopus; SCIE

Abstract:

The application of Deep Reinforcement Learning (DRL) has significantly impacted the development of autonomous driving technology in the field of intelligent transportation. However, in mixed traffic scenarios involving both human-driven vehicles (HDVs) and connected and autonomous vehicles (CAVs), challenges arise, particularly concerning information sharing and collaborative control among multiple intelligent agents using DRL. To address this issue, we propose a novel framework, namely Spatial-Temporal Deep Reinforcement Learning (ST-DRL), that enables collaborative control among multiple CAVs in mixed traffic scenarios. Initially, the traffic states involving multiple agents are constructed as graph-formatted data, which are then arranged sequentially to represent continuous time intervals. With this data representation, interactive behaviors and dynamic characteristics among multiple intelligent agents are implicitly captured. Subsequently, to better represent the spatial relationships between vehicles, a graph embedding network is utilized to encode the vehicle states, which improves the efficiency of information sharing among multiple intelligent agents. Additionally, a spatial-temporal feature fusion network module is designed, which integrates graph convolutional networks (GCN) and gated recurrent units (GRU). It can effectively fuse independent spatial-temporal features and further enhance collaborative control performance. Through extensive experiments conducted in the SUMO traffic simulator and comparison with baseline methods, it is demonstrated that the ST-DRL framework achieves higher success rates in mixed traffic scenarios and exhibits better trade-offs between safety and efficiency. The analysis of the results indicates that ST-DRL increases the task success rate by 15.6% compared to the baseline method, while reducing model training and task completion times by 26.6%.
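As a rough illustration of the spatial-temporal fusion module described in the abstract, the sketch below combines a single graph-convolution layer with a GRU: each timestep's vehicle graph is encoded spatially, and the resulting sequence is fused over time into one feature vector that a DRL policy could consume. This is a minimal PyTorch sketch based only on the abstract; the class names, layer sizes, normalized-adjacency input, and mean pooling over vehicles are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GCN + GRU spatial-temporal fusion module,
# in the spirit of the ST-DRL abstract (not the authors' code).
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, num_vehicles, in_dim) per-vehicle state features
        # adj: (batch, num_vehicles, num_vehicles) normalized adjacency
        return torch.relu(adj @ self.linear(x))


class SpatialTemporalFusion(nn.Module):
    """Encode each timestep's vehicle graph with a GCN, then fuse the sequence with a GRU."""

    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.gcn = GCNLayer(feat_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, states: torch.Tensor, adjs: torch.Tensor) -> torch.Tensor:
        # states: (batch, T, N, feat_dim) vehicle states over T timesteps
        # adjs:   (batch, T, N, N) vehicle-graph adjacency per timestep
        B, T, N, _ = states.shape
        # Spatial encoding per timestep, pooled over vehicles.
        spatial = [self.gcn(states[:, t], adjs[:, t]).mean(dim=1) for t in range(T)]
        seq = torch.stack(spatial, dim=1)   # (batch, T, hidden_dim)
        _, h = self.gru(seq)                # final hidden state summarizes the horizon
        return h.squeeze(0)                 # (batch, hidden_dim) feature for a DRL policy


# Smoke test with made-up sizes: 2 samples, 4-step history, 6 vehicles, 5 features each,
# identity adjacency (self-loops only) as a placeholder.
if __name__ == "__main__":
    fusion = SpatialTemporalFusion(feat_dim=5, hidden_dim=32)
    out = fusion(torch.randn(2, 4, 6, 5), torch.eye(6).repeat(2, 4, 1, 1))
    print(out.shape)  # torch.Size([2, 32])
```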

Keywords:

Graph neural network; Autonomous driving; Deep reinforcement learning; Gated recurrent unit

Author affiliations:

  • [ 1 ] [Xu, Dongwei]Zhejiang Univ Technol, Dept Inst Cyberspace Secur, 288 Liuhe Rd, Hangzhou 311121, Zhejiang, Peoples R China
  • [ 2 ] [Guo, Haifeng]Zhejiang Univ Technol, Dept Inst Cyberspace Secur, 288 Liuhe Rd, Hangzhou 311121, Zhejiang, Peoples R China
  • [ 3 ] [Zhang, Biao]Informat Syst GS1 China & Oriental Speedy Code Tec, Beijing 100011, Peoples R China
  • [ 4 ] [Qiu, Qingwei]Zhejiang Univ Technol, Dept Coll Informat Engn, 288 Liuhe Rd, Hangzhou 311121, Peoples R China
  • [ 5 ] [Li, Haijian]Beijing Univ Technol, Coll Metropolitan Transportat, 100 Pingyuan Village,Boya Rd, Beijing 100021, Peoples R China
  • [ 6 ] [Wang, Baojie]Changan Univ, Key Lab Transport Ind Management, Xian 710064, Peoples R China


Source:

APPLIED INTELLIGENCE

ISSN: 0924-669X

Year: 2024

Issue: 8

Volume: 54

Pages: 6400-6414

Impact Factor: 5.300 (JCR@2022)

Citation counts:

WoS Core Collection citations: 6

Scopus citations:

ESI Highly Cited Papers listed: 0

Wanfang citations:

Chinese citations:

Views in the last 30 days: 1

Affiliated department:
