Indexed in:
Abstract:
In recent years, autonomous driving has become a hot research topic, especially in complex urban road environments. Vision-based algorithms are the most widely used approach to autonomous driving. Traditional conditional imitation learning adopts an end-to-end deep learning network, but it lacks interpretability, its feature extraction and representation capacity is limited, and problems remain in local planning and in handling fine details. To address these problems, we propose a deep residual network architecture with a dual attention module to learn driving skills that are closer to human behavior. A deeper residual network architecture is used to further improve the network's ability to extract detailed features. The dual attention module adaptively integrates the long-range global context dependencies of the image in both the spatial and the feature (channel) dimensions, improving the network's representational power. At the same time, to make full use of the multi-period temporal information carried by the camera images themselves, we redesign the network architecture to extract and fuse the three-way temporal features together with high-level semantics, increasing the interpretability of the model's temporal information. The method is evaluated on the CARLA simulator. The experimental results show that, compared with the baseline algorithm, it achieves better driving performance: deeper feature extraction and multi-period information fusion effectively improve the agent's driving ability and completion rate. © 2020 ACM.
Keywords:
Corresponding author information:
Email address:
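The abstract above describes a dual attention module that models long-range dependencies in both the spatial and channel dimensions of a residual-network feature map. The record does not give implementation details, so the following is only a minimal PyTorch sketch of a DANet-style dual attention block; the class names, the channel-reduction ratio of 8, and the example feature-map size are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a dual attention block (position + channel attention),
# assuming a DANet-style design applied to a ResNet feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PositionAttention(nn.Module):
    """Captures long-range spatial dependencies of the feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # B x HW x C/8
        k = self.key(x).flatten(2)                      # B x C/8 x HW
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # B x HW x HW
        v = self.value(x).flatten(2)                    # B x C x HW
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Models dependencies along the feature (channel) dimension."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                             # B x C x HW
        energy = torch.bmm(flat, flat.transpose(1, 2))  # B x C x C
        attn = F.softmax(energy.max(dim=-1, keepdim=True).values - energy, dim=-1)
        out = torch.bmm(attn, flat).view(b, c, h, w)
        return self.gamma * out + x


class DualAttentionBlock(nn.Module):
    """Sums the spatial and channel attention branches."""
    def __init__(self, channels: int):
        super().__init__()
        self.pam = PositionAttention(channels)
        self.cam = ChannelAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pam(x) + self.cam(x)


if __name__ == "__main__":
    feat = torch.randn(2, 512, 22, 22)          # e.g. a late ResNet stage feature map
    print(DualAttentionBlock(512)(feat).shape)  # torch.Size([2, 512, 22, 22])
```

In this sketch the block is a drop-in residual refinement: it can be inserted after a backbone stage, and the learned `gamma` weights start at zero so training begins from the unmodified residual features.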