Indexed in:
Abstract:
In recent years, end-to-end autonomous driving has become an emerging research direction in the field of autonomous driving. This approach maps road images captured by the vehicle's camera directly to vehicle control decisions. We propose a spatiotemporal neural network model with a visual attention mechanism that predicts vehicle control decisions in an end-to-end manner. The model combines a CNN and an LSTM, enabling it to extract spatial and temporal features from road image sequences, while the visual attention mechanism helps it focus on important regions of the image. We evaluated the model in the open racing car simulator TORCS, and the experiments show that it predicts driving decisions better than a plain CNN model. In addition, the visual attention mechanism improves the performance of the end-to-end autonomous driving model. © 2020 IEEE.
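The abstract describes soft visual attention over CNN feature maps but gives no equations. Below is a minimal NumPy sketch of one common form of spatial soft attention: each spatial location of a CNN feature map is scored, the scores are normalized with a softmax, and the features are pooled by the resulting weights. All names (`spatial_attention`, the scoring vector `w`) are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(features, w):
    """Soft attention over spatial locations of a CNN feature map.

    features: (L, C) array of L = H*W feature vectors from the CNN.
    w:        (C,) scoring vector (stand-in for a learned parameter).
    Returns (context, alpha): the attention-weighted context vector
    and the attention weights, which sum to 1.
    """
    scores = features @ w        # (L,) relevance score per location
    alpha = softmax(scores)      # (L,) attention weights
    context = alpha @ features   # (C,) weighted sum of feature vectors
    return context, alpha

# Toy example: 4 spatial locations, 3 feature channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 3))
w = rng.standard_normal(3)
ctx, alpha = spatial_attention(feats, w)
```

In a CNN-LSTM pipeline like the one the abstract outlines, such a context vector would typically be computed per frame and fed into the LSTM, letting the temporal model attend to the most relevant image regions at each time step.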
Keywords:
Corresponding author information:
Email address: