Authors:

Mou, Luntian | Zhou, Chao | Zhao, Pengfei | Nakisa, Bahareh | Rastgoo, Mohammad Naim | Jain, Ramesh | Gao, Wen

Indexed in:

EI SCIE

Abstract:

Stress has been identified as one of the major contributing factors in car crashes due to its negative impact on driving performance. There is an urgent need to detect drivers' stress levels in real time with high accuracy so that intervention or navigation measures can be taken in time to mitigate the situation. Existing driver stress detection models mainly rely on traditional machine learning techniques to fuse multimodal data. However, due to the non-linear correlations among modalities, it remains challenging for traditional multimodal fusion methods to handle the real-time influx of complex, high-dimensional multimodal data and to report drivers' stress levels accurately. To solve this issue, this paper proposes a framework for driver stress detection through multimodal fusion using attention-based deep learning techniques. Specifically, an attention-based convolutional neural network (CNN) and long short-term memory (LSTM) model is proposed to fuse non-invasive data, including eye data, vehicle data, and environmental data. The proposed model automatically extracts features from each modality separately and assigns different levels of attention to features from different modalities through a self-attention mechanism. To verify the validity of the proposed method, extensive experiments were carried out on our dataset collected using an advanced driving simulator. Experimental results demonstrate that the proposed method outperforms state-of-the-art models on driver stress detection, with an average accuracy of 95.5%.
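
As a rough illustration of the architecture described in the abstract, the following is a minimal PyTorch sketch of an attention-based CNN-LSTM fusion model: one CNN-LSTM branch per modality (eye, vehicle, environmental), followed by self-attention across the three modality features. Layer sizes, channel counts, and the three-level stress output are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch only; hyperparameters and modality channel counts are assumptions.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Per-modality feature extractor: 1-D CNN followed by an LSTM."""
    def __init__(self, in_channels, hidden_size=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)

    def forward(self, x):                      # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2))       # -> (batch, 32, time // 2)
        out, _ = self.lstm(x.transpose(1, 2))  # -> (batch, time // 2, hidden)
        return out[:, -1]                      # last hidden state summarizes the window

class StressDetector(nn.Module):
    """Fuses eye, vehicle, and environmental branches with self-attention."""
    def __init__(self, channels=(6, 4, 3), hidden_size=64, num_levels=3):
        super().__init__()
        self.branches = nn.ModuleList(ModalityBranch(c, hidden_size) for c in channels)
        self.attn = nn.MultiheadAttention(hidden_size, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_levels)

    def forward(self, eye, vehicle, env):
        # One feature vector per modality, stacked as a length-3 sequence: (batch, 3, hidden)
        feats = torch.stack(
            [branch(x) for branch, x in zip(self.branches, (eye, vehicle, env))], dim=1)
        fused, _ = self.attn(feats, feats, feats)   # self-attention weighs the modalities against each other
        return self.classifier(fused.mean(dim=1))   # logits over the assumed stress levels

# Example with dummy 128-step windows (channel counts are made up for the sketch):
model = StressDetector()
logits = model(torch.randn(8, 128, 6), torch.randn(8, 128, 4), torch.randn(8, 128, 3))
```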

Keywords:

Attention mechanism; Convolutional neural network; Driver stress detection; Eye data; Long short-term memory; Vehicle data

Author Affiliations:

  • [ 1 ] [Mou, Luntian]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 2 ] [Zhou, Chao]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 3 ] [Zhao, Pengfei]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China
  • [ 4 ] [Nakisa, Bahareh]Deakin Univ, Fac Sci Engn & Built Environm, Sch Informat Technol, Geelong, Vic, Australia
  • [ 5 ] [Rastgoo, Mohammad Naim]Queensland Univ Technol, Sch Elect Engn & Comp Sci, Brisbane, Qld, Australia
  • [ 6 ] [Jain, Ramesh]Univ Calif Irvine, Bren Sch Informat & Comp Sci, Inst Future Hlth, Irvine, CA USA
  • [ 7 ] [Gao, Wen]Peking Univ, Sch Elect Engn & Comp Sci, Inst Digital Media, Beijing, Peoples R China

Corresponding Author:

  • [Mou, Luntian]Beijing Univ Technol, Fac Informat Technol, Beijing Key Lab Multimedia & Intelligent Software, Beijing, Peoples R China

Source:

EXPERT SYSTEMS WITH APPLICATIONS

ISSN: 0957-4174

Year: 2021

Volume: 173

JCR Impact Factor (2022): 8.500

ESI Discipline: ENGINEERING

ESI Highly Cited Threshold: 9

Citation counts:

WoS Core Collection: 44

Scopus: 74

ESI Highly Cited Papers listed: 0
