Title:

An Explainable Machine Learning Framework for Intrusion Detection Systems

Authors:

Wang, Maonan (Wang, Maonan.) | Zheng, Kangfeng (Zheng, Kangfeng.) | Yang, Yanqing (Yang, Yanqing.) | Wang, Xiujuan (Wang, Xiujuan.)

Indexed in:

EI SCIE

Abstract:

In recent years, machine learning-based intrusion detection systems (IDSs) have proven to be effective; in particular, deep neural networks improve the detection rates of intrusion detection models. However, as models become more and more complex, it becomes increasingly difficult to obtain explanations for their decisions. At the same time, most work on model interpretation focuses on other fields such as computer vision, natural language processing, and biology. As a result, in practical use, cybersecurity experts can hardly optimize their decisions according to the model's judgments. To solve these issues, this paper proposes a framework to provide explanations for IDSs. The framework uses SHapley Additive exPlanations (SHAP) and combines local and global explanations to improve the interpretability of IDSs. The local explanations give the reasons why the model makes certain decisions on a specific input. The global explanations give the important features extracted from the IDS and present the relationships between feature values and different types of attacks. In addition, the interpretations produced by two different classifiers, a one-vs-all classifier and a multiclass classifier, are compared. The NSL-KDD dataset is used to test the feasibility of the framework. The proposed framework improves the transparency of any IDS and helps cybersecurity staff better understand the IDS's judgments. Furthermore, the different interpretations produced by different kinds of classifiers can also help security experts better design the structure of an IDS. More importantly, this work is unique in the intrusion detection field, presenting the first use of the SHAP method to provide explanations for IDSs.
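As an illustration of the approach described in the abstract, the following is a minimal sketch (not the authors' released code) of how SHAP local and global explanations can be obtained for a one-vs-all intrusion-detection classifier. It assumes the scikit-learn and shap Python packages; the random-forest model and the synthetic data are hypothetical stand-ins for the paper's classifier and the preprocessed NSL-KDD features.

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for preprocessed NSL-KDD features: 300 flows,
    # 20 numeric features, label 1 = "attack", 0 = "normal" (toy one-vs-all setup).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))
    y = (X[:, 0] + X[:, 3] > 1.0).astype(int)

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # KernelExplainer is model-agnostic; it explains the predicted attack
    # probability relative to a small background sample that estimates
    # feature expectations.
    def attack_probability(data):
        return model.predict_proba(data)[:, 1]

    background = shap.sample(X, 50)
    explainer = shap.KernelExplainer(attack_probability, background)

    # Local explanation: per-feature Shapley contributions for a single flow.
    local_shap = explainer.shap_values(X[:1], nsamples=200)
    print("Local SHAP values for flow 0:", local_shap)

    # Global explanation: mean |SHAP| over a batch ranks the most influential features.
    batch_shap = explainer.shap_values(X[:30], nsamples=200)
    global_rank = np.argsort(np.abs(batch_shap).mean(axis=0))[::-1]
    print("Most influential feature indices:", global_rank[:5])

In the paper's multiclass setting, the same procedure would be repeated for each attack class (or applied to the full probability vector), which is essentially what comparing the one-vs-all and multiclass interpretations amounts to.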

Keywords:

Biological system modeling; Computational modeling; Feature extraction; Intrusion detection; Intrusion detection system; Machine learning; Model interpretation; Predictive models; SHapley Additive exPlanations; Shapley value

Author Affiliations:

  • [ 1 ] [Wang, Maonan]Beijing Univ Posts & Telecommun, Sch Cyberspace Secur, Beijing 100876, Peoples R China
  • [ 2 ] [Zheng, Kangfeng]Beijing Univ Posts & Telecommun, Sch Cyberspace Secur, Beijing 100876, Peoples R China
  • [ 3 ] [Yang, Yanqing]Beijing Univ Posts & Telecommun, Sch Cyberspace Secur, Beijing 100876, Peoples R China
  • [ 4 ] [Yang, Yanqing]Xinjiang Univ, Coll Informat Sci & Engn, Urumqi 830046, Peoples R China
  • [ 5 ] [Wang, Xiujuan]Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China

Corresponding Author:

  • [Zheng, Kangfeng]Beijing Univ Posts & Telecommun, Sch Cyberspace Secur, Beijing 100876, Peoples R China


Source:

IEEE ACCESS

ISSN: 2169-3536

Year: 2020

Volume: 8

Pages: 73127-73141

Impact Factor: 3.900 (JCR@2022)

JCR Quartile: Q2

Citations:

Web of Science Core Collection citations: 117

Scopus citations: 179

ESI Highly Cited Paper listing: 0

