Abstract:
Deep neural networks (DNNs) have been increasingly adopted in various applications. Systematic verification and validation are essential techniques to guarantee the quality of such systems. Testing is one of the feasible solutions for system validation. However, testing DNNs usually requires a large number of test cases and incurs a high labeling cost. Meanwhile, the test cases may contain various noisy data, which further increases the burden of testing. In this paper, we put forward a test case prioritization method for DNN-based classifiers, which assigns high priorities to those cases that can lead to problematic classifications. The test cases are prioritized w.r.t. a pattern extracted from the neuron outputs: the difference between the pattern acquired from the training set and the neuron outputs observed for a given input determines the priority of that input. For a trained model, the method consists of two steps. First, we collect neuron output values over the training set and construct a neuron-based pattern for every class of training samples. Second, the metrics are computed by comparing the neuron output values of the DNN model on an input with the selected pattern. We carry out experiments on three popular datasets with various neural network structures. The extensive experimental results demonstrate that the prioritization method outperforms most existing techniques in both efficiency and generalizability. (C) 2021 Elsevier B.V. All rights reserved.
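The abstract only describes the two-step approach at a high level. The sketch below is a minimal, hypothetical illustration of the general idea (a per-class pattern built from training-set neuron outputs, plus a deviation score used to rank test cases), not the paper's exact pattern construction or metric. All names (build_class_patterns, priority_score, prioritize) and the choice of a per-neuron mean/standard-deviation pattern are illustrative assumptions; the activation arrays are assumed to be neuron outputs already extracted from the trained model.

```python
import numpy as np

def build_class_patterns(train_activations, train_labels):
    """Step 1 (sketch): for each class, summarize neuron outputs over the
    training set as a per-neuron mean and standard deviation."""
    patterns = {}
    for cls in np.unique(train_labels):
        acts = train_activations[train_labels == cls]
        patterns[cls] = (acts.mean(axis=0), acts.std(axis=0) + 1e-8)
    return patterns

def priority_score(activation, predicted_class, patterns):
    """Step 2 (sketch): mean absolute z-distance between an input's neuron
    outputs and the pattern of the class the model predicted for it.
    Larger deviation -> more likely problematic -> higher priority."""
    mean, std = patterns[predicted_class]
    return float(np.mean(np.abs(activation - mean) / std))

def prioritize(test_activations, predicted_classes, patterns):
    """Return test-case indices sorted from highest to lowest priority."""
    scores = [priority_score(a, c, patterns)
              for a, c in zip(test_activations, predicted_classes)]
    return np.argsort(scores)[::-1]

# Usage with synthetic data (100 training samples, 32 monitored neurons, 3 classes):
rng = np.random.default_rng(0)
train_acts = rng.normal(size=(100, 32))
train_lbls = rng.integers(0, 3, size=100)
test_acts = rng.normal(size=(20, 32))
test_pred = rng.integers(0, 3, size=20)
patterns = build_class_patterns(train_acts, train_lbls)
order = prioritize(test_acts, test_pred, patterns)   # indices to label first
```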
Source: SCIENCE OF COMPUTER PROGRAMMING
ISSN: 0167-6423
Year: 2021
Volume: 215
1.300 (JCR@2022)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 87
JCR Journal Grade: 4
Cited Count:
WoS CC Cited Count: 11
SCOPUS Cited Count: 18
ESI Highly Cited Papers on the List: 0