Abstract:
Binary neural networks (BNNs) are promising for resource-constrained devices because they reduce memory consumption and accelerate inference effectively. However, there is still room to improve their performance. Prior studies attribute the performance degradation of BNNs to limited representation ability and gradient mismatch. In this paper, we find that it also results from the mandatory representation of small full-precision auxiliary weights as large values. To tackle this issue, we propose an approach dubbed Diluted Binary Neural Network (DBNN). Besides effectively avoiding mandatory representation, the proposed DBNN also alleviates the sign-flip problem to a large extent. For activations, we jointly minimize quantization error and maximize information entropy to develop the binarization scheme. Compared with existing sparsity-binarization approaches, DBNN trains the network from scratch without additional procedures and achieves greater sparsity. Experiments on several datasets with various networks demonstrate the superiority of our approach. (c) 2023 Elsevier Ltd. All rights reserved.
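The abstract states that activations are binarized by jointly minimizing quantization error and maximizing information entropy. The sketch below illustrates that general idea only; it is not the authors' DBNN implementation. The per-channel median shift and the scale alpha are assumptions made for illustration: centering on the median balances the +1/-1 outputs (pushing the entropy of the binary code toward its maximum), and alpha = E[|x|] is the standard L2-optimal scale for sign quantization.

import torch

def binarize_activations(x: torch.Tensor) -> torch.Tensor:
    # x: (N, C, H, W) full-precision activations.
    # Center on the per-channel median so +1/-1 outputs are roughly balanced,
    # which drives the entropy of the binary code toward its maximum.
    shift = x.flatten(2).median(dim=-1).values.mean(dim=0).view(1, -1, 1, 1)
    centered = x - shift
    # Per-channel alpha = E[|x - shift|] minimizes the quantization error
    # || centered - alpha * sign(centered) ||^2.
    alpha = centered.abs().flatten(2).mean(dim=-1).mean(dim=0).view(1, -1, 1, 1)
    return alpha * torch.sign(centered)

# Usage example (shapes only; random data):
x = torch.randn(4, 8, 16, 16)
print(binarize_activations(x).shape)  # torch.Size([4, 8, 16, 16])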
Source: PATTERN RECOGNITION
ISSN: 0031-3203
Year: 2023
Volume: 140
Impact Factor: 8.000 (JCR@2022)
ESI Discipline: ENGINEERING
ESI HC Threshold: 19
Cited Count:
WoS CC Cited Count: 4
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0