
Authors:

Lin, Yuhan | Lei, Minglong | Niu, Lingfeng

Indexed in:

EI

Abstract:

Deep neural networks (DNNs) have achieved great success in many real-world applications, but they also have drawbacks such as considerable storage requirements, high computational power consumption, and latency during training and inference, which make it impractical to deploy state-of-the-art models on embedded systems and portable devices. This has created a demand for compressing DNNs. In this paper, we focus on quantized neural networks, one scheme for compressing DNNs. We first introduce some baseline works on quantized neural networks and then review the optimization methods used to quantize neural networks. From our perspective, these methods fall into two categories: minimizing the quantization error and minimizing the loss function. A detailed introduction to each category follows the baseline works, together with comments on each category and on individual methods. Finally, we discuss some possible directions for this area and conclude. © 2019 IEEE.
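As a concrete illustration of the "minimizing quantization error" category named in the abstract, the sketch below quantizes a weight tensor to the binary form αB with B ∈ {-1, +1}: the closed-form minimizer of ||W - αB||² is B = sign(W) and α = mean(|W|), the scaling used in classic binarization baselines. This is a minimal NumPy sketch with an illustrative function name, not code taken from the paper.

```python
import numpy as np

def binarize_weights(W):
    """Binary quantization W ≈ alpha * B minimizing ||W - alpha * B||_F^2.

    With B restricted to {-1, +1}, the optimum is B = sign(W) and
    alpha = mean(|W|). Illustrative helper, not from the reviewed paper.
    """
    B = np.where(W >= 0, 1.0, -1.0)      # sign(W), ties mapped to +1
    alpha = np.abs(W).mean()             # optimal scale for this B
    return alpha, B

# Usage: quantize a random weight matrix and report the residual error.
W = np.random.randn(64, 64).astype(np.float32)
alpha, B = binarize_weights(W)
print("alpha =", round(float(alpha), 4),
      "quantization error =", round(float(np.linalg.norm(W - alpha * B)), 4))
```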

Keywords:

Data mining; Deep neural networks; Digital storage; Embedded systems; Neural networks

Author affiliations:

  • [ 1 ] [Lin, Yuhan]School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China
  • [ 2 ] [Lei, Minglong]Faculty of Information Technology, Beijing University of Technology, Beijing, China
  • [ 3 ] [Niu, Lingfeng]School of Economics and Management, University of Chinese Academy of Sciences, Beijing, China

Source:

ISSN: 2375-9232

Year: 2019

Volume: 2019-November

Pages: 385-390

Language: English

Citation counts:

Web of Science Core Collection: 0

Scopus: 2

ESI Highly Cited Papers: 0

