Indexed in:
Abstract:
In this paper, an FPGA-based floating-point multiply-accumulate operator is designed for neural network computation. A custom 32-bit floating-point data format is adopted, reducing the amount of computation by restructuring the data layout and thereby optimizing the operator's performance. Finally, FPGA simulation results are presented to verify the correctness of the design. Compared with a conventional implementation using the IEEE-standard 32-bit floating-point format, the design saves hardware resources. © 2018 IEEE.
Keywords:
Corresponding author:
Email address:
Source:
Year: 2018
Pages: 282-285
Language: English
Affiliated department: