Summary: | Master's === National Chung Cheng University === Institute of Computer Science and Information Engineering === 106 === Machine Learning (ML) techniques are widely deployed on edge devices in the Internet of Things (IoT). Executing ML workloads on an edge device with sufficient energy efficiency requires specialized hardware built from efficient digital circuits. A timing sensor can monitor the internal operation of a chip and, combined with dynamic voltage scaling, compensate for PVTA (process, voltage, temperature, and aging) variations. This thesis implements the design of "A 28nm SoC with a 1.2GHz 568nJ/Prediction Sparse Deep-Neural-Network Engine with >0.1 Timing Error Rate Tolerance for IoT Applications," published by Harvard University, on a Xilinx 28nm FPGA (Artix-7); that design uses a deep neural network (DNN) as its machine learning algorithm. For the hardware implementation of the DNN architecture, this thesis adopts fixed-point arithmetic together with a normalization method that avoids overflow of computed results and raises the utilization of the most significant bits; this prevents contention between the integer and fraction fields across different DNN layers and saves 3 bits of word length. We used the timing sensor to measure timing violations and the resulting DNN accuracy under dynamic voltage scaling on the FPGA, and found that timing violations could not be corrected at low voltage. The Harvard paper does not discuss the overhead of enlarging the timing sensor's detection window; this thesis simulates that overhead in a 28nm process and shows that enlarging the detection window may not yield a net benefit in power consumption.
|
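The per-layer normalization idea in the abstract can be illustrated with a small sketch: choose the binary point for each layer so that the largest magnitude just fits, which keeps the most significant bits in use and prevents overflow. The function name, the 8-bit word length, and the exact rounding policy below are illustrative assumptions, not the thesis's actual RTL implementation.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8):
    """Quantize a tensor to signed fixed-point with a per-layer binary point.

    Hypothetical sketch: the number of integer bits is derived from the
    layer's maximum absolute value (normalization), so the MSBs are fully
    utilized and no value overflows. Returns the dequantized tensor and
    the number of fraction bits chosen.
    """
    max_abs = float(np.max(np.abs(x)))
    # Integer bits needed to cover max_abs (sign bit handled separately).
    int_bits = max(0, int(np.ceil(np.log2(max_abs + 1e-12))))
    frac_bits = total_bits - 1 - int_bits  # 1 bit reserved for the sign
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    # clip() saturates the single edge case where rounding the exact
    # maximum would exceed the representable range by one LSB.
    q = np.clip(np.round(x * scale), lo, hi)
    return q / scale, frac_bits

# Example: an 8-bit layer whose largest value is 3.2 gets 2 integer bits,
# leaving 5 fraction bits, so quantization error stays within one LSB.
layer = np.array([0.5, -1.5, 3.2])
deq, fb = quantize_fixed_point(layer, total_bits=8)
```

Per-layer scaling like this is what lets different DNN layers share one narrow word length without fighting over integer versus fraction bits.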