TY - JOUR
T1 - Low-VDD Operation of SRAM Synaptic Array for Implementing Ternary Neural Network
AU - Sun, Xiaoyu
AU - Liu, Rui
AU - Chen, Yi Ju
AU - Chiu, Hsiao Yun
AU - Chen, Wei Hao
AU - Chang, Meng Fan
AU - Yu, Shimeng
N1 - Funding Information: Manuscript received March 12, 2017; revised May 30, 2017; accepted July 9, 2017. Date of publication July 28, 2017; date of current version September 25, 2017. This work was supported by the National Science Foundation under Grant NSF-CCF-1552687. (Corresponding author: Xiaoyu Sun.) X. Sun, R. Liu, and S. Yu are with the School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287 USA (e-mail: [email protected]; [email protected]). Publisher Copyright: © 2017 IEEE.
PY - 2017/10
Y1 - 2017/10
N2 - For Internet of Things (IoT) edge devices, it is very attractive to have local sensemaking capability instead of sending all the data back to the cloud for information processing. For image pattern recognition, neuro-inspired machine learning algorithms have proven enormously powerful. To implement learning algorithms on-chip for IoT edge devices effectively, on-chip synaptic memory architectures have been proposed to realize key operations such as the weighted sum or matrix-vector multiplication. In this paper, we propose a low-power design of a static random access memory (SRAM) synaptic array for implementing a low-precision ternary neural network. We experimentally demonstrate that the supply voltage (VDD) of the SRAM array can be aggressively reduced to a level where the SRAM cell becomes susceptible to bit failures. Testing results from 65-nm SRAM chips indicate that VDD can be reduced from the nominal 1 V to 0.55 V (or 0.5 V) with a bit error rate of ∼0.23% (or ∼1.56%), which introduces only ∼0.08% (or ∼1.68%) degradation in the classification accuracy. As a result, the power consumption can be reduced by more than 8× (or 10×).
AB - For Internet of Things (IoT) edge devices, it is very attractive to have local sensemaking capability instead of sending all the data back to the cloud for information processing. For image pattern recognition, neuro-inspired machine learning algorithms have proven enormously powerful. To implement learning algorithms on-chip for IoT edge devices effectively, on-chip synaptic memory architectures have been proposed to realize key operations such as the weighted sum or matrix-vector multiplication. In this paper, we propose a low-power design of a static random access memory (SRAM) synaptic array for implementing a low-precision ternary neural network. We experimentally demonstrate that the supply voltage (VDD) of the SRAM array can be aggressively reduced to a level where the SRAM cell becomes susceptible to bit failures. Testing results from 65-nm SRAM chips indicate that VDD can be reduced from the nominal 1 V to 0.55 V (or 0.5 V) with a bit error rate of ∼0.23% (or ∼1.56%), which introduces only ∼0.08% (or ∼1.68%) degradation in the classification accuracy. As a result, the power consumption can be reduced by more than 8× (or 10×).
KW - Binary synapses
KW - classification
KW - low power
KW - neural network
KW - static random access memory (SRAM)
UR - https://www.scopus.com/pages/publications/85028936782
U2 - 10.1109/TVLSI.2017.2727528
DO - 10.1109/TVLSI.2017.2727528
M3 - Article
SN - 1063-8210
VL - 25
SP - 2962
EP - 2965
JO - IEEE Transactions on Very Large Scale Integration (VLSI) Systems
JF - IEEE Transactions on Very Large Scale Integration (VLSI) Systems
IS - 10
M1 - 7995135
ER -