IMC: Energy-Efficient In-Memory Convolver For Accelerating Binarized Deep Neural Network
Keywords
Deep Convolutional Neural Network; In-memory computing; Spin Hall effect
Abstract
Deep Convolutional Neural Networks (CNNs) are widely employed in modern AI systems due to their unprecedented accuracy in object recognition and detection. However, the main bottleneck limiting the performance of large-scale deep CNN hardware implementations has been shown to be the massive data communication between processing units and off-chip memory. In this paper, we pave the way toward a novel concept of an in-memory convolver (IMC) that implements the dominant convolution computation within main memory, based on our proposed Spin Orbit Torque Magnetic Random Access Memory (SOT-MRAM) array architecture, to greatly reduce data communication and thus accelerate Binary CNNs (BCNNs). The proposed architecture can simultaneously work as a non-volatile memory and a reconfigurable in-memory logic (AND, OR) without the add-on logic circuits to the memory chip required in conventional logic-in-memory designs. The computed logic output can also be read out simply, like a normal MRAM bit-cell, using the shared memory peripheral circuits. We employ this intrinsic in-memory processing architecture to efficiently process data within memory and thereby greatly reduce the power-hungry, long-distance data communication of state-of-the-art BCNN hardware. The hardware mapping results show that IMC can process the Binarized AlexNet on the ImageNet data-set favorably at 134.27 J/img, achieving ∼16× lower energy and 9× smaller area compared to a RRAM-based BCNN. Furthermore, a 21.5% reduction in data movement, in terms of main memory accesses, is observed compared to a CPU/DRAM baseline.
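As a software illustration of the abstract's core idea: when activations and weights are binarized to {0, 1}, each convolution window reduces to a bitwise AND of two packed bit-vectors followed by a population count, which is exactly the kind of bulk bitwise logic the proposed SOT-MRAM array performs in place. The sketch below is a minimal, hypothetical emulation of that arithmetic; it is not the paper's hardware design or its exact dataflow.

```python
def popcount(x: int) -> int:
    """Number of set bits in x (what the peripheral circuits would accumulate)."""
    return bin(x).count("1")

def binary_conv(window_bits: int, kernel_bits: int) -> int:
    """Dot product of two {0,1} bit-vectors packed into ints:
    bitwise AND (the in-memory logic step) followed by a popcount."""
    return popcount(window_bits & kernel_bits)

# Example: a 3x3 window and kernel, each packed into 9 bits.
w = 0b101101110
k = 0b100101010
print(binary_conv(w, k))  # prints 4
```

In the IMC architecture, the AND is computed inside the memory array itself rather than after reading both operands out, which is what removes the processor-to-memory traffic the abstract identifies as the bottleneck.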
Publication Date
7-17-2017
Publication Title
ACM International Conference Proceeding Series
Volume
2017-July
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1145/3183584.3183613
Copyright Status
Unknown
Scopus ID
85047014455 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85047014455
STARS Citation
Angizi, Shaahin and Fan, Deliang, "IMC: Energy-Efficient In-Memory Convolver For Accelerating Binarized Deep Neural Network" (2017). Scopus Export 2015-2019. 6654.
https://stars.library.ucf.edu/scopus2015/6654