Energy Efficient In-Memory Binary Deep Neural Network Accelerator With Dual-Mode SOT-MRAM

Keywords

Convolutional Neural Network; In-memory computing; SOT-MRAM

Abstract

In this paper, we explore the potential of leveraging a spin-based in-memory computing platform as an accelerator for Binary Convolutional Neural Networks (BCNNs). Such a platform can implement the dominant convolution computation using the presented Spin Orbit Torque Magnetic Random Access Memory (SOT-MRAM) array. The proposed array architecture can simultaneously work as a non-volatile memory and as reconfigurable in-memory logic (AND, OR) without adding logic circuits to the memory chip, as is done in conventional logic-in-memory designs. The computed logic output can also be read out like a normal MRAM bit-cell using the shared memory peripheral circuits. We employ this intrinsic in-memory computing architecture to efficiently process data within memory, greatly reducing the power-hungry, long-distance data communication that limits state-of-the-art BCNN hardware.
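As a concrete illustration of the abstract's claim that the dominant BCNN convolution maps onto the array's in-memory AND operation, the following Python sketch shows how a binarized dot product reduces to a bitwise AND followed by a popcount. The 0/1 bit encoding, operand packing, and function names here are illustrative assumptions, not the paper's actual bit-cell mapping or peripheral design.

import numpy as np

def binary_conv_bitwise(activation_bits, weight_bits):
    """Sketch of a binary convolution partial sum computed with bitwise ops,
    the kind of computation an AND-capable SOT-MRAM array could evaluate.

    activation_bits, weight_bits: 1-D uint64 arrays holding packed 0/1 bits
    (illustrative packing; the real bit-cell layout is design-specific).
    """
    # Bitwise AND between packed activation and weight words
    # (in the accelerator this would happen inside the memory array).
    and_result = activation_bits & weight_bits
    # Popcount accumulates the matching bits into a partial convolution sum.
    return sum(bin(int(word)).count("1") for word in and_result)

# Usage: two packed operand words per tensor.
a = np.array([0b1011, 0b0110], dtype=np.uint64)
w = np.array([0b1101, 0b0111], dtype=np.uint64)
print(binary_conv_bitwise(a, w))  # 2 + 2 = 4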

Publication Date

11-22-2017

Publication Title

Proceedings - 35th IEEE International Conference on Computer Design, ICCD 2017

Number of Pages

609-612

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/ICCD.2017.107

Scopus ID

85041692629 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85041692629

