ReRAM‐based Accelerator for Deep Learning

Bing Li1, Linghao Song1, Fan Chen1, Xuehai Qian2, Yiran Chen1 and Hai (Helen) Li1
1Department of Electrical and Computer Engineering, Duke University, Durham, NC, United States
bing.li.ece@duke.edu, linghao.song@duke.edu, fan.chene@duke.edu, yiran.chen@duke.edu, hai.li@duke.edu
2Department of Computer Science, University of Southern California, Los Angeles, CA, United States
xuehai.qian@usc.edu

ABSTRACT


Big data computing applications such as deep learning and graph analytics usually incur a large amount of data movement. Deploying such applications on a conventional von Neumann architecture, which separates the processing units from the memory components, likely leads to a performance bottleneck due to the limited memory bandwidth. A common approach is to develop architecture and memory co-design methodologies to overcome this challenge. Our research follows the same strategy, leveraging resistive random access memory (ReRAM) to further enhance performance and energy efficiency. Specifically, we employ the general principles behind processing-in-memory to design efficient ReRAM-based accelerators that support both testing and training operations. Related circuit and architecture optimizations are also discussed.
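As a rough illustration of the processing-in-memory principle that such accelerators build on (not the specific designs presented in this work), the sketch below models an idealized ReRAM crossbar performing an analog matrix-vector multiplication: weights are stored as cell conductances, input activations are applied as wordline voltages, and each bitline current sums to the corresponding dot product by Ohm's law and Kirchhoff's current law. The conductance range G_MIN/G_MAX, the weights_to_conductances mapping, and the crossbar_mvm helper are illustrative assumptions.

```python
# Minimal sketch of in-memory matrix-vector multiplication on an ideal ReRAM crossbar.
# Assumed, illustrative parameters and helpers; not the accelerator described in this work.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed ReRAM cell conductance range, in siemens


def weights_to_conductances(W):
    """Linearly map weight magnitudes onto the available conductance range."""
    w_max = max(np.max(np.abs(W)), 1e-12)
    return G_MIN + (np.abs(W) / w_max) * (G_MAX - G_MIN)


def crossbar_mvm(G, v_in):
    """Ideal crossbar read: each bitline current is I_j = sum_i v_i * G_ij."""
    return v_in @ G   # shape: (num_bitlines,)


# Example: a 4x3 weight matrix computed "in memory" in one analog read step.
rng = np.random.default_rng(0)
W = rng.random((4, 3))            # non-negative weights for simplicity
G = weights_to_conductances(W)    # program weights as conductances
v = rng.random(4) * 0.2           # read voltages applied on the wordlines
print(crossbar_mvm(G, v))         # bitline currents, proportional to W^T v
```

Real crossbar designs additionally require DACs and ADCs at the array periphery and typically represent signed weights with differential cell pairs; those details are omitted in this sketch.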


