REGENT: A Heterogeneous ReRAM/GPU-based Architecture Enabled by NoC for Training CNNs

Biresh Kumar Joardar1,a, Bing Li2,d, Janardhan Rao Doppa1,b, Hai Li2,e, Partha Pratim Pande1,c and Krishnendu Chakrabarty2,f
1School of EECS, Washington State University, Pullman, WA 99164, U.S.A.
a biresh.joardar@wsu.edu
b jana.doppa@wsu.edu
c pande@wsu.edu
2Department of ECE, Duke University, Durham, NC, USA.
d bing.li.ece@duke.edu
e hai.li@duke.edu
f krishnendu.chakrabarty@duke.edu

ABSTRACT


The growing popularity of Convolutional Neural Networks (CNNs) has led to the search for efficient computational platforms to enable these algorithms. Resistive random-access memory (ReRAM)-based architectures offer a promising alternative to commonly used GPU-based platforms for CNN training. However, backpropagation in CNNs is sensitive to the limited precision of ReRAMs. As a result, training CNNs on ReRAMs degrades the final accuracy of the learned model. In this work, we propose REGENT, a heterogeneous architecture that combines ReRAM arrays with GPU cores and exploits the benefits of 3D integration along with a high-throughput yet energy-efficient Network-on-Chip (NoC) for training CNNs. We also propose a bin-packing-based framework that maps CNN layers to the computing elements and then optimizes the placement of these elements to meet the targeted design objectives. Experimental evaluations indicate that REGENT improves full-system energy-delay product (EDP) by 55.7% on average compared to conventional GPU-only platforms for training CNNs.
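To give a concrete flavor of a bin-packing-style mapping of CNN layers onto heterogeneous tiles, the sketch below shows a generic first-fit-decreasing heuristic in Python. It is an illustrative assumption only, not the framework proposed in this paper: the Layer and Tile classes, the workload and capacity values, and the rule of steering precision-sensitive layers to GPU tiles are all hypothetical.

from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    workload: float            # relative compute/storage demand (hypothetical units)
    precision_sensitive: bool  # e.g., layers whose gradients need full precision

@dataclass
class Tile:
    kind: str                  # "GPU" or "ReRAM"
    capacity: float
    used: float = 0.0
    layers: list = field(default_factory=list)

    def fits(self, layer: Layer) -> bool:
        return self.used + layer.workload <= self.capacity

    def place(self, layer: Layer) -> None:
        self.used += layer.workload
        self.layers.append(layer.name)

def map_layers(layers: list[Layer], tiles: list[Tile]) -> dict[str, str]:
    """Greedy first-fit-decreasing mapping: precision-sensitive layers prefer
    GPU tiles, the rest prefer ReRAM tiles; overflow spills to the other kind."""
    mapping = {}
    for layer in sorted(layers, key=lambda l: l.workload, reverse=True):
        preferred = "GPU" if layer.precision_sensitive else "ReRAM"
        # Try tiles of the preferred kind first, least-loaded first.
        candidates = sorted(tiles, key=lambda t: (t.kind != preferred, t.used))
        for tile in candidates:
            if tile.fits(layer):
                tile.place(layer)
                mapping[layer.name] = tile.kind
                break
        else:
            raise RuntimeError(f"No tile can host layer {layer.name}")
    return mapping

if __name__ == "__main__":
    layers = [Layer("conv1", 2.0, False), Layer("conv2", 3.5, False),
              Layer("fc1", 1.5, True), Layer("fc2", 0.5, True)]
    tiles = [Tile("ReRAM", 4.0), Tile("ReRAM", 4.0), Tile("GPU", 4.0)]
    print(map_layers(layers, tiles))

In this toy run, the large convolutional layers land on ReRAM tiles while the fully connected layers, flagged as precision-sensitive, land on the GPU tile; the paper's actual framework additionally optimizes the physical placement of the tiles on the NoC, which this sketch does not model.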
