GenPIM: Generalized Processing In‐Memory to Accelerate Data Intensive Applications

Mohsen Imani, Saransh Gupta and Tajana Rosing
CSE, UC San Diego, La Jolla, CA, USA
moimani@ucsd.edu
sgupta@ucsd.edu
tajana@ucsd.edu

ABSTRACT


Big data has become a serious problem as data volumes have skyrocketed over the past few years. Storage and CPU technologies are overwhelmed by the amount of data they have to handle. Traditional computer architectures show poor performance when processing such large volumes of data. Processing in-memory (PIM) is a promising technique to address the data movement issue by processing data locally inside memory. However, there are two main issues with stand-alone PIM designs: (i) PIM is not always computationally faster than CMOS logic, and (ii) PIM cannot process all the operations required by many applications. Thus, not many applications can benefit from PIM. To generalize the use of PIM, we designed GenPIM, a general processing in-memory architecture consisting of a conventional processor as well as PIM accelerators. GenPIM supports basic PIM functionalities in specialized non-volatile memory, including bitwise operations, search, addition and multiplication. For each application, GenPIM identifies the parts that can use PIM operations, and processes the remaining non-PIM operations, as well as the parts that are not data intensive, on general purpose cores. GenPIM also enables configurable PIM approximation by relaxing in-memory computation. We evaluate the efficiency of the proposed design on two learning applications. Our experimental evaluation shows that our design achieves a 10.9× improvement in energy efficiency and a 6.4× speedup compared to processing data in conventional cores.
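The partitioning idea described above, offloading only the operations that the in-memory blocks support (and that are data intensive) while leaving everything else to the conventional cores, can be illustrated with a minimal, hypothetical dispatch sketch. This is not the authors' implementation; the set name PIM_SUPPORTED_OPS and the functions run_on_pim, run_on_cpu and dispatch are assumptions made for illustration only.

```python
# Hypothetical sketch of GenPIM-style operation dispatch (illustrative, not the paper's code).
PIM_SUPPORTED_OPS = {"and", "or", "xor", "search", "add", "mul"}

def _compute(op, a, b):
    # Functional model used only to keep this sketch runnable.
    return {
        "and": a & b, "or": a | b, "xor": a ^ b,
        "add": a + b, "mul": a * b,
        "search": int(a == b),
    }[op]

def run_on_pim(op, a, b):
    # Stand-in for issuing a supported operation to the non-volatile memory accelerator.
    return _compute(op, a, b)

def run_on_cpu(op, a, b):
    # Stand-in for executing the operation on a general-purpose core.
    return _compute(op, a, b)

def dispatch(op, a, b, data_intensive=True):
    # Offload only supported, data-intensive operations; fall back to the CPU
    # for everything else, mirroring GenPIM's partitioning of an application.
    if op in PIM_SUPPORTED_OPS and data_intensive:
        return run_on_pim(op, a, b)
    return run_on_cpu(op, a, b)

print(dispatch("xor", 0b1010, 0b0110))  # -> 12, handled by the (simulated) PIM path
```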


