5.2 Hot Topic: In-memory Computing: Status and Trends


Date: Wednesday 16 March 2016
Time: 08:30 - 10:00
Location / Room: Konferenz 6

Organiser:
Pierre-Emmanuel Gaillardon, University of Utah, Salt Lake City, US

Chair:
Ian O'Connor, Institut des Nanotechnologies de Lyon, Ecully, FR

Co-Chair:
Michael Niemier, University of Notre Dame, South Bend, US

With the recent evolution of nanometer transistor technologies, power consumption has emerged as the most critical limitation. Within advanced processors and computing architectures, processor-memory communication accounts for a significant share of the energy requirement. While alternative design approaches, such as optimized accelerators or advanced power management techniques, are successfully employed in contemporary designs, the trend keeps worsening due to the ever-increasing gap between on-chip and off-chip memory data rates. This trend, known as the von Neumann bottleneck, not only limits system performance but nowadays also limits energy scaling. The quest for greater energy efficiency requires solutions that overcome the von Neumann bottleneck by tightly intertwining computing with memory. In this hot-topic session, we elaborate on in-memory computing by identifying its current applications and its promise in light of emerging technologies. In-memory computing is considered here in the general sense of computing information locally within large data storage. The session comprises four talks. The first covers current industrial applications of in-memory computing for energy-efficient acceleration. The three other talks explore the opportunities of in-memory systems realized with emerging technologies. In particular, we will see how memristor theory can benefit Cellular Neural Networks (CNNs). We will also dig into the recently introduced concept of memcomputing, which promises to speed up the execution of NP-complete problems. Finally, we will present a novel computer architecture that relies on resistive memory elements to both compute and store information.

Time   Label   Presentation Title / Authors
08:30   5.2.1   SOFTWARE AND SYSTEM CO-OPTIMIZATION IN THE ERA OF HETEROGENEOUS COMPUTING
Speaker and Author:
Ruchir Puri, IBM, US
Abstract
The escalating costs of semiconductor technology and its lagging performance relative to historic trends are motivating acceleration and specialization as more impactful means of increasing system value. Targeted specialization is increasingly pursued as an important way to achieve dramatic improvements in workload acceleration. This requires a broad understanding of workloads, system structures, and algorithms to determine what to accelerate or specialize, and how: in software, in hardware, or in both. The many resulting choices necessitate co-optimization of software and hardware. In this talk, we will focus on an application-driven approach to software and system co-optimization, based on inventing new software algorithms that have a strong affinity to hardware acceleration. A high-level design methodology needed to enable targeted specialization in hardware will also be described.
08:52   5.2.2   FADING MEMORY EFFECTS IN A MEMRISTOR FOR CELLULAR NANOSCALE NETWORK APPLICATIONS
Speaker:
Alon Ascoli, Technische Universität Dresden, DE
Authors:
Alon Ascoli1, Ronald Tetzlaff1, Leon O. Chua2, John Paul Strachan3 and R. Stanley Williams3
1Technische Universität Dresden, DE; 2University of California, Berkeley, US; 3Hewlett Packard Labs, US
Abstract
CNN-based analogic cellular computing is a unified paradigm for universal spatio-temporal computation, with applications in a large number of different fields of research. By endowing CNNs with local memory, control, and communication circuitry, many different hardware architectures with stored programmability have been realized, showing enormous computing power: trillions of operations per second may be executed on a single chip. The complex spatio-temporal dynamics emerging in certain CNNs may lead to the development of more efficient information processing methods than conventional strategies. Memristors exhibit a rich variety of nonlinear behaviours, occupy a negligible amount of integrated-circuit area, consume very little power, are suited to massively parallel data flow, and may combine data storage with signal processing. As a result, the use of memristors in future CNN-based computing structures may improve and/or extend the functionalities of state-of-the-art hardware architectures. This contribution provides a detailed analysis of the system-theoretic model of a tantalum oxide memristor, in view of its potential adoption for the implementation of synaptic operators in CNN architectures.
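The fading-memory property mentioned in the title can be illustrated with a toy model: a device whose state both drifts with the applied voltage and decays toward a rest value, so that trajectories started from different initial states converge under the same input. The sketch below is a minimal, hypothetical linear drift-plus-decay model chosen for illustration only; it is not the tantalum oxide device model analyzed in the paper, and the parameter values are arbitrary.

```python
import math

def simulate(w0, dt=1e-4, steps=20000, a=50.0, b=5.0):
    """Euler-integrate a toy memristor state under a 10 Hz sinusoidal drive.

    State w in [0, 1]; dw/dt = a*v - b*(w - 0.5). The decay term -b*(w - 0.5)
    makes the influence of the initial condition w0 vanish as exp(-b*t):
    this is the 'fading memory' effect.
    """
    w = w0
    for n in range(steps):
        v = 0.5 * math.sin(2.0 * math.pi * 10.0 * n * dt)  # input voltage
        w += dt * (a * v - b * (w - 0.5))                  # drift + decay
        w = min(1.0, max(0.0, w))                          # keep state bounded
    return w

# Two very different initial states end up essentially identical
# after 2 simulated seconds of the same drive.
w_low, w_high = simulate(0.05), simulate(0.95)
```

Since the difference between two trajectories decays as exp(-b*t), after 2 s with b = 5 it is reduced by a factor of roughly e^-10, far below any practical readout resolution.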

09:14   5.2.3   DIGITAL MEMCOMPUTING MACHINES
Speaker:
Fabio L. Traversa, University of California San Diego, US
Authors:
Massimiliano Di Ventra and Fabio L. Traversa, UC San Diego, US
Abstract
In this contribution we will discuss the digital, hence scalable, version of memcomputing machines. These are non-Turing machines that use memory to both process and store information at the same physical location. We will introduce their mathematical definition and, as an example, present their implementation of an inverse three-bit sum gate using self-organizable logic gates, namely gates that organize dynamically to satisfy their logical propositions.
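The logical relation behind the "inverse three-bit sum gate" can be stated in conventional software terms: instead of computing the sum of three bits, fix the desired sum and recover all consistent input assignments. The sketch below is only a brute-force analogy of that relation; in a memcomputing machine the self-organizable gate finds such assignments through its own dynamics, not by enumeration, and the function name here is hypothetical.

```python
from itertools import product

def inverse_three_bit_sum(total):
    """Return all (a, b, c) bit triples whose sum equals `total` (0..3)."""
    return [bits for bits in product((0, 1), repeat=3) if sum(bits) == total]

# e.g. a target sum of 2 has exactly three satisfying assignments
print(inverse_three_bit_sum(2))  # → [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

Running the gate "backwards" like this is what makes the approach interesting for NP-complete problems, where checking a solution is easy but searching for one is hard.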

09:36   5.2.4   THE PROGRAMMABLE LOGIC-IN-MEMORY (PLIM) COMPUTER
Speaker:
Pierre-Emmanuel Gaillardon, University of Utah, US
Authors:
Pierre-Emmanuel Gaillardon1, Luca Amaru2, Anne Siemon3, Eike Linn3, Rainer Waser3, Anupam Chattopadhyay4 and Giovanni De Micheli2
1University of Utah, US; 2École Polytechnique Fédérale de Lausanne (EPFL), CH; 3RWTH Aachen University, DE; 4Nanyang Technological University, SG
Abstract
The realization of logic and storage operations in memristive circuits has opened up a promising research direction of in-memory computing. Elementary digital circuits, e.g., Boolean arithmetic circuits, can be economically realized within memristive circuits with limited performance overhead compared to standard computation paradigms. This paper takes a major step in this direction by proposing a fully programmable in-memory computing system. In particular, we address, for the first time, the question of controlling the in-memory computation by proposing a lightweight unit that manages the operations performed on a memristive array. Assembly-level programming abstraction is achieved through natively implemented majority and complement operators. This platform enables diverse sets of applications to be ported with little effort. As a case study, we present a standardized symmetric-key cipher for lightweight security applications. The detailed system design flow and simulation results with accurate device models are reported, validating the approach.
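Majority together with complement is functionally complete, which is why exposing them as native operations suffices for general-purpose programming. As a hedged illustration (a software model only, not the paper's memristive implementation), a 1-bit full adder can be expressed using nothing but MAJ and NOT, via the well-known identity carry = MAJ(a, b, cin) and sum = MAJ(NOT(carry), cin, MAJ(a, b, NOT(cin))):

```python
def MAJ(a, b, c):
    """Majority of three bits: 1 iff at least two inputs are 1."""
    return (a & b) | (b & c) | (a & c)

def NOT(a):
    """Complement of a single bit."""
    return a ^ 1

def full_adder(a, b, cin):
    """1-bit full adder built only from MAJ and NOT."""
    carry = MAJ(a, b, cin)
    s = MAJ(NOT(carry), cin, MAJ(a, b, NOT(cin)))
    return s, carry

# Exhaustive check against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
```

Chaining such adders gives ripple-carry arithmetic, which is how Boolean arithmetic circuits map onto a majority/complement instruction set.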

10:00   End of session
Coffee Break in Exhibition Area