6.1 Special Day on "Embedded Meets Hyperscale and HPC" Session: Near-memory computing


Date: Wednesday, March 27, 2019
Time: 11:00 - 12:30
Location / Room: Room 1

Chair:
Christoph Hagleitner, IBM Research, CH

Co-Chair:
Christian Plessl, Paderborn University, DE

While it used to be easy to increase the peak computational capabilities of processors by exploiting the growth in available transistors delivered by Moore's law, the latency and bandwidth of the memory system have not improved at the same pace. Today's microprocessors hide this fact behind a complex memory hierarchy, but often fail to make optimal use of the available memory bandwidth across a broad range of applications. Near-memory computing takes a fresh look at the memory system and proposes innovations ranging from the micro-architecture to the runtime system to address these bottlenecks and build more balanced computing systems.

Time | Label | Presentation Title / Authors
11:00 | 6.1.1 | NTX: AN ENERGY-EFFICIENT STREAMING ACCELERATOR FOR FLOATING-POINT GENERALIZED REDUCTION WORKLOADS IN 22NM FD-SOI
Speaker:
Luca Benini, IIS, ETH Zürich, CH / DEI, University of Bologna, IT
Authors:
Fabian Schuiki¹, Michael Schaffner¹ and Luca Benini²
¹IIS, ETH Zürich, CH; ²IIS, ETH Zürich / DEI, University of Bologna, CH
Abstract
Specialized coprocessors for Multiply-Accumulate (MAC) intensive workloads such as Deep Learning are becoming widespread in SoC platforms, from GPUs to mobile SoCs. In this paper we revisit NTX (an efficient accelerator developed for training Deep Neural Networks at scale) as a generalized MAC and reduction streaming engine. The architecture consists of a set of 32-bit floating-point streaming co-processors that are loosely coupled to a RISC-V core in charge of orchestrating data movement and computation. Post-layout results of a recent silicon implementation in 22 nm FD-SOI technology show the accelerator's capability to deliver up to 20 Gflop/s at 1.25 GHz and 168 mW. Based on these results we show that a version of NTX scaled down to 14 nm can achieve a 3× energy efficiency improvement over contemporary GPUs at 10.4× less silicon area, and a compute performance of 1.4 Tflop/s for training large state-of-the-art networks with full floating-point precision. An extended evaluation of MAC-intensive kernels shows that NTX can consistently achieve up to 87% of its peak performance across general reduction workloads beyond machine learning. Its modular architecture enables deployment at different scales ranging from high-performance GPU-class to low-power embedded scenarios.
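As a concrete illustration of the workload class NTX targets, the following plain-C sketch shows a generalized floating-point MAC reduction (the inner loop of a dense layer or any dot-product-style kernel). It is illustrative only and runs on an ordinary CPU; it does not use NTX's actual programming interface, and all names in it are invented for the example.

/* Illustrative only: a generalized floating-point MAC/reduction kernel of the
 * kind the NTX streaming engine targets. Plain C for a CPU, not NTX's
 * programming interface; names are made up for this sketch. */
#include <stdio.h>

/* y[i] += sum over j of w[i*n + j] * x[j]  -- a MAC-dominated reduction */
static void mac_reduce(const float *w, const float *x, float *y,
                       int m, int n)
{
    for (int i = 0; i < m; i++) {
        float acc = y[i];
        for (int j = 0; j < n; j++) {
            acc += w[i * n + j] * x[j];   /* one MAC per streamed element */
        }
        y[i] = acc;
    }
}

int main(void)
{
    enum { M = 4, N = 8 };
    float w[M * N], x[N], y[M] = {0};
    for (int i = 0; i < M * N; i++) w[i] = 0.01f * (float)i;
    for (int j = 0; j < N; j++)     x[j] = 1.0f;
    mac_reduce(w, x, y, M, N);
    for (int i = 0; i < M; i++) printf("y[%d] = %f\n", i, y[i]);
    return 0;
}

Per the abstract, loops of this shape would be executed by the streaming co-processors, while the loosely coupled RISC-V core orchestrates data movement.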
11:30 | 6.1.2 | NEAR-MEMORY PROCESSING: IT'S THE HARDWARE AND SOFTWARE, SILLY!
Author:
Boris Grot, University of Edinburgh, GB
Abstract
Conventional computing systems are increasingly challenged by the need to process rapidly growing volumes of data, often at online speeds. One promising way to boost compute efficiency is through Near-Memory Processing (NMP), which integrates light-weight compute logic close to the memory arrays. NMP affords massive bandwidth to the memory-resident data and dramatically reduces energy-hungry data movement.  A key challenge for effectively leveraging NMP is that today's high-performance data processing algorithms have been designed for CPUs with powerful cores, large caches, and bandwidth-constrained memory interfaces. Meanwhile, NMP architectures are limited to simple logic and small caches while offering abundant memory bandwidth. Hence, achieving high efficiency with NMP requires a careful algorithm-hardware co-design to maximize bandwidth utilization given a highly constrained area and power budget. I will describe one instance of such a co-designed NMP architecture for data analytics, and show that it reaps significant performance and energy-efficiency advantages over both CPU-based and baseline NMP systems.  
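To make the bandwidth argument concrete, here is a minimal C sketch of a column-scan selection, a typical analytics operator that performs roughly one comparison per word read and is therefore bandwidth-bound on a conventional CPU; operators of this kind are natural candidates for execution next to the memory arrays. The code is a generic illustration, not tied to the co-designed NMP architecture described in the talk, and the names are invented for the example.

/* Illustrative only: a bandwidth-bound column-scan/selection operator.
 * Generic C sketch, not tied to any specific NMP hardware. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Count rows whose key falls in [lo, hi) -- one pass over the column. */
static size_t scan_count(const uint32_t *col, size_t n,
                         uint32_t lo, uint32_t hi)
{
    size_t hits = 0;
    for (size_t i = 0; i < n; i++) {
        hits += (col[i] >= lo && col[i] < hi);  /* ~1 compare per 4 bytes read */
    }
    return hits;
}

int main(void)
{
    enum { N = 1 << 20 };
    uint32_t *col = malloc(N * sizeof *col);
    if (!col) return 1;
    for (size_t i = 0; i < N; i++) col[i] = (uint32_t)(i * 2654435761u);
    printf("matching rows: %zu\n", scan_count(col, N, 100u, 1000000u));
    free(col);
    return 0;
}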
12:00 | 6.1.3 | EXTREME HETEROGENEITY IN HIGH PERFORMANCE COMPUTING
Author:
Jan van Lunteren, IBM Research Zurich, CH
Abstract
Concerns about energy-efficiency and cost are forcing our community to reexamine system architectures, including the memory and storage hierarchy. While computing technologies have remained relatively stable for nearly two decades, new architectural features, such as heterogeneous cores, deep memory hierarchies, non-volatile memory (NVM), and near-memory processing, have emerged as possible solutions to address these concerns. However, we expect this "golden age" of architectural change to lead to extreme heterogeneity, and it will have a major impact on software systems and applications. Software will need to be redesigned to exploit these new capabilities and provide some level of performance portability across these diverse architectures. In this talk, I will sample these emerging memory technologies, discuss their architectural and software implications, and describe several new approaches to address these challenges. One programming system we have designed allows users to program FPGAs using C and OpenACC directives, which facilitates portability to GPUs and CPUs. Another system is Papyrus (Parallel Aggregate Persistent Storage), a programming system that aggregates NVM from across the system for use as application data structures, such as vectors and key-value stores, while providing performance portability across emerging NVM hierarchies.
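For readers unfamiliar with the directive style mentioned above, the following is a minimal, generic OpenACC-annotated C loop. The #pragma acc directives are standard OpenACC; whether they compile to an FPGA, GPU, or multicore CPU depends on the toolchain, and this sketch is not taken from the programming system described in the talk.

/* Illustrative only: standard OpenACC directives on a C loop.
 * Generic example, independent of the talk's toolchain. */
#include <stdio.h>

#define N 1024

int main(void)
{
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    /* Offload the loop to whatever target the OpenACC compiler selects
     * (GPU, multicore CPU, or FPGA, depending on the toolchain). */
    #pragma acc parallel loop copyin(a[0:N], b[0:N]) copyout(c[0:N])
    for (int i = 0; i < N; i++) {
        c[i] = a[i] * b[i];
    }

    printf("c[42] = %f\n", c[42]);
    return 0;
}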
12:30 | End of session
Lunch Break in Lunch Area



Coffee Breaks in the Exhibition Area

On all conference days (Tuesday to Thursday), coffee and tea will be served in the exhibition area during the coffee breaks at the times listed below.

Lunch Breaks (Lunch Area)

On all conference days (Tuesday to Thursday), a seated lunch (lunch buffet) will be offered in the "Lunch Area" to fully registered conference delegates only. Badges will be checked at the entrance to the lunch area.

Tuesday, March 26, 2019

  • Coffee Break 10:30 - 11:30
  • Lunch Break 13:00 - 14:30
  • Awards Presentation and Keynote Lecture in "TBD" 13:50 - 14:20
  • Coffee Break 16:00 - 17:00

Wednesday, March 27, 2019

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:30
  • Awards Presentation and Keynote Lecture in "TBD" 13:30 - 14:20
  • Coffee Break 16:00 - 17:00

Thursday, March 28, 2019

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:00
  • Keynote Lecture in "TBD" 13:20 - 13:50
  • Coffee Break 15:30 - 16:00