IP4_5 Interactive Presentations

Date: Wednesday, 03 February 2021
Time: 09:00 - 09:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/aTD3XZgEET2TWjiKy

Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.

Label / Presentation Title / Authors
IP4_5.1 MEMORY HIERARCHY CALIBRATION BASED ON REAL HARDWARE IN-ORDER CORES FOR ACCURATE SIMULATION
Speaker:
Quentin Huppert, LIRMM, FR
Authors:
Quentin Huppert1, Timon Evenblij2, Manu Komalan2, Francky Catthoor3, Lionel Torres4 and David Novo5
1LIRMM, FR; 2imec, BE; 3imec, BE; 4University of Montpellier, FR; 5CNRS, LIRMM, University of Montpellier, FR
Abstract
Computer system simulators are major tools used by architecture researchers. Two key elements play a role in the credibility of simulator results: (1) the simulator’s accuracy, and (2) the quality of the baseline architecture. Some simulators, such as gem5, already provide highly accurate parameterized models. However, finding the right values for all these parameters to faithfully model a real architecture is still a problem. In this paper, we calibrate the memory hierarchy of an in-order core gem5 simulation to accurately model a real mobile Arm SoC. We execute small programs, which are designed to stress specific parts of the memory system, to deduce key parameter values for the model. We compare the execution of SPEC CPU2006 benchmarks on the real hardware with the gem5 simulation. Our results show that our calibration reduces the average and worst-case IPC error by 36% and 50%, respectively, when compared with a gem5 simulation configured with the default parameters.
IP4_5.2 SPRITE: SPARSITY-AWARE NEURAL PROCESSING UNIT WITH CONSTANT PROBABILITY OF INDEX-MATCHING
Speaker:
Sungju Ryu, POSTECH, KR
Authors:
Sungju Ryu1, Youngtaek Oh2, Taesu Kim1, Daehyun Ahn1 and Jae-Joon Kim3
1Pohang University of Science and Technology, KR; 2Pohang University of Science and Technology, KR; 3POSTECH, KR
Abstract
Sparse neural networks are widely used for memory savings. However, irregular indices of non-zero input activations and weights tend to degrade the overall system performance. This paper presents a scheme to maintain a constant probability of index-matching between weights and inputs over a wide range of sparsity, overcoming a critical limitation of previous works. A sparsity-aware neural processing unit based on the proposed scheme improves system performance by up to 6.1X compared with previous sparse convolutional neural network hardware accelerators.