DATE 2021 became a virtual conference due to the worldwide COVID-19 pandemic

In view of the continued erratic development of the worldwide COVID-19 pandemic, the accompanying restrictions on international travel, and the safety and health of the DATE community, the Organizing Committees decided to host DATE 2021 as a virtual conference in early February 2021. Unfortunately, the current situation does not allow a face-to-face conference in Grenoble, France.

The Organizing Committees are working intensively to create a virtual conference that conveys as much of a real conference atmosphere as possible.

IP2_2 Interactive Presentations

Date: Tuesday, 02 February 2021
Time: 17:00 - 17:30

Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.

Label | Presentation Title | Authors
IP2_2.1 RECEPTIVE-FIELD AND SWITCH-MATRICES BASED RERAM ACCELERATOR WITH LOW DIGITAL-ANALOG CONVERSION FOR CNNS
Speaker:
Xun Liu, North China University of Technology, CN
Authors:
Yingxun Fu1, Xun Liu2, Jiwu Shu3, Zhirong Shen4, Shiye Zhang1, Jun Wu1 and Li Ma1
1North China University of Technology, CN; 2North China University of Technology, CN; 3Tsinghua University, CN; 4Xiamen University, CN
Abstract
Processing-in-Memory (PIM) based accelerators have become one of the best solutions for executing convolutional neural networks (CNNs). Resistive random access memory (ReRAM) is a classic type of non-volatile random-access memory and is well suited to implementing PIM architectures. However, existing ReRAM-based accelerators mainly focus on improving calculation efficiency while ignoring the fact that the digital-analog signal conversion process consumes considerable energy and execution time. In this paper, we propose a novel ReRAM-based accelerator named Receptive-Field and Switch-Matrices based CNN Accelerator (RFSM). In RFSM, we first propose a receptive-field based convolution strategy to analyze the data relationships, and then give a dynamic and configurable crossbar combination method to reduce digital-analog conversion operations. The evaluation results show that, compared to existing works, RFSM achieves up to 6.7x higher speedup and 7.1x lower energy consumption.
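The following minimal Python sketch illustrates the general input-reuse idea behind receptive-field aware crossbar mapping, not the RFSM design itself: it compares how many digital-to-analog conversions are needed when every convolution window converts its inputs independently versus when each input pixel shared by overlapping windows is converted only once. The function names and the 32x32 feature-map size are illustrative assumptions.

def naive_dac_conversions(h, w, k, stride=1):
    """Naive mapping: each k x k window converts all of its inputs independently."""
    windows_h = (h - k) // stride + 1
    windows_w = (w - k) // stride + 1
    return windows_h * windows_w * k * k

def shared_dac_conversions(h, w, k, stride=1):
    """Reuse-aware mapping: each input pixel covered by any window is converted once."""
    windows_h = (h - k) // stride + 1
    windows_w = (w - k) // stride + 1
    covered_h = (windows_h - 1) * stride + k
    covered_w = (windows_w - 1) * stride + k
    return covered_h * covered_w

if __name__ == "__main__":
    h = w = 32  # feature-map size (assumption)
    k = 3       # 3x3 kernel
    naive = naive_dac_conversions(h, w, k)
    shared = shared_dac_conversions(h, w, k)
    print(f"naive: {naive}, shared: {shared}, reduction: {naive / shared:.1f}x")

For a 32x32 input with a 3x3 kernel at stride 1, this gives 8100 conversions for the naive mapping versus 1024 for the reuse-aware one, which shows why exploiting receptive-field overlap can substantially cut conversion cost.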
IP2_2.2 AN ON-CHIP LAYER-WISE TRAINING METHOD FOR RRAM BASED COMPUTING-IN-MEMORY CHIPS
Speaker:
Yiwen Geng, Tsinghua University, CN
Authors:
Yiwen Geng, Bin Gao, Qingtian Zhang, Wenqiang Zhang, Peng Yao, Yue Xi, Yudeng Lin, Junren Chen, Jianshi Tang, Huaqiang Wu and He Qian, Institute of Microelectronics, Tsinghua University, CN
Abstract
RRAM based computing-in-memory (CIM) chips have shown great potential to accelerate deep neural networks on edge devices by reducing data transfer between memory and computing units. However, due to the non-ideal characteristics of RRAM, the accuracy of a neural network on an RRAM chip is usually lower than that of its software counterpart. Here we propose an on-chip layer-wise training (LWT) method to alleviate the adverse effects of RRAM imperfections and improve the accuracy of the chip. Using a locally validated dataset, LWT can reduce communication between the edge and the cloud, which benefits personalized data privacy. Simulation results on the CIFAR-10 dataset show that the LWT method can improve the accuracy of VGG-16 and ResNet-18 by more than 5% and 10%, respectively, with only 30% of the operations and 35% of the buffer required by the back-propagation method. Moreover, the pipe-LWT method is presented to further improve throughput by three times.
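As a rough, generic illustration of layer-wise training (not the paper's on-chip LWT method, which additionally compensates for RRAM non-idealities), the PyTorch sketch below fine-tunes one layer at a time while all other parameters stay frozen, so each step only computes gradients and keeps optimizer state for a single layer. The toy model, the synthetic data, and all hyperparameters are assumptions for demonstration.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny stand-in model; the paper evaluates VGG-16 and ResNet-18.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

x = torch.randn(128, 32)          # synthetic stand-in for locally collected data
y = torch.randint(0, 10, (128,))  # synthetic labels
loss_fn = nn.CrossEntropyLoss()

# Train layers one at a time, from the last to the first: everything else is
# frozen, so each pass only needs gradients for a single layer's parameters.
param_layers = [m for m in model if list(m.parameters())]
for layer in reversed(param_layers):
    for p in model.parameters():
        p.requires_grad_(False)
    for p in layer.parameters():
        p.requires_grad_(True)
    opt = torch.optim.SGD(layer.parameters(), lr=0.05)
    for _ in range(20):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"tuned {layer}, final loss {loss.item():.3f}")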