IP4_6 Interactive Presentations
Date: Wednesday, 03 February 2021
Time: 09:00 - 09:30 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/sQdNyqetXmw6iE6R9
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.
|IP4_6.1||WISER: DEEP NEURAL NETWORK WEIGHT-BIT INVERSION FOR STATE ERROR REDUCTION IN MLC NAND FLASH
Jaehun Jang, Sungkyunkwan University, KR
Jaehun Jang1 and Jong Hwan Ko2
1Department of Semiconductor and Display Engineering, Sungkyunkwan University; Memory Division, Samsung Electronics, KR; 2Sungkyunkwan University (SKKU), KR
When Flash memory is used to store deep neural network (DNN) weights, inference accuracy can degrade due to Flash memory state errors. To protect the weights from state errors, existing methods rely on error-correcting codes (ECC) or parity, which incur power/storage overhead. In this study, we propose a weight-bit inversion method that minimizes the accuracy loss due to Flash memory state errors without using ECC or parity. The method first applies WISE (Weight-bit Inversion for State Elimination), which removes the most error-prone state from MLC NAND, thereby improving both error robustness and MSB page read speed. If the initial accuracy loss due to the weight inversion of WISE is unacceptable, we apply WISER (Weight-bit Inversion for State Error Reduction), which reduces weight mapping to the error-prone state with minimal weight value changes. Simulation results show that after 16K program-erase cycles in NAND Flash, WISER reduces the CIFAR-100 accuracy loss of VGG-16 by 2.92X compared to the existing methods.
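The core idea of the abstract can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the choice of `0b10` as the error-prone MLC state, the per-byte inversion granularity, and all function names are assumptions made for the example.

```python
# Hypothetical 2-bit MLC cell state assumed to be the most error-prone.
ERROR_PRONE_STATE = 0b10

def mlc_states(byte):
    """Split an 8-bit weight into four 2-bit MLC cell states (MSB first)."""
    return [(byte >> shift) & 0b11 for shift in (6, 4, 2, 0)]

def error_prone_count(byte):
    """Count how many cells of this weight map to the error-prone state."""
    return sum(1 for s in mlc_states(byte) if s == ERROR_PRONE_STATE)

def wiser_invert(byte):
    """Return (stored_byte, inverted_flag): invert all weight bits when
    inversion maps fewer cells to the error-prone state."""
    inverted = byte ^ 0xFF
    if error_prone_count(inverted) < error_prone_count(byte):
        return inverted, True
    return byte, False
```

For example, `wiser_invert(0b10101010)` would store the inverted byte `0b01010101`, eliminating all four error-prone cell states; the real scheme additionally bounds the weight-value change introduced by inversion, which this sketch omits.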
|IP4_6.2||OR-ML: ENHANCING RELIABILITY FOR MACHINE LEARNING ACCELERATOR WITH OPPORTUNISTIC REDUNDANCY
Zheng Wang, Shenzhen Institutes of Advanced Technology, CN
Bo Dong1, Zheng Wang2, Wenxuan Chen3, Chao Chen2, Yongkui Yang2 and Zhibin Yu2
1Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; School of Microelectronics, Xidian University, CN; 2Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, CN; 3Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; School of Microelectronics, Xidian University, CN
Reliability plays a central role in deep sub-micron and nanometre IC fabrication technology and has recently been reported to be one of the key issues affecting the inference phase of neural networks. State-of-the-art machine learning (ML) accelerators exploit the massive computing parallelism of neural networks to achieve high energy efficiency. The topology of an ML engine's computing fabric, which consists of large arrays of processing elements (PEs), has been growing dramatically to accommodate the huge size and heterogeneity of rapidly evolving ML algorithms. However, it is commonly observed that activations of zero value lead to reduced PE utilization. In this work, we present a novel and low-cost approach, named OR-ML, that enhances the reliability of generic ML accelerators by Opportunistically exploiting the Redundancy provided at runtime by neighbouring PEs. In contrast to conventional redundancy techniques, the proposed approach introduces no additional computing resources, which significantly reduces implementation overhead while achieving a notable level of protection. The design prototype is evaluated using emulated fault injection on FPGA, executing mainstream neural networks for object classification and detection.
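The opportunistic-redundancy idea described above can be sketched at a behavioural level. This is a simplified software model, not the OR-ML hardware: the pairwise PE grouping, the `fault_a` injection hook, and the recovery policy of trusting the redundant copy are all assumptions made for illustration.

```python
def pe_mac(activation, weight, fault=None):
    """One PE multiply; `fault` optionally corrupts the result to
    emulate an injected hardware fault."""
    result = activation * weight
    return fault(result) if fault else result

def or_ml_pair(act_a, w_a, act_b, w_b, fault_a=None):
    """Compute PE-A's product; if neighbouring PE-B receives a zero
    activation (its MAC would be idle anyway), opportunistically reuse
    PE-B as a redundant copy of PE-A and compare the two results."""
    out_a = pe_mac(act_a, w_a, fault_a)
    if act_b == 0:  # PE-B idle: duplicate A's computation on it
        out_check = pe_mac(act_a, w_a)
        if out_a != out_check:
            out_a = out_check  # simplistic recovery: trust the copy
    return out_a
```

Note that redundancy is only available when the neighbour happens to be idle, so protection coverage depends on activation sparsity; with only two copies a mismatch is detectable but not unambiguously correctable, which is why the recovery line above is flagged as a simplification.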