9.4 Where do NoC and Machine Learning meet?


Date: Thursday, March 28, 2019
Time: 08:30 - 10:00
Location / Room: Room 4

Chair:
Masoud Daneshtalab, Mälardalen University, SE

Co-Chair:
Sébastien Le Beux, Lyon Institute of Nanotechnology, FR

NoC design is increasingly enhanced with machine-intelligence techniques to run the system more efficiently. Denial-of-Service (DoS) is one attack that can affect a NoC, caused by a malicious intellectual property core flooding the network. In this session, a lightweight, real-time DoS attack detection mechanism will be presented that provides timely attack detection with minor area and power overhead. Finding a trade-off among error rate, packet retransmission, performance, and energy is a very challenging problem; a proactive fault-tolerant mechanism that optimizes energy efficiency and performance with reinforcement learning (RL) will be proposed. A method that exploits the elasticity and noise tolerance of deep learning algorithms to circumvent the bottleneck of on-chip inter-core data movement and accelerate their execution will also be discussed; it achieves better interconnect energy efficiency. Finally, since the destination of some packets can be predicted ahead of time at the network interface, a highway from source to destination can be established by reserving virtual channels, reducing the target packets' transfer latency. This mechanism will also be presented in this session.

Time | Label | Presentation Title / Authors
08:30 | 9.4.1 | REAL-TIME DETECTION AND LOCALIZATION OF DOS ATTACKS IN NOC-BASED SOCS
Speaker:
Subodha Charles, University of Florida, US
Authors:
Subodha Charles, Yangdi Lyu and Prabhat Mishra, University of Florida, US
Abstract
Network-on-Chip (NoC) is widely employed by multi-core System-on-Chip (SoC) architectures to cater to their communication requirements. The increased usage of NoC and its distributed nature across the chip has made it a focal point of potential security attacks. Denial-of-Service (DoS) is one such attack that is caused by a malicious intellectual property (IP) core flooding the network with unnecessary packets causing significant performance degradation through NoC congestion. In this paper, we propose a lightweight and real-time DoS attack detection mechanism. Once a potential attack has been flagged, our approach is also capable of localizing the malicious IP using latency data gathered by NoC components. Experimental results demonstrate the effectiveness of our approach with timely attack detection and localization while incurring minor area and power overhead (less than 6% and 4%, respectively).
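The core idea above, flagging a DoS attack from NoC latency data and then localizing the flooding IP, can be pictured with a toy model. The router names, latency values, and tolerance factor below are all hypothetical; this is an illustrative sketch, not the paper's actual detection algorithm:

```python
# Toy illustration of latency-threshold DoS detection in a NoC.
# All names, numbers, and the tolerance factor are hypothetical.

def detect_dos(observed, baseline, tolerance=1.5):
    """Flag routers whose average packet latency exceeds the profiled
    baseline by more than `tolerance`x, indicating congestion."""
    return [r for r, lat in observed.items() if lat > tolerance * baseline[r]]

def localize(flagged, observed, baseline, topology):
    """Blame the IP attached to the most congested flagged router; a
    real design would use per-link latency data to trace the source."""
    if not flagged:
        return None
    worst = max(flagged, key=lambda r: observed[r] / baseline[r])
    return topology[worst]

baseline = {"R0": 10, "R1": 12, "R2": 11}           # profiled normal latencies
observed = {"R0": 11, "R1": 40, "R2": 13}           # R1 looks congested
topology = {"R0": "IP0", "R1": "IP1", "R2": "IP2"}  # router -> attached IP core

suspects = detect_dos(observed, baseline)
print(suspects, localize(suspects, observed, baseline, topology))  # ['R1'] IP1
```

The lightweight flavor comes from comparing against precomputed baselines rather than running heavyweight traffic analysis online.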
09:00 | 9.4.2 | HIGH-PERFORMANCE, ENERGY-EFFICIENT, FAULT-TOLERANT NETWORK-ON-CHIP DESIGN USING REINFORCEMENT LEARNING
Speaker:
Ke Wang, George Washington University, US
Authors:
Ke Wang (1), Ahmed Louri (1), Avinash Karanth (2) and Razvan Bunescu (2)
(1) George Washington University, US; (2) Ohio University, US
Abstract
Network-on-Chips (NoCs) are becoming the standard communication fabric for multi-core and system-on-a-chip (SoC) architectures. As technology continues to scale, transistors and wires on the chip are becoming increasingly vulnerable to various fault mechanisms, especially timing errors, degrading the energy efficiency and performance of NoCs. Typical techniques for handling timing errors are reactive in nature, responding to the faults after their occurrence. They rely on error detection/correction techniques which have resulted in excessive power consumption and degraded performance, since the error detection/correction hardware is constantly enabled. On the other hand, indiscriminately disabling error handling hardware can induce more errors and intrusive retransmission traffic. Therefore, the challenge is to balance the trade-offs among error rate, packet retransmission, performance, and energy. In this paper, we propose a proactive fault-tolerant mechanism to optimize energy efficiency and performance with reinforcement learning (RL). First, we propose a new proactive error handling technique comprising a dynamic scheme for enabling per-router error detection/correction hardware and an effective retransmission mechanism. Second, we propose the use of RL to train the dynamic control policy with the goals of providing increased fault-tolerance, reduced power consumption and improved performance as compared to conventional techniques. Our evaluation indicates that, on average, end-to-end packet latency is lowered by 55%, energy efficiency is improved by 64%, and retransmission caused by faults is reduced by 48% over the reactive error correction techniques.
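The RL-trained control policy described above can be sketched with a tiny tabular Q-learning loop. The state encoding (a fault-rate bucket), the two actions (ECC off/on), the reward weights, and the toy fault model are all invented for illustration and are not the paper's formulation:

```python
import random

# Hypothetical toy: a per-router agent learns when to enable error
# detection/correction (ECC). State = fault-rate bucket (0 = rare,
# 1 = frequent), actions = ECC off/on. Reward trades ECC energy
# against the cost of fault-induced retransmissions.
random.seed(0)

ACTIONS = [0, 1]   # 0 = ECC disabled, 1 = ECC enabled
Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

def step(state, action):
    """Toy environment: frequent faults (state 1) with ECC off cause
    costly retransmissions; ECC on always costs some energy."""
    energy_cost = 1.0 if action == 1 else 0.0
    retrans_cost = 5.0 if (state == 1 and action == 0) else 0.0
    next_state = random.choice([0, 1])   # fault rate fluctuates
    return -(energy_cost + retrans_cost), next_state

state = 0
for _ in range(2000):
    # epsilon-greedy action selection
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, nxt = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = nxt

# Greedy policy: ECC off when faults are rare, on when they are frequent.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in (0, 1)}
print(policy)
```

The proactive aspect in the paper comes from toggling the hardware before errors accumulate, rather than reacting after detection; here that is abstracted into the learned per-state switch.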
09:30 | 9.4.3 | LEARN-TO-SCALE: PARALLELIZING DEEP LEARNING INFERENCE ON CHIP MULTIPROCESSOR ARCHITECTURE
Speaker:
Kaiwei Zou, Institute of Computing Technology, Chinese Academy of Sciences, CN
Authors:
Kaiwei Zou, Ying Wang, Huawei Li and Xiaowei Li, Institute of Computing Technology, Chinese Academy of Sciences, CN
Abstract
Accelerating deep neural networks on resource-constrained embedded devices is becoming increasingly important for real-time applications. However, in contrast to the intensive research on specialized neural network inference architectures, there is a lack of study on the acceleration and parallelization of deep learning inference on embedded chip-multiprocessor architectures, which are favored by many real-time applications for their superb energy efficiency and scalability. In this work, we investigate strategies for parallelizing single-pass deep neural network inference on embedded on-chip multi-core accelerators. These methods exploit the elasticity and noise tolerance of deep learning algorithms to circumvent the bottleneck of on-chip inter-core data movement and reduce the communication overhead, which grows as the core count scales up. The experimental results show that the communication-aware sparsified parallelization method improves the system performance by 1.6×-1.1× and achieves 4×-1.6× better interconnect energy efficiency for different neural networks.
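One way to picture a communication-aware, sparsified exchange, exploiting the noise tolerance mentioned above to cut inter-core traffic, is to drop near-zero activations before they cross the interconnect. The core count, slice size, and threshold below are made up for illustration and do not reflect the paper's actual scheme:

```python
import random

# Toy sketch: 4 cores each hold a slice of a layer's activations and must
# exchange them. Dropping near-zero values before transfer trades a little
# accuracy for much less on-chip traffic (hypothetical sizes/threshold).
random.seed(1)

CORES, SLICE = 4, 64
activations = [[random.gauss(0, 1) for _ in range(SLICE)] for _ in range(CORES)]

def sparsify(vec, threshold=1.0):
    """Keep only (index, value) pairs whose magnitude exceeds the threshold;
    the receiver treats missing entries as zero (noise-tolerant)."""
    return [(i, v) for i, v in enumerate(vec) if abs(v) > threshold]

dense_words = CORES * SLICE   # words sent without sparsification
# each surviving activation costs two words: its index and its value
sparse_words = sum(2 * len(sparsify(v)) for v in activations)
print(f"traffic reduced to {sparse_words / dense_words:.0%} of dense transfer")
```

Note the index overhead: sparsification only pays off when enough activations fall below the threshold, which is why a communication-aware method must pick it per layer.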
09:45 | 9.4.4 | ADVANCE VIRTUAL CHANNEL RESERVATION
Speaker:
Boqian Wang, KTH Royal Institute of Technology, SE
Authors:
Boqian Wang (1) and Zhonghai Lu (2)
(1) KTH Royal Institute of Technology, National University of Defense Technology, CN; (2) KTH Royal Institute of Technology, CN
Abstract
We present a smart communication service called Advance Virtual Channel Reservation (AVCR) to provide a highway for packets, which can greatly reduce their contention delay in the NoC. AVCR takes advantage of the fact that we can know or predict the destination of some packets ahead of time at the network interface (NI). Exploiting the time slack before a packet is ready, AVCR establishes an end-to-end highway from the source NI to the destination NI. This highway is built up by reserving virtual channel (VC) resources in advance while offering priority service to those VCs in the router, thereby sparing highway packets the VC allocation and switch arbitration delay in the NoC. Additionally, optimization schemes are developed to reduce VC overhead and increase highway utilization. We evaluate AVCR with cycle-accurate full-system simulations in GEM5 using all benchmarks in PARSEC. Compared to the state-of-the-art mechanisms and the priority-based mechanism, experimental results show that our mechanism can significantly reduce the target packets' transfer latency and effectively decrease the average region-of-interest (ROI) time by 22.4% (maximally by 29.4%) across PARSEC benchmarks.
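A highly simplified model of the reservation idea, a control message pre-claims a VC at each hop so the later packet skips per-hop allocation and arbitration, might look like the sketch below. The cycle counts, router class, and fallback behavior are assumptions for illustration, not the AVCR implementation:

```python
# Toy model of advance VC reservation (hypothetical latencies).
# A reservation pre-claims one VC per router on the path, so the packet
# later pays only link traversal, not VC allocation + switch arbitration.

ARBITRATION_CYCLES = 3   # per-hop VC allocation + switch arbitration (assumed)
LINK_CYCLES = 1          # per-hop link/router traversal (assumed)

class Router:
    def __init__(self, name, vcs=4):
        self.name = name
        self.free_vcs = vcs
        self.reserved = set()   # packet ids holding a reserved VC here

    def reserve(self, packet_id):
        if self.free_vcs == 0:
            return False        # no VC left: fall back to normal arbitration
        self.free_vcs -= 1
        self.reserved.add(packet_id)
        return True

def transfer_latency(path, packet_id):
    """Latency in cycles along the path: reserved hops skip arbitration."""
    total = 0
    for router in path:
        total += LINK_CYCLES
        if packet_id not in router.reserved:
            total += ARBITRATION_CYCLES
    return total

path = [Router(f"R{i}") for i in range(4)]   # 4-hop source-to-destination path

baseline = transfer_latency(path, "pkt")     # no reservation: 4 * (1 + 3) = 16
for r in path:
    r.reserve("pkt")                         # advance reservation succeeds
highway = transfer_latency(path, "pkt")      # reserved highway: 4 * 1 = 4
print(baseline, highway)                     # 16 4
```

The interesting engineering in AVCR lies in what this sketch hides: exploiting the slack before the packet is ready, prioritizing reserved VCs in the router, and limiting the VC overhead of holding reservations.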
10:00 | End of session
Coffee Break in Exhibition Area



Coffee Breaks in the Exhibition Area

On all conference days (Tuesday to Thursday), coffee and tea will be served during the coffee breaks at the below-mentioned times in the exhibition area.

Lunch Breaks (Lunch Area)

On all conference days (Tuesday to Thursday), a seated lunch (lunch buffet) will be offered in the Lunch Area to fully registered conference delegates only. There will be badge control at the entrance to the lunch break area.
