IP3_4 Interactive Presentations

Date: Tuesday, 02 February 2021
Time: 18:30 - 19:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/c3kZHSMFp9WHTDNNG

Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in the corresponding regular session.

Label / Presentation Title / Authors
IP3_4.1 RESOLUTION-AWARE DEEP MULTI-VIEW CAMERA SYSTEMS
Speaker:
Zeinab Hakimi, Pennsylvania State University, US
Authors:
Zeinab Hakimi and Vijaykrishnan Narayanan, Pennsylvania State University, US
Abstract
Recognizing 3D objects from multiple views is an important problem in computer vision. However, multi-view object recognition can be challenging for networked embedded intelligent systems (IoT devices), as they face data transmission limitations as well as computational resource constraints. In this work, we design an enhanced multi-view distributed recognition system that deploys a view-importance estimator to transmit data at different resolutions. Moreover, a multi-view learning-based super-resolution enhancer is used at the back end to compensate for the performance degradation caused by information loss from resolution reduction. Extensive experiments on the benchmark dataset demonstrate that the designed resolution-aware multi-view system can decrease the endpoint's communication energy by a factor of five while sustaining accuracy. Further experiments on the enhanced multi-view recognition system show that accuracy gains can be achieved with minimal effect on the computational cost of the back-end system.
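The core idea of the abstract can be illustrated with a small sketch: each camera scores its view's importance and transmits at a correspondingly reduced resolution, and transmission energy scales with the number of pixels sent. The score thresholds, resolutions, and function names below are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of resolution-aware transmission: a view-importance
# score selects a per-view resolution; communication energy is modeled as
# proportional to the pixel count transmitted. All values are illustrative.

def select_resolution(importance: float) -> int:
    """Map a view-importance score in [0, 1] to an image side length in pixels."""
    if importance >= 0.8:
        return 224   # most informative view: full resolution
    if importance >= 0.4:
        return 112   # moderately informative view: half resolution
    return 56        # low-importance view: quarter resolution

def relative_energy(side: int, full_side: int = 224) -> float:
    """Transmission energy relative to sending the view at full resolution."""
    return (side * side) / (full_side * full_side)

scores = [0.9, 0.5, 0.2, 0.1]                       # importance of four camera views
sides = [select_resolution(s) for s in scores]      # [224, 112, 56, 56]
energy = sum(relative_energy(s) for s in sides) / len(sides)
```

Under this toy model the four-view mix costs about a third of the energy of sending every view at full resolution; a learned super-resolution enhancer at the back end would then recover accuracy lost to the downsampled views.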
IP3_4.2 HSCONAS: HARDWARE-SOFTWARE CO-DESIGN OF EFFICIENT DNNS VIA NEURAL ARCHITECTURE SEARCH
Speaker:
Xiangzhong Luo, Nanyang Technological University, SG
Authors:
Xiangzhong Luo, Di Liu, Shuo Huai and Weichen Liu, Nanyang Technological University, SG
Abstract
In this paper, we present a novel multi-objective hardware-aware neural architecture search (NAS) framework, HSCoNAS, to automate the design of deep neural networks (DNNs) with high accuracy and low latency on target hardware. To accomplish this goal, we first propose an effective hardware performance modeling method that approximates the runtime latency of DNNs on target hardware; it is integrated into HSCoNAS to avoid tedious on-device measurements. In addition, we propose two novel techniques: dynamic channel scaling, which maximizes accuracy under a specified latency, and progressive space shrinking, which refines the search space toward the target hardware and alleviates search overhead. These two techniques work jointly to allow HSCoNAS to perform fine-grained and efficient exploration. Finally, an evolutionary algorithm (EA) is incorporated to conduct the architecture search. Extensive experiments on ImageNet across diverse target hardware, i.e., GPU, CPU, and an edge device, demonstrate the superiority of HSCoNAS over recent state-of-the-art approaches.
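The interplay the abstract describes, a latency predictor gating an evolutionary search over channel widths, can be sketched minimally as follows. The linear latency model, accuracy proxy, and mutation rule are placeholders standing in for the paper's hardware performance model and dynamic channel scaling, not the actual HSCoNAS components.

```python
import random

# Minimal sketch of latency-constrained evolutionary architecture search:
# candidates are per-layer channel widths, a predicted-latency budget filters
# the population, and mutation widens or narrows one layer at a time.

random.seed(0)
LATENCY_PER_CHANNEL_MS = 0.02   # assumed linear latency model (placeholder)
BUDGET_MS = 30.0                # target-hardware latency budget

def predict_latency(channels):
    """Stand-in for the hardware performance model (avoids on-device runs)."""
    return sum(c * LATENCY_PER_CHANNEL_MS for c in channels)

def accuracy_proxy(channels):
    """Toy proxy: wider layers help accuracy, with diminishing returns."""
    return sum(c ** 0.5 for c in channels)

def mutate(channels):
    """Channel-scaling mutation: randomly widen or narrow one layer."""
    out = list(channels)
    i = random.randrange(len(out))
    out[i] = max(16, min(512, out[i] + random.choice([-32, 32])))
    return out

def evolve(depth=4, population=20, generations=50):
    pop = [[random.choice([64, 128, 256]) for _ in range(depth)]
           for _ in range(population)]
    for _ in range(generations):
        feasible = [c for c in pop if predict_latency(c) <= BUDGET_MS]
        feasible.sort(key=accuracy_proxy, reverse=True)
        parents = feasible[:population // 2] or pop[:population // 2]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(population - len(parents))]
    # prefer feasible candidates, then the highest accuracy proxy
    return max(pop, key=lambda c: (predict_latency(c) <= BUDGET_MS,
                                   accuracy_proxy(c)))

best = evolve()
```

Because selection keeps only budget-feasible parents, the search converges toward the widest (highest-proxy) architectures that still meet the latency constraint, which is the fine-grained accuracy-versus-latency trade-off the framework targets.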