- DATE 2021 became a virtual conference due to the worldwide COVID-19 pandemic
Taking into consideration the continued erratic development of the worldwide COVID-19 pandemic, the accompanying restrictions on worldwide travel, and the safety and health of the DATE community, the Organizing Committees decided to host DATE 2021 as a virtual conference in early February 2021. Unfortunately, the current situation does not allow a face-to-face conference in Grenoble, France.
The Organizing Committees are working intensively to create a virtual conference that offers as much of a real conference atmosphere as possible.
IP3_4 Interactive Presentations
Date: Tuesday, 02 February 2021
Time: 18:30 - 19:00
Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in a corresponding regular session.
IP3_4.1 RESOLUTION-AWARE DEEP MULTI-VIEW CAMERA SYSTEMS
Speaker: Zeinab Hakimi, Pennsylvania State University, US
Authors: Zeinab Hakimi and Vijaykrishnan Narayanan, Pennsylvania State University, US
Recognizing 3D objects from multiple views is an important problem in computer vision. However, multi-view object recognition can be challenging for networked embedded intelligent systems (IoT devices), as they face data transmission limitations as well as computational resource constraints. In this work, we design an enhanced multi-view distributed recognition system that deploys a view importance estimator to transmit data at different resolutions. Moreover, a multi-view learning-based super-resolution enhancer is used at the back end to compensate for the performance degradation caused by the information loss from resolution reduction. Extensive experiments on the benchmark dataset demonstrate that the designed resolution-aware multi-view system can decrease the endpoint's communication energy by a factor of 5 while sustaining accuracy. Further experiments on the enhanced multi-view recognition system show that accuracy gains can be achieved with minimal effect on the computational cost of the back-end system.
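The transmission side of such a system might be sketched as follows. This is a minimal illustration, not the paper's implementation: the importance scores, the thresholding rule, the resolutions, and the pixel-count cost model are all assumed values chosen for demonstration.

```python
def assign_resolutions(scores, full_res=224, low_res=56):
    """Toy view-importance-based resolution assignment (illustrative only).

    Views whose importance score is at or above the mean keep full
    resolution; the rest are downsampled before transmission."""
    threshold = sum(scores) / len(scores)
    return [full_res if s >= threshold else low_res for s in scores]

def transmission_cost(resolutions, bytes_per_pixel=1):
    """Approximate per-view transmission cost as the total pixel count."""
    return sum(r * r * bytes_per_pixel for r in resolutions)

# Example: four camera views with unequal (made-up) importance scores.
scores = [0.9, 0.2, 0.1, 0.8]
res = assign_resolutions(scores)          # [224, 56, 56, 224]
baseline = transmission_cost([224] * len(scores))
adaptive = transmission_cost(res)
print(res, baseline / adaptive)
```

In this toy setting, downsampling the two low-importance views already roughly halves the transmitted data; the back-end super-resolution enhancer would then recover detail lost in those views.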
IP3_4.2 HSCONAS: HARDWARE-SOFTWARE CO-DESIGN OF EFFICIENT DNNS VIA NEURAL ARCHITECTURE SEARCH
Speaker: Xiangzhong Luo, Nanyang Technological University, SG
Authors: Xiangzhong Luo, Di Liu, Shuo Huai and Weichen Liu, Nanyang Technological University, SG
In this paper, we present a novel multi-objective hardware-aware neural architecture search (NAS) framework, namely HSCoNAS, to automate the design of deep neural networks (DNNs) with high accuracy but low latency on target hardware. To accomplish this goal, we first propose an effective hardware performance modeling method to approximate the runtime latency of DNNs on target hardware, which is integrated into HSCoNAS to avoid tedious on-device measurements. In addition, we propose two novel techniques: dynamic channel scaling, which maximizes accuracy under the specified latency, and progressive space shrinking, which refines the search space towards the target hardware and reduces search overhead. These two techniques work jointly to allow HSCoNAS to perform fine-grained and efficient exploration. Finally, an evolutionary algorithm (EA) is incorporated to conduct the architecture search. Extensive experiments on ImageNet are conducted on diverse target hardware, i.e., GPU, CPU, and an edge device, to demonstrate the superiority of HSCoNAS over recent state-of-the-art approaches.
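The core loop of a latency-constrained evolutionary architecture search can be sketched as below. Everything here is a hypothetical stand-in for the paper's components: the per-layer latency table plays the role of the hardware performance model, the accuracy proxy replaces trained-network evaluation, and the operator set and budget are invented for illustration.

```python
import random

# Hypothetical per-layer latency and accuracy-proxy tables (made-up numbers)
# standing in for the paper's hardware performance model and evaluator.
LATENCY_MS = {"conv3x3": 2.0, "conv5x5": 3.5, "skip": 0.1}
ACC_PROXY  = {"conv3x3": 1.0, "conv5x5": 1.4, "skip": 0.2}
OPS = list(LATENCY_MS)

def predict_latency(arch):
    """Sum table entries instead of measuring on-device."""
    return sum(LATENCY_MS[op] for op in arch)

def fitness(arch, budget_ms):
    """Reward accuracy proxy; reject architectures over the latency budget."""
    if predict_latency(arch) > budget_ms:
        return -1.0
    return sum(ACC_PROXY[op] for op in arch)

def evolve(depth=8, budget_ms=20.0, pop=20, gens=30, seed=0):
    rng = random.Random(seed)
    population = [[rng.choice(OPS) for _ in range(depth)] for _ in range(pop)]
    for _ in range(gens):
        # Keep the fitter half as parents, mutate each to produce a child.
        population.sort(key=lambda a: fitness(a, budget_ms), reverse=True)
        parents = population[: pop // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(depth)] = rng.choice(OPS)  # mutate one layer
            children.append(child)
        population = parents + children
    return max(population, key=lambda a: fitness(a, budget_ms))

best = evolve()
print(best, predict_latency(best))
```

The constraint is enforced through the fitness function: an architecture whose predicted latency exceeds the budget can never outrank a feasible one, so the search converges towards accurate networks that still meet the target.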