DATE 2022 fully virtual event

DATE 2022: fully virtual event -- ONSITE days moved ONLINE

We had been planning a special format for DATE 2022, aiming to bring the community back together after two online editions and trying to balance the uncertainty of the situation with the desire to be partially back in person. A special program had been organized to start the conference with two in-person days, full of outstanding talks and opportunities to meet and chat.
However, the current situation of COVID-19 infections across Europe and the consequent travel and quarantine restrictions adopted by governments, companies and institutions strongly affect both health concerns and travel opportunities, for speakers as well as attendees.
The in-person experience remains a fundamental aspect of any conference, and of DATE in particular, with its many networking moments and social activities; however, the safety of the community is once more the priority. Therefore, after thoughtful discussion, the DATE Steering Committee has decided to turn DATE 2022 into a completely virtual event, moving the program of the first two days online as well.

Although there is no way to mitigate the disappointment of not being able to meet in person, we trust that the rich and interesting program will bring us together online to comment on and contribute to the exciting talks and conversations with the speakers and authors of the full DATE 2022 program.

11.8.1 Automating Tiny Neural Network Design with MCU Deploy-ability in the Loop

Start: 09:30
End: 10:20
Speaker: Danilo Pau, STMicroelectronics, Italy

Tiny Machine Learning (TinyML) is a growing and widely popular community focused on the deployment of Deep Learning (DL) models on microcontrollers (MCUs). To run a trained DL model on an MCU, developers must have the skills to handcraft network topologies and the associated hyperparameters to fit a wide range of hardware requirements, including operating frequency, embedded SRAM and embedded Flash memory, along with the corresponding power consumption requirements.
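
As a rough illustration of the fit check this implies, the sketch below estimates whether a small, int8-quantized network fits given Flash and SRAM budgets. The layer list, budget values and helper names are illustrative assumptions for this page only, not part of any specific vendor toolchain.

    # Minimal sketch: checking whether a tiny CNN fits an MCU's memory budgets.
    # Layer list, budgets and helper names are illustrative assumptions.

    # Each layer: (name, number_of_parameters, activation_output_elements)
    LAYERS = [
        ("conv2d_3x3x8",   3 * 3 * 1 * 8 + 8,      32 * 32 * 8),
        ("conv2d_3x3x16",  3 * 3 * 8 * 16 + 16,    16 * 16 * 16),
        ("dense_10",       16 * 16 * 16 * 10 + 10, 10),
    ]

    FLASH_BUDGET_BYTES = 128 * 1024   # weights stored in embedded Flash
    SRAM_BUDGET_BYTES  = 32 * 1024    # activations kept in embedded SRAM
    BYTES_PER_VALUE    = 1            # assume int8-quantized weights and activations

    def fits_mcu(layers, flash_budget, sram_budget):
        """Rough deployability check: total weights vs. Flash, peak activation vs. SRAM."""
        flash_needed = sum(params for _, params, _ in layers) * BYTES_PER_VALUE
        # Simplification: peak SRAM approximated by the largest single activation tensor.
        sram_needed = max(acts for _, _, acts in layers) * BYTES_PER_VALUE
        ok = flash_needed <= flash_budget and sram_needed <= sram_budget
        return ok, flash_needed, sram_needed

    ok, flash, sram = fits_mcu(LAYERS, FLASH_BUDGET_BYTES, SRAM_BUDGET_BYTES)
    print(f"Flash: {flash} B, SRAM: {sram} B, deployable: {ok}")

In a hand-crafted flow, it is the developer who iterates on the topology until such a check passes, which is exactly the loop the talk proposes to automate.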

Unfortunately, a hand-crafted design methodology poses multiple challenges: 1) AI and embedded developers have different, orthogonal skill sets, which do not meet during the development of AI applications until validation in an operational environment; 2) tools for automated network design often assume virtually unlimited resources (deep networks are typically trained on cloud- or GPU-based systems); 3) the time-to-market from conception to realization of an AI system is usually quite long. Consequently, mass-market adoption of AI technologies at the deep edge is jeopardized.

Our solution is based on Sequential Model-Based Optimization (SMBO) – also known as Bayesian Optimization (BO) – which is the standard methodology for Automated Machine Learning (AutoML) and Neural Architecture Search (NAS). Although AutoML and NAS are successfully applied on large GPU/cloud platforms (e.g., AutoML/NAS tools are commercialized by Google, Amazon and Microsoft), their application remains an issue for tiny devices such as MCUs. Our approach, instead, includes “deployability” constraints – related to the hardware resources of the MCUs – in the hyperparameter optimization process, leading to this new “AutoTinyML” perspective.
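
To make the idea of deployability constraints concrete, the following sketch folds rough Flash/SRAM estimates into a hyperparameter search loop and rejects candidates that would not fit the target MCU. Random sampling stands in for the SMBO/BO proposal step, and the search space, budgets, footprint model and scoring are illustrative assumptions, not the actual AutoTinyML tool presented in the talk.

    import random

    # Hypothetical search space for a tiny conv + dense network on 32x32x1 inputs.
    SEARCH_SPACE = {
        "conv_filters": [4, 8, 16, 32],
        "dense_units":  [16, 32, 64, 128],
    }

    FLASH_BUDGET = 64 * 1024   # bytes available for int8 weights
    SRAM_BUDGET  = 16 * 1024   # bytes available for activations

    def estimate_footprint(cfg):
        """Very rough int8 footprint estimate: one 3x3 conv, 2x2 pooling, one dense layer."""
        conv_params = 3 * 3 * 1 * cfg["conv_filters"] + cfg["conv_filters"]
        feat = 16 * 16 * cfg["conv_filters"]          # feature map size after pooling
        dense_params = feat * cfg["dense_units"] + cfg["dense_units"]
        flash = conv_params + dense_params            # 1 byte per int8 weight
        sram = max(32 * 32 * cfg["conv_filters"], feat, cfg["dense_units"])
        return flash, sram

    def deployable(cfg):
        flash, sram = estimate_footprint(cfg)
        return flash <= FLASH_BUDGET and sram <= SRAM_BUDGET

    def evaluate(cfg):
        """Stand-in for training/validating the candidate network (returns a mock score)."""
        return random.random()

    best = None
    for _ in range(50):
        # In the real approach an SMBO/BO surrogate proposes this candidate;
        # random sampling is used here only to keep the sketch self-contained.
        cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        if not deployable(cfg):
            continue                                  # discard non-deployable candidates
        score = evaluate(cfg)
        if best is None or score > best[0]:
            best = (score, cfg)

    print("Best deployable configuration:", best)

The key design point is that the hardware check runs before any costly training, so the optimizer only spends its budget on configurations that can actually be deployed on the MCU.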

This talk will present our approach, along with its pros and cons with respect to multi-objective optimization (usually adopted to reduce resource usage in the cloud). A set of relevant results will be presented and discussed, providing an overview of the open challenges and perspectives in the AutoTinyML field.