IP3_1 Interactive Presentations

Date: Tuesday, 02 February 2021
Time: 18:30 - 19:00 CET
Virtual Conference Room: https://virtual21.date-conference.com/meetings/virtual/jpCmxWZZBBXmFoAEm

Interactive Presentations run simultaneously during a 30-minute slot. Additionally, each IP paper is briefly introduced in a one-minute presentation in the corresponding regular session.

IP3_1.1 SYNTHESIS OF SI CIRCUITS FROM BURST-MODE SPECIFICATIONS
Speaker:
Alex Chan, Newcastle University, GB
Authors:
Alex Chan1, Danil Sokolov1, Victor Khomenko1, David Lloyd2 and Alex Yakovlev1
1Newcastle University, GB; 2Dialog Semiconductor, GB
Abstract
In this paper, we present a new workflow based on the conversion of Extended Burst-Mode (XBM) specifications to Signal Transition Graphs (STGs). While XBMs offer a simple design entry for specifying asynchronous circuits, they cannot be synthesised into speed-independent (SI) circuits, due to the 'burst mode' timing assumption inherent in the model. Furthermore, XBM synthesis tools are no longer supported, and there are no dedicated tools for formal verification of XBMs. Our approach addresses these issues by giving XBM specifications access to the sophisticated synthesis and verification tools available for STGs, as well as the possibility of synthesising SI circuits. Experimental results show that our translation increases the model size only linearly and that our workflow achieves a much improved synthesis success rate, with a 33% average reduction in the literal count.
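The sketch below is only an illustration of the general idea behind such a translation, not the authors' actual workflow: one burst-mode transition (an input burst causing an output burst) is expanded into STG signal-transition events with causal arcs from every input event to every output event, while events within a burst remain concurrent. All names are hypothetical, and the real conversion described in the paper handles many more features (conditionals, directed don't-cares, consistency checking).

```python
# Toy translation of a single Extended Burst-Mode (XBM) state transition
# into Signal Transition Graph (STG) events and causal arcs.
# Illustrative only; names and structure are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class Event:
    signal: str   # signal name, e.g. "req"
    edge: str     # "+" for a rising edge, "-" for a falling edge


def xbm_transition_to_stg(input_burst, output_burst):
    """Expand one XBM transition (input burst -> output burst) into STG
    events plus causal arcs: every input event precedes every output
    event; events within the same burst stay mutually concurrent."""
    in_events = [Event(s, e) for s, e in input_burst]
    out_events = [Event(s, e) for s, e in output_burst]
    arcs = [(i, o) for i in in_events for o in out_events]
    return in_events + out_events, arcs


# Example: input burst {a+, b+} causes output burst {c+}
events, arcs = xbm_transition_to_stg([("a", "+"), ("b", "+")], [("c", "+")])
for src, dst in arcs:
    print(f"{src.signal}{src.edge} -> {dst.signal}{dst.edge}")
```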
IP3_1.2 LOW-LATENCY ASYNCHRONOUS LOGIC DESIGN FOR INFERENCE AT THE EDGE
Speaker:
Adrian Wheeldon, Newcastle University, GB
Authors:
Adrian Wheeldon1, Alex Yakovlev1, Rishad Shafik1 and Jordan Morris2
1Newcastle University, GB; 2ARM Ltd, Newcastle University, GB
Abstract
Modern internet of things (IoT) devices run machine learning inference on sensed data on-device rather than offloading it to the cloud. Commonly known as inference at the edge, this brings many benefits to users, including personalization and security. However, such applications demand high energy efficiency and robustness. In this paper we propose a method for reducing the area and power overhead of self-timed, early-propagative asynchronous inference circuits designed using the principles of learning automata. Thanks to their natural resilience in both timing and the underlying logic, the circuits tolerate variations in environment and supply voltage whilst enabling the lowest possible latency. Our method is exemplified through an inference datapath for a low power machine learning application. The circuit builds on the Tsetlin machine algorithm, further enhancing its energy efficiency. The average latency of the proposed circuit is reduced by 10x compared with the synchronous implementation whilst maintaining a similar area. The robustness of the proposed circuit is demonstrated through post-synthesis simulation with supply voltages from 0.25 V to 1.2 V. Functional correctness is maintained and latency scales with gate delay as voltage is decreased.
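For readers unfamiliar with the algorithm the datapath implements, the following is a minimal software model of the standard Tsetlin machine inference step: clauses are conjunctions of included literals, positive-polarity clauses vote for the class and negative-polarity clauses vote against it, and the sign of the vote sum decides the output. This is a sketch of the published algorithm only; the variable names are hypothetical and nothing here reflects the authors' asynchronous circuit implementation.

```python
# Minimal software model of Tsetlin machine inference (classification step).
# Illustrative only; not the authors' hardware datapath.

def clause_output(included_literals, x):
    """A clause is a conjunction (AND) of its included literals.
    `included_literals` is a list of (feature_index, negated) pairs;
    `x` is the Boolean input vector."""
    return all((not x[i]) if neg else x[i] for i, neg in included_literals)


def tsetlin_infer(pos_clauses, neg_clauses, x):
    """Majority vote: positive-polarity clauses vote for the class,
    negative-polarity clauses vote against; the sign decides."""
    votes = sum(clause_output(c, x) for c in pos_clauses) \
          - sum(clause_output(c, x) for c in neg_clauses)
    return 1 if votes >= 0 else 0


# Toy example with two features; clauses would normally be learnt,
# here they are hard-coded for illustration.
pos = [[(0, False), (1, True)]]   # clause: x0 AND NOT x1
neg = [[(1, False)]]              # clause: x1
print(tsetlin_infer(pos, neg, [True, False]))  # -> 1
```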