7.7 Toward Correct and Secure Embedded Systems


Date: Wednesday, March 27, 2019
Time: 14:30 - 16:00
Location / Room: Room 7

Todd Austin, University of Michigan, US

Ylies Falcone, University Grenoble Alpes, FR

This session will explore novel techniques for developing embedded systems with strong assurances of correctness and security. The correctness topics explored include verification of execution deadlines and correctness enforcement in the field. The security topics to be explored include efficient behavioral analysis of malware, improved protection of machine learning algorithms from adversarial attacks, and zero-footprint hardware Trojan attacks.

Martin Ring1, Fritjof Bornebusch1, Christoph Lüth2, Robert Wille3 and Rolf Drechsler4
1DFKI, DE; 2Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) and University of Bremen, DE; 3Johannes Kepler University Linz, AT; 4University of Bremen/DFKI GmbH, DE
This paper investigates the benefits of verifying embedded systems after deployment. We argue that one reason for the huge state spaces of contemporary embedded and cyber-physical systems is the large variety of operating contexts, which are unknown at design time. Once the system is deployed, these contexts become observable, fixing several variables. As a result, the search space shrinks dramatically, making verification feasible even with the limited resources of a deployed system. In this paper, we propose a design and verification flow that exploits this observation. We show how specifications are transferred to the deployed system and verified there. Evaluations on a number of case studies demonstrate the reduction of the search space, and we sketch how the proposed approach can be employed in practice.
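The state-space reduction described above can be illustrated with a toy sketch (my own illustration, not the paper's actual flow): once deployment makes context variables observable, the product of the remaining variable domains shrinks multiplicatively. The function name and state encoding below are hypothetical.

```python
from itertools import product

def enumerate_states(domains, observed=None):
    """Enumerate the states of a system whose variables range over the
    given domains; 'observed' pins context variables (index -> value)
    whose values became known after deployment."""
    observed = observed or {}
    doms = [[observed[i]] if i in observed else list(d)
            for i, d in enumerate(domains)]
    return list(product(*doms))

# Five 4-valued variables: 4**5 = 1024 states to explore at design time.
design_time = enumerate_states([range(4)] * 5)
# Observing two context variables in the field leaves only 4**3 = 64.
deployed = enumerate_states([range(4)] * 5, observed={0: 1, 1: 2})
```

In a real flow the reduction would be realized symbolically (e.g., by constraining a model checker), but the multiplicative effect on the search space is the same.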
Kuan-Hsun Chen, TU Dortmund, DE
Kuan-Hsun Chen1, Niklas Ueter2, Georg von der Brüggen2 and Jian-Jia Chen3
1Technical University of Dortmund, DE; 2TU Dortmund University, DE; 3TU Dortmund, DE
In soft real-time systems, applications can tolerate rare deadline misses. Probabilistic arguments and analyses are therefore applicable in the timing analysis of this class of systems, as demonstrated in many existing studies. Convolution-based analyses derive tight deadline-miss probabilities but suffer from high time complexity. Among the analytical approaches, which run significantly faster than the convolution-based ones, the Chernoff bounds provide the tightest results. In this paper, we show that calculating the deadline-miss probability using Chernoff bounds can be solved by considering an equivalent convex optimization problem. This allows us, on the one hand, to decrease the runtime of the Chernoff bounds and, on the other hand, to ensure a tighter approximation, since a larger variable space can be searched more efficiently, i.e., by using binary search techniques over a larger area instead of a sequential search over a smaller one. We evaluate this approach on synthesized task sets. Our approach is shown to be computationally efficient for large task systems, while experiments suggest reasonable approximation quality compared to an exact analysis.
Faiq Khalid, Department of Computer Engineering, TU Wien, AT
Faiq Khalid1, Muhammad Abdullah Hanif2, Semeen Rehman3, Junaid Qadir4 and Muhammad Shafique5
1Department of Computer Engineering, Vienna University of Technology, AT; 2Institute of Computer Engineering, Vienna University of Technology, AT; 3TU Wien, AT; 4Information Technology University, Lahore, PK; 5Vienna University of Technology (TU Wien), AT
Deep neural network (DNN)-based machine learning (ML) algorithms have recently emerged as the leading ML paradigm, particularly for classification, due to their superior capability of learning efficiently from large datasets. The discovery of a number of well-known attacks such as dataset poisoning, adversarial examples, and network manipulation (through the addition of malicious nodes) has, however, put the spotlight squarely on the lack of security in DNN-based ML systems. In particular, malicious actors can use these well-known attacks to cause random or targeted misclassification, or to change the prediction confidence, by only slightly but systematically manipulating the environmental parameters, the inference data, or the data acquisition block. Most prior adversarial attacks have, however, not accounted for the pre-processing noise filters commonly integrated with the ML inference module. Our contribution in this work is to show that this is a major omission, since these noise filters can render ineffective the majority of existing attacks, which rely essentially on introducing adversarial noise. We also extend the state of the art by proposing a novel pre-processing noise Filter-aware Adversarial ML attack, called FAdeML. To demonstrate the effectiveness of the proposed methodology, we generate an adversarial attack image by exploiting the "VGGNet" DNN trained on the "German Traffic Sign Recognition Benchmark (GTSRB)" dataset which, despite containing no visible noise, causes a classifier to misclassify even in the presence of pre-processing noise filters. We will make all contributions open-source at http://LinkHiddenForBlindReview to facilitate reproducible research and further R&D.
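A quick way to see why filter-unaware attacks can fail is that an impulsive adversarial perturbation is simply removed by a standard smoothing stage in the pre-processing pipeline. The toy 3x3 median filter below is my own pure-Python illustration (a real pipeline would use an image-processing library); it restores an isolated perturbed pixel to its neighborhood value.

```python
def median_filter3x3(img):
    """Apply a 3x3 median filter to a grayscale image given as a list
    of rows; border pixels are passed through unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # copy, keep input intact
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[yy][xx]
                            for yy in (y - 1, y, y + 1)
                            for xx in (x - 1, x, x + 1))
            out[y][x] = window[4]          # median of the 9 values
    return out

# A single bright "adversarial" pixel in a flat 5x5 patch ...
patch = [[10] * 5 for _ in range(5)]
patch[2][2] = 200
# ... is erased by the filter before the classifier ever sees it.
filtered = median_filter3x3(patch)
```

A filter-aware attack, by contrast, must craft perturbations that survive this smoothing, which is precisely the property the FAdeML methodology targets.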
Hyunyoung Oh, Seoul National University, KR
Hyunyoung Oh1, Hayoon Yi1, Hyeokjun Choe1, Yeongpil Cho2, Sungroh Yoon1 and Yunheung Paek3
1Seoul National University, KR; 2Soongsil University, KR; 3Dept. of Electrical and Computer Engineering and Inter-University Semiconductor Research Center (ISRC), Seoul National University, KR
As the age of IoT approaches, security is becoming ever more important for embedded devices. Since attacks on these devices may occur at any time and in unexpected ways, defense systems based on a fixed set of rules are easily subverted by such unexpected, unknown attacks. Learning-based anomaly detection is a promising technique that may prevent new, unknown zero-day attacks by leveraging the capability of machine learning (ML) to learn the intricate true nature of software hidden within raw information. This paper introduces our recent work on an MPSoC platform, called RTAD, which can efficiently support in hardware various ML models that detect anomalous behaviors in real time. In our work, we assume that the ML models are trained on runtime branch information, since a sequence of branches is widely regarded as a record of control-flow transfers during program execution. In fact, numerous studies examine various types of branches, including system calls and general branches, to infer (or detect) anomalies in branch behavior that may be induced by diverse attacks. Our goal of real-time anomalous branch behavior inference poses two challenges for the development of RTAD. One is that RTAD must collect and transfer, in a timely fashion, a sequence of branches as the input to the ML model. The other is that RTAD must be able to promptly process the delivered branch data with the ML model. To tackle these challenges, we have implemented two core hardware components in RTAD: an input generation module and a GPU-inspired ML processing engine. According to our empirical results, RTAD enables various ML models to detect an anomaly almost instantly after a victim program behaves aberrantly as the result of attacks injected into the system.
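One classic instantiation of branch-sequence anomaly inference, in the spirit of the system-call n-gram detectors the abstract alludes to (and not necessarily one of the models RTAD runs), treats any n-gram of branch targets unseen during training as suspicious:

```python
class BranchNgramDetector:
    """Flags branch traces containing n-grams unseen during training.
    Branch events are abstracted here as integers (e.g., target IDs)."""

    def __init__(self, n=3):
        self.n = n
        self.known = set()

    def train(self, trace):
        """Record all n-grams of a known-benign execution trace."""
        for i in range(len(trace) - self.n + 1):
            self.known.add(tuple(trace[i:i + self.n]))

    def anomaly_score(self, trace):
        """Fraction of n-grams in 'trace' never seen during training."""
        grams = [tuple(trace[i:i + self.n])
                 for i in range(len(trace) - self.n + 1)]
        if not grams:
            return 0.0
        unseen = sum(1 for g in grams if g not in self.known)
        return unseen / len(grams)
```

The appeal of such models for a hardware engine is that both training and scoring reduce to set lookups over fixed-width tuples, which map naturally onto streaming branch data.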
Imran Abbasi, NUST, PK
Imran Abbasi1, Faiq Khalid2, Semeen Rehman3, Awais Kamboh1, Axel Jantsch3, Siddharth Garg4 and Muhammad Shafique5
1NUST, PK; 2Department of Computer Engineering, Vienna University of Technology, AT; 3TU Wien, AT; 4University of Waterloo, CA; 5Vienna University of Technology (TU Wien), AT
Conventional Hardware Trojan (HT) detection techniques are based on the validation of integrated circuits to determine changes in their functionality, and on non-invasive side-channel analysis to identify variations in their physical parameters. In particular, almost all proposed side-channel power-based detection techniques presume that HTs are detectable because they only add gates to the original circuit, with a noticeable increase in power consumption. This paper demonstrates how undetectable HTs can be realized with zero impact on the power and area footprint of the original circuit. To this end, we propose the novel concept of TrojanZero and a systematic methodology for designing undetectable HTs in circuits, which conceals their existence through gate-level modifications. The crux is to salvage the cost of the HT from the original circuit without being detected by standard testing techniques. Our methodology leverages knowledge of the transition probabilities of the circuit nodes to identify and safely remove expendable gates, and embeds the malicious circuitry at appropriate locations with zero power and area overhead compared to the original circuit. We synthesize these designs, embed them in multiple ISCAS85 benchmarks using a 65nm technology library, and perform a comprehensive power and area characterization. Our experimental results demonstrate that the proposed TrojanZero designs are undetectable by state-of-the-art power-based detection methods.
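The transition-probability analysis such methodologies leverage is typically computed, under an input-independence assumption, from static signal probabilities: a node that is 1 with probability p toggles between two independent clock cycles with probability 2p(1 − p). The minimal sketch below is my own illustration (the gate set and function names are not from the paper's tool flow):

```python
def signal_prob(gate, in_probs):
    """P(output = 1) for a gate, assuming independent inputs with the
    given probabilities of being 1."""
    if gate == "NOT":
        return 1.0 - in_probs[0]
    if gate == "AND":
        p = 1.0
        for q in in_probs:
            p *= q                      # all inputs must be 1
        return p
    if gate == "OR":
        p = 1.0
        for q in in_probs:
            p *= (1.0 - q)              # all inputs must be 0
        return 1.0 - p
    raise ValueError("unsupported gate: " + gate)

def transition_prob(p_one):
    """Probability a node toggles between two independent cycles."""
    return 2.0 * p_one * (1.0 - p_one)
```

Nodes with low transition probability contribute little to dynamic power, which is why identifying them is central both to stealthy Trojan placement and to finding expendable gates whose removal offsets the Trojan's cost.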
Laurence Pierre, Univ. Grenoble Alpes, FR
Enzo Brignon1 and Laurence Pierre2
1TIMA Lab (Univ. Grenoble Alpes, CNRS, Grenoble INP), FR; 2Univ. Grenoble Alpes, TIMA Lab., FR
Verifying the correctness and reliability of C or C++ embedded software is a crucial issue. To ease this verification process, we advocate runtime assertion-based verification of formal properties. Such logic and temporal properties can be specified using the IEEE standard PSL (Property Specification Language) and automatically translated into software assertion checkers. A major issue is the instrumentation of the embedded program so that those assertion checkers are triggered upon specific events during execution. This paper presents an automatic instrumentation solution for object files that enables such event-driven property evaluation. It also reports experimental results for different kinds of applications and properties.
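To give a flavor of what such a generated assertion checker does at runtime, here is a hand-written monitor for a PSL-style obligation "always (req -> eventually! ack)". This is purely illustrative: the paper's flow translates PSL into checkers automatically and hooks them to instrumented object files, whereas this sketch is driven by explicit event strings.

```python
class ReqAckMonitor:
    """Runtime checker for the PSL-style property
    'always (req -> eventually! ack)': every req observed during
    execution must be matched by a later ack before the trace ends."""

    def __init__(self):
        self.pending = 0                 # reqs still awaiting an ack

    def on_event(self, event):
        """Called by the instrumentation hook on each relevant event."""
        if event == "req":
            self.pending += 1
        elif event == "ack" and self.pending > 0:
            self.pending -= 1

    def at_end(self):
        """Final verdict when execution terminates: no open obligation."""
        return self.pending == 0
```

The strong (eventually!) operator is what forces a verdict at the end of the trace; a weak variant would accept a trace ending with an outstanding request.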
16:00 End of session
Coffee Break in Exhibition Area

Coffee Breaks in the Exhibition Area

On all conference days (Tuesday to Thursday), coffee and tea will be served in the exhibition area during the coffee breaks at the times listed below.

Lunch Breaks (Lunch Area)

On all conference days (Tuesday to Thursday), a seated lunch (lunch buffet) will be offered in the "Lunch Area" to fully registered conference delegates only. There will be badge control at the entrance to the lunch area.

Tuesday, March 26, 2019

  • Coffee Break 10:30 - 11:30
  • Lunch Break 13:00 - 14:30
  • Awards Presentation and Keynote Lecture in "TBD" 13:50 - 14:20
  • Coffee Break 16:00 - 17:00

Wednesday, March 27, 2019

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:30
  • Awards Presentation and Keynote Lecture in "TBD" 13:30 - 14:20
  • Coffee Break 16:00 - 17:00

Thursday, March 28, 2019

  • Coffee Break 10:00 - 11:00
  • Lunch Break 12:30 - 14:00
  • Keynote Lecture in "TBD" 13:20 - 13:50
  • Coffee Break 15:30 - 16:00