12.2 Searching for corner cases, bugs and security vulnerabilities


Date: Thursday 22 March 2018
Time: 16:00 - 17:30
Location / Room: Konf. 6

Chair:
Daniel Grosse, University of Bremen / DFKI, DE

Co-Chair:
Jaan Raik, Tallinn University of Technology, EE

This session presents innovative solutions for identifying challenging situations in RTL designs. The first work presents a fully automated and scalable approach for the concolic generation of directed tests on RTL models. The second work uses historical debugging data to predict future bug locations. The last presentation automatically mines formal assertions to highlight security vulnerabilities in IP firmware.

Time    Label    Presentation Title / Authors
16:00  12.2.1  DIRECTED TEST GENERATION USING CONCOLIC TESTING ON RTL MODELS
Speaker:
Jonathan Cruz, University of Florida, US
Authors:
Alif Ahmed, Farimah Farahmandi and Prabhat Mishra, University of Florida, US
Abstract
Functional validation is one of the most time-consuming steps in System-on-Chip (SoC) design methodology. In today's industrial practice, simulating designs using billions of random or constrained-random tests can lead to high functional coverage. However, it is hard to cover the remaining small fraction of corner cases and rare functional scenarios. While formal methods are promising in such cases, it is infeasible to apply them to large designs. In this paper, we propose a fully automated and scalable approach for generating directed tests using concolic testing of RTL models. While the application of concolic testing to hardware designs has shown some promising results, existing approaches are tuned for improving overall coverage rather than covering a specific target. We developed a Control Flow Graph (CFG) assisted directed test generation method that can efficiently generate a test to activate a given target. Our experimental results demonstrate that our approach is both efficient and scalable compared to state-of-the-art test generation methods.
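The core concolic idea behind such directed test generation can be illustrated with a toy sketch (this is a hypothetical illustration, not the paper's tool): run the design concretely, record the branch constraints along the executed path, then solve the constraints guarding the uncovered target to derive an input that activates it. Here a brute-force search stands in for the SMT solver a real tool would use.

```python
# Hypothetical concolic-testing sketch: concrete execution records path
# constraints; solving the constraints that guard the target yields a
# directed test. All names and the toy 'DUT' are invented for illustration.

def dut_step(a, b, trace):
    """Toy 'RTL' decision structure; trace records (constraint, taken)."""
    trace.append(("a > 10", a > 10))
    if a > 10:
        trace.append(("b == a - 3", b == a - 3))
        if b == a - 3:
            return "target"      # rare corner case we want to activate
    return "miss"

def solve(constraints):
    """Stand-in for an SMT solver: brute-force a small input space."""
    for a in range(64):
        for b in range(64):
            if all(c(a, b) for c in constraints):
                return a, b
    return None

# A random seed input takes the 'miss' path...
trace = []
assert dut_step(1, 2, trace) == "miss"
# ...so force both guards on the path to the target to be true and solve.
inp = solve([lambda a, b: a > 10, lambda a, b: b == a - 3])
assert inp is not None and dut_step(*inp, []) == "target"
```

A real concolic engine would gather the constraints automatically from the RTL semantics and use the CFG to pick which branch conditions to negate, rather than enumerating inputs.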

Download Paper (PDF; Only available from the DATE venue WiFi)
16:30  12.2.2  SUSPECT SET PREDICTION IN RTL BUG HUNTING
Speaker:
Neil Veira, University of Toronto, CA
Authors:
Neil Veira, Zissis Poulos and Andreas Veneris, University of Toronto, CA
Abstract
We propose a framework for predicting erroneous design components from partially observed solution sets that are found through automated debugging tools. The proposed method involves learning design component dependencies from historical debugging data and representing these dependencies by means of a probabilistic graph. Using this representation, one can run a debugging tool non-exhaustively, obtain a partial set of potentially erroneous components, and then predict the remaining components by applying a cost-effective belief propagation pass. The method can reduce debugging runtime across multiple debugging sessions by 15x on average while achieving 91% average prediction accuracy.
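The prediction step can be sketched as a single propagation pass over a dependency graph (a minimal illustration with invented components and edge weights, not the paper's learned model): suspicion flows from the partially observed suspect set to neighbouring components, and components whose combined belief crosses a threshold are predicted as additional suspects.

```python
# Hypothetical belief-propagation sketch: edges[u][v] is the (invented)
# probability that v is also erroneous given that u is a known suspect.
edges = {
    "alu":     {"decoder": 0.8, "regfile": 0.3},
    "decoder": {"alu": 0.7, "fetch": 0.6},
    "fetch":   {"decoder": 0.5},
    "regfile": {"alu": 0.2},
}

def predict(observed, threshold=0.5):
    """One propagation pass from the partially observed suspect set."""
    belief = {v: 0.0 for v in edges}
    for u in observed:
        belief[u] = 1.0
    for u in observed:                       # single pass, no iteration
        for v, w in edges[u].items():
            if v not in observed:
                # combine independent evidence: 1 - prod(1 - w)
                belief[v] = 1.0 - (1.0 - belief[v]) * (1.0 - w)
    return {v for v, b in belief.items() if b >= threshold}

print(predict({"alu"}))   # 'decoder' (belief 0.8) joins the suspect set
```

A real implementation would iterate message passing to convergence and learn the edge weights from historical debugging sessions, as the abstract describes.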

Download Paper (PDF; Only available from the DATE venue WiFi)
17:00  12.2.3  SYMBOLIC ASSERTION MINING FOR SECURITY VALIDATION
Speaker:
Alessandro Danese, University of Verona, IT
Authors:
Alessandro Danese¹, Valeria Bertacco² and Graziano Pravadelli¹
¹University of Verona, IT; ²University of Michigan, US
Abstract
This paper presents DOVE, a validation framework to identify points of vulnerability inside IP firmware. The framework relies on symbolic simulation of the firmware to search for corner cases in its computational paths that may hide vulnerabilities. DOVE then automatically mines a compact set of formal assertions representing these unlikely paths to guide the analysis of the verification engineers. Experimental results on two case studies show the effectiveness of the generated assertions in pinpointing actual vulnerabilities and the efficiency of the approach in terms of execution time.
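The mining idea can be sketched as follows (a hypothetical illustration with an invented toy firmware, not DOVE itself): enumerate the computational paths over a small symbolic input space, rank paths by how few inputs reach them, and emit an assertion for each rarely reachable path so an engineer can inspect it for vulnerabilities.

```python
# Hypothetical assertion-mining sketch: exhaustive enumeration stands in
# for symbolic simulation; paths hit by very few inputs become assertions.
from itertools import product

def firmware(cmd, key):
    """Toy firmware routine: returns a label for the executed path."""
    if cmd == 0x7F:                     # undocumented debug command
        if key == 0xA5:
            return "debug_unlock"       # reachable by a single input
        return "debug_denied"
    return "normal"

# 'Simulate' every input in the small symbolic space, grouping by path.
paths = {}
for cmd, key in product(range(256), range(256)):
    paths.setdefault(firmware(cmd, key), []).append((cmd, key))

# Mine an assertion for every path reached by very few inputs.
total = 256 * 256
assertions = [
    f"assert never reach '{p}'  # hit by {len(hits)}/{total} inputs"
    for p, hits in paths.items() if len(hits) <= 1
]
print(assertions)   # flags the single-input 'debug_unlock' path
```

A real framework would derive path conditions symbolically rather than by enumeration, and would express the mined properties as formal assertions (e.g., temporal properties) over the firmware's state.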

Download Paper (PDF; Only available from the DATE venue WiFi)
17:30  End of session