W03 2nd Workshop on Model-Implementation Fidelity, MiFi’16



Agenda

TimeLabelSession
08:30W03.1Opening session

Chair:
Christian Fabre, CEA-Leti, Grenoble, FR

08:40W03.2Session 1
08:40W03.2.1Technology trends and their impact on HPC benchmarks
Xavier Vigouroux, Atos, FR

For 20 years, according to the TOP500 list, High Performance Computing has steadily increased its performance and is now heading for the exaflop (10^18 double-precision floating-point operations per second). This increase hides drastic evolutions in architecture, moving from vector machines to GPGPUs. Today, while the hardware sustains the pace, applications do not easily benefit from it: code has to be rewritten. Furthermore, processors make on-the-fly decisions that affect performance; an obvious example is "turbo" frequency scaling. As a consequence, it is becoming more and more difficult to predict the performance of an application on a future architecture.

In this talk, Xavier will introduce HPC trends and requirements, then detail the impact of these evolutions on application performance. Finally, he will outline the kinds of tools and models he would need to predict performance on future architectures.

Speaker's bio: After a Ph.D. in distributed computing from the École normale supérieure de Lyon, Xavier Vigouroux worked for several major companies in different positions. He has now been with Bull for 10 years. He led the HPC benchmarking team for the first five years, was then in charge of the "Education and Research" market for HPC at Bull, and now manages Bull's "Center for Excellence in Parallel Programming" (CEPP), whose activities focus on tackling HPC application performance issues.

09:30W03.2.2Fidelity of native-based performance models for Design Space Exploration
Fernando Herrera, University of Cantabria, ES
Eugenio Villar and Fernando Herrera, University of Cantabria, ES

The use of fast performance-assessment technologies is crucial for bounding the cost of designing efficient embedded systems. Accuracy is a prime concern, since performance models have to be sufficiently faithful to the actual implementations they reflect; in a design space exploration (DSE) context, "sufficiently" means that the performance models must enable design decisions. This talk shows how native simulation, while providing a qualitative speed-up for DSE, can also preserve accuracy (e.g., <10% error versus binary translation in the count of simulated instructions for bare-processor modelling). This should provide the fidelity required for design space exploration in most scenarios. The generic ideas presented are supported by experiments performed with an adaptation and extension of the native simulation tool VIPPE that enables time and energy estimation for specific target processors.
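The back-annotation idea behind native simulation can be sketched in a few lines: the program runs natively on the host, while each annotated basic block also charges an estimated time and energy budget for the target processor. This is an illustrative toy, not VIPPE's actual cost model; the per-block instruction counts, CPI, clock frequency and energy-per-instruction below are invented for the example.

```python
CYCLES_PER_INSTR = 1.2   # assumed average CPI of the target core
NJ_PER_INSTR = 0.35      # assumed energy per instruction (nanojoules)
CLOCK_HZ = 600e6         # assumed target clock frequency

class Estimator:
    """Accumulates target-side cost while the code runs natively."""
    def __init__(self):
        self.instructions = 0

    def charge(self, instr_count):
        # Called at each annotated basic block with its static
        # instruction count for the target ISA (invented numbers here).
        self.instructions += instr_count

    @property
    def seconds(self):
        return self.instructions * CYCLES_PER_INSTR / CLOCK_HZ

    @property
    def nanojoules(self):
        return self.instructions * NJ_PER_INSTR

def dot_product(a, b, est):
    est.charge(4)                 # prologue block (assumed count)
    acc = 0.0
    for x, y in zip(a, b):
        est.charge(7)             # loop body block (assumed count)
        acc += x * y
    est.charge(2)                 # epilogue block (assumed count)
    return acc

est = Estimator()
result = dot_product([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], est)
print(result, est.instructions, est.seconds, est.nanojoules)
```

The functional result is computed at native speed; only the cheap `charge` calls model the target, which is where the speed-up over instruction-set simulation comes from.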

09:50W03.2.3Thoughts on the Fidelity of (Data-Flow) Models for Real-Time MPSOC Architectures
Kees Goossens, Eindhoven Univ. of Technology, NL

To guarantee that the real-time requirements of an application are met, the performance of the application running on an MPSOC must be analysed at design time. Since exhaustive simulation is impossible both practically and theoretically, the application and MPSOC must be modelled somehow. A model can be as complex as the real implementation and contain all details of the application and MPSOC, or be very simple and omit most implementation details. The question is how abstraction (the level of detail that is modelled) affects fidelity (model accuracy, i.e. the correspondence between model and implementation). Intuitively, abstracting more decreases the analysis effort, but also decreases fidelity. In this talk we discuss our experiences with modelling the CompSOC MPSOC platform using several variants of dataflow at different levels of abstraction.
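As one concrete example of the design-time analyses that dataflow models enable, the repetition vector of a synchronous dataflow (SDF) graph follows from the balance equations q[src]·prod = q[dst]·cons on every edge. A minimal generic sketch (plain SDF, not CompSOC-specific tooling):

```python
from fractions import Fraction
from math import lcm

def repetition_vector(actors, edges):
    """Solve the SDF balance equations for the smallest integer
    repetition vector.  edges: list of (src, dst, prod, cons)."""
    q = {a: None for a in actors}
    q[actors[0]] = Fraction(1)
    changed = True
    while changed:                       # propagate ratios over the graph
        changed = False
        for src, dst, p, c in edges:
            if q[src] is not None and q[dst] is None:
                q[dst] = q[src] * p / c
                changed = True
            elif q[dst] is not None and q[src] is None:
                q[src] = q[dst] * c / p
                changed = True
            elif q[src] is not None and q[dst] is not None:
                if q[src] * p != q[dst] * c:
                    raise ValueError("inconsistent rates: no valid schedule")
    if any(v is None for v in q.values()):
        raise ValueError("graph is not connected")
    scale = lcm(*(v.denominator for v in q.values()))
    return {a: int(v * scale) for a, v in q.items()}

# Producer A emits 3 tokens per firing, consumer B reads 2 per firing:
print(repetition_vector(["A", "B"], [("A", "B", 3, 2)]))
```

Here A must fire twice for every three firings of B so that token production and consumption balance (2·3 = 3·2 tokens per iteration); this consistency check is the first step before any throughput or buffer-size analysis.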

10:10W03.3Coffee break
10:30W03.4Session 2

Chair:
Eugenio Villar, University of Cantabria, ES

10:30W03.4.1A Timed-automata based Middleware for Time-critical Multicore Applications
Saddek Bensalem, Verimag, FR

Various models of computation for multi-core time-critical systems have been proposed in the literature, but there is a significant gap between these models of computation and the real-time scheduling and analysis techniques, which makes timing validation challenging. To overcome this difficulty, we represent both the models of computation and the scheduling policies as timed automata. While timed automata are traditionally used only for simulation and validation, we use them for programming. We believe that using the same formal language for the model of computation and the scheduling techniques is an important step towards closing the gap between them. Our approach is demonstrated using a publicly available toolset, an industrial application use case and a multi-core platform.
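To make the "timed automata for programming" idea concrete, here is a toy one-clock automaton for a single periodic task, executed directly in Python. The locations, guards and constants are invented for illustration and do not reproduce the talk's toolset: location `Running` has invariant x <= C and a `done` edge guarded by x == C; location `Waiting` has a `release` edge guarded by x == P that resets the clock.

```python
def run_periodic_ta(period, wcet, horizon):
    """Simulate the unique time-deterministic run of a toy timed
    automaton for one periodic task; returns (event, time) pairs."""
    assert wcet <= period, "task must fit in its period"
    loc, x, now = "Running", 0.0, 0.0
    trace = [("release", 0.0)]          # initial location, clock x = 0
    while True:
        if loc == "Running":
            now += wcet - x             # invariant x <= wcet; fire at x == wcet
            x = wcet                    # no reset on the `done` edge
            if now > horizon:
                break
            trace.append(("done", now))
            loc = "Waiting"
        else:
            now += period - x           # `release` edge guarded by x == period
            x = 0.0                     # clock reset on release
            if now > horizon:
                break
            trace.append(("release", now))
            loc = "Running"
    return trace

print(run_periodic_ta(period=5.0, wcet=2.0, horizon=12.0))
```

Because the same automaton text can drive both a model checker and this executable interpretation, the scheduled behaviour that is analysed is, by construction, the behaviour that runs, which is the gap-closing point the abstract makes.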

11:00W03.4.2Microprocessor Thermal Modelling and Validation
Giovanni Beltrame, École Polytechnique de Montréal, CA

Modern integrated circuits generate very high heat fluxes that can lead to high temperatures, degrading performance and reducing the lifetime of the device. Thermal simulation is used to prevent such issues, and many thermal models have been introduced in recent years. However, their validation is challenging: it is either based on established simulators (with reduced accuracy) or requires producing a specific test chip with several thermal sensors. We present a thermal modelling approach, with an associated methodology and measurement setup, that uses existing commercial processors to validate thermal models. We use infrared thermography and low-cost thermoelectric cooling, avoiding the issues of mineral-oil setups. We show how our approach can be used to create validated models for two thermal simulators (our own approach and a commercial tool).
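The kind of model being validated here is typically a lumped RC thermal network. A minimal single-node sketch with illustrative constants (not the talk's calibrated model): a thermal resistance R to ambient, a thermal capacitance C, and forward-Euler integration of dT/dt = (P - (T - T_amb)/R) / C.

```python
R = 2.0        # assumed junction-to-ambient thermal resistance, K/W
C = 5.0        # assumed thermal capacitance, J/K
T_AMB = 25.0   # ambient temperature, degrees C

def simulate(power_watts, t_end, dt=0.01):
    """Forward-Euler integration of a one-node RC thermal model."""
    temp = T_AMB
    for _ in range(int(t_end / dt)):
        # dT/dt = (P - (T - T_amb) / R) / C
        temp += dt * (power_watts - (temp - T_AMB) / R) / C
    return temp

# With tau = R*C = 10 s, after many time constants the node settles
# near the steady state T_amb + P*R = 25 + 10*2 = 45 degrees C:
print(round(simulate(10.0, 200.0), 2))
```

Validation against an instrumented commercial chip then amounts to fitting R and C (and, for realistic models, a whole network of such nodes) so that simulated transients match the measured infrared temperature maps.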

11:30W03.4.3Accurate environment model for obstacle detection using multiple noisy range sensors and implementation on industrial targets
Julien Mottin, CEA LETI, FR
Julien Mottin1, Diego Puschini2 and Tiana Rakotovao1
1CEA LETI, FR; 2CEA, LETI, MINATEC, FR

Several robotic applications involve motion in complex unknown environments. Occupancy grids model the surrounding obstacles through a partitioned spatial representation of the environment: a set of cells is filled iteratively with the interpretation of sensor information, accounting for sensor uncertainty through probabilistic models. Even though occupancy grids have been widely used in the state of the art, the relation between cell size and the inverse sensor model is usually neglected. In this paper, we propose a novel methodology to build inverse probabilistic models for single-target sensors. Since no limitation beyond the original formulation is introduced, our contribution propagates the original precision of the sensor to the inverse model. In addition, it enables a proper choice of cell size. Our experiments apply the approach to a LIDAR sensor and a time-of-flight camera, evaluating the grid resolution and the impact of variations in sensor precision.
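The standard occupancy-grid machinery the paper builds on can be sketched in one dimension along a single beam: cells before the measured range get "free" evidence, the cell containing the hit gets "occupied" evidence, accumulated in log-odds form. This is the textbook inverse sensor model with invented probabilities, not the paper's new construction.

```python
from math import log, exp

CELL = 0.1                  # cell size in metres (illustrative)
L_FREE = log(0.3 / 0.7)     # assumed P(occupied | free evidence) = 0.3
L_OCC = log(0.9 / 0.1)      # assumed P(occupied | hit) = 0.9

def update(logodds, measured_range):
    """Log-odds update of a 1-D grid along the beam for one reading."""
    hit = int(measured_range / CELL)
    for i in range(min(hit, len(logodds))):
        logodds[i] += L_FREE            # traversed cells: evidence of free
    if hit < len(logodds):
        logodds[hit] += L_OCC           # hit cell: evidence of occupied
    return logodds

def prob(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + exp(l))

grid = [0.0] * 20                       # 2 m of cells, prior P = 0.5
for _ in range(3):                      # three consistent range readings
    update(grid, 1.25)
print(round(prob(grid[0]), 3), round(prob(grid[12]), 3))
```

The paper's point sits exactly in the two constants above: how `L_FREE`/`L_OCC` should be derived from the sensor's probabilistic error model, and how that derivation interacts with the choice of `CELL`, rather than being picked by hand as done here.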