Tutorial

Monday Tutorials

M12 All You Need to Know About Hardware Trojans and Counterfeit ICs

Date: 
2014-03-24
Time: 
14:30-18:00
Location / Room: 
Konferenz 5

Speakers

Mohammad Tehranipoor, University of Connecticut, US (Contact Mohammad Tehranipoor)
Domenic Forte, University of Connecticut, US (Contact Domenic Forte)

Description

The migration from a vertical to a horizontal business model has made it easier to introduce hardware Trojans and counterfeit electronic parts into the electronic component supply chain. Hardware Trojans are malicious modifications made to original IC designs that reduce system integrity (change functionality, leak private data, etc.). Counterfeit parts are often below specification and/or of substandard quality. The existence of Trojans and counterfeit parts creates risks for the life-critical systems and infrastructures that incorporate them, including automotive, aerospace, military, and medical systems. In this tutorial, we will cover (i) background and motivation for hardware Trojan and counterfeit prevention/detection; (ii) taxonomies related to both topics; (iii) existing solutions; (iv) open challenges; and (v) new and unified solutions to address these challenges.

Agenda

Time  | Label   | Session
14:30 | M12.1   | Session 1
00:00 | M12.1.1 | Background and motivation for hardware Trojan and counterfeit prevention/detection
00:00 | M12.1.2 | Taxonomies related to both topics
16:30 | M12.2   | Session 2
00:00 | M12.2.1 | Existing solutions
00:00 | M12.2.2 | Open challenges
00:00 | M12.2.3 | New and unified solutions to address these challenges

M11 Post-Silicon Validation and Debug: Best Practices and Disruptive Innovation

Date: 
2014-03-24
Time: 
14:30-18:00
Location / Room: 
Konferenz 6

Speakers

Nagib Hakim, Intel Corporation Santa Clara, US
Subhasish Mitra, Stanford University, US
Amir Nahir, IBM Research Labs Haifa, IL
Alan Hu, University of British Columbia, CA

Description

Hardware failures are a growing concern as electronic systems become more complex, interconnected, and pervasive. The complexity challenge is further exacerbated by new ways of improving the energy efficiency of electronic systems amid the slowdown of CMOS (Dennard) scaling: increasing numbers of cores, uncore components, and accelerators; increasing degrees of adaptivity; and increasing levels of heterogeneous integration. All these features and their complex interactions make future systems highly vulnerable to design flaws (bugs) that can jeopardize correct system operation and/or introduce security vulnerabilities. Existing validation methods barely cope with today's complexity. Traditional pre-silicon verification alone is no longer adequate. Post-silicon validation involves operating manufactured ICs in actual application environments to detect and fix bugs. Existing post-silicon practices are ad hoc, and their costs are rising faster than design costs. Effective post-silicon validation requires a radical departure from today's ad hoc practices towards structured techniques. A wide range of topics will be covered in this tutorial, from best practices at leading companies to recent research results that are immediately applicable. Examples include: 1. overview of the validation product life cycle; 2. trade-offs in pre- vs. post-silicon validation; 3. validation test content generation using the concept of exercisers; 4. validation infrastructure including triggers, observability structures, and performance monitors; 5. structured and systematic techniques such as QED (Quick Error Detection); 6. coverage metrics; 7. logic and electrical bug validation and debug techniques; 8. formal techniques for post-silicon validation and debug; 9. post-silicon repair, survivability, and resiliency; 10. bug benchmarks and industrial case studies.
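To give a flavor of the structured techniques mentioned above, the core idea of QED-style duplication can be sketched in software: the original computation is shadowed by a duplicate, and the two are compared at short intervals so an error is flagged close to where it occurred rather than millions of cycles later. This is a simplified, hypothetical Python sketch, not the actual QED transformation (which operates on instruction sequences in post-silicon validation tests).

```python
# Illustrative software sketch of QED-style Quick Error Detection:
# duplicate the computation and cross-check results at frequent
# intervals to shrink bug-detection latency.

def qed_accumulate(values, check_interval=4):
    """Sum `values` twice in lockstep; compare partial sums often."""
    main_sum = 0
    shadow_sum = 0  # duplicated ("shadow") computation
    for i, v in enumerate(values, start=1):
        main_sum += v
        shadow_sum += v
        # Frequent check: a mismatch is detected within a few
        # operations of the error, improving debug localization.
        if i % check_interval == 0 and main_sum != shadow_sum:
            raise RuntimeError(f"QED mismatch detected at step {i}")
    if main_sum != shadow_sum:
        raise RuntimeError("QED mismatch detected at final check")
    return main_sum

print(qed_accumulate(range(10)))  # 45
```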

Agenda

Time  | Label   | Session
14:30 | M11.1   | Session 1
00:00 | M11.1.1 | Big Picture
Nagib Hakim (Intel Corporation Santa Clara, US), Subhasish Mitra (Stanford University, US) and Amir Nahir (IBM Research Labs Haifa, IL)
16:30 | M11.2   | Session 2
00:00 | M11.2.1 | Observability enhancement during post-silicon validation
Alan Hu (University of British Columbia, CA) and Subhasish Mitra (Stanford University, US)

M10 A Cyber-Physical Approach to Modeling, Simulation and Verification of Smart Systems

Date: 
2014-03-24
Time: 
14:30-18:00
Location / Room: 
Konferenz 4

Speakers

Davide Quaglia, EDALab s.r.l., IT
Dimitris Drogoudis, Agilent, BE
Davide Bresolin, University of Bologna, IT

Description

The design of future smart embedded systems should jointly take into account aspects from different domains, both digital (hardware, software, network) and analog (electronic, electromechanical, etc., for instance RF, MEMS, power sources, thermal issues, sensors and actuators), so that they can increasingly be considered cyber-physical systems. To increase energy efficiency, to fully exploit the potential of current nanoelectronics technologies, and to enable the integration of existing/new IPs and "More than Moore" devices, new methodologies and tools for multi-disciplinary and multi-scale modeling, simulation and verification are needed. In engineering practice, the analysis of a complex system is usually carried out through simulation, which allows the engineer to explore one possible system execution at a time. Formal verification instead aims at exploring all possible executions, in order to be certain that a property of interest holds in all cases, or conversely to acquire information about potential fault cases. Because of their heterogeneous nature, cyber-physical systems have mixed discrete and continuous behavior, which makes them quite challenging to verify. In this tutorial, we survey state-of-the-art modeling, simulation and verification techniques for cyber-physical systems. The presentations will be accompanied by concrete tool introductions and demonstrations, showing how the presented concepts support the improvement of today's state-of-the-art system-level design flow for smart systems. Most of this tutorial is based on the results of the SMAC European project on smart systems design. By scope and contents, this tutorial targets students and researchers from both academia and industry.
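The simulation-versus-verification contrast above can be illustrated on a tiny discrete system. The following hypothetical thermostat sketch (all states, modes and transitions invented) samples one random trace by simulation, while exhaustive state-space exploration checks a property over every execution:

```python
# Toy contrast between simulation and formal verification on a small
# discrete system (a hypothetical thermostat controller): simulation
# explores one trace; exhaustive exploration covers all of them.
from collections import deque
import random

def successors(state):
    """All next states under every ambient disturbance d in {-1, 0, 1}."""
    mode, temp = state
    for d in (-1, 0, 1):
        t = temp + (1 if mode == "heat" else -1) + d
        m = "heat" if t < 20 else ("off" if t > 22 else mode)
        yield (m, max(15, min(27, t)))  # physical clamp on temperature

def simulate(state, steps=10):
    """One possible execution: disturbances are sampled at random."""
    for _ in range(steps):
        state = random.choice(list(successors(state)))
    return state

def verify(initial, prop):
    """Breadth-first exploration of ALL reachable states."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        for nxt in successors(frontier.popleft()):
            if not prop(nxt):
                return False, nxt  # counterexample found
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

print(simulate(("off", 21)))  # one randomly chosen reachable state
ok, cex = verify(("off", 21), lambda s: 15 <= s[1] <= 27)
print(ok)  # True: temperature stays in bounds in every execution
```

Real cyber-physical verification must handle continuous dynamics (hybrid automata) rather than a finite state space; this sketch only conveys the exhaustive-versus-sampled distinction.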

Agenda

Time  | Label   | Session
14:30 | M10.1   | Session 1
00:00 | M10.1.1 | Introduction to smart systems and cyber-physical systems (Davide Quaglia, EDALab s.r.l., IT)
00:00 | M10.1.2 | Multi-domain modeling languages and methodologies (Davide Quaglia, EDALab s.r.l., IT)
00:00 | M10.1.3 | Multi-scale modeling: abstraction and refinement (Dimitris Drogoudis, Agilent, BE)
16:30 | M10.2   | Session 2
00:00 | M10.2.1 | Verification of Cyber-Physical Systems (Davide Bresolin, University of Bologna, IT)
00:00 | M10.2.2 | Application of modeling concepts and tools to real case studies (Dimitris Drogoudis, Agilent, BE)
00:00 | M10.2.3 | Application of verification concepts and tools to real case studies (Davide Bresolin, University of Bologna, IT)

M04 Dynamic Heterogeneous Architectures to Address the Efficiency Crisis!

Date: 
2014-03-24
Time: 
09:30-13:00
Location / Room: 
Konferenz 4

Speakers

Houman Homayoun, George Mason University, US (Contact Houman Homayoun)
Farhang Yazdani, BroadPak Corporation, US (Contact Farhang Yazdani)
Ayse Coskun, Boston University, US (Contact Ayse Coskun)
Hank Hoffmann, University of Chicago, US (Contact Hank Hoffmann)

Description

The microprocessor industry is at a crossroads. As long as it continues to scale performance with each generation, it remains a critically important technology driver; when performance scaling stops, microprocessors become a generic commodity and no longer a technology driver or enabler. Because modern processors are most heavily constrained by power, and sometimes energy, performance scaling no longer falls naturally from increased transistor counts. Instead, total performance is maximized by maximizing performance/Watt. Future computing platforms will need to be flexible, scalable, and conservative on power, while saving size, weight, energy, etc. In addressing these challenges, the microprocessor industry is moving towards heterogeneous architecture design. Heterogeneous designs promise to push the envelope of power efficiency further by enabling general-purpose processors to achieve the efficiency of customized cores. By enabling more diverse designs, and designs that are customized dynamically, we can push the efficiency envelope even further. This tutorial first reviews the major challenges facing the semiconductor industry: in general, performance, power, temperature and reliability; in particular, dark and unreliable silicon. The tutorial then introduces the concept of heterogeneous architecture to address the efficiency crisis and briefly reviews the state of the art in static and dynamic heterogeneous architectures in industry and academia. It then presents the 3D design concept and argues how it can eliminate the fundamental barrier to dynamic heterogeneity. Finally, it reviews the state of the art in simulators and modeling tools and how they can be integrated to accurately model performance, power, area, and temperature in 3D heterogeneous architectures.
About the Team: The team consists of experts in interdisciplinary areas including heterogeneous architecture and 3D design (Houman Homayoun), temperature-aware design, DRAM and 3D integration (Ayse Coskun), 3D fabrication and packaging (Farhang Yazdani), and system architecture design (Hank Hoffmann). The team consists of three faculty members and one industry expert in the field. Houman Homayoun is an Assistant Professor in the Department of Electrical and Computer Engineering at George Mason University.
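The claim that total performance is maximized by maximizing performance/Watt can be made concrete with a toy calculation. In this hypothetical sketch (all core names and numbers invented), the core type with the best performance/Watt wins under a fixed power budget, even though each individual core is slower:

```python
# Toy illustration: under a fixed power budget, filling the chip with
# the most power-efficient cores beats using the fastest cores.
# All numbers are invented for illustration.

CORES = {               # type: (performance units, watts) per core
    "big":    (4.0, 4.0),   # 1.0 perf/W
    "little": (1.5, 1.0),   # 1.5 perf/W
}
BUDGET_W = 8.0

def best_homogeneous_fill(budget, cores):
    """Pick the core type maximizing total performance within `budget`."""
    best = None
    for name, (perf, watts) in cores.items():
        n = int(budget // watts)      # how many such cores fit
        cand = (n * perf, name, n)
        best = max(best, cand) if best else cand
    return best

perf, kind, count = best_homogeneous_fill(BUDGET_W, CORES)
print(perf, kind, count)  # 12.0 little 8 -- beats 2 big cores (8.0)
```

Dynamic heterogeneity, as covered in the tutorial, generalizes this by letting the mix adapt at run-time to the workload.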

Agenda

Time  | Label   | Session
09:30 | M04.1   | Session 1
00:00 | M04.1.1 | Review of major challenges facing the semiconductor industry; introduction to dynamic heterogeneous architectures and the concept of core pooling (Houman Homayoun, George Mason University, US)
00:00 | M04.1.2 | Pathfinding methodology for optimal design and integration of 2.5D/3D heterogeneous systems (Farhang Yazdani, BroadPak Corporation, US)
11:30 | M04.2   | Session 2
00:00 | M04.2.1 | 3D systems as platforms for "flexible heterogeneity": cache/memory pooling, power & temperature challenges (Ayse Coskun, Boston University, US)
00:00 | M04.2.2 | Managing dynamically configurable systems: optimizing energy under performance constraints; coordinating adaptation across the system stack (Hank Hoffmann, University of Chicago, US)

 

M03 Automatic Fixed-Point Conversion: A Gateway to High-Level Power Optimization

Date: 
2014-03-24
Time: 
09:30-13:00
Location / Room: 
Konferenz 3

Organiser

Daniel Ménard, INSA Rennes, FR

Speakers

Daniel Ménard, INSA Rennes, FR
David Novo, EPFL, CH
Karthick Parashar, Imperial College London, GB
Olivier Sentieys, Inria and University of Rennes, FR

Description

Given that Moore's-law scaling has hit the power wall, reducing the power consumption of high-performance embedded systems has become crucial. It is also widely accepted that system-level techniques offer the greatest potential for optimizing power. In this tutorial, we demonstrate how carefully tuning the fixed-point arithmetic used to implement numerous functionalities in embedded system applications can lead to significant savings in power consumption. Interestingly, properly dimensioning the bit widths used to represent signals or variables can reduce power consumption in both hardware and software implementations. Even in software implementations, the pervasive use of Single Instruction Multiple Data (SIMD) datapaths in modern processors is pushing designers to meddle with bit allocation. Often, a reduction in bit widths enables the use of more SIMD slots, which increases parallelism, boosting the speed and energy efficiency of the software implementation. Although quantization effects in digital signal processing systems have been studied since the 1970s, significant progress has been made in recent years. This tutorial packs nearly a decade of research in designing systems with fixed-point arithmetic. We expose the deficiencies in the support offered by existing EDA tools and motivate the need for new solutions. Accordingly, we put into perspective several recent techniques that have been developed to facilitate a quick analysis of the impact of a selected fixed-point format on system performance and cost. We analyze fixed-point refinement in a comprehensive way from a tools perspective, dividing the problem into various design steps (e.g., range and precision analysis). For each step, we present concrete solutions amenable to design automation, illustrated with relevant design examples from the wireless communication, multimedia and other signal processing domains.
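The word-length trade-off at the heart of the tutorial can be sketched in a few lines: quantize a signal to a chosen number of fractional bits and measure the resulting error. This is a hedged, illustrative sketch (function names and the test signal are invented), not any specific tool's refinement flow:

```python
# Illustrative sketch of fixed-point refinement: quantizing to fewer
# fractional bits shrinks hardware datapaths / frees SIMD slots, at the
# cost of larger quantization error (at most half an LSB with rounding).

def to_fixed(x, frac_bits):
    """Round x to the nearest multiple of 2**-frac_bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def max_quant_error(signal, frac_bits):
    """Worst-case quantization error over the signal."""
    return max(abs(x - to_fixed(x, frac_bits)) for x in signal)

signal = [0.1 * k for k in range(20)]
for bits in (4, 8, 12):
    err = max_quant_error(signal, bits)
    # Each extra fractional bit roughly halves the worst-case error.
    assert err <= 1 / (1 << (bits + 1))  # bound: half an LSB
    print(bits, err)
```

Range analysis (covered in session M03.1.3) additionally bounds the integer part so the total word length, not just the fractional part, can be minimized.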

Agenda

Time  | Label   | Session
09:30 | M03.1   | Session 1
00:00 | M03.1.1 | Introduction
00:00 | M03.1.2 | Fixed-point arithmetic
00:00 | M03.1.3 | Range analysis
11:30 | M03.2   | Session 2
00:00 | M03.2.1 | Precision analysis
00:00 | M03.2.2 | Word-length optimization
00:00 | M03.2.3 | Opportunistic run-time precision adaptation
00:00 | M03.2.4 | Conclusion

 

M06 Testing of TSV-Based 2.5D- and 3D-Stacked ICs

Date: 
2014-03-24
Time: 
09:30-13:00
Location / Room: 
Konferenz 5

Speakers

Erik Jan Marinissen, IMEC - Leuven, BE
Krishnendu Chakrabarty, Duke University - Durham, NC, US

Target Audience: Test and design-for-test engineers and their managers; test methodology developers; test-automation tool developers; researchers, university professors, and students.  

Description

Stacked ICs with vertical interconnects containing fine-pitch micro-bumps and through-silicon vias (TSVs) are a hot topic in the design and manufacturing communities. These 2.5D- and 3D-SICs hold the promise of heterogeneous integration, inter-die connections with increased performance at lower power dissipation, and increased yield and hence decreased product cost. However, testing for manufacturing defects remains an obstacle and potential showstopper before stacked-die products can become a reality. There are concerns about the cost or, even worse, the feasibility of testing such TSV-based 3D chips. In this tutorial, we present key concepts in 3D technology, terminology, and benefits. We discuss design and test challenges and emerging solutions for 2.5D- and 3D-SICs. Topics to be covered include an overview of 3D integration and trendsetting products such as a 2.5D FPGA and 3D-stacked memory chips, test flows and test content for 3D chips, advances in wafer probing, 3D design-for-test architectures and the ongoing IEEE P1838 standardization effort for test access, and 3D test cost modeling and test-flow selection.
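The test-cost-modeling angle can be previewed with a toy yield model: without pre-bond (known-good-die) test, the yield of a stack is the product of the individual die yields, so the cost per good stack grows quickly with stack height. All numbers below are hypothetical, and the model ignores bonding yield and test cost itself:

```python
# Toy 3D-stacking cost model: why pre-bond test of individual dies
# matters. Without it, one bad die scraps the whole stack.

def stack_yield(die_yield, n_dies):
    """Probability that all n dies in a blindly built stack are good."""
    return die_yield ** n_dies

def cost_per_good_stack(die_cost, die_yield, n_dies, pre_bond_test=False):
    if pre_bond_test:
        # Only known-good dies are stacked: pay extra per die for
        # screening, but (ideally) every assembled stack works.
        return n_dies * die_cost / die_yield
    return n_dies * die_cost / stack_yield(die_yield, n_dies)

# 4-die stack, 90% die yield, $10 dies (all numbers hypothetical):
print(round(cost_per_good_stack(10, 0.9, 4), 2))                      # 60.97
print(round(cost_per_good_stack(10, 0.9, 4, pre_bond_test=True), 2))  # 44.44
```

Real 3D test-flow selection, as covered in session M06.2.2, weighs many more factors (probe damage, test time, mid-bond options) than this two-line model.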

Agenda

Time  | Label   | Session
09:30 | M06.1   | Session 1
00:00 | M06.1.1 | Introduction
00:00 | M06.1.2 | Overview of 2.5D- and 3D-technology
00:00 | M06.1.3 | 3D test flows and test contents
00:00 | M06.1.4 | 3D test access: wafer probing (industry/research)
11:30 | M06.2   | Session 2
00:00 | M06.2.1 | 3D test access: DfT architecture (incl. IEEE P1838) and optimizations
00:00 | M06.2.2 | 3D cost flow modeling (with case studies)
00:00 | M06.2.3 | Conclusion

M02 Software Debug on ARM Processors in Emulation

Date: 
2014-03-24
Time: 
09:30-13:00
Location / Room: 
Konferenz 2

Speaker

Russ Klein, Mentor Graphics, US

Description

Emulation systems can execute designs fast enough to run significant amounts of software. For example, one can execute the software boot process, run diagnostics, boot an OS, and load and exercise drivers. This gives the software team earlier access to the design. It also allows software to be used to drive activity, exercising realistic use cases as part of the hardware verification. This software will need to be debugged. The emulated design will likely contain all the debug facilities, such as JTAG and ETM, of the final device. These can be used in emulation just as they would be on the final silicon. Emulators also allow access to signals around the core that are not accessible in the final device, which can be used to debug and trace the processor. This gives the developer a number of options for debugging. This session explores the different debug approaches available, the trade-offs involved in each approach, and how and when they can be most effectively applied during the design cycle. Russ Klein is a Technical Director in Mentor's emulation division. He has been developing verification and debug solutions that span the boundaries between hardware and software for over 20 years.

Agenda

Time  | Label   | Session
09:30 | M02.1   | Session 1
00:00 | M02.1.1 | Options for software debug and trace in the context of design running in emulation
00:00 | M02.1.2 | Understanding the trade-offs in terms of performance, functionality, and intrusiveness of different debug approaches
11:30 | M02.2   | Session 2
00:00 | M02.2.1 | Concurrent debug of multiple cores in emulation
00:00 | M02.2.2 | Correlation of hardware and software debug views
00:00 | M02.2.3 | Efficient utilization of emulation resources during software debug

M05 Wireless NoC as Interconnection Backbone for Multicore Chips: Promises, Challenges, and Recent Developments

Date: 
2014-03-24
Time: 
09:30-13:00
Location / Room: 
Konferenz 6

Organisers

Partha Pratim Pande, Washington State University, US
Radu Marculescu, Carnegie Mellon University, US

Speakers

Radu Marculescu, Carnegie Mellon University, US
Partha Pratim Pande, Washington State University, US
Deukhyoun Heo, Washington State University, US
Hiroki Matsutani, Keio University, JP

Description

Continuing progress and integration levels in silicon technologies make possible complete end-user systems consisting of an extremely large number of cores on a single chip, targeting either embedded or high-performance computing. However, without new approaches for energy- and thermally-efficient design, as well as scalable, low-power and high-bandwidth on-chip communication architectures, this vision may remain a pipe dream. Towards this end, the wireless Network-on-Chip (WiNoC) represents an emerging paradigm for designing low-power, high-bandwidth interconnect infrastructures for multicore chips. This tutorial will provide a timely and insightful journey into the various challenges and emerging solutions of designing WiNoC architectures from a variety of perspectives, ranging from very high levels of abstraction (e.g., system architecture) to very low levels (e.g., on-chip antenna and transceiver design). The tutorial will start by discussing the fundamentals of network-based communication for 2D and 3D multicore systems and advanced design techniques for multi-domain clock and power management for embedded and high-performance processors, using real examples of multicore platforms. The second part of the tutorial will focus on the design of high-bandwidth and low-power WiNoC architectures incorporating small-world effects. We will present a detailed performance evaluation and the necessary design trade-offs for small-world WiNoCs with respect to their conventional wireline counterparts. We will conclude this part of the tutorial by presenting the design of an on-chip millimeter-wave (mm-wave) wireless link as a suitable physical layer for WiNoCs. In the last part, we will complement the above discussion of planar WiNoCs by introducing wireless 3D NoCs that use inductive-coupling through-chip interfaces (TCIs) to connect stacked chips via square coils acting as data transmitters.
We will present the design and implementation of wireless 3D NoC systems, real-chip experimental results, and their interconnection techniques. By scope and contents, this tutorial targets students and researchers from both academia and industry.
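The small-world effect exploited by WiNoCs can be demonstrated on a toy mesh: adding a few long-range "wireless" shortcut links to a wireline grid sharply reduces the average hop count. The topology and shortcut placement below are invented for illustration and are not the tutorial's actual architectures:

```python
# Toy demonstration of the small-world WiNoC idea: a few long-range
# shortcuts on top of a 2D mesh cut the average source-to-destination
# hop count, which translates to latency and energy savings on chip.
from itertools import product
from collections import deque

def avg_hops(n, shortcuts=()):
    """Average shortest-path hops over all node pairs of an n x n mesh."""
    nodes = list(product(range(n), repeat=2))

    def neighbors(v):
        x, y = v
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # mesh links
            if 0 <= x + dx < n and 0 <= y + dy < n:
                yield (x + dx, y + dy)
        for a, b in shortcuts:                  # "wireless" express links
            if v == a:
                yield b
            if v == b:
                yield a

    total = pairs = 0
    for src in nodes:                           # BFS from every source
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in neighbors(u):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(nodes) - 1
    return total / pairs

base = avg_hops(8)
with_links = avg_hops(8, shortcuts=[((0, 0), (7, 7)), ((0, 7), (7, 0))])
assert with_links < base  # shortcuts strictly reduce average hop count
print(round(base, 2), round(with_links, 2))
```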

Agenda

Time  | Label   | Session
09:30 | M05.1   | Session 1
00:00 | M05.1.1 | Foundations of On-chip Communication: Performance and Power Management in 2D and 3D Multicore Platforms (Radu Marculescu, Carnegie Mellon University, US)
00:00 | M05.1.2 | WiNoC: Network Architecture and Communication Resource Management (Partha Pratim Pande, Washington State University, US)
11:30 | M05.2   | Session 2
00:00 | M05.2.1 | Millimeter-Wave Wireless Link: The Physical Layer Design for WiNoCs (Deukhyoun Heo, Washington State University, US)
00:00 | M05.2.2 | 3D WiNoC Architectures (Hiroki Matsutani, Keio University, JP)

M09 Energy-Efficient System Design Through Error-Resilient Computing

Date: 
2014-03-24
Time: 
14:30-18:00
Location / Room: 
Konferenz 3

Speakers

Saibal Mukhopadhyay, Georgia Institute of Technology, US
Shidhartha Das, ARM Ltd., GB
Anand Raghunathan, Purdue University, US
Srimat Chakradhar, NEC Labs, US

Description

This half-day tutorial covers a broad range of technologies for error-resilient computing and highlights the significant role of resiliency technologies in achieving high energy efficiency across different levels of abstraction (circuit, hardware architecture and software) in modern computing systems. Safety margins added to address the impact of rising variations at nanometer geometries incur unacceptable power and performance overheads. Traditional adaptive techniques compensate for some manifestations of these variations; however, they still require margins to account for localized and fast-changing variations. The adverse impact of margins has led to a recent research focus on so-called "error-resilient" techniques in both academia and industry. Resilient techniques permit computational errors to occur at run-time, either by operating without the full setup margin or by deliberately designing for inexact outputs. In lieu of the "always-correct" output mandated in the traditional model of computing, computing with errors enables significant improvements in energy efficiency as long as the error rate and/or the magnitude of errors are sufficiently low. Resilient techniques have wide-ranging applications that span high-performance general-purpose computing to digital signal processing (DSP) algorithms. In this tutorial, we provide an in-depth overview of error-resilient techniques encompassing circuit, micro-architectural, algorithmic and system-architecture aspects. We organize the material into two segments. In the first, we discuss error-resilient techniques for bit-exact applications where perfect recovery from errors is a key requirement. We briefly review the existing design space for traditional adaptive techniques and motivate the case for error resiliency by analyzing the additional margins eliminated through explicit error detection and correction.
We then discuss error-detection and recovery approaches for microprocessor pipelines, highlighting "Razor" as a specific example. We present measurement results from academia and industry on resilient techniques similar to Razor. The second segment of the tutorial focuses on "approximate" computing: an approach to computing that defines correctness as producing outputs of acceptable "quality". Many applications (such as web search, data analytics, sensor data processing, recognition, mining, and synthesis) have a high degree of intrinsic resilience to their underlying computations being executed incorrectly. We review software, hardware-architecture and circuit-design techniques for building approximate computing systems. These new techniques significantly improve performance or energy efficiency while ensuring that the results produced are acceptable. We will conclude with a discussion of the key challenges that need to be addressed in order to facilitate broader adoption of approximate computing.
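One well-known software-level flavor of approximate computing is loop perforation: skip a fraction of loop iterations, trading output quality for fewer operations. This is a generic illustrative sketch (data and threshold invented), not a technique attributed to the speakers:

```python
# Illustrative "approximate computing" example via loop perforation:
# process only every stride-th element, accepting a small quality loss
# in exchange for proportionally fewer operations.

def mean_exact(xs):
    return sum(xs) / len(xs)

def mean_perforated(xs, stride=2):
    """Sample every `stride`-th element: ~stride-times fewer additions."""
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = [float(k % 100) for k in range(10_000)]
exact = mean_exact(data)
approx = mean_perforated(data, stride=4)
# For this well-behaved input, the approximate answer stays within a
# few percent of the exact one at a quarter of the work.
assert abs(approx - exact) / exact < 0.05
print(exact, approx)
```

Error-tolerant workloads (search, analytics, recognition) accept such bounded-quality results; bit-exact workloads, covered in the first segment, need the Razor-style detect-and-recover approach instead.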

Agenda

Time  | Label   | Session
14:30 | M09.1   | Session 1
00:00 | M09.1.1 | Error-resilient Computing - Motivation and Example Applications (Saibal Mukhopadhyay, Georgia Institute of Technology, US)
00:00 | M09.1.2 | Error-resilience for general-purpose computing - Razor (Shidhartha Das, ARM Ltd, GB)
16:30 | M09.2   | Session 2
00:00 | M09.2.1 | Approximate Computing - A circuits and architecture perspective (Anand Raghunathan, Purdue University, US)
00:00 | M09.2.2 | Approximate Computing - A software and applications perspective (Srimat Chakradhar, NEC Labs, US)

M08 Microfluidic Biochips: A Vision for More than Moore and Biochemistry-on-Chip

Date: 
2014-03-24
Time: 
14:30-18:00
Location / Room: 
Konferenz 2

Speakers

Tsung-Yi Ho, National Cheng Kung University, TW
Krishnendu Chakrabarty, Duke University, US

Description

The tutorial offers attendees an opportunity to bridge the semiconductor IC/system industry with the biomedical and pharmaceutical industries. The tutorial will first describe emerging applications in biology and biochemistry that can benefit from advances in electronic "biochips". The presenters will next describe technology platforms for accomplishing "biochemistry on a chip", and introduce the audience to both droplet-based "digital" microfluidics based on electrowetting actuation and flow-based "continuous" microfluidics based on microvalve technology. Next, the presenters will describe system-level synthesis, which includes operation-scheduling and resource-binding algorithms, and physical-level synthesis, which includes placement and routing optimizations. In this way, the audience will see how a "biochip compiler" can translate protocol descriptions provided by an end user (e.g., a chemist or a nurse at a doctor's clinic) into a set of optimized and executable fluidic instructions that will run on the underlying microfluidic platform. Testing techniques will be described to detect faults after manufacture and during field operation. A classification of defects will be presented based on data for fabricated chips. Appropriate fault models will be developed and presented to the audience. On-line and off-line reconfiguration techniques will be presented to bypass faults once they are detected. The problem of mapping a small number of chip pins to a large number of array electrodes will also be covered. Finally, sensor-feedback-based cyberphysical adaptation will be covered. A number of case studies based on representative assays and laboratory procedures will be interspersed at appropriate places throughout the tutorial.
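The operation-scheduling step of such a "biochip compiler" can be sketched with ASAP scheduling of a tiny protocol graph, where each fluidic operation starts as soon as its input droplets are ready. Operation names, durations, and the dependency graph below are invented for illustration:

```python
# Hypothetical sketch of system-level synthesis for a digital microfluidic
# biochip: ASAP scheduling of a toy protocol DAG (resource binding and
# droplet routing, also covered in the tutorial, are omitted).

OPS = {  # op: (duration in time steps, predecessors)
    "dispense_sample":  (1, []),
    "dispense_reagent": (1, []),
    "mix":              (3, ["dispense_sample", "dispense_reagent"]),
    "detect":           (2, ["mix"]),
}

def asap_schedule(ops):
    """Start each operation when its last predecessor finishes (DAG input)."""
    start = {}
    while len(start) < len(ops):
        for op, (_dur, preds) in ops.items():
            if op in start or any(p not in start for p in preds):
                continue
            start[op] = max((start[p] + ops[p][0] for p in preds), default=0)
    return start

print(asap_schedule(OPS))
# e.g. {'dispense_sample': 0, 'dispense_reagent': 0, 'mix': 1, 'detect': 4}
```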

Agenda

Time  | Label   | Session
14:30 | M08.1   | Session 1
00:00 | M08.1.1 | Technology and application drivers
00:00 | M08.1.2 | Synthesis techniques
16:30 | M08.2   | Session 2
00:00 | M08.2.1 | Testing and design-for-testability
00:00 | M08.2.2 | Cyberphysical integration and dynamic adaptation

M01 Development of mixed-criticality systems based on system partitioning

Date: 
2014-03-24
Time: 
09:30-13:00
Location / Room: 
Konferenz 1

Speakers

Alfons Crespo, Universidad Politécnica de Valencia, ES (Contact Alfons Crespo)
Alejandro Alonso, Universidad Politécnica de Madrid, ES (Contact Alejandro Alonso)
Jon Pérez, IK4-IKERLAN, ES (Contact Jon Pérez)

Description

Modern embedded applications typically integrate a multitude of functionalities with potentially different criticality levels into a single system. In addition, the increasing power of single-core and multi-core processors makes it possible to integrate them on a single platform. However, this raises a number of challenges, one of which is the integration of mixed-criticality applications. System partitioning emerges as a powerful alternative for dealing with these challenges. A hypervisor allows creating several virtual machines that run with spatial and temporal isolation. Applications are assigned to partitions according to several criteria, such as their criticality. Resources are assigned to virtual machines to guarantee the fulfilment of the applications' timing requirements. This approach is also valid for multi-cores. This tutorial will introduce attendees to the basic techniques in the development of partitioned high-integrity embedded systems, illustrated with an industrial case study. This development relies on the XtratuM hypervisor and supporting tools for validation, partitioning, and code and configuration-file generation. The tutorial will benefit attendees from industry, as it will show in a practical manner the basics of developing partitioned embedded systems and give them an idea of how to integrate this approach into their current practices. Attendees from academia will get acquainted with advanced development techniques and open research topics. In addition, the availability of the development framework can form the basis of laboratory assignments in advanced courses.
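The temporal-isolation idea behind such hypervisors can be sketched as a fixed cyclic schedule: each partition owns a guaranteed time window inside a repeating major frame, so a misbehaving low-criticality partition cannot steal time from a critical one. Partition names, window lengths, and the table format below are invented for illustration and are not XtratuM's actual configuration:

```python
# Hypothetical sketch of hypervisor-style temporal partitioning:
# a cyclic schedule (major frame) assigns each partition a fixed window.

MAJOR_FRAME_MS = 100
# (partition, window length in ms) -- assigned per criticality and needs
SCHEDULE = [("flight_control", 50), ("logging", 30), ("maintenance", 20)]

# Windows must exactly tile the major frame.
assert sum(ms for _, ms in SCHEDULE) == MAJOR_FRAME_MS

def partition_at(t_ms):
    """Which partition owns the CPU at time t_ms (ms since boot)?"""
    offset = t_ms % MAJOR_FRAME_MS
    for name, length in SCHEDULE:
        if offset < length:
            return name
        offset -= length

print(partition_at(0))    # flight_control
print(partition_at(60))   # logging
print(partition_at(185))  # maintenance
```

Spatial isolation (separate memory maps per virtual machine) is the complementary half of the approach and is enforced by the MMU, not by scheduling.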

Agenda

Time  | Label   | Session
09:30 | M01.1   | Session 1
00:00 | M01.1.1 | Challenges in the development of high-integrity embedded systems (Jon Pérez, IK4-IKERLAN, ES)
00:00 | M01.1.2 | Mixed-criticality systems based on system partitioning (Alfons Crespo, Universidad Politécnica de Valencia, ES)
00:00 | M01.1.3 | The XtratuM hypervisor (Alfons Crespo, Universidad Politécnica de Valencia, ES)
11:30 | M01.2   | Session 2
00:00 | M01.2.1 | Framework for the development of mixed-criticality systems (Alejandro Alonso, Universidad Politécnica de Madrid, ES)
00:00 | M01.2.2 | Use case: development of a mixed-criticality embedded system; aerospace (Alejandro Alonso, Universidad Politécnica de Madrid, ES) and wind power (Jon Pérez, IK4-IKERLAN, ES)
00:00 | M01.2.3 | Conclusion and future directions

M07 L4/Fiasco.OC - A Microkernel OS Designed for Security, Real-Time And Reliability

Date: 
2014-03-24
Time: 
14:30-18:00
Location / Room: 
Konferenz 1

Speakers

Hermann Härtig, Technische Universität Dresden, DE
Adam Lackorzynski, Kernkonzept GmbH, DE
Carsten Weinhold, Technische Universität Dresden, DE
Björn Döbel, Technische Universität Dresden, DE

Description

Modern embedded systems contain an increasing number of software components with differing requirements in terms of real-time guarantees, security isolation, and reliability. In order to reduce production cost, it is desirable to consolidate many such applications onto a single hardware platform. Such consolidation requires an operating system that suits these differing application requirements. L4/Fiasco.OC is a microkernel operating system developed as a research project at TU Dresden and now commercially supported by Kernkonzept GmbH. The operating system has been constantly evolved over the past 15 years to accommodate real-time, security, and reliability use cases. Commercially, the microkernel is the foundation of Deutsche Telekom's SIMKo3 high-security smartphone, which was certified for German government use in September 2013. This tutorial will give insight into Fiasco.OC's features. Talks by Fiasco.OC developers and researchers will explore usage scenarios. A hands-on session lets participants get first-hand experience in Fiasco.OC system setup and application development.

Agenda

Time  | Label   | Session
14:30 | M07.1   | Session 1
00:00 | M07.1.1 | Why we need microkernels (Hermann Härtig, Technische Universität Dresden, DE)
00:00 | M07.1.2 | Isolation for Security, Portability, and Real-Time (Adam Lackorzynski, Kernkonzept GmbH, DE)
00:00 | M07.1.3 | Building a Secure System on top of Fiasco.OC (Carsten Weinhold, Technische Universität Dresden, DE)
00:00 | M07.1.4 | Fiasco.OC for Reliability and Fault Tolerance (Björn Döbel, Technische Universität Dresden, DE)
16:30 | M07.2   | Session 2
00:00 | M07.2.1 | Hands-On Session (please bring your laptop)
00:00 | M07.2.2 | Practical Introduction to running L4/Fiasco.OC
00:00 | M07.2.3 | System Setup and Application Development
