2.7 Yield and Reliability for Robust Systems


Date: Tuesday 25 March 2014
Time: 11:30 - 13:00
Location / Room: Konferenz 5

Chair:
Joan Figueras, UPC, ES

Co-Chair:
Jose Pineda de Gyvez, NXP, NL

Robustness is increasingly a requirement for SoCs and memories, and effects such as wear-out, BTI, and soft errors must be considered as part of the design process. Another important component of robust design is tolerance of rare events. Understanding design robustness helps predict and enhance yield.

Time / Label / Presentation Title / Authors
11:30 2.7.1 (Best Paper Award Candidate)
COMPREHENSIVE ANALYSIS OF ALPHA AND NEUTRON PARTICLE-INDUCED SOFT ERRORS IN AN EMBEDDED PROCESSOR AT NANOSCALES
Speakers:
Mojtaba Ebrahimi1, Adrian Evans2, Mehdi B. Tahoori1, Razi Seyyedi1, Enrico Costenaro3 and Dan Alexandrescu3
1Karlsruhe Institute of Technology, DE; 2iRoC Technologies, DE; 3iRoC Technologies, FR
Abstract
Radiation-induced soft errors have become a key challenge in advanced commercial electronic components and systems. We present the results of a Soft Error Rate (SER) analysis of an embedded processor. Our SER analysis platform accurately models all generation, propagation, and masking effects, starting from a technology response model derived using TCAD simulations at the device level all the way up to application-level masking. The platform employs a combination of empirical models at the device level, analytical error propagation at the logic level, and fault emulation at the architecture/application level to provide the detailed contribution of each component (flip-flops, combinational gates, and SRAMs) to the overall SER. At each stage in the modeling hierarchy, an appropriate level of abstraction is used to propagate the effect of errors to the next higher level. Unlike previous studies, which are based on very simple test chips, analyzing the entire processor gives more insight into the contributions of different components to the overall SER. The results of this analysis can assist circuit designers in adopting effective hardening techniques to reduce the overall SER while meeting the required power and performance constraints.
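
As an illustration of how such a hierarchical flow combines its levels, the Python sketch below scales a hypothetical raw SER for each component type by masking-derived derating factors and sums the contributions. The component names, FIT values, and derating factors are invented placeholders, not figures from the paper, and the code is not the authors' platform.

# Illustrative sketch only: combining per-component raw SER with
# masking-derived derating factors in a hierarchical SER flow.
# All numbers below are hypothetical placeholders.

RAW_SER_FIT = {          # raw failure rate per component type (FIT), assumed
    "flip_flops": 120.0,
    "combinational": 450.0,
    "sram": 900.0,
}

DERATING = {             # fraction of raw events surviving each masking stage, assumed
    "flip_flops":    {"logic": 1.00, "timing": 0.50, "architectural": 0.30},
    "combinational": {"logic": 0.10, "timing": 0.20, "architectural": 0.30},
    "sram":          {"logic": 1.00, "timing": 1.00, "architectural": 0.40},
}

def effective_ser(raw_fit, derating):
    """Scale the raw SER by each masking factor in turn."""
    ser = raw_fit
    for factor in derating.values():
        ser *= factor
    return ser

total = 0.0
for component, fit in RAW_SER_FIT.items():
    contribution = effective_ser(fit, DERATING[component])
    total += contribution
    print(f"{component:>13}: {contribution:8.2f} FIT")
print(f"{'total':>13}: {total:8.2f} FIT")
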
12:00 2.7.2 BIAS TEMPERATURE INSTABILITY ANALYSIS OF FINFET BASED SRAM CELLS
Speakers:
Seyab Khan1, Innocent Agbo2, Said Hamdioui3, Halil Kukner4, Ben Kaczer4, Praveen Raghavan5 and Francky Catthoor4
1Technical University Delft, NL; 2TU Delft, NL; 3Delft University of Technology, NL; 4IMEC, BE; 5imec, BE
Abstract
Bias Temperature Instability (BTI) poses a major reliability challenge for today's and future nano-devices as it degrades their performance. This paper provides a comprehensive analysis of the impact of BTI, in terms of time-dependent degradation, on FinFET-based SRAM cells. The evaluation metrics are read Static Noise Margin (SNM), hold SNM, and Write Trip Point (WTP), while the aspects investigated consist of the dependence on supply voltage, cell strength, and design style (6- versus 8-transistor cells). A comparison between the degradation of FinFET-based and planar CMOS-based SRAM cells is also covered. The simulation results for FinFET-based cells show that: (a) the read SNM of the cell degrades more (by 16.72%) than the other metrics (6.82% in WTP and 14.19% in hold SNM); (b) a 12% increment in the cell's supply voltage enhances its read SNM by 9%; (c) strengthening only the pull-down transistors in the cell by 1.5x reduces BTI-induced read SNM degradation by 26.61%; (d) 8T SRAM cells have a 1.43x higher WTP than 6T cells, but suffer from 31.13% higher read SNM and 8.05% higher hold SNM degradation than 6T SRAM cells; and (e) FinFET-based SRAM cells are more vulnerable to BTI degradation than planar CMOS-based cells.
12:30 2.7.3 SSFB: A HIGHLY-EFFICIENT AND SCALABLE SIMULATION REDUCTION TECHNIQUE FOR SRAM YIELD ANALYSIS
Speakers:
Manish Rana and Ramon Canal, Universitat Politecnica de Catalunya, ES
Abstract
Estimating extremely low SRAM failure probabilities with the conventional Monte Carlo (MC) approach requires hundreds of thousands of simulations, making it impractical. To alleviate this problem, failure-probability estimation methods requiring fewer simulations have recently been proposed, most notably variants of consecutive mean-shift based Importance Sampling (IS). In these methods, a large amount of time is spent simulating data points that are eventually discarded in favor of other data points with minimum norm, which can increase the simulation time by orders of magnitude. To address this limitation, we introduce SSFB, a novel SRAM failure-probability estimation method with much better cognizance of the data points than conventional approaches. The proposed method starts with the radial simulation of a single point and reduces discarded simulations by (a) resorting to random sampling only when a failure boundary is reached, after which radial simulation continues from a chosen point, and (b) restricting random sampling to a specific failure range that shrinks in each iteration. The method is also scalable to higher dimensions (more input variables) because sampling is done on the surface of a hyper-sphere rather than within the hyper-sphere, as other techniques do. Our results show an overall 40x reduction in simulations compared to consecutive mean-shift IS methods while remaining within 0.01-sigma accuracy.
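
To show why the conventional baseline is impractical, the Python sketch below runs plain Monte Carlo failure-probability estimation on a toy six-dimensional "cell" and, separately, illustrates how points can be drawn on the surface of a hyper-sphere by normalising Gaussian vectors. The failure criterion, threshold, and sample counts are assumptions for illustration only; this is not SSFB itself.

# Minimal sketch of the baseline the paper improves on: plain Monte Carlo
# estimation of an SRAM cell failure probability. The "cell" here is a toy
# stand-in (a random threshold-voltage shift vector failing beyond a margin),
# not a SPICE model.
import numpy as np

rng = np.random.default_rng(0)
DIM = 6  # e.g. one Vth variation per transistor of a 6T cell (assumed)

def cell_fails(vth_shift):
    """Toy failure criterion: the cell fails if the variation vector is large
    in a particular direction. Stands in for a read-SNM check."""
    return vth_shift @ np.array([1.0, -0.8, 0.6, -0.4, 0.2, 0.9]) > 7.0

def mc_failure_probability(n_samples):
    samples = rng.standard_normal((n_samples, DIM))
    return float(np.mean([cell_fails(s) for s in samples]))

# Rare failures need huge sample counts: with p around 3e-5, 10^5 samples
# yield only a handful of failing points, hence the push for reduction methods.
print(mc_failure_probability(100_000))

# Sampling *on* a hyper-sphere surface of radius r can be done by normalising
# Gaussian vectors; shown only to illustrate the geometric idea, not the
# full SSFB algorithm.
def sample_on_sphere(n, radius):
    g = rng.standard_normal((n, DIM))
    return radius * g / np.linalg.norm(g, axis=1, keepdims=True)
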
13:00 IP1-12, 861 INFORMER: AN INTEGRATED FRAMEWORK FOR EARLY-STAGE MEMORY ROBUSTNESS ANALYSIS
Speakers:
Shrikanth Ganapathy1, Ramon Canal1, Dan Alexandrescu2, Enrico Costenaro2, Antonio Gonzalez3 and Antonio Rubio1
1Universitat Politecnica de Catalunya, ES; 2iRoC Technologies, FR; 3Intel and Universitat Politecnica de Catalunya, ES
Abstract
With the growing importance of parametric (process and environmental) variations in advanced technologies, it has become a serious challenge to design reliable, fast, and low-power embedded memories. Adopting a variation-aware design paradigm requires a holistic perspective of memory-wide metrics such as yield, power, and performance. However, accurate estimation of such metrics depends largely on circuit implementation styles, technology parameters, and architecture-level specifics. In this paper, we propose a fully automated tool, INFORMER, that helps high-level designers estimate memory reliability metrics rapidly and accurately. The tool relies on accurate circuit-level simulations of failure mechanisms such as ageing, soft errors, and parametric failures. The obtained statistics can then help couple low-level metrics with higher-level design choices. A new technique for rapid estimation of low-probability failure events is also proposed. We present three use cases of our prototype tool to demonstrate its diverse capabilities in autonomously guiding large SRAM-based robust memory designs.
13:01 IP1-13, 121 WEAR-OUT ANALYSIS OF ERROR CORRECTION TECHNIQUES IN PHASE-CHANGE MEMORY
Speakers:
Caio Hoffman, Luiz Ramos, Rodolfo Azevedo and Guido Araújo, University of Campinas, BR
Abstract
Phase-Change Memory (PCM) is a new memory technology and a possible replacement for DRAM, whose scaling limitations require new lithography technologies. Despite being promising, PCM has limited endurance (its cells withstand roughly 10^8 bit-flips before failing), which has prompted the adoption of Error Correction Techniques (ECTs). However, previous lifetime analyses of ECTs did not consider the difference between the bit-flip frequencies of data and code bits, which may lead to inaccurate wear-out analyses for the ECTs. In this work, we improve the wear-out analysis of PCM by modeling and analyzing the bit-flip probabilities of five ECTs. Our models also enable an accurate estimation of energy consumption and an analysis of the endurance-energy trade-off for each ECT.
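
A back-of-the-envelope Python sketch of the effect being modeled: if code (parity) bits flip more often per write than data bits, the most-stressed cell exhausts its endurance sooner than a data-only analysis predicts. The flip probabilities below are assumed values chosen for illustration; only the 10^8 endurance figure comes from the abstract, and this is not the paper's model.

# Illustrative wear-out arithmetic, not the paper's models.
CELL_ENDURANCE = 1e8   # bit-flips a PCM cell survives (order of magnitude from the abstract)
P_FLIP_DATA = 0.25     # assumed per-write flip probability of a data bit
P_FLIP_CODE = 0.45     # assumed per-write flip probability of a parity/code bit

def writes_until_wearout(p_flip, endurance=CELL_ENDURANCE):
    """Expected number of line writes before a cell with the given per-write
    flip probability exhausts its endurance."""
    return endurance / p_flip

print(f"data bit: {writes_until_wearout(P_FLIP_DATA):.3e} writes")
print(f"code bit: {writes_until_wearout(P_FLIP_CODE):.3e} writes")
# Lifetime is limited by the worst cell, so ignoring the higher flip rate of
# code bits overestimates the memory's lifetime.
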
13:02 IP1-14, 344 APPROXIMATING THE AGE OF RF/ANALOG CIRCUITS THROUGH RE-CHARACTERIZATION AND STATISTICAL ESTIMATION
Speakers:
Doohwang Chang1, Sule Ozev1, Ozgur Sinanoglu2 and Ramesh Karri3
1Arizona State University, US; 2New York University Abu Dhabi, AE; 3Polytechnic Institute of New York University, US
Abstract
Counterfeit ICs have become an issue for semiconductor manufacturers due to the damage to their reputation and the lost revenue. Counterfeit ICs are either products that are intentionally mislabeled or legitimate products extracted from electronic waste. The former are easier to detect, whereas the latter are harder, since they are identical to new devices but display degraded performance due to environmental and use-stress conditions. Detecting counterfeit ICs extracted from electronic waste requires an approach that can approximate the age of manufactured devices based on their parameters. In this paper, we present a methodology that uses information on both fresh and aged ICs and distinguishes between the fresh and aged populations based on an estimate of the age. Since analog devices age mainly due to their bias stress, input signals play less of a role; hence, it is possible to use simulation models to approximate the aging process, which gives us access to a large population of aged devices. Using this information, we construct a statistical model that approximates the age of a given circuit. We use a low-noise amplifier (LNA) and an NMOS LC oscillator to demonstrate that individual aged devices can be accurately classified using the proposed method.
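
The general recipe described here (simulate an aged population, fit a statistical model from measured parameters to age, then threshold the estimate) can be sketched as follows in Python. The degradation rates, parameter names, decision threshold, and least-squares estimator are illustrative assumptions, not the authors' models.

# Hedged sketch of simulation-based age estimation; all numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

def simulate_device(age_years):
    """Toy measurement vector (e.g. LNA gain, noise figure) drifting with age."""
    gain = 15.0 - 0.30 * age_years + rng.normal(0, 0.15)          # dB, drifts down
    noise_figure = 2.0 + 0.05 * age_years + rng.normal(0, 0.05)   # dB, drifts up
    return np.array([gain, noise_figure])

# Build a training population over a spread of ages (simulation makes this cheap).
ages = rng.uniform(0.0, 5.0, size=2000)
X = np.stack([simulate_device(a) for a in ages])
X1 = np.column_stack([X, np.ones(len(X))])   # add intercept term

# Ordinary least-squares fit: age ~ measured parameters.
coef, *_ = np.linalg.lstsq(X1, ages, rcond=None)

def estimate_age(measurement):
    return float(np.append(measurement, 1.0) @ coef)

suspect = simulate_device(age_years=3.0)
age_est = estimate_age(suspect)
print(f"estimated age: {age_est:.2f} years")
print("flag as recycled" if age_est > 0.5 else "looks fresh")  # 0.5-year cutoff is arbitrary
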
13:00 End of session
Lunch Break in Exhibition Area
Sandwich lunch