IRVS VLSI IDEA INNOVATORS
VLSI Project, Embedded Project, Matlab Projects and courses with 100% Placements

Thursday, May 5, 2011

Functional safety poses challenges for semiconductor design

To manage systematic and random failures, vendors have applied functional safety techniques at the system level for decades. As the capability to integrate multiple system-level functions into a single component has increased, there’s been a desire to apply those same practices at the semiconductor component or even subcomponent level.

Although the state of the art in functional safety is not yet well aligned with the state of the art in semiconductors, recent work on the IEC 61508 second edition and the ISO 26262 draft standard has brought improvements. Many challenges remain, however.

Texas Instruments and Yogitech, a company that designs and verifies mixed-signal system-on-chip solutions, are working together to address these challenges both in standards committees and on new TMS570 microcontroller designs. (See Figure 1 below for an example of current-generation designs.)

Standards and analysis

All elements that interact to realize a safety function or contribute to the achievement of a safety goal must be considered. Regrettably, the available standards are not consistent in application or scope. For example, IEC 61508 makes a general distinction between system design and software design, while ISO 26262 prescribes separate development processes for the system, hardware components and software components.

So how should we consider reusable subcomponent modules such as an analog/digital converter or processor core? “Hard” modules, such as A/D converters, have a fixed implementation and can easily be developed according to hardware component guidelines. “Soft” modules, such as processor cores, are delivered as source code and have no physical implementation until synthesized into a design.

Because a "soft" module blurs the line between hardware and software components, some levels of quantitative safety analysis cannot be performed until the module is synthesized. Trial synthesis against well-documented targets is therefore recommended so that reference safety metrics can be calculated, allowing potential users to evaluate a module's suitability for their design.
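
As a rough illustration (not from the article), one way to derive a reference permanent-fault failure rate for a soft module is to scale an assumed per-gate base rate by the gate count reported from trial synthesis. In the Python sketch below, both the per-gate figure and the gate count are hypothetical placeholders chosen for the example.

# Hypothetical sketch: scale an assumed per-gate base failure rate (in FIT,
# failures per 1e9 device-hours) by the gate count from a trial-synthesis
# report to obtain a reference metric. Values are illustrative placeholders.
FIT_PER_GATE = 0.0001      # assumed base rate per gate
gate_count = 150_000       # assumed gate count from the trial-synthesis report

reference_fit = FIT_PER_GATE * gate_count
print(f"Reference failure rate for the soft module: {reference_fit:.1f} FIT")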

To ensure functional safety, it is critical to understand the probability of random failure of the elements that constitute a safety function or that achieve a safety goal. In traditional analysis, each component in the safety function is typically treated as a black box, with a monolithic failure rate.
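
To make the black-box treatment concrete, here is a minimal Python sketch, assuming a simple series model in which the failure rate of the safety function is the sum of its elements' monolithic failure rates; the FIT values are made up for illustration.

# Hypothetical sketch: elements of a safety function treated as black boxes,
# each with a single monolithic failure rate in FIT. Values are illustrative.
component_fit = {
    "microcontroller": 50.0,
    "adc": 10.0,
    "power_supervisor": 5.0,
}

# Simple series model: the safety function's failure rate is the sum of the
# element failure rates.
total_fit = sum(component_fit.values())
print(f"Safety-function failure rate: {total_fit:.1f} FIT")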

Traditional failure rates are estimated based on reliability models, handbooks, field data and experimental data. Those methods often generate wildly different estimates; deltas of 10x to 1,000x are not uncommon. Such variation can pose significant system integration hurdles.

How can you perform meaningful quantitative safety analysis when component failure rates are not estimated under the same assumptions? One solution is to standardize the estimation of failure rates on a single generic model, such as the one presented in IEC TR 62380 (currently considered in the ISO 26262 draft guidelines). Another is to focus on ratiometric safety analyses: calculations that compare the ratio of detected faults to total faults rather than the absolute magnitude of the failure rate.
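
A minimal sketch of the ratiometric idea, assuming fault-injection results classified as detected or undetected; the counts below are made up. The point is that the coverage ratio is unaffected when every absolute failure rate is scaled by the same factor, which is why it sidesteps the disagreement between failure-rate models.

# Hypothetical sketch of a ratiometric metric: diagnostic coverage is the
# ratio of detected faults to total faults, so it does not depend on the
# absolute failure-rate scale. Counts are illustrative only.
detected_faults = 9_200
undetected_faults = 800
total_faults = detected_faults + undetected_faults

diagnostic_coverage = detected_faults / total_faults
print(f"Diagnostic coverage: {diagnostic_coverage:.1%}")

# Scaling every rate by a common factor k leaves the ratio unchanged.
k = 10.0
assert abs((k * detected_faults) / (k * total_faults) - diagnostic_coverage) < 1e-12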



This information is shared by www.irvs.info
