2000, IEEE Design & Test of Computers
2011
Modern electronics typically consist of microprocessors and other complex integrated circuits (ICs) such as FPGAs, ADCs, and memory. Like other components on a printed circuit board, they are susceptible to electrical, mechanical, and thermal modes of failure, but because of their materials, complexity, and roles within a circuit, accurately predicting a failure rate has become difficult, if not impossible. Development of these critical components has conformed to Moore's law, under which the number of transistors on a die doubles approximately every two years. This trend has held over the last four decades through reductions in transistor size, yielding faster, smaller ICs with greatly reduced power dissipation. Although this is great news for developers and users of high-performance equipment, including consumer products and analytical instrumentation, a crucial yet underlying reliability risk has emerged: semiconductor failure mechanisms, which are far worse at these...
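The Moore's-law doubling mentioned above is a simple exponential, N(t) = N0 · 2^(t/T) with T ≈ 2 years. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def transistor_count(n0, years, doubling_period=2.0):
    """Project a transistor count under Moore's law: the count doubles
    every `doubling_period` years (approximately two, per the trend)."""
    return n0 * 2 ** (years / doubling_period)

# Four decades of scaling is ~20 doublings: roughly a millionfold increase.
growth = transistor_count(1, 40) / transistor_count(1, 0)
```

This is only the headline trend; the abstract's point is that the same scaling that drives this growth worsens the underlying failure mechanisms.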
2003, Journal of Electronic Testing
The effectiveness of single-threshold IDDQ measurement for defect detection is eroded by the higher and more variable background leakage current of modern VLSIs. Delta IDDQ (ΔIDDQ) has been identified as one alternative for deep-submicron current measurements. Delta IDDQ is often coupled with voltage and thermal stress in order to accelerate the failure mechanisms. A major concern is the IDDQ limit setting under normal and stressed conditions. In this article, we investigate the impact of voltage and thermal stress on the background leakage. We calculate IDDQ limits for normal and stressed operating conditions of 0.18 µm n-MOSFETs using a device simulator. The intrinsic leakage current components of the transistor are analyzed, and the impact of technology scaling on the effectiveness of stressed ΔIDDQ testing is also investigated.
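The core delta-IDDQ idea — screening on vector-to-vector current differences rather than a single absolute threshold — can be illustrated with a minimal sketch (function name, data, and limit are hypothetical, not taken from the paper):

```python
def delta_iddq_flags(iddq_uA, delta_limit_uA):
    """Flag vector pairs whose IDDQ difference exceeds the delta limit.

    Single-threshold IDDQ compares each reading against one absolute
    limit; delta IDDQ instead examines vector-to-vector changes, which
    cancels the high, variable background leakage common to all vectors.
    """
    deltas = [abs(b - a) for a, b in zip(iddq_uA, iddq_uA[1:])]
    return [i for i, d in enumerate(deltas) if d > delta_limit_uA]

# Illustrative readings (microamps) with a ~50 uA background; the
# vector at index 3 activates a defect, so the deltas on either side
# of it exceed the limit.
readings = [51.0, 52.0, 50.5, 95.0, 51.5]
suspect_pairs = delta_iddq_flags(readings, delta_limit_uA=10.0)  # [2, 3]
```

The paper's actual concern, setting the delta limit itself under normal versus stressed conditions, is exactly the parameter `delta_limit_uA` that this toy takes as given.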
2000
Understanding the effectiveness of their production tests is a critical task for IC suppliers. Numerous trends suggest that conventionally applied test methods must change to meet future needs, which will make the task even more critical, and more difficult, in the future. This paper presents characterization and diagnostic data and ideas aimed at helping IC suppliers understand test effectiveness.
Solid-state lighting (SSL) products offer very high energy efficiencies (approximately 90%) and the possibility of very long lifetimes (on the order of 20,000–100,000 h, or 10–30 years). A complete SSL product is a complex optoelectronic system consisting of many interacting subsystems. Reliability assurance is therefore a complex task and requires an integrated system-level approach. The current state of the art is that the reliability of the light engines has received far more attention from SSL engineers than that of the driver electronics. This chapter provides an overview of reliability activities in the context of developing reliable driver electronics for SSL products.
2001, Journal of Electronic Testing
Device scaling has led to the blurring of the boundary between design and test: marginalities introduced by design tool approximations can cause failures when aggressive designs are subjected to process variation. Larger die sizes are more vulnerable to intra-die variations, invalidating analyses based on a number of given process corners. These trends are eroding the predictability of test quality based on stuck-at fault coverage. Industry studies have shown that an at-speed functional test with poor stuck-at fault coverage can be a better DPM screen than a set of scan tests with very high stuck-at fault coverage. Contrary to conventional wisdom, we have observed that a high stuck-at fault test set is not necessarily good at detecting faults that model actual failure mechanisms. One approach to address the test quality crisis is to rethink the fault model that is at the core of these tests. Targeting realistic fault models is a challenge that spans the design, test, and manufacturing domains: the extraction of realistic faults has to analyze the design at the physical and circuit levels of abstraction while taking into account the failure modes observed during manufacture. Practical fault models need to be defined that adequately model failing behavior while remaining amenable to automatic test generation. The addition of these fault models places increasing performance and capacity demands on already stressed test generation and fault simulation tools. A new generation of analysis and test generation tools is needed to address the challenge of defect-based test. We provide a detailed discussion of process technology trends that are responsible for next generation test problems, and present a test automation infrastructure being developed at Intel to meet the challenge.
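The stuck-at fault coverage metric discussed above can be made concrete with a toy fault simulator (this is a hypothetical illustration of the metric only, not Intel's tooling or the paper's method): each net is forced to a constant 0 or 1, and a fault counts as detected if some test vector makes the faulty output differ from the fault-free one.

```python
from itertools import product

# Hypothetical toy circuit: y = (a AND b) OR c, with internal net n1.
def evaluate(a, b, c, fault=None):
    """Evaluate the circuit; `fault` = (net, value) forces that net to
    a constant 0 or 1, modelling a single stuck-at fault."""
    def f(net, val):
        return fault[1] if fault is not None and fault[0] == net else val
    a, b, c = f("a", a), f("b", b), f("c", c)
    n1 = f("n1", a & b)
    return f("y", n1 | c)

def stuck_at_coverage(tests):
    """Fraction of single stuck-at faults detected by a test set: a
    fault is detected if some test makes the faulty circuit's output
    differ from the fault-free output."""
    faults = [(net, v) for net in ("a", "b", "c", "n1", "y") for v in (0, 1)]
    detected = sum(
        any(evaluate(*t) != evaluate(*t, fault=fl) for t in tests)
        for fl in faults
    )
    return detected / len(faults)

exhaustive = list(product((0, 1), repeat=3))  # full coverage on this toy
```

Note that this computes only the stuck-at metric itself; the abstract's argument is precisely that a high value of this number does not guarantee detection of realistic defect behaviors such as bridges or opens.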
2000, Journal of Electronic Testing
This paper deals with the introduction of a highly doped n-type diffusion, called a field stopper, in the bottom of each peripheral groove of a double MESA-GLASS AC switch (e.g. a Triac) to strengthen its reliability in more severe applications. A set of high-temperature reverse-bias (HTRB) tests shows that this new process flow multiplies the device lifetime by 11.3 compared with standard products when the junction temperature and bias are fixed at 150 °C and 800 V AC, respectively. An empirical acceleration model is proposed, characterized by an activation energy of about 0.48 eV and a voltage constant of about 6.7 mV⁻¹. Finally, we suggest that ionic conduction through the molding compound is the main physical phenomenon involved in the aging of the double MESA-GLASS AC switch.
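The abstract does not spell out the empirical model's functional form. A common Eyring-type form, combining Arrhenius temperature acceleration with an exponential voltage term, is sketched below; the form itself, the parameter names, and the interpretation of the voltage constant's units are assumptions for illustration (the abstract quotes Ea ≈ 0.48 eV and a voltage constant of about 6.7 mV⁻¹):

```python
from math import exp

K_B = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, v_use, v_stress,
                        ea_ev=0.48, gamma_per_v=6.7):
    """Assumed Eyring-type acceleration factor:
        AF = exp(Ea/k * (1/T_use - 1/T_stress)) * exp(gamma * (V_stress - V_use))
    Temperatures are given in Celsius and converted to Kelvin.  The
    default gamma treats the voltage constant as per-volt purely for
    illustration; the abstract's units are ambiguous as scraped.
    """
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    af_temp = exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_stress))
    af_volt = exp(gamma_per_v * (v_stress - v_use))
    return af_temp * af_volt

# e.g. HTRB stress at 150 degC vs. a 55 degC use condition, same bias:
af = acceleration_factor(55, 150, 230, 230)
```

With matched use and stress conditions the factor reduces to 1, and it grows monotonically with both the temperature and voltage deltas, which is the sanity check any such model should pass.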
1989
Advanced measurement methods using microelectronic test chips are described. These chips are intended to be used in acquiring the data needed to qualify Application-Specific Integrated Circuits (ASICs) for space use. Efforts were focused on developing the technology for obtaining custom ICs from CMOS/bulk silicon foundries. A series of test chips were developed: a parametric test strip, a fault chip, ...
Advancements in electronics research triggered a vision of a more connected world, touching unprecedented new fields to improve the quality of our lives. This vision has been fueled by electronics giants showcasing flexible displays for the first time at consumer electronics symposiums. Since then, the scientific and research communities have set about exploring possibilities for making flexible electronics. Decades of research have revealed many routes to flexible electronics, along with many opportunities and challenges. In this work, we focus on our contributions towards realizing a complementary approach to flexible inorganic high-performance electronic memories on silicon. This approach provides a straightforward method for capitalizing on the existing well-established semiconductor infrastructure, standard processes and procedures, and collective knowledge. Ultimately, we focus on understanding the reliability and functionality anomalies in flexible electronics and flexible solid-state memory built using the flexible silicon platform.
The results of the presented studies show that: (i) flexible devices fabricated using the etch-protect-release approach (with trenches included in the active area) exhibit ~19% lower safe operating voltage compared to their bulk counterparts; (ii) they can withstand prolonged bending (static stress) but are prone to failure under dynamic stress, as in repeated bending and re-flattening; (iii) flexible 3D FinFETs exhibit ~10% variation in key properties when exposed to out-of-plane bending stress, and out-of-plane stress does not resemble the well-studied in-plane stress used in strain engineering; (iv) resistive memories can be achieved on flexible silicon and their basic resistive property is preserved, but other memory functionalities (retention, endurance, speed, memory window) require further investigation; (v) flexible-silicon-based PZT ferroelectric capacitors exhibit record polarization, capacitance, and endurance (1 billion write-erase cycles) values for flexible FeRAMs, uncompromised retention under varying dynamic stress, and a minimum bending radius of 5 mm; and (vi) the combined effect of 225 °C, 260 MPa tensile stress, and 55% humidity under ambient conditions (21% oxygen) led to a 48% reduction in switching coercive fields, a 45% reduction in remnant polarization, an expected 22% increase in relative permittivity and normalized capacitance, and a reduced memory window (a 20% difference between switching and non-switching currents at 225 °C).
Journal of Low Power Electronics and Applications
The voltage required to damage a chip under an ESD test is often higher than several hundred volts. However, we have observed that a voltage below 6 V can still damage a chip and induce yield loss in the production line. This is because such a voltage is high enough to damage components of the low-voltage (1.8 V) circuits, yet still too low to turn on the ESD protection device.