Cheryl Tulkoff

Design for Excellence in Electronics Manufacturing



The authors hope this book helps you to manufacture your products with better reliability and greater customer satisfaction.

      2 Establishing a Reliability Program

      

      A comprehensive, well‐thought‐out reliability program ensures that companies can achieve their quality, reliability, and customer satisfaction targets on schedule and within budget. Reliability is the measure of a product's ability to perform a required function under stated conditions for an expected duration. By definition, reliability is specific to each application – there is no one‐size‐fits‐all definition. So, it can be useful to start with what reliability is not, along with some common myths about reliability.

       Myths of Reliability

       Myth 1: Don't worry about design, because most problems are caused by defects from suppliers. While many product failures can be traced back to supplier or manufacturing issues, the most severe warranty issues tend to be design related. Design flaws can affect every unit shipped to every customer. As a result, design issues are more likely to result in a recall and have a much more significant impact on a company's bottom line.

       Myth 2: The design is intended for more rugged environments; therefore, nothing can be learned from consumer electronics. The stresses experienced during the operation of a computer or mobile phone can far exceed the loads applied to military, avionics, and industrial designs. For example, a laptop computer left in the back of a car can experience temperatures as high as 80 °C on a hot summer day. Combine that with component temperatures that can exceed 100 °C during operation, and the product can be exposed to thermal cycles whose number and severity exceed those experienced in commercial and military applications.
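       To put that severity comparison in rough quantitative terms, the sketch below uses the Coffin‐Manson relation, a standard model for thermal‐cycling fatigue. The exponent and the benign reference temperature swing are illustrative assumptions, not figures from the text.

           # Sketch: relative damage per thermal cycle via the Coffin-Manson relation.
           # The exponent n is material dependent; n = 2.0 (a value often quoted for
           # solder fatigue) is an assumption here, as is the 40 C benign reference swing.

           def coffin_manson_af(delta_t, delta_t_ref, n=2.0):
               """Acceleration factor of one thermal cycle relative to a reference cycle."""
               return (delta_t / delta_t_ref) ** n

           consumer_swing = 100.0 - 20.0  # ~100 C operating peak down to 20 C room ambient
           benign_swing = 40.0            # assumed swing in a controlled environment

           af = coffin_manson_af(consumer_swing, benign_swing)
           print(f"One consumer cycle does ~{af:.1f}x the damage of one benign cycle")
           # One consumer cycle does ~4.0x the damage of one benign cycle

       Under these assumptions, each hot‐car‐to‐operation cycle of a laptop inflicts roughly four benign cycles' worth of fatigue damage.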

       Myth 3: Design verification is the same as product qualification. The purpose of design verification is to understand the margins of a design. Verification is typically performed on prototype units using small sample sizes (one to three units is common). Tests performed during design verification include highly accelerated life testing (HALT), corner‐case testing, UL testing, ship‐shock, etc. Once the design is proven to be robust, product qualification can then be performed. The purpose of product qualification is to demonstrate that design and manufacturing processes are sufficiently robust to ensure the desired quality and lifetime. Product qualification should be performed on pilot production units, not prototypes, and should use a sufficiently large sample size (5 to 20 units) to provide some confidence of capturing gross manufacturing issues. There are substantial risks in performing product qualification tests on prototypes. Prototypes that pass qualification may not be representative of production units. This increases the risk of qualification testing failing to capture potential field issues. If prototypes fail qualification testing, these failures may be irrelevant, and attempts at root‐cause identification may be a misuse of time and resources.
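       The sample‐size guidance can be made concrete with basic probability. If a gross manufacturing issue affects a fraction p of production units, the chance that a sample of n units contains at least one affected unit is 1 − (1 − p)^n. A minimal sketch, with the 20% defect fraction as an illustrative assumption:

           # Sketch: probability that a qualification sample of n units catches a
           # gross manufacturing issue affecting a fraction p of units.
           # p = 0.20 is an illustrative assumption, not a figure from the text.

           def detection_probability(p, n):
               """Probability that at least one of n sampled units shows the issue."""
               return 1.0 - (1.0 - p) ** n

           p = 0.20
           for n in (1, 3, 5, 20):
               print(f"n = {n:2d}: P(detect) = {detection_probability(p, n):.2f}")
           # n =  1: P(detect) = 0.20
           # n =  3: P(detect) = 0.49
           # n =  5: P(detect) = 0.67
           # n = 20: P(detect) = 0.99

       Under these assumptions, a prototype‐sized sample of one to three units catches the issue only about half the time at best, while a 20‐unit pilot sample catches it almost every time.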

       Myth 4: Highly accelerated life testing (HALT) can be used to demonstrate product reliability. HALT demonstrates product robustness. Only accelerated life testing can demonstrate reliability. What's the difference? Robustness is the measure of a product's ability to withstand stress. For example, one inch of steel is more robust than one mil of paper. This measurement is often defaulted to time zero, which can be either immediately after manufacturing or when the product first arrives at the customer. Reliability is the measure of a product's ability to perform a required function under stated conditions for an expected duration.

       Myth 5: Reliability is all predictive statistics. Companies that produce some of the most reliable products in the world spend a relatively insignificant percentage of their product development effort performing predictive statistical assessments. For example, many original equipment manufacturers (OEMs) in telecommunications, military, avionics, and industrial controls require a mean time between failures (MTBF) number from their suppliers. MTBF, sometimes referred to as average lifetime, defines the time over which the probability of failure is 63%. The basic process of calculating MTBF involves applying a constant failure rate to each part and summing the failure rates of all parts in the design. While there have been numerous claims over the years of improving on this number by applying additional failure rates or modifying factors to account for temperature, humidity, printed circuit boards, solder joints, etc., there are several flaws in this approach. The first is misunderstanding what the number means. The average engineer often expects a product with an MTBF of 10 years to operate reliably for a minimum of 10 years. In practice, such a product will likely fail well before 10 years. Second, the primary approach for increasing MTBF is to reduce parts count. This can be detrimental if the parts removed are critical for certain functions, such as filtering, timing, etc., that won't affect product performance under test but will influence product reliability in the field.

       Unlike many other elements of the design and development process, reliability requires thinking about failure. For example, successful reliability testing requires failure, unlike most other forms of testing, where the goal is to pass.
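       The 63% figure above follows from the constant failure rate (exponential) model that underlies the MTBF calculation: the probability that a unit has failed by time t is F(t) = 1 − e^(−t/MTBF). A minimal sketch of what an MTBF of 10 years actually implies:

           import math

           # Sketch: cumulative failure probability under the constant-failure-rate
           # (exponential) model assumed by MTBF arithmetic.

           def failure_probability(t_years, mtbf_years):
               """Probability that a unit has failed by time t."""
               return 1.0 - math.exp(-t_years / mtbf_years)

           mtbf = 10.0  # years
           print(f"P(failed by 10 y) = {failure_probability(10.0, mtbf):.1%}")  # 63.2%
           print(f"P(failed by  1 y) = {failure_probability(1.0, mtbf):.1%}")   # 9.5%

       Roughly 1 in 10 units fails within the first year alone, which is why treating MTBF as a guaranteed minimum lifetime is a misunderstanding.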

      

       Best Practices

      There are no universal best practices. Every company must choose the appropriate set of practices and implement a program that optimizes the return on investment in reliability activities. Reliability is all about cost‐benefit trade‐offs. Since reliability activities are not a direct revenue generator, they are strongly driven by cost. By increasing efficiency in reliability activities, companies can achieve lower risk at the same cost, and addressing reliability during the design phase is the most efficient way to improve that return on investment. Industry rules of thumb indicate the following relative costs of addressing an issue (Ireson and Coombs 1989); a small worked comparison follows the list below:

       Issue caught during design: 1× cost

       Issue caught during engineering: 10× cost

       Issue caught during production: 100× cost
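      As a rough illustration, the sketch below compares two hypothetical programs that each uncover 50 issues; the base cost and the issue distributions are assumptions chosen only to show the arithmetic.

           # Sketch: total cost of 50 issues under the 1x/10x/100x rule of thumb.
           # The base cost and the issue distributions are illustrative assumptions.

           BASE_COST = 1_000.0  # assumed cost of fixing one issue during design
           MULTIPLIER = {"design": 1, "engineering": 10, "production": 100}

           def total_cost(issues_by_phase):
               return sum(n * MULTIPLIER[phase] * BASE_COST
                          for phase, n in issues_by_phase.items())

           late = {"design": 10, "engineering": 20, "production": 20}
           early = {"design": 35, "engineering": 10, "production": 5}

           print(f"Late detection:  ${total_cost(late):,.0f}")   # $2,210,000
           print(f"Early detection: ${total_cost(early):,.0f}")  # $635,000

      Shifting most issue discovery into the design phase cuts the total by more than two‐thirds under these assumptions.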

       Additional Economic Drivers

       Use environment and design life

       Manufacturing volume

       Product complexity

       Margin and profit requirements

       Schedule and delivery needs

       Field performance expectations and warranty budget

      2.2.1 Best‐in‐Class Reliability Program Practices

       Establish a reliability goal and use it to determine reliability budgeting.

       Quantify the use environment. Use industry standards and guidelines when aspects of the use environment are common. Use actual measures when aspects of the use environment are unique, or there is a strong relationship with the end customer. Don't mistake test specifications for the actual use environment. Clearly define the median and realistic worst‐case conditions through close cooperation between marketing, sales, and the reliability team.

       Perform assessments appropriate for the product and end‐user. These assessments require an understanding of material‐degradation behavior, either by test to failure or by using supplier‐provided data. The recommended assessments include:

       – Thermal stress

       – Margin or safety‐factor demonstration (stress analysis that includes step‐stress tests (e.g. HALT) to define design margins)

       – Electrical stress (circuit, component derating, electromagnetic interference [EMI])

       – Mechanical stress (finite element analysis)

       – Applicable product characterization tests (not necessarily verification and validation tests)

       – Life‐prediction validation (accelerated life test [ALT])

       – Mechanical loading (vibration, mechanical shock)

       – Contaminant testing
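       As one concrete instance of the electrical‐stress assessments above, here is a minimal sketch of a component voltage‐derating check. The 80% derating guideline and the capacitor values are common rule‐of‐thumb assumptions, not requirements from the text.

           # Sketch: simple voltage-derating check for a component.
           # The 80% limit is a common rule of thumb, used here as an assumption.

           def derating_ok(applied, rated, max_ratio=0.80):
               """True if the applied stress stays within the derated limit."""
               return applied <= max_ratio * rated

           # Hypothetical capacitor: rated 16 V, sees 12 V on a 12 V rail.
           rated_v, applied_v = 16.0, 12.0
           status = "PASS" if derating_ok(applied_v, rated_v) else "FAIL"
           print(f"Applied/rated = {applied_v / rated_v:.0%} -> {status}")
           # Applied/rated = 75% -> PASS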

       Perform design review based on failure mode (DRBFM, a Toyota methodology). This readily identifies critical‐to‐quality (CTQ) parameters and