MODULE NO - 10
Introduction to Control Charts

Statistical process control
• Statistical process control is a collection of tools that, when used together, can result in process stability and variability reduction.
• A stable process is a process that exhibits only common variation, or variation resulting from inherent system limitations.
• A stable process is a basic requirement for process improvement efforts.

Advantages of a stable process
• Management knows the process capability and can predict performance, costs, and quality levels.
• Productivity will be at a maximum, and costs will be minimized.
• Management will be able to measure the effects of changes in the system with greater speed and reliability.
• If management wants to alter specification limits, it will have the data to back up its decision.
Categories of variation in piece part production
• Within-piece variation
• Piece-to-piece variation
• Time-to-time variation
Source of variation
Variation is present in every process due to a combination of the equipment,
materials, environment, and operator.
The first source of variation is the equipment. This source includes tool wear,
machine vibration, work holding-device positioning, and hydraulic and electrical
fluctuations. When all these variations are put together, there is a certain capability or precision within which the equipment operates.
The second source of variation is the material. Since variation occurs in the finished product, it must also occur in the raw material (which was someone else's finished product). Such quality characteristics as tensile strength, ductility, thickness, porosity, and moisture content can be expected to contribute to the overall variation in the final product.
A third source of variation is the environment. Temperature, light, radiation,
electrostatic discharge, particle size, pressure, and humidity can all contribute to variation in the product. In order to control this source, products are sometimes manufactured in
white rooms. Experiments are conducted in outer space to learn more about the effect of
the environment on product variation.
A fourth source is the operator. This source of variation includes the method by which the operator performs the operation. The operator's physical and emotional well-being also contribute to the variation. A cut finger, a twisted ankle, a personal problem, or a headache can make an operator's quality performance vary. An operator's lack of understanding of equipment and material variations, due to lack of training, may lead to frequent machine adjustments, thereby compounding the variability.
The above four sources account for the true variation. There is also a reported variation, which is due to the inspection activity. Faulty inspection equipment, the incorrect application of a quality standard, or too heavy a pressure on a micrometer can be the cause of the incorrect reporting of variation. In general, variation due to inspection should be one-tenth of the other four sources of variation. It should be noted that three of these sources are present in the inspection activity: an inspector, inspection equipment, and the environment.
Chance and Assignable Causes of Quality Variation
As long as these sources of variation fluctuate in a natural or expected manner, a stable pattern of many chance causes (random causes) of variation develops. Chance causes of variation are inevitable. Because they are numerous and individually of relatively small importance, they are difficult to detect or identify.
When only chance causes are present in a process, the process is considered to be in a state of statistical control. It is stable and predictable. However, when an assignable cause of variation is also present, the variation will be excessive, and the process is classified as out of control, or beyond the expected natural variation.
• A process that is operating with only chance causes of variation present is said to be in statistical control.
• A process that is operating in the presence of assignable causes is said to be out of control.
• The eventual goal of SPC is reduction or elimination of variability in the process by identification of assignable causes.
Control chart
• The control chart was developed to recognize constant patterns of variation.
• When observed variation fails to satisfy the criteria for controlled patterns, the chart indicates this.
• Control charts allow us to distinguish between controlled and uncontrolled variation.
A typical control chart has control limits set at values such that, if the process is in control, nearly all points will lie between the upper control limit (UCL) and the lower control limit (LCL).
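As a sketch, the limit arithmetic for an X bar chart can be written out in a few lines of Python. The subgroup data here are made up purely for illustration, and A2 = 0.729 is the standard table constant for subgroups of size 4:

```python
# Sketch: 3-sigma limits for an X bar chart from subgroup statistics.
subgroup_means = [2.06, 2.07, 2.05, 2.06, 2.08]    # hypothetical subgroup averages
subgroup_ranges = [0.02, 0.01, 0.02, 0.01, 0.01]   # hypothetical subgroup ranges

x_double_bar = sum(subgroup_means) / len(subgroup_means)   # centre line
r_bar = sum(subgroup_ranges) / len(subgroup_ranges)        # average range

A2 = 0.729                       # table constant for subgroup size n = 4
ucl = x_double_bar + A2 * r_bar  # upper control limit
lcl = x_double_bar - A2 * r_bar  # lower control limit
```

A plotted point outside [lcl, ucl] would then be treated as a signal of an assignable cause.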
Definition :
A control chart is a statistical tool used to detect the presence of assignable causes in a manufacturing system; in the absence of assignable causes, the process is influenced by a pure system of chance causes only.
Control charts are of two types: variable control charts and attribute control charts.
Variable control charts : A variable control chart is one used when it is possible to measure the quality characteristic of a product. The variable control charts are
(i) X bar – chart
(ii) R – chart
(iii) σ – chart
Attribute control charts : An attribute control chart is one in which it is not possible to measure the quality characteristic of a product, i.e., it is based on visual inspection only, like good or bad, success or failure, accepted or rejected. The attribute control charts are
(i) p - chart
(ii) np – chart
(iii) c – chart
(iv) u - chart
Objectives of control charts
• Control charts are used as one source of information to help decide whether an item or items should be released to the customer.
• Control charts are used to decide when to act: when a normal pattern of variation occurs, the process should be left alone; when an unstable pattern of variation occurs, indicating the presence of assignable causes, action is required to eliminate them.
• Control charts can be used to establish the product specification.
• Control charts provide a method of instructing operating and supervisory personnel (employees) in the techniques of quality control.
State of Statistical Control
A manufacturing process is said to be in a state of statistical control whenever it is operated upon by a pure system of chance causes. The points on the X bar chart and R chart will be distributed evenly and randomly around the center line, and all the points should fall between the UCL and LCL.
Control Charts - in Control VS Chance Variation
State of Lack of Control
A process is said to be in a state of lack of control whenever the state of statistical control does not hold. In such a state we infer the presence of assignable causes. The indications of lack of control are
• Points violating the control limits
• Run
• Trend
• Clustering
• Cycle pattern
Control Charts Interpretation
• Special: Any point above UCL or below LCL
• Run : > 7 consecutive points above or below centerline
• 1-in-20: more than 1 point in 20 consecutive points close to UCL or LCL
• Trend: 5-7 consecutive points in one direction (up or down)
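A rough sketch of how these rules could be checked programmatically. The function name and thresholds below simply follow the list above (using 7 points for the trend rule); the 1-in-20 rule is omitted for brevity:

```python
# Sketch: flag out-of-control signals in a sequence of plotted points.
def signals(points, ucl, lcl, center):
    flags = []
    # Special cause: any point above the UCL or below the LCL.
    if any(p > ucl or p < lcl for p in points):
        flags.append("special")
    # Run: more than 7 consecutive points on one side of the centerline.
    run = 1
    for a, b in zip(points, points[1:]):
        run = run + 1 if (a > center) == (b > center) else 1
        if run > 7:
            flags.append("run")
            break
    # Trend: 7 consecutive points moving in one direction
    # (i.e. 6 consecutive differences with the same nonzero sign).
    diffs = [1 if b > a else -1 if b < a else 0 for a, b in zip(points, points[1:])]
    streak = 1
    for d1, d2 in zip(diffs, diffs[1:]):
        streak = streak + 1 if d1 == d2 != 0 else 1
        if streak >= 6:
            flags.append("trend")
            break
    return flags
```

For example, `signals([1, 2, 3, 4, 5, 6, 7], ucl=10, lcl=0, center=4)` flags a trend, while an in-control alternating series returns no flags.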
Problem 1. Control charts for X bar and R are maintained on a certain dimension of a manufactured part, which is specified as 2.05 ± 0.02 cm. The subgroup size is 4. The values of X bar and R are computed for each subgroup. After 20 subgroups, ΣX bar = 41.283 and ΣR = 0.280. If the dimension falls above the USL, rework is required; if below the LSL, the part must be scrapped. Assume the process is in statistical control and normally distributed.
(a) Determine the 3σ control limits for the X bar and R charts.
(b) What is the process capability?
(c) What can you conclude regarding its ability to meet specifications?
(d) Determine the percentage of scrap and rework.
(e) What are your suggestions for improvement?
Solution. ΣX bar = 41.283, ΣR = 0.280, n = sample size = 4, number of subgroups (k) = 20.
Since 6σ′ is greater than (USL − LSL), the process is not capable of meeting the specification limits, i.e., 0.0407 > 0.04.
Note:
1. If 6σ′ is less than (USL − LSL), the process is capable of meeting the specification. There should not be any rejection; if rejection occurs, we can conclude that the process is not centered properly.
2. If 6σ′ is equal to (USL − LSL), the process is exactly capable of meeting the specification limits, but the tolerances are tight; we have to prefer a skilled operator for operating the machine.
3. If 6σ′ is greater than (USL − LSL), the process is not capable of meeting the specification limits. Rejections are inevitable.
Since the distribution is symmetric, the percentage of scrap is also 0.16%.
(ii) Widen the specification limits; for this we have to consult the design engineer on whether the product still performs its function satisfactorily.
(iii) Decrease the dispersion; for this we need a skilled operator, very good raw material and a new machine, which is difficult in practice.
(iv) Leave the process alone and do 100% inspection.
(v) Calculate the costs of scrap and rework; whichever is costlier, make it zero by changing the process centre accordingly.
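Problem 1's arithmetic can be verified with a short script. The table constants A2 = 0.729, d2 = 2.059, D3 = 0 and D4 = 2.282 for n = 4 are standard values assumed here:

```python
import math

# Problem 1: 20 subgroups of size 4; specification 2.05 +/- 0.02 cm.
sum_xbar, sum_r, k = 41.283, 0.280, 20
x_dbar = sum_xbar / k            # grand average (centre line of X bar chart)
r_bar = sum_r / k                # average range (centre line of R chart)

A2, d2, D3, D4 = 0.729, 2.059, 0.0, 2.282   # table constants for n = 4
ucl_x, lcl_x = x_dbar + A2 * r_bar, x_dbar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

sigma = r_bar / d2               # estimated process standard deviation
spread = 6 * sigma               # 6-sigma spread, to compare with USL - LSL = 0.04

# One-sided scrap/rework tail if the process were re-centred at 2.05:
z = 0.02 / sigma
tail = 0.5 * math.erfc(z / math.sqrt(2))    # one-sided normal tail area
```

The 6σ′ spread comes out at about 0.0408 (greater than the 0.04 tolerance width), and each tail at about 0.16%, matching the figures quoted above.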
Problem 2.
Subgroups of 5 items each are taken from a manufacturing process at regular intervals. A certain quality characteristic is measured and X bar, R values are computed for each subgroup. After 25 subgroups, ΣX bar = 357.5 and ΣR = 8.8. Assume that all the points are within the control limits on both charts. The specifications are 14.4 ± 0.4.
(a) Compute the control limits for the X bar and R charts.
(b) What is the process capability?
(c) Determine the percentage of rejections, if any.
(d) What can you conclude regarding its ability to meet the specifications?
(e) Suggest possible steps for improving the situation. (Note: n = 5; use the table constants.)
Development and Use of the X bar – S Chart with Real-life Data
Note: Although X bar and R charts are widely used, it is occasionally desirable to estimate the process standard deviation directly, instead of indirectly through the use of the range R. This leads to control charts for X bar and S, where S is the sample standard deviation. Generally, X bar and S charts are preferable to their more familiar counterparts, the X bar and R charts, when either
1. the sample size n is moderately large, say n > 10 or 12, or
2. the sample size n is variable.
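A sketch of the X bar – S computation in Python. The subgroup data are invented for illustration; c4 = 0.9400, B3 = 0, B4 = 2.089 and A3 = 1.427 are the standard table constants for n = 5:

```python
import statistics

# Hypothetical subgroups of size 5 (e.g. inside-diameter readings).
subgroups = [
    [74.030, 74.002, 74.019, 73.992, 74.008],
    [73.995, 73.992, 74.001, 74.011, 74.004],
    [73.988, 74.024, 74.021, 74.005, 74.002],
]
xbars = [statistics.mean(g) for g in subgroups]
svals = [statistics.stdev(g) for g in subgroups]   # sample standard deviations

x_dbar = statistics.mean(xbars)   # centre line of the X bar chart
s_bar = statistics.mean(svals)    # centre line of the S chart

A3, B3, B4, c4 = 1.427, 0.0, 2.089, 0.9400   # table constants for n = 5
ucl_x, lcl_x = x_dbar + A3 * s_bar, x_dbar - A3 * s_bar
ucl_s, lcl_s = B4 * s_bar, B3 * s_bar
sigma_hat = s_bar / c4            # unbiased estimate of the process sigma
```

Note that sigma_hat is slightly larger than s_bar, since c4 < 1.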
Problem – 6
The following data present the inside diameter measurements on piston rings, to illustrate the construction and operation of the X bar and S charts. The subgroup size is five.
In this example, the control limits for the X bar chart based on S bar are identical to the X bar chart control limits based on R bar. They will not always be the same; in general, the X bar chart control limits based on S bar will be slightly different from the limits based on R bar.
We can estimate the process standard deviation using the fact that S bar/c4 is an unbiased estimate of σ. Therefore, since c4 = 0.9400 for samples of size five, our estimate of the process standard deviation is σ̂ = S bar/0.9400.
This estimate is very similar to that obtained via the range method.
Problem – 7
A certain product has a specification of 120 ± 5. At present the estimated process average is 120 and σ′ = 1.5.
(a) Compute the 3σ limits for the X bar and R charts based on a subgroup size of 4.
(b) If there is a shift in the process average by 2%, what percentage of product will fail to meet the specification?
(c) What is the probability of detecting the shift on the X bar chart?
Problem – 8
Subgroups of 4 items each are taken from a manufacturing process at regular intervals. A certain quality characteristic is measured and X bar, R values are computed for each subgroup. After 25 subgroups, ΣX bar = 15,350 and ΣR = 411.1.
(a) Compute the control limits for the X bar and R charts.
(b) Assume all the points fall within the control limits on both charts. The specification limits are 610 ± 15. If the quality characteristic is normally distributed, what percentage of product would fail to meet the specifications?
(c) Any product that falls below the LSL will be scrapped and any above the USL must be reworked. It is suggested that the process be centered at a level such that not more than 0.1% of the product will be scrapped. What should be the aimed value of the process average to make the scrap exactly 0.1%?
(d) What percentage of rework can be expected with this centering?
Six sigma is several things. First, it is a statistical measurement. It tells us how good our products, services and processes really are. The Six Sigma method allows us to draw comparisons to other similar or dissimilar products, services and processes. In this manner, we can see how far ahead or behind we are. Most importantly, we see where we need to go and what we must do to get there. In other words, Six Sigma helps us to establish our course and gauge our pace in the race for total customer satisfaction.
For example, when we say a process is 6 sigma, we are saying it is best-in-class. Such a level of capability will only yield about 3.4 instances of nonconformance out of every million opportunities for nonconformance. On the other hand, when we say that some other process is 4 sigma, we are saying it is average. This translates to about 6,200 nonconformities per million opportunities for nonconformance. In this sense, the sigma scale of measure provides us with a “goodness micrometer” for gauging the adequacy of our products, services and processes. Six Sigma as a business strategy can greatly help us to gain a competitive edge. The reason for this is very simple: as you improve the sigma rating of a process, the product quality improves and costs go down. Naturally, the customer becomes more satisfied as a result. Let us remember there are no economics of quality: it is always cheaper to do the “Right Things, Right First Time”.
WHAT DOES “METRICS” STAND FOR?
M – Measure
E – Everything
T – That
R – Results
I – In
C – Customer
S – Satisfaction
Applicability of Six Sigma
The first step toward improving the sigma capability of a process is defining what the customer's expectations are. Next, you “map” the process by which you get the work done to meet those expectations. This means that you create a box diagram of the process flow, i.e., identify the steps within the process. With this done, you can now affix success criteria to each of the steps. Next, you would want to record the number of times each of the given success criteria is not met and calculate the total defects per opportunity (TDPO). Following this, the TDPO information is converted to defects per opportunity (DPO), which in turn is translated into a sigma value (σ). Now, you are ready to make direct comparisons, even apples and oranges if you want.
Three sigma would be equivalent to one misspelled word per 15 pages of text. Six sigma would be equivalent to one misspelled word per 300,000 pages, quite a difference indeed. Now, let's put this in real-world terms. Some corporations are already running at Six Sigma. It is self-evident that they are going to perform better over the long haul. For example, several prestigious Japanese companies (which are doing so well in the world marketplace) are currently running at or near the 6 sigma level.
SIGMA (σ)
Sigma is a letter in the Greek alphabet.
The term “sigma” is used to designate the distribution or spread about the mean (average) of any process or procedure.
The sigma rating indicates how often defects are likely to occur. The higher the sigma rating, the less likely a process will produce defects. As the sigma rating increases, costs go down, cycle time goes down and customer satisfaction goes up.
QUALITY IMPROVEMENT = PRODUCTIVITY IMPROVEMENT = COST REDUCTION
RIGHT FIRST TIME AND EVERY TIME
What is a defect?
A defect is any variation of a required characteristic of the product (or its parts) or services which is far enough from its target value to prevent the product from fulfilling the physical and functional requirements of the customer, as viewed through the eyes of your customer.
A defect is also anything that causes the processor or the customer to make adjustments.
Anything that dissatisfies your customer is a defect.
The Common Metric: Defects per Unit (DPU)
DPU is the best measure of the overall quality of the process.
Suppose, for example, that d = 10 defects are found in u = 500 units, and that there are m = 5 opportunities per unit for a defect to occur. Then the total number of opportunities = m × u = 5 × 500 = 2500.
Defects per opportunity, d.p.o. = d/(m × u) = 10/2500 = 0.004.
Expressed in terms of d.p.m.o. (defects per million opportunities), this becomes d.p.m.o. = d.p.o. × 10^6 = 4000 PPM.
From the d.p.o., we go to the normal distribution tables, calculate ZLT, and correct it to ZST by adjusting for the 1.5σ shift:
ZLT = 2.65; ZST = 2.65 + 1.5 = 4.15.
Number of opportunities = number of points checked.
If you don't check some points, they become passive opportunities. We should take only active opportunities into our calculation of d.p.o. and sigma level.
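The same arithmetic can be reproduced with the standard library's normal distribution; the 1.5σ shift applied here is the convention stated above:

```python
from statistics import NormalDist

d, u, m = 10, 500, 5                  # defects, units, opportunities per unit
dpo = d / (m * u)                     # 10 / 2500 = 0.004
dpmo = dpo * 1e6                      # 4000 PPM

z_lt = NormalDist().inv_cdf(1 - dpo)  # long-term Z from the normal tables
z_st = z_lt + 1.5                     # short-term Z, adjusted for the 1.5-sigma shift
```

Rounding to two places gives ZLT = 2.65 and ZST = 4.15, as in the worked figures.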
Cost / Quality
Six Sigma has shown that the highest-quality producer is the lowest-cost producer.
Process capability: the process potential index (Cp)
The greater the design margin, the lower the DPU. The design margin is measured by the capability index (Cp):

Cp = Maximum allowable range of characteristic / Normal variation of process = (U − L) / 6σ

The numerator is controlled by design engineering; the denominator is controlled by process engineering.
Ford says:
Cp should be more than 1.33 for regular production.
Cp should be more than or equal to 1.67 for new jobs.
Motorola says: Cp should be more than 2.0 for all jobs.
That implies (U − L) / 6σ = 2.0, or (U − L) = 12σ, i.e., the specification limits sit at ±6σ from the centre.
Hence the name Six Sigma.
A ±3σ process capability means 0.27%, i.e., 2700 PPM, shall be out of specification.
A ±6σ process capability shall mean about 2.5 parts per billion shall be outside the specification limits.
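As a check, the out-of-specification fraction for a perfectly centred normal process follows directly from the distribution (the ±6σ case works out to roughly 2 PPB, the same order of magnitude as the figure quoted above):

```python
from statistics import NormalDist

def out_of_spec_ppm(z):
    """PPM outside symmetric +/- z-sigma limits for a centred normal process."""
    return 2 * (1 - NormalDist().cdf(z)) * 1e6

ppm_3 = out_of_spec_ppm(3)         # about 2700 PPM, i.e. 0.27%
ppb_6 = out_of_spec_ppm(6) * 1000  # parts per billion outside +/- 6 sigma
```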
The Six Sigma methodology is a five-phase improvement cycle, employed in a project-oriented fashion:
1. Define
2. Measure
3. Analyze
4. Improve
5. Control
Step 1 : Define :
Define the customer, critical-to-quality (CTQ) issues, and the core business process involved. Define who the customers are, what their requirements are and what their expectations are. Define the project boundaries: the start and stop of the process. Define the process to be improved by mapping the process flow.
Step 2 : Measure :
Develop a data collection plan for the process. Collect data to determine the types of defects and the metrics. Measure the current performance of the core business process involved.
Step 3 : Analyze :
Analyze the data collected to determine the root causes of defects and opportunities for improvement. Identify the gaps between current performance and goal performance. Prioritize opportunities to improve. Identify sources of variation.
Step 4 : Improve :
Improve the process by designing creative solutions to fix and prevent problems. Create innovative solutions using technology and discipline. Develop and deploy an implementation plan.
Step 5 : Control :
Control the improvements to keep the process on the new course. Prevent reverting back to the “old way”. Control the development, documentation and implementation of an ongoing monitoring plan.
Step 1 : Define :
The problem definition has five major elements:
• The business case.
• Identifying the customers of the project, their needs and requirements.
• The problem statement.
• Project scope.
• Goals and objectives.
Step 2 : Measure
Calculating the sigma value for discrete data
The data being collected for this project is discrete. To calculate sigma using the discrete method, three items are measured:
1. Unit : the item produced.
2. Defect : any event that does not meet the customer's requirement.
3. Opportunity : a chance for a defect to occur.
Step 2 : Measure
The formula to calculate DPO:

DPO = Number of defects / (Number of opportunities × Number of units produced)

DPMO = DPO × 1,000,000
This calculation is called defects per million opportunities.
Step 2 : Measure
Performance measures
For the month of December: total number of rings produced = 86,702.
SIX SIGMA AS A GOAL (distribution shifted by ±1.5σ)
Sigma Level    Defects in PPM    Yield in %
2              308,538           69.1462
3              66,807            93.3193
4              6,210             99.3790
5              233               99.9767
6              3.4               99.99966
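The table can be regenerated under the stated ±1.5σ shift; with that convention, the defect rate at sigma level Z is the one-sided normal tail beyond Z − 1.5:

```python
from statistics import NormalDist

def shifted_ppm(sigma_level, shift=1.5):
    """Defects in PPM at a sigma level, allowing a 1.5-sigma mean shift."""
    return (1 - NormalDist().cdf(sigma_level - shift)) * 1e6

# Reproduce the table: sigma level -> defects in PPM.
table = {z: round(shifted_ppm(z), 1) for z in (2, 3, 4, 5, 6)}
```

This yields 308,538 PPM at 2σ, 66,807 at 3σ, and 3.4 at 6σ, matching the rows above; the yield column is simply 100 − PPM/10,000.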
Legends
“m” : number of opportunities
“N” : number of parts
“d” : number of defects
“dpu” : defects per unit
“dpo” : defects per opportunity
“Yft” : first-time yield
“Yrt” : rolled throughput yield
“dpmo” : defects per million opportunities
“TDPU” : total defects per unit
“Zlt” : long-term sigma level
“Zst” : short-term sigma level

Formulae
dpu = d / N
dpo = dpu / m
Yft = e^(−dpu)
dpmo = dpo × 10^6
TDPU = sum of dpu
Yrt = e^(−TDPU)
Ypo = Yrt^(1/m) = e^(−dpo)
dpo of the overall process = 1 − Ypo
Cpk = Zlt / 3
Cp = Zst / 3
Z = (USL − X bar) / σ
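A sketch exercising these formulae; the per-step defect counts below are made-up inputs for illustration:

```python
import math

# Hypothetical per-step data: (defects d, parts N, opportunities per part m)
steps = [(10, 500, 5), (4, 500, 2)]

dpus = [d / N for d, N, m in steps]      # dpu = d / N for each step
tdpu = sum(dpus)                         # TDPU = sum of dpu over all steps
yrt = math.exp(-tdpu)                    # rolled throughput yield
yft_step1 = math.exp(-dpus[0])           # first-time yield of step 1
dpo_step1 = dpus[0] / steps[0][2]        # dpo = dpu / m
dpmo_step1 = dpo_step1 * 1e6             # dpmo = dpo x 10^6
```

For these inputs, the rolled throughput yield e^(−0.028) is about 97.24%, and step 1 runs at 4000 DPMO.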
Process Capability
It is a measure of the inherent uniformity of the process. Before examining the sources and causes of variation and their reduction, we must measure the variation.
Yardsticks of process control:
Cp – measures the capability of a process.
Cpk – capability of the process, corrected for non-centering.
Process capability indices:
Cp is a measure of spread: Cp = Specification width (S) / Process width (P).
Cpk is a measure of the centering of the process as well as its spread.
Cpk is the minimum of Cpu = (USL − µ) / 3σ and Cpl = (µ − LSL) / 3σ.
The relationship between Cp and Cpk is Cpk = (1 − k) Cp,
where k is the correction factor, k = |T − µ| / (S/2), with T the target (design centre).
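A minimal sketch of Cp and Cpk, including a cross-check of the Cpk = (1 − k)Cp relationship; the specification values are invented for the example:

```python
def cp_cpk(usl, lsl, mu, sigma):
    """Cp and Cpk; Cpk = min(Cpu, Cpl)."""
    cp = (usl - lsl) / (6 * sigma)
    cpu = (usl - mu) / (3 * sigma)
    cpl = (mu - lsl) / (3 * sigma)
    return cp, min(cpu, cpl)

cp, cpk = cp_cpk(usl=2.07, lsl=2.03, mu=2.06, sigma=0.005)

# Cross-check the Cpk = (1 - k) Cp relation, with T the spec midpoint:
T = (2.07 + 2.03) / 2
k = abs(T - 2.06) / ((2.07 - 2.03) / 2)
assert abs(cpk - (1 - k) * cp) < 1e-9
```

Here the process spread is adequate (Cp ≈ 1.33) but off-centre running drops Cpk to about 0.67.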
In a company manufacturing and assembling printed circuit boards (PCBs), the rejection rate was found to be very high. Upon study, it was noticed that there are 16 stages in the assembly process of printed wiring boards (PWBs), out of which the rejection rate was highest during the wave soldering process compared to the other stages. The wave soldering stage is a critical stage of assembly.
Hence the wave soldering process stage was selected for the study, in order to reduce the process variability and to minimize the rejection rate. PCBs are classified as single-layered, bi-layered and multi-layered boards. In the assembly section of this company, two types of PCBs are being assembled.
On-line inspection data for the wave soldering process were collected and the attribute control charts (p and c) were plotted, which showed that the process was not in a state of statistical control.
The fraction rejected was found to be 0.2 (i.e., 20%), and the average defects per unit were 1.67 for multi-layered boards and 0.5 for bi-layered boards.
The on-line inspection data for the wave soldering process with the existing process parameter values were collected and sigma (σ) was calculated.
For bi-layered boards the sigma level was 3.39 and for multi-layered boards it was 3.33, as computed below.
On-line data for bi-layered boards – calculation
Number of defects = 71
Total no. of soldering points = 233,287
Defects per opportunity = total no. of defects / total no. of soldering points = 71/233,287 = 0.000304   …(1)
From the normal tables, the value of sigma is 3.39.
Table 2. Product Type: Multi-layered Boards
Standard Process Parameters
Baking Temperature 75˚C
Preheat Temperature 320˚C
Hot-air Temperature 340˚C
Solder Temperature 255˚C
Solder wave height 12.5mm
On-line data for multi-layered boards – calculation
Number of defects = 38
Total no. of soldering points = 106,828
Defects per opportunity = total no. of defects / total no. of soldering points = 38/106,828 = 0.00036   …(2)
From the normal tables, the value of sigma is 3.33.
After conducting a brainstorming session with the operators, foreman and the manager, the causes for the rejection of the PWAs were traced. During the inspection of the wave-soldered PWAs, it was found that the rejections were due to the following causes.
The necessary calculations were made for both bi-layered and multi-layered boards, and the collected data were analyzed with the help of a Pareto diagram, which showed that blow holes and solder bridges constituted the majority of rejections.
After discussions with the operators, foreman and manager, the causes for the blow holes and solder bridges were identified.
The cause-and-effect diagrams were drawn and the critical process parameters (control factors) that influence the wave soldering process were identified as:
Two noise factors, viz. ambient temperature and humidity, each with two levels, were considered for the experimentation. In order to optimize the identified wave soldering process parameters, the orthogonal array (OA) approach of DOE was applied. Three levels were fixed for each of the five critical factors, as shown in Tables 3 and 4 for bi-layered and multi-layered boards respectively. With the application of linear graphs, the number of experiments to be conducted is 27. The OA table and physical layout for the bi-layered and multi-layered boards were prepared, and 27 experiments were carried out for each board type separately, with a sample size of two each.
FACTORS               Level-1    Level-2    Level-3
Preheat Temperature   320˚C      325˚C      330˚C
Hot-air Temperature   340˚C      345˚C      350˚C
Solder Temperature    250˚C      255˚C      260˚C
Solder wave height    11.5 mm    12.5 mm    13.5 mm
Analysis of Data and Results
The experimental results were analyzed to establish the optimum process parameter values for baking temperature, pre-heat temperature, hot-air temperature, solder temperature and solder wave height. The responses were calculated for each of the experiments for both bi-layered and multi-layered PCBs. From the response matrices, the signal-to-noise (S/N) ratios were calculated using the formula:
S/N = 10 log((1/p) − 1)   …(3)
where p = 1 − fraction good   …(4)
For example, if 25% are good, p = 1 − (25/100) = 0.75, and S/N = 10 log((1/0.75) − 1) = −4.7712.
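Equation (3) in code form, reproducing the worked example:

```python
import math

def sn_ratio(fraction_good):
    """S/N = 10*log10((1/p) - 1), where p = 1 - fraction good."""
    p = 1 - fraction_good
    return 10 * math.log10((1 / p) - 1)

sn = sn_ratio(0.25)   # 25% good -> p = 0.75 -> about -4.7712
```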
The S/N ratios for each of the experiments were calculated for bi-layered and multi-layered boards, and the analysis of variance (ANOVA) was carried out for each. The optimal levels of the parameters were established based on the highest values of the S/N ratios. The optimized factor levels for the wave soldering process for both bi-layered and multi-layered PCBs are given below.
FACTORS Bi-layered Multi-layered
Baking Temperature Level-3 85˚C Level-3 85˚C
Preheat Temperature Level-3 310˚C Level-3 330˚C
Hot-air Temperature Level-3 330˚C Level-2 345˚C
Solder Temperature Level-3 250˚C Level-3 260˚C
Solder wave height Level-2 11.5mm Level-3 13.5mm
Confirmation Run
Further experiments were carried out with the optimized levels of the above parameters for both bi-layered and multi-layered PCBs, taking a sample size of 8 each, to check the validity of the optimized parameter levels.
The sigma levels were calculated again for the data collected and were found to be 4.1 for bi-layered and 4.125 for multi-layered PCBs, which shows that the process variability decreased and the process capability (Cp and Cpk) increased. The percentage of rejections was recalculated and was found to be reduced from 20% to 0.2%.
Conclusions
In this case study, both on-line quality control techniques (control charts, Pareto diagram and cause-and-effect diagram) and off-line quality control techniques (applied before the manufacture of the product) were used to control the process. The sigma level for the bi-layered PCB was improved from 3.39 to 4.1, and for the multi-layered PCB from 3.33 to 4.125.
Since the sigma levels were increased considerably using the orthogonal array approach of DOE, it is evident that the application of the DOE technique (during the early stages itself) is very effective in improving the quality of any process or product by optimizing the parameters, so as to yield a product that can be produced with minimum cost and minimum variation. The optimal levels for the factors obtained using the OA approach and the sigma levels for both bi-layered and multi-layered PCBs are summarized below.
Table 5: Comparison of the levels of five parameters for bi-layered and multi-layered PWAs
FACTORS Bi-layered PWA’s Multi-layered PWA’s
Present Optimum Present Optimum
Baking Temperature 75˚C 85˚C 75˚C 85˚C
Pre-heat Temperature 300˚C 310˚C 320˚C 330˚C
Hot-air Temperature 320˚C 330˚C 340˚C 345˚C
Solder Temperature 245˚C 250˚C 255˚C 260˚C
Solder Wave height 11mm 11mm 12.5mm 13.5mm
Table 6. Comparison of the Sigma Levels
Type of Printed Wiring Assembly    Present Level of Sigma    Improved Level of Sigma
Bi-layered                         3.39                      4.1
Multi-layered                      3.33                      4.125
With a smaller number of experiments, the orthogonal array approach of DOE achieves the same effective results as other DOE techniques such as full factorial, fractional factorial, randomized block design, etc.
In most Indian industries, the acceptance criterion is based only on the specification limits set by the designer. If a characteristic of a product or process falls between the specified limits, it is taken for granted that the product is uniformly good. But as per Taguchi's quality loss function (QLF), as the functional characteristic of a product deviates from the target value, it causes a loss to society.
The greater the deviation, the greater the loss, even if the characteristic is within the specified limits. Robust engineering methods are recommended at the early stages of product design to achieve higher sigma levels. Robust engineering also reduces the time to market with the help of two-step optimization.
The results obtained from small-scale laboratory experiments can be repeated under large-scale manufacturing conditions if the output characteristics are selected appropriately using the S/N ratio.
Introduction to six sigma
Problem 1
A press brake is set up to produce a formed part to a dimension of 3 ± 0.005". A process study reveals that the process limits are at 3.002" ± 0.006", i.e., at a minimum of 2.996" and a maximum of 3.008". After corrective action, the process limits are brought under control to 3.001" ± 0.002".
Question 1. Calculate the Cp and Cpk of the old process.
Question 2. Calculate the Cp and Cpk of the corrected process.
Answers:
Question 1.
Specification width (S) = 0.010; process width (P) = 0.012.
So Cp = S/P = 0.010/0.012 = 0.833.
Process centre µ = 3.002; design centre (D) = 3.000.
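Both questions can be checked with the (1 − k)Cp relation from the process capability section, taking the target as the design centre 3.000":

```python
def cp_cpk(spec_width, proc_width, mu, design_center):
    cp = spec_width / proc_width
    k = abs(design_center - mu) / (spec_width / 2)   # non-centering correction
    return cp, (1 - k) * cp

# Old process: spec 3 +/- 0.005 (S = 0.010), process 3.002 +/- 0.006 (P = 0.012)
cp_old, cpk_old = cp_cpk(0.010, 0.012, 3.002, 3.000)

# Corrected process: 3.001 +/- 0.002 (P = 0.004)
cp_new, cpk_new = cp_cpk(0.010, 0.004, 3.001, 3.000)
```

The old process gives Cp ≈ 0.833 and Cpk = 0.5; the corrected process gives Cp = 2.5 and Cpk = 2.0.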
Introduction to Robust Design and its applications
Outline
• Introduction
• Quality Control
• Quality Engineering
• Taxonomy of Quality
• Evaluation of Quality Loss
• Tools used in Robust Design
• Process Capability
• Conclusions
Introduction
Dr. G. Taguchi:
Dr. Taguchi was born in 1924. He started his career at the Naval Institute of Japan between 1942 and 1945, then worked with the Ministry of Public Health and Welfare. Later he joined the Ministry of Education and subsequently moved to Nippon Telephone in Japan.
Dr. Taguchi is the inventor of the famous orthogonal array (OA) techniques for the design of experimentation. He published his first book on OA in 1951. Taguchi also visited the Indian Statistical Institute between 1954 and 1955, and he wrote a book on design of experiments.
Dr. Taguchi's philosophy is Robust Engineering Design. He blended statistics with engineering applications and pioneered work in industrial experimentation. He is also the innovator of the quality loss function concept and promoted robust design; related to this, he propagated the signal-to-noise ratio phenomenon in SPC. He developed a three-stage off-line QC methodology, viz., system design, parameter design, and tolerance design.
Genichi Taguchi, a Japanese statistician, is at the forefront of the pioneers of quality control. His major contribution is the concept of Robust Design, which is acclaimed as the most significant one throughout the world. His concepts have revolutionized the very idea of quality control, and hence these techniques are widely and successfully applied by the manufacturing and service industries in advanced countries like Japan, the US, the UK, etc. In the early 1970s, Taguchi developed the concept of the Quality Loss Function.
• Quality is defined as "fitness for use".
• As per G. Taguchi, the quality of a product is the (minimum) loss imparted by the product to the society from the time the product is shipped.
• Quality plays a vital role in all walks of life, starting from the household to big engineering and service industries.
Quality Control
• It is the activity of ensuring the manufacture of good-quality products which satisfy the customers' needs.
• Quality control techniques are broadly classified into on-line and off-line.
• The on-line quality control techniques are applied to monitor a manufacturing process and to verify the quality levels of goods already produced.
• The off-line quality control techniques are applied to improve the quality of a product/process at the design stage itself, i.e., before the products are manufactured and made available to customers.
For any company to compete in the world-class market scenario, its leaders must understand, digest, disseminate, and guide the implementation of simple and powerful tools that go well beyond the traditional quality control techniques. They are:
• Design of Experiments (DOE)
• Multiple Environment Over Stress Test (MEOST)
• Quality Function Deployment (QFD)
• Total Productive Maintenance (TPM)
• Benchmarking
• Poka-Yoke
• Next Operation As a Customer (NOAC)
• Supply Chain Management (SCM)
• Failure Mode and Effect Analysis (FMEA) and
• Cycle Time Reduction
The Design of Experiments (DOE) is one of the most powerful techniques that helps to achieve world-class quality.
Quality Engineering
It consists of the activities directed at reducing variability and thereby reducing the loss. The fundamental principle of robust design is to improve the quality of a product/process by minimizing the effect of the causes of variation without eliminating the causes. This is achieved by optimizing the product/process design to make the performance minimally sensitive to the causes of variation. The robust design process encompasses three stages, namely:
System Design – It is the process of applying scientific and engineering knowledge to produce a basic functional prototype design.
• Development of a system to function under an initial set of nominal conditions.
• Requires technical knowledge from science and engineering.
• Originality / invention / marketing strategy.
Parameter Design – It is the process of investigation towards identifying the settings of design parameters that optimize the performance characteristics and reduce the sensitivity of the engineering design to the sources of variation (noise factors).
• Determination of control factor levels so that the system is least sensitive to noise.
• Involves the use of orthogonal arrays and the signal-to-noise ratio.
• Improves quality at minimal cost.
Tolerance Design – It is the process of determining the tolerances around the nominal settings identified in the parameter design process.
• Specification of allowable ranges for deviations in parameter values.
• Involves cause detection and removal of causes.
• Typically increases product cost. However, cost may be minimized by experimenting to find tolerances that can be relaxed without adversely affecting quality.
Taxonomy Of Quality
There are three fundamental issues regarding quality:
· To evaluate the quality,
· To improve quality cost-effectively, and
· To monitor and maintain quality cost-effectively.
Quality characteristics are classified into two:
· Variable characteristics and
· Attribute characteristics
Variable characteristics can be classified into three types:
· Nominal-The-Best: a characteristic with a specific target value. Examples: dimension, clearance, viscosity, etc.
· Smaller-The-Better: here the ideal target value is zero. Examples: wear, shrinkage, deterioration, etc.
· Larger-The-Better: the ideal target value is infinity. Examples: strength, life, fuel efficiency, etc.
· Attribute characteristics: based on visual inspection. Examples: appearance, taste, good/bad, etc.
• Traditional interpretation of quality loss
• Taguchi's interpretation of quality loss
Traditional interpretation of quality loss (step function)
Taguchi emphasizes that the loss incurred by a product which falls just inside the LSL and one which falls just below the LSL is almost the same. The problem with most of the traditional measures of quality (rework rate, scrap rate, Cp, Cpk, etc.) is that by the time we get these figures, the product is already either in production or in the hands of the customer.
Taguchi’s interpretation of Quality Loss Function (TQLF)
• The objective of the TQLF is the quantitative evaluation of the loss resulting from the functional variation of the output quality characteristic from the target value.
• To establish the characteristics of Taguchi's QLF, the two important aspects to be considered are consumer tolerance and consumer loss.
TQLF For Nominal-The-Best (NTB):
Taguchi's QLF for the case of NTB is given by
L(y) = K (y − T)²
where
L(y) = loss in rupees per unit of product for the output characteristic 'y'
T = target value of 'y'
K = constant of proportionality that depends upon the financial importance of the output characteristic.
Taguchi recognized the loss as a continuous function; it does not occur suddenly. In the quadratic representation of the QLF, L(y) = K (y − T)², the loss L(y) is minimum at y = T and increases as 'y' deviates from the target value T.
∴ K = L(y) / (y − T)² = A₀ / ∆₀²
where '∆₀' is the consumer tolerance and 'A₀' is the consumer loss, which are shown in the figure below.
TQLF for Nominal-The-Best
TQLF For Smaller-The-Better (STB)
When the output characteristic is to be a minimum value, the loss function is characterized as "Smaller-The-Better". Examples of STB are shrinkage, pollution, radiation leakage, etc.
The ideal value for this is zero. The loss function is slightly different, but the procedure is the same as for Nominal-The-Best.
For STB, the loss function is given by
L(y) = K y² and K = A₀ / y₀²
TQLF For Larger-The-Better (LTB)
The loss function for LTB is the reciprocal of the Smaller-The-Better case and is given by L(y) = K (1/y²) and K = A₀ y₀². This is shown in the figure below. Some examples of LTB are strength of a permanent adhesive, strength of a welded joint, fuel efficiency, corrosion resistance, etc. The ideal value for LTB is infinity.
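The three loss functions can be sketched in a few lines of Python (a minimal illustration; the numeric figures in the example are hypothetical, chosen only to show that the loss equals the consumer loss A₀ at the tolerance edge):

```python
def loss_ntb(y, target, a0, delta0):
    """Nominal-the-best: L(y) = K*(y - T)^2 with K = A0 / delta0^2."""
    return (a0 / delta0**2) * (y - target) ** 2

def loss_stb(y, a0, y0):
    """Smaller-the-better: L(y) = K*y^2 with K = A0 / y0^2."""
    return (a0 / y0**2) * y**2

def loss_ltb(y, a0, y0):
    """Larger-the-better: L(y) = K/y^2 with K = A0 * y0^2."""
    return (a0 * y0**2) / y**2

# Hypothetical: target 10, consumer tolerance +/- 0.5, consumer loss Rs. 200
print(loss_ntb(10.0, 10.0, 200, 0.5))   # 0.0 at the target
print(loss_ntb(10.5, 10.0, 200, 0.5))   # 200.0 at the tolerance limit
```

In each case the loss is zero (or minimal) at the ideal value and reaches A₀ exactly at the consumer tolerance, which is how K is calibrated.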
It is, therefore, essential to analyze and quantify the losses to society using the TQLF. This will help in identifying the level of one's own quality in comparison with the competitors' quality, so that remedial actions for improvement can be taken, if necessary, to compete in present-day global competition. The TQLF can also be used to determine the optimum tolerances for the levels of the optimized parameters determined by the parameter design technique.
Tools Used in Robust Design
• Signal- To - Noise Ratio (S/N) - which measures quality
• Orthogonal Arrays – which are used to study many design parameters simultaneously
The quality characteristic is denoted by p, a fraction assuming values between 0 and 1. When the fraction defective is p, on average we have to manufacture 1/(1 − p) pieces to produce one good piece. For every good piece produced there is waste, and hence a loss, equivalent to the cost of processing {1/(1 − p) − 1} = p/(1 − p) pieces.
Thus, the quality loss Q is given by
Q = K (p / (1 − p))
where K is the cost of processing one piece. Ignoring K, we obtain the objective function to be maximized in the decibel scale as
η = −10 log₁₀ [p / (1 − p)]
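This loss and its decibel-scale objective (the standard omega transform of a fraction defective) can be sketched as follows; the function names are ours:

```python
import math

def quality_loss(p, k=1.0):
    """Quality loss from fraction defective p: Q = K * p / (1 - p)."""
    return k * p / (1 - p)

def snr_db(p):
    """Objective to maximize, in decibels: -10 * log10(p / (1 - p))."""
    return -10 * math.log10(p / (1 - p))

print(snr_db(0.5))   # 0.0 dB when half the pieces are defective
print(snr_db(0.1))   # ~9.54 dB; smaller p gives a larger objective
```

Maximizing the decibel objective is equivalent to minimizing the fraction defective, which is why it is convenient as an S/N-style figure of merit.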
(Figure: Ideal Function vs. Reality – the output response y plotted against the input energy M for the ideal case and for actual behavior.)
The most common way of expressing a design's ideal function is
y = M
where y = output response and M = input energy.
Signal factors – these are the parameters set by the user of the product to express the intended value for the response of the product. The signal factors are selected by the design engineer based on the engineering knowledge of the product being developed.
Control factors – these are any design parameters of a system that engineers can specify by nominal values and maintain cost-effectively.
Noise factors – these are the variables that affect the system function and are either uncontrollable or too expensive to control.
The Engineered System consists of four (4) components:
Six Sigma Process Capability
It is a measure of the inherent uniformity of the process. Before examining the sources and causes of variation and their reduction, we must measure the variation.
Yard-sticks of process control:
Cp – measures the capability of a process
Cpk – capability of the process, corrected for non-centering
Process Capability Indices:
Cp is a measure of spread:
Cp = Specification width (S) / Process width (P)
Cpk is a measure of the centering of the process as well as its spread:
Cpk = minimum of Cpu = (USL − µ)/3σ and Cpl = (µ − LSL)/3σ
The relationship between Cp and Cpk is Cpk = (1 − k) Cp,
where k is the correction factor for non-centering, k = |D − µ| / (S/2), with D the design center.
The manufacturing environment, by its very nature, relies on two types of measurements to verify quality and to quantify performance: (1) measurement of its products, and (2) measurement of its processes. Therefore, product evaluation and process improvement require accurate and precise measurement techniques. Because all measurements contain error, in keeping with the basic mathematical expression
Observed value = True value + Measurement error,
understanding and managing measurement error, generally called Measurement Systems Analysis (MSA), is an extremely important function in process improvement (Montgomery, 2005).
MSA is a comprehensive set of tools for the measurement, acceptance, and analysis of data and errors, and includes such topics as statistical process control, capability analysis, and gauge repeatability and reproducibility, among others (Besterfield, 2004). MSA recognizes that measurements are made on both simple and complex products, using both physical devices and visual inspection methods that rely heavily on human judgment of product attributes. The purpose of MSA is to statistically verify that current measurement systems provide:
– Representative values of the characteristic being measured
– Unbiased results
– Minimal variability
Organizational Uses of MSA are:
• Mandatory requirement for QS 9000 certification.
• Identify potential sources of process variation.
• Minimize defects.
• Increase product quality.
All measurement processes contain some amount of variation. The variation can come from one of two sources: (1) differences between parts made by any process, and (2) imperfections in the method of obtaining the measurements. Measurement system errors are of two types:
• Accuracy: the difference between the observed measurement and the actual value.
• Precision: the variation that occurs when measuring the same part repeatedly with the same device.
Any measurement system can have any of these problems. One could have a measurement device that measures parts with very little variation but is not accurate. One could also have an instrument where the average of the measurements is very close to the actual value but which has a large variance (not precise). Finally, one could have a device that is neither accurate nor precise.
Measurements are said to be accurate if their tendency is to center around the actual value of the entity being measured. Measurements are precise if they differ from one another by a small amount.
Measured Value = ƒ(TV + Ac + Rep + Rpr)
where:
TV = true value
Ac = gauge accuracy
Rep = gauge repeatability
Rpr = gauge reproducibility
Measurement system components:
• Equipment or gauge
– Type of gauge
• Attribute: go/no-go gauges, vision systems (part present or not present)
• Variable: calipers, probes, coordinate measuring machines
– Unit of measurement: usually at least 1/10 of the tolerance
• Any system having greater than 30% gauge R&R is considered inadequate. As seen in Table 1, the %R&R of all four characteristics is inadequate.
• Investigation of the measurement system led to a subsequent reduction of the %R&R in three of the four characteristics, to between 12% and 23%.
• Further investigation of the fourth characteristic, inlet hole diameter, led the examiners to a manufacturing problem. The team discovered high ovality in the inlet hole, which was caused by the cutting tool. The tool was modified to reduce the ovality.
• Benefits of the study.
1. Reduced measurement variation.
2. Increased operator confidence regarding their aptitude for conducting gauge R&R studies.
3. Paved the way for further studies within the firm.
This is a technique to measure the precision of gages and other measurement systems. The name of the technique originated from the operation of a gage by different operators measuring a collection of parts. The precision of the measurements using the gage involves at least two major components: the systematic difference among operators and the differences among parts. Gage R&R analysis is a technique to quantify each component of the variation so that we can determine what proportion of the variability is due to the operators and what is due to the parts.
A typical gage R&R study is conducted as follows. A quality characteristic of an object of interest (parts, or any well-defined experimental units for the study) is selected. A gage or a certain instrument is chosen as the measuring device. J operators are randomly selected, and I parts are randomly chosen and prepared for the study.
Each of the J operators is asked to measure the characteristic of each of the I parts r times (repeatedly measuring the same part r times). The variation among the r replications of a given part measured by the same operator is the repeatability of the gage. The variability among operators is the reproducibility.
Gage repeatability and reproducibility studies determine how much of your observed process variation is due to measurement system variation. The overall variation is broken down into three categories: part-to-part, repeatability, and reproducibility. The reproducibility component can be further broken down into its operator and operator-by-part components.
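The decomposition described above can be sketched with a deliberately simplified variance-components calculation (a full study would use ANOVA with the operator-by-part interaction; the data here are made-up, and the approach of pooling within-cell variances for repeatability and taking the variance of operator means for reproducibility is a simplification):

```python
from statistics import mean, pvariance

# measurements[operator][part] = list of r repeated readings (made-up numbers)
measurements = {
    "op1": {"p1": [10.1, 10.2, 10.1], "p2": [10.6, 10.5, 10.6]},
    "op2": {"p1": [10.3, 10.2, 10.3], "p2": [10.7, 10.7, 10.6]},
}

# Repeatability: pooled variance of repeated readings within each operator/part cell
cells = [trials for parts in measurements.values() for trials in parts.values()]
repeatability_var = mean(pvariance(c) for c in cells)

# Reproducibility: variance among the operator averages
op_means = [mean(t for trials in ops.values() for t in trials)
            for ops in measurements.values()]
reproducibility_var = pvariance(op_means)

# Total gage (measurement system) variance
gage_var = repeatability_var + reproducibility_var
```

Comparing `gage_var` against the part-to-part variance is what the percentage metrics in the next section formalize.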
Gage R&R Studies
Gage repeatability and reproducibility (R&R) studies involve breaking the total gage
variability into two portions:
repeatability, which is the basic inherent precision of the gage, and
reproducibility, which is the variability due to different operators using the gage.
• 5.15 Sigma = 5.15 × the factor standard deviation. The multiplier 5.15 was developed empirically to approximate the spread of the gage population distribution (it covers about 99% of a normal distribution).
• % Contribution = 100 × (factor variance / total variance), the percent contribution of each factor based upon its variance. For repeatability: 100 × repeatability variance / total variance.
• % Study Variation = 100 × (5.15 × factor standard deviation) / (5.15 × total standard deviation). For repeatability: 100 × (5.15 × repeatability standard deviation) / (5.15 × total standard deviation).
• % Tolerance = 100 × (5.15 × factor standard deviation) / tolerance. For repeatability: 100 × (5.15 × repeatability standard deviation) / tolerance.
• % Process Variation = 100 × (5.15 × factor standard deviation) / process variation. For repeatability: 100 × (5.15 × repeatability standard deviation) / process variation.
• Number of Distinct Categories = the part standard deviation divided by the total gage standard deviation (commonly multiplied by 1.41 and truncated to an integer).
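These summary figures follow mechanically from the component standard deviations. A sketch (the input numbers are hypothetical, and the 1.41 multiplier for distinct categories follows the common convention noted above):

```python
import math

def grr_metrics(sd_repeat, sd_reprod, sd_part, tolerance):
    """Gage R&R summary figures from component standard deviations,
    using the 5.15-sigma study-variation convention."""
    var_total = sd_repeat**2 + sd_reprod**2 + sd_part**2
    sd_gage = math.sqrt(sd_repeat**2 + sd_reprod**2)   # combined R&R
    sd_total = math.sqrt(var_total)
    return {
        "%contribution_grr": 100 * sd_gage**2 / var_total,
        "%study_variation_grr": 100 * sd_gage / sd_total,   # 5.15 factors cancel
        "%tolerance_grr": 100 * (5.15 * sd_gage) / tolerance,
        "ndc": int(1.41 * sd_part / sd_gage),               # distinct categories
    }

m = grr_metrics(sd_repeat=0.01, sd_reprod=0.005, sd_part=0.05, tolerance=0.3)
```

With these illustrative inputs the gage consumes only about 5% of the total variance and resolves 6 distinct categories, which would pass the 30% adequacy criterion mentioned earlier.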
A specification is a definition of a design. The design remains a concept in the mind of the designer until he defines it through a verbal description, sample, drawing, writing, etc. It defines in advance what the manufacturer expects to make, and it defines what the consumer can expect to get. The specification serves as an agreement between manufacturer and consumer on the nature of the characteristics of the product.
It is helpful to recognize the distinction between a design specification and an inspection specification. The design specification deals with what is desired in a manufactured article, i.e., it deals with the specification function. In contrast, the inspection specification deals with the means of judging whether what is desired is actually attained; in other words, it deals with the inspection function (quality of conformance).
Specifying the tolerance. It is practically impossible to manufacture one article exactly like another, or one batch like another. Variability is one of the fundamental concepts of modern quality control. Therefore, the range of permissible differences in a dimension has been standardized under the name limits. The limits of size for a dimension of a part are the two extreme permissible sizes for that dimension (high limit and low limit).
Design engineers have a tendency to specify tight tolerances for the following reasons.
1. Lack of time. Tolerances are to be set on many dimensions; therefore, the designers may not have sufficient time to give much attention to the tolerances on every dimension. To be on the safer side, the designers are tempted to specify much closer tolerances.
2. The concept of factor of safety. Designers have been taught to allow for the unexpected or the unusual, i.e., overloading of the machine, use for an unintended purpose, change in the conditions of use. The designers may assume a larger factor of safety to anticipate failure of conformance by the shop.
3. Setting tolerances assuming ideal conditions. Design engineers tend to specify tolerances with reference to somewhat ideal conditions, assuming good machines, well-trained operators, skilled supervision, and good working conditions. Or they use reference tables which may tacitly assume such factors. In actual practice, nearly ideal conditions may be obtained during some part of the process, but almost never for an extended period of time.
4. Lack of knowledge of the production process. The designers may not have sufficient knowledge about the production process. Therefore, they may design the product with little or no critical consideration of the various production problems involved in meeting the specified tolerances.
5. Lack of information about the process capability. In some cases the designers do not have information regarding the production facilities available in the plant, their condition, and their process capability.
6. Lack of awareness of the quantitative effect of tolerance decisions on factory economy.
7. Tendency of shop personnel to loosen the tolerances. The designers may be conscious of the difference between the blueprint tolerances and those which are actually enforced. Therefore, in order to get what they think they need, they tend to specify closer tolerances than they believe necessary.
Definition: It is not possible to manufacture each and every item identically. It is customary to allow a certain variability in the measured quality characteristic, called the tolerance. Generally, in any industry, the design section specifies what is to be produced and sets the dimensions and tolerances of the characteristic. The responsibility of the manufacturing department is to manufacture the items according to the specification laid down by the design department. The inspection department checks whether the product is meeting the specification given by the design department; unless there is proper coordination, it is difficult to manufacture the item exactly.
While establishing the specification limits, the following points must be considered:
1. Functional utility of the product.
2. Capability of the product and process.
3. Inspection procedures.
Tolerance spread: T = (U − L). It is set by the engineering design section to define the minimum and maximum values allowable for the characteristic.
Conventional (arithmetic) tolerancing:
1. 100% interchangeability of the components is possible for assembly.
2. The tolerances of the interacting dimensions are smaller than necessary.
3. No assumptions are necessary.
4. No special process control procedures are required.

Statistical tolerancing:
1. A small percentage of the assemblies will fall outside the tolerance limits, but this can be corrected with selective assembly.
2. This method permits larger tolerances on the interacting dimensions.
3. The interacting dimensions must be independent of each other, and each characteristic must follow a normal distribution.
4. The process average of each component must be maintained at the nominal dimension value (target value).
Statistical Theory of Tolerances
Statistical Tolerance
Use of statistical methods of tolerancing can lead to economical production when we are dealing with interacting dimensions. Interacting dimensions are those which mate or merge with other dimensions to create a final result.
A dimension of an assembled product may be the sum of the dimensions of several parts; an electrical resistance may be the sum of several electrical resistances of parts; a weight may be the sum of a number of part weights. In such situations it is necessary to determine the relationship between the component tolerances and the tolerance of the sum.
The statistical theory of tolerances results in larger component tolerances with no change in the manufacturing process and no change in the assembly tolerance. Larger tolerances increase the production output, minimize waste of material and productive effort, and are generally responsible for a reduction in manufacturing costs. This is the effect of the statistical approach.
If an overall tolerance is fixed but is not being met, the problem is to decide which component tolerances should be reduced to meet the overall tolerance. The statistical theorem can help to determine which of the component tolerances have the greatest effect on the overall tolerance. This information, when coupled with economic considerations on achieving a smaller tolerance, can form the basis for a decision.
A risk involved in the use of the statistical theorem is that an assembly may result which falls outside the assembly tolerance. However, the chance can be calculated and a judgment made on whether or not to accept the risk. The probability that the assembly length will fall outside the tolerance limits can be found by analyzing the area under the normal curve for assembly lengths.
1. The component dimensions are independent and the components are assembled randomly.
2. Each component dimension should be normally distributed. Some departure from this assumption is permissible.
3. The actual average for each component is equal to the nominal value stated in the specification.
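Under these assumptions the assembly tolerance is the root-sum-of-squares of the component tolerances rather than their arithmetic sum. A small sketch contrasting the two (the component values are illustrative only):

```python
import math

def assembly_tolerance(component_tols):
    """Statistical (root-sum-of-squares) vs. arithmetic (worst-case)
    assembly tolerance for independent, normally distributed components."""
    statistical = math.sqrt(sum(t**2 for t in component_tols))
    worst_case = sum(component_tols)
    return statistical, worst_case

# Four components, each with a +/- 0.02 tolerance (illustrative values)
stat, worst = assembly_tolerance([0.02, 0.02, 0.02, 0.02])
# statistical: 0.04 ; worst case: 0.08
```

Equivalently, for a fixed assembly tolerance, the statistical approach lets each component carry a tolerance roughly √n times larger than the arithmetic allocation, which is the economic benefit described above.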
Problem 1.
Manufacturer A produces a metal piece whose dimension is normally distributed, with control chart data based on a subgroup size of 4. Manufacturer B produces a second metal piece, also normally distributed, with control chart data based on a subgroup size of 9. A company C purchases these two parts and assembles them together to obtain a combined dimension of 15 cm. What percentage of the combined assemblies would you expect to have a dimension in excess of 15.006 cm?
Solution:
From the table, for a subgroup size of 4, d2 = 2.059.
A Hanna-Varnum diagram is used to determine the probability of interference when two normal distributions overlap. The diagram plots the ratio of the difference between the two means to the smaller standard deviation against the ratio of the standard deviations.
Steps involved
Step 1: Divide the larger standard deviation by the smaller standard deviation.
Step 2: Locate this value on the lower scale (x-axis).
Step 3: Find the difference between the two means.
Step 4: Divide this difference by the smaller standard deviation.
Step 5: Indicate this value on the vertical scale.
Step 6: Find the point which is above the lower-scale value and to the right of the vertical-scale value.
Step 7: Determine the interference risk from the percentage curve passing nearest to this point, as shown in the graph.
Interference Tolerance
It is defined as a negative clearance. Interference exists in a situation where the shaft diameter is greater than the bearing diameter. If negative clearance is present, consider the clearance as zero.
Problem-3
Two mating parts X and Y have an average clearance of 0.015 mm. Control chart analysis indicates that the standard deviations of X and Y are 0.025 and 0.075 mm respectively. Find the probability of interference between the two distributions, and also the probability of the clearance being greater than 0.0175 mm. Assume normal distribution and random assembly.
The probability of interference between the two distributions:
Since interference is a negative clearance, we evaluate the probability of clearance less than 0.
From the normal tables, the probability is 0.4286, i.e., 42.86%.
So 42.86% of the assembled items are expected to show interference between the two distributions.
Problem-4
The dimensions of two mating parts E and F are normally distributed with averages of 251.0 mm and 250.0 mm and standard deviations of 0.1 mm and 0.3 mm respectively. If the parts are assembled randomly, what percentage of the assemblies will have (a) a clearance greater than 1.2 mm, and (b) no defective parts, if the specifications of E and F are 251.0 ± 0.2 mm and 250.0 ± 0.5 mm respectively?
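The clearance questions in Problems 3 and 4 both reduce to evaluating a normal CDF for the difference of two independent normals. A minimal Python sketch (the helper names are ours; exact erf-based values may differ slightly from the rounded table readings quoted in the text):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def clearance_stats(mu_clearance, sd_a, sd_b):
    """Mean and SD of the clearance between two independent normal dimensions."""
    return mu_clearance, math.sqrt(sd_a**2 + sd_b**2)

# Problem 4(a): clearance E - F, mean 1.0 mm, sigmas 0.1 and 0.3 mm
mu, sd = clearance_stats(1.0, 0.1, 0.3)
p_gt_1_2 = 1 - normal_cdf((1.2 - mu) / sd)    # P(clearance > 1.2 mm), ~0.26

# Problem 3: interference = P(clearance < 0), mean 0.015 mm, sigmas 0.025, 0.075 mm
mu3, sd3 = clearance_stats(0.015, 0.025, 0.075)
p_interference = normal_cdf((0 - mu3) / sd3)  # ~0.42
```

The same two functions cover every "probability of clearance" question in this section by changing only the threshold.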
Problem 5.
Control chart analysis indicates that the standard deviations of the two mating parts C and D are 0.0016 and 0.004 cm respectively. It is desired that the probability of a clearance smaller than 0.004 cm should be 0.005. What difference between the average dimensions of C and D should be specified by the designer? Assume the data follow a normal distribution and random assembly. With this difference specified, what is the probability that parts assembled at random will have a clearance greater than 0.024 cm?
From the tables, the probability of a clearance less than 0.024 cm is 0.9817.
Therefore, the percentage of assembled items having a clearance > 0.024 cm is
1 − 0.9817 = 0.0183, i.e., 1.83%.
Problem 6.
Control chart analysis indicates that the standard deviations of two mating parts will have identical values of 0.0013 cm. It is desired that the probability of a clearance less than 0.003 cm should be 0.002. What difference between the average values of these dimensions should be specified by the designer? Assume normal distribution and random assembly. With this difference specified, what is the probability that the assembled items will have a clearance > 0.009 cm?
Solution :
(a) For a probability of 0.002, the Z value from the normal tables is -2.88.
(b) % of assembled items having clearance greater than 0.009 cm
Problem 9.
The gross weight of cement bags (with cement) at the terminal dispatch stage at a cement factory is known to follow a normal distribution with mean = 51 kg and SD = 400 g. It is known that the weight of the cement bags before filling follows a normal distribution with mean = 500 g and SD = 20 g. If the required net weight is a minimum of 50 kg, can it be assumed that all the bags have the minimum net weight? If not, what percentage of the bags is underweight? What should be the minimum mean gross weight required in order to ensure the minimum net weight?
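The net weight is the difference of two independent normals (gross minus empty bag), so the underweight fraction follows directly; a sketch of the first two parts of the problem (the target confidence level for the last part is not stated, so it is left out):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Net weight = gross - empty bag (independent normals), all in kg
mu_net = 51.0 - 0.5                          # 50.5 kg
sd_net = math.sqrt(0.400**2 + 0.020**2)      # ~0.4005 kg

# Fraction of bags below the 50 kg minimum net weight: ~10.6%
p_underweight = normal_cdf((50.0 - mu_net) / sd_net)
```

Since roughly one bag in ten falls below 50 kg net, it clearly cannot be assumed that all bags meet the minimum.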
Setting Specification Limits on Discrete Components
It is often necessary to use information from a process capability study to set
specifications on discrete parts or components that interact with other components to
form the final product. This is particularly important in complex assemblies, or to preventtolerance stack-up where there are many interacting dimensions. This section discusses
some aspects of setting specifications. on components to ensure that the final product
meets specifications.
Linear Combinations:
In many cases, the dimension of an item is a linear combination of the dimensions of the component parts. That is, if the dimensions of the components are x1, x2, …, xn, then the dimension of the final assembly is
y = a1x1 + a2x2 + … + anxn
If the xi are normally and independently distributed with means µi and variances σi², then y is normally distributed with mean
µy = Σ (i = 1 to n) ai µi
and variance
σy² = Σ (i = 1 to n) ai² σi²
Therefore, if µi and σi² are known for each component, the fraction of assembled items falling outside the specifications can be determined.
Problem-1.
A linkage consists of four components, as shown in the figure. The lengths x1, x2, x3, and x4 are known to be x1 ~ N(2.0, 0.0004), x2 ~ N(4.5, 0.0009), x3 ~ N(3.0, 0.0004), and x4 ~ N(2.5, 0.0001). The lengths of the components can be assumed independent, because they are produced on different machines. All lengths are in inches.
The design specifications on the length of the assembled linkage are 12.00 ± 0.10. To find the fraction of linkages that fall within these specification limits, note that y is normally distributed with mean
µy = 2.0 + 4.5 + 3.0 + 2.5 = 12.0
and variance
σy² = 0.0004 + 0.0009 + 0.0004 + 0.0001 = 0.0018, so σy ≈ 0.0424.
To find the fraction of linkages that are within specification, we must evaluate
P{11.90 ≤ y ≤ 12.10} = P{y ≤ 12.10} − P{y ≤ 11.90}
= Φ(2.36) − Φ(−2.36)
= 0.99086 − 0.00914
= 0.98172
Therefore, we conclude that 98.172% of the assembled linkages will fall within the specification limits.
Problem-2
Consider the assembly shown in the figure. Suppose that the specifications on this assembly are 6.00 ± 0.06 in. Let each component x1, x2, and x3 be normally and independently distributed with means µ1 = 1.00 in., µ2 = 3.00 in., and µ3 = 2.00 in., respectively. Suppose that we want the specification limits to fall inside the natural tolerance limits of the process for the final assembly so that Cp = 1.50, approximately, for the final assembly; this implies that about 7 ppm defective is allowable.
The length of the final assembly is normally distributed. Furthermore, if the allowable number of assemblies nonconforming to specifications is 7 ppm, this implies that the natural tolerance limits must be located at µ ± 4.49 σy. Now µy = µ1 + µ2 + µ3 = 1.00 + 3.00 + 2.00 = 6.00, so the process is centered at the nominal value. Therefore, the maximum possible value of σy that would yield the desired value of Cp is
σy = 0.06 / 4.49 ≈ 0.0134
That is, if σy ≤ 0.0134, then the number of nonconforming assemblies produced will be less than or equal to 7 ppm. Now let us see how this affects the specifications on the individual components. The variance of the length of the final assembly is
σy² = σ1² + σ2² + σ3²
Suppose that the variances of the component lengths are all equal, that is, σ1² = σ2² = σ3² = σ² (say). Then σy² = 3σ², and the maximum possible value for the variance of the length of any component is
σ² = σy² / 3 ≤ (0.0134)² / 3 ≈ 0.00006
Effectively, if σ² ≤ 0.00006 for each component, then the natural tolerance limits for the final assembly will be inside the specification limits, such that Cp = 1.50.
This can be translated into specification limits on the individual components. If we assume that the natural tolerance limits and the specification limits for the components are to coincide exactly, then the specification limits for each component are µi ± 3σ = µi ± 3√0.00006 ≈ µi ± 0.0232 in.
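The allocation logic of Problem-2 generalizes: given the assembly spec half-width, the allowed z-multiple, and the number of equal-variance components, the per-component sigma follows directly. A sketch (the function and its parameter names are ours):

```python
import math

def max_component_sigma(spec_half_width, z_allowed, n_components):
    """Largest equal per-component sigma so the assembly's natural
    tolerance (mu +/- z_allowed * sigma_y) stays inside the spec limits."""
    sigma_y_max = spec_half_width / z_allowed
    return sigma_y_max / math.sqrt(n_components)

sigma_max = max_component_sigma(0.06, 4.49, 3)
# per-component sigma ~ 0.0077, variance ~ 0.00006
spec_half = 3 * sigma_max        # component spec limits mu_i +/- ~0.0232
```

Note how the per-component sigma shrinks only as 1/√n, which is why statistical allocation is far less punishing than dividing the assembly tolerance arithmetically among the components.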
Problem-3.
A shaft is to be assembled into a bearing. The internal diameter of the bearing is a normal random variable, say x1, with mean µ1 = 1.500 in. and standard deviation σ1 = 0.0020 in. The external diameter of the shaft, say x2, is normally distributed with mean µ2 = 1.480 in. and standard deviation σ2 = 0.0040 in. The assembly is shown in the figure. When the two parts are assembled, interference will occur if the shaft diameter is larger than the bearing diameter, that is, if y = x1 − x2 < 0. Note that the distribution of y is normal with mean µy = 1.500 − 1.480 = 0.020 in. and standard deviation σy = √(0.0020² + 0.0040²) ≈ 0.0045 in.
This indicates that very few assemblies will have interference. In problems of this type, we occasionally define a minimum clearance, say C, such that
P{clearance < C} = α
Thus, C becomes the natural tolerance for the assembly and can be compared with the design specification. In our example, if we establish α = 0.0001 (i.e., only 1 out of 10,000 assemblies, or 100 ppm, will have clearance less than or equal to C), then
(C − 0.020)/0.004472 = −3.71
which implies that C = 0.020 − (3.71)(0.004472) ≈ 0.0034. That is, only 1 out of 10,000 assemblies will have clearance less than 0.0034 in.
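The interference probability and the minimum clearance can be verified with Python's standard-library normal distribution (an illustrative sketch, not part of the original example):

```python
from math import sqrt
from statistics import NormalDist

# Problem 3: shaft-in-bearing fit (values from the example)
mu1, sigma1 = 1.500, 0.0020    # bearing internal diameter
mu2, sigma2 = 1.480, 0.0040    # shaft external diameter

# y = x1 - x2 is normal with these parameters
y = NormalDist(mu1 - mu2, sqrt(sigma1**2 + sigma2**2))

p_interference = y.cdf(0.0)    # probability the shaft is larger than the bearing
C = y.inv_cdf(0.0001)          # clearance exceeded by 99.99% of assemblies

print(f"P(interference) = {p_interference:.2e}")
print(f"minimum clearance C = {C:.4f} in.")
```

Using the exact inverse CDF instead of a printed table gives C ≈ 0.0034 in., agreeing with the result above.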
Problem 4.
The figure below shows an assembly consisting of four components. The lengths x1, x2, x3, and x4 are known to be x1 ~ N(2.5, 0.03²), x2 ~ N(2.4, 0.02²), x3 ~ N(2.4, 0.04²), and x4 ~ N(3.0, 0.01²). The lengths of the components can be assumed independent, because they are produced on different machines. All lengths are in cm.
The design specifications are 10.25 ± 0.15 cm. To find the fraction of linkages that fall within these specification limits, note that y is normally distributed with mean
µy = 2.5 + 2.4 + 2.4 + 3.0 = 10.3 cm and variance
σy² = 0.03² + 0.02² + 0.04² + 0.01² = 0.0030 cm²
To find the fraction of assemblies that are within specification, we must evaluate
P{10.10 ≤ y ≤ 10.40} = Φ((10.40 − 10.3)/√0.003) − Φ((10.10 − 10.3)/√0.003) = Φ(1.83) − Φ(−3.65) ≈ 0.9654
Therefore, we conclude that 96.54% of the assembled components will fall within the specification limits.
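The same calculation can be sketched in Python (illustrative only; the small difference from the 96.54% quoted above comes from normal-table rounding):

```python
from math import sqrt
from statistics import NormalDist

# Problem 4: four-component linkage (values from the example)
means = [2.5, 2.4, 2.4, 3.0]           # cm
sigmas = [0.03, 0.02, 0.04, 0.01]      # cm

mu_y = sum(means)                              # 10.3 cm
sigma_y = sqrt(sum(s**2 for s in sigmas))      # sqrt(0.0030) cm
y = NormalDist(mu_y, sigma_y)

LSL, USL = 10.10, 10.40                # design spec 10.25 +/- 0.15 cm
fraction_in_spec = y.cdf(USL) - y.cdf(LSL)
print(f"fraction within spec = {fraction_in_spec:.4f}")
```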
Statistical Theory of Tolerances
Problem 5.
Control chart analysis indicates that the standard deviations of the two mating parts have identical values of 0.0013 cm. It is desired that the probability of a clearance of less than 0.003 cm should be 0.02. What difference between the average values of these dimensions should be specified by the designer? Assume a normal distribution and random assembly. With this difference specified, what is the probability that the assembled items will have a clearance greater than 0.009 cm?
Solution:
(a) The standard deviation of the clearance is σ = √(0.0013² + 0.0013²) ≈ 0.00184 cm. For a probability of 0.02, the Z value from the normal tables is −2.05, so the required difference between the average dimensions is
d = 0.003 + 2.05 × 0.00184 ≈ 0.0068 cm
(b) Percentage of assembled items having clearance greater than 0.009 cm:
Z = (0.009 − 0.0068)/0.00184 ≈ 1.21
From the normal tables, for Z = 1.21 the probability is 0.8869, i.e., 88.69%.
Therefore the percentage of items having clearance > 0.009 cm is 100 − 88.69 = 11.31%.
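Both parts of the problem can be checked numerically (an illustrative sketch; the exact inverse CDF is used in place of a printed table):

```python
from math import sqrt
from statistics import NormalDist

# Problem 5: clearance between two mating parts, each with sigma = 0.0013 cm
sigma_part = 0.0013
sigma_clearance = sqrt(2) * sigma_part       # independent mating parts

# (a) difference in means so that P(clearance < 0.003) = 0.02
z = NormalDist().inv_cdf(0.02)               # about -2.05
d = 0.003 - z * sigma_clearance              # about 0.0068 cm

# (b) probability that the clearance exceeds 0.009 cm
clearance = NormalDist(d, sigma_clearance)
p_gt = 1 - clearance.cdf(0.009)

print(f"d = {d:.4f} cm, P(clearance > 0.009) = {p_gt:.4f}")
```

This reproduces the ~0.0068 cm mean difference and the ~11.3% figure from the tables.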
Introduction:
The concept of reliability has been known for a number of years, but it has assumed greater significance and importance during the past decade, particularly due to the impact of automation and developments in complex missile and space programmes. The manufacture of highly complex equipment has served to focus greater attention on reliability. Complex products and equipment are made up of hundreds or thousands of components whose individual reliability determines the reliability of the entire equipment. Using various types of materials and fabricating operations, industry has to build reliable performance into the equipment and products manufactured.
As regards the Indian industry, the reliability concept is yet to find a footing. The solutions to many problems of quality and economy remain handicapped because of inadequate appreciation of reliability principles and techniques. However, reliability is only one of the tools of management; it must be supplemented by other tools, like quality control and design of experiments, for the solution of problems of quality and cost.
Quality Control and Reliability
Quality control maintains the consistency of the product and thus affects reliability, but it is an entirely separate function. Reliability is associated with quality over the long term, whereas quality control is associated with the relatively short period of time required for manufacture of the products. The task of reliability is to see that, in a product design, full account has been taken of every contingency which may cause a breakdown in use, and to forecast the components or assemblies that are likely to become defective in service. However well the equipment is designed, it may still be unreliable if some component has not been fully evaluated under all service conditions, even if production standards have been maintained by quality control during manufacture.
Need for a reliable product
The reliability of a system, equipment, or product is a very important aspect of quality for its consistent performance over its expected life span. In fact, uninterrupted service and hazard-free operation are essential requirements of large complex systems like electric power generation and distribution plants, or transport and communication networks such as railways, aeroplanes, and automobile vehicles. In these cases a sudden failure of even a single component, assembly, or system results in a health hazard, an accident, or an interruption in continuity of service.
Thermal power plants provide electric power for domestic, commercial, industrial, and agricultural use. Reliability problems may cause shutdown or reduced generation of power, resulting in load shedding and many other problems, including loss of productive activity. Failure of any one system of an aircraft may result in a forced landing or an accident. Sudden stoppage of a suburban railway train, due to a fault in the signal system, a faulty carriage, an interruption in the power supply, or a faulty track, sets up a chain of events leading to disruption of service or accidents. Similarly, sudden failure of a car's brake system while it is running may cause a severe accident. Unpredicted failure of a single critical component may be the cause of any one of the above.
What is true of power plants, aircraft, railways, etc. is also true for other products like washing machines, mixer grinders, TV sets, refrigerators, etc., though failure of such products may cause inconvenience on a smaller scale. The problem of assuring and maintaining reliability has many contributing factors, including original equipment design, control of quality during manufacture, acceptance inspection, field trials, life testing, and design modifications. Therefore, deficiencies in the design and manufacture of products which go to build such complex systems need to be detected by elaborate testing at the development stage and later corrected by a planned programme of maintenance.
Definitions of Reliability
Reliability is ordinarily associated with the performance of the product. However, there would be little point in having an electric lamp which lights at the time of purchase but burns out after 200 hours of use. Reliability is the probability that a device will perform its intended function satisfactorily, without failure, for the stated period of time under the specified operating conditions.
In the above definition there are four factors which are essential to the concept of reliability. Whenever a customer purchases a product he expects that it should give satisfactory performance over a reasonably long period. Hence, what is important is that a product should function, and continue to function, for a reasonable time. In practice, in the majority of cases, it may not be possible to test each and every product for its life or other performance requirements. Nevertheless, it is a well-known experience that each individual unit of product varies from the other units; some may have a relatively long life. In view of the existence of this variation,
Reliability is the probability of a product functioning in the intended manner over its intended life under the environmental conditions encountered.
From this definition, there are four factors associated with reliability. These are:
1) Numerical value 2) Intended function 3) Life 4) Environmental conditions
The introduction of this element of probability really makes the quantitative measurement of reliability possible. In other words, such measurements help to make reliability a number, a probability, that can be expressed as a standard. The second consideration for a product to be reliable is that it must perform a certain function or do a certain job when called upon. The phrase 'functioning in the intended manner' (satisfactory performance) implies that the device is intended for a certain application.
For example, in the case of an electric iron, the intended application is that of applying the intended degree of heat to various types of fabrics. If instead it is used to keep a room at a certain temperature, the electric iron might be inadequate because of the greater area to be heated and the change in environment.
The third consideration for a product to be reliable is that of the stated period of time, which ensures that the product is capable of working satisfactorily throughout its expected life. The fourth consideration is that of the environmental conditions, which have to be viewed broadly so as to include storage and transport conditions, since these too have a significant effect on product reliability. When equipment works well, and works whenever called upon to do the job for which it is designed, it is said to be reliable. Failure is defined as a breakdown of the equipment in operation, i.e., its inability to continue performing its intended function.
The causes of unreliability of a product are many; one of the major causes is the increasing complexity of the product. The multiplication law of probability illustrates this very simply. Given an assembly made up of five components, each of which has a reliability of 0.95, the reliability of the assembly is (0.95)^5, or about 0.77. Many assemblies which are electronic in nature involve thousands of parts (a ballistic missile has more than 40,000 parts). Therefore, for such assemblies to have a reasonable chance of survival, the reliability of each component is of prime importance.
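The multiplication law is a one-line computation; a small sketch (illustrative names, not from the original text):

```python
# Multiplication law: a series system works only if every component
# works, so the reliabilities of independent components multiply.
def series_reliability(component_reliabilities):
    r = 1.0
    for ri in component_reliabilities:
        r *= ri
    return r

# Five components at 0.95 each, as in the text
print(round(series_reliability([0.95] * 5), 4))   # about 0.7738
```

Note how quickly the product erodes: fifty such components would give a system reliability of only about 0.08.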
Basic Elements of Reliability
The basic elements required for an adequate specification or definition of reliability are as follows:
1. Numerical value of probability.
2. Statement defining successful product performance.
3. Statement defining the environment in which the equipment must operate.
4. Statement of the required operating time.
5. The type of distribution likely to be encountered in reliability measurement. For a constant failure rate, reliability follows the exponential law, derived from the Poisson distribution of failures:
R = e^(−T/µ)
where µ = mean life and T = required life.
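The exponential reliability law above can be written directly as code (an illustrative sketch):

```python
from math import exp

# Exponential reliability: R(T) = exp(-T / mu) for a constant failure
# rate, where mu is the mean life (MTBF) and T the required life.
def reliability(T, mean_life):
    return exp(-T / mean_life)

# e.g. a unit with a mean life of 50 h operated for 75 h
print(round(reliability(75, 50), 4))   # about 0.2231
```

Notice that even operating for exactly one mean life gives R = e^(−1) ≈ 0.37, not 0.5; the mean life is not a "half-life" of survival.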
Failure Pattern for Complex Products. Complex products often follow a familiar pattern of failure. When the failure rate (number of failures per unit time) is plotted against a continuous time scale, the resulting chart is known as the "bath tub curve" (because of its shape). This curve exhibits three distinct zones, which differ from each other in frequency of failure and in the pattern of causes of failure. These are as follows:
1. Early failure period (the burn-in or debugging period). This is characterized by high failure rates. It begins at the first point during manufacture at which total equipment operation is possible, and continues for such a period of time as permits (through maintenance and repairs) the elimination of marginal parts: parts initially defective, though not inoperative, and unrecognizable as such until premature failure. Commonly, these early failures result from defects in manufacturing or other deficiencies which can be detected by debugging, running in, or extended testing. Failures in this zone are due to one or more of the following (i.e., assignable) causes:
• Design deficiency
• Manufacturing error
2. Random failure period (the constant failure rate period). This is characterized by a more or less constant failure rate. This is the period in which normal usage of the product occurs without any expectation of failures. Failures in this zone are due to chance causes. These are chance failures which may result from the limitations inherent in the design, plus accidents caused by usage, poor maintenance, or hidden defects which escape inspection. The period from A to B is the normal operating period, in which the average failure rate remains fairly constant.
3. Wear-out period. These are failures due to abrasion, fatigue, creep, corrosion, vibration, etc.; e.g., the metal becomes embrittled, the insulation dries out. A reduction in failure rate requires preventive replacement of these dying components before they result in catastrophic failure. Failures in this zone are due to one or more of the following causes:
• Ageing
• Degradation
• Wear and tear.
Achievement of Reliability. There are five effective areas for the achievement of reliability of the product. They are:
(i) Design
(ii) Production
(iii) Measurement and testing
(iv) Maintenance and
(v) Field operation.
Design is the main cause of unreliability, and a large percentage of the causes of failure can be traced back to the design stage.
The following factors should be considered for achieving a reliable design:
1. Simplicity of product. The design should be as simple as possible. Error rate is directly proportional to complexity: the greater the number of components, the greater the chance of failure. Increased reliability is a natural by-product of equipment simplification.
2. Derating. Derating means providing a large safety margin; it is also used as a method of achieving design reliability. For example, a material with a tensile strength of 10,000 kg/cm² might be prescribed where only 7,000 kg/cm² is required.
3. Redundancy. Redundancy is the provision of stand-by or parallel components or assemblies to take over in the event of failure of the primary item. Even though we use the most reliable components and keep their number to a minimum, there may be one or two components which have lower reliability. To overcome this, additional such components are included and so arranged that the whole equipment continues to survive so long as at least one of them survives. This technique is known as redundancy. Auxiliary power generators are an example of redundant items: they are put in service when the primary system fails.
4. Safe operation. Parts should be designed with fail-safety in mind. How a component fails is of importance. If possible, failure should occur in a non-catastrophic manner and should do no harm to the operator.
5. Protection from extreme environmental conditions. An item protected from extremes of environmental conditions will have increased reliability. For example, pilots of supersonic spacecraft are protected from the effects of extremes of heat and cold. Electric motors of common household appliances are rubber-mounted to protect them from vibration.
6. Maintainability and serviceability. These are important considerations in designing for reliability. Ease of maintenance and service contributes to higher field reliability. It is evident that an item which is easy to maintain naturally receives better maintenance and service.
Reliability Tests. Reliability testing means the tests conducted to verify that a product will work satisfactorily for a given time period. Reliability testing therefore consists of functional tests, environmental tests, and life testing.
Functional Test. Functional testing involves a test to determine if the product will
function at time zero.
Environmental Test. Environmental conditions (temperature, humidity, vibration, etc.) are critical to many products. Environmental testing consists of determining the expected environmental levels and then carrying out the functional test under the environments in which the product has to operate.
Relationship between the failure rate, the mean time between failures (MTBF), MTTF, and MTTR:
The MTBF (repairable systems only) is defined as the mean time interval between successive failures. It is denoted by θ. Related to the MTBF is the failure rate, denoted by λ. The failure rate is the reciprocal of the MTBF:
λ = 1/θ
If a large number of items of the same type are placed on test and operated until each one fails, the mean time to failure (non-repairable systems) is
MTTF = T/n
where n = number of items failed and T = total test duration.
Prove that the failure rate is the reciprocal of the MTBF.
Proof: Let us test n items for t hours each, and suppose items which fail are repaired. If there are r failures, the failure rate is
λ = r/(n·t)
and hence
θ = MTBF = n·t/r = 1/λ
The MTBF of a certain unit is 50 hrs. Calculate the reliability for a 75 hr operating period. If the reliability of the unit is increased by 10%, 20%, 30%, 40%, and 50%, calculate:
(a) the % change in the MTBF that is necessary;
(b) plot a graph of % change in reliability vs. % change in MTBF.
A 750 hr life test is performed on 6 components. One component fails after 350 hrs of operation. All others survive the test. Compute the failure rate.
Solution: The total operating time accumulated is 350 + 5 × 750 = 4100 component-hours, with r = 1 failure, so
λ = 1/4100 ≈ 0.000244 failures per hour.
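The life-test calculation generalizes directly; a small sketch (function and argument names are illustrative):

```python
# Failure rate from a life test: lambda = (number of failures) divided
# by the total component-hours accumulated during the test.
def failure_rate(failure_times, n_survivors, test_duration):
    total_hours = sum(failure_times) + n_survivors * test_duration
    return len(failure_times) / total_hours

# One failure at 350 h; five components survive the full 750 h test
lam = failure_rate([350], n_survivors=5, test_duration=750)
print(f"{lam:.6f} failures/hr")    # 1/4100, about 0.000244
```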
Redundancy
One of the methods for improving the reliability of a system is by utilizing the concept of redundancy. To enhance the reliability of the system, quite often additional units are built into the system to perform the same function. In such a system, one component failure will not necessarily cause system failure, since additional components are available to perform the same function. Redundancy is defined as the characteristic of a system by virtue of which marginal component failures are prevented from causing system failures, due to the presence of additional components.
In order to increase the reliability of the system, select the component which has the least reliability and then arrange a duplicate of it in parallel.
For example, for three components in series:
RS = (RA)(RB)(RC) = (0.8)(0.5)(0.7) = 0.28
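Series and parallel reliability can be compared with a short sketch (illustrative; values taken from the example):

```python
# Series system: works only if every component works.
def series(*rs):
    out = 1.0
    for r in rs:
        out *= r
    return out

# Parallel (redundant) group: fails only if every unit fails.
def parallel(*rs):
    out = 1.0
    for r in rs:
        out *= (1 - r)
    return 1 - out

RA, RB, RC = 0.8, 0.5, 0.7
print(series(RA, RB, RC))                 # 0.28
# Duplicating the weakest component (B) in parallel:
print(series(RA, parallel(RB, RB), RC))  # 0.8 * 0.75 * 0.7 = 0.42
```

Duplicating only the least reliable component raises the system reliability from 0.28 to 0.42, which is why redundancy is targeted at the weakest link.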
In order to increase the above system's reliability, select the component which has the