The Curse of Cp and Cpk

In factory automation, the calculation of Cp (process capability index) and Cpk (a measure of actual process capability) values is a good tool to monitor and prove performance and quality. I will not present the full theory behind the calculation of Cp and Cpk and all the math here. I want to show some issues from my professional experience which arise in understanding and also in expectations.

Some Theory for Recapitulation

For a given process, measurements are performed, and for a single parameter the average (Avg) and the standard deviation (StdDev) are calculated. The Cp and Cpk values are calculated in the following way:

Cp = (USL - LSL) / (6 * StdDev)

Cp,upper = (USL - Avg) / (3 * StdDev)

Cp,lower = (Avg - LSL) / (3 * StdDev)

Cpk = Min(Cp,upper, Cp,lower)

The Cp value gives the theoretical capability of the process. It assumes a centered production and evaluates the best possible outcome for the given process deviation. This is the target to reach by process centering. The Cp value itself should be maximized by minimizing the process deviation.

The Cpk value gives the current process performance and also takes into account whether the process is centered. If the process is centered, Cp = Cpk holds.
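To make the formulas concrete, here is a minimal sketch in C which computes Cp and Cpk for a single parameter; the sample values and the limits USL/LSL are made up for illustration:

#include <math.h>
#include <stdio.h>

/* Minimal sketch: Cp and Cpk for one parameter.
   The sample data and the limits are made up for illustration. */
int main(void)
{
    double x[] = {9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1};
    int n = sizeof x / sizeof x[0];
    double usl = 11.0, lsl = 9.0;

    double avg = 0.0;
    for (int i = 0; i < n; i++) avg += x[i];
    avg /= n;

    double var = 0.0;
    for (int i = 0; i < n; i++) var += (x[i] - avg) * (x[i] - avg);
    double stddev = sqrt(var / (n - 1)); /* sample standard deviation */

    double cp       = (usl - lsl) / (6.0 * stddev);
    double cp_upper = (usl - avg) / (3.0 * stddev);
    double cp_lower = (avg - lsl) / (3.0 * stddev);
    double cpk      = cp_upper < cp_lower ? cp_upper : cp_lower;

    /* for this data: Cp is roughly 2.0, Cpk roughly 1.95 */
    printf("Avg=%.3f StdDev=%.3f Cp=%.2f Cpk=%.2f\n", avg, stddev, cp, cpk);
    return 0;
}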

For a Six Sigma production, values of Cp = 2.0 and Cpk = 1.5 are expected. The theoretical process capability should show a process deviation which fits six times into the process specification. A normal variation of the process around its center is always present and cannot be avoided completely. Therefore the long-term Cpk value is allowed to be smaller, based on the experience that processes in the field vary by about 1.5 sigma around the target.

If everything is Six Sigma ready, one gets only 3.4 violations of the specification in each direction out of one million samples, i.e. 6.8 violations in total per million samples.
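As a quick cross-check of the 3.4 number: it is the one-sided tail probability of a normal distribution at 6 - 1.5 = 4.5 sigma, which a few lines of C reproduce:

#include <math.h>
#include <stdio.h>

int main(void)
{
    double z = 6.0 - 1.5;                    /* Six Sigma minus the assumed 1.5 sigma long-term shift */
    double tail = 0.5 * erfc(z / sqrt(2.0)); /* P(Z > z) for a standard normal */
    printf("%.2f violations per million\n", tail * 1e6); /* prints 3.40 */
    return 0;
}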

The Issue of Normal Distribution

The first issue, and one that is seldom questioned, is: "Are my parameters normally distributed?"

In reality I have seen processes with Cp and Cpk values smaller than expected or wanted, but without any fails. The reason is that the parameters used are not normally distributed, so the calculations and statistics go wrong. One can think of a rectangular distribution which always stays within specification, e.g. between LSL/2 and USL/2. The calculation of Cp and Cpk then yields a sigma level which suggests a messy process. But in reality everything runs fine and no violation is to be expected. Only the math produces numbers based on wrong assumptions.
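A small simulation makes this visible. The sketch below draws uniform samples between LSL/2 and USL/2 (the limits are made up), computes Cp, and counts the actual violations: Cp comes out near 1.15, i.e. a mediocre-looking process of roughly 3.5 sigma, while the violation count is exactly zero.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double lsl = -1.0, usl = 1.0; /* made-up specification limits */
    int n = 1000000, violations = 0;
    double sum = 0.0, sumsq = 0.0;

    srand(42);
    for (int i = 0; i < n; i++) {
        /* uniform sample in [LSL/2, USL/2]: never outside specification */
        double u = rand() / (double)RAND_MAX;
        double x = lsl / 2.0 + u * (usl - lsl) / 2.0;
        sum += x;
        sumsq += x * x;
        if (x < lsl || x > usl) violations++;
    }
    double avg = sum / n;
    double stddev = sqrt((sumsq - n * avg * avg) / (n - 1));
    double cp = (usl - lsl) / (6.0 * stddev);

    /* a uniform distribution on [a, b] has StdDev (b - a) / sqrt(12),
       here 1/sqrt(12), so Cp is about 1.15 although no part ever fails */
    printf("Cp=%.2f violations=%d\n", cp, violations);
    return 0;
}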

Gate oxide growth, for example, cannot be normally distributed, because negative gate oxide thicknesses are impossible. Processes which are under SPC (Statistical Process Control), where corrective actions are performed, can also not be normally distributed: the corrective actions destroy the normal distribution if it was present beforehand.

One should always test for normal distribution before calculating Cp and Cpk. In reality only a few processes are reasonably normally distributed, and for those the standard Cp and Cpk calculations are suitable. For most processes one should use different distributions for the calculations or, better, count the violations and calculate back to a sigma level. That gives better information about real processes.
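A minimal sketch of such a back-calculation, inverting the one-sided normal tail P(Z > z) = ppm / 1e6 by bisection (the example violation rates are made up):

#include <math.h>
#include <stdio.h>

/* Standard C has no inverse of erfc, so bisect: the tail probability
   is strictly decreasing in z, which makes the search trivial. */
static double sigma_from_ppm(double ppm)
{
    double p = ppm / 1e6;
    double lo = 0.0, hi = 10.0;
    while (hi - lo > 1e-9) {
        double mid = 0.5 * (lo + hi);
        if (0.5 * erfc(mid / sqrt(2.0)) > p)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main(void)
{
    /* e.g. 100 violations counted in 2 million samples = 50 ppm */
    printf("50 ppm  -> %.2f sigma\n", sigma_from_ppm(50.0)); /* ~3.89 */
    printf("3.4 ppm -> %.2f sigma\n", sigma_from_ppm(3.4));  /* ~4.50 */
    return 0;
}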

Hunting for Cp and Cpk

In a lot of industries, quality is one of the most important values, and in some, like the aerospace, automotive and medical industries, it is even life-saving. The pressure to reach and prove quality is very strong, and doing so is necessary to stay in business and to challenge one's competitors. Cp and Cpk values are crucial for all these purposes.

Cp and Cpk strongly depend on the specification limits used. This leads directly to the next section…

Setting the Limits

When it comes to customer satisfaction, good Cp and Cpk values are important goals. They are sometimes hard to reach technically, but choosing the right limits can help. I once heard that the specification limits can be adjusted to meet Cp and Cpk requirements, and that these limits have no restrictions at all, because the process is controlled through its control limits. Afterwards I knew I had to write about the right limits. Even a professional consultant on this subject mentioned something like that, and I could only explain why there are technical limits for process and quality engineering.

First, we should define which limits are available for process control and what their purposes are.

Functional Limits

These limits define the range within which the product is expected to work. Outside these limits the product is just waste, not to be used, and therefore to be dumped. There is nothing more to say… 😉

Specification Limits

These limits are negotiated with the customers, or they are internal limits defined to decide whether a product is to be delivered or not. Specification limits can be defined multiple times for different customers, markets or functions, because not all product purposes need the same quality. The only requirement is that the specification limits are equal to the functional limits or lie within the functional limit range. A product which does not work at all is not suitable for any market, except for selling to waste recycling.

Control Limits

Control limits are used for triggering corrective actions. If the process runs out of the control limits, a production system should signal this event immediately, and someone responsible has to start corrective actions to get the process back to target or the deviation back to a normal level. The control limits need to be defined left and right of (or above and below) the target to make sense for corrective actions. To be useful triggers, the control limits also have to lie between the specification limits: if a control limit triggers an event outside the specification, it is too late for any meaningful action. Several control limits can be defined if necessary, to separate different levels of escalation or different levels of invasive corrective actions.

Relationship of Limits

Regarding the facts above, there is a simple relationship between all these limits, which can be written as:

UFL >= USL > UCL > TARGET > LCL > LSL >= LFL

UFL (Upper Functional Limit): the upper limit for the product to work properly
USL (Upper Specification Limit): the upper limit of the product specification as delivery criterion
UCL (Upper Control Limit): the upper control limit as trigger for corrective actions
TARGET (Target): the production target and optimum goal for production
LCL (Lower Control Limit): the lower control limit as trigger for corrective actions
LSL (Lower Specification Limit): the lower limit of the product specification as delivery criterion
LFL (Lower Functional Limit): the lower limit for the product to work properly
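Where these limits are stored in a production system, the ordering can be checked automatically before a limit configuration is accepted; a minimal sketch with made-up values:

#include <stdio.h>

struct limits {
    double ufl, usl, ucl, target, lcl, lsl, lfl;
};

/* check UFL >= USL > UCL > TARGET > LCL > LSL >= LFL */
static int limits_valid(const struct limits *l)
{
    return l->ufl >= l->usl && l->usl > l->ucl && l->ucl > l->target
        && l->target > l->lcl && l->lcl > l->lsl && l->lsl >= l->lfl;
}

int main(void)
{
    struct limits l = {12.0, 11.0, 10.5, 10.0, 9.5, 9.0, 8.0};
    printf("configuration %s\n", limits_valid(&l) ? "ok" : "violates the ordering");
    return 0;
}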

The Curse Revealed

Where is the curse now in all this? Everything is well defined and we do the right statistics!?

In reality, most parameters are not normally distributed, and the calculation of Cp and Cpk gives numbers implying a worse process than is really present. The numbers do not look very good because of the wrong assumption of normal distribution. The correct calculation is difficult, and only statistics based on event counting are really accurate, but the number of samples needed for counting has to exceed the one million mark, which is not really practical.

Customers also want Cp and Cpk values that correspond to a well-defined sigma level like 3 (Cp = 1.0), 4 (Cp = 1.33), 5 (Cp = 1.67) or 6 (Cp = 2.0). But the specification limits can only be expanded as far as the functional limits, and the process has to have a deviation and variation which meet this goal, which sometimes cannot be achieved due to physical constraints of machines or processes. Sometimes the process capability simply cannot be proven accurately with Cp and Cpk values.

In the semiconductor industry, for example, one has to deal with physical issues. For an 8 nm tunnel oxide, one has to grow a layer of roughly 30 atoms to meet the thickness. The specification is set to 8 nm +/- 1 nm, and 1 nm is roughly 4 atoms. To reach the Six Sigma level, the process deviation has to be a sixth of these 4 atoms (Cp = 2 requires StdDev = (USL - LSL) / 12 = 2 nm / 12, roughly 0.17 nm), which is about 2/3 of an atom. Therefore, one has to run the process with a variation of 2/3 of an atom, and that in a process which processes a whole 8 inch wafer: over the whole area, only 2/3 of an atom of variation is allowed. These kinds of tunnel oxides are prepared in 0.6 µm technologies. Current CMOS technologies go down to below 0.03 µm, and the tunnel oxides become thinner and thinner… We obviously meet the physical limitations here.

What can be done when customers have their expectations and physics is at its end? Sometimes there is a real gap between needs, expectations and physical capabilities.

Solution

Sometimes we have to slow down, sit together, and discuss in detail, and technically correctly, what is going on. Using Cp and Cpk values for production control is a good step, but the blind hunt for numbers does not lead to the final target: quality. The numbers are not always correct, and the expectations have to be adjusted to reality. Every production process should be optimized for the best possible and affordable quality, but this has to be made transparent to external and internal customers of manufacturing parameters.

For managers, engineers and customers there should be open discussions and, if needed, trainings to build up the background knowledge required for constructive discussions and decision-making procedures. Otherwise there is a large potential for professional conflicts…

The Curse of GOTO

When I started to learn programming professionally in 1991 at the Schülerrechenzentrum Dresden, I was always told: "Do not use GOTO statements! They are never needed, because other possibilities are always available. GOTOs disturb the program flow, the understanding of the program becomes more difficult, and the program is not well structured anymore." I had started to program some years before that on a C64, where there was practically no other way than using GOTOs, and therefore this advice sounded strange to me at first.

At the Schülerrechenzentrum Dresden I learned to program in Turbo Pascal, and I was taught never ever to use GOTOs, but to think about a good program flow and to implement decisions and jumps with IF statements and calls to functions and procedures. Over the years I have always followed this advice. From time to time I thought about the question again, but I never found a situation where a GOTO statement would be a better choice than the other possibilities of implementation.

Here follow some facts and thoughts on why not to use GOTOs.

Complexity

GOTOs increase the source code's complexity, or at least seem to. If there is a label in the source, one never knows exactly which GOTOs jump there, how many there are, under which circumstances they fire, and where they are located. If there is a GOTO around, the search for the corresponding label starts. The flow is not obvious anymore, and a label is not always distinct.

In the name of complexity, GOTOs should be replaced by IF statements and function calls. These are easy to understand, and with the right indentation and naming it is quite obvious what the program does and what the intentions are.
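A small contrast in C shows the difference; the retry scenario is made up for illustration:

#include <stdio.h>

static int try_connect(int attempt)
{
    return attempt >= 2; /* pretend the third attempt succeeds */
}

/* goto version: the reader has to hunt for the label and reconstruct
   under which circumstances the jump happens */
static int connect_with_goto(void)
{
    int attempt = 0;
retry:
    if (!try_connect(attempt)) {
        attempt++;
        if (attempt < 5)
            goto retry;
        return -1;
    }
    return 0;
}

/* structured version: the loop header states the whole flow up front */
static int connect_structured(void)
{
    for (int attempt = 0; attempt < 5; attempt++)
        if (try_connect(attempt))
            return 0;
    return -1;
}

int main(void)
{
    printf("goto: %d, structured: %d\n", connect_with_goto(), connect_structured());
    return 0;
}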

Maintainable Code and Human Understanding

Try to understand code which is written with GOTOs. You always have to find out where the matching label is, under which circumstances the GOTO is invoked, and what the current state of all variables is. It becomes even more difficult in languages where a GOTO is allowed to jump out of the current function into another one, or where the GOTO is allowed to jump into loops from outside. It is very difficult to see and understand what the initial values are after the jump and how the program will proceed. A GOSUB and a RETURN are the same mess.

"Any fool can write code that a computer can understand. Good programmers write code that humans can understand."
– Martin Fowler –

For maintainable code, source code without such difficulties is needed. Clean source code exposes all its conditions and its program flow. A GOTO messes this up and should therefore never be used.

Refactoring

In Martin Fowler's book "Refactoring: Improving the Design of Existing Code", a lot of techniques and patterns are described for improving and developing good code. Refactoring, i.e. improving the code, can only be done if the program flow is obvious and the behavior can be foreseen for any change. If a GOTO is within the area of the refactored code, changing or moving the corresponding label can dramatically change the program's behavior, and that is not obvious in the first place.

In the name of maintainable code, refactoring should be possible. GOTOs lead to source code which is not easily refactorable, and they should therefore be avoided under any circumstances.
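A minimal sketch of the problem, assuming a simple open/cleanup scenario: in the first version the label and its gotos are coupled, and because C forbids a goto across function boundaries, no region containing the label can be extracted into a helper without restructuring the whole flow; the second version can be split freely:

#include <stdio.h>

/* before: error handling via goto; extracting any part that contains
   the label "out" into its own function would not even compile */
static int process_with_goto(const char *path)
{
    int result = -1;
    FILE *f = fopen(path, "r");
    if (!f)
        goto out;
    /* ... work with f ... */
    result = 0;
    fclose(f);
out:
    return result;
}

/* after: early returns express the same flow; every block is now
   a candidate for a further "extract function" refactoring */
static int process_structured(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    /* ... work with f ... */
    fclose(f);
    return 0;
}

int main(void)
{
    printf("%d %d\n", process_with_goto("data.txt"), process_structured("data.txt"));
    return 0;
}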

For further information about clean code, have a look at the book Clean Code by Robert C. Martin.