Norma UNI ISO 2859-1 EN



Abstract

This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standard ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions.

We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce such as public health and political polling. With the new procedures, both sample size and sample error can be reduced.


What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD.

Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk; likewise, the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.

In commerce, any negotiation puts buyer and supplier in direct conflict. Although the exchange of products and services can take place either with legal contracts or as informal agreements promoting the welfare of all participants, the main characteristic of negotiation is the attempt of one adversary to gain more.

Even in honest and open negotiations with a relatively free flow of well-defined objectives among all participants, there are still differences between the antagonisms of buyers and sellers. Each adversary is an independent decision maker, at least in theory, capable of assuming responsibility for her own decisions. In the commerce of large lots of standardized goods, statistical modeling and the concepts of probability can distinguish between different points of view, recognizing and revealing the conflicts inherent in negotiations. Consequently, to ensure the quality of large lots, each party may require different contractual sampling plans, which specify lot size (N), sample size (n), and the maximum number of defective parts (c) in the sample that still allows for lot acceptance; formally, PL(N, n, c).

The main objective of this paper is to discuss the relationship between acceptance sampling and formal hypothesis tests as developed by NP. Considering that the pioneering work of Dodge and Romig (DR) ( ) in acceptance sampling, which has survived decades of academic debate and practice, arrived before the formalization of hypothesis testing by NP, the question is why bring hypothesis testing into the discussion at all. Throughout the rest of this paper, we will attempt to show that, if used appropriately, hypothesis testing offers a more logically complete structure for decision-making and therefore leads to better decisions. It is common in the literature (for example, Shmueli; Hund ), but not in the original pioneering work of DR, to associate consumer and producer risk with the concepts of the probability of type II and type I error, respectively. In our approach, we shall go further and develop the generalization that both consumer and producer feel the cost of both errors.

In other words, we shall explicitly allow for two type I errors and two type II errors depending on the perspective of the consumer or the producer. We will show that the decision-making process may be compromised by the commonly used simplification that type I error is felt only by the producer and, in like manner, that type II error applies only to the consumer. Hypothesis testing from NP, by offering a common theoretical structure, can produce a better understanding of the application of sampling procedures and their results. In a series of examples, we show that measures of risk will be more reliable and risk itself lowered.

Deming ( ) was opposed to acceptance sampling. He argued that inspection by sampling leads to the erroneous acceptance of bad product as a natural and inevitable result of any commercial process, which in turn leads to the abandonment of continuous process improvement at the heart of the organization.

Deming’s position that inspection should be either abandoned altogether or applied with 100% intensity has been debated in the literature (see Chyu and Yu for a review and a Bayesian approach to the question), and his position is supported by some. Even though acceptance sampling is only a simple counting exercise with no analysis for uncovering the causes of non-conforming quality, our position is that acceptance sampling should be an integral part of the commercial–industrial process and that, even when perfect confidence reigns between buyer and seller, sampling itself should never be abandoned. Deming ( ), however, was very much in accord with statistical studies by random sampling that are restricted to inferring well-defined characteristics of large populations, just not as a procedure for continuous quality improvement.

In the next section, we discuss traditional acceptance sampling, emphasizing those concepts modified in the rest of the text.

Sections “Lot tolerance percent defective (LTPD) in consumer risk” and “Acceptable quality limit (AQL) in producer risk” will present the traditional relationships between AQL and producer risk, and LTPD and consumer risk. Section “A unique sampling plan for both parties—DR tradition” closes the discussion of traditional acceptance sampling offering the possibilities of constructing sampling plans that are unique for both producer and consumer. In section “Acceptance sampling via hypothesis tests”, we will lay out our interpretation of NP hypothesis testing and its connection to acceptance sampling. The next two sections will attempt a synthesis of basic concepts in NP hypothesis testing and acceptance sampling.

We then propose new procedures for the solution of unique sampling plans that simultaneously satisfy producer and consumer. Finally, the last two sections present conclusions and ideas for future work in the area. A series of appendices offers review material for statistical concepts frequently used in acceptance sampling, including R snippets with a brief description of the R code used in the figures and tables.

Traditional acceptance sampling

DR formally introduced inspection sampling in 1929, and in fact only from the viewpoint of the consumer. The priority given to the consumer will be an important ingredient for the discussion of hypothesis testing in this paper.

They mention producer risk only marginally. In 1944, they emphasize even more clearly their position that consumer risk is their first priority (Dodge and Romig ):

“The first requirement for the method will, therefore, be in the form of a definite assurance against passing any unsatisfactory lot that is submitted for inspection. For the first requirement, there must be specified at the outset a value for the lot tolerance percent defective (LTPD) as well as a limit to the probability of accepting any submitted lot of unsatisfactory quality. The latter has, for convenience, been termed the Consumer’s Risk.”

Both consumer and producer are concerned with the quality of the lot, measured by the percentage (p = X/N) of defective items. Values of p close to zero indicate that the lot is high quality. In traditional acceptance sampling, it is natural to assume that the producer requires a relatively low maximum value for p to guarantee that the lot is, in fact, acceptable to the consumer.

The producer calls this limiting value for p the acceptable quality level (AQL). Even though management and business strategy determine the value of AQL, it should reflect the actual value of quality reached by the producer. A value of AQL lower than the dictates of the production line will lead to successive rejections. On the other hand, the consumer in question will allow for a limiting value of p that is a maximum value for defining the defective rate tolerable to the consumer, who calls this value the LTPD. Any value of p greater than LTPD signifies that the consumer will reject the lot as low quality. Both producer and consumer know that AQL should be substantially lower than LTPD; this signifies that lots have relatively high quality when they leave the producer and avoid rejection by the buyer.

Equation (1) illustrates that the frequency of FPs depends on the chosen values of c and AQL.

For example, if they were chosen to result in P(FP) = 5%, then, in the universe of high-quality lots, 5% of all samples would indicate in error that the lot was unacceptable. DR label Eq. ( ) as producer risk. The producer who rejects a good product is creating a problem that in fact does not exist, perhaps even stopping the assembly line to find solutions to difficulties only imagined. Traditional acceptance sampling refers to Eq. ( ), the probability of type I error, as α. We emphasize that DR never associated producer and consumer risk with type I and type II error. DR labeled Eq. ( ) as consumer risk since acquiring bad product would harm assembly lines or retail with low-quality inputs and merchandise. In traditional acceptance sampling, the probability of type II error (Eq. ( )) takes the name of β.

In the application of acceptance sampling, the producer and consumer predetermine the acceptable values for P(FN) and P(FP) along with LTPD and AQL. The solution for n and c, called a sampling plan (or sample design) PL(n, c), is mathematically determined from the binomial or Poisson distribution. All of this information is summarized in Table. The consumer defines the LTPD as the maximum acceptable rate of poor quality. The sampling plan, if well thought out, possesses values for n and c that indicate little chance of acceptance if p is greater than the LTPD tolerated by the consumer.

Specifically, this means that the probability of error P(x ≤ c | p ≥ LTPD) = β is very small. In other words, the consumer protects herself against poor quality by choosing an adequate sampling plan that keeps her risk at a low and tolerable level for undesirable levels of p.

The sampling plan is PL(3000, 200, 0), remembering that it is generally beneficial to the buyer to have a very small c. In this example, LTPD is 1%. With p equal to 1% or greater (worse quality), there is a probability of still accepting the lot equal to 0.125 or less. Depending upon the necessities and market power of the buyer, a consumer risk of less than 12.5% may be required. It is important to emphasize that the sampling plans analyzed in this section follow the cumulative hypergeometric distribution (Appendix “ ”).

Fig. 1: Hypergeometric sampling plan as OCC for PL(3000, 200, 0) and LTPD = 0.01

Along the OCC, the pair of values LTPD and P(LTPD) signifies a single point.
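The OCC can be traced numerically. The paper presents its code in R; the sketch below is an equivalent Python version (standard library only) that evaluates the hypergeometric acceptance probability of the plan PL(3000, 200, 0) over a grid of lot qualities p; the grid itself is our choice for illustration.

```python
from math import comb

def p_accept(c, N, K, n):
    """Hypergeometric P(x <= c): probability that a sample of n items,
    drawn without replacement from a lot of N items containing K
    defectives, shows at most c defectives (i.e., the lot is accepted)."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(c + 1)) / comb(N, n)

N, n, c = 3000, 200, 0            # plan PL(3000, 200, 0) from the figure
for K in range(0, 61, 6):         # lot quality p = K/N, from 0% to 2%
    print(f"p = {K / N:.3%}  P(accept) = {p_accept(c, N, K, n):.3f}")
```

At p = LTPD = 1% (K = 30 defectives) the curve passes through P(accept) ≈ 0.125, the consumer risk quoted in the text.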

There are several configurations of PL(N, n, c) compatible with a given pair of values for LTPD and P(LTPD), each configuration producing a different shape for the OCC. The choice of configuration, in practice, is not as free as it seems. Technology and the commercial terms of the negotiation usually impose lot size N.

The value of c usually does not stray far from zero. In the end, only the sample size n remains unknown.

We discuss this question further in what follows. Table shows new calculations for consumer sampling plans defined by P(LTPD) and LTPD.

The columns labeled letter, N, n, and c are common to most sampling standards. Shmueli ( ) uses ANSI/ASQC Z1.4 and ISO 2859 (, ) extensively. Note that in the table adequate sampling plans for the consumer are not abundant. There are few plans that produce a risk factor less than 10%.

They appear mostly in the last three lines of the table. Table produces comparable results for producer risk. This exercise in comparing consumer and producer risk serves to demonstrate the difficulty for two bargaining parties to find one unique plan that would satisfy the minimum risk requirements of both simultaneously.

We will return to this topic after the discussion of producer risk.

[Table: CONSUMER, common standards, single sample normal, level II, hypergeometric distribution. Columns: Letter, N, then sampling plan (n, c) and consumer risk for LTPD = 0.65%, 1.0%, and 1.5%.]

[Table: PRODUCER, common standards, single sample normal, level II, hypergeometric distribution. Columns: Letter, N, then sampling plan (n, c) and producer risk for AQL = 0.65%, 1.0%, and 1.5%.]

Acceptable quality limit (AQL) in producer risk

Producer risk comes from the idea that the producer suffers more from the rejection of good lots than the acceptance of bad ones. To calculate producer risk, the producer must decide upon the value of AQL. If p ≤ AQL, then the batch is defined as good; likewise, if p > AQL, the lot is considered non-conforming.

Well-chosen AQL and corresponding sampling plans reduce producer risk and therefore increase the probability of not rejecting good lots. The producer should offer items that bring high levels of satisfaction to the consumer and consequently renewed contracts. This means that AQL should always be less than LTPD. In Fig., the sampling plan is PL(3000, 10, 0), and AQL is 0.5%. For p equal to 0.5%, the probability of accepting the lot is equal to P(AQL) = 0.951.


Since the sum of the probabilities of accepting the good lot and rejecting the good lot is equal to unity (see the definition of α in Table ), the probability of rejection of the good lot is 0.049 (= 1 − 0.951). If p is less than the AQL of 0.5%, high quality is present; the producer is more likely to accept the lot. Remember that the probability of rejecting good lots is producer risk. In Fig., the horizontal line P(AQL) divides the vertical axis at 0.951, and the part above that point up to the limit of one is the producer risk 0.049. In industry, a producer risk 1 − P(AQL) below 5% is very attractive and usual for acceptance sampling.
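The figures 0.951 and 0.049 can be reproduced directly from the hypergeometric distribution. Below is a Python sketch (the paper itself works in R), using the plan PL(3000, 10, 0) and AQL = 0.5% from the text:

```python
from math import comb

def p_accept(c, N, K, n):
    """Hypergeometric probability of acceptance, P(x <= c)."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(c + 1)) / comb(N, n)

N, n, c = 3000, 10, 0
K = round(0.005 * N)              # AQL = 0.5% of the lot -> 15 defectives
p_aql = p_accept(c, N, K, n)      # probability of accepting the good lot, ~0.951
producer_risk = 1 - p_aql         # probability of rejecting the good lot, ~0.049
print(f"P(AQL) = {p_aql:.3f}, producer risk = {producer_risk:.3f}")
```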

In case 5–5, both producer risk 1 − P(AQL) and consumer risk P(LTPD) are 5%. AQL and LTPD are set at 0.005 and 0.01, respectively.

We have drawn the corresponding ROC curve (see Appendix “ ”) using the hypergeometric function for c = 10, 11, 12, 13. The plan PL(3000, 1600, 11) satisfies the risk conditions specified by buyer and seller, that both risks be less than 5%. Because of the discreteness of the probability function, consumer risk is 4.9% and producer risk is 3.2% at c = 11. Along the ROC curve, the value of c changes and accordingly the values of α and β. For example, the plan PL(3000, 1600, 12) is supported by α = 0.7% and β = 9.9%.

This last plan is much better for the producer and much worse for the consumer. The higher ROC curve originates from the binomial distribution with the same sampling plan; nevertheless, due to the mathematics of the binomial, consumer and supplier risks result in much larger values, greater than 10%. The binomial deceives the decision makers into seeing almost double the risk where it does not exist. Case 5–10, illustrated in Fig., is the case most often encountered in practice: consumer risk at 10% and producer risk at 5%. Buyers (who are indifferent to, or ignorant of, the disadvantages) apply sampling plans that follow these risk levels even though these plans work against the buyer himself.

For AQL and LTPD at 0.005 and 0.01, respectively, the plan PL(3000, 1400, 10) satisfies the risk conditions specified. Buyer risk is 0.098 and producer risk is 0.034. This plan is slightly easier to apply than case 5–5, given the smaller sample n and acceptance number c. Case 10–5 in Fig. represents a sampling plan that pleases the buyer and demonstrates his market power by putting the seller at a disadvantage. This case is actually quite frequent when the seller is a small or medium-sized establishment and the buyer is a large retailing or manufacturing firm; producer risk 1 − P(AQL p) has been placed at 10% while consumer risk P(LTPD c) remains at 5%. AQL continues to be 0.005. The resulting unique sampling plan is PL(3000, 1400, 9).

The buyer should be very pleased with this plan represented by a risk factor of 4.7%, while on the other side, the supplier finds his position weakened, as he is obligated to produce at a relatively high-quality rate AQL of 0.005 and must confront a risk factor of 9.7%. Once again, the difference is large between the outcomes of the hypergeometric and the binomial probability functions.
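The contrast between the two distributions in case 10–5 can be checked numerically. Below is an illustrative Python sketch (the paper's own code is in R) for the plan PL(3000, 1400, 9) with AQL = 0.5% and LTPD = 1.0%:

```python
from math import comb

def hyper_accept(c, N, K, n):
    """Exact P(x <= c): sampling without replacement from a lot of N with K defectives."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(c + 1)) / comb(N, n)

def binom_accept(c, p, n):
    """Binomial approximation: treats the n draws as independent at defect rate p."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(c + 1))

N, n, c = 3000, 1400, 9
aql, ltpd = 0.005, 0.01

# Hypergeometric (exact) risks, as in the text: ~4.7% and ~9.7%
beta_h = hyper_accept(c, N, round(ltpd * N), n)      # consumer risk
alpha_h = 1 - hyper_accept(c, N, round(aql * N), n)  # producer risk

# Binomial approximation of the same two risks
beta_b = binom_accept(c, ltpd, n)
alpha_b = 1 - binom_accept(c, aql, n)
print(f"hypergeometric: beta = {beta_h:.3f}, alpha = {alpha_h:.3f}")
print(f"binomial:       beta = {beta_b:.3f}, alpha = {alpha_b:.3f}")
```

With n/N close to one half, the finite-population correction matters; the binomial inflates both risk estimates substantially, which is the distortion the text describes.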


Fig. 5: ROC curve, hypergeometric and binomial sampling plans PL(3000, 1400, c): advantageous to the consumer. Case 10–5, AQL = 0.5% and LTPD = 1.0%

The last case in Table, where both consumer and producer risks are 10%, is not analyzed due to its very rare occurrence. What is unclear in traditional acceptance sampling is the necessity of linking AQL exclusively to the producer and LTPD exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. We also question why type I error is always associated with the producer as producer risk, and likewise, the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature, and the remainder of this article will elaborate a response.


In the next sections, we show that hypothesis test concepts from NP are relevant to practical applications of acceptance sampling, but only if the specific nature of the decision maker is taken into account.

Acceptance sampling via hypothesis tests

Historically, the work of Dodge and Romig ( ) appeared before the concepts of hypothesis testing received wide acceptance in practice. Their work depends exclusively on probability functions and the probabilistic interpretation of the concepts of producer and consumer risk, some years before Neyman and Pearson ( ) offered their seminal interpretation of type I and type II error. DR worked in industry and commerce, and consequently the design of acceptance sampling they developed, because of the innate conflict between buyers and sellers, was strictly applicable to this environment. Our review of hypothesis testing is at most a simple skeleton of the relevant area of scientific methodology, better elaborated in works like Rice (, chapter 9) and the original work of Neyman and Pearson ( ).

Nevertheless, our interpretation of acceptance sampling in light of hypothesis testing is new to the literature. First, we will concentrate on the nature and definition of the null hypothesis.

Simply stated, a hypothesis is a clear statement of a characteristic of a population, usually its numerical value, or of a relationship among characteristics (something happens associated with something else), that may or may not be true. It carries with itself a doubt that calls for evaluation. Hypotheses are not unique but come in pairs (or multiples, not reviewed here) of exclusive statements, in the sense that if one statement is true then the other is false. When the decision maker judges one of the hypotheses as true, he necessarily judges the other as false. The lot is conforming or non-conforming.

Vaccination drives reached the target population or not. Your candidate is winning the election campaign or is not winning. The accused is either innocent or guilty. From the viewpoint of the decision maker, the consequences of incorrectly rejecting one of the hypotheses are usually more severe than those of incorrectly rejecting the other. As we have seen above, lots are either conforming or non-conforming, and for the consumer, for instance, incorrectly accepting the non-conforming lot (committing a false negative) can be disastrous. In such a case, the null hypothesis is the statement that costs the most when wrongly judged (Rice ).

This nomenclature serves to organize relevant social or industrial questions or laboratory experiments. The null carries the symbol Ho, the alternative hypothesis Ha. From the consumer’s point of view, therefore, the null hypothesis is that the lot is non-conforming. Rejecting this null when it is true incurs extremely high costs for the consumer. In similar fashion but from the producer point of view, the null hypothesis is that the lot is conforming, because as mentioned already, rejecting this null has extremely high costs for the producer. We illustrate these differences in Table.

The hypothesis test attempts to classify the lot by accepting or rejecting the null usually by examining a small random sample. In Table, the decision maker indicates states of the null by examining a small sample of the population and consequently accepting or rejecting the null hypothesis. For the purpose of this article, we follow statistical methodology; a random sample from the relevant population indicates the state of the null. However, other methods are available outside the realm of Statistics, like flipping a coin or throwing seashells in a basket.

In the population itself, the null is, in reality, either true or false, even though this condition in the population is unknown to the decision maker, and continues to be unknown even after the sampling procedure. As shown in Table, the result of the acceptance sampling procedure can have one of four possible results.

Real states of the null hypothesis Ho in the population versus the decision maker's choice between states of Ho:

              Accept Ho        Reject Ho
  True Ho     Correct          Type I error
  False Ho    Type II error    Correct

Two quadrants are labeled as correct, and the other two as errors.


In general, we would like to maximize the probability of falling into the correct boxes and minimize the probability of error. Following NP, accepting as true the false null is a type II error, whereas rejecting a true null is a type I error. The exact definition of the null hypothesis is crucial and, as stated above, it should be defined as the condition that incurs the highest cost if chosen in error. The choice of which of the two hypotheses is to be the null depends, therefore, on the decision maker and on how he perceives the distinctive costs of the two errors.

In acceptance sampling, the statistical test of the validity of the null hypothesis is based on the relationship between x and c, given the value of n. When the null hypothesis suffers rejection, the researcher makes an inference as to the population value of the characteristic in the hypothesis test. However, the value of the characteristic is not a point estimate but rather only a probabilistic generalization of a region of values inferred from x and c.

In other words, the rejection of the null does not imply anything about the point value of p itself other than its role in determining the conformance of the lot. Even the construction of confidence intervals does not supply isolated point estimates of the population parameters, but rather an interval of probable variation around the point estimate.

We have assumed in the discussion above that the null for the producer is that the lot is conforming, or, analogously, that the production line is stable and producing a good product. The engineer who tries to correct problems that do not exist (he rejects the null when it is true) is wasting precious time in worthless activities. This is a type I error, the basis of the p value to the statistician and equivalent to producer risk for the industrial engineer. Increasing the value of the cutoff c will decrease the probability of type I error by making rejection of the lot more difficult. However, increasing the value of c makes acceptance easier and therefore will increase the probability of type II error, β p.
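The tradeoff can be made concrete: raising the cutoff c can only lower α and raise β. Below is a Python sketch (the paper uses R) with the AQL = 0.5% and LTPD = 1.0% thresholds of the earlier examples and an illustrative plan of N = 3000, n = 1600:

```python
from math import comb

def p_accept(c, N, K, n):
    """Hypergeometric probability of acceptance, P(x <= c)."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(c + 1)) / comb(N, n)

N, n = 3000, 1600
K_aql, K_ltpd = 15, 30            # AQL = 0.5%, LTPD = 1.0% of the lot
for c in range(9, 14):
    alpha = 1 - p_accept(c, N, K_aql, n)   # type I:  reject the conforming lot
    beta = p_accept(c, N, K_ltpd, n)       # type II: accept the non-conforming lot
    print(f"c = {c:2d}  alpha = {alpha:.3f}  beta = {beta:.3f}")
```

As c grows, α falls monotonically while β climbs, which is exactly the tradeoff described above.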


Considering that type II error is relatively less important for the producer, the tradeoff tends to be attractive for the producer. From the consumer’s side of the story, and contrary to the producer, the null should be that the lot is non-conforming; in other words, the consumer should naturally distrust the quality of the lot or process.

In traditional acceptance sampling, consumer risk has been exclusively the concern of the consumer, and producer risk the concern of the producer. Nonetheless, there is no conceptual reason to restrict each risk factor to only one adversary in the negotiation process.

Logically, there is no reason why the producer should not recognize and react to the probability of accepting the bad lot, what has been called up to now consumer risk. Accepting the bad lot is certainly a problem for the producer, although, as described earlier, a problem of secondary intensity. Likewise, rejecting the good lot and committing a false positive is also a problem for the consumer, but of only moderate intensity. From the viewpoint of the consumer we have LTPD c and P(LTPD c), and furthermore AQL c and P(AQL c). Here the consumer feels both risks: primarily the probability of accepting bad lots, P(LTPD c), and less intensely the probability of rejecting good lots, 1 − P(AQL c). The consumer can and should construct his sampling plan using both risks, recognizing that less consumer risk should be his objective, since its repercussions are more costly, while he tolerates more secondary risk.

Specifically, the consumer, for example, could use a P(LTPD c) of 3% and a 1 − P(AQL c) of 10%. The producer could follow analogous procedures. The producer will apply not only the risk pair AQL p and 1 − P(AQL p), as would be traditional, but also the risk pair LTPD p and P(LTPD p), recognizing that the producer suffers from his own secondary risk, although to a lesser degree. For example, the producer could set 1 − P(AQL p) to 1% and P(LTPD p) to 10%.

The correct routine for hypothesis testing is that first we elaborate the hypothesis by conceptualizing an important characteristic of the population, and only then, in a second step, are the relevant probabilities of the resulting sampled data calculated. More importantly, the state of the hypothesis in the population is usually unknown and will remain that way forever. Of course, one day in the future, end users may know the quality of the lot with certainty, depending upon the availability of all appropriate data. Nevertheless, even after ample time has passed, lot quality will remain elusive.

This section has been a first attempt at generalizing risk factors to both players.
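One way to realize the consumer's dual-risk requirement is a direct search over sample sizes: for each n, take the smallest c whose secondary risk 1 − P(AQL c) is at most 10%, then check whether the primary risk P(LTPD c) is at most 3%. The search procedure and the function names below are our illustration, not an algorithm from the paper, and the sketch is in Python rather than the paper's R; N = 3000, AQL c = 0.5%, and LTPD c = 1% follow the earlier examples.

```python
from math import comb

def p_accept(c, N, K, n):
    """Hypergeometric probability of acceptance, P(x <= c)."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(c + 1)) / comb(N, n)

def consumer_plan(N, K_aql, K_ltpd, primary_max=0.03, secondary_max=0.10):
    """Smallest sample size (searched in steps of 100) whose plan keeps the
    consumer's primary risk P(LTPD) <= primary_max and secondary risk
    1 - P(AQL) <= secondary_max."""
    for n in range(200, N + 1, 100):
        for c in range(n + 1):
            if 1 - p_accept(c, N, K_aql, n) <= secondary_max:
                if p_accept(c, N, K_ltpd, n) <= primary_max:
                    return n, c
                break   # a larger c only raises the primary risk; grow n instead
    return None

plan = consumer_plan(3000, 15, 30)   # AQL_c = 0.5%, LTPD_c = 1.0%
print(plan)
```

A plan always exists for large enough n (inspecting the whole lot drives both risks to zero), so the search terminates; the step size of 100 trades precision for speed.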

We have allowed consumers to recognize producer risk, and producers may now acknowledge consumer risk. However, we have kept the two decision makers each as a self-determining unit. In later sections, we attempt to generalize acceptance sampling to the case of both risks applying to both producer and consumer simultaneously.

Acceptance sampling from the viewpoint of the decision maker

As seen above, the definition of the null depends on point of view. The producer must decide on a limiting value for AQL p above which the lot is unacceptable.

In other words, if the fraction defective p is less than AQL p ( p ≤ AQL p), then the lot is defined by the producer as conforming. On the other hand, for the consumer, ( p ≤ LTPD c) defines the conforming lot. Under no circumstances should we assume that the values of AQL p and LTPD c are equal, nor should they be, given that they come from distinct decision makers on opposite sides of the negotiation.

The probability of type I errors (α P and α C), called primary risks, is given in Eqs. ( ) and ( ). Both equations represent the rejection of the respective null when it is true. Similarly, the probability of type II errors (β P and β C), here called secondary risks, is given in Eqs. ( ) and ( ). These four equations can be collapsed back to the original Eqs. ( ) and ( ) by assuming unique values of c and n and assuming AQL p = AQL c and LTPD c = LTPD p.

Consequently, α P = β C and α C = β P. Considering that the producer and the consumer are independent decision makers, there is no reason to expect these equalities in the real world. It is essential for the logic of this paper to understand the relative importance of FP and FN for the decision makers. For producers, FPs are more important, and for consumers, FNs are more important as illustrated in the next tables. One further equation completes the concepts for hypothesis testing: α.
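The four risks, two per decision maker, can be written out explicitly. The sketch below is in Python (the paper uses R) with illustrative thresholds and plan of our choosing; it also verifies the collapse noted above, α P = β C and α C = β P, when both parties share thresholds and a single plan:

```python
from math import comb

def p_accept(c, N, K, n):
    """Hypergeometric probability of acceptance, P(x <= c)."""
    return sum(comb(K, x) * comb(N - K, n - x) for x in range(c + 1)) / comb(N, n)

N, n, c = 3000, 1600, 11              # illustrative shared plan
K_aql, K_ltpd = 15, 30                # shared AQL = 0.5%, LTPD = 1.0%

# Producer (null: the lot is conforming)
alpha_p = 1 - p_accept(c, N, K_aql, n)   # type I:  reject the conforming lot
beta_p = p_accept(c, N, K_ltpd, n)       # type II: accept the non-conforming lot

# Consumer (null: the lot is non-conforming)
alpha_c = p_accept(c, N, K_ltpd, n)      # type I:  accept the non-conforming lot
beta_c = 1 - p_accept(c, N, K_aql, n)    # type II: reject the conforming lot

# With shared thresholds and one plan, the four risks collapse to two:
print(alpha_p == beta_c, alpha_c == beta_p)   # True True
```

With distinct thresholds (AQL p ≠ AQL c, LTPD p ≠ LTPD c) the four quantities no longer coincide, which is the generalization the text argues for.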