The process capability index has become an efficient tool for measuring a supplier's process performance. The weighted standard deviation (WSD) index is one popularly used index for assessing process capability when the process violates the normality assumption. Unfortunately, this index cannot accurately reflect the process yield, so comparing its calculated value with the capability requirement may lead to a seriously mistaken judgment about whether the process meets that requirement. Hence, this study modifies the WSD index to provide an adequate measure of lognormal process capability. In addition, an estimator of the modified index is also provided. Simulations show that the bias of this estimator is slight and that the coverage probability of the associated capability test is close to the nominal confidence level. This means that our proposed method is suitable for practical use.
Today, increasing numbers of enterprises outsource the procurement of services or products from outside suppliers in order to cut costs.1 One of the biggest reasons why both small businesses and larger companies outsource is that outsourcing cuts the cost of hiring and training employees while still fulfilling labor needs. In this manner, a business allocates tasks to its outsourcing partner, which shares the workload of the business's own employees. This allows businesses to develop their internal task forces and use them more efficiently.
In short, firms utilize outsourcing as a strategic tool to leverage globally dispersed resources in order to focus on their core competencies and improve efficiency in this increasingly competitive business environment.2 However, it must be noted that the more the firms rely on outsourcing, the more they depend on their suppliers, and the more important it is to manage and develop suppliers in order to achieve and maximize the benefits of outsourcing.2 Hence, a business’ success is highly dependent on its interactions with suppliers. As noted by Wu et al.,3 a suitable supplier can substantially decrease production lead time, reduce purchasing costs, strengthen corporate competitiveness, and increase customer satisfaction. Obviously, an effective method for solving the supplier selection problem is essential to a business. An important consideration for a manufacturer is how to select, evaluate, observe, and finally create a cooperative relationship with existing or new suppliers.4 Supplier selection has increasingly been regarded as one of the most important strategies in the globalization era.5
In past years, various criteria for selecting better suppliers have been investigated. Dickson6 concluded that quality and delivery are two of the items most demanded by component manufacturers. Moreover, Weber et al.7 considered quality as having "extreme importance" and delivery as having "considerable importance" to manufacturers. Furthermore, Pearson and Ellram8 surveyed 210 members of the National Association of Purchasing Management and indicated that quality is the most important criterion for selecting a supplier. More recently, Olhager and Selldin9 surveyed 128 Swedish manufacturing firms and concluded that although many aspects are important when companies choose supply chain partners, quality is the most important criterion. In summary, quality is regarded as the most fundamental factor for supplier selection.
The first step in supplier selection, however, is accurately evaluating each supplier's performance to determine the list of qualified suppliers. The principal purpose of supplier evaluation is to ensure that a portfolio of best-in-class suppliers is available for use.10 Supplier evaluation is a process applied to current suppliers in order to measure and monitor their performance for the purposes of mitigating risk, reducing costs, and driving continuous improvement.11
Since accurately evaluating a supplier's performance is important, the principal purpose of this study is to build a practical procedure for accurately measuring a supplier's process performance in terms of quality, the primary criterion for supplier selection. Among various quality assurance activities, process capability indices (PCIs) are widely used to interpret process capability because they are straightforward and easy to use. Using PCIs, a quality engineer can trace and improve a poor process, and a downstream manufacturer can efficiently assess a supplier's process performance to ensure reliable decision-making on whether to purchase goods from that supplier.
The book by Kotz and Lovelace12 provides a complete illustration of the development history of PCIs. Kotz and Johnson13 reviewed approximately 170 important studies of PCIs and conducted a clear and penetrating analysis of them. Spiring et al.14 reviewed related papers published in 1992–2002 and categorized them systematically. Wu et al.15 discussed the relationships between quality assurance and some major PCIs in relation to four aspects, namely, process consistency, process relative departure, process yield, and expected relative loss. In addition to the review papers above, see the literature16–22 for more discussion of PCIs.
Among various PCIs, the index Cpk23 remains the most widely used because it can provide bounds on the process yield for normally distributed processes;24 that is, 2Φ(3Cpk) − 1 ≤ yield ≤ Φ(3Cpk),25 where Φ is the cumulative distribution function of the standard normal distribution. However, processes are often non-normal in real-world applications. For non-normal processes, considerable evidence has indicated that Cpk will mislead managers into making incorrect decisions.
Because applying Cpk to assess non-normal process capability can lead to inappropriate decisions, approaches for adapting PCIs to non-normal populations have developed rapidly in past years.24 One popularly used method was proposed by Chang et al.26 They provide an index, hereafter called the WSD index, that adjusts Cpk in accordance with the degree of skewness of the underlying population by using different factors in computing the deviations above and below the process mean.26
The WSD index provides a calculation for non-normal process capability, but the calculated value can exhibit serious bias relative to the actual yield. To overcome this drawback, this study modifies the WSD index to obtain a suitable evaluation of non-normal process capability. The lognormal distribution is considered as the process distribution, and, similar to Liao et al.,24 curve fitting is used as the major technique for achieving our goal. The reasons for selecting the WSD index for modification are as follows: (1) it is one of the well-known indices for assessing non-normal process capability and (2) the study of Chang et al.26 showed that it performs better than another index based on the weighted variance method27 in assessing lognormal process capability. In addition, the study of Wu and Swain27 showed that the latter index performs better than the other well-known flexible index28 and Clements' method,29 a method designed for assessing non-normal process capability, for lognormal processes. Since the WSD index outperforms several well-known indices, we choose it for modification.
The remainder of this article is organized as follows. Section "Weighted standard deviation index" introduces the WSD index. Section "Modified weighted standard deviation index" illustrates how to apply curve-fitting techniques to obtain a correction factor for the WSD index for lognormally distributed processes. In this manner, the modified index can interpret lognormal process capability more suitably than the original index. Section "Point estimator for the modified index" provides an estimator of the modified index. For capability-testing purposes, section "Confidence interval for testing capability" proposes a procedure based on standard bootstrapping to derive the lower confidence bound for the modified index. Section "An example" examines the applicability of our approach through an example. Some conclusions are drawn in section "Conclusion."
Weighted standard deviation index
For normally distributed processes, the most widely used index, Cpk, is defined explicitly as follows

Cpk = min{(USL − μ)/(3σ), (μ − LSL)/(3σ)}

where μ is the process mean, σ is the process standard deviation, and USL and LSL are the upper and lower specification limits, respectively. Table 1 lists five Cpk values and their corresponding yields and non-conformities in units of parts per million (NCPPM), computed from the formula yield = 2Φ(3Cpk) − 1. A quality engineer can thus obtain the process yield or NCPPM for a given Cpk value.
Table 1. Some Cpk values and their corresponding yields and non-conformities.

Cpk     Yield           NCPPM
1.00    0.9973002039    2699.796
1.33    0.9999339267    66.073
1.50    0.9999932047    6.795
1.67    0.9999994557    0.544
2.00    0.9999999980    0.002

NCPPM: non-conformities in units of parts per million.
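The Table 1 entries follow directly from yield = 2Φ(3Cpk) − 1 and NCPPM = (1 − yield) × 10^6. A minimal sketch in Python (the function name is ours, not the paper's):

```python
# Convert a Cpk value into the yield and NCPPM implied by
# yield = 2*Phi(3*Cpk) - 1 for a normally distributed process.
from scipy.stats import norm

def yield_and_ncppm(cpk):
    y = 2 * norm.cdf(3 * cpk) - 1      # process yield
    return y, (1 - y) * 1e6            # non-conformities per million

y, ncppm = yield_and_ncppm(1.00)       # ~0.9973002039, ~2699.796
```

Evaluating the function at the five Cpk values of Table 1 reproduces the tabulated yields and NCPPMs.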
However, many process distributions violate the normality assumption. In the past, many approaches have been developed for assessing the capabilities of non-normal populations. Chang et al.26 classified these approaches. As mentioned in the works by Chang et al.26 and Pearn and Kotz,30 the first category involves the use of data transformation techniques. The second category involves the replacement of the unknown process distribution by an empirical distribution or by a known three- or four-parameter distribution. The third category involves a modification of the standard definition of PCIs in order to increase their robustness. The fourth category involves the use of heuristic arguments to develop new indices.
Among the methods mentioned above, a heuristic adjustment of Cpk, namely, the weighted standard deviation (WSD) index CpkWSD, is the most well known for assessing non-normal process capability. Suppose that X is the quality characteristic. Chang et al.26 used the WSD method to adjust Cpk, obtaining the index defined as follows

CpkWSD = min{(USL − μ)/(6Pxσ), (μ − LSL)/(6(1 − Px)σ)}

where Px = P(X ≤ μ). If the population is symmetric, then Px is equal to 1/2, and CpkWSD reduces to Cpk.26
Generally, we cannot observe the entire population of the quality characteristic; therefore, we need to draw a sample from a stable process to estimate the WSD index. Suppose that the sample observations are x1, x2, …, xn, where n is the sample size. Chang et al.26 proposed the following estimator

ĈpkWSD = min{(USL − x̄)/(6P̂x s), (x̄ − LSL)/(6(1 − P̂x)s)}

where x̄ is the sample mean, s is the sample standard deviation, P̂x = (1/n)Σ I(xi ≤ x̄), I(xi ≤ x̄) = 1 if xi ≤ x̄, and I(xi ≤ x̄) = 0 if xi > x̄.
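The sample estimator above can be sketched in a few lines of Python; this is a reconstruction from the definitions in this section, and the function name is ours:

```python
# Sample WSD-index estimator: the weights 6*p and 6*(1-p) replace the usual
# 3-sigma denominators of Cpk, where p is the fraction of observations at or
# below the sample mean.
import numpy as np

def est_wsd_cpk(x, lsl, usl):
    x = np.asarray(x, dtype=float)
    xbar = x.mean()                      # sample mean
    s = x.std(ddof=1)                    # sample standard deviation
    p = np.mean(x <= xbar)               # estimate of P(X <= mean)
    return min((usl - xbar) / (6 * p * s),
               (xbar - lsl) / (6 * (1 - p) * s))

# For a symmetric sample (p = 1/2), the estimate reduces to plain Cpk.
x = [1.0, 2.0, 4.0, 5.0]                 # mean 3, two points on each side
wsd = est_wsd_cpk(x, lsl=-3.0, usl=9.0)
```

With p = 1/2, both denominators become 3s, so the value coincides with the usual Cpk estimate, as the symmetry remark above requires.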
This study considers a lognormal process distribution. As noted by Limpert et al.,31 many measurements show a more or less skewed distribution. Skewed distributions are particularly common when mean values are low, variances are large, and values cannot be negative; such skewed distributions often fit the lognormal distribution closely.31 Moreover, Huang et al.32 indicated that observed quality data often follow a positively skewed distribution, such as the lognormal distribution, in many real-world applications. Therefore, it is worthwhile to discuss lognormal process capability assessment in this study. If X follows a lognormal distribution, the probability density function (PDF) of X can be expressed as follows
f(x) = [1/(xω√(2π))] exp{−(ln x − θ)²/(2ω²)}, x > 0

where θ is the location parameter and ω is the scale parameter. Figure 1(a)–(c) shows plots of the PDFs of the lognormal distribution for θ = 0.15, 0.25, and 0.35 with ω = 0.230, 0.323, and 0.417. These figures show that as ω increases, the PDF becomes more skewed.
Figure 1. Plots of the PDF of lognormal distributions with (a) θ = 0.150 and various ω, (b) θ = 0.250 and various ω, and (c) θ = 0.350 and various ω.
To show that the WSD index results in serious bias relative to the actual yield, we set θ = 0.15, 0.25, and 0.35 and ω = 0.230, 0.253, 0.277, 0.300, 0.323, 0.347, 0.370, 0.393, and 0.417 and calculate the actual NCPPM values under the given specification limits LSL = 0.12 and USL = 4.5. Notably, the parameters θ and ω considered here limit the actual NCPPM to between 0.001 and 3000 because the NCPPM values of processes subjected to capability analysis often fall in this range (see Table 1). Therefore, both a very large NCPPM condition and an extremely small NCPPM condition are excluded from our discussion. On the basis of these distributions, we also calculate the CpkWSD values via equation (2) and derive the corresponding NCPPMs reflected by these values. All results are summarized in Table 2. Notably, the "reflected NCPPM" is obtained as [1 − (2Φ(3CpkWSD) − 1)] × 10^6.
Table 2. Actual NCPPM and NCPPM reflected by the WSD index for various lognormal distributions.

θ      ω      Actual NCPPM   Calculated CpkWSD   Reflected NCPPM
0.150  0.230  0.002          1.416               21.589
0.150  0.253  0.043          1.297               99.329
0.150  0.277  0.508          1.195               338.513
0.150  0.300  3.187          1.111               854.688
0.150  0.323  13.814         1.040               1806.758
0.150  0.347  47.653         0.976               3424.466
0.150  0.370  126.271        0.922               5697.209
0.150  0.393  285.017        0.874               8758.424
0.150  0.417  582.783        0.829               12,834.400
0.250  0.230  0.025          1.431               17.628
0.250  0.253  0.358          1.311               83.755
0.250  0.277  2.986          1.207               292.863
0.250  0.300  14.559         1.123               753.832
0.250  0.323  51.674         1.051               1618.458
0.250  0.347  150.722        0.986               3108.247
0.250  0.370  350.231        0.931               5225.356
0.250  0.393  708.817        0.883               8103.810
0.250  0.417  1317.558       0.838               11,967.630
0.350  0.230  0.261          1.445               14.649
0.350  0.253  2.539          1.324               71.678
0.350  0.277  15.476         1.219               256.589
0.350  0.300  59.805         1.134               672.212
0.350  0.323  176.461        1.061               1463.819
0.350  0.347  440.713        0.995               2845.310
0.350  0.370  906.935        0.939               4829.145
0.350  0.393  1659.246       0.891               7549.606
0.350  0.417  2823.783       0.845               11,228.490

NCPPM: non-conformities in units of parts per million.
Table 2 indicates that the WSD index results in serious bias in reflecting the actual NCPPM; it underestimates the process performance. In current practice, a process is termed "Inadequate" if Cpk < 1.00, "Marginally Capable" if 1.00 ≤ Cpk < 1.33, "Satisfactory" if 1.33 ≤ Cpk < 1.50, "Excellent" if 1.50 ≤ Cpk < 2.00, and "Super" if Cpk ≥ 2.00.33 If a process satisfies the normality assumption, then it can be graded by its Cpk value. For instance, if the Cpk value is 1.40, then the process is categorized as Satisfactory. However, processes are often non-normal, in which case we can only use the WSD index instead of Cpk to assess process capability. In current use, the decision rule for determining process performance based on the WSD index is the same as that based on Cpk. For instance, if the calculated WSD value is also 1.40, then the process is likewise categorized as Satisfactory.
Therefore, we consider the case in which θ = 0.150 and ω = 0.300 to explain why the WSD index underestimates the process performance. On the basis of the preset θ and ω, we can determine that the proportion outside the specification limits (USL = 4.50 and LSL = 0.12) is 3.187 × 10−6; that is, the NCPPM is 3.187. According to Table 1, a process whose NCPPM is 3.187 can be categorized as Excellent. However, by definition, the CpkWSD value is 1.111, so the process, according to the decision rule above, must be categorized as only Marginally Capable. Therefore, the WSD index underestimates the process performance as well as the process yield.
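The worked case above can be reproduced numerically. A sketch, assuming the standard lognormal moment formulas and the WSD definition of this section (variable names are ours):

```python
# Case theta = 0.15, omega = 0.30 with LSL = 0.12 and USL = 4.5:
# actual NCPPM, population WSD index, and the NCPPM the index reflects.
import numpy as np
from scipy.stats import norm, lognorm

theta, omega, lsl, usl = 0.15, 0.30, 0.12, 4.5
dist = lognorm(s=omega, scale=np.exp(theta))

actual_ncppm = (dist.cdf(lsl) + dist.sf(usl)) * 1e6            # about 3.187

mu = np.exp(theta + omega**2 / 2)                              # process mean
sigma = np.sqrt((np.exp(omega**2) - 1) * np.exp(2 * theta + omega**2))
px = norm.cdf(omega / 2)                                       # P(X <= mu)
cwsd = min((usl - mu) / (6 * px * sigma),
           (mu - lsl) / (6 * (1 - px) * sigma))                # about 1.111

reflected_ncppm = (1 - (2 * norm.cdf(3 * cwsd) - 1)) * 1e6     # about 854.7
```

The gap between an actual NCPPM of about 3.2 and a reflected NCPPM of about 855 is exactly the Excellent-versus-Marginally-Capable mismatch discussed above.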
Modified weighted standard deviation index
Since the WSD index can significantly underestimate the process yield, this section modifies it by multiplying it by a correction factor. This reduces the bias of yield estimation and increases the reliability of decision-making in capability analysis.
Suppose that our modified WSD index is defined as the original index CpkWSD multiplied by the correction factor.
To derive the correction factor, we attempt to make our modified index reflect the actual NCPPM as closely as possible. That is, the correction factor is defined such that the ratio of the NCPPM reflected by the modified index to the actual NCPPM is approximately unity but does not exceed 1. To achieve this goal, we calculate the required correction-factor values for the parameters θ = 0.150, 0.175, 0.200, 0.225, 0.250, 0.275, 0.300, 0.325, and 0.350 and ω = 0.230, 0.253, 0.277, 0.300, 0.323, 0.347, 0.370, 0.393, and 0.417. As a result, a total of 81 values are obtained.
Since the correction factor is a function of θ and ω, we let it be the dependent variable, with θ and ω as the independent variables. Curve fitting then yields the model given in equation (6).
The coefficient of determination of this model is R2 = 99.99%; therefore, we conclude that the model fits those values sufficiently well. Since our goal is to make the aforementioned ratio equal to 1, the correction factor is taken to be the fitted function in equation (6).
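The curve-fitting step can be sketched as an ordinary least-squares fit of a quadratic response surface over the 9 × 9 parameter grid. The response values below are a hypothetical placeholder, since the paper's fitted coefficients in equation (6) are not reproduced here:

```python
# Quadratic response-surface fit of the correction factor on (theta, omega),
# with R^2 computed from the residuals.
import numpy as np

theta = np.linspace(0.150, 0.350, 9)
omega = np.linspace(0.230, 0.417, 9)          # approximately the paper's grid
T, W = np.meshgrid(theta, omega, indexing="ij")
t, w = T.ravel(), W.ravel()                   # 81 (theta, omega) points

# Hypothetical correction-factor surface standing in for the true values.
y = 1.2 - 0.3 * t - 0.5 * w + 0.4 * t * w + 0.2 * w**2

X = np.column_stack([np.ones_like(t), t, w, t**2, w**2, t * w])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()                # coefficient of determination
```

With the paper's 81 computed correction-factor values in place of the synthetic surface, the same code yields the fitted model and its R2.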
Figure 2(a)–(c) shows plots of the actual NCPPM and the NCPPM reflected by the WSD index and the modified index with respect to ω for θ = 0.150, 0.250, and 0.350, separately. All figures indicate that the NCPPM values derived from the modified index are much closer to the actual NCPPM than those derived from the WSD index. Therefore, our modified index can provide a more suitable interpretation of process capability.
Figure 2. Actual and reflected NCPPM calculated by the WSD index and the modified index for (a) θ = 0.150 and various ω, (b) θ = 0.250 and various ω, and (c) θ = 0.350 and various ω.
Point estimator for the modified index
Intuitively, we can estimate the modified index by substituting the sample observations into its formula, with the WSD index replaced by its estimator ĈpkWSD. However, this plug-in estimator is not an unbiased estimator of the modified index. Therefore, this section also uses curve fitting to modify this estimator.
To show the relative bias of the plug-in estimator, we calculate its values for the various sample sizes n = 10(10)200 and the lognormal distributions considered in section "Modified weighted standard deviation index"; therefore, a total of 20 × 81 = 1620 combinations are considered here. Since the sampling distribution of the estimator is complex and mathematically intractable, we simulate N = 10,000 estimates for each combination. The relative bias for each combination is then estimated as the difference between the average of these estimates and the true index value, divided by the true value. Notably, the sample mean and sample standard deviation of the log-transformed data are used to estimate the parameters θ and ω. According to the results, the relative bias of the plug-in estimator is somewhat significant; hence, we modify the estimator in the following.
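The relative-bias computation for a single (θ, ω, n) combination can be sketched as follows, using the WSD estimator for illustration (true index value from the population formulas; replication count reduced from the paper's N = 10,000):

```python
# Monte Carlo estimate of the relative bias of the sample WSD estimator
# for one (theta, omega, n) combination.
import numpy as np
from scipy.stats import norm

theta, omega, lsl, usl, n, reps = 0.15, 0.30, 0.12, 4.5, 100, 2000
rng = np.random.default_rng(0)

# True (population) WSD index.
mu = np.exp(theta + omega**2 / 2)
sigma = np.sqrt((np.exp(omega**2) - 1) * np.exp(2 * theta + omega**2))
px = norm.cdf(omega / 2)
true_cwsd = min((usl - mu) / (6 * px * sigma),
                (mu - lsl) / (6 * (1 - px) * sigma))

def est_wsd_cpk(x, lsl, usl):
    xbar, s = x.mean(), x.std(ddof=1)
    p = np.mean(x <= xbar)
    return min((usl - xbar) / (6 * p * s),
               (xbar - lsl) / (6 * (1 - p) * s))

ests = [est_wsd_cpk(rng.lognormal(theta, omega, n), lsl, usl)
        for _ in range(reps)]
rel_bias = (np.mean(ests) - true_cwsd) / true_cwsd   # (avg - true) / true
```

Repeating this over all 1620 combinations produces the bias surface that the correction in equation (7) is fitted to.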
In the same manner, we can obtain a total of 1620 correction-factor values over the combinations of the parameters θ, ω, and n. Since this correction factor is a function of θ, ω, and n, we let it be the dependent variable, with θ, ω, and n as the independent variables. Curve fitting then yields the model given in equation (7).
The coefficient of determination of this model is R2 = 99.90%; therefore, we conclude that the model fits those values sufficiently well. Since our goal is to eliminate the relative bias, the correction factor for the plug-in estimator is taken to be the fitted function in equation (7), and the modified estimator is the plug-in estimator multiplied by this factor.
Since the parameters are unknown in practice, we again recommend the sample mean and sample standard deviation of the log-transformed data for estimating θ and ω. The estimates of the correction factor and the modified index can be obtained in this manner.
To compare the plug-in estimator with the modified estimator, we conduct Monte Carlo simulations with 10,000 replications. In these simulations, we use the log-transformed sample mean and standard deviation to estimate θ and ω and apply them to both estimators. Figure 3(a)–(f) shows plots of the relative biases (in percentage) of these estimators for the parameters θ = 0.150, 0.250, and 0.350 and ω = 0.230 and 0.417. The figure shows that the modified estimator significantly reduces the estimation bias; moreover, the absolute values of its relative biases are almost all smaller than 0.5%. Therefore, we conclude that the modified estimator is more accurate than the plug-in estimator.
Figure 3. Relative biases of the plug-in and modified estimators for (a) θ = 0.150 and ω = 0.230, (b) θ = 0.250 and ω = 0.230, (c) θ = 0.350 and ω = 0.230, (d) θ = 0.150 and ω = 0.417, (e) θ = 0.250 and ω = 0.417, and (f) θ = 0.350 and ω = 0.417.
In this study, the parameters θ and ω and the specification limits USL and LSL are set before we derive the correction factors. However, our proposed correction factors remain applicable even if θ, ω, or the specification limits vary because the modified index and its estimator are both functions of θ, ω, USL, and LSL.
Confidence interval for testing capability
Suppose that the capability requirement is c0, which is usually set as 1.00, 1.33, 1.50, 1.67, or 2.00, as listed in Table 1. The hypotheses for testing process capability can be defined as H0: the modified index ≤ c0 versus H1: the modified index > c0. The modified estimator can accurately estimate the modified index, but we cannot compare it directly with c0 to determine whether the null hypothesis should be rejected; the sampling error must be taken into account. However, the sampling distribution of the modified estimator is also complex and mathematically intractable; therefore, this study recommends standard bootstrapping34,35 to obtain the lower confidence bound L for the modified index. If L ≥ c0, then we reject the null hypothesis and conclude that the process meets the capability requirement; otherwise, we cannot reject the null hypothesis, and the process cannot be judged capable. The bootstrap procedure is described as follows:
Step 1. Set the number of bootstrap samples B.
Step 2. Draw a random sample of size n from a stable process and record the critical quality characteristic observations as x1, x2, …, xn.
Step 3. Draw a sample of size n from x1, x2, …, xn with replacement. This is a bootstrap sample x1*, x2*, …, xn*.
Step 4. Compute the estimate of the modified index based on the bootstrap data x1*, x2*, …, xn*.
Step 5. Repeat steps 3 and 4 B times to obtain B bootstrap estimates of the modified index.
Step 6. Compute the average value and the standard deviation of those B bootstrap estimates.
Step 7. Let α be the allowable type I error rate. The standard bootstrap lower confidence bound is L = (bootstrap average) − zα × (bootstrap standard deviation), where zα is the upper α percentile of the standard normal distribution.
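Steps 1–7 can be sketched in Python. For self-containment, this sketch bootstraps the sample WSD estimator; the paper bootstraps the modified estimator, whose fitted correction (equation (7)) is not reproduced here, and the function names are ours:

```python
# Standard-bootstrap lower confidence bound: mean of the bootstrap estimates
# minus z_alpha times their standard deviation (steps 1-7).
import numpy as np
from scipy.stats import norm

def est_wsd_cpk(x, lsl, usl):
    xbar, s = x.mean(), x.std(ddof=1)
    p = np.mean(x <= xbar)
    return min((usl - xbar) / (6 * p * s),
               (xbar - lsl) / (6 * (1 - p) * s))

def lower_bound(x, lsl, usl, b=2000, alpha=0.05, seed=None):
    rng = np.random.default_rng(seed)
    boot = np.array([est_wsd_cpk(rng.choice(x, size=x.size, replace=True),
                                 lsl, usl) for _ in range(b)])     # steps 3-5
    return boot.mean() - norm.ppf(1 - alpha) * boot.std(ddof=1)    # steps 6-7

x = np.random.default_rng(42).lognormal(0.15, 0.30, 100)  # simulated sample
L = lower_bound(x, lsl=0.12, usl=4.5, b=500, seed=1)
```

Because the bound subtracts a positive multiple of the bootstrap spread, L always falls below the point estimate, as a one-sided lower bound should.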
An example
In the following, we consider the example provided by Pearn et al.36 of manufacturing thin-film transistor liquid-crystal display (TFT-LCD) products. The example assumes that the color filter is the key component of a TFT-LCD and that the "thickness" of the color filter is the critical-to-quality (CTQ) factor. If the thickness of the color filter is not in control, then the TFT-LCD product might exhibit some degree of aberration.37
Suppose that a downstream manufacturer of TFT-LCD products wants to evaluate his supplier's process capability to determine whether his company should accept the products. He decides to use process capability analysis to make further decisions; therefore, the modified WSD index is adopted. The process specification limits for the thickness of the specific TFT-LCD component are LSL = 10 mm and USL = 20 mm. According to his company's requirement, the process must be at least Marginally Capable; that is, c0 is determined as 1.00, and the statistical hypotheses are defined as H0: the modified index ≤ 1.00 versus H1: the modified index > 1.00. If H0 is rejected, then the manufacturer can accept his supplier's products.
After randomly sampling products from a stable process, a batch of products with sample size n = 100 is collected (see Table 3). Figure 4 is the histogram of those quality observations. It shows that the process distribution is right-skewed, confirming that the process does not satisfy the normality assumption. In fact, past experiments indicate that the produced thickness data follow a lognormal distribution. Hence, the manufacturer uses equation (3) to obtain the WSD estimate 1.130. Since both θ and ω are unknown, he uses the sample mean and standard deviation of the log-transformed data to estimate them and thereby obtains the estimate of the correction factor; applying this factor yields the estimate of the modified index. Since the sampling error must be considered in statistical testing, bootstrapping is adopted to evaluate the 95% lower confidence bound. On the basis of B = 2000 bootstrap samples, the 95% lower confidence bound is L = 1.050. Since L = 1.050 ≥ 1.00, the manufacturer rejects H0 and accepts the supplier's products.
Table 3. The 100 measurements of the color filter thickness (unit: mm).

14.616  12.923  15.641  13.171  14.338  13.927  13.743  16.091  15.443  14.731
15.497  15.313  16.209  14.428  14.906  14.004  16.544  18.237  16.176  13.258
15.199  14.060  14.457  15.857  14.495  14.164  16.139  16.248  16.635  15.940
15.517  13.897  14.520  17.696  15.015  14.025  16.705  14.419  14.199  14.556
13.062  14.646  13.094  13.368  15.312  13.332  15.211  13.374  16.571  13.294
15.261  14.576  16.023  13.469  12.417  15.725  13.199  16.808  13.700  16.532
13.877  15.182  17.248  15.459  15.860  12.413  14.721  16.608  15.563  18.877
17.722  14.788  17.317  17.245  17.026  16.119  14.683  13.115  15.263  15.117
15.724  14.422  16.172  13.539  15.234  18.052  18.464  14.396  14.692  15.465
14.521  13.042  14.747  15.264  13.975  13.285  16.416  13.780  14.200  17.004
Figure 4. Histogram of the color filter thickness.
Practicability of the modified index
In section "An example," we provided a TFT-LCD quality-testing example to show the usefulness of our proposed method. In our opinion, not only TFT-LCD quality characteristics but almost any lognormally distributed quality characteristic can be tested with our proposed method. Another example of lognormal quality data arises in evaluating the quality of engine oil in the automotive industry: quality engineers measure the percent viscosity increase (PVIS) of engine oil after it has been tested for a specific period of time, and the PVIS data follow a lognormal distribution.32 In addition, products with lognormal quality characteristics can indeed be found in many applications, such as chemical, semiconductor, cutting, drilling, grinding, and polishing processes. In these processes, the CTQ factor varies and may include thickness, flatness, depth, length, width, electric current, hardness, concentration, strength, operational life, viscosity, or noise, all of which might follow a lognormal distribution. The practitioner can use the chi-square goodness-of-fit test to check whether the quality characteristic of interest follows a lognormal distribution; if it passes the test, our proposed method can be applied.
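The chi-square goodness-of-fit check mentioned above can be sketched as follows; the data here are simulated stand-ins, the bin edges are equal-probability quantiles of the fitted lognormal, and ddof=2 accounts for the two fitted parameters:

```python
# Chi-square goodness-of-fit test for lognormality of a quality characteristic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.15, sigma=0.30, size=200)     # stand-in quality data

logx = np.log(x)
theta_hat, omega_hat = logx.mean(), logx.std(ddof=1)   # fitted theta, omega

k = 10                                                 # number of bins
edges = stats.lognorm.ppf(np.linspace(0, 1, k + 1),
                          s=omega_hat, scale=np.exp(theta_hat))
observed, _ = np.histogram(x, bins=edges)
expected = np.full(k, x.size / k)                      # equal-probability bins
chi2, p = stats.chisquare(observed, expected, ddof=2)
# A large p-value gives no evidence against the lognormal model.
```

In practice, x would be the measured CTQ data rather than a simulated sample.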
In the following, we provide Monte Carlo simulations to discuss the performance of our proposed method. The major criterion for measuring this performance is the coverage probability, the proportion of the time that the confidence interval contains the true value of interest.38 If the coverage probability is too large, our proposed method must be judged conservative, whereas if it is too small, the method must be judged liberal.39
The lognormal parameters in the simulation are set as θ = 0.150, 0.175, 0.200, 0.225, 0.250, 0.275, 0.300, 0.325, and 0.350 and ω = 0.230, 0.253, 0.277, 0.300, 0.323, 0.347, 0.370, 0.393, and 0.417. The specification limits are set as LSL = 0.12 and USL = 4.5. As mentioned in section "Weighted standard deviation index," the parameters θ and ω considered here limit the actual NCPPM to between 0.001 and 3000. Notably, this setting covers almost all lognormal processes of practical interest because the actual NCPPM values of most real-world manufacturing processes fall in this range.
The coverage probabilities for sample sizes n = 100 and 200, obtained from 3000 replications in the Monte Carlo simulations, are listed in Tables 4 and 5, respectively. Those coverage probabilities are close to the nominal confidence level of 0.95, especially when the sample size is sufficiently large. On the basis of our simulation results, we conclude that our proposed method is suitable for testing lognormal process capability.
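The coverage-probability computation itself is simple to sketch. As a fast, exactly calibrated stand-in for the bootstrap bound (which is slow to simulate at full scale), the sketch below uses the one-sided t lower bound for the lognormal location parameter θ; its coverage should sit near the nominal 0.95:

```python
# Monte Carlo coverage probability of a one-sided 95% lower confidence bound.
import numpy as np
from scipy.stats import t as t_dist

theta, omega, n, reps, alpha = 0.15, 0.30, 100, 2000, 0.05
tcrit = t_dist.ppf(1 - alpha, df=n - 1)
rng = np.random.default_rng(7)

hits = 0
for _ in range(reps):
    logx = rng.normal(theta, omega, size=n)    # ln X of a lognormal sample
    lower = logx.mean() - tcrit * logx.std(ddof=1) / np.sqrt(n)
    hits += int(theta >= lower)                # bound covers the true value
coverage = hits / reps
```

Replacing the t bound with the bootstrap bound for the modified index, and looping over the (θ, ω) grid, yields entries of the kind tabulated below.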
Table 4. Coverage probabilities in the simulations for n = 100.

θ \ ω   0.230  0.253  0.277  0.300  0.323  0.347  0.370  0.393  0.417
0.150   0.969  0.964  0.960  0.968  0.969  0.963  0.964  0.967  0.964
0.175   0.964  0.964  0.966  0.963  0.972  0.971  0.964  0.968  0.970
0.200   0.968  0.971  0.970  0.967  0.967  0.957  0.970  0.972  0.971
0.225   0.962  0.967  0.968  0.962  0.964  0.969  0.967  0.963  0.964
0.250   0.965  0.969  0.965  0.962  0.971  0.965  0.963  0.965  0.970
0.275   0.966  0.968  0.964  0.969  0.968  0.968  0.966  0.968  0.964
0.300   0.960  0.960  0.967  0.959  0.962  0.968  0.963  0.971  0.961
0.325   0.959  0.966  0.968  0.962  0.970  0.968  0.969  0.975  0.972
0.350   0.957  0.965  0.966  0.970  0.962  0.966  0.964  0.966  0.973
Table 5. Coverage probabilities in the simulations for n = 200.

θ \ ω   0.230  0.253  0.277  0.300  0.323  0.347  0.370  0.393  0.417
0.150   0.961  0.966  0.959  0.959  0.955  0.966  0.957  0.961  0.950
0.175   0.967  0.963  0.962  0.965  0.966  0.956  0.954  0.959  0.957
0.200   0.963  0.960  0.958  0.963  0.963  0.962  0.962  0.952  0.957
0.225   0.964  0.967  0.958  0.965  0.961  0.962  0.956  0.958  0.959
0.250   0.965  0.968  0.959  0.955  0.959  0.961  0.955  0.963  0.955
0.275   0.959  0.964  0.960  0.964  0.959  0.955  0.957  0.954  0.968
0.300   0.967  0.958  0.964  0.959  0.959  0.956  0.960  0.960  0.958
0.325   0.963  0.963  0.960  0.962  0.960  0.961  0.958  0.961  0.952
0.350   0.962  0.966  0.964  0.957  0.967  0.956  0.959  0.955  0.955
Conclusion
This study considered the well-known non-normal PCI, the WSD index, for assessing lognormal process capability. Although the WSD index adjusts Cpk in accordance with the degree of skewness, it still results in serious bias in the estimation of process yield. Consequently, even when the WSD value is calculated, the practitioner cannot correctly judge whether the capability meets the requirement because the index significantly underestimates the process performance.
Hence, this study used curve fitting to modify the WSD index and obtain a more suitable evaluation of lognormal process capability. The results show that the modified index significantly reduces the bias in assessing the process yield; in other words, the modified index reflects the actual non-conformities more accurately than the original WSD index.
Moreover, an estimator of the modified index was also derived in this study using curve fitting. Simulations showed that the relative bias of this estimator is slight; hence, using it to estimate the modified index is appropriate.
This study provided a suitable estimation method for measuring lognormal process capability, which can help business managers make reliable decisions in quality assurance activities. However, large-NCPPM and extremely small-NCPPM conditions were both excluded; hence, our proposed method might not be applicable in those cases. In the future, we can extend our approach to modify the WSD index for assessing gamma and Weibull process capabilities, since some processes follow these kinds of distributions.
Footnotes
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
ORCID iD
Mou-Yuan Liao
References
1. Geng W, Hu Y. Selection of outsourcing supplier based on AHP. Adv Mat Res 2012; 490–495: 2921–2925.
2. Li S, Kang M, Haney MH. The effect of supplier development on outsourcing performance: the mediating roles of opportunism and flexibility. Prod Plan Control 2017; 28: 599–609.
3. Wu CW, Liao MY, Yang TT. Efficient methods for comparing two process yields—strategies on supplier selection. Int J Prod Res 2013; 51(5): 1587–1602.
4. Yu KT, Shen CY, Huang DK. Integrated model for supplier evaluation: taking the machine tool industry as an example. Proc IMechE, Part B: J Engineering Manufacture 2009; 223(11): 1475–1482.
5. Chan FTS, Chan HK, Ip RWL, et al. A decision support system for supplier selection in the airline industry. Proc IMechE, Part B: J Engineering Manufacture 2007; 221(4): 741–758.
6. Dickson GW. An analysis of vendor selection systems and decisions. J Supply Chain Manag 1966; 2: 5–17.
8. Pearson JN, Ellram LM. Supplier selection and evaluation in small versus large electronics firms. J Small Bus Manage 1995; 33(4): 53–65.
9. Olhager J, Selldin E. Supply chain management survey of Swedish manufacturing firms. Int J Prod Econ 2004; 89: 353–361.
10. Roylance D. Purchasing performance: measuring, marketing, and selling the purchasing function. Aldershot: Gower Publishing Ltd, 2006.
11. Gordon SR. Supplier evaluation and performance excellence: a guide to meaningful metrics and successful results. Plantation, FL: J. Ross Publishing, 2008.
12. Kotz S, Lovelace CR. Process capability indices in theory and practice. London: Arnold, 1998.
13. Kotz S, Johnson NL. Process capability indices—a review, 1992–2000. J Qual Technol 2002; 34(1): 2–53.
14. Spiring F, Leung B, Cheng S, et al. A bibliography of process capability papers. Qual Reliab Eng Int 2003; 19(5): 445–460.
15. Wu CW, Pearn WL, Kotz S. An overview of theory and practice on process capability indices for quality assurance. Int J Prod Econ 2009; 117: 338–359.
16. Vännman K. Safety regions in process capability plots. Qual Technol Quant M 2006; 3(2): 227–246.
17. Zhou X, Jiang P, Wang Y. Sensitivity analysis-based dynamic process capability evaluation for small batch production runs. Proc IMechE, Part B: J Engineering Manufacture 2016; 230(10): 1855–1869.
18. Pearn WL, Cheng YC. Estimating process yield based on Spk for multiple samples. Int J Prod Res 2007; 45(1): 49–64.
19. Pearn WL, Wu CH. Supplier selection for multiple-characteristics processes with one-sided specifications. Qual Technol Quant M 2013; 10(1): 133–139.
20. Chang TC, Wang KJ, Chen KS. Capability performance analysis for processes with multiple characteristics using accuracy and precision. Proc IMechE, Part B: J Engineering Manufacture 2013; 228(5): 766–776.
21. Polansky AM, Maple A. Using Bayesian models to assess the capability of a manufacturing process in the presence of unobserved assignable causes. Qual Technol Quant M 2016; 13(2): 139–164.
22. Ouyang LY, Hsu CH, Yang CM. A new process capability analysis chart approach on the chip resistor quality management. Proc IMechE, Part B: J Engineering Manufacture 2013; 227(7): 1075–1082.
23. Kane VE. Process capability indices. J Qual Technol 1986; 18(1): 41–52.
24. Liao MY, Pearn WL, Liu YL. Assessing the actual gamma process quality: a curve-fitting approach for modifying the non-normal flexible index. Int J Prod Res 2015; 53(15): 4720–4734.
25. Boyles RA. The Taguchi capability index. J Qual Technol 1991; 23: 17–26.
26. Chang YS, Choi IS, Bai DS. Process capability indices for skewed populations. Qual Reliab Eng Int 2002; 18(5): 383–393.
27. Wu HH, Swain JJ. A Monte Carlo comparison of capability indices when processes are non-normally distributed. Qual Reliab Eng Int 2001; 17(3): 219–231.
28. Johnson NL, Kotz S, Pearn WL. Flexible process capability indices. Pak J Stat 1994; 10(1A): 23–31.
29. Clements JA. Process capability calculations for non-normal distributions. Qual Prog 1989; 22(9): 95–100.
30. Pearn WL, Kotz S. Encyclopedia and handbook of process capability indices: a comprehensive exposition of quality control measures. Singapore: World Scientific Publishing Co Pte Ltd, 2006.
31. Limpert E, Stahel WA, Abbt M. Log-normal distributions across the sciences: keys and clues. BioScience 2001; 51(5): 341–352.
32. Huang WH, Wang H, Yeh AB. Control charts for the lognormal mean. Qual Reliab Eng Int 2016; 32: 1407–1416.
33. Pearn WL, Chen KS. Making decisions in assessing process capability index Cpk. Qual Reliab Eng Int 1999; 15: 321–326.
34. Efron B. Bootstrap methods: another look at the jackknife. Ann Stat 1979; 7: 1–26.
35. Efron B, Tibshirani RJ. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat Sci 1986; 1: 54–77.
36. Pearn WL, Liao MY, Wu CW, et al. Two tests for supplier selection based on process yield. J Test Eval 2011; 39(2): 126–133.
37. Liao MY, Wu CW, Wu JW. Fuzzy inference to supplier evaluation and selection based on quality index: a flexible approach. Neural Comput Appl 2013; 23(1): 117–127.
38. Dodge Y. The Oxford dictionary of statistical terms. Oxford: Oxford University Press, 2003.
39. Liao MY. Assessing process incapability when collecting data from multiple batches. Int J Prod Res 2015; 53(7): 2041–2054.