Abstract
A dynamic adaptive particle swarm optimization and genetic algorithm is presented to solve constrained engineering optimization problems. A dynamic adaptive inertia factor is introduced into the basic particle swarm optimization algorithm to balance the convergence rate and global search ability by adaptively adjusting the search velocity during the search process. Genetic algorithm–related operators, including a selection operator with time-varying selection probability, a crossover operator, and an n-point random mutation operator, are incorporated into the particle swarm optimization algorithm to further exploit the optimal solutions it generates. These operators diversify the swarm and prevent premature convergence. Tests on nine constrained mechanical engineering design optimization problems with different kinds of objective functions, constraints, and design variables demonstrate the superiority of the dynamic adaptive particle swarm optimization and genetic algorithm over several other meta-heuristic algorithms in terms of solution quality, robustness, and convergence rate in most cases.
Keywords
Introduction
A great number of optimization algorithms have been proposed to solve different engineering design optimization problems, which are usually nonlinearly constrained. These optimization algorithms can be roughly divided into two categories: stochastic algorithms and deterministic ones. Traditional deterministic optimization methods, such as the steepest descent method, quasi-Newton method, and interior-reflective Newton method, are usually gradient-based and require the objective functions to be differentiable. These methods are inefficient and inaccurate for complex optimization problems with strong nonlinearity and high dimensions, especially when the objective functions and constraints are discontinuous and non-smooth. 1 Numerous stochastic optimization algorithms, such as the particle swarm optimization (PSO) algorithm, 2 genetic algorithm (GA),3–5 firefly algorithm, 6 ant colony optimization, 7 artificial bee colony (ABC), 8 mine blast algorithm (MBA), 9 simulated annealing (SA) algorithm, 10 and biogeography-based optimization (BBO) algorithm, 11 have been proposed to overcome these drawbacks. These stochastic optimization algorithms are usually meta-heuristic and inspired by physical and natural phenomena.
Among all these stochastic optimization algorithms, the PSO algorithm is widely applied to solve different engineering optimization problems, as it is computationally efficient, easy to implement, and reliable in searching for global optima.12–16 The PSO algorithm, first proposed by Kennedy and Eberhart, 2 is based on social sharing of information between individuals in a group and originated from mimicking the schooling behavior of fish and the flocking behavior of birds. The PSO algorithm is made up of a population of particles that move randomly within the parameter space. The position of each individual particle in the parameter space denotes a candidate solution of the design optimization problem. By changing the search velocities and positions of the particles, the optimal solution is found. The optimum-searching ability of the PSO algorithm mainly relies on the mutual interaction of particles (social learning) and the influence of each individual particle (cognitive learning). Particles move toward the current global best position of the swarm in each iteration. A particle can escape from a local optimum with the help of neighboring particles, but if most of its neighboring particles are confined to a local extreme point, it is attracted into the trap of the local optimum; as a result, premature convergence of the algorithm and the stagnation phenomenon 17 occur. To overcome these drawbacks of the basic PSO algorithm, different improvements have been proposed. A descending dynamic inertia factor or accelerating factor is widely adopted to balance the convergence rate and space-searching ability of the PSO algorithm during the search process.16,18,19 Eberhart and Shi 20 applied a random inertia weight factor to deal with dynamic systems. Clerc 21 presented a constriction factor K to control the convergence velocity.
Apart from time-varying inertia weights (TVIW), time-varying accelerating coefficients (TVAC) have also been proposed to control the convergence rate and solution quality.22,23 A co-evolutionary particle swarm optimization (CPSO) was presented by He and Wang 24 to solve constrained engineering optimization problems. They used a multiple-swarm technique to evolve decision solutions and adapt penalty factors. Later, Krohling and Coelho 25 improved the CPSO by dynamically adjusting the acceleration coefficients, which follow a Gaussian probability distribution. Worasucheep 26 presented a constrained PSO algorithm with a stagnation detection and dispersion mechanism to tackle real-world nonlinear and constrained engineering optimization problems. Yang and colleagues27,28 proposed an accelerated particle swarm optimization (APSO) algorithm based on the basic PSO algorithm, in which the velocity vector is removed and particle best positions are replaced by randomness. This algorithm greatly improves calculation efficiency and ease of implementation. However, it is easily trapped in premature convergence, particularly for problems with high nonlinearity, owing to its deficiency of diversity. 1 This disadvantage was addressed by Guedria 1 by incorporating memories of individual particles into APSO, forming a new algorithm called improved adaptive particle swarm optimization (IAPSO).
To improve swarm diversity and increase the convergence rate, many hybrid optimization algorithms that incorporate additional operators or other algorithms into PSO have been proposed.29–34 Novitasari et al. 29 proposed a hybrid algorithm that combines SA with the PSO algorithm to deal with constrained optimization problems. He and Wang 30 proposed a similar hybrid algorithm to optimize a support vector machine. Wang and Yin 31 introduced a ranking selection scheme into the basic PSO to automatically control the search performance of the swarm, resulting in a new algorithm called ranking selection–based particle swarm optimization (RSPSO). The crossover or mutation operators used in GAs were widely adopted by researchers and combined with PSO to generate new algorithms, such as the modified particle swarm optimization (MPSO), 32 quantum-behaved PSO using a mutation operator with Gaussian distribution (G-QPSO), 33 straightforward particle swarm optimization (SPSO) with a logistic chaotic mutation operator, 34 and the self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC). 22 These operators increase swarm diversity and prevent premature convergence and stagnation of the PSO algorithms. The hybrid optimization algorithms discussed above have been used to solve different specific engineering optimization problems.
In this work, a dynamic adaptive particle swarm optimization and genetic algorithm (DAPSO-GA), previously proposed by us in Zhu et al., 35 is used to solve constrained engineering design optimization problems with different kinds of design variables. A dynamic adaptive inertia factor is used in the PSO algorithm to adjust its convergence rate and control the balance between global and local optima exploration. GA-related operators, including a selection operator with time-varying selection probability, a crossover operator, and an n-point random mutation operator, are incorporated into the PSO to further exploit the optimal solutions generated by the PSO-related algorithm. These operators are used to diversify the swarm and prevent premature convergence. The remainder of this work is organized as follows. The DAPSO-GA for both continuous and discrete optimization problems with constraints is introduced in section “Introduction of the DAPSO-GA.” In section “Constrained engineering optimization problems,” four benchmark constrained engineering optimization problems with continuous design variables and five with discrete or mixed design variables are used to evaluate the performance of the DAPSO-GA on real-world engineering optimization problems. Conclusions are drawn in section “Conclusion.”
Introduction of the DAPSO-GA
The DAPSO-GA is a hybrid algorithm that combines the GA and the PSO algorithm. Specifically, the GA-related operators, including selection, crossover, and n-point random mutation operators, are carefully incorporated into the PSO algorithm. These GA-related operators are used to diversify the swarm and further explore possible optima based on the feasible solutions provided by the PSO algorithm.
PSO-related algorithm
Basic PSO algorithm
The basic PSO algorithm is made up of a population of particles that are randomly spread within the parameter space. The position of each individual particle in the parameter space denotes a candidate solution of the design optimization problem. Each particle has a velocity and moves in the parameter space. The position and velocity of particle i are adjusted in each iteration as

v_i^{t+1} = w v_i^t + c_1 r_1 (p_i^t − x_i^t) + c_2 r_2 (g^t − x_i^t)  (1)

x_i^{t+1} = x_i^t + v_i^{t+1}  (2)

where x_i^t and v_i^t are the position and velocity of particle i at iteration t, w is the inertia weight, c_1 and c_2 are the cognitive and social acceleration coefficients, r_1 and r_2 are uniform random numbers in [0, 1], p_i^t is the personal best position of particle i, and g^t is the global best position of the swarm.
The procedure of the basic PSO algorithm begins with initialization of a population of particles with random positions and velocities. The positions and velocities of each particle are then updated by equations (1) and (2). After that, the corresponding fitness of each particle is evaluated and ranked, and the personal best and global best positions are updated accordingly; this process is repeated until a termination criterion is met.
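As a reference point for the modifications that follow, the basic PSO loop can be sketched in Python; the objective function, swarm size, and coefficient values below are illustrative assumptions, not the settings used in this work.

```python
import random

def basic_pso(f, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over box bounds with the basic PSO of equations (1)-(2)."""
    dim = len(bounds)
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equation (1): inertia + cognitive + social terms
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Equation (2): position update, clamped to the variable bounds
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f
```

For example, `basic_pso(lambda p: sum(t * t for t in p), [(-5.0, 5.0)] * 2)` minimizes a two-dimensional sphere function.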
PSO-related algorithm in the DAPSO-GA
A dynamic adaptive inertia factor
where
in which
with
Each particle position
in which D is the particle dimension and
in which
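A fitness-based adaptive inertia weight of the kind described in this section can be sketched as follows. This is an illustrative scheme bounded to the [0.4, 0.7] range observed later in Figure 6, not necessarily the exact formula used in the DAPSO-GA: a particle whose fitness is far from the global best gets a weight near the maximum (exploration), and one close to it gets a weight near the minimum (exploitation).

```python
def adaptive_inertia(f_i, f_gbest, f_avg, w_min=0.4, w_max=0.7):
    """Illustrative fitness-based adaptive inertia weight (a common scheme,
    not necessarily the paper's exact formula).  The gap between a particle's
    fitness f_i and the global best fitness f_gbest is normalized by the
    swarm's average gap and clipped to [0, 1]."""
    if f_avg == f_gbest:          # degenerate swarm: all fitnesses equal
        return w_min
    ratio = min(max((f_i - f_gbest) / (f_avg - f_gbest), 0.0), 1.0)
    return w_min + (w_max - w_min) * ratio
```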
GA-related algorithm
In the DAPSO-GA, GA-related operators, that is, the selection operator with time-varying selection probability, crossover operator, and n-point random mutation operator are introduced to further exploit the optimal solutions generated by the adaptive PSO algorithm. GA uses a population which consists of individuals or chromosomes and each individual stands for a potential solution. In the GA-related algorithm, each particle in the swarm is regarded as an individual or chromosome and the swarm constitutes a population. Each individual is represented by applying decimal coding (the real value).
Adaptive dynamic selection operator
A particle that meets the GA-selection criterion below is selected to update its position via the following crossover and mutation operators in iteration
where
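A time-varying selection probability of the kind described above can be sketched as follows; the linear schedule and the `p_start`/`p_end` values are illustrative assumptions, not the paper's exact selection criterion.

```python
import random

def ga_selected(t, t_max, p_start=0.9, p_end=0.1):
    """Illustrative time-varying GA-selection check: the selection
    probability varies linearly from p_start at iteration 0 to p_end at
    iteration t_max, so more particles undergo crossover/mutation early
    (diversification) and fewer late (preserving convergence)."""
    p_t = p_start + (p_end - p_start) * t / t_max
    return random.random() < p_t
```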
Crossover and mutation operator
When the GA-selection criterion is met, the following two GA-related operators are used to update the particle position: randomly generate a number
Crossover operator
A random crossover operator is adopted here to generate a new individual (particle). The flowchart of the crossover operator is illustrated in Figure 1. First, two particles should be selected as parents (pa and ma) for breeding. Suppose the ith particle has already been selected as pa according to the GA-selection criterion; then another particle j is randomly selected as ma from the swarm, where

Flowchart of the crossover operator of the GA-related algorithm in the DAPSO-GA.
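The crossover step can be illustrated with a simple random arithmetic crossover; this is a hedged placeholder for the procedure of Figure 1, whose exact steps are not reproduced here.

```python
import random

def random_crossover(pa, ma):
    """Illustrative random crossover: each child component is a random
    convex combination of the corresponding parent components, so the
    child stays inside the box spanned by the two parents."""
    child = []
    for a, b in zip(pa, ma):
        r = random.random()
        child.append(r * a + (1.0 - r) * b)
    return child
```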
Mutation operator
An n-point random mutation operator is used, where n is the mutation dimension (i.e. the number of components or genes of the selected particle or chromosome for mutation) which is a random integer in

Procedures of the n-point mutation operator.
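The n-point random mutation described above can be sketched as follows; the choice of a uniform redraw within each variable's bounds is an assumption for illustration.

```python
import random

def n_point_mutation(x, bounds):
    """Illustrative n-point random mutation: pick a random number n of
    components (genes) of the particle and replace each with a fresh
    uniform random value inside that variable's bounds."""
    y = x[:]
    n = random.randint(1, len(x))            # mutation dimension n
    for d in random.sample(range(len(x)), n):
        lo, hi = bounds[d]
        y[d] = random.uniform(lo, hi)
    return y
```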
Implementation procedure of the DAPSO-GA algorithm
Flowchart of the DAPSO-GA is shown in Figure 3 and it is briefly described as follows:
Step 1: Set initial values of the optimization parameters including the population size M, maximum number of generations (iterations) S, maximum and minimum inertia factors
Step 2: Initialize the swarm: randomly generate a swarm with a size of M and the initial position of each particle is given by
Step 3: Evaluate the fitness value of each initially generated particle and rank their positions. The initial best particle position
Step 4: Update the current position
Step 5: Evaluate the current fitness value of each particle, and update the best particle position
Step 6: Generate new particles (offspring) according to the GA-related algorithm to diversify the swarm. If the GA-selection criterion in equation (9) is met, the crossover operator and n-point random mutation operator are applied to update the position of a selected particle to generate a new particle
Step 7: Evaluate the fitness value of the new particle
Step 8: Repeat steps 4–7 until the termination criterion, which is a predefined number of iterations, is met and then output the optimal results.
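Steps 1–8 can be combined into a compact sketch; the adaptive-inertia and selection schedules below are simplified placeholders rather than the paper's exact formulas, and the offspring-acceptance rule is an illustrative assumption.

```python
import random

def dapso_ga(f, bounds, m=30, s=200, w_min=0.4, w_max=0.7,
             c1=2.0, c2=2.0, p_sel=0.3):
    """Sketch of the DAPSO-GA loop (steps 1-8) for minimizing f."""
    dim = len(bounds)
    # Steps 1-2: parameters and a random initial swarm of size m
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(m)]
    v = [[0.0] * dim for _ in range(m)]
    # Step 3: initial fitness evaluation and best positions
    pbest, pbest_f = [xi[:] for xi in x], [f(xi) for xi in x]
    g = min(range(m), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(s):
        for i in range(m):
            # Step 4: PSO update with a fitness-based adaptive inertia weight
            gap = sum(pbest_f) / m - gbest_f
            ratio = (pbest_f[i] - gbest_f) / gap if gap > 0 else 0.0
            w = w_min + (w_max - w_min) * min(ratio, 1.0)
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            # Step 6: GA operators on selected particles (crossover + mutation)
            if random.random() < p_sel:
                j = random.randrange(m)              # mate chosen at random
                r = random.random()
                cand = [r * a + (1 - r) * b for a, b in zip(x[i], x[j])]
                d = random.randrange(dim)            # 1-point random mutation
                cand[d] = random.uniform(*bounds[d])
                # Step 7: keep the offspring only if it improves the particle
                if f(cand) < f(x[i]):
                    x[i] = cand
            # Steps 5/7: update personal and global best positions
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    # Step 8: terminate after s iterations
    return gbest, gbest_f
```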

Flowchart of the DAPSO-GA.
Strategies of the DAPSO-GA for discrete optimization problems
The DAPSO-GA discussed above is suitable for continuous optimization problems, but cannot handle optimization problems with discrete variables. For discrete optimization problems, the DAPSO-GA can be modified using the rounding-off approach. In this approach, both the continuous and discrete variables are treated as continuous variables during the optimization process. Only at the end of the optimization procedure are the discrete variables rounded off to evaluate the fitness value of each particle, as shown below
As seen in equation (12), the values of the discrete variables are not actually changed and remain continuous until the end of each generation of iterations. For convenience, the DAPSO-GA using the rounding-off approach is called the discrete DAPSO-GA and is used to solve the discrete optimization problems below.
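The rounding-off approach can be sketched as follows; `round_discrete` and `discrete_fitness` are illustrative helper names, not functions defined in this work.

```python
def round_discrete(x, discrete_idx):
    """Rounding-off approach (in the spirit of equation (12)): all variables
    are evolved as continuous values; a rounded copy is produced only for
    the components whose indices are listed in discrete_idx."""
    return [round(v) if d in discrete_idx else v for d, v in enumerate(x)]

def discrete_fitness(f, x, discrete_idx):
    # The particle itself keeps its continuous values; the rounded copy is
    # used only to evaluate the fitness.
    return f(round_discrete(x, discrete_idx))
```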
Constraints handling
For constrained optimization problems, a feasible solution should satisfy all constraints in the form of equalities and/or inequalities. Two strategies are used in this work to handle the constraints on design variables and the problem-specific constraints. In the DAPSO-GA, each particle position is reset to the maximum or minimum boundary value once the limits on the design variables are violated. Global optima usually occur on or near the boundary of the solution (design) space for the majority of design optimization problems. 9 Hence, this strategy can increase the probability of finding global optimal solutions. Penalty function strategies such as the penalty factor method1,35–38 and the concept of the parameter-free penalty function39,40 are widely used to solve different constrained optimization problems. The penalty factor method is adopted in this work to handle the problem-specific constraints. The constrained optimization problem using the penalty function strategy can be described as below
where
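The penalty factor method can be illustrated with a simple static-penalty form; the quadratic penalty and the value of `sigma` are common choices in the literature, not necessarily the exact scheme adopted in this work.

```python
def penalized(f, g_list, x, sigma=1e6):
    """Static penalty-factor formulation (a common form): each violated
    inequality constraint g(x) <= 0 adds sigma * violation**2 to the
    objective, so infeasible points are steered back toward the feasible
    region while feasible points are evaluated by f alone."""
    penalty = sum(max(0.0, g(x)) ** 2 for g in g_list)
    return f(x) + sigma * penalty
```

A feasible point incurs no penalty, so the penalized and original objectives agree there.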
Constrained engineering optimization problems
In this section, nine well-known constrained benchmark mechanical engineering optimization problems, which have different objective functions, design variables, and constraints in nature, are adopted to test the performance of the proposed DAPSO-GA in terms of solution quality and stability as well as convergence rate. These nine constrained engineering optimization problems are divided into continuous and discrete optimization problems according to the categories of their variables, and the rounding-off strategy discussed in section “Strategies of the DAPSO-GA for discrete optimization problems” is used in the DAPSO-GA to deal with the discrete optimization problems. Statistical results and best solutions of all algorithms for these engineering optimization problems are obtained over 30 independent runs.
Constrained engineering optimization problems with continuous variables
Tension/compression spring design problem
Figure 4 shows a schematic of a tension/compression spring. 41
The design aim of the tension/compression problem (i.e. the objective function

Schematic of the tension/compression spring.
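The standard literature formulation of this benchmark (wire diameter d, mean coil diameter D, number of active coils N) can be stated as a short evaluation routine; the constants below are the commonly published ones and are assumed to match the formulation used in this work.

```python
def spring_weight(x):
    """Tension/compression spring benchmark (standard literature form):
    minimize the spring weight f = (N + 2) * D * d**2."""
    d, D, N = x
    return (N + 2.0) * D * d ** 2

def spring_constraints(x):
    """Inequality constraints g_i(x) <= 0: minimum deflection, shear
    stress, surge frequency, and outer-diameter limits."""
    d, D, N = x
    return [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
            + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]
```

At a widely reported near-optimal design (0.051689, 0.356718, 11.288966), the weight is about 0.012665 and all four constraints are (near-)satisfied.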

Convergence history of GA, standard PSO, and the proposed DAPSO-GA for the tension/compression spring design problem.
Comparison of optimal solutions obtained from different optimization algorithms for tension/compression spring design problem.
IAPSO: improved adaptive particle swarm optimization; APSO: accelerated particle swarm optimization; DV: design variable; G-QPSO: quantum-behaved PSO using mutation operator with Gaussian distribution; MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.
Comparison of statistical results obtained from different optimization algorithms for tension/compression spring design problem.
SD: standard deviation; IAPSO: improved adaptive particle swarm optimization; APSO: accelerated particle swarm optimization; MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm; PSO: particle swarm optimization; CPSO: co-evolutionary particle swarm optimization; NFE: number of function evaluation; G-QPSO: quantum-behaved PSO using mutation operator with Gaussian distribution; HPSO: hybrid particle optimization algorithm; PSO-DE: Particle swarm optimization with differential optimization.
Note: The boldfaced data mean optimal results provided by the DAPSO-GA algorithm.
Figure 6 shows the inertia weight versus the number of iterations of the DAPSO-GA on the tension/compression spring design problem. As seen in Figure 6, the inertia weighting factor varies between 0.4 and 0.7. A large inertia weighting factor is used when the fitness value of a particle is far from the global best fitness value; otherwise, a small one is used. The dynamic inertia weighting factor adaptively adjusts the search velocity so that exploitation and exploration are well balanced.

Inertia weight versus number of iterations.
Symmetric three-bar truss design problem
Figure 7 presents the schematic diagram of a symmetric three-bar truss structure. The symmetric three-bar truss structure is made up of steel and is subjected to two constant loadings

Schematic diagram of the three-bar truss.
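The standard literature formulation of the three-bar truss benchmark (cross-sectional areas A1 = A3 and A2, bar length l = 100 cm, load P = 2 kN/cm², allowable stress σ = 2 kN/cm²) can be sketched as follows; these constants are the commonly published ones and are assumed to match the formulation used in this work.

```python
import math

def truss_weight(x):
    """Three-bar truss benchmark (standard literature form, l = 100 cm):
    minimize the structural weight (2*sqrt(2)*A1 + A2) * l."""
    a1, a2 = x
    return (2.0 * math.sqrt(2.0) * a1 + a2) * 100.0

def truss_constraints(x, P=2.0, sigma=2.0):
    """Stress constraints g_i(x) <= 0 for the two outer bars and the
    middle bar under the load P."""
    a1, a2 = x
    s2 = math.sqrt(2.0)
    den = s2 * a1 ** 2 + 2.0 * a1 * a2
    return [
        (s2 * a1 + a2) / den * P - sigma,
        a2 / den * P - sigma,
        1.0 / (s2 * a2 + a1) * P - sigma,
    ]
```

At the widely reported optimum near (0.788675, 0.408248), the weight is about 263.8958.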
Comparison of optimal solutions obtained from different optimization algorithms for the three-bar truss design problem.
DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm; DV: design variable; PSO-TVAC: Particle swarm optimization with time-varying accelerating coefficients.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.
Comparison of statistical results obtained from different optimization algorithms for the three-bar truss design problem.
SD: standard deviation; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm; NFE: number of function evaluation.
Note: The boldfaced data mean optimal results provided by the DAPSO-GA algorithm.

Convergence history of GA, standard PSO, and the proposed DAPSO-GA for the three-bar truss design problem.
Welded beam design
The welded beam design problem is a famous constrained optimization problem which is widely used as a benchmark to evaluate the performance of newly proposed optimization algorithms. 9
Figure 9 shows the schematic diagram of a welded beam structure which consists of a beam and weld. The optimization target is the minimum fabrication cost of the beam subject to constraints on bending and shear stress (

Schematic diagram of the welded beam.
The optimization algorithms previously used to solve this design optimization problem include GA3, 56 GA4, 35 APSO, IAPSO, MBA, LCA, WCA, DE, SC, NM-PSO, PSO-DE, HPSO, 29 CPSO, 24 CAEP, GA1, hybrid PSO-GA (HPSO), 39 ABC2, 40 and GA2. Table 5 presents the comparison of optimal solutions provided by the previously reported algorithms and the proposed DAPSO-GA. From Table 5, a new optimal solution, better than those provided by previously proposed algorithms, is found by the proposed DAPSO-GA with an objective function value of 1.6600473. Note that the optimal solution provided by CAEP is infeasible as the constraints
Comparison of optimal solutions obtained from different optimization algorithms for the welded beam design optimization problem.
IAPSO: improved adaptive particle swarm optimization; APSO: accelerated particle swarm optimization; MBA: mine blast algorithm; CPSO: co-evolutionary particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm; DV: design variables.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.
Comparison of statistical results obtained from different optimization algorithms for the welded beam design optimization problem.
SD: standard deviation; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; MBA: mine blast algorithm; CPSO: co-evolutionary particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data mean optimal results provided by the DAPSO-GA algorithm.

Convergence history of GA, standard PSO, and the proposed DAPSO-GA for the welded beam design problem.
Belleville disc spring design problem
As shown in Figure 11, the Belleville disc spring is made up of several conical discs with uniform rectangular cross-sections. The design objective of the Belleville disc spring is to minimize its total weight subject to geometric constraints concerning the outer and inner diameters, slope, and height-to-maximum-height ratio, and kinematic and strength constraints concerning the compression deformation, stress, and height-to-deformation ratio. There are four design variables for this design problem, including the spring external and internal diameters (

Schematic diagram of the Belleville disc spring.
The optimization algorithms previously used to solve this design optimization problem include MBA, ABC, teaching-learning-based optimization (TLBO), 57 treating constraints as objectives (TCO), 58 Siddall, 59 Gene AS1, 60 and Gene AS2. 60 Table 7 presents the comparison of optimal solutions provided by the previously reported algorithms and the proposed DAPSO-GA. Note that the optimal solutions provided by Gene AS1 and Siddall are infeasible, as the first and second constraints are violated, respectively. Hence, they are not used for comparison. From Table 7, the proposed algorithm and MBA provide better solutions than the other optimization algorithms, with an objective function value of 1.9796747. Table 8 presents the comparison of statistical results provided by the previously reported algorithms and the proposed DAPSO-GA for the Belleville disc spring design optimization problem in terms of the worst, mean, and best solutions as well as the SD values and NFEs. As seen from Table 8, the proposed DAPSO-GA, ABC, TLBO, and MBA provide almost the same best solutions, but the proposed algorithm requires the fewest NFEs (9000), while ABC and TLBO require the most (150,000). In terms of SD, MBA has better robustness in detecting the best solution than the other optimization algorithms. Figure 12 shows the convergence history of GA, standard PSO, and the proposed DAPSO-GA for the Belleville disc design problem. It is seen that the standard PSO and DAPSO-GA converge faster than GA, while the DAPSO-GA has better global optimum searching ability.
Comparison of optimal solutions obtained from different optimization algorithms for the Belleville disc spring design optimization problem.
MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.
Comparison of statistical results obtained from different optimization algorithms for the Belleville disc spring design optimization problem.
SD: standard deviation; ABC: artificial bee colony; MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data mean optimal results provided by the DAPSO-GA algorithm.

Convergence history of GA, standard PSO, and the proposed DAPSO-GA for the Belleville disc spring design problem.
Constrained engineering optimization problems with discrete variables
Speed reducer design problem
Figure 13 shows a schematic diagram of a speed reducer. The design optimization scheme of the speed reducer is to minimize its weight subject to strength constraints concerning gear teeth bending stress and surface stress, and stresses in and transverse deflections of the shafts. 1
The design variables of this design problem include the face width (b), teeth module (m), number of teeth in the pinion (z), length of the first and second shafts between their bearings (

Schematic diagram of speed reducer.

Convergence history of the proposed DAPSO-GA for the speed reducer design problem.
This design optimization problem was previously solved by researchers using different optimization algorithms such as DEDS, DELC, 45 HEAA, MDE, 61 PSO-DE, 54 WCA, MBA, LCA, APSO, IAPSO, TLBO, (μ + λ)-ES, SC, and ABC. Table 9 presents the comparison of optimal solutions provided by the previously reported algorithms and the proposed DAPSO-GA. As seen from Table 9, the proposed algorithm and most of the reported algorithms including DEDS, DELC, HEAA, WCA, LCA, and IAPSO provide similar best solutions (
Comparison of optimal solutions obtained from different optimization algorithms for the speed reducer design optimization problem.
MBA: mine blast algorithm; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm; MDE: modified differential evolution.
Comparison of statistical results obtained from different optimization algorithms for the speed reducer design optimization problem.
SD: standard deviation; ABC: artificial bee colony; MBA: mine blast algorithm; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data mean optimal results provided by the DAPSO-GA algorithm.
Gear train design problem
Figure 15 shows a schematic diagram of a gear train which consists of four gears. The scheme of the gear train design optimization problem is to minimize the error between the obtained gear ratio and the required gear ratio of 1/6.931 62 subject to constraints only on the allowable ranges of design variables (side constraints), which are the numbers of teeth of the four gears. It is a discrete optimization problem as all design variables are integers. The numbers of teeth of gears A, B, D, and F (i.e. the design variables) in Figure 15 are respectively denoted by

Schematic diagram of gear train.
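The gear train objective is simple enough to state directly; the formulation below is the commonly published one (required ratio 1/6.931, integer tooth counts in [12, 60], hence 49⁴ candidate combinations) and is assumed to match the one used in this work.

```python
def gear_train_error(z):
    """Gear train benchmark: squared error between the achieved gear ratio
    z_B * z_D / (z_A * z_F) and the required ratio 1/6.931, with each tooth
    count an integer in [12, 60]."""
    za, zb, zd, zf = z
    return (1.0 / 6.931 - (zb * zd) / (za * zf)) ** 2
```

A widely reported global optimum uses tooth counts (49, 19, 16, 43) in this ordering, with an error of about 2.7e-12.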
This design problem was solved before by many researchers using different optimization algorithms such as Gene AS1, Gene AS2, SC, ABC, MBA, the augmented Lagrangian (AL) method, 62 the branch and bound (BB) method, 63 APSO, IAPSO, and UPSO. Table 11 presents the comparison of optimal solutions provided by the previously reported algorithms and the proposed DAPSO-GA. According to the research of H Barbosa (September 1996, personal communication, San Francisco, CA), who computed all possible gear teeth combinations (49^4, or about 5.76 million), it can be validated that the optimal solutions provided by Gene AS1, ABC, and the proposed DAPSO-GA are globally best solutions, whereas SC, MBA, APSO, and IAPSO find a different best solution, as shown in Table 11. Statistical results provided by the previously reported algorithms and the proposed DAPSO-GA for this design optimization problem are compared in terms of the worst, mean, and best solutions as well as the SD values and NFEs, as shown in Table 12. It is demonstrated that the proposed DAPSO-GA, MBA, and IAPSO are superior to the other algorithms in terms of both SD and NFEs. The mean, best, and worst solutions provided by these three algorithms are at the same level, and they stably converge to the best solution with similar computing effort and SD values. Figure 16 shows the convergence history of the proposed DAPSO-GA for the gear train design problem.
Comparison of optimal solutions obtained from different optimization algorithms for the gear train design optimization problem.
ABC: artificial bee colony; MBA: mine blast algorithm; AL: augmented Lagrangian; BB: branch and bound; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.
Comparison of statistical results obtained from different optimization algorithms for the gear train design optimization problem.
SD: standard deviation; MBA: mine blast algorithm; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.

Convergence history of the proposed DAPSO-GA for the gear train design problem.
Multiple disc clutch brake design problem
Figure 17 shows a schematic diagram of a multiple disc clutch brake. The design problem of the multiple disc clutch brake is a minimization problem which aims to minimize its total mass subject to geometrical constraints and constraints concerning shear stress, temperature, relative speed of the slip–stick, and stopping time. 64
The design variables for this design problem are inner and outer radius (

Schematic diagram of the multiple disc clutch brake.
This design optimization problem was previously studied by many researchers using different optimization algorithms such as the non-dominated sorting genetic algorithm (NSGA-II), 65 TLBO, WCA, ABC, APSO, and IAPSO. Table 13 presents the comparison of optimal solutions provided by the earlier reported algorithms and the proposed DAPSO-GA. It is shown that the DAPSO-GA, IAPSO, WCA, and TLBO have the same objective function value of 0.31365661, although the values of the variable
Comparison of optimal solutions obtained from different optimization algorithms for the multiple disc clutch brake design optimization problem.
APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data in each table mean the best one among all the results provided by different algorithms.
Comparison of statistical results obtained from different optimization algorithms for the multiple disc clutch brake design optimization problem.
SD: standard deviation; ABC: artificial bee colony; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: The boldfaced data mean optimal results provided by the DAPSO-GA algorithm.

Convergence history of the proposed DAPSO-GA for the multiple disc clutch brake design problem.
Pressure vessel design problem
Figure 19 presents a schematic diagram of a pressure vessel. Two hemispherical heads cap the two ends of the cylindrical vessel. The pressure vessel design problem was first presented by Kannan and Kramer, 62 and the design objective is to minimize the total fabricating cost, including the materials, forming, and welding costs. The design variables include the shell thickness

Schematic diagram of the pressure vessel.
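The standard literature formulation of the pressure vessel benchmark (shell thickness Ts, head thickness Th, inner radius R, cylindrical length L, all in inches) can be sketched as follows; the coefficients are the commonly published ones and are assumed to match the formulation used in this work.

```python
import math

def vessel_cost(x):
    """Pressure vessel benchmark (standard literature form): total cost of
    material, forming, and welding as a function of (Ts, Th, R, L)."""
    ts, th, r, l = x
    return (0.6224 * ts * r * l + 1.7781 * th * r ** 2
            + 3.1661 * ts ** 2 * l + 19.84 * ts ** 2 * r)

def vessel_constraints(x):
    """Inequality constraints g_i(x) <= 0: minimum shell and head thickness
    relative to the radius, minimum enclosed volume (1,296,000 in^3), and
    maximum length (240 in)."""
    ts, th, r, l = x
    return [
        -ts + 0.0193 * r,
        -th + 0.00954 * r,
        -math.pi * r ** 2 * l - (4.0 / 3.0) * math.pi * r ** 3 + 1296000.0,
        l - 240.0,
    ]
```

At a widely reported solution (0.8125, 0.4375, 42.0984456, 176.6365959), the cost is about 6059.714 with the thickness and volume constraints active.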
The pressure vessel design problem was previously studied by many researchers using different optimization algorithms including GA1, GA2, Cultural Differential Evolution (CDE), 66 PSO, CPSO, APSO, IAPSO, MBA, NM-PSO, G-QPSO, HPSO, WCA, HPSO-GA, ABC2, and LCA. The optimal solution obtained from the proposed algorithm is compared with those provided by the earlier reported algorithms, as listed in Table 15. Table 16 presents the comparison of statistical results provided by the previously reported algorithms and the proposed DAPSO-GA for the pressure vessel design optimization problem in terms of the worst, mean, and best solutions as well as the SD values and NFEs. It must be pointed out that the optimal results provided by NM-PSO, WCA, MBA, HPSO-GA, and ABC are infeasible as the values of
Comparison of optimal solutions obtained from different optimization algorithms for the pressure vessel design optimization problem.
APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; CPSO: co-evolutionary particle swarm optimization; MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: Boldfaced data indicate the best result among all compared algorithms.
Comparison of statistical results obtained from different optimization algorithms for the pressure vessel design optimization problem.
SD: standard deviation; PSO: particle swarm optimization; APSO: accelerated particle swarm optimization; IAPSO: improved adaptive particle swarm optimization; MBA: mine blast algorithm; CPSO: co-evolutionary particle swarm optimization; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: Boldfaced data indicate the optimal results obtained by DAPSO-GA.

Convergence history of the proposed DAPSO-GA for the pressure vessel design problem.
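For readers who wish to reproduce this benchmark, the pressure vessel cost function and its four inequality constraints can be sketched as follows. This is a minimal sketch assuming the standard Kannan–Kramer formulation; the coefficients and the sample design point below are the commonly cited literature values, not taken from this article.

```python
import math

def pressure_vessel_cost(x):
    """Total fabrication cost (material + forming + welding).
    x = [Ts, Th, R, L]: shell thickness, head thickness, inner radius,
    and cylindrical section length (commonly cited formulation)."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L
            + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L
            + 19.84 * Ts**2 * R)

def pressure_vessel_constraints(x):
    """Inequality constraints g_i(x) <= 0 of the standard benchmark."""
    Ts, Th, R, L = x
    return [
        -Ts + 0.0193 * R,    # minimum shell thickness for the working pressure
        -Th + 0.00954 * R,   # minimum head thickness
        -math.pi * R**2 * L - (4.0 / 3.0) * math.pi * R**3 + 1_296_000,  # minimum volume
        L - 240.0,           # length limit
    ]

# A candidate design is feasible only if every g_i(x) <= 0.
x = [0.8125, 0.4375, 42.0984, 176.6366]  # a frequently reported near-optimal design
print(round(pressure_vessel_cost(x), 2))
```

Checking every constraint sign before accepting a cost value is exactly how infeasible "optima" such as those noted above for NM-PSO, WCA, MBA, HPSO-GA, and ABC can be detected.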
Rolling element bearing design problem
The schematic diagram of a rolling element bearing is shown in Figure 21. The aim of the rolling element bearing design optimization is to maximize its dynamic load-carrying capacity subject to the geometric and kinematic constraints as well as the limit on the number of balls. 67 This design optimization problem has five geometric design variables, including the pitch diameter (

Schematic diagram of a rolling element bearing.
This design optimization problem was previously solved by many researchers using different optimization algorithms such as GA5, 67 ABC, TLBO, and MBA. Optimal solutions given by these reported algorithms and PSO-TVAC are compared with those provided by the proposed DAPSO-GA in terms of the values of the design variables, the objective function value, and the constraint accuracy, as detailed in Table 17. It must be emphasized that there are some errors in the optimal solutions of GA5, TLBO, and MBA reported by Sadollah et al. 9 in terms of the objective function value, the number of constraints, and the constraint accuracy, which are corrected in this work as shown in Table 17. Note that the optimal solutions provided by GA5 and TLBO are infeasible as the fourth constraint
Comparison of optimal solutions obtained from different optimization algorithms for the rolling element bearing design optimization problem.
MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: Boldfaced data indicate the best result among all compared algorithms.
Comparison of statistical results obtained from different optimization algorithms for the rolling element bearing design optimization problem.
SD: standard deviation; ABC: artificial bee colony; MBA: mine blast algorithm; DAPSO-GA: dynamic adaptive particle swarm optimization and genetic algorithm.
Note: Boldfaced data indicate the optimal results obtained by DAPSO-GA.

Convergence history of the proposed DAPSO-GA for the rolling element bearing design problem.
Conclusion
In this work, a DAPSO-GA is presented to solve constrained engineering design optimization problems with different kinds of objective functions, design variables, and constraints in nature. The presented algorithm uses a dynamic adaptive inertia weighting factor, which adaptively adjusts the search velocity during the optimum searching process, to balance exploitation (local search) and exploration (global search). In the proposed algorithm, GA-related operators are incorporated into PSO and used to refine the optimal solution provided by the PSO. In each iteration, a few particles in the swarm that meet the GA selection criterion with a time-varying selection probability are adaptively selected to update their positions via a crossover operator and an n-point mutation operator. The global best and worst positions of the PSO are updated according to the refined particle positions generated by the GA. With the three GA-related operators, the particle swarm is greatly diversified and, as a result, premature convergence is effectively prevented. The promising prospect of the proposed DAPSO-GA for engineering constrained optimization problems is evaluated by solving nine different benchmark mechanical engineering design optimization problems with continuous, discrete, or mixed design variables. For most of the considered mechanical engineering design optimization problems, statistical results show that the proposed DAPSO-GA converges to the best or a similar solution with the smallest SD values and the lowest computational effort (NFEs) compared with other meta-heuristic algorithms.
Footnotes
Appendix 1
Appendix 2
Handling Editor: Yunn-Lin Hwang
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The authors gratefully acknowledge the financial support from the National Natural Science Foundation of China (no. 51805339), the Fundamental Research Funds for Central Universities, and the State Key Development Program for Basic Research of China (no. 2014CB049401).
