Abstract
This paper presents a review of particular variants of particle swarm optimization belonging to the velocity-type class. The original particle swarm optimization algorithm was developed as an unconstrained optimization technique and lacks a mechanism for handling constrained optimization problems. This inapplicability to constrained optimization problems is addressed with the dynamic-objective constraint-handling method, which was originally developed for two variants of the basic particle swarm optimization, namely restricted velocity particle swarm optimization and self-adaptive velocity particle swarm optimization. Still within the velocity-type class, a review of three other variants is given, specifically: (1) vertical particle swarm optimization; (2) velocity limited particle swarm optimization; and (3) particle swarm optimization with escape velocity. These velocity-type particle swarm optimization variants all have in common a velocity parameter which determines the direction and movements of the particles.
Introduction
The particle swarm optimization (PSO) has a cooperative nature. 1 Over time several applications of PSO have been developed, but a disadvantage of this algorithm is the lack of assurance that it will find the optimum solution, as well as the high computational cost associated with the fitness function (FF). Compared to the genetic algorithm, the PSO is faster at approaching near-optimal solutions, although it is also quicker to converge prematurely. 2
In the literature, many variants of PSO are available. Sedighizadeh and Masehian 3 categorized these variants according to fuzziness, continuity, accordance, topology, attraction, velocity-type, activity, grouping, mobility, hierarchy, divisibility, interaction, etc.
In this review, we focus on the category called “velocity-type”, which comprises five PSO variants: PSO with restricted (and unrestricted) velocity, self-adaptive velocity, vertical velocity, limited velocity, and escape velocity.
The restricted velocity PSO (RVPSO) appeared in response to constrained optimization problems (COPs). Lu and Chen 4 went further, introducing a new constraint treatment technique called the dynamic-objective constraint-handling method (DOCHM). This method transforms COPs into bi-objective unconstrained optimization problems.
The basic PSO algorithm was initially developed to solve the unconstrained optimization problems. Hence, it lacks a mechanism to adapt the algorithm to optimization problems with constraints. Thus, the self-adaptive velocity PSO (SAVPSO) is an algorithm that employs a mechanism to study the impact of constraints on the basic PSO algorithm in order to enhance the particle’s search ability in the feasible region. Therefore, each particle will have the capacity to self-adaptively regulate its velocity according to the existing characteristics in the feasible region. 5
In some iterations of the PSO, the global optimum is not reached and the particles remain near the optimum point without attaining it. As the movement of a particle depends on its previous movements, particles tend to move to a local optimum. The vertical PSO (VPSO) algorithm was developed to overcome these limitations of the basic PSO algorithm. The VPSO assumes that the particles fly in two directions: towards the global best particle or in a vertical direction. At each iteration, a random number is generated, measuring the probability of flying in each direction. Yang 6 states that the PSO and the VPSO are used to train neural networks (NNs) and applies a neural network based on VPSO (VPSONN) to soft-sensor modeling of acrylonitrile yield.
The velocity limited variant (VLPSO) introduced by Xu and Chen 7 arose to help with financial decisions about the construction of a portfolio of investments with minimal risk but with maximum expected return, considering constraints for velocity and position of the particles. Here, the return rates were considered as stochastic variables being applied to the VLPSO algorithm to solve this problem.
The PSO with escape velocity (EVPSO) is an algorithm that enables the particles with an escape velocity so that they can free themselves from the local optimum. Thus, it is possible to prevent premature convergence observed in the basic PSO algorithm and enhance the population diversity of the swarm, safeguarding the efficient performance of PSO. 8
Particle swarm optimization
Kennedy and Eberhart 9 were the first to introduce the particle swarm optimization (PSO) algorithm, based on observation of the behavior of a flock of birds searching for food. This meta-heuristic algorithm has a population nature based on cooperation. In comparison with other evolutionary algorithms, the PSO has neither crossover between individuals nor mutation during iterations of the algorithm.
In the PSO, the swarm’s particles start exploring the search space with randomly generated positions and velocities. In each iteration, the particles move within the search space guided by their velocity at the previous instant, the best position found by the particle itself, and the best position found by the particles in its neighborhood.
The PSO concept consists in updating the velocity of each particle, at each instant of time, relative to the best position found by that particle and the best position found by its neighborhood. Supposing that the search space is d-dimensional, the current position of particle i can be represented by the vector x_i = (x_i1, x_i2, ..., x_id) and its velocity by v_i = (v_i1, v_i2, ..., v_id). The update equations are

v_i(t + 1) = w v_i(t) + c1 r1 (p_i(t) - x_i(t)) + c2 r2 (p_g(t) - x_i(t))  (1)

x_i(t + 1) = x_i(t) + v_i(t + 1)  (2)

where p_i is the particle’s best position, p_g the neighborhood’s best position, c1 and c2 the cognitive and social parameters, and r1, r2 random numbers uniformly distributed in [0, 1].
PSO is influenced by multiple control parameters such as the inertia weight, the neighborhood size, the size of the problem, the population size, the maximum velocity, the cognitive and social parameters, and so on. The update equation of velocity presented includes the inertia weight, w, which is important to ensure convergent behavior. The most famous way to adjust the inertia weight is to use a large parameter w at the beginning of the exploration, and gradually reduce it along the iterations.
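The update rules and the decreasing inertia weight described above can be sketched in a short, self-contained implementation. This is an illustrative minimal version, not a reference implementation; parameter values such as c1 = c2 = 2.0 and the 0.9 → 0.4 inertia schedule are common defaults from the literature, not taken from this paper.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200,
        c1=2.0, c2=2.0, w_start=0.9, w_end=0.4):
    """Minimize f over bounds^dim with a basic PSO."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(iters):
        # inertia weight decreases linearly from w_start to w_end
        w = w_start - (w_start - w_end) * t / max(1, iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # equation (1): momentum + cognitive + social components
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                # equation (2), with the position held inside the search space
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

# usage: minimize the sphere function on [-5, 5]^3
best, best_f = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

The large initial w favors global exploration, while the small final w concentrates the swarm around the best region found, matching the adjustment strategy described above.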
Compared to other evolutionary computation (EC) techniques, the lack of a velocity control mechanism in the PSO resulted in its low efficiency in terms of performance. 11 There are several researchers who study the neighborhood’s size. For example, Eberhart and Kennedy 12 concluded that a small local neighborhood is best to avoid local minimum, since a global neighborhood converges rapidly.
PSO and its inapplicability to COPs
The basic PSO algorithm, first proposed in 1995,9,12 has been developed to handle unconstrained optimization problems. 5 In this section, the aspects of the inapplicability of basic PSO algorithm for solving COPs will be examined.
Considering equation (1), the right-hand side can be decomposed into three parts. 13 The first part, w v_i(t), is the momentum term: it carries the particle’s previous velocity into the new step. The sum of the second part, c1 r1 (p_i(t) - x_i(t)) (the cognitive component, attracting the particle to its own best position), and the third part, c2 r2 (p_g(t) - x_i(t)) (the social component, attracting it to the neighborhood’s best position), directs the particle toward promising regions. Since none of these terms carries information about constraints, the basic PSO has no mechanism to drive particles into the feasible region of a COP.
Dynamic-objective constraint-handling method
Lu and Chen 4,5 formulated COPs as follows: minimize f(x), with x inside the parametric bounds [x_l, x_u], subject to constraints g_j(x) <= 0, j = 1, ..., m. Typically, the equality constraints h_j(x) = 0 are transformed into inequalities shaped |h_j(x)| - epsilon <= 0, for a small tolerance epsilon > 0. The solution of the COP is the point of the feasible region F = {x : g_j(x) <= 0 for all j} that minimizes f.
When we apply the PSO to a COP, each particle moves toward the promising region in the search space. According to Lu and Chen, 4 a COP can be seen as a bi-objective unconstrained optimization problem. The first objective is to enter the feasible region, and the second is to optimize the original function of the COP, in order to find the optimal solution. Therefore, it is reasonable to expect that only after a particle has entered the feasible region is it possible to approach the optimal solution.
During the search process of the PSO, the mechanism presented by the authors is designed in such a way that the particle has the ability to adjust depending on their goals, whether it lies inside or outside the feasible region, and hence this mechanism is designated dynamic-objective constraint-handling method (DOCHM).
A simple way to measure how far a point is from satisfying all the constraints of the COP is given by the following function

Phi(x) = sum_j max(0, g_j(x))  (3)

Evidently, Phi(x) >= 0 for all x, and Phi(x) = 0 if and only if x lies in the feasible region.
In DOCHM, the function (3) is only used as a way to measure the distance of a particle from the feasible region. Therefore, the initial COP is converted into the following bi-objective unconstrained problem: minimize, in order of priority, first Phi(x) and then f(x).
In theory, only when Phi(x) = 0, i.e. when the particle has entered the feasible region, does the second objective, the original function f, become active for that particle.
Notice that after starting the optimization of f, a particle may still leave the feasible region; in that case its objective dynamically switches back to minimizing Phi(x), which is what gives the method its dynamic-objective character.
The incorporation of DOCHM into a PSO algorithm makes it easy to derive the update strategies for the personal and global best positions: between two positions, the one with the smaller violation Phi is preferred, and when both are feasible, the one with the smaller objective value f is preferred.
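The DOCHM comparison logic can be sketched as follows. This is an illustrative interpretation: the helper names `violation` and `better` are ours, and the violation measure assumes the constraints are given in the form g_j(x) <= 0.

```python
def violation(x, ineq_constraints):
    """Phi(x): total constraint violation; zero iff x is feasible."""
    return sum(max(0.0, g(x)) for g in ineq_constraints)

def better(x, y, f, ineq_constraints):
    """Dynamic-objective comparison: minimize violation first,
    then the original objective once both points are feasible."""
    phi_x = violation(x, ineq_constraints)
    phi_y = violation(y, ineq_constraints)
    if phi_x == 0.0 and phi_y == 0.0:   # both feasible: compare f
        return f(x) < f(y)
    return phi_x < phi_y                 # otherwise: less violation wins

# usage: feasible region is x0 >= 1, objective is x0^2
f = lambda x: x[0] ** 2
cons = [lambda x: 1.0 - x[0]]            # g(x) <= 0  <=>  x0 >= 1
```

A personal or global best would be replaced whenever `better(candidate, incumbent, f, cons)` holds, so an infeasible particle is pulled toward the feasible region before it starts optimizing f.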
Restricted velocity PSO
As mentioned above, the basic PSO was originally designed to solve unconstrained optimization problems, so it does not take into account the impact of the feasible region, i.e. constraints on the search mechanism, when it comes to solving COPs.
Lu and Chen 4 conducted a modification in the basic PSO search system in order to embed the impact factor from the feasible region and to improve algorithm performance on the COP resolution. Thus, the authors propose a new algorithm, the restricted velocity particle swarm optimization (RVPSO), which includes the impact of the feasible region in the velocity of the particle swarm.
Consider the velocity update equation (1) of the basic PSO divided into two parts: the momentum term w v_i(t) and the attraction terms toward the personal best p_i(t) and the global best p_g(t). Lu and Chen 4 modify each part in turn, replacing them so that the resulting velocity is restricted by the region delimited by the personal and global best positions.
Bearing in mind the two modifications presented above, the authors presented a new PSO algorithm, the restricted velocity particle swarm optimization, which is especially applied in COPs. Furthermore, the term
When applying RVPSO for COPs, it is necessary to incorporate a method of treating constraints, as noted above. The authors propose the DOCHM that is rooted in the PSO search mechanism. Thus, Lu and Chen 4 proposed the combination of RVPSO with DOCHM, a union of algorithms in order to solve COPs.
Moreover, the authors presented a simple way to hold the particles within the search space, i.e. if a particle’s position violates one of the parametric bounds x_l or x_u, the position is reset so that the particle is brought back inside the search space.
Pseudo-code of DOCHM + RVPSO
Self-adaptive velocity PSO
The basic PSO algorithm was initially developed to solve unconstrained optimization problems; hence, it lacks a mechanism to solve COPs. Much of the literature about the solving of COPs with the basic PSO focuses on the way to deal with such constraints and not on their impact on the PSO search mechanism.
Lu and Chen 5 developed an algorithm called SAVPSO that uses a mechanism capable of dealing with COPs and that studies the impact of the constraints on that mechanism, in order to improve the PSO’s search ability. In SAVPSO, each particle of the swarm has the ability to self-adapt its velocity taking into account characteristics of the feasible region.
To deal with constraints, SAVPSO adopted the same mechanism used in RVPSO. DOCHM operates in the inherent search mechanism, showing the impact of constraints and forcing the particles to seek the feasible region in the search space.
In SAVPSO, the particles of the swarm are handled according to the following equations
The update equation of velocity of SAVPSO was obtained by changing the update equation of velocity of the basic PSO stated in equation (1). Compared with equation (1), the inertia and random coefficients are modified so that each particle self-adaptively regulates the magnitude and direction of its velocity according to the distribution of the particles’ best positions, i.e. according to the characteristics of the feasible region, as detailed by Lu and Chen. 5
When integrating SAVPSO to solve COPs, we have to keep in mind that the limits of the feasible region may be very close to the parametric limits x_l and/or x_u, so it is very probable that some particles close to the limits of the feasible region violate the parametric constraints.
In order to overcome this problem, Lu and Chen 4,5 adopted the same technique used above in RVPSO: a particle that violates the parametric bounds is reset to a position inside the search space.
Finally, the authors describe the integrated SAVPSO algorithm in pseudo-code, where Imax is the maximum number of iterations; there it is possible to see that SAVPSO incorporates DOCHM as one of the components of its search technique. 5
Create and initialize an n-dimensional swarm
Vertical PSO
The vertical PSO algorithm is a variant of the PSO developed to solve the basic PSO’s problems, such as premature convergence to local optima and the loss of population diversity.
Considering equation (1), if the neighborhood’s best global fitness does not improve for a few iterations, then when the swarm’s particles are close to the best global position found, they will all move in the same direction and converge to a local optimum. To solve this problem, VPSO was proposed.
VPSO claims that the particles can move in two different directions: towards the position of the global best particle or in vertical direction. Furthermore, in each iteration, a random number is generated to measure the probability of a particle to move in any of the above-mentioned directions. 6
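The direction choice just described can be sketched as follows, assuming that “vertical” means a random direction perpendicular to the direction toward the global best; the exact update equations (14)-(16) are in Yang 6 and are not reproduced here.

```python
import math
import random

def step_direction(x, gbest, p_vertical=0.3):
    """Pick a unit flight direction for one particle, per the VPSO idea:
    with probability p_vertical, move perpendicular to the direction
    toward the global best; otherwise, move toward the global best.
    Illustrative sketch only; p_vertical is an assumed parameter."""
    d = [g - xi for g, xi in zip(gbest, x)]
    norm = math.sqrt(sum(v * v for v in d)) or 1.0
    d = [v / norm for v in d]
    if random.random() >= p_vertical:
        return d                                  # toward the global best
    # vertical direction: random vector orthogonalized against d
    r = [random.gauss(0.0, 1.0) for _ in d]
    proj = sum(ri * di for ri, di in zip(r, d))
    v = [ri - proj * di for ri, di in zip(r, d)]
    vnorm = math.sqrt(sum(u * u for u in v)) or 1.0
    return [u / vnorm for u in v]
```

Because the vertical direction is orthogonal to the pull toward the global best, particles occasionally step off the common convergence path, which is how VPSO preserves diversity near a stagnating optimum.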
In VPSO, the velocity of the particle and its new position will be given by the following equations
The application of VPSO in the soft-sensor model of acrylonitrile yield
Acrylonitrile is the main material of polyacrylonitrile production. The fabrication of acrylonitrile is a complex industrial process, and its yield is a very significant product indicator. Therefore, it is vital to obtain a precise acrylonitrile yield online and in real time in order to accurately control the fabrication process.
Monitoring the acrylonitrile yield online with an analysis instrument can be ineffective, so Yang 6 suggests soft-sensing technology to control the acrylonitrile yield online. He also states that the VPSO algorithm is used to train a neural network; thus the VPSONN emerges, applied in soft-sensor modeling of acrylonitrile yield.
The acrylonitrile yield is affected by seven variables: reaction pressure, temperature, the amount of pure propylene, the amount of ammonia, the amount of air, activator density, and reaction speed. To measure the acrylonitrile yield, it is crucial to discover the relationship between the yield and the seven variables mentioned before. The VPSONN is utilized to determine that relationship. Since VPSONN is applied in soft-sensor modeling of acrylonitrile yield, it is important to define the objective function of the soft-sensing model. 6
Stage 1: Initialize the structure of the NN and the parameters of VPSO.
Stage 2: Initialize the state of each particle. Calculate the corresponding output fitness of the NN; store the individual best position and best fitness of each particle, as well as the best position and best fitness of the whole swarm.
Stage 3: Update the velocity and position according to equations (14), (15), and (16), respectively. If necessary, limit the particles’ velocity and position.
Stage 4: Calculate the corresponding fitness of each particle; update and store the individual best position and fitness of each particle; update and store the global best position and fitness of the whole swarm.
Stage 5: If the stopping condition is not satisfied, go to Stage 3. Otherwise, stop iterating and output the global best position and global best fitness of the whole swarm as the result.
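The fitness evaluated in the stages above is the soft-sensing model’s objective function; a minimal sketch, assuming a mean-squared-error objective between the network’s predicted yield and the measured yield (the exact form used by Yang 6 is not reproduced in this text):

```python
def soft_sensor_fitness(predict, samples):
    """Mean squared error between a model's predicted acrylonitrile
    yield and the measured yield. `predict` maps a vector of the seven
    process variables to a yield; `samples` is a list of
    (variables, measured_yield) pairs. Assumed MSE form, for illustration."""
    errors = [(predict(x) - y) ** 2 for x, y in samples]
    return sum(errors) / len(errors)
```

In VPSONN, each particle encodes a set of network weights, and this fitness is what the swarm minimizes in Stages 2 and 4.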
Velocity limited PSO
One of the most important financial decisions that people and institutions have to deal with is to elaborate an investment portfolio. Here comes the modern portfolio theory, where the investor chooses the proportions of the portfolio assets, rationally, seeking to maximize the expected return and minimize the associated risks.
In order to formalize the problem, the authors resorted to the mean-variance model of Markowitz, 14 which elucidates the selection of a portfolio with N risky assets: choose asset weights that minimize the portfolio variance (risk) for a given level of expected return.
Xu and Chen 7 considered the expected rates of return as stochastic variables, in order to solve the problem by applying particle swarm optimization. A major goal of the authors is to overcome the global and local search limitations of traditional numerical algorithms in solving nonlinear programming problems. So a new variant of the PSO algorithm is introduced, called velocity limited particle swarm optimization (VLPSO).
The basic PSO algorithm is not sufficient for optimization problems confined to small regions, because the best solution is frequently lost. We also know that the efficiency and accuracy of an optimization are determined by the exploration-exploitation trade-off, which is controlled by the velocity update equation (1). Therefore, focusing on the information obtained from the velocity equation, if we limit the velocity to a different scope, we can find a different best solution.
In VLPSO, the following limits were defined: firstly, if the particle velocity is greater than vmax, it is set equal to vmax; if the velocity is smaller than vmin, it is set equal to vmin. Secondly, if the found solution is greater than pmax, it is set equal to pmax; if the solution is smaller than pmin, it is set equal to pmin, where pmax and pmin delimit the solution range. For the constraint condition, the strategy used is to preserve only the particles that satisfy the constraint condition, while the others are abandoned. The authors also presented the pseudo-code of the VLPSO algorithm for the investment portfolio model 7
Initialize N particles;
{ Set constants w, c1, c2;
  Randomly initialize particle positions x_i, i = 1, ..., N;
  Randomly initialize particle velocities v_i, i = 1, ..., N;
  Evaluate function values f(x_i);
  Set the personal best of each particle to its initial position;
  Set the global best to the best of the personal bests;
}
Repeat until the stopping criterion is met
{
  { Evaluate function values f(x_i), updating the personal and global bests;
  }
  { Calculate particle velocity v_i, limited to [vmin, vmax];
    Update particle position x_i, limited to [pmin, pmax];
  }
}
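The velocity and position limits in the pseudo-code above can be sketched for a single dimension as follows; the function names and signature are illustrative, not taken from Xu and Chen 7.

```python
def clamp(value, lo, hi):
    """Limit a scalar to [lo, hi], as VLPSO does for both
    velocities ([vmin, vmax]) and positions ([pmin, pmax])."""
    return max(lo, min(hi, value))

def vlpso_step(x, v, pbest, gbest, w, c1, c2, r1, r2,
               vmin, vmax, pmin, pmax):
    """One velocity-limited update for a single scalar dimension:
    equation (1) followed by clamping of velocity and position."""
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v_new = clamp(v_new, vmin, vmax)
    x_new = clamp(x + v_new, pmin, pmax)
    return x_new, v_new
```

Narrowing [vmin, vmax] restricts how far a particle can travel per iteration, which is the lever the authors use to search different scopes of the solution space.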
PSO with escape velocity
The basic PSO algorithm often converges prematurely and its performance is adversely affected by the loss of population diversity. The PSO variant with escape velocity equips the swarm’s particles with an escape velocity, so that they can escape from local optima. Consequently, through this variant, it is possible to overcome the imperfections of the basic PSO, namely rapid convergence and lack of population diversity.
To make it possible to provide the particles with the escape velocity, i.e. the ability to move continuously, it is necessary to define a new equation for each particle’s velocity. 8
The EVPSO is useful when most of the particles are concentrated in the same convex subset; the velocity update then points particles to locations outside this subset. If the fitness at a point found in another convex subset is smaller than the current global best value, that point is adopted as the new global best. Eventually, all the particles move to the new convex subset. The exploration of the search space ends only when the algorithm converges to the subset containing the sought minimum.
The performance of the EVPSO algorithm is directly dependent on the choice of values for the parameters ec and ρ. Large ec values provide a global search while small ec values favor a local search. Likewise, large ρ values reduce the particle’s escape domain, while small ρ values increase it.
Wang et al. 8 stated that to obtain the desired exploration-exploitation trade-off, it is necessary to divide the particle search into two stages: initially, ec is set to a large value and ρ to a small value; in the last stage, ec is set to a small value and ρ to a large value. Thus, initially the particles perform big movements and search the entire search space in pursuit of good local optima, while in the final stage the particles execute a fine-grained search. Modifying equation (22), a new update equation for velocity is given
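The two-stage parameter schedule can be sketched as follows. The stage split and the numeric values of ec and ρ are illustrative assumptions; the source only prescribes large-then-small ec and small-then-large ρ.

```python
def evpso_schedule(t, t_max, split=0.5,
                   ec_large=2.0, ec_small=0.2,
                   rho_small=0.1, rho_large=1.0):
    """Two-stage (ec, rho) schedule in the spirit of Wang et al.:
    exploration first (large ec, small rho => wide escape domain),
    exploitation later (small ec, large rho => narrow escape domain).
    All numeric defaults are assumed values, not from the paper."""
    if t < split * t_max:
        return ec_large, rho_small   # stage 1: global search, big movements
    return ec_small, rho_large       # stage 2: fine-grained local search
```

A smooth (e.g. linear) transition between the two stages would be an equally plausible reading; the hard switch here just makes the two regimes explicit.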
Conclusion
In this review, we sought to present a specific class of basic PSO variants, the velocity-type PSO variants. The main feature they all have in common is that the direction or movement of the particles can be determined by defining a velocity parameter.
Based on the current literature, the PSO algorithm has experienced several changes in the form of variants, in order to overcome its faults. The variants presented in this paper are only a small portion of a long list of PSO variants.
Since its beginnings, the basic PSO algorithm has proven incapable of dealing with COPs. In order to solve this obstacle, the DOCHM emerged as a potential solution. Moreover, two PSO velocity-type variants have appeared and are associated with this mechanism, namely RVPSO and SAVPSO. The incorporation of DOCHM in RVPSO and SAVPSO provides efficiency and effectiveness to the final results.
To address other limitations of the basic PSO, three further velocity-type variants are analyzed, specifically the VPSO, VLPSO, and EVPSO algorithms. The VPSO states that particles fly towards the global best particle or in a vertical direction. The VLPSO seeks the optimal solution with the lowest possible number of iterations. The EVPSO introduces an escape velocity to improve the particles’ search capability. They all try to avoid the classical issue of the basic PSO, which is rapid convergence to local optima.
This review is intended as a guide for future research, since, owing to its flexibility, the PSO algorithm has applications in many areas. The PSO algorithm has great potential and room for improvement, making it a method of choice for solving complex optimization problems. It is also important to study its variants in order to keep the algorithm up to date and subsequently achieve the best possible results.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
