Abstract
Multiple objective linear programming problems are solved with a variety of algorithms. While these algorithms vary in philosophy and outlook, most of them fall into two broad categories: those that are decision space-based and those that are objective space-based. This paper reports the outcome of a computational investigation of two key representative algorithms, one from each category: the parametric simplex algorithm, a prominent representative of the former, and the primal variant of Benson's outer-approximation algorithm, a prominent representative of the latter. The paper includes a procedure to compute the most preferred nondominated point, which is an important feature in the implementation of these algorithms and their comparison. Computational and comparative results on problem instances ranging from small to medium and large are provided.
Introduction
Multiple objective linear programming (MOLP) is a branch of multiple criteria decision making (MCDM)32,33 that seeks to optimize two or more linear objective functions subject to linear constraints. Indeed, many real-world decision-making problems involve more than one objective function and can be formulated as MOLP problems. MOLP models have been widely applied in many fields of human endeavour, such as science, engineering and management, and have become a useful tool in decision making. An MOLP problem can be expressed as the vector minimization

min Cx subject to x ∈ X,

where C is the q × n criterion matrix and the feasible region X is a polyhedron defined by linear constraints (e.g. Ax ≥ b, with constraint matrix A and right-hand side vector b).
In practice, MOLP is typically solved by the Decision Maker (DM) with the support of the analyst looking for a most preferred (best) solution in the feasible region X. This is because optimizing all the objective functions simultaneously is not possible due to their conflicting nature. Consequently, the concept of optimality is replaced with that of efficiency. The purpose of MOLP is to obtain either all the efficient or nondominated points or a subset of either or a most preferred point depending on the purpose for which it is needed.
An efficient solution of an MOLP problem is a solution in which no objective function can be improved without deteriorating at least one of the other objectives. A weakly efficient solution is one for which no other solution improves all the objective functions simultaneously.
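In finite terms, these definitions reduce to componentwise comparisons of objective vectors. The following sketch (the function names are ours, and minimization is assumed) checks dominance between points and filters a finite set down to its nondominated subset:

```python
import numpy as np

def dominates(y1, y2):
    """True if objective vector y1 dominates y2 under minimization:
    y1 is no worse in every component and strictly better in at least one."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    return bool(np.all(y1 <= y2) and np.any(y1 < y2))

def nondominated(points):
    """Return the nondominated subset of a finite list of objective vectors."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

pts = [(1, 3), (2, 2), (3, 1), (2, 3)]   # (2, 3) is dominated by (1, 3)
front = nondominated(pts)                 # the other three points remain
```

This pointwise test only applies to finite point sets; the algorithms discussed in this paper characterize whole efficient faces rather than enumerating candidates.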
A nondominated point in the objective space is the image of an efficient solution in the decision space, and the set of all nondominated points forms the nondominated set. Let
The set of all efficient solutions and the set of all weakly efficient solutions of equation (2) are denoted by XE and XWE, respectively. The sets
The nondominated faces in the objective space of a given problem constitute the nondominated frontier, and the efficient faces in the decision space of the problem constitute the efficient frontier.
Robustness of a method can be defined in different ways depending on the researcher, for instance in terms of computing efficiency or of the ability of a method to solve problems. In this paper, we take it to mean the ability of a method to solve all problems, both simple and difficult.
The ideal objective point
MOLP has been an active area of research since the 1960s. During this period, various algorithms have been developed for generating either the entire efficient or nondominated set, or a subset of it, or a most preferred efficient or nondominated point for the problem. Most of these approaches are decision space-based. However, objective space-based methods are becoming more and more prominent.
In this paper, we are concerned with the state-of-the-art technology for MOLP. We are interested in a detailed comparison of the recently introduced parametric simplex algorithm (PSA) of Rudloff et al. 3 with the primal variant of Benson's 4 outer-approximation algorithm (BOA), which is an objective space-based method. It has been noted by Benson 4 that, in practice, the DM prefers to base his or her choice of a most preferred (best) solution on the nondominated points. To achieve this comparison, we act here as the DM and choose a most preferred nondominated point (MPNP), whose components are as close as possible to an unattainable ideal objective point, from the nondominated set returned by PSA, to compare with an MPNP returned by BOA. These two algorithms are based on the same solution concept introduced by Löhne. 5 The algorithms are compared comprehensively on a series of existing test problems.
One of the key issues in MOLP is computing MPNPs. We give a detailed procedure for this purpose, which allows us to carry out the comparison.
This paper is organized as follows. The 'Motivation' section presents the motivation for this work. The 'Literature review' section reviews the related literature, centring on the parametric simplex approach and the objective space-based method, BOA. We present PSA in 'The parametric simplex algorithm' section. The 'Scalarization techniques' section discusses two scalarization techniques. BOA is presented in 'Benson's outer-approximation algorithm' section. The 'Selection of the MPNP' section discusses the selection of an MPNP. Detailed numerical experiments are presented in the 'Experimental results' section to compare the quality of an MPNP, robustness and computing efficiency of the two algorithms. The results are summarized in the 'Summary of results' section. Finally, a conclusion is drawn in the 'Conclusion' section.
Motivation
These two algorithms are similar in output but different in philosophy: BOA is an objective space-based search algorithm, while PSA is a decision space-based algorithm. They are prominent examples of MOLP algorithms. Unfortunately, there is no empirical evidence in the literature, as far as we can tell, that separates them in terms of robustness and the quality of the MPNPs they return. This paper intends to fill this gap. It considers all available test problems, ranging from small to large, and solves them with Matlab implementations of the two algorithms on the same machine. The extensive results are recorded and discussed. It is hoped that these results will be a valuable guideline for potential users choosing between the two algorithms depending on the problems they have to solve. It is also important to add that a comprehensive review of existing work is included. This puts the work in context and saves the reader the need to consult much of the literature on the topic. We dwell in particular on the robustness and the quality of the MPNPs returned by these algorithms, using existing and realistic instances.
Note that to the best of our knowledge, no one tested these algorithms as extensively as we have done here in terms of the size and variety of problems.
The PSA of Rudloff et al. 3 works in the decision space and does not aim to find the set of all efficient extreme points of the problem; rather, it finds a solution based on the idea of Löhne. 5 That is, it finds a subset of efficient extreme points and directions that suffices to generate the whole efficient frontier, and it also returns the corresponding nondominated points and directions. BOA, on the other hand, works in the objective space to find the set of all nondominated points as well as directions of the problem.
Literature review
MOLP algorithms using a parametric programming approach have been widely studied over the last five decades. The first parametric programming approach to MOLP appears to be due to Schönfeld. 6 The author presented an algorithm for the enumeration of efficient solutions of the problem using parametric programming. Geoffrion 7 presented a bicriterion parametric linear programming algorithm to solve the problem. It was noted that some solutions are not extreme points of the feasible region, pointing to the fact that one should not rely only on algorithms that consider extreme point solutions, as in ordinary LP.
Evans and Steuer 8 introduced the multiobjective simplex algorithm (MSA) for finding all the efficient extreme points and unbounded efficient edges of MOLPs. The algorithm first establishes that the problem is feasible and has efficient solutions. This is done by solving an auxiliary LP and two weighted sum LPs to find an initial efficient basis. Thereafter, it generates efficient bases by moving from one efficient extreme point to adjacent efficient extreme points until all the efficient extreme points have been found. The efficient extreme points are obtained using a test problem that determines the pivots that lead to them.
Zeleny 9 extended the approach of Geoffrion 7 to solve problems with more than two objectives. The approach decomposes the parametric space into finitely many subspaces that provide a set of optimal weights corresponding to extreme point solutions. It was noted that the approach might be inefficient for degenerate problems, as two or more bases may be associated with the same extreme point. The vertices are tested for efficiency as soon as they are generated, by solving an LP that determines the efficiency of such vertices.
Yu and Zeleny 10 presented two basic forms of parametric linear programming approaches and their computational procedures for computing the efficient set: the direct decomposition of the parametric space, earlier introduced in Zeleny, 9 and the indirect algebraic method. Based on a numerical experiment, the indirect algebraic method outperformed the direct decomposition method.
A parametric linear programming algorithm for generating the set of efficient vertices and higher-dimensional faces of the problem was presented by Gal. 11 In this procedure, efficient extreme points are generated through the use of an auxiliary problem which itself is an ordinary LP. The algorithm also determines higher-dimensional efficient faces for degenerate problems which were only discussed in Zeleny 9 but not solved. The efficient faces are generated following a bottom-up search strategy, that is, they are generated based on the information provided by the efficient extreme points.
Steuer 12 applied the MSA of Evans and Steuer 8 to parametric MOLP problems. Different methods for obtaining an initial efficient basis as well as different LP test problems were also presented. Similarly, Ehrgott 1 used this MSA variant to solve parametric MOLP problems.
A modification of the PSA for single objective LP to solve bounded bicriterion LP problems was presented by Ruszczyński and Vanderbei. 13 The approach was applied to a large mean-risk portfolio optimization problem for which the nondominated portfolios were generated.
Ehrgott et al. 14 introduced a primal-dual simplex algorithm for bounded problems. This algorithm finds a subset of efficient solutions that suffices to generate the whole efficient frontier. The algorithm starts with a coarse partitioning of the weight space, which is refined in each iteration, and it solves a costly LP in each iteration. A vertex enumeration is then performed in the last step to obtain efficient solutions. Numerical illustrations show the applicability of the algorithm.
Recently, Rudloff et al. 3 presented a PSA for the problem. The algorithm is a generalization of the algorithm of Ruszczyński and Vanderbei 13 and is similar to that of Ehrgott et al. 14 It works for any dimension, solves bounded and unbounded problems (unlike those of Ehrgott et al. 14 and Ruszczyński and Vanderbei 13 ) and, like that of Ehrgott et al., 14 does not find all the efficient solutions. Instead, it finds a solution based on the idea of Löhne, 5 i.e. a subset of efficient extreme points and directions that suffices to generate the whole efficient frontier. This is the so-called PSA. It was compared with a version of BOA in Hamel et al. 15 and the MSA of Evans and Steuer 8 using small MOLP instances which were randomly generated with three and four objectives and up to 50 variables and constraints. The numerical results show that the proposed algorithm outperforms Benson's algorithm for non-degenerate problems. However, Benson's algorithm is better for highly degenerate problems. PSA was also found to be computationally more efficient than the algorithm of Evans and Steuer. 8 This comparison only focused on computing efficiency. Here, apart from computing efficiency, we also compare the robustness and the quality of the MPNPs returned by these two methods using existing and realistic MOLP instances.
Due to the various difficulties arising from solving MOLP problems in the decision space (such as having different efficient solutions that map onto the same point in the objective space), efforts were made to look at the possibility of solving them in the objective space.
Benson, 4 who presented a detailed account of decision space approaches, proposed an algorithm for generating the set of all nondominated points in the objective space. This is the so-called BOA. According to him, this algorithm is the first of its kind. Computational results suggest that the objective space-based approach is better than the decision space-based one. A further analysis of the objective space-based algorithm was presented in Benson. 16 This outer approximation algorithm also generates the set of all weakly nondominated points, thereby enhancing the usefulness of the algorithm as a decision aid.
Another of Benson’s 17 suggestions is a hybrid approach for solving the problem in the objective space. The approach partitions the objective space into simplices that lie in each face so as to generate the set of nondominated points. This idea was earlier presented in Ban. 18 The algorithm is quite similar to that in Benson. 4 The difference is in the manner in which the nondominated vertices are found. While a vertex enumeration procedure is employed in Benson, 4 a simplicial partitioning technique is used in the latter.
In Shao and Ehrgott, 19 a modification of the algorithm of Benson 4 was presented. While Benson 4 requires a bisection method that solves many LPs in one step, here solving a single LP achieves the desired effect and in the process improves computation time. Shao and Ehrgott 20 proposed an approximate dual variant of the algorithm of Benson 4 for obtaining approximate nondominated points of the problem. The proposed algorithm was applied to the beam intensity optimization problem of radiotherapy treatment planning, for which approximate nondominated points were obtained. Numerical testing shows that the approach is faster than solving the primal directly.
The explicit form of the algorithm of Benson 4 as modified by Shao and Ehrgott 19 is presented in Löhne. 5 This version solves two LPs in each iteration during the process of obtaining nondominated points and is extended to unbounded problems. Löhne 21 developed a Matlab implementation of this algorithm called BENSOLVE-1.2 for computing all the nondominated points and directions (unbounded nondominated edges) of the problem.
Csirmaz 22 presented an improved version of the algorithm of Benson 4 that solves one LP and a vertex enumeration problem in each iteration. While Benson 4 requires solving two LPs in two separate steps to determine a unique boundary point and a supporting hyperplane of the image, here the two steps are merged so that a single LP does both tasks, improving computation time. The algorithm was used to generate all the nondominated vertices of the polytope defined by a set of Shannon inequalities on four random variables so as to map their entropy region. Numerical testing shows the applicability of the approach to medium and large instances with 3 and 10 objectives and up to 5772 variables and 635 constraints.
Hamel et al. 15 introduced new versions and extensions of the algorithm of Benson. 4 The primal and dual variants of the algorithm each solve only one LP in each iteration and are extended to pointed solid polyhedral ordering cones. Tests reveal a reduction in computation time. Similarly, Löhne et al. 23 extended the primal and dual variants of this algorithm to solve convex vector optimization problems approximately in the objective space.
Based on our review of the topic, we observed that no comparison has been carried out between the robustness and quality of an MPNP chosen from the nondominated set returned by PSA and those of an MPNP chosen from the nondominated set returned by BOA. We intend to fill this gap here.
The parametric simplex algorithm
The PSA of Rudloff et al. 3 is one of the current solution approaches for MOLP. It can be viewed as a variant of the algorithm of Evans and Steuer, 8 with a similar structure. It differs in that it does not find all the efficient extreme points and unbounded efficient edges (extreme rays), as is done in Evans and Steuer. 8 As mentioned earlier, the algorithm works in the decision space and finds a solution based on the idea of Löhne; 5 i.e., it finds a finite subset of efficient extreme points and directions that suffices to generate the whole efficient frontier. The algorithm is initialized by solving an LP to find a weight vector such that the weighted sum problem using this weight vector yields an optimal solution. The corresponding optimal dictionary (containing basic and nonbasic variables) is used to construct an initial dictionary D0 and an index set of entering variables.
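This initialization can be sketched with an off-the-shelf LP solver: any strictly positive weight vector whose weighted sum problem has an optimal solution yields an efficient basic feasible solution to seed the dictionary. A minimal sketch in Python with scipy (the bicriteria problem data below is our own illustration, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative MOLP: minimize (f1, f2) = (-x1, -x2)
# subject to x1 + x2 <= 4, x1 <= 3, x >= 0.
C = np.array([[-1.0, 0.0],
              [0.0, -1.0]])          # criterion matrix (one row per objective)
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

# A strictly positive weight vector; the weighted sum optimum is efficient.
w = np.array([0.7, 0.3])
res = linprog(c=w @ C, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 2, method="highs")
x0 = res.x          # an efficient extreme point: (3, 1) for this data
y0 = C @ x0         # its nondominated image in the objective space: (-3, -1)
```

The optimal basis of such an LP is what supplies the initial dictionary D0 from which the parametric pivoting proceeds.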
Notation
- problem data
- the set of basic variables
- the set of boundary dictionaries
- the initial dictionary
- the set of explored pivots for the initial dictionary
- the index set of entering variables
- an initial index set of entering variables
- the set of nonbasic variables
- the recession cone of the image
- an initial efficient basic feasible solution corresponding to the initial dictionary
The indices i, j correspond to basic variable
- the inverse of the basic matrix
- the new dictionary
- the set of explored pivots for the current dictionary
- the set of all explored pivots of the new dictionary
- the image of the direction
- the set of visited dictionaries
- the basic feasible solution for the new dictionary
- a direction of recession in the decision space
- the set of efficient extreme points in the decision space
- the set of extreme directions in the decision space
- the set of all nondominated vertices in the objective space
- the set of extreme directions in the objective space
- a direction in the objective space
Parametric simplex algorithm 3

[Algorithm 1: numbered steps 0–32 omitted.]
Illustration of PSA
We consider the following MOLP adapted from Alves et al., 24 which we solved using a Matlab implementation of PSA.
The efficient extreme points found are

Efficient edges joining the three points in the decision space.
Scalarization techniques
Before presenting BOA, we first present two basic scalarization methods that play an important role in its implementation: weighted sum scalarization and translative scalarization (scalarization by a reference variable). As noted in Löhne, 5 scalarization is one of the most important techniques used in MOLP.
In the weighted sum method, a new objective function is obtained from the q linear objectives by assigning a non-negative weight to each of them and summing. The weights are usually normalized so that they sum to one.
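As a rough sketch of how the weighted sum method exposes nondominated extreme points, one can sweep normalized, strictly positive weights over a grid and collect the distinct optima. The data below is our own small example; note that a finite grid is only a heuristic and can miss parts of the frontier, which is precisely what parametric approaches avoid:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative bicriteria problem: min (-x1, -x2), x1 + x2 <= 4, x1 <= 3, x >= 0.
C = np.array([[-1.0, 0.0], [0.0, -1.0]])
A_ub = np.array([[1.0, 1.0], [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

vertices = set()
for w1 in np.linspace(0.05, 0.95, 19):      # strictly positive weights, w1 + w2 = 1
    w = np.array([w1, 1.0 - w1])
    res = linprog(c=w @ C, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * 2, method="highs")
    vertices.add(tuple(np.round(res.x, 6)))
# The sweep recovers the efficient extreme points (3, 1) and (0, 4).
```

Strict positivity of the weights matters: a zero weight can pick up points that are only weakly efficient.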
In the method of scalarization by a reference variable, the q objectives are associated with a common reference variable z, and the ith objective is constrained to be no larger than the reference variable plus a fixed real number yi, that is
The reference variable z is the objective function that has to be minimized. By setting
The dual program is
The above two scalarization techniques are fundamental for the implementation of BOA, which is discussed in the next section. 5
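The translative scalarization can be written as a single LP in the variables (x, z). The sketch below uses our own illustrative data, reference point and direction vector e = (1, 1); minimizing z shifts the reference point along e until it hits the boundary of the upper image:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative MOLP: min (-x1, -x2) s.t. x1 + x2 <= 4, x1 <= 3, x >= 0.
C = np.array([[-1.0, 0.0], [0.0, -1.0]])
A_ub = np.array([[1.0, 1.0], [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

y_ref = np.array([0.0, 0.0])     # reference point in the objective space
e = np.ones(2)                   # shift direction for the reference variable z

# Variables (x1, x2, z): minimize z subject to Cx - z*e <= y_ref and Ax <= b.
A_lp = np.vstack([np.hstack([C, -e.reshape(-1, 1)]),
                  np.hstack([A_ub, np.zeros((2, 1))])])
b_lp = np.concatenate([y_ref, b_ub])
res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_lp, b_ub=b_lp,
              bounds=[(0, None), (0, None), (None, None)], method="highs")

z_star = res.fun
boundary = y_ref + z_star * e    # a weakly nondominated boundary point: (-2, -2) here
```

This is the workhorse LP of objective space methods: each solve maps an arbitrary reference point onto the boundary of the upper image.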
Benson’s outer-approximation algorithm
This version of BOA is due to Shao and Ehrgott 19 and can be found in Löhne. 5 It works in the objective space of the problem and returns the set of all nondominated points and extreme directions. The algorithm can be regarded as a primal-dual method because it also solves the dual problem, but here we are only concerned with the solution of the primal. The algorithm first constructs an initial polyhedron Y0 (outer approximation) containing the upper image Y in the objective space, together with an interior point of Y.
Notation
- problem data
- the homogeneous dual problem
- the iteration counter
- an interior point
- the homogeneous problem
- the LP that finds the unique value δ
- a set of solutions of the dual problem
- the solution of the homogeneous dual problem
- the inequality representation of the current polytope
- the representation by vertices
- an optimal solution
- a unique value that determines the intersection or boundary point y
- the command that returns the vertices of a polytope Yk
- the set of nondominated vertices
- the set of extreme directions
Benson's outer-approximation algorithm 5

[Algorithm 2: numbered steps 0–19 omitted.]
Illustration of BOA
Consider again Problem 3 of ‘Illustration of PSA’ section. The nondominated points found by Algorithm 2 are

Nondominated edges connecting the three points in the objective space.
Selection of the MPNP
This issue has been alluded to in the introduction. To determine the MPNP, we employ the technique of Compromise Programming (CP) introduced by Zeleny 25 and compute the ideal objective point, which serves as a reference point in each case. CP is a mathematical programming method based on the notion of the distance of a most preferred solution from the ideal point.
We note here that the ideal point itself is not an element of the nondominated set
For our numerical illustration above (Problem 3 of ‘Illustration of PSA’ section), solving each of the objective functions individually over the feasible region X yields the ideal objective point
Having computed the ideal objective point
Since the relative distance of f2 from the ideal point
The following more substantial illustrative MOLP adapted from Zeleny 9 with three objectives makes the point.
Again, optimizing each of the objective functions individually over the feasible region yields the ideal objective point
For PSA, the set of nondominated points found is
Next, we measure the distances of each of these points from the ideal point
Table 1. Summary of experimental results.
We have used this method to choose the MPNP from the nondominated sets returned by PSA and BOA for comparison. The measure of the quality of solutions used is the distance to the ideal point as explained above.
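The procedure can be sketched end to end: compute the ideal point by optimizing each objective individually over the feasible region, then pick from the returned nondominated set the point nearest the ideal point. The data below is our own small example, and Euclidean distance is used (any CP metric would do):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative MOLP: min (-x1, -x2) s.t. x1 + x2 <= 4, x1 <= 3, x >= 0.
C = np.array([[-1.0, 0.0], [0.0, -1.0]])
A_ub = np.array([[1.0, 1.0], [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

# Ideal objective point: the vector of individual optima over X.
ideal = np.array([linprog(c=row, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * 2, method="highs").fun
                  for row in C])            # (-3, -4) for this data

# Nondominated points as returned by a solver (here written out by hand).
front = np.array([[-3.0, -1.0], [-2.0, -2.0], [0.0, -4.0]])

# MPNP: the nondominated point closest to the (unattainable) ideal point.
mpnp = min(front, key=lambda y: float(np.linalg.norm(y - ideal)))
```

Note that the ideal point (-3, -4) here is unattainable, as in the general case: no single feasible point achieves both individual optima.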
Experimental results
In this section, we provide numerical results to compare the computing efficiency, robustness and quality of an MPNP returned by Algorithms 1 and 2.
Table 2 shows the numerical results for a collection of 53 problems from the existing literature. Problem 1 is taken from Ehrgott, 1 and Problems 2–10 were taken from Zeleny. 26 Problems 11–21 are test problems from the interactive MOLP explorer (iMOLPe) of Alves et al. 24 Problems 22–47 are taken from Steuer. 12 Problems 48 and 53 are test problems in Bensolve-2.0 of Löhne and Weißing, 29 while Problem 52 is a test problem in Bensolve-1.2 of Löhne. 20 Finally, Problems 49–51 are obtained using a script in Bensolve-2.0 of Löhne and Weißing 29 that was used to generate Problem 53 with the same number of variables and constraints. Problem 48 has a dense constraint matrix with an identity matrix of order n as its criterion matrix, where n is the number of variables in the problem. The RHS vector is such that all the components are zeros except for a one (1) at the beginning as the only non-zero element. Problems 49–51 and 53 have dense criterion matrices with identity matrices of order n as their constraint matrices, where n is also the number of variables in the respective problem. All the elements in the RHS vectors are ones. Finally, Problem 52 is such that the constraint matrix is sparse while the criterion matrix is dense. The RHS vector is such that all the components are ones except for a 200 at the end as the largest entry.
Table 2. Comparative results for small to medium instances.
aThe image is the whole region, implying that the problem has no solution.
bOut of memory.
Both Algorithms 1 and 2 were implemented in Matlab. In all tests, m is the number of constraints, n the number of variables, q the number of objectives and NNP the number of nondominated points returned by the algorithms. All problems were executed on an Intel Core i5-2500 CPU at 3.30 GHz with 16.0 GB RAM. We recorded the CPU times (in seconds) returned by the algorithms for each problem and also acted as the DM by choosing an MPNP (whose components are as close as possible to the ideal objective point, as explained in the 'Selection of the MPNP' section) from the nondominated set
As can be seen in Table 2, the CPU times increase as the problem dimension increases. We can also infer from Tables 2 and 3 that the CPU times depend to some extent on the total number of nondominated points returned by the algorithm for a given problem. That is to say, the more nondominated points a problem has, the more computational effort is required to obtain them. We note here that most of the problems in Table 2 are non-degenerate. For these problems, PSA appears to have a computational advantage over BOA, especially for those problems with more nondominated points, as it returns only a subset of them; see Problems 20, 25, 30, 39, 40 and 46. We noticed that for those problems where both algorithms return the same number of nondominated points, there is a slight difference in CPU time in favour of PSA. We also observed that PSA returns more nondominated points for some of the problems than BOA; this is not supposed to happen, as it is meant to return a subset of these points. Some of the nondominated points returned are repeated. In terms of the quality of an MPNP returned by these algorithms, we observed in Table 2 that both algorithms returned the same MPNP for most of the problems considered. However, for the few problems where the MPNPs are not the same, BOA returned a higher quality MPNP than PSA, as illustrated in pages 15 and 16 (second numerical illustration of Problem 5). This observation may be largely due to the fact that PSA is not meant to return all nondominated points, as it returns a subset of them, whereas BOA returns all the nondominated points of the problem.
Table 3. Comparative results for large instances (NNP stands for number of nondominated points).
aThe image is the whole region, implying that the problem has no solution.
bAborted after three days of running time.
cOut of memory.
Next, we use practical size MOLP instances from Csirmaz, 30 which accompany an MOLP solver called Inner, and from MOPLIB, 31 the Multi-Objective Problem Library. These test problems were also executed on the same machine, and the results are reported in Table 3.
Problems 54–72 are from Csirmaz, 30 while Problems 73–86 are from MOPLIB. Note that Problems 54–73 are highly degenerate. Their structure is such that their constraint and criterion matrices are sparse, while all the components of the RHS vectors are zeros except for a one (1) as the only non-zero entry. In Problems 74 and 79, the constraint matrices are sparse, the criterion matrices are dense and all the elements in the RHS vectors are ones. Problems 75 and 76 have sparse constraint and criterion matrices with dense RHS vectors. Problem 78 is such that the RHS vector is dense while the constraint and criterion matrices are sparse. In Problems 80 and 82, the constraint matrices are sparse, the criterion matrices are dense and all the elements in the RHS vectors are ones. Problems 77 and 81 have dense RHS vectors, while the constraint and criterion matrices are sparse. Problems 83 and 84 are such that the constraint and criterion matrices are sparse, and all the components of the RHS vectors are zeros except for a one (1) at the beginning as the only non-zero element. Problem 85 is such that the constraint and criterion matrices are sparse while the components of the RHS vector are all zeros except for a 90 at the end as the only non-zero entry. Finally, Problem 86 is such that the constraint and criterion matrices as well as the RHS vector are all sparse. For the highly degenerate problems, it was observed that BOA is computationally superior to PSA, which confirms the report of Rudloff et al. 3 that BOA outperforms PSA on highly degenerate problems. The nondominated points returned by PSA for these problems are also of lower quality than those returned by BOA.
In terms of robustness, we noticed in Tables 2 and 3 that PSA could not solve Problems 38, 45, 52, 68 and 81. It returns an image that is the whole region, which indicates that none of the vertices in the image is nondominated, meaning that no solution is returned; this makes BOA the more robust method. However, we also observed in Table 3 that BOA could not produce results for some of the test problems despite the long running time allowed (three days); it was aborted. The fact that some problems were aborted after three days of running time does not necessarily mean that the algorithms cannot solve these problems; if allowed to run further, they could potentially return a huge number of nondominated points, or run out of memory, which would indicate that the total number of nondominated points has exceeded the Matlab storage capacity on the machine used.
For those problems in Table 3 that BOA did solve, it was also observed that the MPNPs returned are of higher quality than those returned by PSA. However, for the non-degenerate problems, PSA was found to be computationally superior to BOA.
Summary of results
In this section, we present in Table 1 the summary of the experimental results discussed in the previous section. We have also presented in Figure 3 the CPU times of BOA and PSA for the 70 out of 86 instances (81.40%) solved by both methods.

Running time of PSA and BOA for the 70 instances solved.
Conclusion
We have reviewed the existing literature on the parametric simplex and Benson's outer-approximation algorithms. We have also presented these algorithms and explained and illustrated them on a small MOLP instance. Crucially, we have explained how MPNPs are obtained. A detailed computational study comparing the efficiency, robustness and quality of MPNPs is provided. The CPU times and the quality of the MPNPs returned by these algorithms for a collection of 86 existing MOLP problems, ranging from small to medium and practical size instances, are reported. It was observed that BOA is superior to PSA in terms of robustness and the quality of the MPNPs it returns, and it is also computationally more efficient than PSA on highly degenerate problems. The measure of quality used is the distance to the ideal point, as explained in the 'Experimental results' section. However, PSA outperforms BOA on non-degenerate problems.
Footnotes
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The research work leading to this paper and its publication were partially financially supported by the UK ESRC grant ES/L011859/1.
