A Streamlined Artificial Variable Free Version of Simplex Method

Abstract

This paper proposes a streamlined form of the simplex method which offers several benefits over the traditional simplex method. For instance, it does not need any kind of artificial variables or artificial constraints, and it can start with any feasible or infeasible basis of an LP. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. Later in this paper, a dual version of the new method is also presented, which provides a way to easily implement phase 1 of the traditional dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our methods give the user full freedom to start with either the primal or the dual artificial-free version without making any reformulation of the LP structure. Last but not least, the method provides a teaching aid for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement.

Introduction

Linear programming has been an indispensable area in the progress of the computational sciences [1]. Since 1947, after World War II, linear programming gradually arose and with the passage of time has become popular among people of various fields. It is by far the most widely used optimization model, and its impact on economic and government modeling is remarkable. Today it is the center of attention of many mathematicians, economists and decision scientists. Linear programming is the optimization of an outcome based on a set of linear constraints using a linear mathematical model. The origin of algorithms for solving a given system of linear inequalities goes back to the 19th century, when they were first studied by Fourier [2]. Later, several mathematicians such as Dines and Motzkin rediscovered these algorithms [3][4].

During the course of the Second World War, Dantzig formulated the largest-coefficient simplex method for solving a given linear program, which he presented at a conference in 1951 [5]. Later, it was also published in his famous book "Linear Programming and Extensions" [6]. Since then, it has become the preferred method of LP practitioners because of its efficiency [7]. Hence, it is now the most useful tool to teach and solve practical linear programming problems.

Considering its vast applicability in various fields, teaching LP has become an important part of undergraduate and graduate courses. Because of this, many researchers are now focused on designing algorithms which are more efficient and more easily implementable for classroom teaching.

Originally, the simplex method developed by Dantzig was confined only to LPs having a known feasible solution, commonly referred to as an initial basic feasible solution. For LPs having no initial basic feasible solution, almost all current variants of the simplex method are applied in two phases [8][9], called phase 1 and phase 2. In phase 1, we create a basic feasible solution by adding some (non-negative) artificial variables to the problem together with an additional (auxiliary) objective, namely minimization of the sum of all the artificial variables, called the phase 1 objective. The purpose of the phase 1 process is to maintain feasibility and to minimize the sum of the artificial variables (or, in a simpler sense, the sum of the infeasibilities) as much as possible. If phase 1 ends with an objective value equal to zero, all artificial variables have attained the value zero (meaning all infeasibilities have been removed) and the current basis is feasible for the original problem; we then return to the original objective and proceed with simplex phase 2. Otherwise, we conclude that the problem has no feasible solution.

For larger LPs, implementation of the traditional two-phase simplex method significantly increases the number of variables, the number of iterations, and thus the complexity as well. From the point of view of classroom teaching, it often becomes a tedious job.

Since the simplex method is so far still practically the best known pivot algorithm for solving LPs [10], there is a need for more general LP solution methods in which one may start from an arbitrary initial basic solution (not necessarily a feasible one).

Arsham [11][12][13][27] presented an artificial-free algorithm named the push-and-pull algorithm, which initiates with an incomplete basic variable set (BVS). As the algorithm proceeds, variables are brought into the basis. The push phase continues until the basic variable set is complete; this phase may terminate yielding an infeasible BVS. The algorithm then proceeds with the pull phase, which pulls the solution back to feasibility by incorporating pivot rules similar to those of the dual simplex method. Arsham has claimed that the push-and-pull algorithm is artificial-free; his claim would be correct if we were only concerned with artificial variables, but his method requires adding artificial constraints, so it is not truly an artificial-free method.

Papparrizos [14] presented another artificial-variable-free method, but his method also uses an additional artificial constraint with a big-M number. The method requires a tedious evaluation of a series of additional objective functions besides the original objective function. Moreover, at each iteration this algorithm must check both primal and dual feasibility [12].

Arsham [15][16] proposed another algorithm for general LP models in which he claimed that his algorithm will either provide a feasible solution or declare infeasibility after a finite number of iterations. Enge and Huhn [17] presented a counterexample in which Arsham's algorithm declares a feasible problem inconsistent. Here we present another consistent problem which Arsham's method declares inconsistent.

Some other artificial-free algorithms have been presented [18][19][20][21][22], but all these methods are based on minimum-angle approaches, which differ from the simplex method.

Hence, all in all, Dantzig's simplex method is better than its successor variants in terms of efficiency and as elementary textbook material as well. But from a teacher's point of view, a difficulty with the simplex method is that one cannot learn the simplex method for feasibility (simplex phase 1) before learning the simplex method for optimality (simplex phase 2). So students usually learn "phase 2" before "phase 1", which of course sounds odd to both teachers and students.

Here, instead of developing a new method for general linear programs, we present a better version of the simplex method which not only hides the unnecessary elements of the simplex table but also saves many computations.

Furthermore, the dual version of the simplex method is the most effective and efficient way to achieve optimality for a dual feasible problem [23]. So in this paper we also present a dual version of our method, which is in fact an artificial-constraint-free version of the dual simplex method.

Advantages of the New Approach

There are several advantages of the new method. For instance, it can start with any feasible or infeasible basis of an LP. In fact, it can also be very useful for solving integer programming problems. The method deals with the artificial variables in an implicit manner: as long as a basic variable is infeasible, its corresponding slack variable remains invisible, but after it leaves the basis it is implicitly replaced by the corresponding invisible slack variable. In this method the artificial variables are virtually present, but their presence is not revealed to the user in the form of extra columns in the simplex table. The method follows the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables, which also makes it space efficient. For this reason we have named our method 'A streamlined artificial variable free version of simplex method'. Last but not least, it is a fruitful tool for teachers who want to teach feasibility achievement as a separate topic before teaching optimality achievement, because the working rule of this method can easily be demonstrated to students with little or even no prior knowledge of the simplex method for optimality (phase 2). So this method changes the learning sequence by excluding the need to teach phase 2 before phase 1.

Moreover, as mentioned above, the dual simplex method is very efficient and effective for solving a dual problem, but in the case of a dual infeasible problem the dual simplex method requires the addition of artificial constraints, which makes things difficult and cumbersome in practice [24]. In contrast, the dual version of our method does not require artificial constraints. This makes our methods more efficient and effective not only for attaining feasibility but also for attaining optimality. Our streamlined versions of the simplex and dual simplex methods provide a way to easily implement phase 1 of either the simplex or the dual simplex method. For a problem having an initial basis which is both primal and dual infeasible, our primal and dual versions provide full freedom to the user: they can directly initiate either the streamlined primal simplex or the streamlined dual simplex method without making any reformulation of the LP structure.

Some Basic Terminologies and Notations

A general LP problem could be considered as (1), where x is the decision variable vector and A is an m × n coefficient matrix.

For a given basis B such that A_B is invertible, and non-basis N := {1,⋯,n}\B, we get

(2)  x_B = A_B^{-1} b − A_B^{-1} A_N x_N.

By setting x_N = 0, a solution is obtainable. Such a vector x is called the basic solution of system (2), or of system (1), for the basis B.

Now, by substituting the value of x_B from equation (2), the objective of system (1) can be reformulated as

(3)  z = c_B^T A_B^{-1} b + (c_N^T − c_B^T A_B^{-1} A_N) x_N.

The problem data could be arranged in the following table, denoted by D(B), known as the short simplex table [25].

The dual of system (3) could also be written in the following form, (4). Here the dual basic variables are y_N and the non-basic variables are y_B.

For a given basis B, the problem data could be read from the short table as follows.

The current primal/dual objective value is: d_{00}.

The primal solution is: x(B) = (x_B, x_N) where x_B = d_{B0}, x_N = 0.

The dual solution is: y(B) = (y_B, y_N) where y_N = d_{0N}, y_B = 0.

A basis B is called primal feasible if x_B ≥ 0, in which case one may conclude that the primal problem has a feasible solution x(B); if y_N ≥ 0 then B is called dual feasible, and one may conclude that the dual problem has a feasible solution y(B). Also, if B is both primal and dual feasible then B is optimal. Usually d_{B0} is referred to as the certificate column of primal feasibility, and d_{0N} as the certificate row of dual feasibility, see Fig. 1.

Fig 1. A short simplex table structure showing optimality of current basis.

https://doi.org/10.1371/journal.pone.0116156.g001

A basis B is called primal infeasible if there exists i ∈ B such that d_{i0} < 0, and dual infeasible if there exists j ∈ N such that d_{0j} < 0.

A short table can also be used to reveal inconsistency and unboundedness of an LP. A problem is said to be (primal) inconsistent if there exists a basis B having a member k such that d_{k0} < 0 and d_{kN} ≥ 0, see Fig. 2. A problem is said to be dual inconsistent if there exists a non-basis N having a member l such that d_{0l} < 0 and d_{Bl} ≤ 0, see Fig. 3.

Fig 2. A short simplex table structure showing inconsistency of primal problem.

https://doi.org/10.1371/journal.pone.0116156.g002

Fig 3. A short simplex table structure showing inconsistency of dual problem.

https://doi.org/10.1371/journal.pone.0116156.g003

Primal unboundedness can be deduced if a basis is primal feasible but reveals dual inconsistency; and the dual problem is unbounded if a basis is dual feasible but reveals primal inconsistency.
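The certificates above translate directly into simple checks on the table entries. The following is a minimal sketch (our own illustration, not part of the original article) of how they can be read off a short table stored as a NumPy array D with D[i, 0] = d_{i0} and D[0, j] = d_{0j}; the function name and tolerance handling are assumptions of this sketch.

```python
import numpy as np

def classify_short_table(D, tol=1e-9):
    """Read the feasibility and inconsistency certificates off a short simplex
    table D of shape (1+m, 1+n), where D[i, 0] = d_{i0} and D[0, j] = d_{0j}."""
    m, n = D.shape[0] - 1, D.shape[1] - 1
    primal_feasible = bool(np.all(D[1:, 0] >= -tol))      # certificate column d_{B0} >= 0
    dual_feasible = bool(np.all(D[0, 1:] >= -tol))        # certificate row d_{0N} >= 0
    # a row k with d_{k0} < 0 and d_{kN} >= 0 certifies primal inconsistency
    primal_inconsistent = any(D[i, 0] < -tol and np.all(D[i, 1:] >= -tol)
                              for i in range(1, m + 1))
    # a column l with d_{0l} < 0 and d_{Bl} <= 0 certifies dual inconsistency
    dual_inconsistent = any(D[0, j] < -tol and np.all(D[1:, j] <= tol)
                            for j in range(1, n + 1))
    return {
        "primal_feasible": primal_feasible,
        "dual_feasible": dual_feasible,
        "optimal": primal_feasible and dual_feasible,
        "primal_inconsistent": primal_inconsistent,
        "dual_inconsistent": dual_inconsistent,
        "primal_unbounded": primal_feasible and dual_inconsistent,
        "dual_unbounded": dual_feasible and primal_inconsistent,
    }
```

A table that is both primal and dual feasible is reported as optimal, matching the statement above.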

Pivot operation

To move through different bases of an LP one may need to apply pivot operations. A single pivot operation may be used to obtain the short table of an adjacent basic solution.

For r ∈ B, k ∈ N and (r, k) being the position of the pivot element d_{rk} (≠ 0) of D, one can obtain an updated equivalent short table D(B') with the new basis B' := (B ⋃ {k})\{r} and the new non-basis N' := (N ⋃ {r})\{k} by performing the following operations on D(B): d'_{kr} = 1/d_{rk}; d'_{kj} = d_{rj}/d_{rk} for j ∈ (N' ⋃ {0})\{r}; d'_{ir} = −d_{ik}/d_{rk} for i ∈ (B' ⋃ {0})\{k}; and d'_{ij} = d_{ij} − d_{ik}d_{rj}/d_{rk} for all remaining entries. The above replacement is known as a pivot operation on (r, k). A degenerate variable is a basic variable with value zero. A pivot operation or pivot element is said to be primal (dual) degenerate if the corresponding primal (dual) basic variable is degenerate. A degenerate pivot is said to be positive (negative) degenerate if the sign of the pivot element is positive (negative).

Short table before pivot (r, k)

Short table after pivot (r, k)
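For concreteness, the pivot operation can be sketched as follows (our own illustration, assuming the row convention x_i = d_{i0} − Σ_j d_{ij} x_j used in the reconstruction above); the objective row and the right-hand-side column are transformed together with the rest of the table.

```python
import numpy as np

def pivot(D, r, k):
    """One pivot on position (r, k) of a short table D (row 0 = objective row,
    column 0 = right-hand-side column).  Returns the updated table as a new
    array; D itself is left untouched."""
    piv = D[r, k]
    if abs(piv) < 1e-12:
        raise ValueError("the pivot element must be non-zero")
    row_r, col_k = D[r, :].copy(), D[:, k].copy()
    E = D - np.outer(col_k, row_r) / piv      # d'_{ij} = d_{ij} - d_{ik} d_{rj} / d_{rk}
    E[r, :] = row_r / piv                     # new row r:    d'_{kj} = d_{rj} / d_{rk}
    E[:, k] = -col_k / piv                    # new column k: d'_{ir} = -d_{ik} / d_{rk}
    E[r, k] = 1.0 / piv                       # d'_{kr} = 1 / d_{rk}
    return E
```

The bookkeeping of which label sits on row r and column k (the swap B' := (B ⋃ {k})\{r}, N' := (N ⋃ {r})\{k}) is kept outside this function.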

Deduction of art-free auxiliary form of an LP

The term auxiliary form is commonly used in the context of the simplex method for a special-purpose linear program constructed by incorporating a sufficient number of artificial variables into the system in order to develop a pseudo-feasible basis, and then appending the objective of minimizing the sum of all artificial variables so as to reach a basis feasible to the original system. Here, in this section, we develop a variant of the auxiliary form of the LP, namely "the art-free form".

Consider the following system of inequalities:

(5)  Ax ≤ b,  x ≥ 0.

By adding the slack vector x_S we can get an equivalent equality form of the above system,

(6)  A x_O + x_S = b,  x_O ≥ 0,  x_S ≥ 0,

where x_O denotes the remaining (original) variables other than the slack variables. Here we are allowing negative values to reside in the basis, so in this regard we can take B := S and N := O.

(7)  x_B = b − A x_N,  x_B ≥ 0,  x_N ≥ 0.

Here B may not constitute a feasible basis (because of some negative values in b).

We can decompose x_B into the difference of two non-negative variables x_B^+ and x_B^−. Here the value of x_B^− shows the infeasibility of the current basis B (note: x_B^+ and x_B^− work just like the slack and artificial variables in the usual simplex method). According to the logic of the simplex method, to reach a feasible basis our aim should be to make x_B^− equal to zero while keeping the feasibility of all other variables preserved.

(8)  x_B^+ − x_B^− = b − A x_N,  x_B^+ ≥ 0,  x_B^− ≥ 0,  x_N ≥ 0.

Clearly, a feasible solution of system (8) is also feasible to system (7) if x_B^− = 0. An initial set of basic feasible variables for system (8) is constructible as a subset of {x_B^+, x_B^−} via the following paradigm.

Paradigm.

“If b_i < 0, let x_{p+i}^− be the corresponding basic variable, and if b_i ≥ 0, let x_{p+i}^+ be the corresponding basic variable.”

Let the initial basic variable set developed by the above paradigm be {x_L^−, x_{L′}^+}, where L := {p+i : b_i < 0} ⊆ B and L′ := {p+i : b_i ≥ 0} ⊆ B; clearly L′ = B\L. Recall that the non-basic variables x_L^+ would increase the extent of infeasibility if they became basic, so to restrict their entrance into the basis we eliminate them from system (8). Now, if L is empty then the basis B is feasible to system (7) and hence feasible to system (5); otherwise, to obtain a feasible basis we embed an additional objective function (we may also call it the infeasibility objective function), w := Σ_{j∈L} x_j^−, to be minimized. Here the non-negativity along with the new objective forces x_L^− to zero. From system (8) one may obtain x_L^− = A_{L−p} x_N − b_{L−p}, so the infeasibility objective function equivalently becomes w = Σ_{i∈L−p} (A_i x_N − b_i). The transformed system is,

(9)  min w = Σ_{i∈L−p} (A_i x_N − b_i)  subject to  x_L^− = A_{L−p} x_N − b_{L−p},  x_{L′}^+ = b_{L′−p} − A_{L′−p} x_N,  x_N ≥ 0,  x_L^− ≥ 0,  x_{L′}^+ ≥ 0.

Note: Throughout this paper, we overload the addition '+' and subtraction '−' operations for affine translations of a set Ω of real numbers by a number a as Ω + a := {x + a : x ∈ Ω} and Ω − a := {x − a : x ∈ Ω}.

By using the paradigm (described above in this section), we may construct the following augmented matrix, showing the basic variable corresponding to each row.

Here the top row (the w-row) represents the infeasibility objective function.

Constructing a short table of the above matrix, we have b_{L−p} < 0 and b_{L′−p} ≥ 0. The procedure to find the entering basic variable is similar to the phase 1 simplex method, i.e., we may identify the entering basic variable by seeking the most negative element in the w-row (excluding the first element, which is actually equal to the sum of all infeasibilities). After determining the entering basic variable, the leaving variable is determined by the minimum-ratio test; but the criterion of the minimum ratio test is consequently different for the rows of the positive basic variables x_{L′}^+ and the negative basic variables x_L^−.

New form of Minimum ratio test

Here the ratio test scheme is different for the positive (x_{L′}^+) and negative (x_L^−) variables. Ratios corresponding to variables in x_{L′}^+ are obtained by dividing b_{L′−p} by the corresponding element in the pivot column (only if the corresponding element in the pivot column is positive), whereas ratios corresponding to variables in x_L^− are obtained by dividing b_{L−p} by the corresponding element in the pivot column (only if the corresponding element in the pivot column is negative). The element in the pivot column corresponding to the minimum ratio is the pivot element; hence the leaving basic variable is the variable corresponding to that pivot element.

If the leaving basic variable, say x_i^−, belongs to x_L^−, then set L := L\{i} for the next pivoting table and perform the pivot operation. Now, in the new pivoting table, proceed with the same procedure of entering and leaving basic variables until L becomes empty.
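A sketch of this modified ratio test (ours, under the same assumed table conventions as the pivot sketch above): rows of feasible basic variables compete with positive pivot-column entries and rows of infeasible basic variables with negative ones, so all candidate ratios are non-negative and the smallest one identifies the pivot row.

```python
def new_ratio_test(D, k, L, tol=1e-9):
    """Leaving-row selection for the art-free ratio test.  D is the reduced
    short table (a NumPy array), k the entering column and L the set of rows
    whose basic variables are currently infeasible (d_{i0} < 0)."""
    best_ratio, best_row = None, None
    for i in range(1, D.shape[0]):
        d = D[i, k]
        # infeasible rows compete with negative entries, feasible rows with positive ones
        if (i in L and d < -tol) or (i not in L and d > tol):
            ratio = D[i, 0] / d               # non-negative in both cases
            if best_row is None or ratio < best_ratio:
                best_ratio, best_row = ratio, i
    return best_row                           # row of the pivot element
```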

Reduced short table

The short table size can be reduced further by considering the fact that the column of x_i^− is the conjugate of the hidden basic column of x_i, except that the coefficient of x_i^− in the infeasibility objective is '1' rather than '0'.

One can observe that, for i ∈ L, after the pivot operation the columns of x_i and x_i^− are identical, with the exception that the infeasibility objective coefficient of x_i is greater than the infeasibility objective coefficient of x_i^− by '1', that is, w_i := w_i + 1. So we can also hide the column of x_i^−, keeping in mind that for every leaving basic variable x_i^−, after the pivot operation we must replace the newly obtained non-basic column of x_i^− by that of x_i.

Let the elements of the reduced short table be denoted by d_{ij}, where i ∈ B ⋃ {0} and j ∈ N ⋃ {0}.

Note: Recall that x_i = x_i^+ − x_i^−; it is clear that if the leaving basic variable is x_i^−, then it can be replaced by x_i after the pivot in the short table.

Problem 1.

Given a reduced short table D(B), obtain primal feasibility.

Algorithm: Art-free Simplex Method (ASM).

  • Step 1: Let L be a maximal subset of B such that L = {i : d_{i0} < 0, i ∈ B}. If L = φ then D(B) is primal feasible. Exit.
  • Step 2: Denote the basic variables x_L by x_L^− and compute the infeasibility objective vector w such that w_j = Σ_{i∈L} d_{ij}, j ∈ N. Place w at the top of the reduced short table D(B).
  • Step 3: Let K ⊆ N such that K = {j : w_j < 0, j ∈ N}. If K = φ then D(B) is primal inconsistent. Exit.
  • Step 4: Choose k ∈ K such that w_k ≤ w_h, ∀ h ∈ K
    (Ties are broken arbitrarily)
  • Step 5: Choose r_1 ∈ L and r_2 ∈ B\L such that
    d_{r_1 0}/d_{r_1 k} = min{ d_{i0}/d_{ik} : d_{ik} < 0, i ∈ L } and d_{r_2 0}/d_{r_2 k} = min{ d_{i0}/d_{ik} : d_{ik} > 0, i ∈ B\L }.
    Set r := r_1 if d_{r_1 0}/d_{r_1 k} ≤ d_{r_2 0}/d_{r_2 k} or if r_2 does not exist; otherwise set r := r_2.
  • Step 6: Make a pivot on (r, k) (⇒ Set B := (B ⋃ {k})\{r}, N := (N ⋃ {r})\{k} and update D(B) along with the appended w(B)).
  • Step 7: If r ∈ L, set L := L\{r} and w_r := w_r + 1, and replace the notation x_r^− by x_r.
  • Step 8: If L = φ then D(B) is primal feasible. Exit.
    Otherwise, go to Step 3.
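Putting the pieces together, the following is a minimal sketch of ASM (again our own illustration, not the authors' code) operating on a NumPy short table D with D[i, 0] = d_{i0}, D[0, j] = d_{0j} and rows read as x_i = d_{i0} − Σ_j d_{ij} x_j. For simplicity the w-row is recomputed from the currently infeasible rows at every iteration, which is mathematically equivalent to carrying it along and applying the cheaper update w_r := w_r + 1 of Step 7; no anti-cycling rule is included, matching the non-degeneracy assumption of the correctness argument below.

```python
import numpy as np

def asm_phase1(D, basic, nonbasic, tol=1e-9):
    """Sketch of the art-free simplex method (ASM) for feasibility on a short
    table D of shape (1+m, 1+n): D[i, 0] = d_{i0}, D[0, j] = d_{0j}.
    `basic` and `nonbasic` hold the variable labels of rows 1..m and columns 1..n."""
    D = np.asarray(D, dtype=float)
    while True:
        L = [i for i in range(1, D.shape[0]) if D[i, 0] < -tol]   # Step 1: infeasible rows
        if not L:
            return "primal feasible", D, basic, nonbasic
        w = D[L, 1:].sum(axis=0)              # Step 2: w_j = sum_{i in L} d_{ij}
        if np.all(w >= -tol):                 # Step 3: no improving column left
            return "primal inconsistent", D, basic, nonbasic
        k = 1 + int(np.argmin(w))             # Step 4: most negative w_j enters
        r, best = None, None                  # Step 5: art-free minimum ratio test
        for i in range(1, D.shape[0]):
            d = D[i, k]
            if (i in L and d < -tol) or (i not in L and d > tol):
                ratio = D[i, 0] / d
                if best is None or ratio < best:
                    r, best = i, ratio
        piv, row_r, col_k = D[r, k], D[r, :].copy(), D[:, k].copy()
        D = D - np.outer(col_k, row_r) / piv  # Step 6: pivot on (r, k)
        D[r, :] = row_r / piv
        D[:, k] = -col_k / piv
        D[r, k] = 1.0 / piv
        basic[r - 1], nonbasic[k - 1] = nonbasic[k - 1], basic[r - 1]
        # Step 7: instead of the cheap update w_r := w_r + 1, w and L are simply
        # recomputed from the rows that are still infeasible at the next pass.
```

On the data of Example 1 below, one would build D from the four slack rows and three non-basic columns and call asm_phase1(D, ['x4','x5','x6','x7'], ['x1','x2','x3']).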

Proof of Correctness.

As stated earlier, the objective of simplex phase 1 is to minimize the sum of all artificial variables. In an analogous sense, as shown in the last section, the infeasibility objective w is to minimize the sum Σ_{j∈L} x_j^− of the negative parts of the infeasible basic variables. Just like the simplex method, our method's ratio test, by taking the minimum of all ratios, preserves the feasibility of the existing feasible variables. Hence in the streamlined art-free simplex method the number of negative basic variables is steadily decreasing. The overall structure of pivoting forces w_0 (the first entry of the w-row) to be strictly increasing for non-degenerate pivots throughout the iterations and constant for degenerate pivots. So the finiteness of the total number of bases of every LP problem proves the finiteness of our method for completely non-degenerate LP problems.

Example 1.

Obtain a feasible basis of the following system of inequalities using ASM.

By adding non-negative slack variables x4, x5, x6, and x7 we can construct the associated reduced short table along with row vector w (sum of rows of infeasible basic variables) of the above problem as

Initial table:

Clearly, current basic solution is infeasible.

Here L = {4,5,6,7}; replace x_L by x_L^−.

Here B = {4,5,6,7} and N = {1,2,3}. According to the most negative coefficient rule k = 2, so the entering basic variable is x_2, and according to the new minimum ratio test r = 6, so the leaving basic variable is x_6^−. Perform the pivot operation on (6,2).

Replace x_6^− by x_6; L = {4,5,6,7}\{6} = {4,5,7}, w_6 := w_6 + 1.

It can be seen that now a single infeasibility is removed.

Iteration 2:

Here, k = 1 and r = 5 perform pivot operation on (5,1).

Since r ∈ L, replace x_5^− by x_5; L = {4,5,7}\{5} = {4,7}, w_5 := w_5 + 1.

Iteration 3:

Here, k = 6 and r = 7 perform pivot operation on (7,6).

Since r ∈ L, replace x_7^− by x_7; L = {4,7}\{7} = {4}, w_7 := w_7 + 1.

Iteration 4:

Here, k = 3 and r = 1 perform pivot operation on (1,3).

Iteration 5:

Here, k = 7 and r = 3 perform pivot operation on (3,7).

Iteration 6:

Here, k = 5 and r = 4 perform pivot operation on(4,5).

Since r ∈ L, replace x_4^− by x_4; L = {4}\{4} = φ, w_4 := w_4 + 1.

Now all the infeasibilities have been removed. Primal feasibility is achieved; the feasible solution is (x1, x2, x3) = (0,6,0).

For further illustration, see Fig. 4 for an iteration-by-iteration comparison between the art-free simplex method (ASM) and simplex method (SM) phase 1 on the above example.

Fig 4. Iteration by iteration comparison between ASM and Simplex method for example 1.

https://doi.org/10.1371/journal.pone.0116156.g004

Remark:

The number of iterations and the pivoting sequence of the art-free simplex method and simplex phase 1 are the same, but it can easily be seen that the number of computations is noticeably reduced. For more details, see the computational results section.

Example 2.

Show that the following system of inequalities is inconsistent by using Art-free simplex method and usual simplex method.

Initial table

Iteration 1

Iteration 2


Iteration 3

The above table shows that the algorithm terminated unsuccessfully. Hence the problem is primal inconsistent. See Fig. 5 for comparison with simplex method.

Fig 5. Iteration by iteration comparison between ASM and Simplex method for example 2.

https://doi.org/10.1371/journal.pone.0116156.g005

Dual Version of Art-Free Simplex Method

The symmetric relationship between primal and dual linear programs always enables one to construct a dual version of every method developed for solving the primal problem. The most popular example of this characteristic is the dual simplex method, which is known for its applications in sensitivity analysis, integer programming, and branch-and-bound/cut/price algorithms. Unfortunately, however, phase 1 of the dual simplex method has seen little use in the practice of solving dual LPs [24].

In phase 1 of the dual simplex method, to obtain a dual feasible basis of a given LP problem an additional dual slack variable is introduced, which is then assigned an adequately large cost coefficient M in the dual objective function. This is analogous to adding a constraint to the primal problem. The method also requires the addition of artificial variables to transform the LP into standard form. However, this approach becomes tedious as the size of the problem increases considerably. Another reason for not using this approach in practice is its dependence on the value of M: a value too high might result in numerical problems, whereas a value too small might fail to produce a primal feasible solution [24].

In this section we develop the dual version of the art-free simplex method; in fact, the development of this pivot algorithm is a straightforward task. Without going into geometric or algebraic complications, we may construct a dual version of the method by keeping in mind that the aim is to solve the dual feasibility problem through the primal short table. Consider the following LP problem, (10)

By adding non-negative slack variables x4, x5, x6 and x7, the associated short table of the above problem can be constructed as shown below. Here, the dual variables y1, y2, y3, y4, y5, y6 and y7 have been displayed explicitly, as it is required to observe the dual variables too.

Here y is the dual objective variable. The objective coefficients (z-coefficients) of the primal non-basic variables are the values of the dual basic variables, and the values of the primal basic variables are the coefficients of the dual non-basic variables in the dual objective.

That is, the primal and dual solutions associated with the above short table are as displayed, with mutual objective values z = −y = 0.

Clearly, both the current primal and the current dual basic solutions are infeasible.

As demonstrated earlier in this manuscript, to achieve optimality one may either first achieve primal feasibility and then pursue optimality through the primal simplex method, or first achieve dual feasibility and then pursue optimality through the dual simplex method. If we measure the degree of infeasibility by the number of infeasible variables, one can see that the above problem is more primal infeasible than dual infeasible. So, from a general perspective, achieving primal feasibility might be a more difficult task here than achieving dual feasibility. For system (10), to attain dual feasibility we may either solve it using the art-free simplex method on its reduced dual short table (see Fig. 6) or using the dual art-free simplex method on its reduced primal short table (see Fig. 7).

Fig 6. Reduced dual short table for system (10) along with the infeasibility objective w.

https://doi.org/10.1371/journal.pone.0116156.g006

Fig 7. Reduced primal short table for system (10) along with the dual infeasibility objective w’.

https://doi.org/10.1371/journal.pone.0116156.g007

Problem 2

Given a short table D(B), obtain dual feasibility.

Algorithm: Dual Art-free Simplex Method (DASM)

  • Step 1: Let K be a maximal subset of N such that K = {j : d_{0j} < 0, j ∈ N}. If K = φ then D(B) is dual feasible. Exit.
  • Step 2: Denote the dual basic variables y_K by y_K^− and compute the dual infeasibility objective vector w′ such that w′_i = −Σ_{j∈K} d_{ij}, i ∈ B. Append w′ to the right of the dictionary D(B).
  • Step 3: Let L ⊆ B such that L = {i : w′_i < 0, i ∈ B}. If L = φ then D(B) is dual inconsistent. Exit.
  • Step 4: Choose r ∈ L such that w′_r ≤ w′_i, ∀ i ∈ L
    (Ties are broken arbitrarily)
  • Step 5: Choose k_1 ∈ K and k_2 ∈ N\K such that
    −d_{0k_1}/d_{rk_1} = min{ −d_{0j}/d_{rj} : d_{rj} > 0, j ∈ K } and −d_{0k_2}/d_{rk_2} = min{ −d_{0j}/d_{rj} : d_{rj} < 0, j ∈ N\K }.
    Set k := k_1 if −d_{0k_1}/d_{rk_1} ≤ −d_{0k_2}/d_{rk_2} or if k_2 does not exist; otherwise set k := k_2.
  • Step 6: Make a pivot on (r, k) (⇒ Set B := (B ⋃ {k})\{r}, N := (N ⋃ {r})\{k} and update D(B) along with the appended w’(B)).
  • Step 7: If k ∈ K, set K := K\{k} and w′_k := w′_k + 1, and replace the notation y_k^− by y_k.
  • Step 8: If K = φ then D(B) is dual feasible. Exit.
    Otherwise, go to Step 3.
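Under the same assumed table conventions as before, the dual dictionary reads y_j = d_{0j} + Σ_{i∈B} d_{ij} y_i, and DASM becomes the column-wise mirror image of the ASM sketch given earlier. The code below is our own hedged illustration of that symmetry; the sign pattern of the dual ratio test follows from the assumed conventions and should be adapted if a different orientation of the short table is used.

```python
import numpy as np

def dasm_phase1(D, basic, nonbasic, tol=1e-9):
    """Column-wise mirror of asm_phase1: drive the negative entries of the top
    row (the infeasible dual basic variables y_j, j in K) to zero."""
    D = np.asarray(D, dtype=float)
    while True:
        K = [j for j in range(1, D.shape[1]) if D[0, j] < -tol]   # infeasible dual basics
        if not K:
            return "dual feasible", D, basic, nonbasic
        w_dual = -D[1:, K].sum(axis=1)        # w'_i = -sum_{j in K} d_{ij}
        if np.all(w_dual >= -tol):
            return "dual inconsistent", D, basic, nonbasic
        r = 1 + int(np.argmin(w_dual))        # row of the entering dual variable y_r
        k, best = None, None                  # dual art-free ratio test along row r
        for j in range(1, D.shape[1]):
            d = D[r, j]
            if (j in K and d > tol) or (j not in K and d < -tol):
                ratio = -D[0, j] / d          # non-negative under the assumed signs
                if best is None or ratio < best:
                    k, best = j, ratio
        piv, row_r, col_k = D[r, k], D[r, :].copy(), D[:, k].copy()
        D = D - np.outer(col_k, row_r) / piv  # same pivot update as in the primal case
        D[r, :] = row_r / piv
        D[:, k] = -col_k / piv
        D[r, k] = 1.0 / piv
        basic[r - 1], nonbasic[k - 1] = nonbasic[k - 1], basic[r - 1]
```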

Example 3

Obtain a complementary dual feasible basis for the associated LP of the following short table by using Dual Art-free simplex method.

Here K = {1,3}; replace y_K by y_K^−.

Iteration 1:

Here B = {4,5,6,7} and N = {1,2,3}. According to the most negative dual coefficient rule k = 2, so the leaving dual basic variable is y_2, and according to the art-free dual minimum ratio test r = 7, so the entering dual basic variable is y_7. Perform the pivot operation on (7,2).

Iteration 2:

Here, k = 1 and r = 4 perform pivot operation on (4,1).

Since k = 1 ∈ K, replace y_1^− by y_1.

Iteration 3:

Here, k = 3 and r = 1 perform pivot operation on (1,3).

Since k = 3 ∈ K, replace y_3^− by y_3. Dual feasibility is achieved; the complementary dual feasible solution can be read from the final table.

Example 4

Show that the following problem is dual inconsistent by using dual art-free simplex method.

By adding non-negative slack variables x4, x5, x6, x7 and x8 we can construct the associated dictionary along with column vector w' (negative sum of columns of infeasible dual basic variables) of the above problem.

Initial table:

Here y1, y2, y3, y4, y5, y6, y7 and y8 are dual variables.

Here K = {1,2,3}; replace y_K by y_K^−.

Iteration 1:

Here, k = 3 and r = 8 perform pivot operation on (8,3).

Since k = 3 ∈ K, replace y_3^− by y_3.

Iteration 2:

Here, k = 8 and r = 5 perform pivot operation on (5,8)

The algorithm has terminated unsuccessfully. Here, the last table shows that the problem is dual inconsistent.

Applications

Since the proposed method ASM is an efficient alternative to simplex phase 1 and to the dual simplex method, it can effectively be incorporated into the solution process of linear programming problems, integer programming problems, sensitivity analysis, parametric programming, etc. It can become an essential tool that can be directly employed by researchers in solving various problems of diverse fields, for example: biological sciences [29][31], medical sciences [28][30], biochemical sciences [32], and mechanical engineering [33].

Computational Results

In this section, Tables 1 and 2 present a comparison of the computational results of our algorithm with the simplex method. Using the random models suggested by Kaluzny [26], we generated 250 consistent linear programs and 250 inconsistent linear programs with the coefficients c_j, b_i and a_{ij} chosen randomly from the integer interval [−50, 50]. The results for both consistent and inconsistent problems show that our algorithm (ASM) and the simplex method (SM) take the same number of iterations.
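As an illustration of this kind of experiment (a simplified stand-in, not Kaluzny's exact random models), one can generate feasibility systems with integer data drawn from [−50, 50] and hand them to the asm_phase1 sketch given earlier; all function names here are our own.

```python
import numpy as np

def random_instance(m, n, rng):
    """A random system Ax <= b, x >= 0 with integer data in [-50, 50], returned
    as an initial short table whose basic variables are the m slacks
    (a simplified stand-in for the random models of [26])."""
    A = rng.integers(-50, 51, size=(m, n))
    b = rng.integers(-50, 51, size=m)
    c = rng.integers(-50, 51, size=n)
    D = np.zeros((m + 1, n + 1))
    D[0, 1:] = -c            # objective row under the sign convention assumed earlier
    D[1:, 0] = b             # d_{i0} = b_i; negative entries are the infeasibilities
    D[1:, 1:] = A            # d_{ij} = a_{ij}
    basic = [f"x{n + i + 1}" for i in range(m)]
    nonbasic = [f"x{j + 1}" for j in range(n)]
    return D, basic, nonbasic

rng = np.random.default_rng(0)
D, basic, nonbasic = random_instance(10, 5, rng)
status, D, basic, nonbasic = asm_phase1(D, basic, nonbasic)   # sketch given after the ASM algorithm
print(status, basic)
```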

Table 1. Average number of iterations on random consistent LPs.

https://doi.org/10.1371/journal.pone.0116156.t001

Table 2. Average number of iterations on random inconsistent LPs.

https://doi.org/10.1371/journal.pone.0116156.t002

For further testing on practical problems, we executed both algorithms on NETLIB problems [34], which again revealed that for each problem the number of iterations taken by both algorithms is exactly the same. The results are summarized in Table 3.

Table 3. Number of iterations on 22 instances of NETLIB test problems.

https://doi.org/10.1371/journal.pone.0116156.t003

The above comparisons between ASM and SM reinforce the observation that ASM and SM always take exactly the same number of iterations. The following tables (shown in Figs. 8 and 9) compare the computational efficiency of the two methods:

Figs. 8 and 9 depict the comparison of SM and ASM by the average number of multiplication and addition operations needed to solve LP problems of different sizes. They illustrate that (whether we consider multiplications or additions) ASM always needs a smaller number of operations than SM. This difference becomes quite noticeable especially for problems that have a greater number of constraints than variables. So it is empirically observed that ASM is most advantageous when "number of constraints minus number of variables" is large. This observation can be verified from the graph depicted in Fig. 10: as the value of m − n increases, the percentage of computations saved by ASM also increases. For example, for a problem of order 200 × 300 the average saving in computations is only about 5%, whereas for a problem of order 300 × 200 it reaches a remarkable level of 40%. This fact can also be seen in the problems of order 5 × 10, 10 × 5, 50 × 100 and 100 × 70. For problems having m ≤ n, the savings in the number of computations are not very high. The basic theory of duality asserts that, in contrast to ASM, the dual art-free simplex method (DASM) would be computationally more efficient when n − m is large.

Fig 8. Comparison of SM and ASM in terms of Multiplication Operations.

https://doi.org/10.1371/journal.pone.0116156.g008

Fig 9. Comparison of SM and ASM in terms of Addition Operations.

https://doi.org/10.1371/journal.pone.0116156.g009

Fig 10. Graph showing trend of percentage of computations saved in ASM.

https://doi.org/10.1371/journal.pone.0116156.g010

Conclusion

In this paper, streamlined artificial-free versions of the simplex and dual simplex methods have been presented. These new versions do not need any kind of artificial variables or artificial constraints; they can start with any feasible or infeasible basis of an LP, providing full freedom to the user to start with either the primal or the dual artificial-free version without making any reformulation of the LP structure. These methods follow the same pivoting sequence as simplex phase 1 without any explicit description of artificial variables or artificial constraints. Computational results showed that ASM is more efficient when m − n is large and DASM is more efficient when n − m is large. Hence these methods also provide great benefits in classroom teaching by eliminating the relatively difficult and tedious calculations involving artificial variables and constraints. They also serve as a teaching aid for teachers who want to teach feasibility achievement as a separate topic before optimality achievement, and they are very helpful tools in integer programming and sensitivity analysis because they provide a way to avoid the dual simplex method.

Author Contributions

Conceived and designed the experiments: NT SI MI. Performed the experiments: MI. Analyzed the data: SI. Wrote the paper: SI. Algorithm Designing: MI.

References

  1. Lenstra JK, Rinnooy Kan AHG, Schrijver A (1991) History of Mathematical Programming: A Collection of Personal Reminiscences.
  2. Grattan-Guinness I (1970) Joseph Fourier's anticipation of linear programming. Operational Research Quarterly 21(3): 361–364.
  3. Dines L (1918) Systems of inequalities. Annals of Mathematics 20(2): 191–198.
  4. Motzkin T (1952) Contributions to the theory of linear inequalities. RAND Corporation Translation 22.
  5. Dantzig GB (1951) Maximization of a linear function of variables subject to linear inequalities. In: Koopmans TC, editor. Activity Analysis of Production and Allocation. pp. 339–347.
  6. Dantzig GB (1963) Linear Programming and Extensions. Princeton University Press, Princeton, NJ.
  7. Shamir R (1987) The efficiency of the simplex method: a survey. Management Science: 301–334.
  8. Dantzig GB, Orden A, Wolfe P (1955) The generalized simplex method for minimizing a linear form under linear inequality restraints. Pacific Journal of Mathematics 5(2): 183–195.
  9. Wagner HM (1956) A two-phase method for the simplex tableau. Operations Research 4: 443–447.
  10. Serang O (2012) Conic sampling: an efficient method for solving linear and quadratic programming by randomly linking constraints within the interior. PLOS ONE.
  11. Arsham H, Baloh P, Damij T, Grad J (2003) An algorithm for simplex tableau reduction with numerical comparison. International Journal of Pure and Applied Mathematics 4: 53–80.
  12. Arsham H, Damij T, Grad J (2003) An algorithm for simplex tableau reduction: the push-to-pull solution strategy. Applied Mathematics and Computation: 525–547.
  13. Arsham H (1989) A tabular, simplex-type algorithm as a teaching aid for general LP models. Mathematical and Computer Modelling 12: 1051–1056.
  14. Papparrizos K (1990) The two-phase simplex without artificial variables. Methods of Operations Research: 73–83.
  15. Arsham H (1997) Initialization of the simplex algorithm: an artificial-free approach. SIAM Review: 736–744.
  16. Arsham H (1997) An artificial-free simplex algorithm for general LP models. Mathematical and Computer Modelling 25(1): 107–123.
  17. Enge A, Huhn P (1998) A counterexample to H. Arsham's "Initialization of the simplex algorithm: an artificial-free approach". Society for Industrial and Applied Mathematics.
  18. Stojkovic NV, Stanimirovic PS (2001) Two direct methods in linear programming. European Journal of Operational Research: 417–439.
  19. Junior H, Lins M (2005) An improved initial basis for the simplex algorithm. Computers & Operations Research: 1983–1993.
  20. Wei L, Guangting C (2006) A new least square algorithm for linear programming. Appl. Math. J. Chinese Univ. Ser. B 21: 214–222.
  21. Inayatullah S, Khan N, Imtiaz M (2010) New minimum angle algorithms for feasibility and optimality. Canadian Journal on Computing in Mathematics, Natural Sciences, Engineering & Medicine: 22–36. pmid:25675475
  22. Boonperm A, Sinapiromsaran K (2014) The artificial-free technique along the objective direction for the simplex algorithm.
  23. Huangfu Q (2013) High Performance Simplex Solver. Ph.D. dissertation, University of Edinburgh.
  24. Koberstein A (2005) The Dual Simplex Method: Techniques for a Fast and Stable Implementation. Ph.D. dissertation.
  25. Illes T, Terlaky T (2002) Pivot versus interior point methods: pros and cons. European Journal of Operational Research 140(2): 170–190.
  26. Kaluzny B (2001) Finite Pivot Algorithms and Feasibility. MS thesis, School of Computer Science, McGill University, Montreal, Quebec, Canada.
  27. Arsham H (2006) A big-M free solution algorithm for general linear programs. International Journal of Pure and Applied Mathematics 32: 37–52.
  28. Darvas F (1974) Application of the sequential simplex method in designing drug analogs. Journal of Medicinal Chemistry: 799–804.
  29. De Bruijn W, Ketelaars D, Gelsema E, Sorber L (1991) Comparison of the simplex method with several other methods for background-fitting for electron energy-loss spectral quantification of biological materials. Microsc. Microanal. Microstruct.: 281–291.
  30. Pulgarín JM, Molina AA, Pardo MA (2002) The use of the modified simplex method to optimize the room temperature phosphorescence variables in the determination of an antihypertensive drug. Talanta 57: 795–805. pmid:18968682
  31. Haefner JW (2005) Modeling Biological Systems: Principles and Applications. Springer, United States of America.
  32. Yuekai S, Fleming RM, Thiele I, Saunders MA (2013) Robust flux balance analysis of multiscale biochemical reaction networks. BMC Bioinformatics.
  33. Jatau J, Datau S, Agyo D (1999) Application of the simplex method to determine the minimum deviation in parameters affecting dimensional accuracy during rolling from their set values. Bulletin of the Science Association of Nigeria 22: 113–120.
  34. The NETLIB repository for linear programming problems. Available: http://www.netlib.org/lp/data.