
Political optimizer with interpolation strategy for global optimization

  • Aijun Zhu,

    Roles Conceptualization, Formal analysis, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    Affiliations School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China, Department of Computer Science and Engineering, Texas A&M University, College Station, TX, United States of America, School of Mechano-Electronic Engineering, Xidian University, Xi’an, P.R. China, Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin, P.R. China

  • Zhanqi Gu,

    Roles Data curation, Investigation, Software, Validation, Visualization, Writing – review & editing

    Affiliation School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China

  • Cong Hu ,

    Roles Formal analysis, Funding acquisition, Supervision

    46518438@qq.com (CH); junhao@guet.edu.cn (JN)

    Affiliation School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China

  • Junhao Niu ,

    Roles Project administration, Supervision

    46518438@qq.com (CH); junhao@guet.edu.cn (JN)

    Affiliation School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China

  • Chuanpei Xu,

    Roles Project administration, Supervision, Visualization

    Affiliation School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China

  • Zhi Li

    Roles Methodology, Resources, Supervision

    Affiliations School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, P.R. China, School of Mechano-Electronic Engineering, Xidian University, Xi’an, P.R. China, Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin, P.R. China, Guilin University of Aerospace Technology, Guilin, P.R. China

Abstract

Political optimizer (PO) is a recent state-of-the-art meta-heuristic optimization technique for global optimization problems, as well as real-world engineering optimization, which mimics the multi-staged process of politics in human society. However, due to a greedy strategy during the election phase and an inappropriate balance of global exploration and local exploitation during the party switching stage, it suffers from stagnation in local optima with low convergence accuracy. To overcome these drawbacks, a sequence of novel PO variants was proposed by integrating PO with Quadratic Interpolation, Advance Quadratic Interpolation, Cubic Interpolation, Lagrange Interpolation, Newton Interpolation, and Refraction Learning (RL). The main contributions of this work are as follows. (1) The interpolation strategy was adopted to help the current global optimum jump out of local optima. (2) RL was integrated into PO to improve the diversity of the population. (3) To improve the balance between exploration and exploitation during the party switching stage, a logistic model was proposed to maintain a good balance. To the best of our knowledge, PO combined with the interpolation strategy and RL is proposed here for the first time. The performance of the best PO variant was evaluated on 19 widely used benchmark functions and 30 test functions from the IEEE CEC 2014. Experimental results revealed the superior performance of the proposed algorithm in terms of exploration capacity.

Introduction

Global optimization problems (GOPs) are inevitable in applied mathematics and practical engineering fields. As a general rule, most GOPs can be formulated as follows: (1) minimize f(x) subject to x ∈ Q ⊂ Rn, where f(x) and n denote the objective function and the number of variables, respectively, and R is the real field. Q is an n-dimensional rectangle in Rn defined by the following equation: (2) Q = {x ∈ Rn | li ≤ xi ≤ ui, i = 1,…,n},

where l = (l1,…,ln), u = (u1,…,un), xi ∈ [li, ui], i = 1,…,n, and [l, u] is the feasible region.
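As a concrete illustration of this formulation, the following minimal Python sketch evaluates a candidate solution of a simple objective inside the rectangular feasible region [l, u]; the sphere function, the bounds, and the dimension n = 5 are arbitrary illustrative choices rather than choices made in this study.

```python
import numpy as np

def sphere(x):
    """Example objective f(x) = sum(x_i^2); any black-box function fits Eq (1)."""
    return float(np.sum(x ** 2))

n = 5                              # number of variables
l = -100.0 * np.ones(n)            # lower bounds l = (l1, ..., ln)
u = 100.0 * np.ones(n)             # upper bounds u = (u1, ..., un)

x = np.random.uniform(l, u)        # a candidate solution drawn from the feasible region Q
print(sphere(x))                   # objective value to be minimized
```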

Over the last two to three decades, meta-heuristic optimization techniques have been extremely popular in the GOP field. Generally speaking, meta-heuristic optimization techniques may be divided into four categories: swarm-based, evolution-based, human behavior-based, and physics-based algorithms [1, 2]. A few well-known and state-of-the-art swarm-based algorithms primarily include the Salp Swarm Optimizer [3], Grey Wolf Optimizer (GWO) [4], Particle Swarm Optimization (PSO) [5], Ant Colony Optimization [6], and others [7–24]. Well-known evolution-based algorithms mainly include the Biogeography-Based Optimizer [25], Differential Evolution (DE) [26], Evolutionary Strategy [27], Genetic Programming [28], and the Genetic Algorithm [29]. Known human behavior-based algorithms primarily include Brain Storm Optimization [30], Soccer League Competition [31], and others [32–38]. Familiar physics-based algorithms include Thermal Exchange Optimization [39], the Gravitational Search Algorithm [40], the Sine Cosine Algorithm (SCA) [41], and others [42–50].

Although Sorensen presented a more critical view of such meta-heuristic methods in 2015 [51], this did not prevent researchers from steadily creating new methods based on meta-heuristics. Since the No Free Lunch theorem [52] implies that no single algorithm can solve all problems, researchers are continually motivated to develop new meta-heuristic algorithms. Recently (2020), Askari proposed a new meta-heuristic method called the political optimizer (PO), which was inspired by the multi-staged process of politics and described a model to solve GOPs [53]. PO is the mathematical mapping of all the major phases of politics [54–56]; thus, it belongs to the human behavior-based algorithms. Experimental results have demonstrated that PO can solve classical engineering design problems, such as welded beam and speed reducer design. Results have indicated that PO has an excellent convergence speed and a good exploration capacity in early iterations.

However, almost all individuals in the canonical PO are concentrated in a narrow area near the current optimal individual in the final stage of iterative optimization. Therefore, when solving complex multimodal global optimization problems, the entire population can easily converge to a local optimum. Improving the capacity of PO to jump out of local optima is therefore a key issue and the most important goal of this work. There are two main ways to improve this ability: the first involves adjusting the parameters of PO, whereas the second is to introduce new search operators. However, PO has only one parameter, n (the number of political parties, constituencies, and party members), and according to the canonical PO, n should be set to 8 to obtain appropriate convergence; thus, introducing new search operators is the better choice.

Furthermore, canonical PO has an inappropriate balance of global exploration and local exploitation during the party switching stage. In the canonical PO, the values of λ are linearly decreased from one to zero; however, it is not appropriate to adopt a linear parameter strategy to simulate the actual nonlinear search process.

Consequently, a summative conclusion may be drawn as follows. First, the limited global search capacity of canonical PO makes it stagnate in a local optimum with high probability and may lead to premature convergence. Second, it is not appropriate to adopt a linear parameter strategy to simulate the nonlinear search process, which leads to an unsuitable balance of global exploration and local exploitation during the party switching stage. Finally, due to limited iterations, the global optima of canonical PO are difficult to find for complicated multimodal problems. These conclusions motivated the following innovations, with the main contributions of this work listed as follows:

  1. An interpolation strategy was adopted to help the current global optimum jump out of local optima. Interpolation is the process of synthesizing all known data to predict unknowns, which can take full advantage of certain known data.
  2. If interpolation strategy is utilized solely, the diversity of the population quickly declines. Therefore, a refraction learning (RL) strategy was adopted to improve the population diversity.
  3. To enhance the capacity of balancing exploration and exploitation during the party switching stage, a logistic model was proposed to maintain good balance.
  4. To the best of our knowledge, PO combined with interpolation strategy and RL was proposed here for the first time.

The remainder of this paper is organized as follows. Section 2 introduces preliminary knowledge of the canonical PO. The proposed methodology is introduced in Section 3. In Section 4, 19 well-known and extensively employed benchmark functions and 30 test functions from the IEEE CEC 2014 are utilized to evaluate the proposed algorithms. Finally, concluding remarks are presented.

Political optimizer (PO)

PO is inspired by the Western political process, which involves two aspects. The first assumption is that all citizens attempt to optimize their goodwill to win the election. The second assumption is that all parties try to obtain more seats in parliament. PO consists of five phases: party formation and constituency allocation, election campaign, party switching, interparty election, and parliamentary affairs [53]. The main PO process is shown in Fig 1, whereas Fig 2 illustrates how the system works.

Fig 2. PO process.

(A) Population initialization; different shapes refer to different parties. Quadrilaterals, pentagons, and circles represent political parties P1, P2, and P3, respectively. Dotted frames C1, C2, and C3 represent different constituencies. (B) Members of various political parties conduct canvassing activities within their respective constituencies. (C) Party leaders (solid) and constituency winners C1’, C2’, and C3’ are determined. (D) Update the positions of party members according to the constituency winners. (E) Update the positions of party members according to the party leaders. (F) The resultant positions are synthesized according to the positions of the constituency winners and the party leaders. (G)(H) Party switching: a member in constituency C3 is exchanged with a member in constituency C1. (I) Election phase: party leaders and constituency winners are reassigned. (J) Parliamentary affairs: each parliamentarian is updated if its fitness is improved after being attracted by a random parliamentarian. (K) Final positions after one iteration.

https://doi.org/10.1371/journal.pone.0251204.g002

The entire population can be divided into n political parties, which can be represented as Eq (3).

(3)

Every party consists of n party members, as demonstrated in Eq (4).

(4)

Each party member consists of d dimensions, as shown in Eq (5).

(5)

Each solution can also be an election candidate. Suppose there are n electoral districts as represented in Eq (6).

(6)

It is assumed there are n members in each constituency, as shown in Eq (7).

(7)

The party leader is defined as the member with the best fitness in a party, as shown in Eq (8).

(8)

All of the party leaders can be expressed as Eq (9).

(9)

The winners of the different constituencies are called members of parliament, as shown in Eq (10).

(10)

During the election campaign stage, Eq (11) and Eq (12) are employed to update the position of potential solutions.

(11)(12)

To balance exploration and exploitation, party switching is adopted. An adaptive parameter λ is used, which is linearly decreased from one to zero during the entire iterative process. Each candidate is selected according to probability λ and exchanged with the worst member of a randomly selected party, as shown in Eq (13).

(13)
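Since the body of Eq (13) is not reproduced above, the following Python sketch only mirrors the behaviour described in the text: each member is selected with probability λ and swapped with the worst (least-fit) member of a randomly chosen party. The n_parties × n_members × dim array layout and all identifiers are assumptions made for illustration.

```python
import numpy as np

def party_switching(pop, fitness, lam, rng):
    """Hedged sketch of the party-switching behaviour described for Eq (13).

    pop     : array of shape (n_parties, n_members, dim)
    fitness : array of shape (n_parties, n_members), lower is better
    lam     : adaptive parameter, decreased from 1 toward 0 over the iterations
    rng     : numpy random Generator, e.g. np.random.default_rng()
    """
    n_parties, n_members, _ = pop.shape
    for p in range(n_parties):
        for m in range(n_members):
            if rng.random() < lam:                      # member selected with probability lambda
                q = rng.integers(n_parties)             # randomly chosen destination party
                worst = int(np.argmax(fitness[q]))      # worst member of that party
                # swap positions (and fitness values) of the two members
                pop[p, m], pop[q, worst] = pop[q, worst].copy(), pop[p, m].copy()
                fitness[p, m], fitness[q, worst] = fitness[q, worst], fitness[p, m]
    return pop, fitness
```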

In the election phase, the winner in a constituency is obtained, as shown in Eq (14).

(14)

Proposed methodology

PO can obtain good convergence when dealing with simple problems; however, it easily falls into local optima when engaged with multi-peak test benchmark functions. To more efficiently obtain an optimum, interpolation strategy is introduced into the PO.

The interpolation strategy is based on known discrete points: it employs the data of the known points to estimate unknown points, and can therefore make full use of the known data to obtain an optimal solution in PO. The introduction of interpolation can overcome the shortcoming of PO of easily falling into local optima. In this section, several interpolation strategies were applied to PO, and the most suitable interpolation strategy was selected to combine with RL according to the results of the various interpolation strategies.

The interpolation strategy is shown in Fig 3, where quadrilaterals represent known discrete points, solid lines are continuous functions drawn according to the data of the known points, and circles represent the position where the optimal point is predicted (the lowest point in this example); the predicted point depends on the construction method. After the interpolation strategy is applied, the derived solution is compared with the solution generated by the original PO. If it is better than the solution generated by PO, it replaces that solution; otherwise, the solution generated by PO is retained. This procedure is used for each of the interpolation strategies in this section and is not repeated below.

Quadratic interpolation PO (QIPO)

Quadratic interpolation is one of the most common strategies used in unconstrained one-dimensional optimization. When the current optimal solution is known, a new optimal solution is explored based on other data. By using this approach, the known data may be employed to estimate the true optimal solution. Quadratic interpolation is a curve fitting technique that constructs a quadratic function from the data of known points. The optimal point and any two other points of the contemporary population are selected to construct the quadratic function, and the optimal point of the quadratic function is obtained by Eq (15).

(15)

Where x1 = (x11, x12, …, x1i) represents the optimal individual of the contemporary population, x2 = (x21, x22, …, x2i) and x3 = (x31, x32, …, x3i) represent two other individuals of the contemporary population; f1, f2 and f3 are the fitness values of the corresponding individuals, respectively; i = 1, 2, …, dim, where dim is the maximum dimension. Quadratic interpolation is applied in each iteration; if the fitness value of the predicted solution f(Qn) is better than that of the original optimal individual, the optimal individual is updated; otherwise, the original optimal individual is retained for the next iteration. The process of quadratic interpolation is shown in Fig 4.

Initially, PO initializes individuals in the first generation and calculates the fitness of all individuals. In the next step, it ranks the individuals by fitness and marks the best individual, and then randomly selects two individuals from all those except the optimal individual. These three individuals are selected to predict the best individual via interpolation strategy. Finally, the fitness value of the new individual is compared with the original optimal individual, after which the original individual is either retained or replaced according to the results of the comparison.
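As a minimal sketch of this procedure, the function below applies the standard three-point quadratic interpolation formula dimension-wise. Since Eq (15) is not reproduced above, the exact expression used by the paper may differ, and the guard term eps is an added assumption.

```python
import numpy as np

def quadratic_interpolation_point(x1, x2, x3, f1, f2, f3, eps=1e-12):
    """Standard three-point quadratic interpolation, applied dimension-wise.

    x1 is the current best individual, x2 and x3 are two other individuals
    (1-D arrays), and f1, f2, f3 are their scalar fitness values.  Returns the
    vertex of the parabola fitted through the three points in each dimension.
    """
    num = (x2**2 - x3**2) * f1 + (x3**2 - x1**2) * f2 + (x1**2 - x2**2) * f3
    den = (x2 - x3) * f1 + (x3 - x1) * f2 + (x1 - x2) * f3
    return 0.5 * num / (den + eps)     # eps guards against a degenerate (collinear) case

# usage sketch: accept the predicted point only if it improves on the current best
# q = quadratic_interpolation_point(best, a, b, f_best, f_a, f_b)
# if f(q) < f_best: best, f_best = q, f(q)
```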

Advance quadratic interpolation PO (AQIPO)

Similar to quadratic interpolation, advance quadratic interpolation uses a different means of approximating the objective function: the quadratic function constructed from the selected members of the contemporary population is replaced by an arc function. This enhanced interpolation strategy further improves the accuracy of the results without increasing the complexity of the algorithm. The objective function f(x) is approximated using individuals x1, x2 and x3, where x1 = (x11, x12, …, x1i), x2 = (x21, x22, …, x2i), x3 = (x31, x32, …, x3i); i is the maximum dimension, and f1, f2 and f3 are the fitness values of the corresponding individuals, respectively. Subsequently, we can obtain the arc equation F(x): (16)

Expanding this matrix and setting its derivative with respect to x to zero, F’(x) = 0, yields the optimal solution of the function: (17)

The fitness value of the optimal solution is compared with that of the optimal individual of the original population, where the best one is retained, and the position of the individual is updated. The process of advance quadratic interpolation is similar to quadratic interpolation, which only requires the replacement of Eq (15) with Eq (17) (Fig 4).

Cubic interpolation PO (CIPO)

There is an issue with quadratic interpolation and its improved strategies, as shown in Fig 5. The dashed line is the curve predicted by quadratic interpolation and the solid line is the objective function. Point p is the optimal value inferred by quadratic interpolation based on the information of points a, b, and c. There is quite a difference between point p and the objective function's optimum q, and this problem is unavoidable in quadratic interpolation.

Fig 5. Cases when QI and AQI deal with multimodal problems.

https://doi.org/10.1371/journal.pone.0251204.g005

To solve this problem, cubic interpolation was adopted, which constructs a cubic function and obtains a new optimal solution from four known points, including the optimal individual of the contemporary population. As the second derivative of the cubic function is introduced on the basis of Hermitian interpolation, the precision and stability of cubic interpolation are improved. Moreover, cubic interpolation uses more data than quadratic interpolation, which helps it avoid false optima. The cubic interpolation method selects the optimal individual x1 and three other individuals x2, x3 and x4 to represent the information of the population. The four selected individuals are arranged in ascending order to obtain the following formula: (18)

According to the continuity of interpolation: (19) (20)

Differential equation of cubic spline interpolation curve: (21) (22)

According to the differential equation of the cubic interpolation curve: (23) (24) (25)

Combining Eq (18) to Eq (25), a function for each interval can be obtained by Eq (26): (26) where k = 1, 2, 3; x1 is the optimal individual of the contemporary population and x2, x3 and x4 are any three other individuals; y1, y2, y3 and y4 are the fitness values of the corresponding individuals, respectively; i is the dimension index from 1 to the maximum dimension. Qki represents the cubic function that corresponds to each subinterval. By connecting the functions of the subintervals, the continuous function that contains all the above points is obtained, and the optimal solution Qk can be calculated. Finally, the fitness value f(Qk) of the new solution is compared with the fitness of the original optimal individual. If the fitness value of the new solution is better than that of the original optimal individual, the optimal individual is updated. PO with cubic interpolation requires more data than the other interpolation strategies.
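Because Eqs (18)–(26) are not reproduced above, the sketch below is only a numerical stand-in for the cubic prediction: it fits a natural cubic spline through the four selected points in one dimension (using SciPy's CubicSpline) and returns the grid point with the lowest predicted fitness, rather than solving the piecewise functions analytically as the paper does.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def cubic_interpolation_point(xs, ys, num=200):
    """Hedged numerical stand-in for the cubic-spline prediction of Eqs (18)-(26).

    xs : array of shape (4,) with the four chosen coordinates in one dimension
         (assumed distinct); ys : their four fitness values.
    Returns the grid point minimizing the natural cubic spline through the data.
    """
    order = np.argsort(xs)
    spline = CubicSpline(xs[order], ys[order], bc_type="natural")
    grid = np.linspace(xs.min(), xs.max(), num)
    return float(grid[np.argmin(spline(grid))])
```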

Lagrange interpolation PO (LIPO)

To overcome the shortcoming of the original PO of falling into local optima, PO with Lagrange interpolation was proposed. Like the other interpolation strategies, the principle involves constructing a quadratic function from the data of known points, but the construction differs from quadratic and cubic interpolation. Lagrange interpolation does not simply solve a system of equations: the conic determined by three points can be obtained without explicitly constructing such a system. It obtains the objective function from three quadratic basis curves via the following solution process:

Considering the unknown formula yki = a·xki² + b·xki + c, the quadratic function may be obtained by adding three quadratic basis functions derived from the data of the three known points. When k = 1, y1i = 1 and y2i = y3i = 0, and the basis function through these points is weighted by f1; when k = 2, y2i = 1 and y1i = y3i = 0, and the basis function is weighted by f2; when k = 3, y3i = 1 and y1i = y2i = 0, and the basis function is weighted by f3. Thus, the quadratic function is: (27) where k = 1, 2, 3; i = 1, 2, …, dim. Lagrange interpolation effectively utilizes the known data to construct the function and obtain the optimal value in the simplest form. Once the desired function is obtained, its optimal value is compared with that of the optimal individual of the original population. If the fitness of the new optimal value is better than that of the optimal individual of the original population, the individual is updated, and the next iteration is carried out.
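A minimal sketch of this construction is given below: SciPy's lagrange routine builds the quadratic through the three chosen points from the Lagrange basis polynomials, and the stationary point of that quadratic is taken as the predicted optimum. The exact form of Eq (27) is not reproduced above, so this is an illustrative implementation assuming the three points are distinct and not collinear.

```python
import numpy as np
from scipy.interpolate import lagrange

def lagrange_interpolation_point(xs, ys):
    """Hedged sketch of the Lagrange strategy around Eq (27).

    xs, ys : three coordinates of one dimension and their fitness values
             (assumed distinct and not collinear).
    Returns the stationary point of the quadratic built from the Lagrange basis.
    """
    poly = lagrange(xs, ys)                  # quadratic passing through the 3 points
    stationary = np.roots(np.polyder(poly))  # root of the (linear) derivative
    return float(stationary[0].real)         # vertex of the fitted parabola
```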

Newton interpolation PO (NIPO)

The function constructed by the Newton interpolation method is similar to that of Lagrange interpolation; however, when a point is added, the existing terms do not need to be recalculated, and only the terms for the newly added point must be computed. The specific steps of the Newton interpolation method are as follows:

Assume that there are three points (x1, f1), (x2, f2), (x3, f3), where x1 = (x11, x12, …, x1i), x2 = (x21, x22, …, x2i), x3 = (x31, x32, …, x3i), and i is the maximum dimension in this generation. The first-order divided difference of the Newton interpolation formula is obtained as: (28)

The second-order divided difference can be obtained after adding a point: (29)

Therefore, the function of Newton interpolation method can be obtained as follows: (30)

Thus, we can find the optimal solution Q from the function. The optimal solution is compared with the optimal individual in the original population. If the fitness value f(Q) of the optimal solution is better than the fitness value of the original optimal individual, the optimal individual is updated. Otherwise, the optimal individual of the original population is retained, and the next iteration is carried out.
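The divided-difference construction can be sketched per dimension as follows. Since Eqs (28)–(30) are not reproduced above, this is an illustrative implementation of the standard first- and second-order divided differences and the stationary point of the resulting quadratic.

```python
def newton_interpolation_point(x1, x2, x3, f1, f2, f3):
    """Hedged sketch of the Newton strategy of Eqs (28)-(30), per dimension.

    First- and second-order divided differences define the quadratic
    N(x) = f1 + d1*(x - x1) + d2*(x - x1)*(x - x2);
    setting N'(x) = 0 gives the predicted optimum.
    """
    d1 = (f2 - f1) / (x2 - x1)                      # first-order divided difference
    d2 = ((f3 - f2) / (x3 - x2) - d1) / (x3 - x1)   # second-order divided difference
    return 0.5 * (x1 + x2) - d1 / (2.0 * d2)        # stationary point of N(x)
```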

Refraction learning

If only the interpolation strategy is utilized, the diversity of the population quickly declines. Therefore, an RL strategy was adopted to assist with finding potentially better areas and maintaining the diversity of PO. RL, which has a strong and broad searching capacity, was proposed by Long et al. [57, 58]. As seen in Fig 6, the lower bound of the search is a and the upper bound of the search is b. Y indicates the normal. A is the incident point and B is the refraction point. The length of AO is L and that of BO is L’. α is the incident angle and θ is the refraction angle.

Based on Fig 6, we can calculate sin(α) and sin(θ) as follows: (31) (32)

From the above two formulas, the refraction index p is obtained as: (33)

Assuming ξ = L/L′, the above formula can be rewritten as: (34)

Extending to an n-dimensional space, we obtain: (35)

In Eq (35), x*j is the jth dimension of solution X*; x*′j is the jth dimension of the opposite solution X*′; aj is the lower bound of the jth dimension; and bj is the upper bound of the jth dimension.
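A minimal sketch of the refracted (opposite) solution is given below. It follows the form commonly reported in the refraction-learning literature [57, 58]; because Eq (35) is not reproduced above, the scale factor k and its correspondence to the parameters ξ and p of this paper are assumptions.

```python
import numpy as np

def refraction_learning(x_best, lower, upper, k=1000.0):
    """Hedged sketch of a refraction-learning opposite solution (cf. Eq (35)).

    x_best : current global best solution (1-D array)
    lower, upper : per-dimension bounds a_j and b_j
    k : scale factor (its exact relation to the paper's xi and p is an assumption)
    With k = 1 the formula reduces to the opposition-based point a + b - x.
    """
    mid = (lower + upper) / 2.0
    return mid + mid / k - x_best / k    # opposite (refracted) solution X*'
```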

Adaptive parameter based on logistic model

Obtaining a good balance between exploration and exploitation is the key component of optimization techniques. Global exploration involves the exploration of new potential areas. Consequently, it is critical to maintain the diversity of the population. Local exploitation involves searching for a high-precision solution in a small area, which is revealed through global exploration. Excessive global exploration leads to a slower convergence speed, whereas excessive local exploitation results in premature convergence.

According to Askari, an adaptive parameter λ should be adopted to obtain an improved balance between exploration and exploitation during the party switching stage [53]. In the canonical PO, the λ values are linearly decreased from one to zero. However, it is not appropriate to adopt a linear parameter strategy to reflect and simulate the actual nonlinear search process. As is well acknowledged, at the onset of iterative optimization, a robust global search capacity is required to examine as many potential areas as possible, whereas at the end of the iterative optimization, a strong local search ability is required to improve accuracy. During the party switching stage in PO, the larger the λ value, the stronger the global exploration. In early iterations, a larger λ value produces a larger step, which leads to stronger global exploration; at this stage, the decline speed of λ needs to be larger. In late iterations, a smaller λ value produces a smaller step, which leads to stronger local exploitation. However, a smaller λ value also leads to poorer population diversity and a higher probability of stagnating at a local optimum; at this stage, the population needs to jump out of local optima, so the decline speed of λ needs to be smaller. Suppose the maximum of λ is λmax and the minimum of λ is λmin, and assume the initial decline speed of λ is k, after which the decline gradually slows down during the entire iterative process. Such a changing law of λ conforms to the logistic model [59], as shown in Eq (36): (36)

Eq (36) can be solved by the method of separation of variables, after which the adaptive parameter adjustment based on the logistic model is obtained, as shown in Eq (37): (37) where t is the iteration count and k is the initial decline speed. Setting t to zero in Eq (37) gives λ(0) = λmax, and letting t approach infinity gives λ(∞) = λmin.
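Since the closed form of Eq (37) is not reproduced above, the sketch below is only an illustrative nonlinear schedule with the properties described in the text (λ(0) = λmax, λ(t) → λmin as t grows, and a decline that is fastest early and slows later); the paper's own logistic expression should be substituted where available.

```python
import numpy as np

def lambda_schedule(t, lam_max=1.0, lam_min=0.0, k=0.02):
    """Illustrative nonlinear schedule standing in for Eq (37) (the exact
    logistic-model expression is not reproduced here).

    Properties matching the text: lambda(0) = lam_max, lambda(t) -> lam_min as
    t grows, and the decline is fastest in early iterations and slows later.
    """
    return lam_min + (lam_max - lam_min) * np.exp(-k * t)

# for comparison, the canonical linear schedule would be
# lam_max - (t / T_max) * (lam_max - lam_min)
```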

Each of the above interpolation strategies is applied to PO. The fitness of the individual predicted by interpolation is compared with the fitness of the optimal individual obtained by the original PO. The best-performing interpolation strategy is then selected to combine with the refraction learning strategy. The flow chart of the CRLPO algorithm is illustrated in Fig 7.

Computation complexity

CRLPO preserves the canonical framework and main operations of PO, adding only RL with cubic interpolation after the parliamentary affairs phase. Assume that the dimension of the objective function is D, the population size is M, and the maximum number of iterations is T. The main steps of CRLPO are the election campaign, party switching, parliamentary affairs, refraction learning, and cubic interpolation. The time complexity of the election campaign is O(TMD), that of party switching is O(TM), and that of parliamentary affairs is at most O(TMD). The time complexities of refraction learning and cubic interpolation are both O(TD). The overall time complexity of CRLPO is therefore O(TMD) + O(TD); as the latter term is of lower order, it can be omitted, giving O(TMD). Therefore, the overall time complexity of CRLPO is almost the same as that of the original PO. In summary, the improvement to PO does not add significant calculation costs.

Results and analysis

Benchmark functions

To test the effectiveness of CRLPO, we benchmarked it in this section with 19 classical and popular benchmark functions used by many researchers [53, 60–64]. Two categories of benchmark functions were adopted: unimodal and multimodal.

As unimodal functions have no local optima other than the global one, they are quite suitable for evaluating exploitation capacity. Conversely, multimodal benchmark functions possess many local optima and only one global optimum; therefore, they are quite suitable for evaluating the exploration abilities of algorithms. The main characteristics of the test functions are listed in Table 1.

Performance metrics

To evaluate the experimental results, several performance indicators were adopted in this paper, such as best fitness, worst fitness, mean fitness, and fitness variance. These performance indicators were defined as follows: (38) (39) (40) (41)

Where NR indicates the total number of independently repeated experiments, ExpReslt indicates the output function fitness of each independent experiment, and i is the current count of repeated experiments.
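These indicators can be computed directly from the NR recorded run results, as in the short sketch below; whether the paper divides the variance by NR or NR − 1 is not stated, so the population form used here is an assumption.

```python
import numpy as np

def run_statistics(exp_results):
    """Performance indicators over NR independent runs (cf. Eqs (38)-(41)).

    exp_results : array of length NR with the final fitness of each run.
    Returns best, worst, and mean fitness and the (population) fitness variance.
    """
    exp_results = np.asarray(exp_results, dtype=float)
    best = exp_results.min()
    worst = exp_results.max()
    mean = exp_results.mean()
    variance = np.mean((exp_results - mean) ** 2)   # division by NR assumed
    return best, worst, mean, variance
```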

Comparative results of interpolation strategies on benchmark functions from Table 1

In experiment 1, to compare all of the above interpolation strategies from Section 3, all of the methods with the interpolation strategy used the same parameters: the population size was 64 and the maximum number of iterations was 417.

Table 1. Main characteristics of 19 widely used benchmark functions.

https://doi.org/10.1371/journal.pone.0251204.t001

Fig 8 shows the convergence process of the different interpolation strategies applied in PO with 10 dimensions. To further verify the performance, the interpolation strategies were also applied in PO with 30 and 50 dimensions. The convergence curves with 30 and 50 dimensions are depicted in Figs 9 and 10, respectively, which show the convergence processes of the various interpolation strategies applied to the PO algorithm. The experimental results revealed the same conclusion: cubic interpolation performs better than the other interpolation strategies across different dimensions.

Fig 8. Convergence curve on 6 representative benchmark functions with 10 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g008

Fig 9. Convergence curve on 6 representative benchmark functions with 30 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g009

Fig 10. Convergence curve on 6 representative benchmark functions with 50 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g010

If the interpolation strategy is used alone, the diversity of the population quickly declines; therefore, the RL strategy was adopted to improve the diversity of the population.

Thus, on the basis of PO with the cubic interpolation strategy (CIPO), the RL strategy was applied to CIPO with the expectation that it would enhance the performance of the algorithm. The CIPO algorithm with RL is referred to as CRLPO.

Experiments were conducted to compare RLPO and CRLPO under the same circumstances. As shown in Fig 11, the curve with the square pattern represents the iteration of the PO algorithm using only the RL strategy. It can be seen from Fig 11 that the convergence accuracy of the results was improved. For example, the RLPO algorithm requires 140 iterations to attain the convergence point on function 11, while after the addition of the cubic interpolation strategy, only 50 iterations are required to reach the convergence point. Consequently, cubic interpolation can improve the performance of RLPO. The same results can be found in 30 and 50 dimensions, as shown in Figs 12 and 13.

Fig 11. Convergence graph of CRLPO on 4 representative benchmark functions with 10 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g011

Fig 12. Convergence graph of CRLPO on 4 representative benchmark functions with 30 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g012

Fig 13. Convergence graph of CRLPO on 4 representative benchmark functions with 50 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g013

The above experimental results proved that the performance of CRLPO was better than that of RLPO. In the next section, CRLPO will be compared with other algorithms to evaluate its performance.

Experiments on test functions from Table 1

To evaluate the performance of CRLPO, it and other state-of-the-art algorithms were run thirty times on each benchmark function from Table 1 with 30, 100, and 1000 dimensions. CRLPO was compared with eight algorithms, including PO [53], GWO [4], HHO [15], PSO [5], SCA [41], DE [26], WOA [2], and RLGWO [58].

For the purpose of fairness, we set the same common parameters for all algorithms: the population size was 64 and the maximum number of iterations was 417. The other specific parameters of the algorithms were set as follows: for the PO algorithm, the party number n = 8; for the GWO algorithm, parameter a was linearly decreased from two to zero; for the HHO algorithm, parameter E1 was linearly decreased from two to zero; for the PSO algorithm, wMax = 0.9, wMin = 0.2, c1 = 2, c2 = 2; for the SCA algorithm, parameter r1 was linearly decreased from two to zero; for the DE algorithm, F = 0.5, Cr = 0.01; for the WOA algorithm, parameter a was linearly decreased from two to zero and parameter a2 was linearly decreased from -1 to -2; for the RLGWO algorithm, p = 100, ξ = 1000; and for the CRLPO algorithm, party number n = 8, p = 100, ξ = 1000.

In experiment 2, CRLPO and the other eight algorithms were run independently thirty times on the 19 benchmark functions from Table 1 with 30 dimensions. Table 2 summarizes the average (Ave) and standard deviation (Std), as well as the results of the Friedman test (Ave.R denotes average rank and Ova.R represents overall rank). Table 3 summarizes the p-values of the Wilcoxon rank-sum test at a 0.05 significance level for CRLPO against the other eight algorithms.

Table 2. Experimental results of 19 benchmark functions from Table 1 with 30 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t002

Table 3. p-values of the Wilcoxon rank-sum test at the 0.05 significance level for CRLPO against the other eight algorithms on 19 benchmark functions from Table 1 with 30 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t003

According to Table 2, CRLPO outperformed PO, PSO, DE, and GWO on 2, 3, 4, and 5 benchmark test functions, respectively. Furthermore, CRLPO was similar to PO on 1, 9, 10, 11, 12, and 13 benchmark test functions. With respect to WOA, SCA, HHO and RLGWO, CRLPO found better results than these four algorithms on 6, 7, 12, and 13 benchmark test functions, respectively. In addition, CRLPO was similar to HHO and RLGWO on 14 and 19 benchmark test functions, respectively. However, it was surpassed by HHO, PO, and WOA in only one case. Additionally, CRLPO was similar to WOA and HHO on 18 and 19 benchmark test functions, respectively. According to Table 2, CRLPO did the best on the Friedman’s test.

As is well known, the Wilcoxon rank-sum test is one of the most frequently used statistical significance analyses; therefore, it was employed to evaluate the performance of CRLPO. Table 3 shows the results of pair-wise comparisons of CRLPO and the other algorithms at a 0.05 significance level with 30 dimensions. In Table 3, “+” represents the number of functions on which CRLPO significantly outperformed an optimization technique, “-” indicates the number of functions on which CRLPO was statistically surpassed by an optimization technique, and “≈” denotes the number of functions on which CRLPO was similar to an algorithm. As illustrated in Table 3, the larger number in the “+” field and the lower number in the “-” field reveal that CRLPO was statistically significantly better than the other optimization techniques.
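A minimal sketch of this pair-wise comparison is shown below, using SciPy's ranksums routine; the tie-breaking rule based on mean fitness and the '~' marker standing in for '≈' are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import ranksums

def compare_algorithms(res_crlpo, res_other, alpha=0.05):
    """Hedged sketch of the pair-wise Wilcoxon rank-sum comparison.

    res_crlpo, res_other : final fitness values of the independent runs of
    CRLPO and a competitor on one benchmark function (minimization assumed).
    Returns the p-value and one of '+', '-', or '~' from CRLPO's point of view.
    """
    stat, p_value = ranksums(res_crlpo, res_other)
    if p_value >= alpha:
        mark = "~"                                    # no significant difference
    else:
        mark = "+" if np.mean(res_crlpo) < np.mean(res_other) else "-"
    return p_value, mark
```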

Fig 14 demonstrates the evolution of the mean of the optimal value on 6 representative benchmark functions with 30 dimensions. As shown in Fig 14, CRLPO converged faster than the other eight optimization techniques in all six representative cases. To further observe and study the scalability of CRLPO, the proposed algorithm was tested on problems with higher dimensions. In experiment 3, CRLPO and the eight other algorithms were run independently 30 times on the 19 benchmark functions (Table 1) with 100 dimensions. Table 4 summarizes the average (Ave) and standard deviation (Std) and the results of the Friedman test with 100 dimensions. Table 5 summarizes the p-values of the Wilcoxon rank-sum test at a 0.05 significance level for CRLPO against the other eight algorithms with 100 dimensions.

Fig 14. Convergence graph of CRLPO and eight other algorithms on six representative benchmark functions with 30 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g014

Table 4. Experimental results of 19 benchmark functions from Table 1 with 100 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t004

Table 5. p-values of the Wilcoxon rank-sum test at the 0.05 significance level for CRLPO against the other eight algorithms on 19 benchmark functions from Table 1 with 100 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t005

According to Table 4, CRLPO outperformed PO, PSO, DE, and GWO on 2, 3, 4, and 7 benchmark test functions, respectively. Further, CRLPO was similar to PO on 1, 5, 6, 9, 10, 11, 12, and 13 benchmark test functions. With respect to WOA, SCA, HHO, and RLGWO, CRLPO found better results than these four algorithms on 15, 19, 12, and 5 benchmark test functions, respectively. In addition, CRLPO was similar to HHO and RLGWO on 8 and 16 benchmark test functions, respectively. However, it was not surpassed by HHO, PO, or WOA in any case. Moreover, CRLPO was similar to WOA on 9 and 11 benchmark test functions. According to Table 4, CRLPO did the best on the Friedman’s test.

Table 5 shows the results of pair-wise comparisons of CRLPO and other algorithms demonstrated at a 0.05 significance level with 100 dimensions. As illustrated in Table 5, the larger number in “+” field and the lower number in “-” field show that CRLPO was statistically significant and relatively better than the other optimization techniques.

Fig 15 demonstrates an evolutionary process of the mean of the optimal value on six representative benchmark functions with 100 dimensions. As shown in Fig 15, CRLPO converged faster than the eight other optimization techniques in all six representative cases.

Fig 15. Convergence graph of CRLPO as well as eight other algorithms on six representative benchmark functions with 100 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g015

To further observe and study the scalability of CRLPO with large-scale problems, in experiment 4, CRLPO and the eight other algorithms were run independently thirty times on 19 benchmark functions (Table 1) with 1000 dimensions.

Table 6 summarizes the average (Ave) and standard deviation (Std) and the results of the Friedman test with 1000 dimensions. Table 7 summarizes the p-values of the Wilcoxon rank-sum test at a 0.05 significance level for CRLPO against the eight other algorithms with 1000 dimensions.

Table 6. Experimental results of 19 benchmark functions from Table 1 with 1000 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t006

Table 7. p-values of the Wilcoxon rank-sum test at the 0.05 significance level for CRLPO against the other eight algorithms on 19 benchmark functions from Table 1 with 1000 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t007

According to Table 6, CRLPO outperformed PO, PSO, DE, and GWO on 2, 3, 4, and 7 benchmark test functions, respectively. In addition, CRLPO was similar to PO on 1, 5, 6, 10, 11, 12, 13, 18, and 19 benchmark test functions. With respect to WOA, SCA, HHO, and RLGWO, CRLPO found better results than these four algorithms on 8, 10, 12, and 18 benchmark test functions, respectively. Furthermore, CRLPO was similar to HHO and RLGWO on 9 and 17 benchmark test functions, respectively. However, it was not surpassed by HHO, PO, or WOA in any case. Additionally, CRLPO was similar to HHO on 11 and 14 benchmark test functions. According to Table 6, CRLPO did the best on the Friedman test.

As shown in Table 7, the results of a pair-wise comparison of CRLPO and the other algorithms were demonstrated at a 0.05 significance level with 1000 dimensions. As illustrated in Table 7, the larger number in “+” field and the lower number in “-” field show that CRLPO was statistically significant and relatively better than the other optimization techniques.

Fig 16 demonstrates an evolutionary process of the mean of the optimal value on six representative benchmark functions with 1000 dimensions. As shown in Fig 16, CRLPO converged faster than the eight other optimization techniques in all six representative cases. To summarize, it was concluded that CRLPO had excellent scalability on test functions with 30D, 100D, and 1000D.

Fig 16. Convergence graph of CRLPO as well as eight other algorithms on six representative benchmark functions with 1000 dimensions.

https://doi.org/10.1371/journal.pone.0251204.g016

Experiments on test functions from IEEE CEC 2014

To further evaluate the performance of CRLPO, 30 more complex test functions from IEEE CEC 2014 were adopted. The IEEE CEC 2014 test suite can be separated into four categories. The first category (FC01–FC03) contains unimodal functions, the second category (FC04–FC16) contains multimodal functions, the third category (FC17–FC22) contains hybrid functions, and the fourth category (FC23–FC30) contains composition functions. The detailed problem definitions of the 30 test functions from IEEE CEC 2014 are given in [65]. The search range of the 30 test functions was defined as [–100, 100], and their dimension was set to 30.

To investigate the effectiveness of CRLPO, it was compared with nine state-of-the-art optimization techniques, including RW-GWO [66], CMA-ES [67], CLPSO [68], CoDE [69], MoABC [70], DGSTLBO [71], HCSA [72], LX-BBO [73], and RLGWO [58]. The parameter settings of the nine algorithms were the same as described in their original papers. To maintain the fairness of comparison, the same maximum number of function evaluations was set, which was 3.00E+10. CRLPO was run independently on 30 test functions.

The experimental results of the other nine optimization techniques were taken from [58]. The error values (F(x)-F(x0)) were compared following 30 independent runs, where x is the best position after an algorithm ends and x0 is the global optimum. Table 8 shows the average of the error value (Ave), the standard deviation (Std), and the results of the Friedman test. If an algorithm exhibited the best performance on a test function, the Ave result is displayed in bold face with a gray background.

Table 8. Comparison of CRLPO and nine selected algorithms on 30 test functions with 30 dimensions from IEEE CEC 2014.

https://doi.org/10.1371/journal.pone.0251204.t008

As illustrated in Table 8, CRLPO performed better than CMA-ES, CLPSO, CoDE, MoABC, and DGSTLBO on 2, 7, 9, 19, and 23 benchmark functions, respectively. On the contrary, CMA-ES, CoDE, and DGSTLBO outperformed CRLPO on 1, 8, and 3 benchmark functions, respectively. Compared with MoABC, RW-GWO, and RLGWO, CRLPO exhibited improved or similar results on 1, 2, 7, 13, and 25 benchmark functions. In contrast, MoABC, RW-GWO, and RLGWO beat CRLPO on 8, 4, and 6 benchmark functions, respectively. According to Table 8, CRLPO ranked first in the Friedman test.

Based on the analysis of the experimental results (Tables 2–8), it was concluded that the proposed CRLPO achieved competitive performance compared with other state-of-the-art algorithms.

Effectiveness evaluation of two components of CRLPO

As described in Section 3, two additional components were incorporated into CRLPO (refraction learning and the adaptive parameter based on the logistic model). To evaluate the effectiveness of these two components, additional experiments were conducted. First, we adopted RL only, without the adaptive parameter based on the logistic model, in the canonical PO; in this situation, the linear parameter adjustment of the original PO was retained, and this algorithm is defined as CRLPO-I. Second, we adopted the adaptive parameter based on the logistic model only, without RL, in the canonical PO; this algorithm is defined as CRLPO-II. If neither component is adopted (linear parameter adjustment), the algorithm is defined as CRLPO-III. If both components and cubic interpolation are adopted, the algorithm is CRLPO, as designed in Section 3.

Each algorithm was independently run thirty times on the 19 test functions (Table 1). The settings of the population size and the maximum number of iterations were the same as those in Section 4.3. As shown in Table 9, “+” represents the number of functions on which an optimization technique significantly outperformed CRLPO, “-” indicates the number of functions on which an optimization technique was statistically surpassed by CRLPO, and “≈” denotes the number of functions on which CRLPO was similar to an algorithm. As illustrated in Table 9, the larger number in the “-” field and the lower number in the “+” field reveal that CRLPO was statistically significantly better than the other algorithms.

Table 9. Experimental results of effectiveness evaluation of two components of CRLPO on 19 test functions from Table 1 with 100 dimensions.

https://doi.org/10.1371/journal.pone.0251204.t009

As shown in Table 9, compared with CRLPO, CRLPO-II showed a poorer performance on 12 benchmark functions and did not have a better performance on any benchmark function. The reason was that CRLPO-II uses only a nonlinear conversion parameter strategy. Furthermore, CRLPO-II can easily be trapped into a local optimum. However, the nonlinear conversion parameter strategy cannot assist it with jumping out of local optimum. Consequently, there were significant differences in performance between CRLPO-II and CRLPO.

Additionally, compared with CRLPO, CRLPO-I had a similar performance on 12 benchmark functions and did not have a better performance on any benchmark function. Such a phenomenon was attributed to the fact that RL was more effective at improving the exploration capacity during the search process. Consequently, there were no significant differences in performance between CRLPO-I and CRLPO.

Moreover, compared with CRLPO, CRLPO-III showed a poorer performance on 11 benchmark functions and did not have better performance on any benchmark function. CRLPO-III could also be easily trapped into a local optimum. However, the linear conversion parameter strategy could not help it to jump out of local optimum. Consequently, there were significant differences in performance between CRLPO-III and CRLPO.

To summarize, we concluded that the two mentioned components were mutually beneficial for enhancing the performance of PO. Further, the adaptive parameters based on a logistic model resulted in fast convergence, while a RL strategy enhanced the exploration capacity.

Sensitivity analysis of the parameters of ξ and p

As described in the first part of Section 3, RL is highly effective and was adopted to enhance global exploration. Nevertheless, the parameters of RL, ξ and p, have an essential impact on the performance of CRLPO. Consequently, to evaluate the impact of ξ and p on the performance of CRLPO, additional experiments were performed. First, both ξ and p were set larger than 1000; we found that, in such cases, the performance of CRLPO was almost unchanged. Therefore, we selected the parameters ξ and p in the range of 1 to 1000. Second, to reduce the number of experiments, we selected a few typical values (e.g., 1, 10, 100, and 1000). Naturally, there were 16 combinations of ξ and p.

According to Eqs (34) and (35), we concluded that the combinations ξ = 1, p = 10 and ξ = 10, p = 1 yield the same experimental results, and the other symmetric combinations have the same effect. Therefore, as shown in Table 10, only seven representative combinations were tested. The settings of the population size and the maximum number of iterations were the same as those in Section 4.3. The dimensions were set to 30D, 100D, and 1000D, respectively (Tables 10–12).

Table 10. Experimental results of CRLPO for 19 test functions with typical combinations of ξ and p (Dim = 30).

https://doi.org/10.1371/journal.pone.0251204.t010

Table 11. Experimental results of CRLPO for 19 test functions with typical combinations of ξ and p (Dim = 100).

https://doi.org/10.1371/journal.pone.0251204.t011

Table 12. Experimental results of CRLPO for 19 test functions with typical combinations of ξ and p (Dim = 1000).

https://doi.org/10.1371/journal.pone.0251204.t012

As illustrated in Tables 10–12, compared with ξ = 100 and p = 1000, CRLPO with ξ = 1 and p = 1 achieved a poorer performance on most of the benchmark functions. For functions 12, 13, and 18, similar results were obtained for CRLPO with all combinations of p and ξ. Consequently, observing the results of all combinations of p and ξ, it was concluded that the parameter setting of ξ = 100 and p = 1000 for CRLPO was appropriate.

The effect of interpolation strategy on RLPO

RLPO with cubic interpolation, quadratic interpolation, advance quadratic interpolation, Lagrange interpolation, and Newton interpolation is defined as CRLPO, QIRLPO, AQIRLPO, LIRLPO, and NIRLPO, respectively. To further explore the effect of the interpolation strategy on RLPO, 17 test functions from [51] were adopted. All of the PO variants were run thirty times on each benchmark function with 10 and 30 dimensions, respectively. The experimental results are presented in Tables 13 and 14, which indicate that cubic interpolation can more efficiently enhance the performance of RLPO overall.

Table 13. Experimental results of RLPO with various interpolation strategies on 17 test functions from [51] (Dim = 10).

https://doi.org/10.1371/journal.pone.0251204.t013

Table 14. Experimental results of RLPO with various interpolation strategies on 17 test functions from [51] (Dim = 30).

https://doi.org/10.1371/journal.pone.0251204.t014

Conclusion

A novel variant of PO (referred to as CRLPO) was proposed to enhance current metaheuristic methods. Inspired by the advantages of the interpolation strategy and the phenomenon of refraction in nature, a sequence of novel PO variants was suggested, and the best variant, combining the interpolation strategy and the RL strategy with the original PO, was proposed. We observed that CRLPO improved the diversity of the population and helped PO jump out of local optima. CRLPO was evaluated on 19 well-known benchmark functions with 30D, 100D, and 1000D, and on 30 benchmark functions from IEEE CEC 2014. The experimental results confirmed that CRLPO exhibited better, or at least competitive, performance compared with the selected state-of-the-art optimization techniques.

Acknowledgments

We express our sincere thanks to the anonymous reviewers for their helpful suggestions. We convey great appreciation to Professor Rabi N. Mahapatra of Texas A&M University, College Station, USA, for his inspiration during my stay in his laboratory. We are grateful to Mr. Frank Boehm (CEO of NanoApps Medical, Inc. and Editor of Nanomedical Device and Systems Design: Challenges, Possibilities, Visions) for his grammatical editing of this manuscript.

References

  1. 1. Fausto F., Reyna-Orta A., Cuevas E., Andrade A. G., Pérez-Cisneros M., From ants to whales: metaheuristics for all tastes, Artificial Intelligence Review (Jan. 2019).
  2. 2. Mirjalili S., Lewis A., The whale optimization algorithm, Advances in Engineering Software 95 (2016) 51–67.
  3. 3. Mirjalili S., Gandomi A.H., Z Mirjalili S., et al. Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 2017, 114:1–29.
  4. 4. Mirjalili S., Mirjalili S. M., Lewis A. Grey Wolf Optimizer. Advances in Engineering Software, 2014, 69(3): 46–61.
  5. 5. Kennedy J., Eberhart R., Particle swarm optimization. IEEE International Conference on Neural Networks, 1995: 1942–1948.
  6. 6. Dorigo M., Birattari M., Stutzle T. Ant colony optimization. IEEE Computational Intelligence Magazine, 2006, 1(4): 28–39.
  7. 7. Zhu A., Xu C., Li Z., et al. Hybridizing grey wolf optimization with differential evolution for global optimization and test scheduling for 3D stacked SoC. Journal of Systems Engineering and Electronics, 2015, 26(4): 317–328.
  8. 8. Yu J. J., Li V. O., A social spider algorithm for global optimization, Applied Soft Computing 30 (2015) 614–627.
  9. 9. Dhiman G., Kumar V., Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems, Knowledge-Based Systems 165 (2019) 169–196.
  10. 10. Yang X.-S., Deb S., Cuckoo search via levy flights, in: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), IEEE, 2009.
  11. 11. Alsattar H. A., Zaidan A. A., Zaidan B. B., Novel metaheuristic bald eagle search optimisation algorithm, Artificial Intelligence Review (Jul. 2019)
  12. 12. Harifi S., Khalilian M., Mohammadzadeh J., Ebrahimnejad S., Emperor penguins colony: a new metaheuristic algorithm for optimization, Evolutionary Intelligence 12 (2) (2019) 211–226
  13. 13. Jain M., Singh V., Rani A., A novel nature-inspired algorithm for optimization: Squirrel search algorithm, Swarm and Evolutionary Computation 44 (2019) 148–175.
  14. 14. Lamy J.-B., Artificial feeding birds (AFB): A new metaheuristic inspired by the behavior of pigeons, Springer International Publishing, 2018, pp. 43–60
  15. 15. Heidari A. A., Mirjalili S., Faris H., Aljarah I., Mafarja M., Chen H., Harris hawks optimization: Algorithm and applications, Future Generation Computer Systems 97 (2019) 849–872.
  16. 16. Kaveh A., Kooshkebaghi M., Artificial coronary circulation system; a new bio-inspired metaheuristic algorithm, Scientia Iranica (Apr. 2019).
  17. 17. Tan W.-H., Mohamad-Saleh J., Normative fish swarm algorithm (NFSA) for optimization, Soft Computing (May 2019)
  18. 18. Masadeh R., Sharieh B. A., A., Sea lion optimization algorithm, International Journal of Advanced Computer Science and Applications 10 (5) (2019)
  19. 19. Salgotra R., Singh U., The naked mole-rat algorithm, Neural Computing and Applications 31 (12) (2019) 8837–8857
  20. 20. Sharma H., Hazrati G., Bansal J. C., Spider monkey optimization algorithm, in: Studies in Computational Intelligence, Springer International Publishing, 2018, pp. 43–59.
  21. 21. Morais R. G., Mourelle L. M., Nedjah N., Hitchcock birds inspired algorithm, in: Computational Collective Intelligence, Springer International Publishing, 2018, pp. 169–180.
  22. 22. Arora S., Singh S., Butterfly optimization algorithm: a novel approach for global optimization, Soft Computing 23 (3) (2018) 715–734.
  23. 23. Wang G.G., Deb S., Cui Z., Monarch butterfly optimization, Neural Computing and Applications 31 (7) (2015) 1995–2014.
  24. 24. Shadravan S., Naji H., Bardsiri V., The sailfish optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems, Engineering Applications of Artificial Intelligence 80 (2019) 20–34.
  25. 25. Simon D., Biogeography-based optimization, IEEE Transactions on Evolutionary Computation 12 (6) (2008) 702–713.
  26. 26. Storn R., Price K. Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 1997, 11(4): 341–359.
  27. 27. Huning A., ARSP: Archiv für Rechts- und Sozialphilosophie / Archives for Philosophy of Law and Social Philosophy 62 (2) (1976) 298–300.
  28. 28. Koza J. R., Genetic Programming: On the Programming of Computers by Means of Natural Selection (Complex Adaptive Systems), A Bradford Book, 1992.
  29. 29. Holland J. H., Genetic algorithms, Scientific American 267 (1) (1992) 66–73
  30. 30. El-Abd M., Global-best brain storm optimization algorithm, Swarm and Evolutionary Computation 37 (2017) 27–44.
  31. 31. Moosavian N., Roodsari B. K., Soccer league competition algorithm: A novel meta-heuristic algorithm for optimal design of water distribution networks, Swarm and Evolutionary Computation 17 (2014) 14–24.
  32. 32. Kumar M., Kulkarni A. J., Satapathy S. C., Socio evolution & learning optimization algorithm: A socio-inspired optimization methodology, Future Generation Computer Systems 81 (2018) 252–272.
  33. 33. Bodaghi M., Samieefar K., Meta-heuristic bus transportation algorithm, Iran Journal of Computer Science 2 (1) (2018) 23–32.
  34. 34. Salih S. Q., Alsewari A. A., A new algorithm for normal and large-scale optimization problems: Nomadic people optimizer, Neural Computing and Applications (Oct. 2019).
  35. 35. Ghorbani N., Babaei E., Exchange market algorithm, Applied Soft Computing 19 (2014) 177–187
  36. 36. Balochian S., Baloochian H., Social mimic optimization algorithm and engineering applications, Expert Systems with Applications 134 (2019) 178–191.
  37. 37. Rao R., Savsani V., Vakharia D., Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems, Computer-Aided Design 43 (3) (2011) 303–315.
  38. 38. Singh P. R., Elaziz M. A., Xiong S., Ludo game-based metaheuristics for global and engineering optimization, Applied Soft Computing 84 (2019) 105723.
  39. 39. Kaveh A., Dadras A., A novel meta-heuristic optimization algorithm: Thermal exchange optimization, Advances in Engineering Software 110 (2017) 69–84.
  40. 40. Rashedi E., Nezamabadi-pour H., Saryazdi S., GSA: A gravitational search algorithm, Information Sciences 179 (13) (2009) 2232–2248
  41. 41. Mirjalili S., SCA: A sine cosine algorithm for solving optimization problems, Knowledge-Based Systems 96 (2016) 120–133.
  42. 42. Jeong S., Kim P., A population-based optimization method using newton fractal, Complexity 2019 (2019) 1–9
  43. 43. Kirkpatrick S., Gelatt C. D., Vecchi M. P., Optimization by simulated annealing, Science 220 (4598) (1983) 671–680 pmid:17813860
  44. 44. Erol O. K., Eksin I., A new optimization method: Big bang–big crunch, Advances in Engineering Software 37 (2) (2006) 106–111.
  45. 45. Hatamlou A., Black hole: A new heuristic optimization approach for data clustering, Information Sciences 222 (2013) 175–184.
  46. 46. Anita A. Yadav AEFA: Artificial electric field algorithm for global optimization, Swarm and Evolutionary Computation 48 (2019) 93–108
  47. 47. Gilyén A., Arunachalam S., Wiebe N., Optimizing quantum optimization algorithms via faster quantum gradient computation, in: Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, 2019, pp. 1425–1444.
  48. 48. Eskandar H., Sadollah A., Bahreininejad A., Hamdi M., Water cycle algorithm–a novel metaheuristic optimization method for solving constrained engineering optimization problems, Computers & Structures 110–111 (2012) 151–166.
  49. 49. Feng X., Liu Y., Yu H., Luo F., Physarum-energy optimization algorithm, Soft Computing (Sep. 2017). pmid:26242590
  50. 50. Faramarzi A., Heidarinejad M., Stephens B., Mirjalili S., Equilibrium optimizer: A novel optimization algorithm, Knowledge-Based Systems (2019) 105190.
  51. 51. Sorensen K., (2015), Metaheuristics—the metaphor exposed. Intl. Trans. in Op. Res., 22: 3–18.
  52. 52. Ho Y. and Pepyne D., “Simple explanation of the no-free-lunch theorem and its implications,” J. Opt. Theory Appl., vol. 155, pp. 549–570,2002.
  53. 53. Askari Q., Younas I. and Saeed M., Political Optimizer: A novel socio-inspired meta-heuristic for global optimization, Knowledge-Based Systems (2020), https://doi.org/10.1016/j.knosys.2020.105709.
  54. 54. Melvix J. L., Greedy politics optimization: Metaheuristic inspired by political strategies adopted during state assembly elections, in: 2014 IEEE International Advance Computing Conference (IACC), IEEE, 2014.
  55. 55. Borji A., A new global optimization algorithm inspired by parliamentary political competitions, in: MICAI 2007: Advances in Artificial Intelligence, Springer Berlin Heidelberg, pp. 61–71.
  56. 56. Lv W., He C., Li D., Cheng S., Luo S., Zhang X., Election campaign optimization algorithm, Procedia Computer Science 1 (1) (2010) 1377–1386.
  57. 57. Long W., Wu T., Jiao J., et al. Refraction-learning-based whale optimization algorithm for high-dimensional problems and parameter estimation of PV model. Engineering Application of Artificial Intelligence, 89 (2020) 103457:1–14
  58. 58. Long W., Wu T., Cai S., et al. A Novel Grey Wolf Optimizer Algorithm with Refraction Learning. IEEE Access, 2019, 7: 57805–57819.
  59. 59. David W., Stanley L. 2000. Applied Logistic Regression. Wiley Inter-science Press, New York.
  60. 60. Yao X., Liu Y., Lin G. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 1999, 3(2): 82–102.
  61. 61. Digalakis J., Margaritis K. On benchmarking functions for genetic algorithms. International Journal of Computer Mathematics, 2001, 77(4): 481–506.
  62. 62. Gaviano M., Lera D. Test functions with variable attraction regions for global optimization problems. Journal of Global Optimization, 1998, 13(2): 207–233.
  63. 63. Jamil M., Yang X.S. A literature survey of benchmark functions for global optimisation problems. International Journal of Mathematical Modelling and Numerical Optimization, 2013, 4(2): 150–194.
  64. 64. Mirjalili S., Lewis A. S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm and Evolutionary Computation, 2013, 9(4): 1–14.
  65. 65. Liang J. J., Qu B. Y., and Suganthan P. N., “Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization,” Zhengzhou Univ., Henan Province, China, Tech. Rep. 201311, 2014.
  66. 66. Gupta S. and Deep K., “A novel random walk grey wolf optimizer,” Swarm Evol. Comput., vol. 44, pp. 101–112, Feb. 2019.
  67. 67. Hansen N., Müller S. D., and Koumoutsakos P., “Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES),” Evol. Comput., vol. 11, no. 1, pp. 1–18, Mar. 2003. pmid:12804094
  68. 68. Hu H., Bai Y., and Xu T., “Improved whale optimization algorithms based on inertia weights and theirs applications,” Int. J. Circuits Syst. Signal Proc., vol. 11, pp. 12–26, Jan. 2017.
  69. 69. Wang Y., Cai Z., and Zhang Q., “Differential evolution with composite trial vector generation strategies and control parameters,” IEEE Trans. Evol. Comput., vol. 15, no. 1, pp. 55–66, Feb. 2011.
  70. 70. Akbari R., Hedayatzadeh R., Ziarati K., and Hassanizadeh B., “A multiobjective artificial bee colony algorithm,” Swarm Evol. Comput., vol. 2, pp. 39–52, Feb. 2012.
  71. 71. Zou F., Wang L., Hei X., Chen D., and Yang D., “Teaching-learning-based optimization with dynamic group strategy for global optimization,” Inform. Sci., vol. 273, pp. 112–131, Jul. 2014.
  72. 72. Mlakar U., Fister I. Jr., and Fister I., “Hybrid self-adaptive cuckoo search for global optimization,” Swarm Evol. Comput., vol. 29, pp. 47–72, Aug. 2016.
  73. 73. Garg V. and Deep K., “Performance of Laplacian biogeography-based optimization algorithm on CEC 2014 continuous optimization benchmarks and camera calibration problem,” Swarm Evol. Comput., vol. 27, pp. 132–144, Apr. 2016.