
Optimal Routing in General Finite Multi-Server Queueing Networks

Abstract

The design of general finite multi-server queueing networks is a challenging problem that arises in many real-life situations, including computer networks, manufacturing systems, and telecommunication networks. In this paper, we examine the optimal routing problem in arbitrarily configured acyclic queueing networks. The performance of the finite queueing network is evaluated with a known approximate performance evaluation method, and the optimization is done by means of a heuristic based on the Powell algorithm. The proposed methodology is then applied to determine the optimal routing probability vector that maximizes the throughput of the queueing network. We show numerical results for some networks to quantify the quality of the routing vector approximations obtained.

Introduction

The design of networks with random arrivals, random service times, multiple servers, and a finite number of buffer spaces is a challenging problem that arises in many real-life situations, e.g. in manufacturing systems [1], [2], computer networks [3], [4], public services [5], [6], call centers [7], [8], and pedestrian and vehicular traffic [9]–[12], among other situations. This problem is challenging because finite queueing networks are notoriously difficult to analyze analytically, and closed-form expressions are not easily constructed for the performance measures of such systems. Note also that the underlying network design problems involved are usually very hard to solve.

In fact, there are several distinct network design optimization problems associated with finite queueing networks. According to Daskalaki & Smith [13] the optimal network design problem can be split up into three interrelated optimization problems:

  1. The optimal topology problem (OTOP), which deals with decisions of the design of the network itself, that is, the number of nodes (e.g. workstations, warehouses, delivery points, etc.) and arcs (e.g. corridors, conveyors, escalators, etc.) and the general configuration of these two components;
  2. The optimal routing problem (OROP), which deals with determining the routing probabilities in the network defined by the first problem;
  3. The optimal resource allocation problem (ORAP), which deals with the optimal allocation of the scarce resources in the network, e.g. the number of buffers (i.e., the buffer allocation problem, BAP) and the number of servers (i.e., the server allocation problem, CAP).

These three problems are challenging and difficult optimization problems. For an arbitrary topology, the OTOP is shown to be NP-hard [14], and the same is conjectured for the general ORAP [15].

Previous work focused mainly on the ORAP in open finite acyclic queueing network settings. Both BAP and CAP are probably among the most well-known optimal resource allocation problems [16]. For instance, Cruz et al. [17] and Smith et al. [18] looked into the BAP, both in a single and in a multi-server setting, and Smith et al. [19] proposed algorithms to solve the CAP. However, the routing probabilities are usually assumed to be known beforehand for BAP and CAP [20].

The overall research objective of this paper is to build a relevant model and solution methodology for the system's throughput maximization problem. In this paper, we focus on optimizing the routing probabilities through the queueing network, i.e. the OROP. A similar research question was tackled by Daskalaki & Smith [13], who evaluated the joint effect of buffer allocation and routing on the throughput. Earlier, Gosavi & Smith [21] focused on the routing optimization problem related to the overall objective of throughput maximization. The common ground of both papers is that they used queueing networks with single servers and exponential service times [13], [21]. Kerbache & Smith [22] considered, for different product classes, the optimal routes jointly with a variant of the optimal topology problem to determine the connected arcs in a single-server queueing network setting. Unlike Daskalaki & Smith [13] and Gosavi & Smith [21], Kerbache & Smith [22] considered general arrival times, general service times, and single-server queues. Additionally, Gosavi & Smith [21] did not use the generalized expansion method (GEM) [23] as the evaluation tool in their analysis.

Specifically, we examine the OROP by means of a combination of the GEM and a heuristic based on the Powell algorithm [24], specifically for acyclic networks of M/G/c/K queues, which in Kendall notation means a queueing system with Markovian arrival rates, general service times, c parallel servers, and a total capacity of K items, including those items in service (practical applications of such queueing networks include manufacturing and service systems [25] and transportation and material handling systems [26]). The results are compared to simulations to attest to the quality of the routing vectors obtained. Another important contribution of this paper is to present helpful approximations to support swift managerial decisions regarding the optimal routing probability vectors to maximize the overall throughput in a network of finite general-service queues. We also present important empirical arguments to show that these approximations are effective.

This paper is organized as follows. The next section describes in detail the mathematical model formulation considered and elaborates on both the performance evaluation tool and the optimization procedure. In the following section, detailed results are given for the problem at hand. Finally, the last section concludes this paper with final remarks and topics for future work in the area.

Materials and Methods

Mathematical Programming Formulation

The problem is defined on a digraph $G = (V, A)$, in which $V$ is the set of vertexes (finite queues) and $A$ is the set of arcs (connections between the queues). Each vertex (queue) is characterized by Poisson arrivals, general service, and multiple servers. Mathematically, the optimal routing problem can be formulated as follows.

(OROP):
$$\max_{\mathbf{p}} \;\; \Theta(\mathbf{p}), \qquad\qquad (1)$$
subject to:
$$\sum_{j \in S(i)} p_{ij} = 1, \quad \forall i \in V, \qquad\qquad (2)$$
$$p_{ij} \ge 0, \quad \forall (i,j) \in A, \qquad\qquad (3)$$
in which $\Theta$ is the throughput, which is the objective that must be maximized, $\mathbf{p} = (p_{ij})$ is the routing probability matrix, whose optimum is the matrix that maximizes the objective function of this predefined network, and $S(i)$ is the set of succeeding vertexes of vertex $i$, that is, $S(i) = \{ j \in V : (i,j) \in A \}$.

The throughput $\Theta$ will thus be affected by the effective routing of jobs through the network, the variability of the service distribution, the number of servers, and the number of buffers. It should be clear that the above model is a highly difficult stochastic programming problem to handle, due to the inherent non-linearity of the objective function combined with the absence of any closed-form expression for the throughput $\Theta$.
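To make this non-linearity concrete, the following is a minimal back-of-envelope sketch (not the GEM used in this paper) that evaluates the throughput of a single split into two branches, under the simplifying assumptions that each branch behaves as an independent M/M/1/K queue and that blocking after service is ignored; all numerical values are illustrative only.

```python
import numpy as np

def mm1k_loss(lam, mu, K):
    """Blocking probability of an M/M/1/K queue."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

def split_throughput(p, lam=5.0, mu=(4.0, 4.0), K=(5, 5)):
    """Crude throughput of a split into two independent M/M/1/K branches:
    a fraction p of the arrivals is routed to branch 1 and (1 - p) to
    branch 2. Blocking after service (which the GEM corrects for) is ignored."""
    lam1, lam2 = p * lam, (1 - p) * lam
    return (lam1 * (1 - mm1k_loss(lam1, mu[0], K[0])) +
            lam2 * (1 - mm1k_loss(lam2, mu[1], K[1])))

for p in np.linspace(0.1, 0.9, 9):
    print(f"p = {p:.1f}  throughput ~ {split_throughput(p):.3f}")
```

Even in this simplified setting the objective is non-linear in the routing probability, with a maximum at a 50% split for symmetric branches, which foreshadows the behavior reported later in the numerical results.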

Proposed Algorithm

Figure 1 presents the proposed algorithm to solve the OROP in order to provide more insights into the interaction between the performance evaluation tool and the optimization method.

The initial routing probability vector is conveniently set to the inverse of the number of nodes after a split, $p_{ij}^{(0)} = 1/|S(i)|, \; \forall j \in S(i)$, (4), in which $|S(i)|$ is the number of succeeding nodes of node $i$, that is, the cardinality of set $S(i)$. The optimization step itself is an iteration in which new solutions are generated following the Powell logic until convergence, that is, until the difference in $\Theta$ is less than a predefined tolerance $\epsilon$.

The Powell algorithm can be described as an unconstrained optimization procedure that does not require the calculation of first derivatives of the function. Numerical examples have shown that the method is capable of minimizing functions with up to twenty variables [24], [27]. The Powell method locates the minimum of a non-linear function by successive uni-dimensional searches from an initial starting point along a set of conjugate directions, which are generated within the procedure itself. The method is based on the idea that if a minimum of a non-linear function is found along conjugate directions in a stage of the search, and an appropriate step is made in each direction, the overall step from the beginning to the p-th step is conjugate to all of the sub-directions of the search. We have seen remarkable success in the past with coupling the Powell algorithm and the GEM [19]. We now discuss the GEM in detail, as it is also the method used to obtain the performance measures for the problem studied in this paper.
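For readers who wish to prototype this coupling, the sketch below pairs a Powell search (here via SciPy; the paper uses its own Fortran implementation) with a generic throughput evaluator. The function `network_throughput` is a placeholder for whatever performance evaluation tool is available (the GEM in this paper, or the crude M/M/1/K sketch above), and the per-split softmax reparametrization is merely one convenient way of enforcing constraints (2)–(3) inside an unconstrained search; it is not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import minimize

def routing_from_unconstrained(x, splits):
    """Map unconstrained variables to per-split probability vectors
    (softmax per split), so constraints (2)-(3) hold by construction."""
    probs, k = [], 0
    for n in splits:                        # n = number of successors of a split node
        z = np.exp(x[k:k + n] - x[k:k + n].max())
        probs.append(z / z.sum())
        k += n
    return probs

def optimize_routing(network_throughput, splits, tol=1e-6):
    """Maximize the throughput over the routing probabilities with Powell's
    method. `network_throughput(probs)` must return the throughput for the
    given routing; in the paper this role is played by the GEM."""
    x0 = np.zeros(sum(splits))              # equal split at every node, Equation (4)
    res = minimize(lambda x: -network_throughput(routing_from_unconstrained(x, splits)),
                   x0, method="Powell", options={"ftol": tol, "xtol": tol})
    return routing_from_unconstrained(res.x, splits), -res.fun

# Example with the crude two-branch evaluator sketched earlier:
# best_p, best_theta = optimize_routing(lambda pr: split_throughput(pr[0][0]), splits=[2])
```

Whether the simplex constraints are handled by such a reparametrization or by another device is an implementation choice left open by the text.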

Performance Evaluation

In previous papers (see e.g. Kerbache & Smith [23], [28]) the GEM has been successfully used to evaluate the performance measures of acyclic networks of finite queues. The GEM is a robust and effective approximation technique that is basically a combination of repeated trials and node-by-node decomposition in which each queue is analyzed separately and then corrections are made in order to take into account the interrelation between the queues in the network. The GEM has three stages, Network Reconfiguration, Parameter Estimation, and Feedback Elimination, to be described as follows.

Stage I: Network Reconfiguration.

The first step in the GEM involves reconfiguring the network. An artificial vertex $h_j$ is added preceding each finite vertex $j$ in the network. The artificial vertex is added to register the blocked customers at node $j$ and is modeled as an $M/M/\infty$ queue, as shown in Figure 2.

When an entity leaves vertex $i$, vertex $j$ may be blocked, with probability $p_{K_j}$, or unblocked, with probability $(1 - p_{K_j})$. Under blocking, the entities are rerouted to vertex $h_j$ for a delay while node $j$ is busy. Vertex $h_j$ helps to accumulate the time an entity has to wait before entering vertex $j$ and to compute the effective arrival rate to vertex $j$. In other words, the ultimate goal of the GEM is to provide an approximation scheme to update the service rate at the upstream vertex $i$ so as to take into account all blocking after service caused by the downstream vertex $j$:
$$\tilde{\mu}_i^{-1} = \mu_i^{-1} + p_{K_j}\,\mu_{h_j}^{-1}, \qquad\qquad (5)$$
in which $\mu_i$ is the exponential service rate at vertex $i$, $p_{K_j}$ is the blocking probability of finite queue $j$ of size $K_j$, $\mu_{h_j}$ is the corrected exponential service rate at the artificial vertex $h_j$, and $\tilde{\mu}_i$ is the updated service rate at vertex $i$. As a final note, an important point about this process is that we do not physically modify the network; the expansion is only represented internally by the software.
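As a small numeric illustration of the service-rate update in Equation (5) as reconstructed above (the upstream mean service time is inflated by the expected blocking delay), with arbitrary example values:

```python
def updated_rate(mu_i, p_k, mu_h):
    """Equation (5): inflate the upstream mean service time 1/mu_i by the
    expected delay p_k/mu_h spent at the artificial holding vertex."""
    return 1.0 / (1.0 / mu_i + p_k / mu_h)

# An upstream node serving at rate 10, blocked 20% of the time by a downstream
# node whose holding vertex clears blocked entities at rate 5:
print(updated_rate(10.0, 0.2, 5.0))   # ~7.14: the server is effectively slower
```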

Stage II: Parameter Estimation.

In the second stage, the parameters $p_K$ and $\mu_h$ must be estimated, which is done essentially by utilizing known results from queueing theory. The index $j$ is omitted for simplicity.

Analytical results for the M/M/c/K queue provide the following expression for the blocking probability $p_K$:
$$p_K = \frac{(\lambda/\mu)^K}{c!\, c^{K-c}}\, p_0, \qquad\qquad (6)$$
in which
$$p_0 = \left[ \sum_{n=0}^{c-1} \frac{(\lambda/\mu)^n}{n!} + \sum_{n=c}^{K} \frac{(\lambda/\mu)^n}{c!\, c^{n-c}} \right]^{-1}, \qquad\qquad (7)$$
for $1 \le c \le K$.

However, the interest here is in M/G/c/K models, for which there is no exact closed-form expression for $p_K$. Approximations must then be used, and Kimura's [29] two-moment approximation has proven to be very effective in similar cases [25], [30]. For example, fixing $c = 1$, the following closed-form expression can be developed from Equation (6) for the optimal buffer size $K$ of Markovian M/M/1/K queues, as a function of the blocking probability:
$$K = \frac{\ln\!\left[\dfrac{p_K}{1 - \rho\,(1 - p_K)}\right]}{\ln \rho}, \qquad\qquad (8)$$
in which $\rho = \lambda/\mu$ is the traffic intensity.
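The block below is a reference implementation of these two standard results, Equations (6)–(7) and Equation (8) as reconstructed above, not of the paper's Fortran code; the parameter values in the calls are illustrative.

```python
import math

def mmck_blocking(lam, mu, c, K):
    """Blocking probability p_K of an M/M/c/K queue (1 <= c <= K)."""
    a = lam / mu                                   # offered load
    terms = [a**n / math.factorial(n) for n in range(c)]
    terms += [a**n / (math.factorial(c) * c**(n - c)) for n in range(c, K + 1)]
    p0 = 1.0 / sum(terms)
    return a**K / (math.factorial(c) * c**(K - c)) * p0

def mm1k_buffer(rho, pk):
    """Equation (8): smallest K giving blocking <= pk in M/M/1/K (rho < 1)."""
    K = math.log(pk / (1.0 - rho * (1.0 - pk))) / math.log(rho)
    return math.ceil(K)

print(mmck_blocking(4.0, 5.0, 1, 5))   # M/M/1/5 blocking probability
print(mm1k_buffer(0.8, 0.05))          # buffer needed for at most 5% blocking
```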

Kimura's two-moment approximation can then be used to approximate the optimal buffer size of a general-service M/G/c/K queue, Equation (9), in which $C_s^2$ is the squared coefficient of variation of the service time distribution at the queue, $\rho$ is the traffic intensity, $K_M$ is the exact buffer size for the respective Markovian system, and $\lfloor x \rceil$ denotes the nearest integer to $x$. Inverting Equation (9) to solve for $p_K$ then yields Equation (10).

This process can be extended for $c > 1$. In fact, expressions for $p_K$ for a range of values of $c$ are available [30], and further expressions are included in the software, so that a complete set of blocking probabilities is available for the configurations of interest.

Since there is no closed-form solution for $\mu_h$, the following approximation, given by Labetoulle & Pujolle [31] and obtained using diffusion techniques, is used, Equation (11), in which $r_1$ and $r_2$ are the roots of the associated polynomial, and $\lambda_j$ and $\lambda_{h_j}$ are the actual arrival rates to the finite node and to the artificial holding node, respectively. Furthermore, the arrival rate to the finite node follows from the flow balance of the expanded network.

The delay distribution of a blocked customer at the holding node has the same distribution as the remaining service time of the customer being serviced at the node doing the blocking. Using renewal theory, one can show that the remaining service time distribution has the following rate:
$$r_j = \frac{2\mu_j}{1 + \sigma_j^2\,\mu_j^2}, \qquad\qquad (12)$$
in which $\sigma_j^2$ is the service time variance, as given by Kleinrock [32]. Notice that if the service time distribution at the finite queue doing the blocking is exponential with rate $\mu_j$, then $\sigma_j^2 = 1/\mu_j^2$ and $r_j = \mu_j$, that is, the service time at the artificial node is also exponentially distributed with rate $\mu_j$. When the service time of the blocking node is not exponential, then $r_j$ will be affected by the variance $\sigma_j^2$.
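Under the reconstruction of Equation (12) above, the residual service rate can be computed as follows; the exponential case recovering $\mu$ serves as a sanity check, and the values used are illustrative.

```python
def residual_rate(mu, var):
    """Rate of the remaining (residual) service time for a server with
    service rate mu and service-time variance var (renewal-theory result)."""
    return 2.0 * mu / (1.0 + var * mu**2)

mu = 3.0
print(residual_rate(mu, 1.0 / mu**2))   # exponential service: returns mu = 3.0
print(residual_rate(mu, 0.0))           # deterministic service: 2*mu = 6.0
```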

Stage III: Feedback Elimination.

This stage simply eliminates the feedback loop at the artificial vertex $h_j$ by recomputing its service rate, so that the repeated blocking of the same entity is accounted for by a single, suitably adjusted delay rather than by explicit feedback visits. The updated service rate at $h_j$ follows directly from this correction.

Summary.

Similar equations can be established with respect to each of the finite vertexes (queues). Ultimately, we have a system of simultaneous non-linear equations in the variables $p_K$, $\mu_h$, and $\tilde{\mu}$, along with auxiliary variables such as the arrival rates to the finite and artificial nodes. Solving these equations simultaneously, we can compute all performance measures of the network.

Numerical Results and Discussion

The software implementation is currently in Fortran 77. The compiler used was DIGITAL Visual Fortran, Version 6, with the IMSL Fortran 90 MP Library, version 3.0, for Microsoft Windows, to solve the non-linear equations from the GEM. In our implementation, the convergence tolerance $\epsilon$ was set to a small value that proved to be effective based on the experiments. We first discuss the shape of the objective function. Secondly, we give more insights for a number of split structures. We end the numerical results with some complex network structures. Please bear in mind that the range of possible experiments is itself exponentially large, so we have selected a small but representative sample to present.

Shape of the Objective Function

It is interesting to analyze the shape of the objective function for the optimization problem described earlier. The case discussed here is defined as follows. We have a three-node network with a split into two branches, as seen in Figure 3. The parameters for the base case (arrival rate, service rates, numbers of servers, and capacities) are chosen such that the number of servers and the total capacity of node 1 are larger than those of the other nodes, so as to prevent node 1 from becoming a bottleneck.

We are particularly interested in the relationship between the overall throughput $\Theta$, the routing probability $p$, the arrival rate $\lambda$, and the squared coefficient of variation of node 2, $C_{s_2}^2$. Consequently, the remaining parameters are kept fixed for all nodes. The sensitivity of these settings on the throughput is not analyzed now, but is, among others, the subject of study in the next sections. Next, we enumerate all possible combinations of $p$, $\lambda$, and $C_{s_2}^2$, and then analytically obtain the corresponding throughput $\Theta$, which is shown in Figure 4 (always on the vertical axis) as a function of these parameters.

Figure 4-(a) clearly shows that the arrival rate interacts with the routing probability. For low arrival rates, the network has low utilization; consequently, different routing probabilities do not result in large changes in throughput. On the other hand, for large arrival rates, one clearly sees an optimal point with regard to the routing probability. Due to the symmetrical structure considered, a 50% split seems to be optimal here. Figure 4-(b) looks into the joint effect of changing the squared coefficient of variation together with the routing probability. Again, the inverted U-shape effect, with a maximum around the 50% routing probability, is visible. Next to this, it is clear that increasing the squared coefficient of variation from 0 to 2 reduces the overall throughput, but has a smaller impact on throughput than the routing probability. Based on this simple network structure, it is evident that the routing probabilities and the squared coefficient of variation affect the throughput to a large extent. Consequently, correctly setting the routing matrix leads to significant gains in terms of throughput.

Basic Split Networks

In this section, we further analyze the two-branch network from Figure 3 and include in our analysis the three-branch network seen in Figure 5. We are interested in assessing the influence of the number of servers, total capacities, service rates, and squared coefficients of variation of the service times in the model OROP, Equations (1)–(3). We choose to start with these two variants of a basic split structure because, from a routing allocation point of view, splits are the key building blocks in a generally configured network. The nodes after the splits are the ones of interest here. The first buffer is larger than the others, as this helps to prevent the first queue from becoming a bottleneck node. The arrival rate is set to a range of values.

Split with Two Branches.

The top part of Table 1 gives the results for two-branch split networks with unbalanced service rates and different squared coefficients of variation. In these cases, the service rate of node 2 is made either relatively lower than, equal to, or higher than the service rate of node 3. The base cases (sets B1d to B1f) show that the routing probability is roughly equal to 0.5 when the nodes after the split are identical (that is, same number of servers, capacities, service rates, and squared coefficient of variation). Moreover, this result appears to be insensitive to changes in the squared coefficient of variation of both nodes after the split. Of course, the throughput is affected (reduced) by the changes (increase) in the variability. Secondly, changing the service rate of node 2 (and keeping all other parameters equal to the base case settings) clearly shows that fast nodes receive preference over slow nodes. For example, in sets B1a to B1c (i.e., when node 2 is slower than node 3), a lower routing probability is set to node 2 (0.3334) than to node 3 (0.6666).

Rather than changing the squared coefficient of variation of both nodes after the split, we also evaluated some unbalanced cases in which only node 2 is affected by a different squared coefficient of variation (sets B1j to B1x, Table 1). For these cases, we observe that the unbalance caused by the squared coefficient of variation only slightly changes the routing probability compared to the sets with equal squared coefficients of variation (sets B1l, B1q, and B1v, Table 1). This confirms what we observed when evaluating the objective function in the previous section. As we are focusing here on small-scale networks, this conclusion does not mean that the squared coefficient of variation has little effect in general. It is interesting to see that the throughput decreases as the squared coefficient of variation goes up, even though the routing probability changes to protect against the increased uncertainty in the second node. This is more prevalent in highly loaded systems.

For the two-branch split networks, we evaluated a number of routing vectors around the optimal routing obtained. Table 2 shows that the algorithm seems to have reached the optimal allocation for the routing probabilities into nodes 2 and 3 (set B1e, Table 1). Of course, one might argue that this optimization is rather easy due to the symmetric setting of the parameters. Therefore, we did the same analysis for the same parameter settings but with a network with unbalanced service rates (set B1b, Table 1), also seen in Table 2.

Table 2. Perturbations around the optimal solution of two-branch split networks.

In conclusion, we observed in the previous results that the optimization algorithm tries to balance the flow, taking into account the differences (in service rates and squared coefficients of variation) between the two nodes after the split, which is intuitively logical as this strategy leads to the highest throughput. Moreover, the methodology seems to always find the optimal routing vector.

Split with Three Branches.

Let us now turn to the three-branch split network, Figure 5. It would be interesting to see to what extent the optimization algorithm balances the flow over the three nodes after the split and to what extent this is affected by the characteristics of the different nodes after the split. Table 3 shows a selected set of experiments done for this specific case.

Table 3 shows that for the completely symmetric case, that is, set B2c, Table 3, the routing probabilities are again symmetric, i.e. roughly one third into each branch. For the cases that are unbalanced in the squared coefficient of variation (sets B2a, B2b, B2d, and B2e, Table 3), it can be observed that the routing probabilities into the two identical nodes are close to each other. For the remaining asymmetrical cases (sets B2f to B2o, Table 3), the same conclusion again holds. The faster (in terms of a higher number of servers or higher service rates) or more regular (in terms of a lower squared coefficient of variation) the nodes are, the more they are favored, resulting in higher routing probabilities into these nodes.

Complex Networks

The simple networks discussed so far are interesting as they make it possible to show the behavior and logic of the optimization model in the presence of one split. In this section, we will evaluate some different complex topologies with regard to their routing probabilities. The first complex network considered is an extension of the two- and three-branch split networks, as depicted in Figure 6.

Table 4 gives an overview of a selected set of experiments for structure C1. The initial setting is again a balanced case (set C1a, Table 4). Additional sets of experiments involve unbalancing the service rates, the squared coefficients of variation, and the number of servers for nodes 6 and 7. With these experiments, we evaluate whether and how the methodology takes the characteristics of the complete sub-network after the split into account when determining the optimal routing vector.

We set up the experiments in such a way that either there are slow nodes (experiments C1c, C1e, C1g, and C1h) or slow subsystems consisting of three connected nodes (experiments C1b, C1d, C1f, and C1i). Based on Table 4, we observe that, in general, the slower part of the network tends to receive less flow due to a lower routing probability into the relevant part. When, after the first split at node 1, there is the choice to go to either the fast or the slow subsystem, the faster subsystem is preferred. This is very clear in experiments C1b, C1d, C1f, and C1i, in which the routing probability always favors the fastest downstream subsystem. However, if only the last nodes are different (experiments C1c, C1e, C1g, and C1h), the conclusion is different. In all these experiments, the first split is exactly half. The imbalance in the last nodes (i.e., nodes 4, 5, 6, and 7 are different) is completely absorbed in the routing probabilities at the immediately preceding nodes (i.e., nodes 2 and 3). Interestingly, this effect did not propagate upstream and did not affect the routing at the first split. Again, we see that the effect of the squared coefficient of variation on the routing probability is smaller compared to that of the number of servers or the service rates.

The second network structure, C2, has a more general structure than the other networks, as seen in Figure 7. Nodes 10 and 12 can act as bottleneck nodes, which might become overloaded depending upon the specific parameters. It is then interesting to see how the routing probabilities are adapted to avoid or reduce the workload in these bottleneck nodes.

Table 5 gives the results for a selection of parameter settings for network structure C2. The table shows that when node 10 becomes the bottleneck, routing jobs in the direction of node 10 is avoided by reducing the routing probability at node 3 (always around 0.4) and at node 5 (ranging from 0.0409 to 0.4751). On the other hand, when the characteristics of node 10 are such that it is not a bottleneck, the routing to nodes 5 and 6 is almost 50/50. Secondly, it is clear that if the last node (node 12) becomes the bottleneck, only the throughput will be reduced, as every job must pass through the last node and no routing decision can divert flow around it.

Approximations for the Routing Probabilities

From a managerial point of view, it is interesting to have some good, easy approximations that can be used to quickly set the routing probabilities. A number of possible approximations for the routing probability $p_{ij}$ on the arc $(i,j)$ after a split of vertex $i$ into its succeeding vertexes can be considered. The simplest, Equation (13), is the equal split
$$p_{ij} = \frac{1}{|S(i)|}, \quad \forall j \in S(i), \qquad\qquad (13)$$
while the refinements of Equations (14), (15), and (16) weigh each succeeding vertex by characteristics such as its service rate and its number of servers, in which $S(i)$ is the set of succeeding vertexes of vertex $i$, that is, $S(i) = \{ j \in V : (i,j) \in A \}$, and $|S(i)|$ is the cardinality of set $S(i)$.

The first approximation, Equation (13), is simple but does not use any information from the vertexes after the split; it only provides an equal spread of the throughput over the succeeding vertexes. It is expected that this approximation works well when the nodes after the split are very similar in terms of service rate, number of servers, and so on. The other approximations, Equations (14), (15), and (16), do take more information into account. Equation (16) is believed to be the most general, as it combines information regarding both the speed and the number of servers. On the other hand, no information about the squared coefficient of variation is included in any of the approximations.
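The exact forms of Equations (14)–(16) are not reproduced here; the sketch below illustrates the general idea under our reading of the description above, namely that each succeeding node is weighted by its service rate, its number of servers, or the product of the two, and the weights are then normalized. The parameter values are illustrative only.

```python
def equal_split(succ):
    """Equation (13): spread the flow evenly over the succeeding nodes."""
    return {j: 1.0 / len(succ) for j in succ}

def weighted_split(succ, mu, c, use_rate=True, use_servers=True):
    """Sketch of the refined approximations: weight each successor j by its
    service rate mu[j], its number of servers c[j], or their product
    (our reading of Equations (14)-(16)), then normalize the weights."""
    w = {j: (mu[j] if use_rate else 1.0) * (c[j] if use_servers else 1.0)
         for j in succ}
    total = sum(w.values())
    return {j: w[j] / total for j in succ}

# Illustrative two-branch split in which node 2 is half as fast as node 3:
succ, mu, c = [2, 3], {2: 5.0, 3: 10.0}, {2: 1, 3: 1}
print(equal_split(succ))            # {2: 0.5, 3: 0.5}
print(weighted_split(succ, mu, c))  # {2: 0.333..., 3: 0.666...}
```

Such a weighted split is of the same flavor as the optimal allocations reported in Table 1 for the unbalanced sets (roughly one third of the flow to the slower node), which is consistent with the observation below that the refined approximations perform best when the network is unbalanced.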

Tables 6 and 7 show that the performance of the most informed approximation, Equation (16), improves as the network becomes more unbalanced (for instance, cases B1a–B1c are unbalanced, as defined in Table 1, and for these cases the smallest deviation from the optimum is found for Equation (16); on the other hand, cases B1d–B1f are balanced, and for them all deviations are equal to 0.00%). This approximation of course takes into account the most information from the nodes after the split (while still not taking into account the squared coefficient of variation). If the nodes after the split are more alike (balanced), then the second approximation becomes favorable. On the other hand, the first approximation performs acceptably as well and could be preferred due to its easy implementation.

Conclusions and Final Remarks

In this paper, we examined the optimal routing problem in open, finite, acyclic queueing networks with a given general topology and multiple generally distributed servers. We determined the optimal routing probability vector that maximizes the throughput of an arbitrarily configured network via a combination of the generalized expansion method and a Powell optimization tool. We presented numerical results showing the merits of the approach. Approximations for the routing probability vector were also presented and evaluated.

We have considered here only the throughput as the main performance measure. It would also be interesting to evaluate the behavior of the routing algorithm when minimizing the cycle time, the work-in-process (WIP), or other performance measures. Topics for future research in the area include routing in queueing networks with cycles, e.g., to model the many important industrial systems that have reverse streams of products due to re-work, or even the extension to more general queueing networks.

Author Contributions

Conceived and designed the experiments: TvW FRBC. Performed the experiments: TvW FRBC. Analyzed the data: TvW FRBC. Contributed reagents/materials/analysis tools: TvW FRBC. Contributed to the writing of the manuscript: TvW FRBC.

References

  1. Kock AAA, Etman LFP, Rooda JE (2008) Effective process times for multi-server flowlines with finite buffers. IIE Transactions 40: 177–186.
  2. Liu R (2014) Probabilistic decomposition method on the server indices of an Mξ/G/1 vacation queue. Journal of Applied Mathematics 2014: 9 pages.
  3. Ahmed NU, Ouyang XH (2007) Suboptimal RED feedback control for buffered TCP flow dynamics in computer network. Mathematical Problems in Engineering 2007: 17 pages.
  4. Chen J, Hu C, Ji Z (2010) An improved ARED algorithm for congestion control of network transmission. Mathematical Problems in Engineering 2010: 14 pages.
  5. de Bruin AM, van Rossum AC, Visser MC, Koole GM (2007) Modeling the emergency cardiac in-patient flow: An application of queuing theory. Health Care Management Science 10: 125–137.
  6. He X, Hu W (2014) Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network. The Scientific World Journal 2014: 12 pages.
  7. Jouini O, Dallery Y, Aksin Z (2009) Queueing models for full-flexible multi-class call centers with real-time anticipated delays. International Journal of Production Economics 120: 389–399.
  8. Gontijo GM, Atuncar GS, Cruz FRB, Kerbache L (2011) Performance evaluation and dimensioning of GIX/M/c/N systems through kernel estimation. Mathematical Problems in Engineering 2011: 20 pages.
  9. Cruz FRB, Smith JM, Queiroz DC (2005) Service and capacity allocation in M/G/C/C state dependent queueing networks. Computers & Operations Research 32: 1545–1563.
  10. Cruz FRB, van Woensel T, Smith JM, Lieckens K (2010) On the system optimum of traffic assignment in M/G/c/c state-dependent queueing networks. European Journal of Operational Research 201: 183–193.
  11. Rahman K, Ghani NA, Kamil AA, Mustafa A, Chowdhury MAK (2013) Modelling pedestrian travel time and the design of facilities: A queuing approach. PLoS ONE 8: e63503.
  12. Khalid R, Nawawi MKM, Kawsar LA, Ghani NA, Kamil AA, et al. (2013) A discrete event simulation model for evaluating the performances of an M/G/C/C state dependent queuing system. PLoS ONE 8: e58402.
  13. Daskalaki S, Smith JM (2004) Combining routing and buffer allocation problems in series-parallel queueing networks. Annals of Operations Research 125: 47–68.
  14. Garey MR, Johnson DS (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman and Company.
  15. Smith JM, Daskalaki S (1988) Buffer space allocation in automated assembly lines. Operations Research 36: 343–358.
  16. Dallery Y, Gershwin SB (1992) Manufacturing flow line systems: A review of models and analytical results. Queueing Systems 12: 3–94.
  17. Cruz FRB, Duarte AR, van Woensel T (2008) Buffer allocation in general single-server queueing networks. Computers & Operations Research 35: 3581–3598.
  18. Smith JM, Cruz FRB, van Woensel T (2010) Topological network design of general, finite, multi-server queueing networks. European Journal of Operational Research 201: 427–441.
  19. Smith JM, Cruz FRB, van Woensel T (2010) Optimal server allocation in general, finite, multi-server queueing networks. Applied Stochastic Models in Business & Industry 26: 705–736.
  20. Cruz FRB, van Woensel T (2014) Finite queueing modeling and optimization: A selected review. Journal of Applied Mathematics 2014: 11 pages.
  21. Gosavi HD, Smith JM (1997) An algorithm for sub-optimal routeing in series-parallel queueing networks. International Journal of Production Research 35: 1413–1430.
  22. Kerbache L, Smith JM (2000) Multi-objective routing within large scale facilities using open finite queueing networks. European Journal of Operational Research 121: 105–123.
  23. Kerbache L, Smith JM (1987) The generalized expansion method for open finite queueing networks. European Journal of Operational Research 32: 448–461.
  24. Himmelblau DM (1972) Applied Nonlinear Programming. New York: McGraw-Hill Book Company.
  25. Smith JM (2008) M/G/c/K performance models in manufacturing and service systems. Asia-Pacific Journal of Operational Research 25: 531–561.
  26. Bedell P, Smith JM (2012) Topological arrangements of M/G/C/K, M/G/C/C queues in transportation and material handling systems. Computers & Operations Research 39: 2800–2819.
  27. Powell MJD (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives. Computer Journal 7: 155–162.
  28. Kerbache L, Smith JM (1988) Asymptotic behavior of the expansion method for open finite queueing networks. Computers & Operations Research 15: 157–169.
  29. Kimura T (1996) A transform-free approximation for the finite capacity M/G/s queue. Operations Research 44: 984–988.
  30. Smith JM (2003) M/G/c/K blocking probability models and system performance. Performance Evaluation 52: 237–267.
  31. Labetoulle J, Pujolle G (1980) Isolation method in a network of queues. IEEE Transactions on Software Engineering SE-6: 373–381.
  32. Kleinrock L (1975) Queueing Systems, Volume I: Theory. New York, NY, USA: John Wiley & Sons.