
Self-Averaging Property of Minimal Investment Risk of Mean-Variance Model

Abstract

In portfolio optimization problems, the minimum expected investment risk is not always smaller than the expected minimal investment risk. That is, using a well-known approach from operations research, it is possible to derive a strategy that minimizes the expected investment risk, but this strategy does not always result in the best rate of return on assets. Prior to making investment decisions, it is important for an investor to know the potential minimal investment risk (or the expected minimal investment risk) and to determine the strategy that will maximize the return on assets. We use the self-averaging property to analyze the potential minimal investment risk and the concentrated investment level for the strategy that gives the best rate of return. We compare the results from our method with those obtained by the operations research approach and with those obtained by a numerical simulation using the optimal portfolio. The results of our method and of the numerical simulation are in agreement, but they differ from those of the operations research approach.

Introduction

Investment is one of the most common economic activities, and it is defined as an activity in which it is expected that future remuneration will more than repay the cost [1–3]. The uncertainty involved in investments cannot be removed, and in general, a greater risk corresponds to a greater expected return. In this paper, we consider the portfolio optimization problem, which is a mathematical formulation of risk management for investments. The portfolio optimization problem is based on the framework of risk diversification management, which was introduced by Markowitz in 1952; it is the topic of some of the most important and most active research in mathematical finance, and various models have been proposed [2–5]. For instance, Markowitz proposed a rule for investing in several securities in order to diversify: when the expected return and the invested assets are held constant, the best strategy minimizes the variance of the return (the investment risk). Markowitz also analytically derived the investment strategy that minimizes the investment risk. Konno and Yamazaki proposed the mean-absolute deviation model, whose risk function is defined not by the variance of the return but as the sum of the absolute errors in each period; it has also been shown that the optimal solutions of the mean-variance model and of the mean-absolute deviation model are in agreement [4]. Rockafellar and Uryasev proposed an expected shortfall model that is based on an index measuring the expected loss beyond a chosen confidence level; this model considers the downside risk of stochastically fluctuating gross earnings [5].

In recent decades, the portfolio optimization problem has been studied using analytical approaches that were developed in cross-disciplinary fields other than operations research [6–9]. Ciliberti and Mézard used the replica analysis method developed in spin glass theory to analyze the typical behaviors of the risk functions of the mean-absolute deviation model and the expected shortfall model [6]. Pafka and Kondor compared the distribution of the eigenvalues of the variance-covariance matrix defined by the return rates obtained from the dealings market with the limit distribution obtained by assuming independent return rates; they also quantitatively analyzed the correlation between assets using random matrix theory, which was developed in mathematical statistics and quantum chaos [7]. Shinzato and Yasuda used a belief propagation method, originally developed as a decoding algorithm, to create an algorithm that can derive the optimal solution with computational complexity proportional to the square of the number of investment assets [8]. Wakai, Shinzato, and Shimazaki analyzed the typical behaviour of the minimal investment risk and the concentrated investment level of Markowitz’s mean-variance model for the cases in which the return rates in the random matrix ensemble were independently and identically drawn from a normal distribution, a uniform distribution, and an exponential distribution [9].

Although studies have used methods from random matrix theory and statistical mechanical informatics to analyze the potential risks of the portfolio optimization problem [10], this has been done without a mathematical proof that the investment risk and the concentrated investment level possess the self-averaging property, which is what justifies evaluating the optimal solution in this way (self-averaging is discussed further below) [11]. It is not obvious that these indicators are self-averaging. If they are, the potential risk of an investment system can indeed be analyzed in this way, but it is important to verify this, since the results are not always in agreement with those produced using operations research. Thus it is necessary to consider these problems systematically.

Therefore, in this paper, we provide a mathematical proof of the self-averaging property and discuss the validity of the analytical procedure that is widely used in operations research approaches to the portfolio optimization problem. To do this, we reformulate the portfolio optimization problem in a probabilistic framework. We also consider two scenarios, neither of which has been previously addressed, for the optimization of stochastic phenomena. We introduce the concept of self-averaging, and we then use it to analyze the potential risk of an investment system and determine the optimal investment strategy. We validate our proposed approach by comparing it with the results obtained by the standard operations research method and with those of a numerical simulation, and finally, we summarize the problems of using the operations research approach for a problem with this mathematical structure.

This paper is organized as follows. In the next section, we mathematically formulate the portfolio optimization problem and discuss an easy game that optimizes stochastic phenomena; this presents the viewpoint that we will use to analyze the potential risk of an investment system. The following section presents the concepts we will use, such as those from statistical mechanics and probabilistic inequalities, and summarizes the self-averaging property, which is an important feature of the optimal investment strategy. We present our results and compare them with those of other methods, as discussed above. In the final section, we present a summary and discuss areas of future work.

Model setting and optimization for stochastic phenomena

Markowitz’s mean-variance portfolio selection

In this subsection, we present the mean-variance model, which is one of the most commonly used models for the portfolio optimization problem. We begin by considering a stable investment market with N investment outlets, where w_k represents the portfolio (or investment ratio) of asset k (= 1, ⋯, N), and x_{kμ} denotes the return rate of asset k in scenario μ (= 1, ⋯, p). For simplicity, we do not exclude short selling, that is, −∞ < w_k < ∞, and we assume that the probability distribution of the return rate is known for each asset. In Markowitz’s mean-variance model, given p scenarios, the investment risk is defined to be the sum of squares of the differences between the gross return for each scenario, Σ_{k=1}^{N} w_k x_{kμ}, and its expectation, Σ_{k=1}^{N} w_k E[x_{kμ}]; determining an investment strategy by minimizing the risk creates a hedge. That is to say, the investment risk of a portfolio of N assets, w = (w_1, ⋯, w_N)^T ∈ R^N, is defined as 𝓗(w|X) = (1/2) Σ_{μ=1}^{p} ( (1/√N) Σ_{k=1}^{N} w_k x̄_{kμ} )², (1) where T denotes the transpose of a matrix or vector, and E[f(x)] is the expectation of f(x). Since we have assumed that the probability distribution of the return rate of each asset is known, we represent the modified return rate as x̄_{kμ} = x_{kμ} − E[x_{kμ}] and the return rate matrix as X = {x̄_{kμ}/√N} ∈ 𝓜_{N×p}. Also, note that although we introduce the coefficient 1/√N in Eq (1) for simplicity of the discussion below, since (1/√N) Σ_{k=1}^{N} w_k x̄_{kμ} is the summation of N random variables x̄_{kμ} w_k (w_k can be interpreted as the coefficient of the random variable x̄_{kμ}), even if we do not assume that the return rates of the assets are independent, if w is fixed, the correlation between the returns is small, and the third and higher moments of the return rates are finite, then we expect that as the number of investment outlets N increases, the rescaled gross return asymptotically approaches a multidimensional Gaussian distribution according to the central limit theorem.

In the mean-variance model, in the absence of constraints (such as budgets), an obvious optimal portfolio is obtained by minimizing the risk function 𝓗(w|X) with w_1 = ⋯ = w_N = 0. Since this is equivalent to not investing, there is no investment risk; however, in this paper, we use the budget constraint Σ_{k=1}^{N} w_k = w^T e = N. (2) Moreover, although in the actual management of assets it is necessary to impose expected return restrictions in addition to budget constraints, for simplicity, we will consider only budget constraints. Therefore, the portfolio optimization problem is formulated as determining the portfolio w that minimizes 𝓗(w|X), that is, the risk function in Eq (1) under the constraint of Eq (2). In the case of p > N, the optimal solution can be analytically determined: w* = N J^{−1} e / (e^T J^{−1} e), (3) where the unit vector e = (1, 1, ⋯, 1)^T ∈ R^N and J^{−1} is the inverse of the variance-covariance matrix J = {J_ij} = XX^T ∈ 𝓜_{N×N}, where element i, j of matrix J is J_ij = (1/N) Σ_{μ=1}^{p} x̄_{iμ} x̄_{jμ}. (4) If p ≤ N, then since matrix J is not a regular (invertible) matrix, the optimal solution of this portfolio optimization problem cannot be uniquely determined.
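As a numerical illustration, the closed-form optimum of Eq (3) can be checked directly; the sketch below assumes i.i.d. standard normal return rates (so the known means are zero) and the 1/N normalization of J described above, and all variable names are our own.

```python
import numpy as np

# A minimal sketch of the optimal portfolio of Eq (3) under the budget
# constraint of Eq (2); the i.i.d. standard normal returns and the 1/N
# normalization of J are our assumptions, chosen to match the text.
rng = np.random.default_rng(0)
N, p = 100, 300                       # p > N, so J is invertible
X = rng.standard_normal((N, p))       # raw return rates (zero mean)
J = X @ X.T / N                       # variance-covariance matrix, Eq (4)

e = np.ones(N)
Jinv_e = np.linalg.solve(J, e)        # J^{-1} e without forming the inverse
w_star = N * Jinv_e / (e @ Jinv_e)    # optimal portfolio, Eq (3)

budget = w_star.sum()                 # equals N by construction, Eq (2)
risk = 0.5 * w_star @ J @ w_star / N  # minimal investment risk per asset
```

For α = p/N = 3, the risk per asset comes out near 1, anticipating the self-averaging result derived later in the paper.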

Using the definition of 𝓗(w|X) in Eq (1), for each scenario μ, we can estimate the sum of squares of the difference between the gross earnings and its expectation; this can be interpreted as the investment potential of portfolio w, and the concentrated investment level q_w is defined as follows [6, 8, 9]: q_w = (1/N) Σ_{k=1}^{N} w_k². (5) With an equipartition investment strategy w = (1, 1, ⋯, 1)^T ∈ R^N, we obtain q_w = 1; with a concentrated investment strategy, for example, investing only in asset 1, w = (N, 0, ⋯, 0)^T ∈ R^N, q_w = N is obtained; if one investor invests equally in m of N possible outlets, q_w = N/m. Thus we have 1 ≤ q_w ≤ N, (6) and as portfolio w approaches equipartition, q_w decreases to 1, and as it approaches a concentrated investment strategy, q_w increases.
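The three examples above can be checked mechanically, reading Eq (5) as q_w = (1/N) Σ_k w_k² (our reading of the text's examples; the helper name is ours).

```python
import numpy as np

# Concentrated investment level q_w of Eq (5) for the three example portfolios.
def q_w(w):
    return float(np.mean(w ** 2))

N, m = 10, 5
equipartition = np.ones(N)                        # q_w = 1
concentrated = np.zeros(N); concentrated[0] = N   # q_w = N
m_outlets = np.zeros(N); m_outlets[:m] = N / m    # q_w = N / m

print(q_w(equipartition), q_w(concentrated), q_w(m_outlets))  # -> 1.0 10.0 2.0
```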

We note that although Σ_{k=1}^{N} w_k = 1 is widely used as a budget constraint in operations research, we do not use it in this paper. Since the optimal solution to the portfolio optimization problem under this widely used budget constraint is w̃ = J^{−1} e / (e^T J^{−1} e), and the optimal solution under the constraint of Eq (2) is w* = N J^{−1} e / (e^T J^{−1} e), the relation w* = N w̃ holds; that is, in the optimal portfolios of the two formulations, the relative investment ratios are in agreement. Furthermore, the concentrated investment level q_w can be interpreted as an indicator of diversification when using the budget constraint of Eq (2).

Optimization for stochastic phenomena

In this subsection, we consider this optimization problem from a different viewpoint. We analyze the behaviour of the minimal investment risk ɛ and the concentrated investment level q_w of the mean-variance model, and we discuss the optimization of stochastic phenomena, which has not been addressed in the operations research approach to this problem. Let us consider the following variant of the well-known game of rock-paper-scissors. Rule 1: Two subjects, Alice and Bob, play rock-paper-scissors 300 times. Rule 2: Alice can freely choose to display rock, paper, or scissors. On the other hand, Bob’s choice is randomly assigned by the roll of a fair die: rock when the die shows 1 or 2, paper for 3 or 4, and scissors for 5 or 6. Moreover, Alice knows that Bob’s choice is randomly and independently determined by the die. Rule 3: The winner gains a point, the loser loses a point, and if they tie, there is no change to either score. We now consider whether Alice can be expected to win overall.

(a) Two subjects simultaneously hold out their hands to indicate rock, paper, or scissors.

First, we consider the ordinary case. Since Alice does not know Bob’s choice, she assumes that each possibility has equal probability. Thus, if Alice also chooses according to a roll of a die, the expected total acquired score is 0. Similarly, if Alice chooses only rock, the expected score is 0. We will use the following notation: r_A (resp. r_B) is the probability that Alice (resp. Bob) chooses rock, p_A (resp. p_B) is the probability that Alice (resp. Bob) chooses paper, and s_A (resp. s_B) is the probability that Alice (resp. Bob) chooses scissors; note that when Bob chooses uniformly, the expectation of the total acquired score is 0 for any choice of (r_A, p_A, s_A). That is, if Alice does not have prior knowledge of Bob’s choice, neither of them can win (for a sufficient number of trials), and the expectation is that they tie. However, if the probabilities of Bob’s choices are not equal (e.g., (r_B, p_B, s_B) = (2/3, 1/6, 1/6)), then Alice should choose (r_A, p_A, s_A) = (0, 1, 0). In this case, the expected total acquired score for Alice is 150. Generally speaking, even if Alice does not have prior knowledge of Bob’s choice, if she knows the probabilities of his choices, she can choose in such a way as to maximize her expected score.
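Alice's expected score against the biased Bob can be computed from the payoff matrix of the game; the matrix layout below is our own encoding of Rule 3.

```python
import numpy as np

# Rows: Alice's hand, columns: Bob's hand, in the order rock, paper, scissors;
# entries are Alice's score from Rule 3.
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

bob = np.array([2/3, 1/6, 1/6])      # Bob's biased die from the text
alice = np.array([0.0, 1.0, 0.0])    # Alice always plays paper

per_game = alice @ payoff @ bob      # expected score per game (1/2)
total = 300 * per_game               # expected total over 300 games (~150)
```

A uniform choice for Alice (or for Bob) drives the expectation back to 0, as claimed in the text.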

(b) Alice has prior knowledge of Bob’s choice.

We now consider the case where Alice makes her choice after learning what Bob will display. Her goal is to maximize the expectation of her total score. With added constraints, her expected total score will be larger than 0; without constraints, it will be 300.

(c) There is a constraint on the number of times that rock, paper, and scissors can each be chosen.

We now consider the case where Alice’s choices are constrained; for instance, they must each be chosen an equal number of times (i.e., 100 times). With this constraint, the expected total acquired score is 0 for case (a) (Alice does not have prior knowledge of Bob’s choice), but for case (b) (Alice has prior knowledge of Bob’s choice), it is 500/3. That is, if she has prior knowledge, she can take protective action.

(d) Five sets of 300 sessions.

Finally, we consider the case in which the two subjects play five sets of 300 games. If Alice has no prior knowledge of Bob’s choices, her expected total acquired score is again 0. If she has prior knowledge and there are no constraints, her expected score is 1500. If there is a constraint such that Alice must make the same choice throughout each set, her expected score is 0 for case (a) (Alice does not have prior knowledge of Bob’s choice), but 5000/9 for case (b) (Alice has prior knowledge of Bob’s choice). As in case (c) (Alice’s choices are constrained), it is easy to see that the expectation of Alice’s total score for case (a) is not larger than it is for case (b).

In conclusion, for both case (c) (Alice’s choices are constrained) and case (d) (five sets of 300 games and Alice makes the same choice each time), if Alice has prior knowledge of Bob’s choice, her score will be higher than if she has no such knowledge. That is, if Alice has prior knowledge, she can produce a better strategy.

We would like to make one more point, which will be further discussed below. Cases (a) and (b) (respectively, Alice does not or does have prior knowledge of Bob’s choice) are analogous to annealed and quenched disordered systems in statistical mechanics [10, 12]. In an annealed disordered system, the indicator f(w|X) is first averaged over the random X in the disordered system, and then the averaged indicator E[f(w|X)] is optimized in order to assess the behaviour of the system. In the rock-paper-scissors example, the indicator f(w|X) corresponds to Alice’s total acquired score, w corresponds to Alice’s choices (or strategy), the random X corresponds to Bob’s choices, and case (a) corresponds to the annealed disordered system. On the other hand, in a quenched disordered system, f(w|X) is first optimized subject to a restriction for each realization of the random X included in the disordered system, and then the optimized indicator is averaged over the random X in order to assess the behaviour of the system; this corresponds to case (b) of the rock-paper-scissors example. More generally, when optimizing an indicator f(w|X) for a stochastic phenomenon, it matters in which order the averaging and optimizing occur. When maximizing, it is necessary to precisely estimate two kinds of indicators, f_a = max_w E[f(w|X)] and f_q = E[max_w f(w|X)]. Since max_w f(w|X) ≥ f(w|X) holds for any w and any realization of X, averaging both sides and then maximizing the right-hand side over w gives f_q ≥ f_a; the left-hand side does not depend on w, so no further maximization is needed. When minimizing, we have f_a = min_w E[f(w|X)] and f_q = E[min_w f(w|X)], and in a way similar to the above, we obtain f_a ≥ f_q, that is, E[min_w f(w|X)] ≤ min_w E[f(w|X)]. (7)
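Eq (7) can be illustrated on a toy problem of our own construction: f(w|X) = (w − X)² with X standard normal and w restricted to a finite grid.

```python
import numpy as np

# Annealed order: average over X first, then optimize over w.
# Quenched order: optimize over w for each X, then average.
rng = np.random.default_rng(1)
W = np.linspace(-2.0, 2.0, 41)           # candidate strategies
X = rng.standard_normal(100_000)         # realizations of the randomness

f = (W[:, None] - X[None, :]) ** 2       # f(w|X) on the grid
f_annealed = f.mean(axis=1).min()        # min_w E[f(w|X)]
f_quenched = f.min(axis=0).mean()        # E[min_w f(w|X)]

assert f_quenched <= f_annealed          # Eq (7)
```

Optimizing after seeing each realization can only do better on average, which is exactly the annealed/quenched gap discussed above.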

Operations research approach for portfolio optimization

From the above argument, we see that optimization of stochastic phenomena is handled differently for an annealed disordered system than for a quenched disordered system. Using this core concept, let us reconsider the portfolio optimization problem. In the standard analytical approach of operations research to the portfolio optimization problem, one first averages the risk function 𝓗(w|X) over the return rates on the assets and then minimizes the expectation of the risk function E[𝓗(w|X)] under a budget constraint. Here, for simplicity, we presume that each return rate is independently and identically distributed with a standard normal distribution. Thus, the expectation of the correlation between asset i and asset j is E[x̄_{iμ} x̄_{jν}] = δ_{ij} δ_{μν}. (8) Using this, the expected investment risk function E[𝓗(w|X)] with a return rate on the assets is E[𝓗(w|X)] = (α/2) Σ_{k=1}^{N} w_k² = (Nα/2) q_w, (9) where the ratio α = p/N is used. In addition, from the symmetry of this model, the optimal investment strategy for Eq (9), using the budget constraint of Eq (2), describes an equipartition investment strategy. The minimum expected investment risk per asset is evaluated as follows: ɛ_OR = min_w E[𝓗(w|X)]/N = α/2. (10) The concentrated investment level is q_w^OR = 1. (11) This analytical approach, which is widely used in operations research, does not provide insight into the optimal investment strategy in an actual market; the reason is not just that the model was simplified by assuming that the return rates are independently and identically distributed with a standard normal distribution. Since this analytical approach is equivalent to case (a) of the rock-paper-scissors game and to the annealed disordered system, it is not clear that this approach could be used to minimize the investment risk 𝓗(w|X) with respect to a realistic individual return rate matrix X, that is, to find w* = argmin_w 𝓗(w|X). In particular, the equality argmin_w 𝓗(w|X) = argmin_w E[𝓗(w|X)] is not always satisfied.
As we discussed with the rock-paper-scissors game, if we average the investment risk over the return rates, we can avoid the complication of optimizing for individual return rates; on the other hand, this approach does not evaluate the optimal strategy based on the individual return rates. Even though it is not mathematically guaranteed that the solution minimizing the expected investment risk also minimizes the investment risk 𝓗(w|X) for each set of return rates, this approach is widely used in operations research, and it might suggest a misleading investment strategy.

On the other hand, let us consider case (b) of the rock-paper-scissors game and the quenched disordered system. In a stable investment market, even if at μ = 0 one had prior information about the probability distribution of the return rate of each asset during the next period (μ = 1 to μ = p), since it is not possible to know the actual return rates, it is difficult to select the optimal investment strategy. However, if we have prior information about the return rates, as discussed in the rock-paper-scissors example, we can minimize the investment risk and obtain an optimal investment strategy. In particular, if p/N ≤ 1, it is well known that the optimal solution is a linear combination of the eigenvectors corresponding to the minimal eigenvalue of the variance-covariance matrix J, and the investment risk per asset is 0, because the minimal eigenvalue of J is 0, matrix J being singular in this case. We next consider the concentrated investment level q_w. Let 𝓥 be the variance of the sample variance of the return rates of an asset; it is evaluated as follows: 𝓥 = 2/p. (12) When α = p/N is small, 𝓥 is large, and one should invest heavily in blue-chip assets for which the return rates have smaller sample variances than those of the other N investment outlets; in this way, the risk is decreased, and the optimal investment strategy is asymptotically close to a concentrated investment strategy, namely q_w ≫ 1.

When p/N > 1, using the optimal solution in Eq (3), the two indicators can be analytically assessed. That is, the minimal investment risk per asset ɛ(X) and the concentrated investment level q_w(X) can be written as follows: ɛ(X) = 𝓗(w*|X)/N = N / (2 e^T J^{−1} e), (13) q_w(X) = (1/N) w*^T w* = N e^T J^{−2} e / (e^T J^{−1} e)², (14) where we use the explicit return rate matrix as the argument since these indicators depend on the return rate matrix. The variance-covariance matrix J = XX^T ∈ 𝓜_{N×N} has already been defined. In actual investments, assuming fair dealing, since we do not have prior knowledge of the actual return rates, we cannot precisely determine these two indicators. However, we can evaluate the potential risk in an investment system and thus support the strategy of an investor. In order to provide useful insight, we need to precisely analyze ɛ(X) and q_w(X). For the reasons noted here, we assume that during the initial period, we have prior knowledge of the return rates; although this assumption is unrealistic, we will show below that it is not required in order to evaluate the potential of an investment system. Although we need to assess the optimal solution, or the inverse of the variance-covariance matrix, in order to assess the potential risk of an investment market, it is difficult to do this since the computational complexity of finding the inverse matrix is proportional to the cube of the matrix size N, which is the number of investment outlets. In addition, we would need to find the inverse matrix for each return rate matrix in order to evaluate the minimal investment risk of each set; since the minimal investment risk fluctuates randomly with the return rates, we would then need to average the minimal investment risk over the return rate matrix X. We now note that we can use the self-averaging property to simplify the evaluation of the potential investment risk.
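The gap between the two orders of operations can be seen numerically; this sketch (normalizations and names are our assumptions) estimates the expected minimal risk per asset of Eq (13) by Monte Carlo and compares it with the minimal expected risk α/2 of Eq (10).

```python
import numpy as np

# Quenched estimate: optimize for each drawn return rate matrix, then average.
rng = np.random.default_rng(2)
N, p, trials = 100, 300, 20              # alpha = p / N = 3
alpha = p / N
e = np.ones(N)

risks = []
for _ in range(trials):
    X = rng.standard_normal((N, p))
    J = X @ X.T / N
    Jinv_e = np.linalg.solve(J, e)
    risks.append(0.5 * N / (e @ Jinv_e))  # eps(X) of Eq (13)

eps_quenched = float(np.mean(risks))
eps_annealed = alpha / 2                  # Eq (10)
assert eps_quenched < eps_annealed        # consistent with Eq (7)
```

With α = 3, the quenched average lands near 1, visibly below the annealed value 3/2.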

Preliminaries

We first prepare some mathematical tools to enable discussion of the self-averaging property of the minimal investment risk.

Statistical mechanics

First, using the Boltzmann distribution at inverse temperature β (> 0), which is widely used in statistical mechanics [13], the posterior probability of portfolio w given return rate matrix X, P(w|X), is defined as follows: P(w|X) = P_0(w) e^{−β𝓗(w|X)} / Z(β, X), (15) where the prior probability P_0(w) is 1 if portfolio w satisfies Eq (2), and it is 0 otherwise; e^{−β𝓗(w|X)} is the likelihood function; and Z(β, X), the partition function, is a normalization constant defined as follows: Z(β, X) = ∫ dw P_0(w) e^{−β𝓗(w|X)}. (16) From this, it is found that the posterior probability P(w|X) satisfies the properties of a probability measure, that is, P(w|X) ≥ 0 and ∫ dw P(w|X) = 1. Furthermore, it is well known that w* = argmax_w P(w|X), which is obtained using maximum a posteriori estimation, is consistent with the portfolio obtained by minimizing the investment risk function 𝓗(w|X). By taking the limit of the inverse temperature β, we obtain w_i* = lim_{β→∞} ∫ dw P(w|X) w_i, (17) where w_i* is the optimal investment ratio for asset i. Thus, we can average the portfolio w and the investment risk 𝓗(w|X) using the posterior probability P(w|X) and allow the inverse temperature β to become sufficiently large: lim_{β→∞} P(w|X) = δ(w − w*), (18) lim_{β→∞} ∫ dw P(w|X) 𝓗(w|X) = min_w 𝓗(w|X), (19) where δ(u) is the Dirac delta function, and this holds for any function f for which the corresponding integral is defined (see appendix 2). From this reformulation and using the posterior probability defined in Eq (15), the portfolio optimization problem can be solved in the framework of probabilistic reasoning.

Chernoff inequality

Next, we introduce one of the probability inequalities, the Chernoff inequality, as follows. For a random variable Y with a known probability measure and a constant number η, the probability of the event Y ≥ η satisfies the following inequality for any u > 0 [14, 15]: Pr[Y ≥ η] ≤ e^{−uη} E[e^{uY}]. (20) This is easily proved; for example, consider the step function Θ(W), which is 1 if W ≥ 0 and 0 otherwise. For u > 0, we have Θ(W) ≤ e^{uW}. From this, we derive Pr[Y ≥ η] = E[Θ(Y − η)] ≤ e^{−uη} E[e^{uY}]. In a similar way, for Pr[Y ≤ η], we obtain Pr[Y ≤ η] ≤ e^{−uη} E[e^{uY}] for u < 0.

From Eq (20), we can derive a tighter upper bound. Since the right-hand side of Eq (20) holds for an arbitrary u > 0, there necessarily exists a minimum value of the right-hand side over u > 0, and we obtain Pr[Y ≥ η] ≤ e^{−R(η)}. (21) Here R(η) is the rate function and is defined as R(η) = sup_{u>0} { uη − log E[e^{uY}] }. (22) The cumulant generating function ϕ(u) = log E[e^{uY}] is a convex function of u, and the rate function R(η), defined by the Legendre transformation of a convex function, is also a convex function. It is also known that R(η) is nonnegative, R(η) = 0 if η ≤ E[Y], and R(η) > 0 if η > E[Y]. These properties of the rate function are proved in appendix 1.
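For a standard normal Y, the cumulant generating function is ϕ(u) = u²/2, so the rate function of Eq (22) is R(η) = η²/2 for η > 0; the bound of Eq (21) can then be checked against Monte Carlo tail frequencies.

```python
import numpy as np

# Chernoff bound Pr[Y >= eta] <= exp(-R(eta)) for Y ~ N(0, 1).
rng = np.random.default_rng(3)
Y = rng.standard_normal(1_000_000)

for eta in (0.5, 1.0, 2.0):
    empirical = np.mean(Y >= eta)        # tail frequency
    bound = np.exp(-eta ** 2 / 2)        # e^{-R(eta)}, R(eta) = eta^2 / 2
    assert empirical <= bound            # Eq (21)
```

The bound is loose by a polynomial factor (Chernoff ignores the Gaussian tail's prefactor) but captures the correct exponential rate.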

Self-averaging property supported by large deviation theory

When the portfolio w is distributed according to the posterior probability P(w|X) defined in Eq (15), the probability that the investment risk per asset 𝓗(w|X)/N is less than or equal to a constant ɛ satisfies the Chernoff inequality; that is, it satisfies (23), where ũ is a positive number and Z(ũ, X) is defined by Eq (16) (with β replaced by ũ). Thus, the probability inequality of the tighter upper bound of Eq (23) is derived using the following rate function: (24), and we have Pr[𝓗(w|X)/N ≤ ɛ] ≤ e^{−NR(ɛ)} [16]. In a similar way, for the probability that the investment risk per asset is greater than or equal to ɛ, Pr[𝓗(w|X)/N ≥ ɛ] ≤ e^{−NR̄(ɛ)} is also obtained, where we use the rate function (25). In order to analyze the rate functions in Eqs (24) and (25), it is also necessary to assess the partition functions, which depend on the return rate matrix X. Based on the definition in Eq (16), assessing these partition functions analytically is more difficult than assessing the optimal solution analytically. In order to resolve this difficulty, we consider the cumulant generating function of the investment risk, or the Helmholtz free energy f(β, X), defined by the following equation: f(β, X) = −(1/(Nβ)) log Z(β, X). (26) The Helmholtz free energy f(β, X) fluctuates randomly with the probability of the return rate matrix X. Thus, it is necessary to evaluate the Chernoff inequality for the Helmholtz free energy and its rate function: (27), where n > 0. In a similar way, we have (28), where n < 0. In conclusion, we obtain the two probability inequalities for the free energy, with the rate functions (29) (30). In both inequalities, it is necessary to analyze E[Z^n(β, X)].

Replica analysis and numerical simulation

Similarity to the Hopfield model

In this subsection, in order to determine whether we can use replica analysis to evaluate E[Z^n(β, X)], let us briefly consider the problem of recalling a pattern stored in a neural network constructed of N neurons; the mathematical structure of this problem is similar to that of the portfolio optimization problem [10, 17]. Let S_k be the state of neuron k; then S_k = 1 if neuron k has fired and S_k = −1 otherwise. Additionally, x_{kμ} (k = 1, ⋯, N; μ = 1, ⋯, p) is the memory of neuron k for pattern μ among the p stored patterns, and it is randomly assigned ±1 with equal probability.

Then, for p patterns, the Hebb rule is defined as follows: J_ij = (1/N) Σ_{μ=1}^{p} x_{iμ} x_{jμ}, (31) where J_ij is the correlation between neuron i and neuron j. It is well known that the neuron states S that minimize the Hamiltonian 𝓗(S|X) in Eq (32) are consistent with the stored patterns: 𝓗(S|X) = −(1/2) Σ_{i≠j} J_ij S_i S_j. (32) If the neuron state S is consistent with pattern 1, that is, S_k = x_{k1}, the Hamiltonian 𝓗(S|X) can be written as 𝓗(S|X) = −(N/2) Σ_{μ=1}^{p} m_{μ1}² + p/2, (33) where the overlap between pattern μ and pattern ν is normalized by the number of neurons N and satisfies m_{μν} = (1/N) Σ_{k=1}^{N} x_{kμ} x_{kν} ≈ δ_{μν}. (34) Intuitively, each pattern is orthogonal to the others, since the stored patterns are independent and randomly assigned. This problem of recalling patterns stored in a neural network and of counting the number of identifiable patterns is called the associative memory problem, and the model defined in Eq (32) is called the Hopfield model.
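The near-orthogonality in Eq (34) is easy to verify for random ±1 patterns; the O(1/√N) size of the off-diagonal overlaps is the standard central-limit estimate.

```python
import numpy as np

# Overlap matrix m_{mu nu} = (1/N) sum_k x_{k mu} x_{k nu} for random patterns.
rng = np.random.default_rng(4)
N, p = 10_000, 5
X = rng.choice([-1, 1], size=(N, p))

overlap = X.T @ X / N                     # p x p matrix, close to the identity
off_diag = overlap - np.eye(p)

assert np.abs(np.diag(overlap) - 1).max() < 1e-12   # m_{mu mu} = 1 exactly
assert np.abs(off_diag).max() < 0.05                # O(1/sqrt(N)) fluctuations
```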

In the analysis of the Hopfield model, the upper limit of the number of identifiable patterns is estimated using E[Z^n(β, X)], which evaluates the learning potential of the neural network. We also note the mathematical similarity between this model and the mean-variance model, which indicates that we can adapt the analytical approach used for the Hopfield model; that is, we can use replica analysis to solve the portfolio optimization problem and to assess the potential of the investment system.

Main results obtained in replica analysis

For the detailed calculations of the replica analysis, please see appendix 2. We take the number of investment outlets N large with α = p/N ∼ O(1). For n ∈ N, we have (35), where k_a, q_{wab}, and their conjugate variables are order parameters, e = (1, ⋯, 1)^T ∈ R^n is a constant vector, I ∈ 𝓜_{n×n} is the identity matrix, and Extr_A f(A) denotes the extremum of f(A) with respect to A. From this, the extrema with respect to k, Q_w, and the conjugate order parameters are assessed as follows: (36) (37) (38), where D = e e^T ∈ 𝓜_{n×n} is a square matrix all of whose components are 1. Based on these results, we do not need to assume the replica symmetric ansatz with respect to the order parameters of this model. Thus, substituting these results into Eq (35), we have (39). Here we should note that in appendix 2, we require that the replica number n in Eq (35) be a natural number; that is, since E[Z^n(β, X)] can be estimated comparatively easily for n ∈ N, the replica number n in Eq (39) should also be a natural number. However, in the optimization in Eqs (29) and (30), we need n ∈ R to adequately discuss the solution. Thus, we assume here that the replica number n in Eq (39) is a real number and use this to discuss our approach in detail. In the subsection below, we compare this result with numerical simulations to justify this treatment.

The two rate functions are calculated as follows: (40) (41), where (42) (43). This result satisfies the properties of a rate function, as shown in appendix 1. Moreover, using the Gibbs inequality s − 1 − log s ≥ 0, if α > 1, then in the limit as the number of investment outlets N becomes sufficiently large, we obtain (44) (45). From Eqs (44) and (45), since f(β, X) is localized around a constant value, (46) is verified for a realistic set of return rates. Namely, the Helmholtz free energy per asset f(β, X), which is a function of the random variable X, takes a definite value in the limit of sufficiently large N. Thus, we have (47). In addition, because of this localization around the definite value, f^m(β, X) = E[f^m(β, X)] is also satisfied for any moment m. This property, in which a statistic or function of a random variable localizes around a definite value (or its average), is called the self-averaging property. By substituting Eq (47) into Eq (26), we obtain (48) and the two rate functions (49) (50). Thus, for sufficiently large N, since the investment risk per asset is also localized around a definite value, the investment risk is self-averaging. Moreover, for sufficiently large β, from Eq (17) we can derive the minimal investment risk per asset, as follows: ɛ = (α − 1)/2. (51)
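Self-averaging is visible in simulation: at fixed α, the sample-to-sample fluctuation of the minimal risk per asset shrinks as N grows. The sketch below uses our normalizations from the earlier sections.

```python
import numpy as np

# Standard deviation of eps(X) across independent return rate matrices,
# for increasing N at fixed alpha = 2.
rng = np.random.default_rng(5)

def eps(N, p):
    X = rng.standard_normal((N, p))
    J = X @ X.T / N
    e = np.ones(N)
    return 0.5 * N / (e @ np.linalg.solve(J, e))

stds = [np.std([eps(N, 2 * N) for _ in range(30)]) for N in (25, 100, 400)]
assert stds[0] > stds[-1]    # fluctuations decrease with N
```

The means stay near (α − 1)/2 = 1/2 while the spread collapses, which is precisely the localization of ɛ(X) around ɛ described above.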

Furthermore, since ɛ(X) can also be derived analytically from an identical equation, we have validated our method in another way (see appendix 2). In addition, because of the self-averaging property of the investment risk, we can ignore the dependency of the investment risk ɛ(X) on the return rate matrix X, and so we replace ɛ(X) with ɛ. In a similar way, q_w(X) is also self-averaging, and so q_w = α/(α − 1) (52) is obtained, where q_w(X) has been replaced by q_w.

We also note that since the minimal investment risk per asset and the concentrated investment level are both self-averaging (so that their dependency on the return rate matrix X can be ignored), we can estimate the potential of this investment system. In a stable investment market, this implies that the minimal investment risk with respect to a realistic return rate averaged over an investment period and the minimal investment risk defined by the return rate distribution are in agreement, since the minimal investment risk is self-averaging. Because of this, we do not need the assumption of the quenched disordered system that during the initial period we have prior knowledge of the return rates; that is, we only need to know a priori the previous return rates. This is another advantage of the self-averaging property.

Comparison with results obtained by the operations research approach

In this subsection, we compare the two indicators that were derived in Eqs (10) and (11) using the analytical approach of operations research with the two indicators derived in the above subsection using the self-averaging property, that is, ɛ = (α − 1)/2 if α > 1 and ɛ = 0 otherwise, and qw = α/(α − 1) if α > 1 and qw ≫ 1 otherwise. Thus, for any α, we have (53) (54) First, the minimal expected investment risk ɛOR is not smaller than the expected minimal investment risk ɛ; that is, Eq (53) is consistent with the relationship in Eq (7). Next, comparing the two concentrated investment levels, in the analytical procedure of operations research the risks of the individual investment outlets are averaged, so the fluctuations of the return rate matrix cancel out, and the resulting optimal strategy is equipartition investing. On the other hand, when using our proposed method, since it is possible to find the optimal solution for each investment outlet, more is invested in the outlets whose return rates have little variation, especially when α is small, and this implies that the optimal strategy is concentrated investing.

Furthermore, we provide another intuitive interpretation using a different mathematical argument. By the definition of qw and the N eigenvalues λk of the matrix J = XXT ∈ 𝓜N × N, (k = 1, ⋯, N, λ1 ≤ λ2 ≤ ⋯ ≤ λN), we have qw = E[λ−2]/(E[λ−1])2 and , where ; see Appendix 3 for the derivation. In addition, since the minimum eigenvalue of the asymptotic distribution approaches +0 as α → +1, if m eigenvalues are regarded as minimum eigenvalues, where m ∼ O(1), then by using L’Hôpital’s rule, we can estimate the asymptotic form of qw as follows: (55) Since E[λ−2] increases faster than (E[λ−1])2, qw increases. This is consistent with our finding that qw = α/(α − 1) diverges as α → 1. Moreover, if α ≫ 1, then 1 ≤ qw ≤ (λmax/λmin)2, and (56) where the maximum and minimum asymptotic eigenvalues are and , respectively. This is also consistent with our finding that qw = α/(α − 1) → 1 as α → ∞.
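This eigenvalue characterization of qw is easy to check numerically. The sketch below (the 1/N normalization of J and the sample sizes are our assumptions; the ratio qw itself is scale invariant) computes qw = E[λ−2]/(E[λ−1])2 from sampled variance-covariance matrices and confirms that qw is larger when α is close to 1 and close to 1 when α is large:

```python
import numpy as np

def qw_from_eigenvalues(N, alpha, rng):
    """q_w = E[lambda^-2] / (E[lambda^-1])^2 over the eigenvalues of
    J = X X^T / N (the 1/N normalization is an assumption of this sketch;
    the ratio itself is scale invariant)."""
    p = int(alpha * N)
    X = rng.standard_normal((N, p))
    lam = np.linalg.eigvalsh(X @ X.T / N)    # eigenvalues of the covariance matrix
    return float(np.mean(lam**-2) / np.mean(lam**-1)**2)

rng = np.random.default_rng(1)
N = 300
qw2 = np.mean([qw_from_eigenvalues(N, 2.0, rng) for _ in range(10)])
qw8 = np.mean([qw_from_eigenvalues(N, 8.0, rng) for _ in range(10)])
print(qw2, qw8)   # q_w is larger at small alpha and close to 1 at large alpha
```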

Numerical simulation

Although we presented a theoretical discussion of the potential of an investment system using the self-averaging property of the investment risk per asset ɛ and the concentrated investment level qw, we assumed in the above subsection that the replica number n in Eq (39) was a real number. In the previous subsection, we presented some mathematical interpretations of our findings. However, it is not guaranteed mathematically that the replica number n ∈ R is applicable; we thus need to verify this assumption in order to legitimize the findings based on our proposed method. In this subsection, we perform a numerical simulation, and we then compare the results of our proposed method, the numerical results, and the results from the analytical operations research procedure.

In this numerical simulation, the number of investment outlets was N = 103, and the number of scenarios was p ∈ [1200, 8000]; the scenario ratio was α ∈ [1.2,8.0]. In addition, we assessed J−1 = (XXT)−1, the inverse of the variance-covariance matrix defined by the randomly assigned return rate matrix X; the return rates on assets were independently and identically distributed with a standard normal distribution. We then solved Eq (3) for the optimal portfolio in order to estimate the minimal investment risk per asset ɛ(X) and the concentrated investment level qw(X). Finally, we averaged them over 100 sets of the return rate matrix.
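The simulation just described can be sketched as follows (with a smaller N than in the paper to keep the run short; the J = XX^T/N normalization, the closed forms ɛ = (α − 1)/2 and qw = α/(α − 1) used for comparison, and the function name are assumptions of this sketch, not the paper's code):

```python
import numpy as np

def simulate(N, alpha, trials, rng):
    """Average the minimal investment risk per asset eps(X) and the
    concentrated investment level q_w(X) over sampled return rate matrices,
    using the optimal portfolio w* = N J^{-1} e / (e^T J^{-1} e) under the
    budget constraint sum_i w_i = N (J = X X^T / N is an assumed normalization)."""
    p = int(alpha * N)
    e = np.ones(N)
    eps, qw = [], []
    for _ in range(trials):
        X = rng.standard_normal((N, p))      # i.i.d. standard normal return rates
        J = X @ X.T / N
        Jinv_e = np.linalg.solve(J, e)
        w = N * Jinv_e / (e @ Jinv_e)        # optimal portfolio
        eps.append(0.5 * w @ J @ w / N)      # minimal investment risk per asset
        qw.append(w @ w / N)                 # concentrated investment level
    return float(np.mean(eps)), float(np.mean(qw))

rng = np.random.default_rng(42)
eps_hat, qw_hat = simulate(N=200, alpha=2.0, trials=20, rng=rng)
print(eps_hat, qw_hat)   # compare with (alpha-1)/2 = 0.5 and alpha/(alpha-1) = 2
```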

In Fig 1, three minimal investment risks per asset and three concentrated investment levels are shown. The horizontal axis indicates the scenario ratio α = p/N, and the vertical axis shows the two indicators. The results of our proposed approach are indicated by solid lines, the numerical results are indicated by markers with error bars, and the results of the operations research approach are indicated by dotted lines. The results of our method (solid lines) and the numerical results (markers with error bars) are in agreement. For this numerical simulation, we considered the case in which we have a priori knowledge of the return rates. Thus, it turns out that our proposed approach can precisely assess the potential of an investment system. On the other hand, the dotted lines are based on a scenario in which the expected utility is maximized, and these results do not coincide with the others. Unfortunately, this indicates that the approach based on maximizing the expected utility is unable to determine the optimal investment strategy and may instead provide a misleading portfolio which is not guaranteed to be optimal with respect to a particular set of return rates.

thumbnail
Fig 1. The investment risk ɛ and the concentrated investment level qw are shown for the case in which the return rate x is independently and identically distributed with a standard normal distribution.

The horizontal axis indicates the scenario ratio α = p/N, and the vertical axis shows the investment risk and the concentrated investment level qw. The two solid lines (results obtained by our proposed approach) and the two dotted lines (results obtained by the operations research approach) are theoretical results. The markers with error bars are the numerical results evaluated using the optimal solution according to a randomly assigned return rate matrix. In the simulation, the number of investment outlets N was 103, and we averaged over 100 return rate matrices. This figure shows that the results obtained by our proposed approach (solid lines) and the numerical results (markers with error bars) are in agreement. On the other hand, the results obtained by the operations research approach (dotted lines) do not coincide with the others. Thus, unfortunately, the approach based on maximizing the expected utility cannot propose an optimal investment strategy.

https://doi.org/10.1371/journal.pone.0133846.g001

Summary and future work

In this paper, we analyzed the potential of an optimal solution to the mean-variance model, which is widely used for the portfolio optimization problem; in particular, we analyzed its potential investment risk and the concentrated investment level using self-averaging and replica analysis. We used the example of the rock-paper-scissors game with two subjects as a context for the optimization of stochastic phenomena. We noted that the minimal expected investment risk (from our discussion of an annealed disorder system) is not always in agreement with the expected minimal investment risk (from our discussion of a quenched disorder system). We discussed whether the optimal investment strategy which was derived using the analytical procedure that is widely used in operations research and the maximization of the expected utility based on an annealed disorder system are valid for use with actual return rates on assets. From the relationship in Eq (7), based on the more general formulation, we determined that the minimal expected investment risk obtained by the operations research approach was not smaller than the expected minimal investment risk. However, it does not provide useful information for an investment strategy since it underestimates the expected minimal investment risk. The main reasons for this are as follows. (1) At the start of an investment, there is no a priori knowledge of the future return rates on the assets. (2) The computational complexity required to assess the inverse of the optimal solution matrix increases with the cube of the number of investment outlets. (3) In order to precisely assess the potential investment risk, it is necessary to average the minimal investment risk with the actual return rate. 
In order to solve these problems, we used probabilistic reasoning to reformulate the portfolio optimization problem; we also used the Chernoff inequality and replica analysis to determine a tighter upper bound for the cumulative distribution of the investment risk. From an analytical result for the rate function that was derived from replica analysis, we clarified the self-averaging property of the investment risk. Thus, we determined that the minimal investment risk for the case in which complete information on the return rates is known a priori is in agreement with the minimal investment risk for the case in which the return rate matrix is averaged. We are thus able to evaluate the potential investment risk in an actual investment system. From this, we have solved the first and third problems that we listed above. Furthermore, by using replica analysis, we estimated two indicators for the optimal portfolio: the investment risk and the concentrated investment level; this was done without resolving the optimal portfolio directly, and this resolved the second problem. We found that the concentrated investment level obtained by our proposed approach was consistent with the intuitively obvious choice for an optimal investment strategy; we considered cases in which the scenario ratio approached 1 and in which the ratio was sufficiently large. We compared the results of our proposed method, the results obtained by the operations research approach, and the results obtained from a numerical simulation. The results of our method were in agreement with the results of the numerical simulation, but they did not coincide with the results of the operations research approach. 
As discussed above, although our findings are based on the mean-variance model with only a budget constraint and a return rate which is independently and identically distributed with a standard normal distribution, the relationship in Eq (7) and our findings imply that the approach based on maximizing the expected utility is not able to determine the most desirable strategy for actual investments.

In our future work, although for simplicity we considered only a budget constraint in this paper, in order to make our method more realistic, we wish to determine the optimal solutions under other constraints, such as limits on expected gross earnings, short-selling restrictions, and upper and lower limits for each asset. In particular, we wish to consider whether these problems can be resolved by using other analytical approaches of statistical mechanical informatics, such as the belief propagation method, random matrix integrals, or the Markov chain Monte Carlo method. It is also necessary to confirm the self-averaging property of the risk function for cases other than the mean-variance model, such as for the mean-absolute deviation model or the expected shortfall model (for these models, typical behaviours of the investment risk were evaluated by Ciliberti and Mézard). In addition, in order to clarify the mathematical structure of this optimization problem, we assumed that the return rates on assets were independently and identically distributed with a standard normal distribution; however, in an actual investment market, the return rate is not always independently and identically distributed. We would thus like to quantify the effects of this correlation on the indicators. Thus, although several models used in operations research have been proposed for assessing investment systems, in many cases, only the expected utility has been maximized; that is, only the annealed disorder system has been analyzed. The portfolio optimization problem is an undeveloped field, and many issues have not yet been considered.

Appendix

1. Properties of the rate function

We introduce the properties of the rate function R(η); these properties support the discussion in the above section. In this appendix, for convenience, we define the function (57) Moreover, we will discuss only Pr[Y ≥ η] ≤ e−R(η) with R(η) = maxu > 0{uη − ϕ(u)}, but we note that Pr[Y ≤ η] ≤ e−R(η) with R(η) = maxu < 0{uη − ϕ(u)} can be verified in a similar way.

R(η) ≥ 0 holds

For any η ∈ R, R(η) ≥ 0 holds. For any u > 0, R(η) = maxu > 0{R(η, u)} ≥ R(η, u), and in the limit as u goes to +0 we have limu → +0 R(η, u) = 0, so R(η) ≥ 0. In addition, if R(η) < 0 were permitted, then Pr[Y ≥ η] ≤ 1 < e−R(η) would hold trivially; thus R(η) ≥ 0 guarantees that the rate function provides a nontrivial upper bound for the cumulative distribution.

When E[Y] ≥ η, R(η) = 0

When η is less than or equal to E[Y], the expectation of Y, that is, E[Y] ≥ η, then R(η) = 0. Since euY is a convex function of Y, for any u > 0, ϕ(u) = log E[euY] ≥ log euE[Y] = uE[Y], and we obtain 0 ≥ uE[Y] − ϕ(u). If η = E[Y], then we obtain R(E[Y]) = 0 from R(E[Y], u) = uE[Y] − ϕ(u) ≤ 0. In addition, if E[Y] ≥ η, then 0 ≥ uE[Y] − ϕ(u) ≥ uη − ϕ(u) = R(η, u), and we obtain R(η) = 0. That is, this property intuitively implies E[Y] = sup{η ∣ R(η) = 0}.

R(η) is a convex function

R(η), which is derived from the Legendre transformation of a convex function, is a convex function of η. Thus, for any λ ∈ [0, 1), R(η) and R(ξ) satisfy (58) where u is nonnegative. The inequality λR(η) + (1 − λ)R(ξ) ≥ R(λη + (1 − λ)ξ) is then obtained by maximizing both sides of Eq (58) over u > 0.
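These three properties can be illustrated for a concrete choice of Y. Taking Y to be a standard normal variable (our example, not the paper's), ϕ(u) = log E[euY] = u2/2 exactly, so R(η) = maxu > 0{uη − ϕ(u)} should equal η2/2 for η > 0, vanish for η ≤ E[Y] = 0, be convex, and bound the upper tail:

```python
import numpy as np

# Example: Y ~ N(0,1), so phi(u) = log E[e^{uY}] = u^2 / 2 exactly.
u = np.linspace(1e-6, 10.0, 20001)           # grid for the maximization over u > 0

def rate(eta):
    """R(eta) = max_{u>0} { u*eta - phi(u) }, approximated on a grid.
    The clamp at 0 implements the supremum value attained as u -> +0."""
    return float(max(np.max(u * eta - u**2 / 2.0), 0.0))

etas = [0.5, 1.0, 2.0]
R = [rate(t) for t in etas]                  # should equal eta^2 / 2

# Chernoff bound Pr[Y >= eta] <= exp(-R(eta)), checked by Monte Carlo
rng = np.random.default_rng(0)
Y = rng.standard_normal(1_000_000)
tails = [float((Y >= t).mean()) for t in etas]
print(R, tails, [float(np.exp(-r)) for r in R])
```

Here rate(0.0) and rate(-1.0) both return 0, illustrating E[Y] = sup{η ∣ R(η) = 0}.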

2. Calculation of replica analysis

In this appendix, we analytically evaluate E[Zn(β, X)] using replica analysis; however, in general, it is difficult to estimate E[Zn(β, X)] for any n ∈ R [10]. Direct evaluation of E[Zn], the n-th moment of a nonnegative random variable Z ≥ 0, at any n ∈ R is not possible unless the random variable follows a log-normal distribution [18, 19]. In particular, it is not easy to assess the partition function Z(β, X), defined in integral form in Eq (16), with a fixed return rate matrix X. If we could calculate this directly, we could easily solve Eqs (24) and (25) without needing to refer to the Helmholtz free energy; however, it is difficult to directly evaluate a partition function with a fixed return rate matrix in this model. Instead, since it is comparatively easy to calculate E[Zn(β, X)] for any replica number n ∈ N, we first evaluate E[Zn(β, X)] at n ∈ N and then use this to estimate E[Zn(β, X)] at any replica number n ∈ R. Intuitively, for instance, it is possible to expand (a + b)^2 = a^2 + 2ab + b^2 and (a + b)^3 = a^3 + 3a^2 b + 3ab^2 + b^3 with finitely many terms, although it is not possible to obtain such a finite expansion of (a + b)^2.5. Nevertheless, this quantity lies between the square and the cube of (a + b), that is, (a + b)^2 < (a + b)^2.5 < (a + b)^3 when a + b > 1. Thus, as a first step, we estimate E[Zn(β, X)] at replica number n ∈ N, and then use this to estimate E[Zn(β, X)] at replica number n ∈ R. This approach is called replica analysis.
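The log-normal exception mentioned above can be made concrete. For Z = eG with G a standard normal variable, E[Zn] = exp(n2/2) holds for every real n, so the integer moments extend smoothly to real replica numbers; the following sketch (sample size and seed are arbitrary choices of ours) compares Monte Carlo estimates with this closed form:

```python
import numpy as np

# Log-normal Z = e^G, G ~ N(0,1): E[Z^n] = exp(n^2 / 2) for all real n.
rng = np.random.default_rng(7)
Z = np.exp(rng.standard_normal(2_000_000))

moments = {}
for n in (1.0, 1.5, 2.0):
    mc = float(np.mean(Z**n))                # Monte Carlo n-th moment
    exact = float(np.exp(n**2 / 2.0))        # closed form, valid for real n
    moments[n] = (mc, exact)
    print(n, mc, exact)
```

The estimate at the non-integer replica number n = 1.5 interpolates between the first and second moments, which is the behaviour the replica trick assumes.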

We can evaluate E[Zn(β, X)] at n ∈ N, as follows: (59) where wa = (w1a, ⋯, wNa)T ∈ RN, (a = 1, 2, ⋯, n). Moreover, since P0(wa) does not involve the return rate matrix X, it can be separated from the average over X.

We now introduce the Dirac delta function δ(x) in order to average over x. The Dirac delta function δ(x) is one of the most widely used generalized functions, defined for any f(x) as (60) This integral returns f(w), since δ(v − w) is nonzero only when its argument v − w is 0. Thus, if a constant function is used in Eq (60), for example, f(x) = 1, then (61) In addition, the Fourier transform of the Dirac delta function, (62) can be obtained if the imaginary unit is employed. Thus, the integrand in Eq (59) can be written as (63) Substituting this into Eq (59), we obtain (64) where ua = (u1a, ⋯, upa)T ∈ Rp and va = (v1a, ⋯, vpa)T ∈ Rp, (a = 1, 2, ⋯, n). Since the return rate x is independently and identically distributed with a standard normal distribution, the expectation over x is (65) where . Thus we obtain (66) We then substitute (67) and obtain (68) From this technique, we obtain (69) where ∑a, b denotes the double sum over the replica indices a, b = 1, ⋯, n, and ExtrA f(A) denotes the extremum of f(A) with respect to A. In order to satisfy the constraint in Eq (67), we use the auxiliary variable . Moreover, Qw = {qwab} ∈ 𝓜n × n and are the order parameter matrices.

We can separate the integral over uμa, vμa from the integral over wka. We first evaluate the integral over uμa, vμa: (70) where I ∈ 𝓜n × n is the identity matrix. Because this is independent of the scenario index μ, we can estimate the integral using two new vectors, u = (u1, ⋯, un)T ∈ Rn and v = (v1, ⋯, vn)T ∈ Rn. On the other hand, we can calculate the integral over wka: (71) where k = (k1, ⋯, kn)T ∈ Rn, e = (1, ⋯, 1)T ∈ Rn, w is the prior probability of the portfolio, and P0(wa) is replaced by ; because this does not depend on the asset index k, we can solve the integral using a new vector w = (w1, ⋯, wn)T ∈ Rn.

We summarize this and rewrite the limit of (1/N)log E[Zn(β, X)] for a large number of investment outlets N as Φ(n): (72) Although a sufficiently large number of investment outlets N is required to guarantee that evaluating E[Zn(β, X)] by using the order parameters defined in Eqs (69) and (71) is consistent with the constraints of Eqs (67) and (2) in the replica analysis, our target indicator ɛ represents the minimal investment risk per asset; that is, since it is independent of the system size N, there is no problem in the limit as N approaches infinity. It is preferable to normalize the investment risk per asset with respect to different sizes of investment markets, since this allows the comparison of potential investment risks.

We note two important points. First, in the above subsection, we already mentioned that ɛ(X) = E[ɛ(X)], since the investment risk is self-averaging. From the above discussion, we have verified that (73) where we assume that the replica number n is a continuous number, and we use the replica trick [10, 18, 19]. This result is consistent with that of Eq (51). Second, from the definition in Eq (67), since qwaa is consistent with the concentrated investment level qw in Eq (5), . However, since this is an optimal solution with a sufficiently large β, (74) Although we used replica analysis to analyze the minimal investment risk, we can also obtain the concentrated investment level of the optimal portfolio. Fortunately, in the limit of very large N, qw is finite; this is an advantage of using Eq (2) as the budget constraint.

3. Random matrix approach for minimal investment risk and concentrated investment level

We show here that it is also possible to evaluate the two indicators, ɛ and qw, by using the asymptotic eigenvalue distribution of a random matrix [9]. As in the above discussion, we consider only the case α = p/N > 1, in order to uniquely determine the optimal solution of Eq (1). Using the optimal solution defined in Eq (3), from Eqs (13) and (14), the minimal investment risk per asset ɛ and the concentrated investment level qw are rewritten as follows: (75) (76) If N is sufficiently large, we have (77) (78) where . If we can analyze g(s), then ɛ and qw can be precisely determined, and it turns out that g(s) is easy to assess by using a random matrix ensemble.

For this ensemble of random matrices, we require the following two properties: (1) when the random matrix is decomposed as X = UDV, where U ∈ 𝓜N × N and V ∈ 𝓜p × p are orthogonal matrices and D ∈ 𝓜N × p is a diagonal rectangular matrix, then U and V are independently distributed with a Haar measure; (2) when N is sufficiently large, the distribution of the eigenvalues of the variance-covariance matrix J = XXT, for any return rate matrix X, is asymptotically close to , where λk is the k-th diagonal element of DDT = diag{λ1, λ2, ⋯, λN} ∈ 𝓜N × N; if N and p simultaneously approach infinity, then it is required that α = p/N ∼ O(1). If these two properties are satisfied, then (79) Moreover, if the return rates are independently and identically distributed with a standard normal distribution, then the random matrix X satisfies the requirements for the random matrix ensemble described above [9, 20].

Next, we consider the asymptotic eigenvalue distribution. If the return rate x is independently and identically distributed, its mean and variance are respectively 0 and 1, and the higher-order moments are finite, that is, ∣E[xs]∣ < ∞, (s = 3, 4, ⋯), then the distribution of the eigenvalues of the variance-covariance matrix J = XXT of the return rate matrix is asymptotically close to (80) where δ(u) is the Dirac delta function, [u]+ = max(0, u), and [21–23]. This eigenvalue distribution ρ(λ) is called the Marčenko–Pastur law, and it can be regarded as the limiting distribution of the eigenvalues, in the same way that the central limit theorem guarantees the normal distribution as a limiting distribution.

The eigenvalues in this distribution can be easily calculated: (81) (82) where (83) (84) Thus, (85) (86) This result is consistent with the result we obtained by replica analysis and numerical simulation.
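As a numerical sanity check of this random matrix computation, the sketch below compares empirical eigenvalue moments with the Marčenko–Pastur predictions (the J = XX^T/N normalization is our assumption; under it the support is [(√α − 1)2, (√α + 1)2] and the moments are E[λ] = α, E[λ−1] = 1/(α − 1), and E[λ−2] = α/(α − 1)3):

```python
import numpy as np

# Empirical eigenvalue moments of J = X X^T / N versus the Marcenko-Pastur
# predictions E[lam] = alpha, E[1/lam] = 1/(alpha-1), E[1/lam^2] = alpha/(alpha-1)^3
# (the 1/N scaling of J is an assumption of this sketch).
rng = np.random.default_rng(3)
N, alpha = 400, 2.0
p = int(alpha * N)
X = rng.standard_normal((N, p))
lam = np.linalg.eigvalsh(X @ X.T / N)

m1 = float(lam.mean())             # predicted: alpha = 2
inv1 = float(np.mean(1 / lam))     # predicted: 1/(alpha-1) = 1
inv2 = float(np.mean(1 / lam**2))  # predicted: alpha/(alpha-1)^3 = 2
print(m1, inv1, inv2, float(lam.min()), float(lam.max()))
```

In particular, the ratio inv2/inv1^2 reproduces the concentrated investment level qw = α/(α − 1), consistent with the replica analysis and the numerical simulation.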

Supporting Information

Acknowledgments

The author thanks R. Wakai, Y. Shimazaki, and I. Kaku for their fruitful discussions. The author is also grateful to Y. Takemoto, I. Arizono, and T. Mizuno for valuable comments. This paper is an improved and expanded version of a previous paper by the same author [11], which was an unrefereed conference paper written in Japanese.

Author Contributions

Conceived and designed the experiments: TS. Performed the experiments: TS. Analyzed the data: TS. Contributed reagents/materials/analysis tools: TS. Wrote the paper: TS.

References

1. Dixit A.K., Pindyck R.S., 1994. Investment under Uncertainty, Princeton University Press.
2. Luenberger D.G., 1997. Investment Science, Oxford University Press.
3. Markowitz H., 1952. Portfolio selection. Journal of Finance, 7(1), 77–91; Markowitz H., 1959. Portfolio Selection: Efficient Diversification of Investments. John Wiley and Sons, New York.
4. Konno H., Yamazaki H., 1991. Mean-absolute deviation portfolio optimization model and its applications to Tokyo stock market, Management Science, 37(5), 519–531.
5. Rockafellar R.T., Uryasev S., 2000. Optimization of conditional value-at-risk, Journal of Risk, 2(3), 21–41.
6. Ciliberti S., Mézard M., 2007. Risk minimization through portfolio replication, The European Physical Journal B, 57(2), 175–180.
7. Pafka S., Kondor I., 2003. Noisy covariance matrices and portfolio optimization II, Physica A, 319, 487–494.
8. Shinzato T., Yasuda M., 2010. Statistical mechanical informatics for portfolio optimization problems, Technical Report IEICE, 110(265), 257–263; Shinzato T., Yasuda M., 2010. Belief propagation algorithm for portfolio optimization problems, preprint at http://arxiv.org/abs/1008.3746.
9. Wakai R., Shinzato T., Shimazaki Y., 2014. Random matrix approach for portfolio optimization problem, Journal of Japan Industrial Management Association, 65(1), 17–28.
10. Nishimori H., 2001. Statistical Physics of Spin Glasses and Information Processing, Oxford University Press.
11. Shinzato T., 2014. Analysis based on self-averaging of optimal solution of mean variance model, Bulletin RIMS, Kyoto University, 1912, 26–34.
12. Ma S.-K., 2000. Modern Theory of Critical Phenomena, Westview Press.
13. Nishimori H., Ortiz G., 2011. Elements of Phase Transitions and Critical Phenomena, Oxford University Press.
14. Chernoff H., 1952. A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations, Annals of Mathematical Statistics, 23(4), 493–507.
15. Gallager R.G., 1968. Information Theory and Reliable Communication, John Wiley and Sons, New York.
16. Touchette H., 2009. The large deviation approach to statistical mechanics, Physics Reports, 478, 1–69.
17. Amit D.J., Gutfreund H., Sompolinsky H., 1987. Statistical mechanics of neural networks near saturation, Annals of Physics, 173(1), 30–67.
18. Ogure K., Kabashima Y., 2009. On analyticity with respect to the replica number in random energy models: I. An exact expression for the moment of the partition function, Journal of Statistical Mechanics, P03010.
19. Tanaka T., 2007. Moment problem in replica method, Interdisciplinary Information Sciences, 13(1), 17–23.
20. Shinzato T., Kabashima Y., 2008. Perceptron capacity revisited: classification ability for correlated patterns, Journal of Physics A, 41(32), 324013.
21. Bai Z., Silverstein J.W., 2010. Spectral Analysis of Large Dimensional Random Matrices, Springer.
22. Marčenko V.A., Pastur L.A., 1967. Distribution of eigenvalues for some sets of random matrices, Matematicheskii Sbornik, 72(114), 507–536.
23. Tulino A.M., Verdú S., 2004. Random Matrix Theory and Wireless Communications, Now Publishers.