
Entropy Bounds for Hierarchical Molecular Networks

  • Matthias Dehmer,

    mdehmer@geometrie.tuwien.ac.at

    Current address: Center for Mathematics, University of Coimbra, Coimbra, Portugal

    Affiliation Institute of Discrete Mathematics and Geometry, Vienna University of Technology, Vienna, Austria

  • Stephan Borgert,

    Current address: Department of Computer Science, Darmstadt University of Technology, Telecooperation Group, Darmstadt, Germany

    Affiliation Department of Physics, University of Siegen, Siegen, Germany

  • Frank Emmert-Streib

    Affiliation Department of Biomedical Sciences, Center for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, United Kingdom

Abstract

In this paper we derive entropy bounds for hierarchical networks. More precisely, starting from a recently introduced measure for determining the topological entropy of non-hierarchical networks, we provide bounds for estimating the entropy of hierarchical graphs. Apart from bounds for estimating the entropy of a single hierarchical graph, we show that the derived bounds can also be used for characterizing graph classes. Our contribution is an important extension of previous results on the entropy of non-hierarchical networks because, in practical applications, hierarchical networks play an important role in chemistry and biology. In addition to deriving the entropy bounds, we provide a numerical analysis for two special graph classes, rooted trees and generalized trees, and thereby demonstrate not only the computational feasibility of our method but also explore its characteristics and interpretability with respect to data analysis.

Introduction

The investigation of topological aspects of chemical structures constitutes a major part of the research in chemical graph theory and mathematical chemistry [1], [2], [3], [4]. Following, e.g., [5], [6], [7], [1], [2], [8], [9], classical and current research topics in chemical graph theory include the modeling of chemical molecules by means of graphs, graph polynomials, graph-theoretical matrices, the enumeration of chemical structures, and aspects of quantitative structure analysis such as measuring the structural similarity of graphs and structural information. Further, many of the above mentioned contributions can be grouped under two thematic categories which are well known in chemistry: QSAR and QSPR. QSAR (Quantitative Structure-Activity Relationship) deals with describing pharmacokinetic processes as well as biological activity or chemical reactivity [10], [11]. In contrast, QSPR (Quantitative Structure-Property Relationship) generally addresses the problem of converting chemical structures into molecular descriptors which are relevant to a physico-chemical property or a biological activity [11], [12]. A main problem in QSPR is to investigate relationships between molecular structure and physicochemical properties, e.g., the topological complexity of chemical structures [7], [13], [14], [11].

This paper mainly deals with a challenging problem of quantitative graph analysis: deriving bounds for the entropies of hierarchical graphs. An important application area of information-theoretic methods applied to networks is, e.g., QSPR, where our main focus lies on the examination of graph classes which are widely used in chemical graph theory and computational biology. Generally, there are two main directions in quantitative graph analysis: (i) comparing and (ii) characterizing networks. Network comparison addresses the problem of measuring the structural similarity or distance between networks, see, e.g., [15], [16], [17], [18], [19], [20], [21], [22]. In contrast, characterizing a network means inferring structural network statistics which capture certain structural information of the network [23], [24], [25], [26]. Before giving a short review of information-theoretic methods for characterizing graphs [6], [7], [14], [27], [28], [29], we want to emphasize that the problem of quantifying certain structural information of systems was a starting point of an emerging field that deals with applying information-theoretic techniques to networks, e.g., for investigating living systems [30], [31], [32], [33], [34], [35]. As a foundation, SHANNON [36] extended the concept of entropy, known from thermodynamics, to the transmission of information. For this, he considered a message transmitted through information channels as a certain set of symbols, denoted as an outcome, which was selected from the ensemble of all k such sets containing the same total number of symbols N [27]. By assigning probabilities p1, p2, …, pk to the outcomes based on the quantities p_i := N_i/N, where Ni denotes the number of symbols of the i-th outcome, SHANNON characterized the entropy H as the uncertainty of the expected outcome [27]. Then, the classical SHANNON entropy formula measuring the average entropy of information per communication symbol can be expressed by

H_m = -\sum_{i=1}^{k} p_i \log(p_i).   (1)

Hm is often called the mean information. Additionally, BRILLOUIN [37] defined the total information as

I = N \cdot H_m = N \log(N) - \sum_{i=1}^{k} N_i \log(N_i).   (2)

Now, the topics we just mentioned [30], [31], [32], [33], [34], [35] have been mainly influenced by the, at that time, novel insight that an inferred or constructed graph structure can be considered as the result of a certain information process or communication between the elements of the underlying system [14], [36]. As a consequence [7], [38], Equation (1) and Equation (2) can now be interpreted as the mean information content

I_{\text{mean}}(G) = -\sum_{i=1}^{k} \frac{|V_i|}{|V|} \log\left(\frac{|V_i|}{|V|}\right),   (3)

and the total information content

I_{\text{total}}(G) = |V| \log(|V|) - \sum_{i=1}^{k} |V_i| \log(|V_i|)   (4)

of a graph G. Here, |V| denotes the number of vertices of G, k denotes the number of different (obtained) sets of vertices, |Vi| is the number of elements in the i-th set of vertices, and it holds \sum_{i=1}^{k} |V_i| = |V|.
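As a small worked example (ours, for illustration; logarithms taken to base 2): a graph with |V| = 4 vertices partitioned into k = 3 vertex classes of sizes |V1| = 2 and |V2| = |V3| = 1 yields

```latex
I_{\text{mean}}(G) = -\tfrac{2}{4}\log_2\tfrac{2}{4}
                     - \tfrac{1}{4}\log_2\tfrac{1}{4}
                     - \tfrac{1}{4}\log_2\tfrac{1}{4}
                   = 1.5\ \text{bits}, \qquad
I_{\text{total}}(G) = 4\log_2 4 - 2\log_2 2 = 6\ \text{bits},
```

which is consistent with the relation I_total(G) = |V| · I_mean(G).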

The first attempt in this direction was given by [34], who developed a technique to determine the structural information content of a graph. This technique is based on the principle of finding distinguishable vertices of a graph in order to apply SHANNON's entropy (Equation (3) and Equation (4)) for determining the information content of such a graph-based system. Also, [38], [39], [40], [41] investigated this problem by using algebraic methods, i.e., by determining the automorphism groups of graphs. We remark that the mentioned methods for measuring the structural information content of a graph-based system, e.g., [38], [39], [40], [41], [34], [35], are based on the following principle: starting from a certain equivalence criterion, a graph-based system with n elements can be partitioned into k classes, see, e.g., [14]. As a consequence, a probability distribution can be obtained that leads directly to the definition of an entropy of the system under consideration (Equation (3) and Equation (4)). Following [14], [38], [28], the structural information content of such a system is interpreted as the entropy of the underlying graph topology. As a remark, we note that graph entropy definitions which are rooted in information theory can be found in [42], [43], [44], [45].

A major contribution of this paper addresses the problem of finding bounds for the entropies of hierarchical graphs, which occur frequently in chemical graph theory and in computational and systems biology. Here, the term “hierarchical” means that we deal with graphs having a distinguished vertex called a root. To achieve this goal, we use an approach for determining the entropy of undirected and connected graphs that has recently been presented in [28]. In contrast to the classical methods outlined above, this method is based on assigning a probability value to each vertex of a graph by using a special information functional. The information functional we presented in [28] is based on metrical properties of graphs, more precisely, on so-called j-spheres. In terms of practical applications, we want to point out that the task of deriving bounds for the entropies of graphs is crucial because the exact entropy value often cannot be calculated explicitly, especially for large graphs. For this reason, entropy bounds for special graph classes help to reduce the complexity of such problems and can also be used for characterizing graphs or graph classes by information-theoretic measures.

As mentioned, hierarchical (rooted) graph structures have a large application potential in chemical graph theory and computational biology. Therefore, we restrict our analysis to such graph structures. A further reason for focusing on rooted graphs is that, to our knowledge, such a study does not yet exist. Another contribution of this paper is to demonstrate the practical ability of the used graph entropy approach [28] by interpreting the produced numerical results. Starting from two graph classes, ordinary rooted trees and so-called generalized trees [46], [47], we show that our entropy measure captures important structural information meaningfully. To summarize the main contribution of this paper, Figure (1) shows the overall approach.

Figure 1. Overall approach to derive entropy bounds for hierarchical graphs.

https://doi.org/10.1371/journal.pone.0003079.g001

Analysis

Applications of Hierarchical Graphs

In this section, we briefly outline some applications of hierarchical graphs in chemical graph theory and computational biology.

Mathematical Chemistry.

There is a vast number of problems dealing with trees for modeling and analyzing chemical structures [48], [1], [2], [3], [4]. Moreover, rooted tree structures are of particular interest because considering such graph classes often helps to solve more general graph problems. In the following, we state some interesting applications of rooted trees in chemical graph theory:

  • Enumeration and coding problems of chemical structures by using rooted trees [49], [50], [51], [52].
  • Describing so-called signatures as molecular descriptors for problems in QSAR [53].
  • Graph polynomials of hierarchical graphs [54].
  • Chemical graph analysis by using algebraic and metrical graph properties [55], [56], [57], [58].

Biology.

Tree structures have been intensively investigated for solving and modeling biological problems. In particular, rooted trees often serve as an important graph representation for many biological classification problems as well as for problems in evolutionary biology [59]. Some known approaches involving hierarchical graph structures are the following:

  • Reconstruction problems and so-called supertree methods in phylogenetics [60], [61], [62], [63], [59].
  • Modeling and analyzing RNA structures [64], [65].
  • Supervised and unsupervised graph classification problems in computational biology [66], [67].
  • Clustering problems in computational biology [68], [69].

A Method for Determining the Entropy of Graphs

In this section, we briefly review the method for measuring the entropy of arbitrary undirected and connected networks presented in [28]. As mentioned, we interpret and define the structural information content as the entropy of the underlying graph topology [28]. The method is mainly based on the principle of assigning a probability value to each vertex of a graph by using a certain information functional that quantifies structural information in the graph and, hence, determines its entropy. The information functional used in [28] is based on determining the so-called j-spheres of a graph. Before outlining the main construction steps of this approach, we mention that [70] also used so-called vertex distance degree sequences (DDS) to develop the idea of a graph center for chemical structures. Interestingly, the derived DDS-distributions correspond to vertex distributions obtained by using j-spheres. Similarly, one main idea of the approach of [28] for determining the entropy of a graph is to use a connectivity concept to express neighborhood relations of its vertices. A natural procedure for expressing such relations is to count the number of first neighbors, the number of second neighbors, etc., which corresponds exactly to the definition of the j-sphere. As an example, Figure (2) visualizes the process of determining j-spheres.

Figure 2. G represents an undirected and connected graph.

For example, we get |S1(vi,G)| = 5 and |S2(vi,G)| = 9.

https://doi.org/10.1371/journal.pone.0003079.g002
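For instance, the sphere cardinalities given in the caption of Figure (2) can be computed by a breadth-first search starting from the vertex of interest. A minimal sketch of ours (not the authors' code), assuming an adjacency-list representation of the graph:

```python
from collections import deque

def sphere_cardinalities(adj, v):
    """Return {j: |S_j(v, G)|} for all non-empty j-spheres of v,
    computed by breadth-first search from v."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    spheres = {}
    for d in dist.values():
        if d >= 1:                     # v itself belongs to no sphere
            spheres[d] = spheres.get(d, 0) + 1
    return spheres

# Tiny example: the path 0 - 1 - 2 - 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(sphere_cardinalities(adj, 0))    # {1: 1, 2: 1, 3: 1}
# For a connected graph, the sphere cardinalities of any vertex sum to |V| - 1.
assert sum(sphere_cardinalities(adj, 0).values()) == len(adj) - 1
```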

In order to review the main construction step of the above mentioned graph entropy method, we first state some mathematical preliminaries [71], [72], [28]. We define an undirected, finite and connected graph by G = (V,E), |V| < ∞, where the edge set E consists of unordered pairs of vertices. G is called connected if for arbitrary vertices vi and vj there exists an undirected path from vi to vj; otherwise, we call G unconnected. GUC denotes the set of finite, undirected and connected graphs. The degree of a vertex v ∈ V is denoted by δ(v) and equals the number of edges e ∈ E which are incident with v. In order to measure distances between vertices in a graph, we denote by d(u,v) the distance between u ∈ V and v ∈ V, defined as the minimum length of a path between u and v. We notice that d(u,v) is a metric. We call the quantity σ(v) = max_{u∈V} d(u,v) the eccentricity of v ∈ V. Further, ρ(G) = max_{v∈V} σ(v) is called the diameter of G. The j-sphere of a vertex vi regarding G ∈ GUC is defined as the set

S_j(v_i, G) := \{ v \in V \mid d(v_i, v) = j \}, \quad j \ge 1.   (5)

Now, we state the definition of a special information functional that has been introduced in [28] to define the entropy of a graph. Here, the information functional fV quantifies structural information of a graph G by using the cardinalities of the corresponding j-spheres.

Definition 2.1 Let G ∈ GUC with arbitrary vertex labels. For the vertex vi ∈ V, the information functional fV is defined as

f^V(v_i) := \alpha^{c_1 |S_1(v_i,G)| + c_2 |S_2(v_i,G)| + \cdots + c_\rho |S_\rho(v_i,G)|}, \quad c_k > 0, \; \alpha > 0.   (6)

fV (vi) captures structural information of G by using metrical properties of G. The parameters α and ck are introduced to weight structural characteristics or differences of G in each sphere, e.g., a vertex with a large degree.
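As a worked illustration of Equation (6) (ours; only the two sphere cardinalities stated in the caption of Figure (2) are used, the higher spheres are left symbolic), the information functional of the vertex vi in Figure (2) begins as

```latex
f^{V}(v_i) = \alpha^{\,c_1 |S_1(v_i,G)| + c_2 |S_2(v_i,G)| + \cdots}
           = \alpha^{\,5 c_1 + 9 c_2 + \cdots}.
```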

As a remark, we generally see that

|S_1(v_1,G)| + |S_2(v_1,G)| + \cdots + |S_\rho(v_1,G)| = |V| - 1,   (7)
|S_1(v_2,G)| + |S_2(v_2,G)| + \cdots + |S_\rho(v_2,G)| = |V| - 1,   (8)
  ⋮
|S_1(v_{|V|},G)| + |S_2(v_{|V|},G)| + \cdots + |S_\rho(v_{|V|},G)| = |V| - 1   (9)

always holds [28], because every vertex of a connected graph reaches each of the remaining |V| − 1 vertices in exactly one of its spheres. Hence, the ck have to be chosen such that they are not all equal, e.g., c1 > c2 > … > cρ. Finally, we observe that varying ck and α aims at studying the local information spread in a network.

Definition 2.2 The vertex probabilities are defined by the quantities

p^V(v_i) := \frac{f^V(v_i)}{\sum_{j=1}^{|V|} f^V(v_j)}, \quad 1 \le i \le |V|.   (10)

Definition 2.3 Let G = (V,E) ∈ GUC. Then, we define the entropy of G by

I_{f^V}(G) := -\sum_{i=1}^{|V|} p^V(v_i) \log\big(p^V(v_i)\big).   (11)
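To make Definitions 2.1, 2.2 and 2.3 concrete, the following sketch (our illustration, not the authors' code; the parameter names alpha and c as well as the base-2 logarithm are our choices) computes fV, the vertex probabilities, and the entropy for a small graph:

```python
import math
from collections import deque

def sphere_cardinalities(adj, v):
    """{j: |S_j(v, G)|} via breadth-first search (as in the previous sketch)."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    spheres = {}
    for d in dist.values():
        if d >= 1:
            spheres[d] = spheres.get(d, 0) + 1
    return spheres

def graph_entropy(adj, alpha, c):
    """Entropy I_{f^V}(G) of Definition 2.3 for a connected graph.

    Equation (6):  f^V(v) = alpha ** (c_1 |S_1(v,G)| + ... + c_rho |S_rho(v,G)|)
    Equation (10): p^V(v) = f^V(v) / sum_u f^V(u)
    Equation (11): I_{f^V}(G) = -sum_v p^V(v) log p^V(v)
    """
    f = {}
    for v in adj:
        exponent = sum(c[j - 1] * s for j, s in sphere_cardinalities(adj, v).items())
        f[v] = alpha ** exponent
    total = sum(f.values())
    p = [fv / total for fv in f.values()]
    return -sum(pv * math.log2(pv) for pv in p)       # base-2 logarithm

# Path 0 - 1 - 2 - 3 has diameter rho = 3, so c needs three entries.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(graph_entropy(adj, alpha=0.5, c=(3, 2, 1)))
print(graph_entropy(adj, alpha=1.0, c=(3, 2, 1)))     # alpha = 1: uniform p, maximum log2(4) = 2
```

The second call illustrates the property recalled later in the paper that the entropy functional attains its maximum at α = 1: all functional values coincide, so the vertex probabilities become uniform.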

As outlined in [28], we recall that the process of defining information functionals and, hence, the entropy of a graph by using structural properties or graph-theoretical quantities is not unique. Consequently, each information functional captures the structural information of a given graph differently. Further, we pointed out in [28] that the parameter α can always be determined via an optimization procedure based on a given data set and, hence, is uniquely defined for a given classification problem.

Bounds for the Entropies of Hierarchical Graphs

In this section, we derive bounds for the entropies of hierarchical graphs. For this, we use the entropy measure explained in the previous section. As mentioned, in this paper we choose the classes of rooted trees and so-called generalized trees [47]. We notice that a generalized tree contains an ordinary rooted tree as a special case [47]. Further, generalized trees have turned out to be very useful for solving current problems in applied discrete mathematics, computer science and systems biology [47], [73], [74], [66]. To approach the problem of finding entropy bounds, we first define the mentioned graph classes. Directed generalized trees have already been defined in [47].

Definition 2.4 An undirected graph is called an undirected tree if it is connected and cycle-free. An undirected rooted tree T = (V,E) is an undirected tree with exactly one distinguished vertex r ∈ V, called the root; all vertices in T are uniquely accessible from r. The level of a vertex v in a rooted tree T is the length of the path from r to v. The largest path length from the root to a leaf is denoted by h, the height of T.

Definition 2.5 As a special case of T = (V,E), we also define an ordinary w-tree, denoted by Tw, where w is a natural number. For the root vertex r it holds δ(r) = w, and for all internal vertices v ∈ V it holds δ(v) = w+1. Leaves are vertices without successors. A w-tree is called fully occupied, denoted by Two, if all leaves possess the same level h.

Definition 2.6 Let T = (V,E1) be an undirected finite rooted tree. |L| denotes the cardinality of the level set L := {l0, l1, …, lh}. The largest length of a path in T is denoted by h; it holds h = |L|−1. A surjective mapping Λ: V → L is called a multi-level function if it assigns to each vertex an element of the level set L. A graph H = (V,EGT) is called a finite, undirected generalized tree if its edge set can be represented by the union EGT := E1 ∪ E2 ∪ E3, where

  • E1 forms the edge set of the underlying undirected rooted tree T.
  • E2 denotes the set of horizontal Across-edges. A horizontal Across-edge connects two vertices within the same level i.
  • E3 denotes the set of edges which change at least one level.

As an example, Figure (3) shows an undirected rooted tree T and its corresponding undirected generalized tree H.

Figure 3. An undirected tree T and its corresponding undirected generalized tree H.

It holds |L| = 4 and h = |L|−1 = 3.

https://doi.org/10.1371/journal.pone.0003079.g003

Entropy Bounds for Rooted Trees.

Starting from the definition of the information functional fV (see Equation (6)), we first state a technical assertion, proven in [75], that expresses a relationship between certain vertex probabilities. Starting from the definition of fV, this assertion shows that it is always possible to infer inequalities between the corresponding vertex probabilities. In order to achieve this, we also use simple estimations of the parameters introduced in Lemma (2.1). We will see that by applying this lemma, we can easily derive entropy bounds for the graph classes under consideration. Hence, the following lemma serves as a foundation for the proofs of the theorems we want to state in this section.

Lemma 2.1 Let T be a rooted tree with height h and let fV be the information functional defined by Equation (6). Further, we define the quantities stated in Equation (12). Then Inequality (13) holds, where the involved bound is given by Equation (14). pV (vik) denotes the vertex probability of vik regarding fV. Further, vik denotes the k-th vertex on the i-th level, 1 ≤ i ≤ h, 1 ≤ k ≤ σi, where σi denotes the number of vertices on level i.

In the following, we derive entropy bounds for hierarchical networks by applying Lemma (2.1). Because Lemma (2.1) provides inequalities between the vertex probabilities of a graph, the main idea for inferring entropy bounds is to add up the obtained inequalities. As a result, we obtain relations between graph entropy measures for hierarchical networks which can be interpreted as entropy bounds. Also, the conclusion of Lemma (2.1) implies that by varying Inequality (13), special entropy bounds can be obtained.

Theorem 2.2 Let T be a rooted tree. For the entropy of T, Inequality (15) holds, where the involved quantity is defined by Equation (16).

Proof: To start the proof, we consider Inequality (13) of Lemma (2.1). Multiplying this inequality by −1 yields Inequality (17). Now, by using the assertion of Lemma (2.1) and the monotonicity of the logarithm, we obtain Inequality (18). We perform this step for each vertex vik ∈ V and add up the obtained inequalities. Because by definition it holds

\sum_{i} \sum_{k} p^V(v_{ik}) = 1,

we obviously obtain Inequality (19). Now, by using the definition of the graph entropy (see Definition (2.3)), Inequality (19) finally becomes Inequality (15). This completes the proof of the theorem.

By considering special classes of rooted trees, we obviously get special bounds for the corresponding entropies.

Theorem 2.3 Let Two be a fully occupied w-tree. For the graph entropy of Two, Inequality (20) holds.

Proof: Let Two be a fully occupied w-tree. Therefore, it holds ρ = 2h. Starting from the root vertex v01, all other vertices are reachable. Hence, we obtain |Sh(v01,Two)| = w^h. Further, we get |Sj(vik,Two)| ≤ w^h for 1 ≤ j ≤ 2h. Hence, we can set ω := w^h. Now, the proof of Theorem (2.3) is obtained by applying the same technique and steps as in the proof of Theorem (2.2).
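The bound |Sj(vik,Two)| ≤ w^h used above can also be checked numerically. The following sketch (ours, for illustration only) builds a fully occupied w-tree and verifies the bound by breadth-first search:

```python
from collections import deque

def full_w_tree(w, h):
    """Adjacency list of a fully occupied w-tree T_w^o of height h.
    Vertex 0 is the root; every non-leaf vertex has exactly w children."""
    adj, level, nxt = {0: []}, [0], 1
    for _ in range(h):
        new_level = []
        for parent in level:
            for _ in range(w):
                adj[parent].append(nxt)
                adj[nxt] = [parent]
                new_level.append(nxt)
                nxt += 1
        level = new_level
    return adj

def max_sphere_cardinality(adj, v):
    """Largest |S_j(v, G)| over all j, via breadth-first search."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for x in adj[u]:
            if x not in dist:
                dist[x] = dist[u] + 1
                queue.append(x)
    counts = {}
    for d in dist.values():
        if d >= 1:
            counts[d] = counts.get(d, 0) + 1
    return max(counts.values())

w, h = 2, 4
adj = full_w_tree(w, h)
# No j-sphere of any vertex exceeds w**h vertices, so omega := w**h is a valid choice.
assert all(max_sphere_cardinality(adj, v) <= w ** h for v in adj)
print("check passed for w =", w, "and h =", h)
```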

Theorem 2.4 Let Tw be an ordinary w-tree. For the graph entropy of Tw, Inequality (21) holds.

Proof: Let Tw be an ordinary w-tree. Here, it holds ω ≤ w^h. From this, and by applying Lemma (2.1), we obtain Inequality (22). Finally, we obtain the assertion of the theorem by applying the same technique and steps as in the proof of Theorem (2.2).

We emphasize that each information functional captures the structural information of a graph differently; the resulting graph entropies obviously differ as well. If we now apply Theorem (2.2) and additionally assume an abstract information functional f*, we find, as a consequence of the previous theorems, that one can infer a statement expressing a relationship between the resulting graph entropies. Such inequalities can be used to study the influence of an information functional on the resulting graph entropies.

Corollary 2.5 Let T be a rooted tree and let f*(vik) be an information functional such that Inequality (23) holds. pV (vik) and p*(vik) denote the vertex probability values (k-th vertex on the i-th level) regarding fV and f*, respectively. Then, Inequality (24) holds.

Entropy Bounds for Generalized Trees.

In this section, we give a first attempt to state entropy bounds for certain classes of generalized trees. By allowing only generalized trees with specific edge sets, we obtain bounds for the entropies of special classes of generalized trees. The assertion of the next theorem means the following: the entropy of a specific generalized tree can be characterized by the entropy of another generalized tree that is extremal with respect to a certain structural property.

Theorem 2.6 Let H = (V,EGT) be a generalized tree with EGT := E1 ∪ E2, i.e., besides tree edges, H possesses Across-edges only. Starting from H, we define H* as the generalized tree with the maximal number of Across-edges on each level i, 1 ≤ i ≤ h.

  • First, there exist positive real coefficients ck which satisfy the Inequality System (25).
  • Second, Inequality (26) holds.

Proof: We assume H = (V,EGT) such that EGT = E1 ∪ E2, i.e., besides the tree edges e ∈ E1, H possesses Across-edges e ∈ E2 only. We first determine the cardinalities of the j-spheres of H. Now, we consider H* and find that the maximal number of Across-edges on each level i equals σi(σi − 1)/2, since on level i the Across-edges of H* connect all pairs of the σi vertices. Except for the root vertex v01, we further see that in particular |S1(vik,H*)| ≥ |S1(vik,H)| holds. This corresponds to the fact that H* generally possesses more connections than H. Finally, the cardinalities of the remaining j-spheres of H* increase correspondingly. Therefore, we conclude that we can find coefficients ck > 0 such that the Inequality System (25) holds. From this, we directly obtain Inequality (27) if α > 1, 1 ≤ i ≤ h, 1 ≤ k ≤ σi, where fHV (vik) and fH*V (vik) denote the information functional fV regarding H and H*, respectively. Similarly as in Lemma (2.1), we obtain Inequality (28). Finally, Inequality (26) is obtained by applying the assertion of Theorem (2.2).
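The key monotonicity step, |S1(vik,H*)| ≥ |S1(vik,H)|, can be illustrated numerically. A sketch of ours under assumptions (H carries a few random Across-edges, H* carries all σi(σi − 1)/2 of them per level; tree shape and edge counts are illustrative):

```python
import random
from collections import deque

def random_rooted_tree_with_levels(h, max_width=4):
    """Random rooted tree; returns (adjacency list, level of each vertex)."""
    adj, level_of, level, nxt = {0: []}, {0: 0}, [0], 1
    for depth in range(1, h + 1):
        new_level = []
        for _ in range(random.randint(2, max_width)):   # at least 2 vertices per level
            parent = random.choice(level)
            adj[parent].append(nxt)
            adj[nxt] = [parent]
            level_of[nxt] = depth
            new_level.append(nxt)
            nxt += 1
        level = new_level
    return adj, level_of

def add_across_edges(adj, level_of, complete=False, k=2):
    """Copy of adj with Across-edges (same-level edges) added: all of them
    if complete (this yields H*), otherwise k random ones (this yields H)."""
    new = {v: list(ns) for v, ns in adj.items()}
    same_level = [(u, v) for u in new for v in new
                  if u < v and level_of[u] == level_of[v]]
    chosen = same_level if complete else random.sample(same_level, min(k, len(same_level)))
    for u, v in chosen:
        if v not in new[u]:
            new[u].append(v)
            new[v].append(u)
    return new

def s1(adj, v):
    return len(adj[v])      # |S_1(v, G)| equals the degree of v

adj, level_of = random_rooted_tree_with_levels(h=3)
H = add_across_edges(adj, level_of, complete=False)
H_star = add_across_edges(adj, level_of, complete=True)
# H* contains every edge of H, so its 1-spheres can only grow.
assert all(s1(H_star, v) >= s1(H, v) for v in adj)
```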

We remark that by using the main argument of Theorem (2.6), one can easily state similar assertions for other specific classes of generalized trees. To conclude this section, we state a simple lemma concerning the maximum entropy of a graph and then apply this assertion to generalized trees.

Lemma 2.7 Let K|V|,|V| be the complete graph with |V| vertices. K|V|,|V| maximizes the graph entropy with respect to the information functional fV, i.e.,

I_{f^V}(G) \le I_{f^V}(K_{|V|,|V|}) = \log(|V|) \quad \text{for all } G \in G_{UC} \text{ with } |V| \text{ vertices.}   (29)

This holds because in the complete graph every vertex v satisfies |S1(v)| = |V| − 1 while all higher spheres are empty; hence fV is constant on V, the vertex probabilities are uniform, and the entropy attains the maximal value of a probability distribution with |V| outcomes.

Theorem 2.8 Let H = (VH,E) be an arbitrary generalized tree and let H|V|,|V| be the complete generalized tree such that |VH| ≤ |V|. Then Inequality (30) holds.

Proof: The proof follows directly by using the monotonicity property of the logarithm function and the assertion of Lemma (2.7).

Corollary 2.9 Let H* = (V*,E*) be a generalized tree with |V*| ≤ |V|. Then Inequality (31) holds.

Results and Discussion

Numerical Results for Hierarchical Graphs

This section aims to demonstrate that our entropic measure is able to structurally distinguish certain classes of hierarchical graphs by comparing the resulting cumulative entropy distributions. As a result of our numerical analysis, we will find that the calculated entropy distributions, and hence the graph classes under consideration, can be clearly distinguished. This shows that the entropy measure captures significant structural information. To start, we give a short overview of the key steps of our numerical analysis:

  • Generate the data classes CαRT and CαGT. For this, we randomly create rooted trees with a fixed height h. Further, we use these trees to generate generalized trees (see also below).
  • Choose the parameters ck.
  • Vary α to compute IfV for different classes CαRT and CαGT.
  • Compute the mean μ of the entropies for each such class and the corresponding variance σ2.
  • Compute and interpret the cumulative entropy distributions for CαRT and CαGT.

We remark that the intuitive meaning of the entropy IfV (G) has already been explained in [28]. Now, we start our numerical section by defining some data classes. These data classes emerge from fixed sets of hierarchical graphs by varying certain parameters.

Definition 3.1 The class CαRT denotes a certain set of rooted trees whose entropies have been computed by using the value α and the coefficient vector (c1,c2,…,cρm). We set α ∈ {1, 2, 3, 4, 5, 10}. Correspondingly, CαGT denotes a certain set of generalized trees whose entropies have been computed by also using the value α and (c1,c2,…,cρm).

In order to compute the graph entropies concretely, we choose the ck values such that c1 > c2 > … > c6 holds, and set c1 := 6, c2 := 5, c3 := 4, c4 := 3, c5 := 2, c6 := 1. A class CαRT was generated by providing a fixed value h as the height of each tree. Further, each tree has a unique root vertex, and the remaining vertices and edges were created randomly. To generate a class CαGT, we first compute an arbitrary random tree with height h as mentioned and then add a certain number of additional edges of a generalized tree randomly. The numerical results of our study are summarized in Table (1). As already mentioned, we computed the entropies of certain classes of rooted and generalized trees with a fixed height h by varying the α-value. We notice that for a fixed height h, the number of vertices of T or H can nevertheless differ considerably. Now, from Table (1) we see that the resulting entropies of generalized trees are on average larger than the entropies of rooted trees, depending on α. This corresponds to our intuition that a generalized tree can generally be considered structurally more complex than an ordinary rooted tree. To argue in this way, we apply a definition due to [11] which states that the higher the information content (entropy) of a system, the more complex the system. Further, one finds that the variances of the generated tree and generalized tree classes can be clearly distinguished. This can also be explained by the fact that a set of generalized trees is on average structurally more complex and diverse than a set of rooted trees with the same height h. Also, we observe that the larger the α-value of CαRT and CαGT, the smaller the resulting mean and variance. Additionally, we find that the entropy of a graph decreases with an increasing α-value. In the following, we interpret the cumulative entropy distributions (for h = 8) shown in Figure (4) and Figure (5). Such a distribution expresses the percentage of graphs (of the cardinality of CαRT or CαGT) which possess an entropy value less than or equal to IfV (T) or IfV (H). As an important observation, we find that for α ∈ {2,3,4,5,10} the cumulative entropy distributions of CαRT (see Figure (4)) are clearly different from the corresponding cumulative distributions of CαGT (see Figure (5)). Hence, we interpret this result such that the entropy measure (incorporating the information functional fV) is able to detect that we deal with different graph classes. The reason why the distributions for C1RT and C1GT appear almost equal is that our entropy measure always attains a maximum at α = 1; for this case, the entropies of trees and generalized trees are almost equal. We remark that it has already been proven that the entropy functional (by using fV) possesses a maximum at α = 1 for every graph, see [28]. As the main result of this section, we find that our entropy measure captures important structural information meaningfully and, hence, detects that rooted and generalized trees constitute structurally different graph classes.
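To make the generation procedure concrete, the following sketch (our reimplementation for illustration, not the authors' original code; the maximal level width, the number of extra edges, the class size of 200 graphs, and the reduced height h = 3, chosen so that the diameter never exceeds the six available coefficients, are all assumptions) generates random rooted trees and generalized trees and computes the mean, variance, and cumulative distribution of their entropies:

```python
import math
import random
from collections import deque

def sphere_cardinalities(adj, v):
    """{j: |S_j(v, G)|} via breadth-first search."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    counts = {}
    for d in dist.values():
        if d >= 1:
            counts[d] = counts.get(d, 0) + 1
    return counts

def graph_entropy(adj, alpha, c):
    """I_{f^V}(G) as in Definitions 2.1-2.3 (base-2 logarithm)."""
    f = {v: alpha ** sum(c[j - 1] * s
                         for j, s in sphere_cardinalities(adj, v).items())
         for v in adj}
    total = sum(f.values())
    return -sum(fv / total * math.log2(fv / total) for fv in f.values())

def random_rooted_tree(h, max_width=4):
    """Random rooted tree of height exactly h: every non-root vertex
    gets a uniformly chosen parent on the previous level."""
    adj, level, nxt = {0: []}, [0], 1
    for _ in range(h):
        new_level = []
        for _ in range(random.randint(1, max_width)):
            parent = random.choice(level)
            adj[parent].append(nxt)
            adj[nxt] = [parent]
            new_level.append(nxt)
            nxt += 1
        level = new_level
    return adj

def random_generalized_tree(h, extra_edges=4):
    """Random rooted tree plus randomly added non-tree edges
    (these play the role of the edge sets E2 and E3)."""
    adj = random_rooted_tree(h)
    for _ in range(extra_edges):
        u, v = random.sample(list(adj), 2)
        if v not in adj[u]:
            adj[u].append(v)
            adj[v].append(u)
    return adj

c = (6, 5, 4, 3, 2, 1)      # c_1 > ... > c_6 as chosen in the text
h = 3                        # diameter <= 2h = 6 = len(c)
for alpha in (1, 2, 3, 4, 5, 10):
    for name, gen in (("RT", random_rooted_tree), ("GT", random_generalized_tree)):
        ent = sorted(graph_entropy(gen(h), alpha, c) for _ in range(200))
        mu = sum(ent) / len(ent)
        var = sum((x - mu) ** 2 for x in ent) / len(ent)
        # cumulative entropy distribution: fraction of graphs with entropy <= x
        cdf = [(x, (i + 1) / len(ent)) for i, x in enumerate(ent)]
        print(f"alpha={alpha:2d} {name}: mu={mu:.3f} var={var:.4f}")
```

The printed means and variances play the role of Table (1), and the pairs collected in cdf correspond to the cumulative distributions plotted in Figure (4) and Figure (5).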

Figure 4. Cumulative entropy distributions of the classes CαRT for h = 8.

The x-axis corresponds to the entropy IfV (T) and the y-axis represents the cumulative entropy distribution for C1RT–C5RT and C10RT.

https://doi.org/10.1371/journal.pone.0003079.g004

Figure 5. Cumulative entropy distributions of the classes CαGT for h = 8.

The x-axis corresponds to the entropy IfV (H) and the y-axis represents the cumulative entropy distribution for C1GT–C5GT and C10GT.

https://doi.org/10.1371/journal.pone.0003079.g005

Table 1. μ represents the mean of the entropies for each class CαRT and CαGT, and σ2 denotes the corresponding variance.

https://doi.org/10.1371/journal.pone.0003079.t001

Summary and Conclusion

In this paper, we investigated the problem of finding entropy bounds for hierarchical graphs. Based on an entropic measure to determine the entropy of graphs, we derived certain estimations for the corresponding entropies. We now summarize the main contributions and arguments of our paper as follows:

We defined two classes of hierarchical graphs, rooted trees and generalized trees. A generalized tree is structurally more complex than an ordinary rooted tree because it contains a rooted tree as a special case. As a main result, we proved entropy bounds for rooted trees as well as for generalized trees. Also, assuming specific structural properties of the graph classes under consideration led us to characteristic bounds. It is important to note that we presented only one method for finding such entropy bounds; different bounds can be derived by using different entropy measures and techniques. To classify these bounds, we call them implicit bounds because the entropy of a graph was estimated by a quantity that contains another graph entropy expression. Generally, bounds for estimating the entropy of graphs are very useful in practical applications because the exact entropy value is often difficult to obtain. A particularly interesting result is Corollary (2.5). From this assertion, we found that an information functional (e.g., fV or f*) influences the resulting graph entropy because each such functional quantifies structural information differently. Hence, Corollary (2.5) can be used for describing relations between the entropies resulting from different information functionals.

Further, we performed a numerical study to demonstrate the practical ability of our graph entropy measure. Based on two generated graph classes of rooted and generalized trees, we computed the entropies for each class by varying the free parameter α. Then, we calculated the cumulative entropy distributions of these classes. From the obtained results we concluded that our entropy measure can distinguish between rooted trees and generalized trees. This implies that the entropy measure captures significant structural information, because rooted trees and generalized trees are known to be different graph classes.

Acknowledgments

We would like to thank the anonymous referee as well as Danail Bonchev and Claus Grupen for fruitful discussions and valuable comments.

Author Contributions

Conceived and designed the experiments: MD FES. Performed the experiments: SB. Analyzed the data: MD SB FES. Wrote the paper: MD SB FES.

References

  1. Bonchev D, Rouvray DH (1991) Chemical Graph Theory. Introduction and Fundamentals. Abacus Press.
  2. Diudea MV, Gutman I, Jäntschi L (2001) Molecular Topology. Nova Publishing.
  3. Gutman I, Polansky OE (1986) Mathematical Concepts in Organic Chemistry. Springer.
  4. Trinajstić N (1992) Chemical Graph Theory. CRC Press.
  5. Batagelj V (1988) Similarity measures between structured objects. In: Graovac A, editor. Dubrovnik/Yugoslavia: Proceedings of an International Course and Conference on the Interfaces between Mathematics, Chemistry and Computer Sciences. pp. 25–40.
  6. Bonchev D, Trinajstić N (1977) Information theory, distance matrix and molecular branching. Journal of Chemical Physics 67: 4397–4533.
  7. Bonchev D (1983) Information Theoretic Indices for Characterization of Chemical Structures. Chichester: Research Studies Press.
  8. Rupp M, Proschak E, Schneider G (2007) Kernel approach to molecular similarity based on iterative graph similarity. J Chem Inf Comput Sci 47: 2280–2286.
  9. Skvortsova MI, Baskin II, Stankevich IV, Palyulin VA, Zefirov NS (1998) Molecular similarity. I. Analytical description of the set of graph similarity measures. J Chem Inf Comput Sci 38: 785–790.
  10. Benigni R (2003) Quantitative Structure-Activity Relationship (QSAR) Models of Mutagens and Carcinogens. CRC Press.
  11. Devillers J, Balaban AT (2000) Topological Indices and Related Descriptors in QSAR and QSPAR. CRC Press.
  12. Diudea MV (2001) QSPR/QSAR Studies by Molecular Descriptors. Nova Publishing.
  13. Bonchev D (2000) Overall connectivities and topological complexities: A new powerful tool for QSPR/QSAR. J Chem Inf Comput Sci 40(4): 934–941.
  14. Bonchev D (2003) Complexity in Chemistry. Introduction and Fundamentals. Taylor and Francis.
  15. Bunke H (1983) What is the distance between graphs? Bulletin of the EATCS 20: 35–39.
  16. Bunke H (2000) Recent developments in graph matching. 15-th International Conference on Pattern Recognition 2. pp. 117–124.
  17. Bunke H, Neuhaus M (2007) Graph matching. Exact and error-tolerant methods and the automatic learning of edit costs. In: Cook D, Holder LB, editors. Mining Graph Data. Wiley-Interscience. pp. 17–32.
  18. Sobik F (1982) Graphmetriken und Klassifikation strukturierter Objekte. ZKI-Informationen, Akad Wiss DDR 2(82): 63–122.
  19. Sobik F (1986) Modellierung von Vergleichsprozessen auf der Grundlage von Ähnlichkeitsmaßen für Graphen. ZKI-Informationen, Akad Wiss DDR 4: 104–144.
  20. Kaden F. Graphmetriken und Distanzgraphen. ZKI-Informationen, Akad Wiss DDR 2(82): 1–63.
  21. Zelinka B (1975) On a certain distance between isomorphism classes of graphs. Časopis pro pěst. matematiky 100: 371–373.
  22. Zhu P, Wilson RC (2005) A study of graph spectra for comparing graphs. 16-th British Machine Vision Conference, Oxford Brookes University.
  23. Brinkmeier M, Schank T (2005) Network statistics. In: Brandes U, Erlebach T, editors. Network Analysis, Lecture Notes in Computer Science. Springer. pp. 293–317.
  24. Dorogovtsev SN, Mendes JFF (2003) Evolution of Networks. From Biological Networks to the Internet and WWW. Oxford University Press.
  25. Barabási AL (2003) How Everything Is Connected to Everything Else and What It Means. Plume, Reissue edition.
  26. Mason O, Verwoerd M (2007) Graph theory and networks in biology. IET Systems Biology 1(2): 89–119.
  27. Bonchev D, Rouvray DH (2005) Complexity in Chemistry, Biology, and Ecology. Mathematical and Computational Chemistry. Springer.
  28. Dehmer M, Emmert-Streib F (2008) Structural information content of networks: Graph entropy based on local vertex functionals. Computational Biology and Chemistry 32: 131–138.
  29. Sahu PK, Lee SL (2008) Net-sign identity information index: A novel approach towards numerical characterization of chemical signed graph theory. Chemical Physics Letters. In press.
  30. Morowitz H (1953) Some order-disorder considerations in living systems. Bull Math Biophys 17: 81–86.
  31. Quastler H (1953) Information Theory in Biology. University of Illinois Press.
  32. Dancoff S, Quastler H (1953) Information content and error rate of living things. In: Quastler H, editor. Essays on the Use of Information Theory in Biology. University of Illinois Press. pp. 263–274.
  33. Linshitz H (1953) The information content of a battery cell. In: Quastler H, editor. Essays on the Use of Information Theory in Biology. University of Illinois Press. pp. 14–15.
  34. Rashewsky N (1955) Life, information theory, and topology. Bull Math Biophys 17: 229–235.
  35. Trucco E (1956) A note on the information content of graphs. Bulletin of Mathematical Biology 18(2): 129–135.
  36. Shannon CE, Weaver W (1997) The Mathematical Theory of Communication. University of Illinois Press.
  37. Brillouin L (1956) Science and Information Theory. New York: Academic Press.
  38. Mowshowitz A (1968) Entropy and the complexity of graphs I: An index of the relative complexity of a graph. Bull Math Biophys 30: 175–204.
  39. Mowshowitz A (1968) Entropy and the complexity of graphs II: The information content of digraphs and infinite graphs. Bull Math Biophys 30: 225–240.
  40. Mowshowitz A (1968) Entropy and the complexity of graphs III: Graphs with prescribed information content. Bull Math Biophys 30: 387–414.
  41. Mowshowitz A (1968) Entropy and the complexity of graphs IV: Entropy measures and graphical structure. Bull Math Biophys 30: 533–546.
  42. Fujii JI, Yuki S (1997) Entropy and coding for graphs. Int J Math Stat Sci 6(1): 63–77.
  43. Kieffer J, Yang E (1997) Ergodic behavior of graph entropy. Electronic Research Announcements of the American Mathematical Society 3: 11–16.
  44. Körner J (1973) Coding of an information source having ambiguous alphabet and the entropy of graphs. Transactions of the 6-th Prague Conference on Information Theory. pp. 411–425.
  45. Simonyi G (2001) Perfect graphs and graph entropy. An updated survey. In: Ramirez-Alfonsin J, Reed B, editors. Perfect Graphs. John Wiley & Sons. pp. 293–328.
  46. Harary F (1969) Graph Theory. Addison Wesley Publishing Company.
  47. Mehler A, Dehmer M, Gleim R (2004) Towards logical hypertext structure — A graph-theoretic perspective. In: Böhme T, Heyer G, editors. Proceedings of the Fourth International Workshop on Innovative Internet Computing Systems (I2CS '04), Lecture Notes in Computer Science 3473. pp. 136–150.
  48. Aringhieri R, Hansen P, Malucelli F (2001) A linear algorithm for the hyper-Wiener index of chemical trees. J Chem Inf Comput Sci 41(4): 958–963.
  49. Aringhieri R, Hansen P, Malucelli F (2003) Chemical trees enumeration algorithms. 4OR, A Quarterly Journal of Operations Research 1(1): 67–83.
  50. Müller WR, Szymanski K, Knop JV, Trinajstić N (1995) A comparison between the Matula numbers and bit-tuple notation for rooted trees. J Chem Inf Comput Sci 35(2): 211–213.
  51. Matula DW (1968) A natural rooted tree enumeration by prime factorization. SIAM Review 10: 273.
  52. Elk SB (1990) A canonical ordering of polybenzenes and polyadamantanes using a prime factorization technique. J Math Chem 4: 55–68.
  53. Visco DP, Pophale RS, Rintoul MD, Faulon JL (2002) Developing a methodology for an inverse quantitative structure-activity relationship using the signature molecular descriptor. Journal of Molecular Graphics and Modelling 20: 429–438.
  54. Zmazek B, Žerovnik J (2007) The Hosoya-Wiener polynomial of weighted trees. Croatica Chemica Acta 80(1): 75–80.
  55. Bohanec S, Perdih M (1993) Symmetry of chemical structures: A novel method of graph automorphism group determination. J Chem Inf Comput Sci 33: 719–726.
  56. Chepoi V (1996) On distances in benzenoid systems. J Chem Inf Comput Sci 36: 1169–1172.
  57. Liu SC, Tong LD, Yeh JY (2000) Trees with the minimum Wiener number. International Journal of Quantum Chemistry 78(5): 331–340.
  58. Skorobogatov VA, Dobrynin AA (1988) Metrical analysis of graphs. MATCH 23: 105–155.
  59. Semple C, Steel M (2000) A supertree method for rooted trees. Discrete Applied Mathematics 105(1-3): 147–158.
  60. Felsenstein J (2003) Inferring Phylogenies. Sinauer Associates.
  61. Foulds LR (1992) Graph Theory Applications. Springer.
  62. Steel M, Dress A, Böcker S (2000) Simple but fundamental limitations on supertree and consensus tree methods. Systematic Biology 49: 363–368.
  63. Semple C, Steel M (2003) Phylogenetics. Graduate Series Mathematics and its Applications. Oxford University Press.
  64. Höchstmann M, Töller T, Giegerich R, Kurtz S (2003) Local similarity in RNA secondary structures. Proceedings of the IEEE Computational Systems Bioinformatics Conference (CSB'03). pp. 159–168.
  65. Shapiro BA, Zhang K (1990) Comparing multiple RNA secondary structures using tree comparisons. Comp Appl Biosci 6(4): 309–318.
  66. Emmert-Streib F, Dehmer M, Kilian J (2005) Classification of large graphs by a local tree decomposition. In: Arabnia HR, Scime A, editors. Proceedings of DMIN'05, International Conference on Data Mining, Las Vegas, June 20–23. pp. 200–207.
  67. Horváth T, Gärtner T, Wrobel S (2005) Cyclic pattern kernels for predictive graph mining. Proceedings of the 2004 ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 158–167.
  68. Gambin A, Slonimski PP (2005) Hierarchical clustering based upon contextual alignment of proteins: A different way to approach phylogeny. Comptes Rendus Biologies 328(1): 1–22.
  69. Morihiro H, Tatsuya A, Hitoshi N (2006) A novel clustering method for analysis of biological networks using maximal components of graphs. IPSJ SIG Technical Reports 99: 1–8.
  70. Bonchev D (1989) The concept for the center of a chemical structure and its applications. Journal of Molecular Structure: THEOCHEM 185: 155–168.
  71. Bang-Jensen J, Gutin G (2002) Digraphs. Theory, Algorithms and Applications. London, Berlin, Heidelberg: Springer.
  72. Halin R (1989) Graphentheorie. Akademie Verlag.
  73. Dehmer M, Emmert-Streib F, Mehler A, Kilian J (2006) Measuring the structural similarity of web-based documents: A novel approach. International Journal of Computational Intelligence 3(1): 1–7.
  74. Emmert-Streib F, Dehmer M, Liu J, Mühlhäuser M (2006) Ranking genes from DNA microarray data of cervical cancer by a local tree comparison. International Journal of Biomedical Science 1(1): 17–22.
  75. Dehmer M (2008) Information processing in complex networks: Graph entropy and information functionals. Applied Mathematics and Computation 201: 82–94.