
A Bayesian Alternative to Mutual Information for the Hierarchical Clustering of Dependent Random Variables

  • Guillaume Marrelec ,

    marrelec@lib.upmc.fr

    Affiliation Sorbonne Universités, UPMC Univ Paris 06, CNRS, INSERM, Laboratoire d’imagerie biomédicale (LIB), F-75013, Paris, France

  • Arnaud Messé,

    Affiliation Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg University, Hamburg, Germany

  • Pierre Bellec

    Affiliation Département d’informatique et recherche opérationnelle, Centre de recherche de l’institut universitaire de gériatrie de Montréal, Université de Montréal, Montréal, Qc, Canada

Abstract

The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of variables. In this work, we formulate the decision of merging dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Also, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion. We investigated the behavior of these Bayesian alternatives (in exact and asymptotic forms) to mutual information on simulated and real data. An encouraging result was first derived on simulations: the hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques as well as raw and normalized mutual information in terms of classification accuracy. On a toy example, we found that the Bayesian approaches led to results that were similar to those of mutual information clustering techniques, with the advantage of an automated thresholding. On real functional magnetic resonance imaging (fMRI) datasets measuring brain activity, it identified clusters consistent with the established outcome of standard procedures. On this application, normalized mutual information had a highly atypical behavior, in the sense that it systematically favored very large clusters. These initial experiments suggest that the proposed Bayesian alternatives to mutual information are a useful new tool for hierarchical clustering.

Introduction

Cluster analysis aims at uncovering natural groups of objects in a multivariate dataset (see [1] for a review). In the vast variety of methods used in cluster analysis, agglomerative hierarchical clustering (AHC) is a generic procedure that sequentially merges pairs of clusters that are most similar according to an arbitrary function called similarity measure, thereby generating a nested set of partitions, also called hierarchy [2]. The choice of the similarity measure indirectly defines the shape of the clusters, and thus plays a critical role in the clustering process. While this choice is guided by the features of the problem at hand, it is also often restricted to a limited number of commonly used measures, such as the Euclidean distance or Pearson correlation coefficient [3]. In the present work, we focus on the clustering of random variables based on their mutual information, which has recently gained in popularity in cluster analysis, notably in the field of genomics [4–7] and in functional magnetic resonance imaging (fMRI) data analysis [8–10]. Mutual information is a general measure of statistical dependency derived from information theory [11–13]. A key feature of mutual information is its ability to capture nonlinear interactions for any type of random variables [14]; also of interest, it indifferently applies to univariate or multivariate variables and can thus be applied to clusters of arbitrary size. Yet, mutual information is an extensive measure that increases with variable dimensionality. In addition, the finite-sample estimator of mutual information suffers from a dimensionality-dependent bias (see §A of S1 File). Several authors have proposed to correct mutual information for dimensionality by using a “normalized” version of mutual information [15–17]. In the clustering literature, normalized mutual information is routinely used. However, the impact of such a correction procedure has not been extensively evaluated so far.

In the present paper, we consider Bayesian model-based clustering [1, 18–20] as an alternative to mutual information for the hierarchical clustering of dependent multivariate normal variables. Specifically, we derive a similarity measure by comparing two models: Mind, where Xi and Xj are independent (i.e., the covariance between any element of Xi and any element of Xj is equal to zero), against Mdep, where the covariance between Xi and Xj can be set to any admissible value. The proposed similarity measure is then the log Bayes factor in favor of Mdep against Mind [21]. With appropriate priors on the model parameters, we show that the similarity measure s(Xi,Xj) between Xi and Xj can be expressed in closed form. As will be shown below, the Bayesian formulation naturally (1) allows for clustering even when the sample covariance matrix is ill-defined; (2) provides an automated stopping rule, namely to stop once s(Xi,Xj) ≤ 0 for every pair of remaining clusters; (3) corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables; and (4) provides a local and a global measure of similarity, in that it can be used to decide which pair of variables to cluster at each step (local level) as well as to compare different levels of the resulting hierarchy (global level). Asymptotically (i.e., when the number of samples N → ∞), the similarity measure is a linear function of mutual information, with a penalization factor that is in agreement with the Bayesian information criterion (BIC) [22]. In this sense, the present paper makes an explicit connection between Bayesian model comparison for the clustering of dependent random variables and mutual information. The code corresponding to the Bayesian approach is freely available online (https://github.com/SIMEXP/arXiv-1501.05194/releases/tag/1.0).

We evaluated an AHC procedure based on this approach on synthetic datasets, comparing the behavior of its exact and asymptotic forms to that of other approaches, including raw and normalized mutual information. We finally tested the new measures on two real datasets: a toy dataset and functional magnetic resonance imaging (fMRI) data.

Analysis

In the following, we develop a Bayesian solution to the problem of clustering detailed above. We first introduce the model together with the Bayesian framework and a general expression for the similarity measure. In subsequent subsections, we derive a closed-form expression for the marginal model likelihoods under both assumptions of dependence and independence as well as exact and asymptotic expressions for the similarity measure. We then provide a description of the hierarchical agglomerative clustering algorithm resulting from the present development. We examine how the same framework can be conveniently used to compare nested partitions, that is, different levels of a hierarchy. We also deal with the issue of setting the hyperparameters. Finally, we show how the Bayesian solution can naturally provide for an automatic stopping rule.

Bayesian model comparison

Let X be a D-dimensional multivariate normal variable with known mean μ and unknown covariance matrix Σ. Define Xi and Xj as two disjoint subvectors of X (of size Di and Dj, respectively), and Xij as their union (of size Dij = Di + Dj). Assume that we have to decide whether we should cluster Xi and Xj based on their level of dependence. To this end, consider two competing models, Mind and Mdep. In Mind, Xi and Xj are independent and the distribution of Xij can be decomposed as the product of the marginal distributions of Xi and Xj. Under such condition, Σij, the restriction of Σ to Xij, is block diagonal with blocks Σi and Σj, the restrictions of Σ to Xi and Xj, respectively. In Mdep, by contrast, Xi and Xj are dependent. Given a dataset {x1, …, xN} of N independent and identically distributed (i.i.d.) realizations of X and S the corresponding sample sum-of-square matrix,

$S = \sum_{n=1}^{N} (x_n - \mu)(x_n - \mu)^{\mathsf{T}},$

we propose to quantify the similarity between Xi and Xj as the log Bayes factor, that is, the log ratio of the marginal model likelihoods of Mdep versus Mind,

$s(X_i, X_j) = \ln \frac{p(S_{ij} \mid M_{\mathrm{dep}})}{p(S_{ij} \mid M_{\mathrm{ind}})}. \qquad (1)$

Each marginal model likelihood can be expressed as an integral over the model parameters, as described below.

Note that we assumed a known mean in the following theoretical development for the sake of simplicity. If the mean is unknown (as will be the case in the simulation and real data sections), this development is still valid, with N replaced by N − 1 and μ by the sample mean in the expression of the sample sum-of-square matrix.

Marginal model likelihood under the hypothesis of dependence

Let us first calculate p(Sij∣Mdep), the marginal model likelihood under the hypothesis of dependence. Expressing this quantity as a function of the model parameters yields

$p(S_{ij} \mid M_{\mathrm{dep}}) = \int p(S_{ij} \mid \Sigma_{ij}) \, p(\Sigma_{ij} \mid M_{\mathrm{dep}}) \, \mathrm{d}\Sigma_{ij}. \qquad (2)$

Calculation of the integral requires the likelihood and the prior distribution of the covariance matrix under Mdep. With multivariate normal data, Sij given Σij is Wishart distributed with N degrees of freedom and scale matrix Σij ([23] Corollary 7.2.2), leading to the following likelihood

$p(S_{ij} \mid \Sigma_{ij}) = \frac{1}{Z(D_{ij}, N)} \, |\Sigma_{ij}|^{-\frac{N}{2}} \, |S_{ij}|^{\frac{N - D_{ij} - 1}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_{ij}^{-1} S_{ij} \right) \right], \qquad (3)$

where the constant Z(d, n) is given by

$Z(d, n) = 2^{\frac{nd}{2}} \, \pi^{\frac{d(d-1)}{4}} \prod_{k=1}^{d} \Gamma\!\left( \frac{n + 1 - k}{2} \right).$

As to the prior distribution, this quantity is here set as a conjugate prior, namely an inverse-Wishart distribution with νij degrees of freedom and inverse scale matrix Λij ([24] §3.6),

$p(\Sigma_{ij} \mid M_{\mathrm{dep}}) = \frac{1}{Z(D_{ij}, \nu_{ij})} \, |\Lambda_{ij}|^{\frac{\nu_{ij}}{2}} \, |\Sigma_{ij}|^{-\frac{\nu_{ij} + D_{ij} + 1}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_{ij}^{-1} \Lambda_{ij} \right) \right]. \qquad (4)$

Bringing Eqs (3) and (4) together into Eq (2) yields for the marginal model likelihood

$p(S_{ij} \mid M_{\mathrm{dep}}) = \frac{|S_{ij}|^{\frac{N - D_{ij} - 1}{2}} \, |\Lambda_{ij}|^{\frac{\nu_{ij}}{2}}}{Z(D_{ij}, N) \, Z(D_{ij}, \nu_{ij})} \int |\Sigma_{ij}|^{-\frac{N + \nu_{ij} + D_{ij} + 1}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_{ij}^{-1} (\Lambda_{ij} + S_{ij}) \right) \right] \mathrm{d}\Sigma_{ij}.$

The integrand is proportional to an inverse-Wishart distribution with N + νij degrees of freedom and scale matrix Λij + Sij, leading to

$p(S_{ij} \mid M_{\mathrm{dep}}) = \frac{Z(D_{ij}, N + \nu_{ij})}{Z(D_{ij}, N) \, Z(D_{ij}, \nu_{ij})} \, \frac{|S_{ij}|^{\frac{N - D_{ij} - 1}{2}} \, |\Lambda_{ij}|^{\frac{\nu_{ij}}{2}}}{|\Lambda_{ij} + S_{ij}|^{\frac{N + \nu_{ij}}{2}}}. \qquad (5)$
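For illustration, the closed form of Eq (5) is straightforward to evaluate numerically. The following Python sketch (ours, not the released Octave/Matlab implementation linked above) assumes the Wishart/inverse-Wishart parameterization given above; function names are illustrative.

```python
import numpy as np
from scipy.special import multigammaln


def log_z(d, n):
    # log of Z(d, n) = 2^{nd/2} Gamma_d(n/2), with Gamma_d the multivariate gamma function
    return 0.5 * n * d * np.log(2.0) + multigammaln(0.5 * n, d)


def log_marginal_dep(S_ij, N, nu_ij, Lambda_ij):
    """Log marginal likelihood ln p(S_ij | M_dep) of Eq (5)."""
    D_ij = S_ij.shape[0]
    logdet_S = np.linalg.slogdet(S_ij)[1]
    logdet_L = np.linalg.slogdet(Lambda_ij)[1]
    logdet_LS = np.linalg.slogdet(Lambda_ij + S_ij)[1]
    return (log_z(D_ij, N + nu_ij) - log_z(D_ij, N) - log_z(D_ij, nu_ij)
            + 0.5 * (N - D_ij - 1) * logdet_S
            + 0.5 * nu_ij * logdet_L
            - 0.5 * (N + nu_ij) * logdet_LS)
```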

Marginal model likelihood under the hypothesis of independence

We can now calculate p(Sij∣Mind), the marginal model likelihood under the hypothesis of independence. If Mind holds, then Σij is block-diagonal with two blocks Σi and Σj, the submatrix restrictions of Σij to Xi and Xj, respectively. Introduction of the model parameters therefore yields for the marginal likelihood

$p(S_{ij} \mid M_{\mathrm{ind}}) = \iint p(S_{ij} \mid \Sigma_i, \Sigma_j) \, p(\Sigma_i, \Sigma_j \mid M_{\mathrm{ind}}) \, \mathrm{d}\Sigma_i \, \mathrm{d}\Sigma_j. \qquad (6)$

To calculate this integral, we again need to know the likelihood p(Sij∣Σi,Σj) and the prior distribution of the two blocks of the covariance matrix under Mind. The likelihood is the same as for Mdep and has the form of Eq (3). Furthermore, since Σij is here block diagonal, we have ∣Σij∣ = ∣Σi∣ ∣Σj∣ and tr(Σij−1 Sij) = tr(Σi−1 Si) + tr(Σj−1 Sj), where Si and Sj are the restrictions of S to Xi and Xj, respectively. Consequently, the likelihood can be further expanded as

$p(S_{ij} \mid \Sigma_i, \Sigma_j) = \frac{|S_{ij}|^{\frac{N - D_{ij} - 1}{2}}}{Z(D_{ij}, N)} \prod_{k \in \{i, j\}} |\Sigma_k|^{-\frac{N}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_k^{-1} S_k \right) \right]. \qquad (7)$

As to the prior distribution, assuming no prior dependence between Σi and Σj yields

$p(\Sigma_i, \Sigma_j \mid M_{\mathrm{ind}}) = p(\Sigma_i \mid M_{\mathrm{ind}}) \, p(\Sigma_j \mid M_{\mathrm{ind}}). \qquad (8)$

For the sake of consistency, p(Σi∣Mind) and p(Σj∣Mind) are set equal to p(Σi) and p(Σj), respectively, which are in turn obtained by marginalization of p(Σij) as given by Eq (4). For k ∈ {i, j}, p(Σk) can be found to have an inverse-Wishart distribution with νk = νij − Dij + Dk degrees of freedom and inverse scale matrix Λk, the restriction of Λij to Xk ([25] §5.2),

$p(\Sigma_k) = \frac{1}{Z(D_k, \nu_k)} \, |\Lambda_k|^{\frac{\nu_k}{2}} \, |\Sigma_k|^{-\frac{\nu_k + D_k + 1}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_k^{-1} \Lambda_k \right) \right]. \qquad (9)$

Incorporating Eqs (7), (8), and (9) into Eq (6) yields

$p(S_{ij} \mid M_{\mathrm{ind}}) = \frac{|S_{ij}|^{\frac{N - D_{ij} - 1}{2}}}{Z(D_{ij}, N)} \prod_{k \in \{i, j\}} \frac{|\Lambda_k|^{\frac{\nu_k}{2}}}{Z(D_k, \nu_k)} \int |\Sigma_k|^{-\frac{N + \nu_k + D_k + 1}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_k^{-1} (\Lambda_k + S_k) \right) \right] \mathrm{d}\Sigma_k.$

Each integrand is proportional to an inverse-Wishart distribution with N + νk degrees of freedom and scale matrix Sk + Λk, leading to

$p(S_{ij} \mid M_{\mathrm{ind}}) = \frac{|S_{ij}|^{\frac{N - D_{ij} - 1}{2}}}{Z(D_{ij}, N)} \prod_{k \in \{i, j\}} \frac{Z(D_k, N + \nu_k)}{Z(D_k, \nu_k)} \, \frac{|\Lambda_k|^{\frac{\nu_k}{2}}}{|\Lambda_k + S_k|^{\frac{N + \nu_k}{2}}}. \qquad (10)$

Log Bayes factor of dependence versus independence

Let us now express the Bayesian similarity measure by incorporating Eqs (5) and (10) into Eq (1), yielding

$s(X_i, X_j) = \ln \left[ \frac{Z(D_{ij}, N + \nu_{ij})}{Z(D_{ij}, \nu_{ij})} \prod_{k \in \{i, j\}} \frac{Z(D_k, \nu_k)}{Z(D_k, N + \nu_k)} \right] + \frac{\nu_{ij}}{2} \ln |\Lambda_{ij}| - \frac{N + \nu_{ij}}{2} \ln |\Lambda_{ij} + S_{ij}| - \sum_{k \in \{i, j\}} \left[ \frac{\nu_k}{2} \ln |\Lambda_k| - \frac{N + \nu_k}{2} \ln |\Lambda_k + S_k| \right], \qquad (11)$

with

$\nu_k = \nu_{ij} - D_{ij} + D_k, \qquad k \in \{i, j\}. \qquad (12)$

Another form for s(Xi,Xj) is

$s(X_i, X_j) = \Delta\phi_{ij} - (\Delta\phi_i + \Delta\phi_j), \qquad (13)$

with

$\Delta\phi_k = \phi(N + \nu_k, S_k + \Lambda_k) - \phi(\nu_k, \Lambda_k) \qquad (14)$

and

$\phi(\nu, \Lambda) = \ln Z(d, \nu) - \frac{\nu}{2} \ln |\Lambda|, \qquad d = \dim(\Lambda). \qquad (15)$

Δϕk quantifies, up to a constant that cancels out in s(Xi,Xj), the amount by which the data support a model of multivariate normal distributions with unrestricted covariance matrix for Xk.
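For illustration, the Δϕ form of Eqs (13)–(15) translates directly into code. The Python sketch below (ours, not the released implementation) computes s(Xi,Xj) from the full sum-of-square matrix S and a global prior (ν, Λ) on Σ, with νk = ν − D + Dk for every block, as in the sections that follow; names are illustrative.

```python
import numpy as np
from scipy.special import multigammaln


def log_z(d, n):
    # log of Z(d, n) = 2^{nd/2} Gamma_d(n/2)
    return 0.5 * n * d * np.log(2.0) + multigammaln(0.5 * n, d)


def phi(nu, Lam):
    # phi(nu, Lambda) = ln Z(d, nu) - (nu / 2) ln |Lambda|, Eq (15)
    d = Lam.shape[0]
    return log_z(d, nu) - 0.5 * nu * np.linalg.slogdet(Lam)[1]


def delta_phi(S_k, N, nu_k, Lam_k):
    # Delta phi_k = phi(N + nu_k, S_k + Lambda_k) - phi(nu_k, Lambda_k), Eq (14)
    return phi(N + nu_k, S_k + Lam_k) - phi(nu_k, Lam_k)


def similarity(S, idx_i, idx_j, N, nu, Lam):
    """Log Bayes factor s(X_i, X_j) of Eq (13).

    S, Lam: D-by-D sum-of-square and prior inverse scale matrices; nu: prior degrees of
    freedom for the full Sigma; idx_i, idx_j: lists of variable indices for the two clusters."""
    D = S.shape[0]
    idx_ij = list(idx_i) + list(idx_j)
    s = 0.0
    for idx, sign in ((idx_ij, +1.0), (list(idx_i), -1.0), (list(idx_j), -1.0)):
        nu_k = nu - D + len(idx)  # dof of the marginalized inverse-Wishart prior
        s += sign * delta_phi(S[np.ix_(idx, idx)], N, nu_k, Lam[np.ix_(idx, idx)])
    return s
```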

Asymptotic form of the log Bayes factor

We can now provide an asymptotic expression for s(Xi,Xj). Define $\hat{\Sigma}$ as the standard sample covariance matrix, i.e., $\hat{\Sigma} = S / N$. When N → ∞, we can use Stirling's approximation ([26] p. 257) to expand the expression ϕ of Eq (15), leading to (see §B of S1 File)

$s(X_i, X_j) \approx N \, \hat{I}(X_i, X_j) - \frac{D_i D_j}{2} \ln N, \qquad (16)$

where

$\hat{I}(X_i, X_j) = \frac{1}{2} \ln \frac{|\hat{\Sigma}_i| \, |\hat{\Sigma}_j|}{|\hat{\Sigma}_{ij}|}$

is the plug-in estimator of mutual information for a multivariate normal distribution. Alternatively, $\hat{I}(X_i, X_j)$ can be seen as the minimum discrimination information for the independence of Xi and Xj ([12] Chap. 12, §3.6).
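The asymptotic measure of Eq (16) only requires determinants of submatrices of the sample covariance (or correlation) matrix. A minimal Python sketch (ours; names illustrative):

```python
import numpy as np


def similarity_bic(Sigma_hat, idx_i, idx_j, N):
    """BIC similarity of Eq (16): N * I_hat(X_i, X_j) - (D_i * D_j / 2) * ln N.

    Sigma_hat: sample covariance or correlation matrix (use N - 1 if the mean was estimated);
    idx_i, idx_j: lists of variable indices for the two clusters."""
    idx_ij = list(idx_i) + list(idx_j)
    logdet = lambda idx: np.linalg.slogdet(Sigma_hat[np.ix_(idx, idx)])[1]
    # Gaussian plug-in mutual information between X_i and X_j
    mi_hat = 0.5 * (logdet(list(idx_i)) + logdet(list(idx_j)) - logdet(idx_ij))
    return N * mi_hat - 0.5 * len(idx_i) * len(idx_j) * np.log(N)
```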

Hierarchical agglomerative clustering

A hierarchy on a set of D variables is a nested set of partitions {P0, …, PD−1}, where Pl is a partition of {1, …, D} into Dl clusters [2]. Agglomerative hierarchical clustering (AHC) is a generic procedure to generate such a hierarchy, outlined in pseudo-code in Algorithm 1. The main steps of the algorithm are: (1) initialize the partition with singletons (line 2); (2) derive a matrix Sl where each element represents the similarity between two clusters Xi and Xj of Pl, based on an arbitrary function s(Xi,Xj) (line 4); (3) identify the two clusters that are most similar (line 5); (4) form a new partition identical to the previous one, except that the two most similar clusters are now merged (lines 6–7); (5) iterate Steps 2–4 until the partition has only one single element covering the whole set of variables (line 3). In the case of the methods proposed here, the similarity measure is given by either Eq (11) for the exact formulation or Eq (16) for the asymptotic BIC approximation.

Note that for the selection of the pair of clusters that are most similar (step 3), there may be more than one pair of clusters which maximize the similarity function. Most implementations of AHC proceed by selecting arbitrarily one such pair (e.g., the first one to be detected). In the in-house implementation we used, the pair was selected randomly amongst all these pairs. This was done to properly capture the instability of the algorithm. In such a form, AHC may not be deterministic anymore, in that two runs of the same algorithm on the same dataset may result in different hierarchies.

1: Output: hierarchy {P0, …, PD−1}
2: P0 ← {{X1}, …, {XD}}
3: for l = 0, …, D − 2 do
4:  Sl ← [s(Xi,Xj)]Xi,Xj∈Pl
5:  (i*, j*) ← arg maxi,j Sl(i, j)
6:  Pl+1 ← Pl \ {Xi*, Xj*}
7:  Pl+1 ← Pl+1 ∪ {Xi* ∪ Xj*}
8: end for

Algorithm 1 General description of the hierarchical agglomerative clustering algorithm.
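For illustration, Algorithm 1 can be transcribed in a few lines of Python (a sketch of ours, not the released implementation). Any similarity measure can be plugged in, e.g., the similarity_bic sketch above via lambda ci, cj: similarity_bic(Sigma_hat, ci, cj, N); ties are broken at random, as in the in-house implementation described below.

```python
import numpy as np


def ahc(D, similarity, rng=None):
    """Agglomerative hierarchical clustering following Algorithm 1.

    similarity(ci, cj) returns s(X_i, X_j) for two clusters given as lists of variable
    indices. Returns the list of partitions (one per level) and the score of each merge."""
    rng = np.random.default_rng() if rng is None else rng
    partition = [[d] for d in range(D)]               # line 2: start from singletons
    hierarchy, merge_scores = [[list(c) for c in partition]], []
    for _ in range(D - 1):                            # line 3: D - 1 successive merges
        # line 4: similarity between every pair of current clusters
        pairs = [(i, j) for i in range(len(partition))
                 for j in range(i + 1, len(partition))]
        scores = np.array([similarity(partition[i], partition[j]) for i, j in pairs])
        # line 5: pick one of the maximizing pairs, at random in case of ties
        best = np.flatnonzero(scores == scores.max())
        i_star, j_star = pairs[rng.choice(best)]
        merge_scores.append(scores.max())
        # lines 6-7: replace the two selected clusters by their union
        merged = partition[i_star] + partition[j_star]
        partition = [c for k, c in enumerate(partition)
                     if k not in (i_star, j_star)] + [merged]
        hierarchy.append([list(c) for c in partition])
    return hierarchy, merge_scores
```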

Comparing distinct levels of the hierarchy

The last development aims at providing a way to compare nested partitions, i.e., different levels of the hierarchy. Once the hierarchical clustering has been performed, each step is associated with a partition of X. Assume that, at level l, the partition reads {X1, …, XK} and that, at step l + 1, Xi and Xj are clustered. Denote by Ml the assumption that the partition at level l is the correct partition of X. The global improvement brought by the clustering of Xi and Xj between steps l and l + 1 can be quantified by the log ratio between the marginal model likelihoods,

$\ln \frac{p(S \mid M_{l+1})}{p(S \mid M_l)},$

where both quantities can be computed in a manner similar to what was done for the similarity measure. For instance, if Ml is true, then Σ is block-diagonal with K blocks Σk, the submatrix restrictions of Σ to the Xk's. Introducing the model parameters then yields

$p(S \mid M_l) = \int p(S \mid \Sigma_1, \dots, \Sigma_K) \, p(\Sigma_1, \dots, \Sigma_K) \, \mathrm{d}\Sigma_1 \cdots \mathrm{d}\Sigma_K. \qquad (17)$

In a way similar to what was done previously, the likelihood p(S∣Σ1, …, ΣK) can be expanded as

$p(S \mid \Sigma_1, \dots, \Sigma_K) = \frac{|S|^{\frac{N - D - 1}{2}}}{Z(D, N)} \prod_{k=1}^{K} |\Sigma_k|^{-\frac{N}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_k^{-1} S_k \right) \right]. \qquad (18)$

Turning to the prior distribution p(Σ1, …, ΣK) and assuming no prior dependence between the Σk's, we can set

$p(\Sigma_1, \dots, \Sigma_K) = \prod_{k=1}^{K} p(\Sigma_k). \qquad (19)$

Each p(Σk) can be obtained by marginalization of p(Σ), which is here taken as an inverse-Wishart distribution with ν degrees of freedom and inverse scale matrix the D-by-D diagonal matrix Λ. Note that such a prior on Σ is compatible with the prior used earlier for Σij if one sets νij = ν − D + Dij and if Λij is the restriction of Λ to Xij ([25] §5.2). We then have

$p(\Sigma_k) = \frac{1}{Z(D_k, \nu_k)} \, |\Lambda_k|^{\frac{\nu_k}{2}} \, |\Sigma_k|^{-\frac{\nu_k + D_k + 1}{2}} \exp\left[ -\frac{1}{2} \mathrm{tr}\!\left( \Sigma_k^{-1} \Lambda_k \right) \right], \qquad \nu_k = \nu - D + D_k. \qquad (20)$

Incorporating Eqs (18), (19), and (20) into Eq (17) and integrating leads to

$p(S \mid M_l) = \frac{|S|^{\frac{N - D - 1}{2}}}{Z(D, N)} \prod_{k=1}^{K} \frac{Z(D_k, N + \nu_k)}{Z(D_k, \nu_k)} \, \frac{|\Lambda_k|^{\frac{\nu_k}{2}}}{|\Lambda_k + S_k|^{\frac{N + \nu_k}{2}}}. \qquad (21)$

The same calculation can be performed for p(S∣Ml+1). The result is the same as in Eq (21), except that the product is composed of K − 1 terms. Of these terms, K − 2 correspond to clusters that are unchanged from Ml to Ml+1 and, as a consequence, are identical to those of Eq (21). The (K − 1)th term corresponds to the cluster obtained by the merging of Xi and Xj. As a consequence, the log Bayes factor reads

$\ln \frac{p(S \mid M_{l+1})}{p(S \mid M_l)} = \ln \left[ \frac{Z(D_{ij}, N + \nu_{ij})}{Z(D_{ij}, \nu_{ij})} \, \frac{|\Lambda_{ij}|^{\frac{\nu_{ij}}{2}}}{|\Lambda_{ij} + S_{ij}|^{\frac{N + \nu_{ij}}{2}}} \right] - \sum_{k \in \{i, j\}} \ln \left[ \frac{Z(D_k, N + \nu_k)}{Z(D_k, \nu_k)} \, \frac{|\Lambda_k|^{\frac{\nu_k}{2}}}{|\Lambda_k + S_k|^{\frac{N + \nu_k}{2}}} \right].$

But this quantity is nothing else than s(Xi,Xj). In other words, we proved that

$\ln \frac{p(S \mid M_{l+1})}{p(S \mid M_l)} = s(X_i, X_j), \qquad (22)$

i.e., s(Xi,Xj), the local measure of similarity between Xi and Xj, can be used to compute the global measure of relative probability between two successive levels of the hierarchy.

Setting the hyperparameters

Hyperparameter selection is often a thorny issue in Bayesian analysis. We here considered two approaches. The first approach (coined BayesCov) is to set the degrees of freedom to the lowest value that still corresponds to a well-defined distribution, that is, ν = D, and a diagonal scale matrix that optimizes the marginal model likelihood of Eq (21) before any clustering (that is, with K = D clusters and Dk = 1 for all k), yielding (see §C of S1 File)

$\Lambda_{dd} = \frac{S_{dd}}{N},$

where Λdd and Sdd are the diagonal elements of the prior inverse scale matrix Λ and sum-of-square matrix S, respectively. An alternative approach (coined BayesCorr) is to work with the sample correlation matrix instead of the sample covariance matrix. One can then set the number of degrees of freedom to ν = D + 1 and the scale matrix to the identity matrix. The corresponding prior distribution yields uniform marginal distributions for the correlation coefficients [27]. Note that clustering with the asymptotic form of Eq (16) (coined Bic) does not involve hyperparameters; it is also insensitive to whether the input is the covariance matrix or the correlation matrix.
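The two settings can be written compactly as follows (a Python sketch of ours, assuming the closed-form optimum Λdd = Sdd/N stated above; names illustrative):

```python
import numpy as np


def hyperparameters_bayescov(S, N):
    # BayesCov: nu = D and a diagonal Lambda maximizing Eq (21) with singleton clusters,
    # i.e. Lambda_dd = S_dd / N (use N - 1 if the mean was estimated)
    D = S.shape[0]
    return D, np.diag(np.diag(S)) / N


def hyperparameters_bayescorr(R):
    # BayesCorr: input is the sample correlation matrix R, with nu = D + 1 and Lambda = I,
    # yielding uniform marginal priors on the correlation coefficients
    D = R.shape[0]
    return D + 1, np.eye(D)
```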

Automatic stopping rule

An advantage of the Bayesian clustering scheme proposed here and its BIC approximation is that they naturally come with an automatic stopping rule. By definition of s in Eq (1), the fact that s(Xi0,Xj0) > 0 for the pair that is selected for clustering also means that the marginal model likelihood for Mdep is larger than that for Mind. As such, Xi0 and Xj0 are more likely to belong to the same cluster than not and, as a consequence, it indeed makes sense to cluster them. By contrast, if we have s(Xi0,Xj0) < 0 for the same pair, the marginal model likelihood for Mdep is smaller than that for Mind; Xi0 and Xj0 are therefore more likely to belong to different clusters. If, at a given step of the clustering, the pair with the highest similarity measure has a negative similarity measure, then all pairs do, meaning that all pairs of clusters tested more probably belong to different clusters. It therefore makes sense to stop the clustering procedure at that point. An automatic stopping rule can thus simply be implemented into the clustering algorithm: stop the clustering if the pair (Xi0,Xj0) selected for clustering at a given step has s(Xi0,Xj0) < 0. Note that, according to Eq (22), the resulting clustering corresponds to the one in the hierarchy that has the largest marginal likelihood. We will refer to BayesCovAuto, BayesCorrAuto and BicAuto when applying the clustering schemes with this automated stopping rule.
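With the ahc() sketch given after Algorithm 1, which records the log Bayes factor of each merge, the stopping rule amounts to truncating the hierarchy at the first negative merge score (a sketch under the same assumptions):

```python
def cut_hierarchy(hierarchy, merge_scores):
    """Automatic stopping rule applied to the output of ahc().

    Merges are kept as long as the selected pair has a positive log Bayes factor; the
    partition reached when the best remaining pair first has s < 0 is returned, as in
    BayesCovAuto, BayesCorrAuto and BicAuto."""
    level = 0
    for s in merge_scores:
        if s < 0:
            break
        level += 1
    return hierarchy[level]
```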

Results

Validation on synthetic data

To assess the behavior of the method expounded here, we examined how it fared on synthetic data. We used the two variants of the Bayes factor mentioned above (BayesCov and BayesCorr), Bic, as well as the same methods with the automatic stopping rule (BayesCovAuto, BayesCorrAuto and BicAuto). As a means of comparison, we also used the following methods:

  • A random hierarchical clustering scheme, where variables were clustered uniformly at random at each step. This category contains only one algorithm: Random, which was implemented for the purpose of the present study.
  • Hierarchical clustering schemes with similarity measures given by either Pearson correlation or absolute Pearson correlation, and a merging rule based on single, average, or complete linkage, or on Ward's criterion. This category contains 8 algorithms: Single, Average, Complete, Ward, SingleAbs, AverageAbs, CompleteAbs, WardAbs. We used the implementations of these methods proposed in NIAK (https://github.com/SIMEXP/niak).
  • Hierarchical clustering with a similarity measure given by mutual information, with and without normalization. This category contains 2 algorithms: Infomut and InfomutNorm. These methods were implemented for the purpose of the present study. Note that neither algorithm can run on small samples.
  • An approach where the clusters are estimated as the blocks of the precision (i.e., inverse covariance) matrix estimated with the graphical lasso, essentially maximum-likelihood estimation with an L1-norm penalization [28]. The penalization parameter λ was set in [0, 1] in steps of 0.1, then to 5, 10, 20, 50, 100, and 200 [29]. A version that optimizes λ with a BIC criterion was also used [30]. Since the graphical lasso is not invariant under transformation of a covariance matrix into a correlation matrix, we used either the covariance matrix or the correlation matrix as input. Note that this approach automatically determined the number of clusters. Also, for λ = 0 (unconstrained case), the algorithm cannot run on small samples. This category contains 34 algorithms: 16 algorithms gLassoCov-x and 16 algorithms gLassoCorr-x, where x is the value of λ, and 2 algorithms gLassoCovOpt and gLassoCorrOpt. For this category of algorithms, we used a freely available package (http://www.cs.ubc.ca/∼schmidtm/Software/L1precision.html) already used in [29].
  • An approach based on the spectral clustering [31] of either the raw value or the absolute value of either the correlation or the partial correlation matrix. This approach required the number of clusters as input. Since this clustering includes a k-means step, which is stochastic by nature, we considered 2 variants: one with a single run of k-means and the other with 10 repetitions of k-means and selection of the best clustering in terms of inertia. The similarity measures were defined so that the range would be the same (namely [0, 1]) when using the signed or the absolute value of correlation: 0.5(1 + r) and ∣r∣, respectively. This category contains 8 algorithms: 2 algorithms SpecCorr-x, 2 algorithms SpecCorrAbs-x, 2 algorithms SpecCorrpar-x and 2 algorithms SpecCorrparAbs-x, where x is the number of times that k-means is performed. The spectral clustering algorithms of this category were coded for the purpose of the present study, while we used the implementation of the k-means algorithm proposed in NIAK.

All in all, 59 variants were tested.

Data description.

In order to assess the performance of the Bayesian approach, we performed the following set of simulations. For each value of D in {6,10,20,40}, we considered partitions with an increasing number of clusters C (1 ≤ C ≤ D). For a given value of C, we performed 500 simulations. For each simulation, the D variables were randomly partitioned into C clusters, all partitions having equal probability of occurrence ([32] Chap. 12; [33]). For a given partition {X1, …, XK} of X, we generated data according to

$p(x) = \prod_{k=1}^{K} f_k(x_k), \qquad (23)$

where all fk's were taken either as multivariate normal distributions (parameters: mean μk and covariance matrix Σk) or multivariate Student-t distributions (parameters: degrees of freedom ν, location parameter μk and scale matrix Σk). In both cases, the μk's were set to 0 and the Σk's were first sampled according to a Wishart distribution with Dk + 1 degrees of freedom and scale matrix equal to the identity matrix, and then rescaled to a correlation matrix. The sampling scheme on Σk generated correlation matrices with uniform marginal distributions for all correlation coefficients [27]. For the multivariate Student-t distributions, ν was set in {1,3,5}. Eq (23) was used to generate synthetic datasets of length N varying from 10 to 300 in increments of 40. Each dataset was summarized by its sample correlation matrix and hierarchical clustering was performed using the methods mentioned above. All simulations were implemented using the Pipeline System for Octave and Matlab, PSOM (https://github.com/SIMEXP/psom) [34] under Matlab 7.2 (The MathWorks, Inc.) and run on a 24-core server.
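A condensed Python sketch of this generative scheme is given below (ours, not the PSOM/Matlab pipeline used in the study; for simplicity, the partition step draws random labels with non-empty clusters rather than sampling uniformly over all partitions as in the study).

```python
import numpy as np


def simulate_dataset(D, C, N, df=None, rng=None):
    """One synthetic dataset: D variables split into C independent clusters, each cluster
    multivariate normal (df=None) or multivariate Student-t (df degrees of freedom) with a
    correlation matrix obtained by rescaling a Wishart(D_k + 1, I) sample."""
    rng = np.random.default_rng() if rng is None else rng
    # assign each variable to one of C clusters, keeping every cluster non-empty
    labels = np.concatenate([np.arange(C), rng.integers(0, C, D - C)])
    rng.shuffle(labels)
    x = np.zeros((N, D))
    for c in range(C):
        idx = np.flatnonzero(labels == c)
        d_k = len(idx)
        g = rng.standard_normal((d_k + 1, d_k))                 # Wishart(D_k + 1, I) sample...
        w = g.T @ g
        corr = w / np.sqrt(np.outer(np.diag(w), np.diag(w)))    # ...rescaled to a correlation
        z = rng.multivariate_normal(np.zeros(d_k), corr, size=N)
        if df is not None:                  # Student-t via a chi-square scale mixture
            z = z / np.sqrt(rng.chisquare(df, size=(N, 1)) / df)
        x[:, idx] = z
    return x, labels
```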

To assess the efficiency of the various methods, we thresholded each hierarchy at the true number of clusters, except for BayesCovAuto, BayesCorrAuto, BicAuto and gLasso, for which we used the clustering determined by the method itself. We then quantified the quality of the resulting partition using the proportion of correct classifications as well as the adjusted Rand index, which computes the fraction of variable pairs that are correctly clustered, corrected for chance [35, 36]. Results corresponding to a given dimension D and a given method were then pooled across numbers of clusters C, lengths N and distributions (multivariate normal and Student-t). We ranked the methods from best to worst based on these global results using the following indices (in this order): median of the adjusted Rand index, 25% percentile of the adjusted Rand index, 5% percentile of the adjusted Rand index, smallest value of the adjusted Rand index, and proportion of correct classifications. Note that some algorithms (Infomut, InfomutNorm and SpecCorrpar) require the sample covariance matrix to be positive definite. As a consequence, these algorithms could not run on small samples. We therefore restricted our evaluation to cases where all algorithms were operational. Finally, we performed a Bayesian ANOVA-like regression analysis ([24] §15.6), where we explained the adjusted Rand index of nine algorithms (BayesCov, BayesCovAuto, BayesCorr, BayesCorrAuto, Bic, BicAuto, Infomut, InfomutNorm, and AverageAbs) with the following effects: clustering algorithm (9 levels), number of variables D (4 levels: D ∈ {6,10,20,40}), type of distribution (4 levels: multivariate Gaussian or multivariate Student-t with 1, 3, or 5 degrees of freedom), number of samples N (8 levels: N ∈ {10,50,90,130,170,210,250,290}), and number of clusters C (68 levels: from 2 to D − 1 clusters for each D). In other words, the model considered was of the form

$\mathrm{ARI} = \beta_0 + \beta_{\mathrm{algo}} + \beta_D + \beta_{\mathrm{distr}} + \beta_N + \beta_{D,C} + \varepsilon. \qquad (24)$

Interactions between effects, while potentially relevant, were not considered to keep the analysis tractable. The posterior distributions of the various regression parameters were estimated using Gibbs sampling.

Results.

The results corresponding to the adjusted Rand index and proportion of correct classifications are summarized in Figs 1 and 2 for the 20 best methods. Globally, and as expected, all methods were adversely affected by an increase in the number of variables. In all cases, the variants proposed in the present paper performed very well compared to other methods. BayesCov and BayesCorr were always ranked as the two best algorithms, and Bic was never outperformed by a previously published method. The methods with automatic thresholding of the hierarchy performed surprisingly well, considering that they were compared against methods provided with the true number of clusters (oracle). In particular, they clearly outperformed all variants of gLasso, the only competing method able to automatically determine the number of clusters. Of note, all variants of gLasso proved too slow to be applied to our simulation data for D ∈ {20,40}.

Fig 1. Simulation study.

Computational time (top), adjusted Rand index (middle) and proportion of correct classifications (bottom) for D = 6 (left) and D = 10 (right).

https://doi.org/10.1371/journal.pone.0137278.g001

Fig 2. Simulation study.

Computational time (top), adjusted Rand index (middle) and proportion of correct classifications (bottom) for D = 20 (left) and D = 40 (right).

https://doi.org/10.1371/journal.pone.0137278.g002

The results of the regression analysis are represented in Fig 3. The 9 algorithms selected included the ones proposed in the present manuscript (BayesCov, BayesCovAuto, BayesCorr, BayesCorrAuto, Bic, and BicAuto), the best-performing classical algorithm in the previous analysis (AverageAbs), as well as the algorithms based on mutual information (Infomut and InfomutNorm). We found that increasing the dataset size (N) increased the performance of the algorithms, while increasing the dimensionality of the problem (D) and the number of clusters (C) decreased it. Note that dimension was found to have a negative influence on the adjusted Rand index, even though this index was partly proposed as a modification of the raw Rand index to overcome this limitation. Finally, this analysis confirmed the superior behavior of the methods proposed here, even when the method included the automatic stopping rule.

Fig 3. Simulation study.

Result of the regression analysis. Posterior distribution for the different regression coefficients: β0 (global effect), βN (dataset size N), βalgo (algorithm), βD (number of variables D), βdistr (type of distribution), and βD, C (number of clusters) [see Eq (24)].

https://doi.org/10.1371/journal.pone.0137278.g003

Toy example

This data set was first used in [37] and later re-analyzed in [38] in the context of conditional independence graphs. It originates from a study investigating early diagnosis of HIV infection in children from HIV-positive mothers. The variables are related to various measures on blood and its components: X1 and X2 are immunoglobulin G and A, respectively; X4 is the platelet count; X3 and X5 are the B and T4 lymphocyte counts, respectively; and X6 is the T4/T8 lymphocyte ratio. The sample summary statistics are given in Table 1. [37] found that the correlations between X4 and any other variable had strong probability around zero and hypothesized that the model was overparametrized. Based on the simulation study, we performed clustering of the data with the following methods: BayesCov(Auto), BayesCorr(Auto), Bic(Auto), Infomut, InfomutNorm, SingleAbs, AverageAbs, CompleteAbs, WardAbs, SpecCorrAbs and SpecCorrparAbs. For spectral clustering, we used either 1 or 10 repetitions of the k-means step; since the results obtained for 1 repetition of k-means were highly variable for 3, 4, and 5 clusters, we discarded these results.

Table 1. Toy example.

Summary statistics for the HIV data: sample variances (main diagonal), correlations (lower triangle) and partial correlations (upper triangle) (from [37]).

https://doi.org/10.1371/journal.pone.0137278.t001

The resulting clusterings are given in Fig 4 and Table 2. All hierarchical clusterings started by clustering X3 and X5 (lymphocytes). This was confirmed by SpecCorrAbs-10 when required to provide 5 clusters. All hierarchical clustering methods then clustered X1 and X2 (immunoglobulins). This result was in agreement with both SpecCorrAbs-10 and SpecCorrparAbs-10 when required to provide 4 clusters. After the second step, we observed four behaviors for the AHC algorithms and classified them accordingly:

  1. (G1). BayesCov, BayesCorr, Infomut and InfomutNorm;
  2. (G2). SingleAbs and AverageAbs;
  3. (G3). Bic;
  4. (G4). CompleteAbs and WardAbs.

Fig 4. Toy example.

Result of clustering. Algorithms in the top row clustered X4 at the last step, while it was clustered at the second-to-last step for algorithms in the bottom row. Algorithms in the left column clustered X6 with {X3, X5}, while X6 was clustered with {X1, X2} for the algorithms in the right column. Parts in grey correspond to clustering steps that were not performed by BayesCovAuto or BayesCorrAuto in (G1), or Bic in (G3).

https://doi.org/10.1371/journal.pone.0137278.g004

Table 2. Toy example.

Result of spectral clustering with increasing number of clusters.

https://doi.org/10.1371/journal.pone.0137278.t002

While not a hierarchical clustering, SpecCorrAbs-10 provided successive clusterings that were in agreement with the methods in (G2). Algorithms in (G1) and (G3) clustered X6 with {X3, X5}, creating a cluster of variables related to lymphocytes. Algorithms in (G1) and (G2) (and SpecCorrAbs-10) agreed that a partitioning of the variables into two clusters should lead to {X1, X2, X3, X5, X6} on the one hand and {X4} on the other hand. This clustering was also found by SpecCorrpar-10 when constrained to extract two clusters from the data. It was also considered the best clustering by BayesCov and BayesCorr. For Bic, the optimal partitioning was composed of three clusters, namely {X3, X5, X6}, {X1, X2}, and {X4}, which is in agreement with what the methods in (G1) would yield for three clusters; furthermore, it still kept X4 separate from the other variables.

In Fig 5, we represented the evolution of the log10 Bayes factor during hierarchical clustering for BayesCov, BayesCorr and Bic. Note that, while the clustering steps are identical for BayesCov and BayesCorr, the log Bayes factors are similar but not identical. Likewise, while the first two clustering steps of Bic are identical to those of BayesCov and BayesCorr, one can see that, unlike BayesCov and BayesCorr, Bic considered the merging of {X3, X5} with X6 almost as likely as that of X1 with X2. Also, while the successive clusterings of X3 with X5 and then X6, as well as that of X1 with X2, strongly increased the Bayes factor for BayesCov and BayesCorr, the improvement brought by the clustering of {X3, X5, X6} with {X1, X2} in these methods was less important.

Fig 5. Toy example.

Result of clustering for BayesCorr, BayesCov and Bic. The values on the y axis correspond to the log10 Bayes factor in favor of the global clustering obtained at each step compared to a model where all variables are independent (step 0 of hierarchical clustering). The dotted lines correspond to clustering steps that were not performed with the automatic stopping rule.

https://doi.org/10.1371/journal.pone.0137278.g005

All in all, this analysis led us to the following conclusion: it is very likely that variables X1 and X2 belong to the same cluster of dependent variables, and similarly for variables X3 and X5. Also, there is very strong evidence in favor of the fact that X4 is independent from the other variables. Finally, we suspect that X3, X5 and X6 could belong to the same cluster of variables.

fMRI data

Cluster analysis is a popular tool to study the organization of brain networks in resting-state fMRI [39, 40], by identifying clusters of brain regions with highly correlated spontaneous activity. We applied the 13 methods that were found to have good performance on simulations (see Fig 6) to a collection of resting-state time series. The time series had 205 time samples and were recorded for 82 brain regions in 19 young healthy subjects. See §D of S1 File for details on data collection and preparation. The data are available online (http://figshare.com/articles/Atlanta_resting_state_fMRI_time_series_preprocessed_using_the_AAL_template/1521155).

Fig 6. Real resting-state fMRI data—between-method similarity.

Panel a: Rand indices between individual partitions generated with different methods, averaged across all subjects and scales (number of clusters). Panel b: Hierarchical clustering using matrix shown in Panel a as a similarity measure and Ward’s criterion.

https://doi.org/10.1371/journal.pone.0137278.g006

We first aimed at establishing which clustering algorithms yielded similar results on these datasets. We more specifically investigated a 7-cluster solution, as this level of decomposition has been examined several times in the literature [39, 41, 42]. Each clustering algorithm was applied to the time series of each subject independently. For a given pair of methods, the similarity between the cluster solutions generated by the two methods on the same subject was evaluated with the Rand index. Note that the raw, unnormalized Rand index was used here, as we did not compare cluster solutions with different numbers of clusters, which is the main motivation for the normalization. The unnormalized Rand index has a more intuitive interpretation than its adjusted counterpart. The Rand indices were averaged across subjects, hence resulting in a method-by-method matrix capturing the (average) similarity of clustering outputs for each pair of methods (see Fig 6). An AHC with Ward's criterion was applied to this matrix in order to identify clusters of methods with similar cluster outputs. We visually identified five clusters of methods that had high (> 0.7) average within-cluster Rand index. The largest cluster included classical AHCs such as CompleteAbs, AverageAbs, WardAbs, as well as the Bic and BayesCov methods proposed here. It should be noted that this class of algorithms generated solutions for this problem that were very close to one another (average within-cluster Rand index > 0.8). The BayesCorr method was also close to that large group of methods, but not quite as much as the aforementioned methods (average Rand index of about 0.7), and was thus singled out as a separate cluster. The spectral methods were split into two clusters, depending on whether they were based on correlation (SpecCorrAbs-1 and SpecCorrAbs-10) or partial correlation (SpecCorrparAbs-1 and SpecCorrparAbs-10). Finally, the two variants of mutual information (Infomut and InfomutNorm) generated solutions that were highly similar to those of SingleAbs. It was reassuring that the variants of the Bayes methods proposed here performed similarly to algorithms known to produce physiologically plausible solutions, such as Ward [43, 44]. While we found some analogy between BayesCorr, BayesCov, Infomut and InfomutNorm, it was intriguing that the variants of mutual information tested seemed to generate markedly different classes of solutions from the Bayes methods. We decided to examine the cluster solutions of these methods in more detail.

As a reference, we examined the cluster solutions generated by WardAbs, in addition to two variants of Bayes clustering that yielded slightly different solutions (BayesCov and BayesCorr), and normalized mutual information, InfomutNorm. To represent the typical behavior of each method across subjects, we generated a “group” consensus clustering summarizing the stable features of the ensemble of individual cluster solutions. This consensus clustering was generated by the evidence accumulation algorithm [45] outlined below. First, each partition of each subject was represented as a binary 81-by-81 adjacency matrix A = (Aij), where for each pair (i, j) of brain regions, Aij = 1 if areas i and j were in the same cluster, and Aij = 0 otherwise. The adjacency matrices were then averaged across subjects and selected methods, yielding an 81-by-81 stability matrix C = (Cij) where each element Cij coded for the frequency at which brain areas i and j fell in the same cluster. Finally, this stability matrix was used as a similarity matrix in an AHC with Ward's criterion to generate one consensus partition. The brain regions were grouped into the same consensus cluster if they had a high probability of falling into the same cluster on average across subjects and methods, hence the name consensus clustering.
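The evidence-accumulation procedure can be sketched as follows (our Python transcription using SciPy's Ward linkage, not the code used in the study; names illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform


def consensus_clustering(partitions, n_clusters):
    """Consensus clustering by evidence accumulation.

    partitions: list of integer label vectors (one per subject, all of the same length);
    returns the consensus labels and the stability matrix."""
    n_regions = len(partitions[0])
    stability = np.zeros((n_regions, n_regions))
    for labels in partitions:
        labels = np.asarray(labels)
        # adjacency matrix: 1 if two regions share a cluster for this subject
        stability += (labels[:, None] == labels[None, :]).astype(float)
    stability /= len(partitions)
    # Ward AHC on the dissimilarity 1 - stability, cut at the requested number of clusters
    dist = squareform(1.0 - stability, checks=False)
    consensus = fcluster(linkage(dist, method="ward"), t=n_clusters, criterion="maxclust")
    return consensus, stability
```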

Fig 7 represents the stability matrices and consensus clusters for the four methods of interest. As expected based on our first experiment on the similarity of cluster outputs, the WardAbs and BayesCov methods were associated with highly similar stability matrices and almost identical consensus clusters. Many areas of high consensus could be identified (with values close to 0 or 1), illustrating the very good agreement of the cluster solutions across subjects. The outline of the consensus clusters as well as a volumetric representation of the brain parcellation are presented in Fig 7b. Some of these clusters closely matched those reported in previous studies: cluster 7 can be recognized as the visual network, cluster 2 as the sensorimotor network, and clusters 6 and 3 as the anterior and posterior parts of the default-mode network, respectively [41, 46, 47]. By contrast with WardAbs and BayesCov, InfomutNorm tended to generate very large clusters, which was apparent both on the stability matrix and the consensus clusters. The BayesCorr method was intermediate between BayesCov and InfomutNorm in terms of cluster size. These decompositions into very large clusters do not fit current views on the organization of resting-state networks.

Fig 7. Real data—consensus clustering.

A consensus clustering was generated based on the average adjacency matrices across all subjects (Panel a). The (weighted) adjacency matrix associated with the consensus clustering is represented along with a volumetric brain parcellation (Panel b). The weights in the adjacency matrix were added to establish a visual correspondence with the volumetric representation. Note that the brain regions have been ordered based on the hierarchical clustering generated with WardAbs.

https://doi.org/10.1371/journal.pone.0137278.g007

Overall, our analysis on real fMRI data led to the following conclusions. Three big subsets of methods emerged: spectral methods, mutual information (with SingleAbs), and finally all the other methods. Application of this last group of methods, which included the Bayes variants proposed here, resulted in a plausible decomposition into resting-state networks. In the absence of ground truth, it is not possible to further comment on the relevance of the differences in cluster solutions identified by the three groups of methods. We still noted that the methods based on mutual information led to large clusters that were difficult to interpret. Our interpretation is that the strategies implemented in Infomut and InfomutNorm did not behave well for these datasets.

As a final computational note, the time required by the different methods to cluster data is summarized in Table 3. Although the differences in execution time may reflect the quality of the implementation, the methods proposed here were the slowest of the hierarchical methods, but were still faster than spectral clustering.

Table 3. Real resting-state fMRI data—computational cost.

Time required by each method to cluster one dataset.

https://doi.org/10.1371/journal.pone.0137278.t003

Discussion

Contributions

Summary.

We here proposed novel similarity measures well suited for the agglomerative hierarchical clustering of dependent variables. These measures rely on a Bayesian model comparison for multivariate normal random variables. On synthetic data with a known (ground truth) partition, hierarchical clustering based on the Bayesian measures was found to outperform several standard clustering procedures in terms of adjusted Rand index and classification accuracy. On the toy example, the Bayesian approaches led to results similar to those of mutual information clustering techniques, with the advantage of an automated thresholding. On the real fMRI data, the Bayesian measures led to results consistent with standard clustering methods, in contrast to methods based on mutual information, which exhibited a highly atypical behavior.

Bayesian normalization of mutual information.

A key feature of the Bayesian approach is its ability to take the dimension of the clusters into account. Dimensionality is an important issue in two respects (see §A of S1 File for an illustration). First, mutual information is an extensive measure that depends on the dimension of the variables. This has motivated the introduction of a normalization factor in the application of mutual information to hierarchical clustering [16, 17]. A second issue is the existence of a bias in the estimation of mutual information. This bias mechanically increases with the dimensionality of the variables. Because of the two issues described above, hierarchical clustering based on mutual information will tend to cluster unrelated but large variables rather than correlated variables of lower dimensions. As demonstrated on real fMRI data, the heuristic proposed by [16] does not provide a general solution to the issue of dimensionality. Furthermore, it removes some interesting features of mutual information, such as the additivity of the clustering measure. By contrast, the Bayesian approach takes the dimensionality into account in a principled way, providing a quantitative version of Occam’s factor ([48] Chap. 20). The Bayesian normalization is additive rather than multiplicative, thus preserving the additive properties of mutual information.

From similarity measure to log Bayes factor.

We defined the similarity measure s(Xi,Xj) between any pair of variables Xi and Xj as a log Bayes factor. At each step, the pair (Xi0,Xj0) that had the largest similarity measure was merged. Taking into account the unique features of s as the log Bayes factor defined in Eq (1) allowed us to have access to a global measure of fit as defined in Eq (22), as well as to provide an automatic stopping rule that behaved very well on simulated data. Going from a similarity measure to a log Bayes factor has other advantages that could take the clustering proposed here even further (see below).

Practical value of the Bayes/Bic clustering in fMRI.

The new alternatives to mutual information introduced in this paper (i.e., Bayes and Bic) proved useful for the analysis of resting-state fMRI. The benefits were particularly clear when compared to InfomutNorm, which tended to create large, inhomogeneous clusters. By contrast, both Bayes and Bic had a behavior similar to standard hierarchical clustering based on Pearson’s linear correlation. The possible benefits of Bayes and Bic over those canonical methods are still substantial. The mutual information first provides a multivariate measure of interaction that is well adapted to hierarchical brain decomposition [49, 50] and which has a clear interpretation in information theory [5153]. For these reasons, the mutual information for Gaussian variables is more appealing than a simple average of pairwise correlation coefficients across clusters. Because mutual information is measured between clusters, it is natural to build the clusters themselves based on this metric. A second benefit of the proposed approach is that Bic proved to be the most stable of all tested methods in the range of 5–15 clusters on real fMRI datasets. The increase in stability over Ward’s was modest, but significant. This advantage may become even more substantial if the clustering is performed in higher dimension, i.e., with smaller areas than the AAL brain parcellation or even at the voxel level.

Similarity vs. distance.

Clustering techniques are based on either a similarity measure or a distance measure. While the description of the present manuscript mostly relied on the notion of similarity, going from one concept to the other can generally be done with minor changes. For instance, standard hierarchical procedures which rely on the minimization of a distance to perform clustering (e.g., single, complete and average linkage) can be applied to cases where closeness is quantified by a measure of similarity, simply by using the opposite of the similarity matrix as a distance matrix. Although the resulting measure may not define a mathematically valid distance, this is not required for the procedure to work. Similarly, in a Bayesian framework, switching from a similarity measure to a distance measure is tantamount to switching from

$\ln \frac{p(S_{ij} \mid M_{\mathrm{dep}})}{p(S_{ij} \mid M_{\mathrm{ind}})} \quad \text{to} \quad \ln \frac{p(S_{ij} \mid M_{\mathrm{ind}})}{p(S_{ij} \mid M_{\mathrm{dep}})},$

that is, from the log ratio of the marginal model likelihoods in favor of dependent variables to the log ratio of the marginal model likelihoods in favor of independent variables.

Modeling choices

Choice of priors.

Any Bayesian analysis requires the introduction of prior distributions. In the present study, we needed the prior distribution for the covariance matrix associated with any clustering of X. Our choices were guided by one assumption, one rule of consistency, and one rule of simplicity. First, our assumption was to not assume a priori any sort of dependence between the covariance matrices associated with different clusters. This allowed for the decomposition of any prior as a product of independent priors, see Eqs (8) and (19). The rule of consistency was to consider that the prior for a given covariance matrix should not be contradictory at different levels of the hierarchy. This is why we set the prior distribution for the global covariance matrix Σ as an inverse-Wishart distribution and then derived the prior for any covariance matrix Σk as the prior distribution for Σ integrated over all parameters that do not appear in Σk; using such an approach, the resulting prior turned out to be an inverse-Wishart distribution as well. Last, the rule of simplicity is the one that dictated the use of inverse-Wishart distributions as prior distributions for the covariance matrices. Such a family of priors had the twofold advantage of being closed under marginalization and allowing for closed-form expressions of the quantities of interest. An inverse-Wishart distribution is characterized by two parameters: the degrees of freedom ν and the inverse scale matrix Λ. If Σ is a D-by-D matrix, we must have ν > D − 1 for the distribution to be normalized. Also, Λ must be positive definite. From there, we had two strategies. The first one was to set the degrees of freedom to the lowest value that still corresponded to a well-defined distribution (ν = D) and a diagonal scale matrix that optimized the marginal model likelihood. An alternative approach was to work with the sample correlation matrix, set ν = D + 1 and set Λ equal to the D-by-D identity matrix I, since this choice corresponds to a distribution that is associated with uniform marginal distributions of the correlation coefficients [27]. While we believe that our assumption and the rule of consistency are sensible choices, we must admit that we are not quite as content with the choice of inverse-Wishart distributions for priors. The major issue with such a family of priors is that they simultaneously constrain the correlation structure and the variances. By contrast, it seems intuitive that clustering should depend on the correlation structure only, not on the variances. As such, the prior on the variances should ideally be set independently from the correlation structure. Priors that separate variance and correlation have already been proposed [27]. Unfortunately, the use of such priors would make it impossible to provide a closed form for the marginal model likelihood. While numerical schemes could be implemented to circumvent this issue, they would render the proposed procedure much more complex and computationally burdensome. By contrast, the algorithm detailed here is rather straightforward. Also, the influence of the prior vanishes when the sample size increases. From a practical perspective, the three methods proposed here (two, BayesCov and BayesCorr, with different priors, and one, Bic, not influenced by the prior) exhibited similar behaviors and still outperformed other existing methods in the simulation study. We take this as evidence of the robustness of the method to the choice of prior.

Covariance vs. precision matrix modeling.

The presence of clusters of variables that are mutually independent is equivalent to having a covariance matrix Σ that is block diagonal, which is itself equivalent to having a precision (or concentration) matrix ϒ = Σ−1 that is also block diagonal. One could therefore solve the problem using precision matrices instead of covariance matrices. The corresponding calculations can be found in §E of S1 File. The main difference between the two approaches stems from the fact that a submatrix of the inverse covariance matrix is not equal to the inverse of the corresponding submatrix of the covariance matrix, that is, (Λ−1)k ≠ (Λk)−1, unless Λ is block diagonal; also, Wishart and inverse-Wishart distributions do not marginalize the same way. These differences are to be related to the fact that a submatrix of a covariance matrix is better estimated than the whole covariance matrix, while the same does not hold for a precision matrix. From there, we could expect the covariance-based approach to perform better than the precision-based approach, and the difference to increase with increasing D. This was confirmed on our synthetic data, where the precision-based approach behaved as well as BayesCov and BayesCorr for D = 6 but had worse results than these two approaches for D ∈ {10,20,40}. Besides, the automated stopping rule was much more efficient with BayesCov and BayesCorr than with the precision-based approach. As a final note, basing the method on concentration matrices yielded a slower algorithm, arguably because of the matrix inversions that are required.

Sample covariance matrix vs. full dataset.

Intuitively, the structure of dependence of a multivariate normal distribution is contained in its covariance matrix. None of the existing algorithms that we used here need the full dataset; they only require the sample covariance (or correlation) matrix. This is why we started with a likelihood model that only considers the covariance matrix [see Eqs (3) and (7)]. Rigorously, this model is only valid for N ≥ D; for N < D, one should resort to a model of the full data as being multivariate normal with a mean μ and covariance matrix Σ. Nonetheless, we kept the ‘intuitive’ approach as it has fewer hyperparameters, is easier to deal with, and leads to formulas that are simpler to interpret. Also, the resulting similarity measure [Eq (11)] is not restricted to N ≥ D, but is well defined for N < D as well. From a practical perspective, the ‘intuitive’ algorithm only requires the sample covariance matrix as input, while a full model would also require the sample mean. Finally, this simpler model exhibited good behavior on our synthetic data, even for small datasets.

Directions for future work

Computational costs.

Regarding the computational cost, measures derived from mutual information or a Bayesian approach are more demanding than standard methods such as Average or Ward. The derivation of the similarity matrix and the search for the most similar pairs of clusters are the two critical operations that can be optimized to decrease computation time. It is always possible to speed up these two steps by taking advantage of the fact that the similarity matrices of two successive iterations of the algorithm have many elements in common, as all but one element of a partition at a given iteration are identical to the elements of the partition of the previous iteration. Critically, in the case of Average and Ward methods, it is in addition possible to derive the similarity matrix at every iteration only based on the similarity matrix at initialization through successive updates using the Lance-Jambu-Williams recursive formula [54]. By contrast, other measures, including BayesCov, BayesCorr, Bic, Infomut, and InfomutNorm, have to be re-evaluated independently at every step, which means that the determinant of a covariance matrix with increasing size has to be computed. Finding an update formula analogous to Lance-Jambu-Williams for clustering methods based on variants of the mutual information would substantially accelerate these algorithms.

From deterministic to stochastic clustering.

Another extension would be to replace the deterministic rule of selecting the pair with largest similarity measure for merging by a probabilistic rule where the probability to cluster a given pair is given by the posterior probability of the resulting clustering.

Group analysis.

A last extension that could easily be implemented in the present framework is the generalization of the method to account for E different entities (e.g., subjects in fMRI) sharing the same structure but with potentially different covariance matrices for that structure. If each entity e is characterized by a variable X[e] and corresponding sample sum-of-square matrix S[e], one can perform AHC on each X[e] using S[e] and the corresponding similarity measure s[e]. However, with a straightforward modification of the present method, one could also perform a global AHC of all E covariance matrices considered simultaneously. Assuming that the covariance matrices of the different entities are independent given the common underlying structure, the resulting similarity measure is the sum of all individual similarity measures s[e], as sketched below.
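A minimal sketch of this group extension (ours), under the stated independence assumption; similarity can be any of the entity-level measures sketched earlier:

```python
def group_similarity(S_list, N_list, idx_i, idx_j, similarity):
    """Group-level similarity for merging X_i and X_j across E entities.

    S_list, N_list: per-entity sample (sum-of-square or covariance) matrices and sample sizes;
    the group measure is the sum of the entity-level log Bayes factors s[e](X_i, X_j)."""
    return sum(similarity(S, idx_i, idx_j, N) for S, N in zip(S_list, N_list))
```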

Generalization to other types of distribution.

Altogether, the Bayesian framework that we used provides a principled way to generalize our approach to distributions other than multivariate normal ones. Such a generalization would potentially account for a wide variety of situations, such as nonlinear dependencies or discrete distributions. This would widen the scope of possible applications of the technique, e.g., to genetics [4–7]. The issues related to this generalization are twofold. First, one needs a model of dependence. In the discrete case, one could think of multinomial distributions, originating from categorical, i.e., generalized Bernoulli, distributions ([55] §3.4). In the continuous case, multivariate normal distributions are a first-choice model beyond which it is not clear what to use. Multivariate Student-t distributions could be considered, even though our results on simulated data would tend to hint that the difference with multivariate normal distributions might not be that large. One could also consider using models where dependence is controlled independently of the marginal distributions, such as multivariate copulas [56, 57]. Another issue is the possibility to express the marginal posterior likelihood of the data given the model selected. For multivariate discrete variables, we expect it to be feasible, albeit computationally very challenging and sensitive to the type of prior distribution. For other distributions, obtaining a closed form might be out of reach. Nonetheless, the marginal posterior likelihood could be approximated using various criteria, such as the AIC [58] or variants thereof, e.g., AICc ([59]; [60] §2.3.1) or AICu ([60] §2.4.1), or the BIC [22], which naturally appeared in the present derivation. In any case, any approach beyond multivariate normal distributions would drastically increase the complexity of our approach, both in terms of inference and computation.

Application to truly hierarchical data.

In the present manuscript, we used a hierarchical algorithm as a way to extract an underlying structure of dependence from the data, under the assumption that there was one such structure. This approach provided a simple and efficient clustering algorithm with an interesting connection to mutual information. However, the method as presented here is not able to deal with data that are truly organized hierarchically. Extending it to handle such data would broaden the scope of the algorithm. One way to do so would be to use Dirichlet process mixtures [20, 61–65], together with a model of dependent variables.

Conclusion

In this paper, we proposed a procedure based on Bayesian model comparison to decide whether or not to merge Gaussian multivariate variables in an agglomerative hierarchical clustering procedure. The resulting similarity measure was found to be closely related to the standard mutual information, with some additional corrections for the dimensionality of the datasets. These new Bayesian alternatives to mutual information turned out to be beneficial to hierarchical clustering on simulations and real datasets alike. Because of the simplicity of its implementation, its good practical performance and the potential generalizations to other types of random variables, we believe that the approach presented here is a useful new tool in the context of hierarchical clustering.

Supporting Information

Acknowledgments

The authors are grateful to the members of the 1000 functional connectome consortium for publicly releasing the ‘Atlanta’ data sample, to the UNF (Unité de neuroimagerie fonctionnelle, Université de Montréal, Montréal, Qc, Canada) for providing them with computational resources, and to Mathieu Desrosiers (UNF) for his expertise and support in running the simulations on the grid engine. Part of this study was performed while G.M. was at the UNF.

Author Contributions

Conceived and designed the experiments: GM PB. Performed the experiments: GM PB. Analyzed the data: GM PB. Contributed reagents/materials/analysis tools: GM AM PB. Wrote the paper: GM AM PB. Conceived the theoretical model: GM.

References

1. Jain AK. Data clustering: 50 years beyond K-means. Pattern Recognition Letters. 2010;31(8):651–666.
2. Duda RO, Hart PE, Stork DG. Pattern Classification. 2nd ed. Wiley-Interscience; 2000.
3. D’haeseleer P. How does gene expression clustering work? Nature Biotechnology. 2005;23(12):1499–1501. pmid:16333293
4. Butte AJ, Kohane IS. Mutual information relevance networks: functional genomic clustering using pairwise entropy measurements. In: Proceedings of the 5th Pacific Symposium on Biocomputing. vol. 5; 2000. p. 415–426.
5. Zhou X, Wang X, Dougherty ER, Russ D, Suh E. Gene clustering based on clusterwide mutual information. Journal of Computational Biology. 2004;11(1):147–161. pmid:15072693
6. Dawy Z, Goebel B, Hagenauer J, Andreoli C, Meitinger T, Mueller JC. Gene mapping and marker clustering using Shannon’s mutual information. IEEE/ACM Transactions on Computational Biology and Bioinformatics. 2006;3(1):47–56. pmid:17048392
7. Priness I, Maimon O, Ben-Gal I. Evaluation of gene-expression clustering via mutual information distance measure. BMC Bioinformatics. 2007;8:111. pmid:17397530
8. Stausberg S, Elger CE, Lehnertz K. Hierarchical mutual information clustering for an improved classification of fMRI data. Clinical Neurophysiology. 2009;120(1):e33.
9. Benjaminsson S, Fransson P, Lansner A. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI. Frontiers in Systems Neuroscience. 2010;4:34. pmid:20721313
10. Kolchinsky A, van den Heuvel MP, Griffa A, Hagmann P, Rocha LM, Sporns O, et al. Multi-scale integration and predictability in resting state brain activity. Frontiers in Neuroinformatics. 2014;8:66. pmid:25104933
11. Shannon CE. A mathematical theory of communication. The Bell System Technical Journal. 1948;27:379–423, 623–656.
12. Kullback S. Information Theory and Statistics. Dover, Mineola, NY; 1968.
13. Cover TM, Thomas JA. Elements of Information Theory. Wiley Series in Telecommunications and Signal Processing. Wiley; 1991.
14. Steuer R, Kurths J, Daub CO, Weise J, Selbig J. The mutual information: Detecting and evaluating dependencies between variables. Bioinformatics. 2002;18(suppl. 2):S231–S240. pmid:12386007
15. Li M, Badger JJ, Chen X, Kwong S, Kearney P, Zhang H. An information-based sequence distance and its application to whole mitochondrial genome phylogeny. Bioinformatics. 2001;17(2):149–154. pmid:11238070
16. Kraskov A, Stögbauer H, Andrzejak RG, Grassberger P. Hierarchical clustering using mutual information. Europhysics Letters. 2005;70(2):278–284.
17. Kraskov A, Grassberger P. MIC: Mutual information based hierarchical clustering. In: Emmert-Streib F, Dehmer M, editors. Information Theory and Statistical Learning. Springer; 2009. p. 101–123.
18. Scott AJ, Symons MJ. Clustering methods based on likelihood ratio criteria. Biometrics. 1971;27(2):387–397.
19. Binder DA. Approximations to Bayesian cluster analysis. Biometrika. 1981;68:275–285.
20. Heller KA, Ghahramani Z. Bayesian hierarchical clustering. Gatsby Unit; 2005.
21. Kass RE, Raftery AE. Bayes factors. Journal of the American Statistical Association. 1995;90(430):773–795.
22. Schwarz G. Estimating the dimension of a model. The Annals of Statistics. 1978;6(2):461–464.
23. Anderson TW. An Introduction to Multivariate Statistical Analysis. 3rd ed. Wiley Series in Probability and Mathematical Statistics. John Wiley and Sons, New York; 2003.
24. Gelman A, Carlin JB, Stern HS, Rubin DB. Bayesian Data Analysis. 2nd ed. Texts in Statistical Science. Chapman & Hall, London; 2004.
25. Press SJ. Applied Multivariate Analysis. Using Bayesian and Frequentist Methods of Inference. 2nd ed. Dover, Mineola; 2005.
26. Abramowitz M, Stegun IA, editors. Handbook of Mathematical Functions. No. 55 in Applied Mathematics Series. National Bureau of Standards; 1972.
27. Barnard J, McCulloch R, Meng XL. Modeling covariance matrices in terms of standard deviations and correlations, with application to shrinkage. Statistica Sinica. 2000;10(4):1281–1311.
28. Friedman J, Hastie T, Tibshirani R. Sparse inverse covariance estimation with the graphical lasso. Biostatistics. 2008;9(3):432–441. pmid:18079126
29. Smith SM, Miller KL, Salimi-Khorshidi G, Webster M, Beckmann CF, Nichols TE, et al. Network modelling methods for fMRI. NeuroImage. 2011;54(2):875–891. pmid:20817103
30. Lian H. Shrinkage tuning parameter selection in precision matrices estimation. Journal of Statistical Planning and Inference. 2011;141(8):2839–2848.
31. von Luxburg U. A tutorial on spectral clustering. Max-Planck-Institut für biologische Kybernetik; 2006. TR-149.
32. Nijenhuis A, Wilf H. Combinatorial Algorithms for Computers and Calculators. 2nd ed. Academic Press, Orlando, FL, USA; 1978.
33. Wilf HS. East Side, West Side; 1999. Available from: http://www.math.upenn.edu/~wilf/lecnotes.html.
34. Bellec P, Lavoie-Courchesne S, Dickinson P, Lerch JP, Zijdenbos AP, Evans AC. The pipeline system for Octave and Matlab (PSOM): a lightweight scripting framework and execution engine for scientific workflows. Frontiers in Neuroinformatics. 2012;6:7. pmid:22493575
35. Rand WM. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association. 1971;66(336):846–850.
36. Hubert L, Arabie P. Comparing partitions. Journal of Classification. 1985;2(1):193–218.
37. Roverato A. Asymptotic prior to posterior analysis for graphical Gaussian models. In: Vichi M, Opitz O, editors. Classification and Data Analysis. Springer; 1999. p. 335–342.
38. Marrelec G, Benali H. Asymptotic Bayesian structure learning using graph supports for Gaussian graphical models. Journal of Multivariate Analysis. 2006;97:1451–1466.
39. Yeo BTT, Krienen FM, Sepulcre J, Sabuncu MR, Lashkari D, Hollinshead M, et al. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology. 2011;106:1125–1165. pmid:21653723
40. Kelly C, Toro R, Di Martino A, Cox CL, Bellec P, Castellanos FX, et al. A convergent functional architecture of the insula emerges across imaging modalities. NeuroImage. 2012;61(4):1129–1142. pmid:22440648
41. Bellec P, Rosa-Neto P, Lyttelton OC, Benali H, Evans AC. Multi-level bootstrap analysis of stable clusters in resting-state fMRI. NeuroImage. 2010;51(3):1126–1139. pmid:20226257
42. Power JD, Cohen AL, Nelson SM, Wig GS, Barnes KA, Church JA, et al. Functional network organization of the human brain. Neuron. 2011;72(4):665–678. pmid:22099467
43. Thirion B, Varoquaux G, Dohmatob E, Poline JB. Which fMRI clustering gives good brain parcellations? Frontiers in Neuroscience. 2014;8:167. pmid:25071425
44. Orban P, Doyon J, Petrides M, Mennes M, Hoge R, Bellec P. The richness of task-evoked hemodynamic responses defines a pseudohierarchy of functionally meaningful brain networks. Cerebral Cortex. In press.
45. Fred ALN, Jain AK. Combining multiple clusterings using evidence accumulation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2005;27(6):835–850. pmid:15943417
46. Salvador R, Suckling J, Coleman M, Pickard JD, Menon D, Bullmore E. Neurophysiological architecture of functional magnetic resonance images of human brain. Cerebral Cortex. 2005;34(4):387–413.
47. van den Heuvel M, Mandl R, Hulshoff Pol H. Normalized cut group clustering of resting-state fMRI data. PLoS ONE. 2008;3(4):e2001. pmid:18431486
48. Jaynes ET. Probability Theory: The Logic of Science. Cambridge University Press, Cambridge; 2003.
49. Tononi G, Sporns O, Edelman GM. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proceedings of the National Academy of Sciences of the USA. 1994;91(11):5033–5037. pmid:8197179
50. Marrelec G, Bellec P, Krainik A, Duffau H, Pélégrini-Issac M, Lehéricy S, et al. Regions, systems, and the brain: hierarchical measures of functional integration in fMRI. Medical Image Analysis. 2008;12(4):484–496. pmid:18396441
51. Watanabe S. Information theoretical analysis of multivariate correlation. IBM Journal of Research and Development. 1960;4(1):66–82.
52. Joe H. Relative entropy measures of multivariate dependence. Journal of the American Statistical Association. 1989;84:157–164.
53. Studený M, Vejnarová J. The multiinformation function as a tool for measuring stochastic dependence. In: Jordan MI, editor. Proceedings of the NATO Advanced Study Institute on Learning in Graphical Models; 1998. p. 261–298.
54. Batagelj V. Generalized Ward and related clustering problems. In: Bock HH, editor. Classification and Related Methods of Data Analysis. North-Holland, Amsterdam; 1988. p. 67–74.
55. Papoulis A. Probability, Random Variables, and Stochastic Processes. International student edition. McGraw-Hill Series in Systems Science. McGraw-Hill Kogakusha, Tokyo; 1965.
56. Nelsen RB. An Introduction to Copulas. Springer, New York; 1999.
57. Fischer M. Multivariate copulae. In: Kurowicka D, Joe H, editors. Dependence Modeling: Vine Copula Handbook. World Scientific; 2011. p. 19–36.
58. Akaike H. A new look at the statistical model identification. IEEE Transactions on Automatic Control. 1974;19(6):716–723.
59. Burnham KP, Anderson DR. Multimodel inference: understanding AIC and BIC in model selection. In: Amsterdam Workshop on Model Selection; 2004. p. 261–304.
60. McQuarrie ADR, Tsai CL. Regression and Time Series Model Selection. World Scientific; 1998.
61. Heller KA. Efficient Bayesian methods for clustering. Gatsby Computational Neuroscience Unit, University College London; 2007.
62. Savage RS, Heller K, Xu Y, Ghahramani Z, Truman WM, Grant M, et al. R/BHC: fast Bayesian hierarchical clustering for microarray data. BMC Bioinformatics. 2009;10:242. pmid:19660130
63. Cooke EJ, Savage RS, Kirk PDW, Darkins R, Wild DL. Bayesian hierarchical clustering for microarray time series data with replicates and outlier measurements. BMC Bioinformatics. 2011;12:399. pmid:21995452
64. Darkins R, Cooke EJ, Ghahramani Z, Kirk PDW, Wild DL, Savage RS. Accelerating Bayesian hierarchical clustering of time series data with a randomised algorithm. PLoS ONE. 2013;8(4):e59795. pmid:23565168
65. Sirinukunwattana K, Savage RS, Bari MF, Snead DRJ, Rajpoot NM. Bayesian hierarchical clustering for studying cancer gene expression data with unknown statistics. PLoS ONE. 2013;8(10):e75748. pmid:24194826