Ranking of communities in multiplex spatiotemporal models of brain dynamics
Applied Network Science volume 7, Article number: 15 (2022)
Abstract
As a relatively new field, network neuroscience has tended to focus on aggregate behaviours of the brain averaged over many successive experiments or over long recordings in order to construct robust brain models. These models are limited in their ability to explain dynamic state changes in the brain which occur spontaneously as a result of normal brain function. Hidden Markov Models (HMMs) trained on neuroimaging time series data have since arisen as a method to produce dynamical models that are easy to train but can be difficult to fully parametrise or analyse. We propose an interpretation of these neural HMMs as multiplex brain state graph models we term Hidden Markov Graph Models. This interpretation allows dynamic brain activity to be analysed using the full repertoire of network analysis techniques. Furthermore, we propose a general method for selecting HMM hyperparameters in the absence of external data, based on the principle of maximum entropy, and use this to select the number of layers in the multiplex model. We produce a new tool for determining important communities of brain regions using a spatiotemporal random walk-based procedure that takes advantage of the underlying Markov structure of the model. Our analysis of real multi-subject fMRI data provides new results that corroborate the modular processing hypothesis of the brain at rest, as well as contributing new evidence of functional overlap between and within dynamic brain state communities. Our analysis pipeline provides a way to characterise dynamic network activity of the brain under novel behaviours or conditions.
Introduction
The brain activity of healthy subjects at rest is commonly used as a baseline against which a wide range of both pathological (e.g. dementia) and healthy (e.g. sleep) conditions are compared (de Vos et al. 2018; Mitra et al. 2017; Pullon et al. 2020). Often, activity under one condition is modelled as a single static pattern of activity, ignoring large scale dynamic shifts. However, neuroimaging researchers have begun to recognise that subjects move through a wide array of brain activity configurations even while relaxed or asleep (Vidaurre et al. 2016; Karahanoğlu and Van De Ville 2017; Suk et al. 2016). A brain state is a configuration of brain activity evoked in response to a stimulus or to facilitate more complex responses (Brown 2006). Neuroimaging time series provide a way to observe these reconfigurations as spatial patterns of metabolic or electrophysiological activity, termed functional activity (Papo 2019). In order to generate these patterns, brain regions must coordinate through transfer of information. This exchange between brain regions defines the state’s functional connectivity. Functional activity can therefore be interpreted as a realisation from a brain state graph model which describes brain dynamics and the relationships between brain regions in the state (Bassett and Sporns 2017). This relates to models of the relationship between observed state and environment in which states are realisations of a so-called Markov blanket taking input from the environment to create an internal model of the external and internal environment (Hipólito et al. 2021; Kirchhoff et al. 2018). In these graph models, nodes are anatomically or functionally defined brain regions and edge strength is determined by the level of information shared between these regions (their functional connectivity).
The dynamics of communities of brain regions are of particular interest due to the important functional roles some communities play. Previous work has focused on deriving communities of brain regions using a number of methods including dynamic community detection (Martinet et al. 2020). State space models have also been proposed that focus on the changing community structure within brain states from inferred functional connectivity (Ting et al. 2020; Liu et al. 2018). Our novel framework uses a Hidden Markov Model (HMM) approach to construct a model we term a Hidden Markov Graph Model (HMGM). This framework is fully unsupervised, requiring no sliding window-based estimation or thresholding of the functional connectivity, and no prior assumptions about the number of states or embedding dimension.
We analyse brain state dynamics as a multiplex graph with modular (community) structure at both the temporal (state switching dynamics) and spatial (brain region communication) levels. To distinguish the most functionally relevant spatial communities from the temporal communities of states and from less functionally relevant spatial communities, we reserve the term network. Network here is used exclusively to refer to modular subgraphs of coordinated brain regions within a state that are functionally important (rather than being synonymous with the term graph). These brain networks form the basis of our understanding of the functional connectivity pathways within the brain and are integral to our understanding of the role of changing brain configurations in wakefulness and beyond (Rosazza and Minati 2011).
We have developed a method based on the HMGM framework to identify the importance of possible brain networks using random walks, ascribing to each module in each state an importance, or T-score, based on its functional connectivity and coactivation. Notably, the method does not apply random walk information to partition the graph but rather to determine the relative importance of communities within a partition (Rosvall and Bergstrom 2008). Our method provides a means to characterise dynamic functional activity under novel conditions or behaviours. As a proof of principle, we apply our pipeline to neuroimaging data from subjects at rest and provide new evidence for both modular and nested functional activity in the awake brain.
Static brain state models
In the simplest brain state models (see Fig. 1A), functional activity arises as noisy realisations of a single static brain state. Considerable progress has been made using this static framework to characterise the vast repertoire of activity patterns observed during wakefulness. Models using both weighted and unweighted graph structures derived from Independent and Principal Component Analysis (ICA and PCA respectively) have revealed key modules within the brain across a wide range of conditions. These include both behavioural and task-based conditions (sensory, motor etc.) and resting state conditions in the absence of direct stimulation (Calhoun and Adali 2012; Smith et al. 2009; Kokkonen et al. 2009; Sämann et al. 2011; Calhoun et al. 2004). Recent results from both electrophysiological data derived from Electroencephalography (EEG) and Blood Oxygen Level Dependent (BOLD) data derived from functional MRI (fMRI) suggest that weighted network models produce more reproducible and robust results than binarised network models (Jalili 2016; Ran et al. 2020; Smith et al. 2017).
Studies using static models have helped neuroscientists to build up vast libraries of associations between cognitive functions and specific brain regions (Poldrack et al. 2011). However, the static approach makes it difficult to account for intersubject variability as well as dynamic changes in state that occur in time as different cognitive and functional demands are placed on the brain (Michael et al. 2014). These demands result in activity in one moment that is often functionally incompatible with activity in the next, driving the need for dynamic approaches to brain state modelling (Sridharan et al. 2008).
Dynamic brain state models
Moving window-based approaches produce a series of snapshots of the activity pattern of the brain. Although these methods have proved incredibly useful in understanding changing brain state, they are limited in their ability to reliably detect changes in functional connectivity between regions over time (Hindriks et al. 2016). By contrast, state space models (Fig. 1B), and in particular Hidden Markov Models (HMMs) (Vidaurre et al. 2017; Chen et al. 2016), have arisen as an alternative to the sliding window approach and use a number of simplifying assumptions to improve on these models’ tractability and specifiability (Suk et al. 2016). More recently, dynamic community detection methods have been proposed which capture many of the same features as dynamic state space models; however, these methods often still rely on sliding window approximations of functional connectivity to construct a series of dynamic networks (Martinet et al. 2020; Liu et al. 2018).
The chief underlying assumption of HMMs is that brain dynamics can be parametrised by a finite state, positive recurrent, Markov process where functional activity and connectivity is determined by an observation model, typically a multivariate normal distribution (Vidaurre et al. 2016; Ting et al. 2020). In these models, dynamic switching between states can be interpreted as a temporal graph of probable state transitions (Fig. 1C). The full model can thus be interpreted as a nested, or multiplex graph in which the layers are brain states (with brain regions as nodes) and the interlayer directed edges are transition probabilities between state layers.
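To make the multiplex interpretation concrete, the following Python sketch (a toy construction with synthetic data, not the fitted model from this study) builds K state layers, each a weighted graph over D regions derived from a state covariance, with a K × K Markov transition matrix supplying the directed interlayer edges:

```python
import numpy as np

rng = np.random.default_rng(0)
K, D = 3, 5  # number of states (layers) and brain regions (nodes)

# Interlayer edges: a K x K Markov transition matrix (rows sum to 1).
P = rng.random((K, K))
P /= P.sum(axis=1, keepdims=True)

# Intralayer graphs: one weighted adjacency per state, here derived from
# a random covariance via its correlation matrix (diagonal removed).
layers = []
for _ in range(K):
    B = rng.standard_normal((D, D))
    cov = B @ B.T                       # a valid covariance matrix
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)       # correlation matrix
    np.fill_diagonal(corr, 0.0)
    layers.append(np.abs(corr))         # edge weight = |correlation|

assert np.allclose(P.sum(axis=1), 1.0)
assert all(W.shape == (D, D) for W in layers)
```

The full multiplex model is then the pair (P, layers): a walker can step between layers according to P and between regions according to the layer adjacencies.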
Novel multiplex approach
A state characterises a pattern of activity across the whole brain at a given time; however, it is most often characterised in terms of just a few key subgraphs of interacting brain regions (see Fig. 1C) (Ting et al. 2020; Liu et al. 2018). Much progress has been made to characterise the vast repertoire of activity patterns observed during resting states and task performance. These enquiries have given rise to a number of recurring and important networks, associated with a wide range of brain functions and behaviours (Shulman et al. 1997; Biswal et al. 1995; Menon 2011). The most prevalent and widely characterised of these are the so-called resting state networks, termed the Default Mode (DMN), Salience (SN) and Central Executive (CEN) Networks, as well as those active during sensory and motor tasks, including the sensorimotor, visual and auditory networks (Ryali et al. 2016). The mechanisms underlying these networks are interdependent, with recruitment of one network often necessitating the further recruitment of other networks (Karahanoğlu and Van De Ville 2017). Conversely, some networks are known to be largely mutually antagonistic in activity, with DMN and SN activity generally being anticorrelated with sensorimotor-like activity in resting wakefulness (Vidaurre et al. 2018).
Although state space modelling of brain dynamics is a relatively young field, one key finding has been the multiscale modularity of brain states. In particular, Louvain modularity-based community detection applied to the temporal graph of state transitions has shown that states are organised modularly into communities under a variety of conscious conditions, including resting wakefulness and sleep (Vidaurre et al. 2017; Stevner et al. 2019).
In order to construct a set of plausible brain state models we train a number of HMMs with different numbers of states on resting state data. We then utilise our novel cross-validated model selection procedure, based on the maximum entropy principle, to select the HMM that best generalises across subjects (Jaynes 1957). We convert the selected HMM into a dynamic graph model by transforming the state covariance matrices into weighted, directed graphs based on the regional correlations within each state, with node attributes given by the state mean activity. The intralayer network, which we term the Markov Information Matrix of the states, is motivated by an interpretation of brain states as realisations of an underlying Markov blanket or network as in Hipólito et al. (2021).
Ranking the importance of networks within the brain
We perform two-level Louvain community detection to discover important communities of brain states (temporal communities) and brain regions within a state (spatial communities). We use community centrality statistics to identify the hub states of key activity in each network. Within each hub state we look at spatial community structure to determine the key actors in the dynamics of the model that may be important to the overall dynamics of wakefulness across subjects.
Random walks provide an effective way to construct representative samples from a graph in a way that preserves local structure (Dupont et al. 2006; Leskovec and Faloutsos 2006). In complex interdependent data sets, random walk sampling can be used to remove baseline levels of interdependence and discern the most robust relationships in a one dimensional model, by conditioning out local inhomogeneity in noisy activity (Luecken et al. 2018). Here, we extend this principle to network sampling across two dimensions, space and time. Our method is based on a nonparametric random walk statistic that combines a temporal walk between layers with a spatial walk between regions. We use random walks to sample plausible patterns of functional network activity from the local functional activity background. We then use the samples as a benchmark against which to score functional coordination in our spatial communities. This statistical score, termed the T-score, is simple to compute given the graph model and putative network and is inspired by a similar method for analysing large, complex protein graphs with meta-layer information (Luecken et al. 2018).
Our method allows us to determine which spatial communities are highly coactivated or inactivated relative to the expected dynamics across states in that brain area, providing a generalisable procedure to determine functionally relevant brain state communities. Our within state community functional associations largely agree with macroscopic analysis of the state functional activity maps, but provide an additional layer of information in the form of networks that provide clarification and depth to our understanding of brain states at the mesoscale.
Metatextual and network analysis of brain state models
We use the powerful meta-analysis tool Neurosynth (Yarkoni et al. 2011) to determine functional associations between each brain state, its most important networks, and important functional terms from the literature. Neurosynth provides scores based on either correlations between brain images and the occurrence of a predefined set of terms in the literature or, in conjunction with the NiMARE package (NiMARE 2019), a posteriori probabilities of associations between the image and an exhaustive list of literature terms. Using these tools and images derived from our brain states, termed functional activity maps, we provide evidence to corroborate the modular processing hypothesis in resting wakefulness (Reichardt and Bornholdt 2006). Key to our findings is that the states associated with resting state networks tend to self-associate while being anticorrelated with sensorimotor associated states.
Methods
In the following sections, “Acquisition and preprocessing of fMRI data for HMM modelling” and “Model specification and generalisability”, we explain the preprocessing of the data and define the state space (HMM) model and novel model selection criterion. We will see that each brain state s can be thought of as a pattern of activity represented by a weighted graph \(G(s)=\{V,a(s),W(s)\}\) in which each node is a brain region \(x\in V\) (with \(|V|=D\) nodes), each with a level of functional activity \(a(s)^x\) attributed to x. Similarly, each edge in G(s) is weighted by \(W(s)^{x,y}\in W(s)\), the level of information flow from region x to region \(y\in V\) (edge absence is represented by \(W(s)^{x,y}=0\)), with \(W(s)^{x,y}\ne W(s)^{y,x}\) in general.
As we shall show, Hidden Markov Modelling with our new model selection method, provides a means to construct a dynamic state space model from multisubject fMRI time series data in a data driven way. We use interregional correlations to determine the state graphs and use the temporal relationships between states to determine the directed interlayer edges (see Fig. 1C). Lastly, in “Louvain and hierarchical temporal clustering”, “Community hub selection”, “Identifying functionally important spatial communities” and “Analysis of states and communities with NeuroSynth” sections we set out methods to explore the spatiotemporal modular and functional structure of these multiplex brain state models.
Acquisition and preprocessing of fMRI data for HMM modelling
Ten minutes of whole brain fMRI activity were recorded separately for each of \(N=15\) wakeful subjects (with eyes closed) as part of a previous study (Mhuircheartaigh et al. 2013). The brain volumes produced by the scanner were aligned to the MNI152 standard brain template (Fonov et al. 2011). This resulted in a high dimensional time series of each subject’s fMRI (BOLD) signal for each voxel, with a temporal resolution of 3 s and a spatial resolution of 2 mm\(^3\) (Woolrich et al. 2009).
Recordings were collected separately from each subject. Of the 200 volumes recorded per subject (each time point is one volume), four dummy volumes were removed to exclude any non-steady-state magnetisation effects. This was followed by motion correction with MCFLIRT (Motion Correction FMRIB’s Linear Image Registration Tool), spatial smoothing using a Gaussian kernel of 5 mm full width at half maximum, global intensity normalisation, and temporal high-pass filtering with a cutoff of 0.02 Hz to remove low frequency scanner drift. Automated removal of non-brain tissue was initially performed before statistical analysis using BET (Brain Extraction Tool), with further manual correction in FSLview. Further spatiotemporal artefact removal was carried out by independent component analysis in FSL MELODIC (Woolrich et al. 2009).
We selected regions of interest in our study based on the Harvard-Oxford (HO) probabilistic cortical and subcortical brain parcellations, which assign to each voxel a probability for each brain region. We assign each voxel a unique region identity according to the maximum probability across regions in the HO parcellation. Excluding white matter regions, the resulting parcellation of 63 Regions of Interest (ROIs) includes 48 cortical and 15 subcortical brain regions (Caviness et al. 1996; Makris et al. 2006). ROI time series were calculated using the ROI spatial mean BOLD signal at each time point. This results in a \(D=63\) dimensional time series with \(T=196\) time points per subject. Each of the D constituent ROI time series was temporally mean subtracted and normalised by its standard deviation.
Model fitting presents two challenges: the first is that the time taken to fit the model scales with parametric complexity, and the second is that a poorly parametrised model may lead to overfitting or underfitting. To address these challenges, dimensionality reduction by principal components of the original D dimensional time series was performed to reduce parametric complexity while also reducing overall noise. This approach is justified by the generally low embedding dimension of most real world data, including neuroimaging data (Ma et al. 2018; Shen and Meyer 2008). In order to balance dimensionality reduction and retention of signal, Parallel Analysis is used (see Additional file 1: Section 1) to obtain a \(D\times d\) eigenmatrix A of the first \(d<D\) eigenvectors (Horn 1965). This method assumes roughly linear separability of uncorrelated noise from signal, but has been shown to outperform a number of methods, including maximum likelihood estimation, in simulation (Humphreys and Montanelli 1975). The reduced d dimensional time series \(\{X_{n,t}^*\}_{t\in {\mathbb {N}}_T}\) is then used to train a noise-reduced HMM of the data.
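Parallel Analysis itself is straightforward to sketch: eigenvalues of the observed correlation matrix are retained only while they exceed the mean eigenvalues obtained from random data of the same shape. The following Python sketch illustrates Horn's procedure on synthetic data with two latent factors (a toy illustration, not this study's pipeline):

```python
import numpy as np

def parallel_analysis(X, n_iter=100, seed=0):
    """Horn's parallel analysis: keep components whose correlation-matrix
    eigenvalues exceed the mean eigenvalues of same-shaped random data."""
    rng = np.random.default_rng(seed)
    T, D = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    rand = np.zeros(D)
    for _ in range(n_iter):
        R = rng.standard_normal((T, D))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))  # retained dimension d

# Toy data: 2 strong latent factors embedded in 10 channels plus noise.
rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 2))
X = Z @ rng.standard_normal((2, 10)) + 0.1 * rng.standard_normal((500, 10))
d = parallel_analysis(X)
assert d == 2  # the two planted factors are recovered
```

The retained d columns of the eigenvector matrix then give the \(D\times d\) projection A used to reduce the time series.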
Model specification and generalisability
We use the HMM-MAR package to train HMMs with multivariate normal observations by Variational Bayes (Vidaurre et al. 2016), whilst separating the data by subject into distinct trials of length T. For further details on model fitting see Vidaurre et al. (2016). Figure 2A shows how observations of the fMRI BOLD signal at each time point are modelled across subjects. Dynamics for each subject are modelled and fitted using a shared finite set of states \({\mathcal {S}}=\{1,2, \ldots ,K\}\) and Markov transition matrix P.
We give a brief overview of HMM dynamics. We note that a key parameter for these dynamics, the number of brain states \(K=|{\mathcal {S}}|\) that best generalises these dynamics across subjects, is unknown. Consequently, we introduce a novel framework for selecting K based on an information theoretic criterion that maximises generalisability by maximising the entropy of the state dynamics across subjects.
In each HMM state trajectory, the initial state of each subject’s trial is selected independently at random. Under the Markov assumption of the model the resulting subject-specific state dynamics are assumed independent realisations of the same stochastic process, \(S_{n,t}\). For \(t>1\), \(S_{n,t}\) is conditionally dependent on the previous time step \(S_{n,t-1}\) so that
$$\begin{aligned} P(S_{n,t}=s' \mid S_{n,t-1}=s)=P_{s,s'} \end{aligned}$$
for \(s'\in {\mathcal {S}}\). Each brain state \(s\in {\mathcal {S}}\) is associated with an observation model \(O(s)\sim MVN(\mu ^*(s),\Sigma ^*(s))\). The observations \(O(S_{n,t})\) model the dimensionally reduced brain data \(X_{n,t}^*\). In order to obtain the full model, the reduced model is then back-projected into D dimensional brain region space [see Eq. (2)].
Novel model selection criterion based on fractional occupancy
The Markov chain defined by P and any given initial state \(s_0\in {\mathcal {S}}\) has a unique stationary distribution \(\pi _s\) that is independent of \(s_0\), assuming the chain is irreducible and the states are positive recurrent. The probability \(\pi _s\) is the long run probability of the reoccurrence of state s. Selection of the number of these hidden states is carried out by cross-validated entropy maximisation over the related fractional occupancy distribution. The fractional occupancy distribution \(\kappa\) is defined by subject n for each state s and given by
$$\begin{aligned} \kappa _{n,s} = \frac{1}{T}\sum \limits _{t=1}^{T} P(S_{n,t}=s\mid {\mathcal {M}},X) \end{aligned}$$
where \(P(S_{n,t}=s\mid {\mathcal {M}},X)\) is the posterior probability of state s occurring at time t given the model \({\mathcal {M}}\) and data X. The fractional occupancy \(\kappa _{n,s}\) is the probability of finding subject n in s over the entire trial of length T. The distribution \(\kappa\) for subject n is related to the stationary distribution \(\pi _s\) by the well-known limit
$$\begin{aligned} \kappa _{n,s} \rightarrow \pi _s \quad \text {as } T \rightarrow \infty \end{aligned}$$
That is to say that \(\kappa\) asymptotically approximates the long run average state dynamics of the model as trial length increases. Knowing this, our goal is to select the model whose fractional occupancy maximises the entropy pooled across subjects by maximising the objective function
$$\begin{aligned} H(k) = -\sum \limits _{n=1}^{N}\sum \limits _{s=1}^{k} \kappa _{n,s}\log \kappa _{n,s}, \quad \kappa _{n,s} = \frac{1}{T}\sum \limits _{t=1}^{T}P(S_{n,t}=s\mid {\mathcal {M}}(n,k),X_n) \end{aligned}$$
where the model \({\mathcal {M}}(n,k)\) is the model trained using all trials except the data from subject n assuming k hidden states, and \(X_n\) is the trial data from subject n (see Fig. 2B).
By selecting the initial number of states \(K={{\,\mathrm{arg\,max}\,}}_k H(k)\), we appeal to the information theoretic principle of maximum entropy, which states that the model which maximises the uncertainty over the data tends to be the one that best approximates the true data distribution (Jaynes 1957). More specifically, our goal is to obtain a set of states with similar uncertainty about subject behaviour over the course of the experiment. We shall see in “Entropy relates to model selection” section that the goal of state-subject uncertainty maximisation relates closely to that of optimal model selection. We note that, to the best of our knowledge, this is the first application of such a subject-specific entropic criterion in state space model selection.
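As an illustrative sketch of the criterion (assuming per-subject posterior state probabilities are available, e.g. from an HMM toolbox; the helper names are ours), the pooled fractional occupancy entropy can be computed as follows, with H(k) evaluated for each candidate k and the arg max selected:

```python
import numpy as np

def fractional_occupancy(gamma):
    """gamma: (T, K) posterior state probabilities for one subject's trial.
    Returns the K-vector of time-averaged state probabilities (kappa)."""
    return gamma.mean(axis=0)

def pooled_entropy(gammas):
    """Sum of fractional-occupancy entropies across held-out subjects.
    Each gamma is assumed to come from the model trained without that
    subject (the leave-one-out scheme described in the text)."""
    H = 0.0
    for g in gammas:
        k = fractional_occupancy(g)
        k = k[k > 0]                    # avoid log(0)
        H -= np.sum(k * np.log(k))
    return H

# A model whose states are used evenly (high entropy) scores above one
# that parks every subject in a single state (zero entropy).
T, K = 196, 4
even = [np.full((T, K), 1.0 / K) for _ in range(3)]
stuck = [np.eye(K)[np.zeros(T, dtype=int)] for _ in range(3)]
assert pooled_entropy(even) > pooled_entropy(stuck)
```

In practice one would loop over candidate state counts k, refit the leave-one-subject-out models, and keep the k with the largest pooled entropy.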
The state Markov information graph
First, model parameters \(\mu ^*(s)\) and \(\Sigma ^*(s)\) for state s from the HMM \({\mathcal {M}}\) are back-projected using the transpose of the eigenmatrix A to obtain a model in D dimensional brain space, so that the full D dimensional model has mean \(\mu (s)\) and variance \(\Sigma (s)\) defined over the ROIs and given by
$$\begin{aligned} \mu (s) = A\mu ^*(s), \quad \Sigma (s) = A\Sigma ^*(s)A^{\top } \end{aligned}$$
Using the full model, each state s has normally distributed observations with mean \(\mu (s)^x\) and covariance \(\Sigma (s)^{x,y}\), for brain regions \(x,y \in V\). We use these to define a graph \(G(s)=(V,a(s),W(s))\) over the set of D brain regions, with node weights a(s) and edge weights W(s), which we take to be a proxy for the information flow between regions. More specifically, we estimate the weights W(s) from the correlation matrix \(\rho (s)\), as derived from the state covariance matrix \(\Sigma (s)\).
Here, \(a(s)^x=\mu (s)^x\) is the mean regional functional activity at brain region x in s. The weighted edge (directed information flow) from region x to region y is
$$\begin{aligned} W(s)^{x,y} = \frac{|\rho (s)^{x,y}|}{\sum \limits _{z\ne x}|\rho (s)^{x,z}|} \end{aligned}$$
The resulting edge weights matrix W(s), defines a Markov transition matrix, a model of information flow between brain regions in state s in which information flow between x and y is defined both into x from y, \(W(s)^{x,y}\) and out of x to y, \(W(s)^{y,x}\). Note this defines a potentially asymmetric and directed graph with edges (information flow) both into and out of x. The rationale for using such a Markov transition matrix to define edge weights is to convert the entire network into a dynamic Markov graph in which information is propagated probabilistically both in time and space. This is useful in particular in “Analysis of states and communities with NeuroSynth” section.
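A minimal sketch of this construction in Python (the exact normalisation used to obtain a row-stochastic matrix is our assumption; the paper defines W(s) from the state correlation matrix \(\rho (s)\)):

```python
import numpy as np

def markov_information_graph(cov):
    """Convert a state covariance into a directed Markov transition
    matrix over brain regions: take absolute correlations, drop
    self-loops, and row-normalise so each row sums to one."""
    sd = np.sqrt(np.diag(cov))
    rho = cov / np.outer(sd, sd)          # correlation matrix
    W = np.abs(rho).copy()
    np.fill_diagonal(W, 0.0)              # no self-loops
    W /= W.sum(axis=1, keepdims=True)     # row-normalise
    return W

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
W = markov_information_graph(B @ B.T)     # B @ B.T is a valid covariance
assert np.allclose(W.sum(axis=1), 1.0)    # valid transition matrix
assert not np.allclose(W, W.T)            # generally asymmetric
```

Row normalisation is what makes the graph directed: although \(|\rho (s)|\) is symmetric, different regions have different total connectivity, so the flow out of x to y generally differs from the flow out of y to x.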
Louvain and hierarchical temporal clustering
We perform Louvain modularity detection on the directed Markov transition and information graphs (Blondel et al. 2008). Suppose \(G=(V,E,W)\) is a potentially directed and weighted graph with vertex set V, edge set E and weight matrix W. The Louvain algorithm involves the greedy optimisation of an objective function \(Q({\mathcal {U}})\), termed the modularity score, for \({\mathcal {U}}\) a partition of V (see Additional file 1: Section 2) (Girvan and Newman 2002; Newman 2006). The algorithm allows for a resolution parameter \(\gamma\), which determines the relative size of communities, with community size shrinking towards single nodes as \(\gamma \rightarrow \infty\) (Lambiotte et al. 2008).
We use a form of the Louvain optimisation algorithm originally designed for undirected networks but complement this with a version of the modularity \(Q({\mathcal {U}})\) which has been adapted for directed networks in Nicosia et al. (2009) and Leicht and Newman (2008). In order to assess the validity of this approach, a rough measure of the degree of symmetry in a weight matrix W can be given by the fraction of the energy of the adjacency matrix (as measured by the Frobenius norm) that is contributed by the symmetric part, \(\text {Sym}(W)\) (see Additional file 1: Section 3) (Aggarwal 2020).
In the case of temporal communities, we determine the significance of the community partitioning by comparing \(Q({\mathcal {U}})\) to an empirical distribution composed of modularity scores from 10,000 partitions constructed by random permutation of the community labels. In addition, in order to examine the statesubject relationships directly, we perform agglomerative hierarchical linkage clustering based on correlation in fractional occupancy \(\kappa\) using Ward’s method (Ward 1963).
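The permutation test can be sketched as follows (an undirected toy example with our own helper functions; the study uses the directed modularity of Nicosia et al. and Leicht and Newman):

```python
import numpy as np

def modularity(W, labels, gamma=1.0):
    """Newman-Girvan modularity for a weighted graph: observed minus
    expected within-community weight, normalised by total weight."""
    m = W.sum()
    k = W.sum(axis=1)
    Q = 0.0
    n = len(labels)
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                Q += W[i, j] - gamma * k[i] * k[j] / m
    return Q / m

def permutation_pvalue(W, labels, n_perm=1000, seed=0):
    """Compare observed modularity to scores under random label shuffles."""
    rng = np.random.default_rng(seed)
    q_obs = modularity(W, labels)
    null = [modularity(W, rng.permutation(labels)) for _ in range(n_perm)]
    return np.mean([q >= q_obs for q in null])

# Two dense 6-node blocks with weak cross-block edges: the planted
# partition should be significantly more modular than shuffled labels.
W = np.full((12, 12), 0.05)
W[:6, :6] = W[6:, 6:] = 1.0
np.fill_diagonal(W, 0.0)
labels = np.array([0] * 6 + [1] * 6)
p = permutation_pvalue(W, labels, n_perm=200)
assert p < 0.05
```

The observed score is significant when few shuffled labelings reach it, mirroring the 10,000-permutation empirical distribution described above.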
Community hub selection
State hubs are the states most central to the dynamics of the model and facilitate the switching dynamics within each community. These are selected by maximising the community centrality z-score, z(s), for each community \(U\subset {\mathcal {S}}\) (Guimera and Amaral 2005; Shine et al. 2016). This score measures the within-community degree centrality of a node relative to the mean community connectivity (see Supplementary Information 4). Hubs are then analysed for their community structure, using the same Louvain algorithm as in “Louvain and hierarchical temporal clustering” section but this time on the directed brain state graph G(s).
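A sketch of the within-community z-score computation (the Guimerà-Amaral statistic; the function names are ours):

```python
import numpy as np

def within_community_zscores(W, labels):
    """Guimera-Amaral within-module degree z-score: each node's
    within-community strength, standardised by its community's mean
    and standard deviation of within-community strength."""
    z = np.zeros(len(labels))
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = W[np.ix_(idx, idx)].sum(axis=1)   # within-community strength
        sd = k.std()
        z[idx] = (k - k.mean()) / sd if sd > 0 else 0.0
    return z

# Node 0 is connected to every other member of its community, so it
# scores highest and would be selected as the community hub.
W = np.zeros((5, 5))
W[0, 1:4] = W[1:4, 0] = 1.0
labels = np.array([0, 0, 0, 0, 1])
z = within_community_zscores(W, labels)
assert z.argmax() == 0
```

In the temporal graph the nodes are states and W is the transition graph, so the arg max of z within each temporal community gives that community's hub state.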
Identifying functionally important spatial communities
Not all detected communities are as relevant to a state’s functional role as others. Performance of these roles requires both functional activation and coordination of brain regions. To discern which communities are the most functionally cohesive, we rank communities by comparing to samples of regional activity from the full multiplex graph model (see Fig. 2C, D). We used random walks to sample plausible patterns of functional network activity and employ them as a benchmark against which to measure the level of coordination within spatial communities. Controlling for the local level of background activity in space and time allows for a more representative indication of functional cohesion within brain networks identified by community detection than naive comparison of communities by community mean functional activity.
We introduce to neuroimaging the Functional Homogeneity, FH, as our community coherence measure: a statistic derived from the mean activity \(\mu (s)\) and covariance \(\Sigma (s)\) that is high when the community mean activity is most in agreement with the directions of maximum community functional connectivity and low otherwise. It is a measure of the alignment between the two key features of spatial communities, their level of shared information and activation. This measure is well suited to neuroimaging data, and is well established in computer vision and image classification, where it is known as the covariance metric and measures the agreement between and within image classes (Li et al. 2019). The FH for a community C in a state s is
$$\begin{aligned} FH(s,C) = \mu (s)^{C\top }\,\Sigma (s)^{C}\,\mu (s)^{C} \end{aligned}$$
where the superscript C refers to the submatrix given by removal of all rows and columns not corresponding to regions in community C. This metric is key to the community ranking procedure, which follows a six step process:

1. Given a community \(C\subset V\) in state G(s), calculate FH(s, C).

2. Sample a state \(s'\) from the stationary distribution \(\pi\).

3. Select a region \(x\in C\) and sample \(|C|\) nodes from \(G(s')\) by a random walk starting at \(x\in V\) in \(G(s')\).

4. Repeat steps 2 and 3 to construct a representative sample of paired states and brain regions \((s_1,C_1),(s_2,C_2), \ldots ,(s_L,C_L)\).

5. Calculate the T-score for functional cohesiveness of a subgraph
$$\begin{aligned} T(s,C) = \frac{1}{L}\sum \limits _{l=1}^L I[FH(s,C)>FH(s_l,C_l)] \end{aligned}$$
where I is the standard indicator function, and rank the communities in s by decreasing T-score.

6. Determine whether the community represents a correlated or anticorrelated brain subgraph by the sign of \(E_C[\mu (s)]=\sum _{x\in C} \mu (s)^x\).
The T-scores of all the communities in a specific state can then be used to order the communities in terms of which are most likely to contribute to the functional cohesion of the state. Note that T(s, C) is a score between zero and one, with one implying that the community C is much more functionally cohesive than other comparable brain subgraphs in space and time. T-scores are not designed to be compared across states. These steps are summarised by steps E to F in Fig. 2.
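The six-step ranking procedure can be sketched end to end as follows (a toy example with synthetic states; the FH formula shown is one plausible form consistent with the description above, and all helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def functional_homogeneity(mu, cov, C):
    """Alignment of community mean activity with its covariance
    structure (one plausible form of FH; the paper's exact formula
    may differ)."""
    m = mu[C]
    return float(m @ cov[np.ix_(C, C)] @ m)

def t_score(s, C, states, pi, walk_len, n_samples=500):
    """Spatiotemporal random-walk T-score: the fraction of sampled
    (state, community) pairs with lower FH than the candidate."""
    mu_s, cov_s, _ = states[s]
    fh = functional_homogeneity(mu_s, cov_s, C)
    count = 0
    for _ in range(n_samples):
        sp = rng.choice(len(states), p=pi)      # step 2: temporal sample
        mu_p, cov_p, W_p = states[sp]
        node = rng.choice(C)                    # step 3: spatial walk
        sample = [node]
        while len(sample) < walk_len:
            node = rng.choice(len(mu_p), p=W_p[node])
            if node not in sample:
                sample.append(node)
        count += fh > functional_homogeneity(mu_p, cov_p, np.array(sample))
    return count / n_samples                    # step 5: T-score

# Two toy states over 6 regions with row-stochastic spatial walk matrices.
def toy_state(scale):
    B = rng.standard_normal((6, 6))
    cov = B @ B.T
    W = np.abs(cov).copy()
    np.fill_diagonal(W, 0.0)
    W /= W.sum(axis=1, keepdims=True)
    return scale * rng.standard_normal(6), cov, W

states = [toy_state(2.0), toy_state(0.5)]
score = t_score(0, np.array([0, 1, 2]), states,
                pi=np.array([0.5, 0.5]), walk_len=3)
assert 0.0 <= score <= 1.0
```

Step 6 (the sign of the community mean activity) is then just `np.sign(mu_s[C].sum())`, classifying the community as coactivated or inactivated.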
Analysis of states and communities with NeuroSynth
NeuroSynth is a meta-analysis tool that takes in 3D images of brain activity (termed functional activity maps) in MNI152 standard space and returns a scored association (based on the Pearson correlation) between the activity maps and other images from published articles that directly reference a given term i (Yarkoni et al. 2011). We choose the six terms most clearly associated with resting state activity: default mode, salience and executive (the resting state network terms) and sensorimotor, auditory and visual (the sensory network terms). We used these to characterise the mean activity of a given state s by projecting the activity pattern \(\mu (s)\) back into 3D brain standard space (see Supplementary Figure S2A) and inputting the resulting map into NeuroSynth.
The resulting score for a state s and term i is denoted \(\theta _{i,s}\in [-1,1]\), with 1 indicating perfect correlation between the state’s mean functional activity map and i, and \(-1\) indicating perfectly anticorrelated activity. We note that these terms, while chosen to relate to known resting state patterns, are not equivalent to those networks and should be thought of as suggestive of a global pattern of activity (or its absence). We explore the activity of actual networks in our spatial community analysis (see “Community rankings reveal spatiotemporal modules of functional activity” section).
We propose that the global score \(\theta\) can also be considered a dynamically changing property of the system. Given a score \(\theta _{i,s}\) for a term i and state s, the one step ahead predicted score is
$$\begin{aligned} {\hat{\theta }}_{i,s} = \sum \limits _{s'\in {\mathcal {S}}} P_{s,s'}\,\theta _{i,s'}, \end{aligned}$$the expectation of the term score over the one step transition distribution from s.
We use this predicted score to examine the global properties of the activity observed after reaching a given state.
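Taking the one step ahead predicted score to be the expectation of \(\theta _i\) over the one step transition distribution from s (our reading of the text above), the prediction is a single matrix-vector product:

```python
import numpy as np

def predicted_scores(P, theta):
    """One-step-ahead NeuroSynth scores: the expected score after a
    single transition, theta_hat[s] = sum_s' P[s, s'] * theta[s']."""
    return P @ theta

# Toy 2-state chain: state 0 usually stays put, state 1 usually leaves.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
theta = np.array([1.0, -1.0])      # per-state term scores in [-1, 1]
print(predicted_scores(P, theta))  # -> [ 0.8 -0.2]
```

The predicted score is pulled towards the scores of the states most likely to follow, which is what makes it a dynamic rather than static property.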
NeuroSynth can also be used in conjunction with the newly developed package NiMARE to directly calculate the posterior probability of terms from a large corpus of neuroimaging journal abstracts and images, given a selection of brain voxels in standard space (NiMARE 2019). Due to the variability in brain region size, regions selected by community membership are downsampled by selecting 10,000 voxels with replacement from each community, which was found to produce posterior probabilities stable to the third decimal place.
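The fixed-size resampling step can be sketched as follows; `downsample_voxels` is our hypothetical name for it, not a NiMARE function.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample_voxels(voxel_ids, n=10_000):
    """Resample a community's voxels with replacement to a fixed size so
    that meta-analytic posterior probabilities are comparable across
    communities of very different sizes."""
    return rng.choice(voxel_ids, size=n, replace=True)
```

Sampling with replacement lets small communities reach the same nominal size as large ones, at the cost of repeated voxels.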
We use NeuroSynth with NiMARE to determine a plausible function for each of our spatial brain region communities, selecting only those terms that are most a posteriori probable and that have a functional rather than anatomical interpretation (see Supplementary Figure S2B). We pass each community from each hub state through our spatiotemporal community ranking method, resulting in a ranked list of communities of brain regions per state, and then pass each top ranked community through the NiMARE/NeuroSynth method to determine its most likely functional term associations. In order to be comparable with the global score \(\theta\), the NeuroSynth score is either a positive or negative association depending on the mean activity of the regions, as suggested in “Identifying functionally important spatial communities” section.
Validation of model framework
A detailed validation of key features of the modelling and analysis framework was carried out using synthetic data (see Additional file 1: Section 5). This includes validation of the dimensionality reduction method, as a means to reduce the computational demand of modelling while retaining community structure, using the Adjusted Rand Index (ARI) (Rand 1971). Validation is also performed for the Markov Information Graph-based community detection and model selection procedures. Other key components of the model, such as the HMM inference procedure, have already been validated with detailed simulations on synthetic data (Vidaurre et al. 2016, 2018).
Not all components of the modelling and analysis framework could be validated by simulation as it was considered beyond the scope of this document to generate realistic synthetic community functional homogeneity and NeuroSynth scores. The community importance ranking procedure is instead validated using real annotation metadata and the NeuroSynth tool.
Results
Results for our multisubject HMM model training and multiplex graph model analysis are given below.
Dimensionality reduction
We select the appropriate number of principal components using the method of parallel analysis outlined in Additional file 1: Section 1. This resulted in a reduced set of \(d=9\) dimensions that account for roughly \(75\%\) of the total variance, which are then used in fitting the model. Validation of this approach using synthetic data is explored in Additional file 1: Sections 5.1 and 5.2.
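A generic sketch of Horn-style parallel analysis (the criterion named above; the authors' exact implementation is in their Additional file 1): retain the components whose observed correlation-matrix eigenvalues exceed those obtained from independent noise of the same shape.

```python
import numpy as np

def parallel_analysis(X, n_iter=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: count components whose correlation-matrix
    eigenvalues exceed the chosen quantile of eigenvalues from independent
    Gaussian data of the same (n, p) shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    null = np.empty((n_iter, p))
    for i in range(n_iter):
        R = rng.standard_normal((n, p))
        null[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
    thresh = np.quantile(null, quantile, axis=0)
    return int(np.sum(obs > thresh))
```

On data with two strong latent factors this returns 2, while pure noise returns close to 0, which is the behaviour the dimension selection relies on.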
Entropy relates to model selection
Applying our cross-validated maximum entropy Hidden Markov Model selection criterion, maximising the cross-validated entropy H(k), we obtain an HMM with \(K=33\) initial states. Figure 3 shows that the entropy maximum also coincides with the maximum of the cross-validated Bayesian log-likelihood, which is a general indicator of model fit. To further reduce the risk of overfitting, we exclude those states that occur in fewer than 25\(\%\) of subjects and renormalise P so that the rows again sum to one. The resulting model has a total of \(K=27\) brain states.
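As a hedged sketch of this selection loop: if the cross-validated entropy is instantiated as the entropy rate of the fitted transition matrix (one natural choice; the authors' exact H(k) is defined in their methods), selecting K is an argmax over candidates. `fit_hmm` is a hypothetical stand-in for the HMM inference procedure.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate of a Markov chain: H = -sum_s pi_s sum_s' P[s,s'] log P[s,s'],
    with pi the stationary distribution (left eigenvector of P for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    logP = np.where(P > 0, np.log(P, where=P > 0), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

def select_K(candidates, fit_hmm, heldout):
    """Pick the number of states K maximising the held-out entropy;
    fit_hmm(K, data) is assumed to return a fitted transition matrix."""
    scores = {K: entropy_rate(fit_hmm(K, heldout)) for K in candidates}
    return max(scores, key=scores.get)
```

For a uniform K-state chain the entropy rate is log K, so in this degenerate case the largest candidate always wins; on real data the cross-validated curve peaks at an interior K, as in Fig. 3.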
Network dynamics indicate clustering of activity patterns in space and time
Table 1 shows that states positively correlated with resting state activity terms are significantly more likely to transition to states with similar associations and vice versa (see Supplementary Figure S3 for linear model comparison). In contrast, states correlated with resting state terms tended to transition to states that are negatively correlated with the sensory terms. This suggests that states associated with the former resting state networks tend to cooccur to the exclusion of sensory and sensorimotor patterns of activity. These results indicate a spatiotemporal separation between resting state network activity and sensory activity.
States with high scores for sensory activity terms show a far weaker positive affinity for transitioning to each other than do the resting state network terms. This suggests that concurrent activity in space and time is most likely between states with high resting state network activity; this pattern of concurrent activity is only weakly suggested for sensory modes of activity. In contrast, robust mutually antagonistic spatiotemporal relationships between sensory and resting state network associations are present. We shall see in “Community rankings reveal spatiotemporal modules of functional activity” section that this pattern of mutual exclusivity is mirrored by the most central states in the network, or hub states, at both the global (functional activity map) and the local (network community) levels. States show a general trend of transitioning from a state with one global activity association to another state that scores highly for the same association, suggesting some level of brain state inertia in the global pattern of functional activity.
Evidence for metastate structure in wakefulness
In order to demonstrate the presence of temporal community structure, we performed hierarchical linkage clustering using the correlation in \(\kappa\) between subjects and states. We also calculated the normalised degree of symmetry in P, \(\text {Sym}(P)=0.9921\), indicating a high degree of symmetry (with \(\text {Sym}(P)=1\) when P is completely symmetric). Figure 4A suggests a temporally clustered pattern of state fractional occupancy in which certain states are more likely to co-occur in one subset of subjects than in the other. Figure 4B shows the transition probability matrix P organised into communities by Louvain community detection, where \(\gamma =0.48\) (as selected by Variation of Information minimisation) (Lambiotte et al. 2008). Temporal communities indicate modules of clustered state transitions. This temporal community partition was tested for robustness by comparing the Q modularity statistic to 10,000 random partitions with the same community labels (\(p=10^{-4}\)).
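Both checks can be sketched directly. The normalisation behind Sym(P) is our assumption (symmetry as one minus the asymmetric fraction of matrix energy), and `Q_fn` stands in for the directed modularity used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym(P):
    """Normalised symmetry of P: equals 1 when P == P.T. One plausible
    definition, measured as a fraction of matrix energy (assumption)."""
    return 1.0 - np.linalg.norm(P - P.T) / (2.0 * np.linalg.norm(P))

def partition_pvalue(Q_fn, P, labels, n_perm=10_000):
    """Permutation test for a community partition: the fraction of random
    relabellings (same label multiset) whose modularity matches or exceeds
    the observed Q, with the usual +1 correction."""
    q_obs = Q_fn(P, labels)
    q_null = np.array([Q_fn(P, rng.permutation(labels)) for _ in range(n_perm)])
    return (1 + np.sum(q_null >= q_obs)) / (n_perm + 1)
```

Shuffling the labels preserves community sizes, so a small p-value means the specific assignment of states to communities, not just the partition shape, carries the modular signal.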
Each community, \(U\subset {\mathcal {S}}\), is characterised by a hub state h(U), the state with the highest community degree z-score, a measure of state centrality to the temporal network (see Supplementary Figure S1). Figure 4C shows the long run probability \(\pi _s\) of state s reoccurrence. Reoccurrence and centrality to a community appear to be strongly correlated, as states more central to their communities according to the z-score, z(s), also tended to have a higher stationary probability \(\pi _s\), with correlation coefficient \(\rho =0.537\) (\(p=0.004\)). This observation suggests that, as mediators of network dynamics, community hub states tend to reoccur, playing a central role in the overall network dynamics as well as in their own community.
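The hub selection can be sketched as a within-community degree z-score in the style of Guimerà and Amaral, computed here on total in-plus-out transition strength. The exact centrality definition is in the paper's Supplementary Figure S1; this is our simplified reading.

```python
import numpy as np

def hub_states(P, labels):
    """Hub of each temporal community: the state with the largest
    within-community degree z-score on the state transition graph."""
    k = P.sum(axis=0) + P.sum(axis=1)   # in-strength + out-strength
    z = np.zeros(len(k))
    for U in np.unique(labels):
        idx = np.flatnonzero(labels == U)
        mu, sd = k[idx].mean(), k[idx].std()
        z[idx] = (k[idx] - mu) / sd if sd > 0 else 0.0
    return {int(U): int(np.flatnonzero(labels == U)[np.argmax(z[labels == U])])
            for U in np.unique(labels)}
```

Because the z-score is computed within each community, hubs are comparable across communities of different sizes and total strengths.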
Community rankings reveal spatiotemporal modules of functional activity
Louvain community detection was performed on each of the community hub state graphs G(h(U)) for each community U in partition \({\mathcal {U}}\). We assessed the degree of symmetry in the Markov Information graph of each hub state and found that \(\textit{Sym}(W(h(U)))>0.99\) for all communities U. Here, the Variation of Information was not used to select \(\gamma\), as differing recommended \(\gamma\) between hubs was found to produce communities of inconsistent and incomparable sizes; we thus set the resolution to \(\gamma =2\) for all hub states. This was found to produce median spatial community sizes that were sufficiently small on average (roughly 4 regions per community) for our community ranking method to efficiently sample the graph, while also being large enough to detect functionally conserved brain state networks.
We perform NeuroSynth analysis by taking the mean functional activity maps generated for the hub states as input, in combination with the resting state network terms (default mode, salience, executive) and the sensory network terms (sensorimotor, auditory and visual) (see Supplementary Figure S2A for an algorithmic explanation). The results in Table 2 suggest a separation between sensory and resting state activity in space and time, with hub states scoring highly for either resting state or sensory terms but rarely both. Table 2 gives the highest ranked functional terms (filtering out purely anatomical terms) in each hub state for the top three ranked spatial network communities (using our ranking method). The top terms for the networks (communities) in the states largely coincide with the functional associations ascribed to each of the hub states themselves.
Exploring these relationships, we see that in some cases the connections between spatial community function and hubs are direct. State 23 shows a positive association with observation and action in dominant spatial communities and a strong association with all three sensory network terms. State 11 shows a clear association with auditory activity as well as a top ranked community association with the term voice. In state 15, which shows a strong correlation with \(\textit{visual}\) activity, the top ranked communities include positive associations with the face (a common object of visual processing).
In some states we see both strong positive and negative associations. Global negative associations are difficult to interpret in isolation as evidence of anticorrelated network behaviour within a state; however, when paired with mesoscale information from the top ranked communities, a stronger case is possible. State 32 appears mixed in activity but shows strong to moderate negative correlations with visual and auditory processing. The latter of these is corroborated by the anticorrelated speech network. State 30 is another state with mixed associations based purely on global functional activity; however, we see both a moderate global negative correlation with visual activity and a specific negatively correlated community related to visual tasks or processing, suggesting a visual down state. A similar explanation can be used for state 5. State 23 is a sensory associated state with sensory associations at both the global and network scales. State 23 is negatively correlated with default mode activity. The default mode network is involved in language comprehension and reasoning, explaining the anticorrelated network associated with syntactic processing. Negatively associated communities may more generally suggest decreased metabolic or functional demand for these networks, leading to a coordinated down state.
Discussion
In this paper we present a fully unsupervised pipeline for characterising the spatiotemporal activity of neuronal brain states in terms of a multiplex brain state graph model. This pipeline involves training an HMM in order to obtain a multiplex spatiotemporal directed brain state graph that represents the dynamics of subjects in resting wakefulness. We present a method for obtaining a set of states (layers) that generalises well over subjects and use this method to determine key states in the network dynamics. Lastly, we identify the spatiotemporal components of the model that are most central and most functionally coherent, characterising them using metatextual image analysis of the neuroscience literature.
Our HMGM-based methodology reveals a rich array of complementary communities acting together to produce modes of neural behaviour during resting wakefulness. Crucially, we have shown that patterns of activity resembling the resting state networks tend to co-occur and that these patterns tend to preclude sensory and sensorimotor patterns of activity. This modularity of brain state function has been suggested by others (Smallwood et al. 2012; Vidaurre et al. 2017), but meta-analysis of terms associated with these functions allows us to characterise individual states and quantify their change in character through time.
Within each hub brain state the division between functions was not clearly partitioned, with memory or autobiographical associations featuring in communities across many states, possibly suggesting an undercurrent of narrative thought that persists across numerous states. Alternatively, this may be due to artefacts caused by auditory memory-related task studies in the NeuroSynth database. It is important to note that spatiotemporal state-based activity analysis is novel, and so terms in the literature that derive from static models of activity may not map accurately onto dynamic patterns of activity. In particular, transient states may be smoothed out of these analyses, meaning that new studies focusing on dynamic functional activity change at much shorter time scales will be needed to build up an understanding of function in dynamic brain states.
Some of the state global functional activity term associations, particularly negative ones, remain difficult to interpret. In state 13, there is a strong association with the term sensorimotor; however, all of the top ranked communities for this state are negatively associated with functions that may have a closer association with resting state activity. This could be due to a putative link between central executive activity and reward observed in primates (Sigmund et al. 2001), but may also be due to ranking error or noise in our graph model. However, the roles of many states become clearer when combining functional information from either anticorrelated or correlated mesoscale communities with global tendencies in functional activity. We hypothesise that strongly cohesive anticorrelated networks may be entering a coordinated down state due to changes in metabolic or functional demand (Tomasi et al. 2017; Passow et al. 2015; Thompson 2018).
One issue with our approach is that the Louvain implementation we use with directed modularity does not fully capture the signal of edge directionality in community detection (see Additional file 1: Section 2). This problem may be partially mitigated by the fact that we found the edge weights in question not to be highly asymmetric when measured as a fraction of matrix energy. However, community detection methods that more directly account for directed edges, such as InfoMap (Rosvall et al. 2009), or for the Markov structure of the model, such as Jin et al. (2011), may identify other forms of community structure in our graph models that are worth investigating. In particular, we intend to investigate more general implementations of the Louvain algorithm that are optimised for directed networks (Li et al. 2018; Dugué and Perez 2015).
Presently, our framework also does not take full advantage of the multiplex graph structure of the model, for example through multilayer community detection, which can be complex to parametrise (Hanteer and Magnani 2020). However, a potential advantage of the HMGM framework is that it provides a way to ground the interlayer coupling parameters used in some multilayer community detection in a natural property of the model: the probability of state transition. In future work we intend to investigate multilayer community detection approaches to look at dynamic changes in network membership, using coupling parameters based on the transition probabilities between state layers.
We plan to apply our multiplex analysis framework to conditions of altered consciousness in deep anaesthesia and determine novel spatiotemporal networks that characterise this condition with comparison to our current graph model for resting wakefulness. In this way we hope to elucidate the complex network dynamics underlying conscious brain activity (Huang et al. 2021).
Availability of data and materials
Data, community ranking, and model selection code is available from the authors upon request.
Abbreviations
BOLD: Blood Oxygen Level Dependent signal
CEN: Central Executive Network
DMN: Default mode network
fMRI: functional MRI (Magnetic Resonance Imaging)
FO: Fractional occupancy
PCA: Principal Component Analysis
HMM: Hidden Markov model
ROI: Region of interest
SM: Sensorimotor
SN: Salience network
References
Aggarwal CC (2020) Linear algebra and optimization for machine learning. Springer
Bassett DS, Sporns O (2017) Network neuroscience. Nat Neurosci 20(3):353–364
Biswal B, Zerrin Yetkin F, Haughton VM, Hyde JS (1995) Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magn Reson Med 34(4):537–541
Blondel VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):10008
Brown R (2006) What is a brain state? Philos Psychol 19(6):729–742
Calhoun VD, Adali T (2012) Multisubject independent component analysis of fMRI: a decade of intrinsic networks, default mode, and neurodiagnostic discovery. IEEE Rev Biomed Eng 5:60–73
Calhoun VD, Adalı T, Pekar JJ (2004) A method for comparing group fMRI data using independent component analysis: application to visual, motor and visuomotor tasks. Magn Reson Imaging 22(9):1181–1191
Caviness VS, Meyer J, Makris N, Kennedy DN (1996) MRI-based topographic parcellation of human neocortex: an anatomically specified method with estimate of reliability. J Cogn Neurosci 8(6):566–587
Chen S, Langley J, Chen X, Hu X (2016) Spatiotemporal modeling of brain dynamics using resting-state functional magnetic resonance imaging with Gaussian hidden Markov model. Brain Connect 6(4):326–334
de Vos F, Koini M, Schouten TM, Seiler S, van der Grond J, Lechner A, Schmidt R, de Rooij M, Rombouts SA (2018) A comprehensive analysis of resting state fMRI measures to classify individual patients with Alzheimer’s disease. NeuroImage 167:62–72
Dugué N, Perez A (2015) Directed Louvain: maximizing modularity in directed networks. Ph.D. thesis, Université d’Orléans
Dupont P, Callut J, Dooms G, Monette JN, Deville Y, Sainte B (2006) Relevant subgraph extraction from random walks in a graph. Universite Catholique de Louvain, UCL/INGI, Number RR, vol 7
Fonov V, Evans AC, Botteron K, Almli CR, McKinstry RC, Collins DL, Group BDC et al (2011) Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54(1):313–327
Girvan M, Newman ME (2002) Community structure in social and biological networks. Proc Natl Acad Sci 99(12):7821–7826
Guimera R, Amaral LAN (2005) Functional cartography of complex metabolic networks. Nature 433(7028):895–900
Hanteer O, Magnani M (2020) Unspoken assumptions in multilayer modularity maximization. Sci Rep 10(1):1–15
Hindriks R, Adhikari MH, Murayama Y, Ganzetti M, Mantini D, Logothetis NK, Deco G (2016) Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI? NeuroImage 127:242–256
Hipólito I, Ramstead MJ, Convertino L, Bhat A, Friston K, Parr T (2021) Markov blankets in the brain. Neurosci Biobehav Rev 125:88–97
Horn JL (1965) A rationale and test for the number of factors in factor analysis. Psychometrika 30(2):179–185
Huang X, Chen D, Ren T, Wang D (2021) A survey of community detection methods in multilayer networks. Data Min Knowl Discov 35(1):1–45
Humphreys LG, Montanelli RG Jr (1975) An investigation of the parallel analysis criterion for determining the number of common factors. Multivar Behav Res 10(2):193–205
Jalili M (2016) Functional brain networks: does the choice of dependency estimator and binarization method matter? Sci Rep 6(1):1–12
Jaynes ET (1957) Information theory and statistical mechanics. Phys Rev 106(4):620
Jin D, Liu D, Yang B, Liu J, He D (2011) Ant colony optimization with a new random walk model for community detection in complex networks. Adv Complex Syst 14(05):795–815
Karahanoğlu FI, Van De Ville D (2017) Dynamics of large-scale fMRI networks: deconstruct brain activity to build better models of brain function. Curr Opin Biomed Eng 3:28–36
Kirchhoff M, Parr T, Palacios E, Friston K, Kiverstein J (2018) The Markov blankets of life: autonomy, active inference and the free energy principle. J R Soc Interface 15(138):20170792
Kokkonen SM, Nikkinen J, Remes J, Kantola J, Starck T, Haapea M, Tuominen J, Tervonen O, Kiviniemi V (2009) Preoperative localization of the sensorimotor area using independent component analysis of resting-state fMRI. Magn Reson Imaging 27(6):733–740
Lambiotte R, Delvenne JC, Barahona M (2008) Laplacian dynamics and multiscale modular structure in networks. arXiv:0812.1770
Leicht EA, Newman ME (2008) Community structure in directed networks. Phys Rev Lett 100(11):118703
Leskovec J, Faloutsos C (2006) Sampling from large graphs. In: Proceedings of the ACM SIGKDD, vol 12, pp 631–636
Li L, He X, Yan G (2018) Improved Louvain method for directed networks. In: International conference on intelligent information processing. Springer, pp 192–203
Li W, Xu J, Huo J, Wang L, Gao Y, Luo J (2019) Distribution consistency based covariance metric networks for few-shot learning. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 8642–8649
Liu F, Choi D, Xie L, Roeder K (2018) Global spectral clustering in dynamic networks. Proc Natl Acad Sci 115(5):927–932
Luecken MD, Page MJ, Crosby AJ, Mason S, Reinert G, Deane CM (2018) CommWalker: correctly evaluating modules in molecular networks in light of annotation bias. Bioinformatics 34(6):994–1000
Ma H, Leng S, Aihara K, Lin W, Chen L (2018) Randomly distributed embedding making short-term high-dimensional data predictable. Proc Natl Acad Sci 115(43):9994–10002
Makris N, Goldstein JM, Kennedy D, Hodge SM, Caviness VS, Faraone SV, Tsuang MT, Seidman LJ (2006) Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophr Res 83(2–3):155–171
Martinet LE, Kramer M, Viles W, Perkins L, Spencer E, Chu C, Cash S, Kolaczyk E (2020) Robust dynamic community detection with applications to human brain functional networks. Nat Commun 11(1):1–13
Menon V (2011) Largescale brain networks and psychopathology: a unifying triple network model. Trends Cogn Sci 15(10):483–506
Mhuircheartaigh RN, Warnaby C, Rogers R, Jbabdi S, Tracey I (2013) Slow-wave activity saturation and thalamocortical isolation during propofol anesthesia in humans. Sci Transl Med 5(208):208ra148
Michael AM, Anderson M, Miller RL, Adalı T, Calhoun VD (2014) Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA. Front Syst Neurosci 8:106
Mitra A, Snyder AZ, Tagliazucchi E, Laufs H, Elison J, Emerson RW, Shen MD, Wolff JJ, Botteron KN, Dager S et al (2017) Resting-state fMRI in sleeping infants more closely resembles adult sleep than adult wakefulness. PLoS ONE 12(11):e0188122
Newman ME (2006) Modularity and community structure in networks. Proc Natl Acad Sci 103(23):8577–8582
Nicosia V, Mangioni G, Carchiolo V, Malgeri M (2009) Extending the definition of modularity to directed graphs with overlapping communities. J Stat Mech Theory Exp 2009(03):03024
NiMARE (2019) https://nimare.readthedocs.io/en/latest/about.html. Accessed 10 Sept 2021
Papo D (2019) Gauging functional brain activity: from distinguishability to accessibility. Front Physiol 10:509
Passow S, Specht K, Adamsen TC, Biermann M, Brekke N, Craven AR, Ersland L, Grüner R, KlevenMadsen N, Kvernenes OH et al (2015) Defaultmode network functional connectivity is closely related to metabolic activity. Hum Brain Mapp 36(6):2027–2038
Poldrack RA, Kittur A, Kalar D, Miller E, Seppa C, Gil Y, Parker DS, Sabb FW, Bilder RM (2011) The cognitive atlas: toward a knowledge foundation for cognitive neuroscience. Front Neuroinform 5:17
Pullon RM, Yan L, Sleigh JW, Warnaby CE (2020) Granger causality of the electroencephalogram reveals abrupt global loss of cortical information flow during propofol-induced loss of responsiveness. Anesthesiology 133(4):774–786
Ran Q, Jamoulle T, Schaeverbeke J, Meersmans K, Vandenberghe R, Dupont P (2020) Reproducibility of graph measures at the subject level using resting-state fMRI. Brain Behav 10(8):2336–2351
Rand WM (1971) Objective criteria for the evaluation of clustering methods. J Am Stat Assoc 66(336):846–850
Reichardt J, Bornholdt S (2006) Statistical mechanics of community detection. Phys Rev E 74(1):016110
Rosazza C, Minati L (2011) Resting-state brain networks: literature review and clinical applications. Neurol Sci 32(5):773–785
Rosvall M, Bergstrom CT (2008) Maps of random walks on complex networks reveal community structure. Proc Natl Acad Sci 105(4):1118–1123
Rosvall M, Axelsson D, Bergstrom CT (2009) The map equation. Eur Phys J Special Top 178(1):13–23
Ryali S, Supekar K, Chen T, Kochalka J, Cai W, Nicholas J, Padmanabhan A, Menon V (2016) Temporal dynamics and developmental maturation of salience, default and central-executive network interactions revealed by variational Bayes hidden Markov modeling. PLoS Comput Biol 12(12):e1005138
Sämann PG, Wehrle R, Hoehn D, Spoormaker VI, Peters H, Tully C, Holsboer F, Czisch M (2011) Development of the brain’s default mode network from wakefulness to slow wave sleep. Cerebral Cortex 21(9):2082–2093
Shen X, Meyer FG (2008) Lowdimensional embedding of fMRI datasets. NeuroImage 41(3):886–902
Shine JM, Bissett PG, Bell PT, Koyejo O, Balsters JH, Gorgolewski KJ, Moodie CA, Poldrack RA (2016) The dynamics of functional brain networks: integrated network states during cognitive task performance. Neuron 92(2):544–554
Shulman GL, Fiez JA, Corbetta M, Buckner RL, Miezin FM, Raichle ME (1997) Common blood flow changes across visual tasks: II. Decreases in cerebral cortex. J Cogn Neurosci 9:648–663
Sigmund K, Hauert C, Nowak MA (2001) Reward and punishment. Proc Natl Acad Sci 98(19):10757–10762
Smallwood J, Brown K, Baird B, Schooler JW (2012) Cooperation between the default mode network and the frontal-parietal network in the production of an internal train of thought. Brain Res 1428:60–70
Smith SM, Fox PT, Miller KL, Glahn DC, Fox PM, Mackay CE, Filippini N, Watkins KE, Toro R, Laird AR et al (2009) Correspondence of the brain’s functional architecture during activation and rest. Proc Natl Acad Sci 106(31):13040–13045
Smith K, Abásolo D, Escudero J (2017) Accounting for the complex hierarchical topology of EEG phase-based functional connectivity in network binarisation. PLoS ONE 12(10):e0186164
Sridharan D, Levitin DJ, Menon V (2008) A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proc Natl Acad Sci 105(34):12569–12574
Stevner A, Vidaurre D, Cabral J, Rapuano K, Nielsen SFV, Tagliazucchi E, Laufs H, Vuust P, Deco G, Woolrich M et al (2019) Discovery of key whole-brain transitions and dynamics during human wakefulness and non-REM sleep. Nat Commun 10(1):1035
Suk HI, Wee CY, Lee SW, Shen D (2016) State-space model with deep learning for functional dynamics estimation in resting-state fMRI. NeuroImage 129:292–307
Thompson GJ (2018) Neural and metabolic basis of dynamic resting state fMRI. NeuroImage 180:448–462
Ting CM, Samdin SB, Tang M, Ombao H (2020) Detecting dynamic community structure in functional brain networks across individuals: a multilayer approach. IEEE Trans Med Imaging 40(2):468–480
Tomasi DG, Shokri-Kojori E, Wiers CE, Kim SW, Demiral ŞB, Cabrera EA, Lindgren E, Miller G, Wang GJ, Volkow ND (2017) Dynamic brain glucose metabolism identifies anticorrelated cortical-cerebellar networks at rest. J Cereb Blood Flow Metab 37(12):3659–3670
Vidaurre D, Quinn AJ, Baker AP, Dupret D, Tejero-Cantero A, Woolrich MW (2016) Spectrally resolved fast transient brain states in electrophysiological data. NeuroImage 126:81–95
Vidaurre D, Smith SM, Woolrich MW (2017) Brain network dynamics are hierarchically organized in time. Proc Natl Acad Sci 114(48):12827–12832
Vidaurre D, Abeysuriya R, Becker R, Quinn AJ, Alfaro-Almagro F, Smith SM, Woolrich MW (2018) Discovering dynamic brain networks from big data in rest and task. NeuroImage 180:646–656
Ward JH Jr (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58(301):236–244
Woolrich MW, Jbabdi S, Patenaude B, Chappell M, Makni S, Behrens T, Beckmann C, Jenkinson M, Smith SM (2009) Bayesian analysis of neuroimaging data in FSL. NeuroImage 45(1):173–186
Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, Wager TD (2011) Largescale automated synthesis of human functional neuroimaging data. Nat Methods 8(8):665–670
Acknowledgements
We would like to thank the reviewers and editors for their helpful suggestions in restructuring and correcting this manuscript. We are grateful to the attendees and organisers of the Communities in Networks conference, where this work was originally presented, for the opportunity to contribute to this Special Issue. We are also grateful to Mark Woolrich, Angus Stevner, and the members of the Oxford Anaesthesia Neuroimaging and Protein Informatics Groups, for their insightful questions and comments.
Funding
JBW is supported by the Commonwealth Scholarship Commission UK, the Ernest Oppenheimer Memorial Trust (South Africa) and the Human Brain Project, Specific Grant Agreement 3 (award reference 945539), CEW is funded by the MRC Development Pathway Funding Scheme (award reference MR/R006423/1), and GDR is partially supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grants EP/R018472/1 and EP/T018445/1. This research is funded in part by the Wellcome Trust (grant 203139/Z/16/Z). For the purpose of open access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. Data collection was funded by the National Institute for Academic Anaesthesia, and the International Anaesthesia Research Society.
Author information
Authors and Affiliations
Contributions
JW prepared the draft manuscript and developed the analysis methods. CW, CD and GR edited the manuscript. CW provided the raw data and interpretation of neuroscientific results. CD and GW supervised the analytical methods development. GR contributed to interpretation of model results. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
The study was approved by the Local Research Ethics Committee (Oxford Research Ethics Committee B, Oxford, UK) and data collection was performed between October and December 2009. The study was performed in line with the Declaration of Helsinki and all subjects gave written informed consent.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Additional file 1.
Supplementary Information.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wilsenach, J.B., Warnaby, C.E., Deane, C.M. et al. Ranking of communities in multiplex spatiotemporal models of brain dynamics. Appl Netw Sci 7, 15 (2022). https://doi.org/10.1007/s41109-022-00454-2
Received:
Accepted:
Published:
DOI: https://doi.org/10.1007/s41109-022-00454-2
Keywords
 Community ranking
 Generative models
 Model selection
 Multiplex networks
 Networks neuroscience
 Spatiotemporal networks