
Ranking of communities in multiplex spatiotemporal models of brain dynamics

Abstract

As a relatively new field, network neuroscience has tended to focus on aggregate behaviours of the brain averaged over many successive experiments or over long recordings in order to construct robust brain models. These models are limited in their ability to explain dynamic state changes in the brain which occur spontaneously as a result of normal brain function. Hidden Markov Models (HMMs) trained on neuroimaging time series data have since arisen as a method to produce dynamical models that are easy to train but can be difficult to fully parametrise or analyse. We propose an interpretation of these neural HMMs as multiplex brain state graph models we term Hidden Markov Graph Models. This interpretation allows for dynamic brain activity to be analysed using the full repertoire of network analysis techniques. Furthermore, we propose a general method for selecting HMM hyperparameters in the absence of external data, based on the principle of maximum entropy, and use this to select the number of layers in the multiplex model. We produce a new tool for determining important communities of brain regions using a spatiotemporal random walk-based procedure that takes advantage of the underlying Markov structure of the model. Our analysis of real multi-subject fMRI data provides new results that corroborate the modular processing hypothesis of the brain at rest and contributes new evidence of functional overlap between and within dynamic brain state communities. Our analysis pipeline provides a way to characterise dynamic network activity of the brain under novel behaviours or conditions.

Introduction

The brain activity of healthy subjects at rest is commonly used as a baseline against which a wide range of both pathological (e.g. dementia) and healthy (e.g. sleep) conditions are compared (de Vos et al. 2018; Mitra et al. 2017; Pullon et al. 2020). Often, activity under one condition is modelled as a single static pattern of activity, ignoring large scale dynamic shifts. However, neuroimaging researchers have begun to recognise that subjects move through a wide array of brain activity configurations even while relaxed or asleep (Vidaurre et al. 2016; Karahanoğlu and Van De Ville 2017; Suk et al. 2016). A brain state is a configuration of brain activity evoked in response to a stimulus or to facilitate more complex responses (Brown 2006). Neuroimaging time series provide a way to observe these reconfigurations as spatial patterns of metabolic or electrophysiological activity, termed functional activity (Papo 2019). In order to generate these patterns, brain regions must coordinate through transfer of information. This exchange between brain regions defines the state’s functional connectivity. Functional activity can therefore be interpreted as a realisation from a brain state graph model which describes brain dynamics and the relationships between brain regions in the state (Bassett and Sporns 2017). This relates to models of the relationship between observed state and environment in which states are realisations of a so-called Markov blanket taking input from the environment to create an internal model of the external and internal environment (Hipólito et al. 2021; Kirchhoff et al. 2018). In these graph models, nodes are anatomically or functionally defined brain regions and edge strength is determined by the level of information shared between these regions (their functional connectivity).

The dynamics of communities of brain regions are of particular interest due to the important functional roles some communities play. Previous work has focused on deriving communities of brain regions using a number of methods, including dynamic community detection (Martinet et al. 2020). State space models have also been proposed that focus on the changing community structure within brain states from inferred functional connectivity (Ting et al. 2020; Liu et al. 2018). Our novel framework uses a Hidden Markov Model (HMM) approach to construct a model we term a Hidden Markov Graph Model (HMGM). This framework is fully unsupervised, requiring no sliding window-based estimation or thresholding of the functional connectivity, and no prior assumptions about the number of states or embedding dimension.

We analyse brain state dynamics as a multiplex graph with modular (community) structure at both the temporal (state switching dynamics) and spatial (brain region communication) levels. In order to distinguish the functionally important spatial communities from the temporal communities of states and from less functionally relevant spatial communities, we use the term network. Network here is used exclusively to refer to modular subgraphs of coordinated brain regions within a state that are functionally important (rather than being synonymous with the term graph). These brain networks form the basis of our understanding of the functional connectivity pathways within the brain and are integral to our understanding of the role of changing brain configurations in wakefulness and beyond (Rosazza and Minati 2011).

We have developed a method based on the HMGM framework to identify the importance of possible brain networks using random walks to ascribe to each module in each state an importance or T-score based on their functional connectivity and co-activation. Notably, the method does not apply random walk information to partition the graph but rather to determine the relative importance of communities within a partition (Rosvall and Bergstrom 2008). Our method provides a means to characterise dynamic functional activity under novel conditions or behaviours. As a proof of principle, we apply our pipeline to neuroimaging data from subjects at rest and provide new evidence for both modular and nested functional activity in the awake brain.

Static brain state models

In the simplest brain state models (see Fig. 1A), functional activity arises as noisy realisations of a single static brain state. Considerable progress has been made using this static framework to characterise the vast repertoire of activity patterns observed during wakefulness. Models using both weighted and unweighted graph structures derived from Independent and Principal Component Analysis (ICA and PCA respectively) have revealed key modules within the brain across a wide range of conditions. These include both behavioural and task-based conditions (sensory, motor etc.) and resting state conditions in the absence of direct stimulation (Calhoun and Adali 2012; Smith et al. 2009; Kokkonen et al. 2009; Sämann et al. 2011; Calhoun et al. 2004). Recent results from both electrophysiological data derived from Electroencephalography (EEG) and Blood Oxygen Dependent (BOLD) data derived from functional MRI (fMRI), suggest that weighted network models produce more reliably reproducible and robust results than do binarised network models (Jalili 2016; Ran et al. 2020; Smith et al. 2017).

Fig. 1

This figure shows how brain activity can be modelled as being generated by a system of either static (A) or dynamic (B) states, and, in particular, how such a dynamic brain state model can be interpreted as a multiplex network with modular structure (C). In A a static pattern (left) of functional activity (colour of functional activity map) and connectivity (edges between regions) is observed (green arrow) as a stationary multi-ROI multi-subject time series (right) in which each dimension is the activity observed for a particular Region of Interest (ROI) in each subject (separated by a dotted line). B Shows state dynamics for a multi-subject system with multiple states (left). In this system each state is represented by a colour and arrow length indicates its duration in time. This is observed as a multivariate time series composed of weakly stationary segments (right). Segment colour indicates the state that generated it. In C we use temporal relationships between states to represent the system as a dynamic multiplex graph. This system is decomposed (purple arrow) into its essential temporal (coloured ellipses) and spatial modules or functional networks (coloured subgraphs). Dashed circles around states show state hubs, important states in each community which are central to the dynamics and facilitation of brain activity across subjects

Studies using static models have helped neuroscientists to build up vast libraries of associations between cognitive functions and specific brain regions (Poldrack et al. 2011). However, the static approach makes it difficult to account for inter-subject variability as well as dynamic changes in state that occur in time as different cognitive and functional demands are placed on the brain (Michael et al. 2014). These demands result in activity in one moment that is often functionally incompatible with activity in the next, driving the need for dynamic approaches to brain state modelling (Sridharan et al. 2008).

Dynamic brain state models

Moving window-based approaches produce a series of snapshots of the activity pattern of the brain. Although these methods have proved incredibly useful in understanding changing brain state, they are limited in their ability to reliably detect changes in functional connectivity between regions over time (Hindriks et al. 2016). By contrast, state space models (Fig. 1B) and in particular Hidden Markov Models (HMMs) (Vidaurre et al. 2017; Chen et al. 2016), have arisen as an alternative to the sliding window approach and use a number of simplifying assumptions to improve on these models’ tractability and specifiability (Suk et al. 2016). More recently, dynamic community detection methods have been proposed which capture many of the same features as dynamic state space models, however these methods often still rely on sliding window approximations of functional connectivity to construct a series of dynamic networks (Martinet et al. 2020; Liu et al. 2018).

The chief underlying assumption of HMMs is that brain dynamics can be parametrised by a finite state, positive recurrent, Markov process where functional activity and connectivity is determined by an observation model, typically a multivariate normal distribution (Vidaurre et al. 2016; Ting et al. 2020). In these models, dynamic switching between states can be interpreted as a temporal graph of probable state transitions (Fig. 1C). The full model can thus be interpreted as a nested, or multiplex graph in which the layers are brain states (with brain regions as nodes) and the interlayer directed edges are transition probabilities between state layers.

Novel multiplex approach

A state characterises a pattern of activity across the whole brain at a given time; however, it is most often characterised in terms of just a few key subgraphs of interacting brain regions (see Fig. 1C) (Ting et al. 2020; Liu et al. 2018). Much progress has been made to characterise the vast repertoire of activity patterns observed during resting states and task performance. These enquiries have given rise to a number of re-occurring and important networks, associated with a wide range of brain functions and behaviours (Shulman et al. 1997; Biswal et al. 1995; Menon 2011). The most prevalent and widely characterised of these are the so-called resting state networks, termed the Default-Mode (DMN), Salience (SN) and Central Executive (CEN) Networks, as well as those active during sensory and motor tasks, including the sensorimotor, visual and auditory networks (Ryali et al. 2016). The mechanisms underlying these networks are interdependent, with recruitment of one network often necessitating the further recruitment of other networks (Karahanoğlu and Van De Ville 2017). Conversely, some networks are known to be largely mutually antagonistic in activity, with DMN and SN activity generally being anticorrelated with sensorimotor-like activity in resting wakefulness (Vidaurre et al. 2018).

Although state space modelling of brain dynamics is a relatively young field, one key finding has been the multi-scale modularity of brain states. In particular, Louvain modularity-based community detection applied to the temporal graph of state transitions has shown that states are organised modularly into communities under a variety of conscious conditions including resting wakefulness and sleep (Vidaurre et al. 2017; Stevner et al. 2019).

In order to construct a set of plausible brain state models we train a number of HMMs with different numbers of states on resting state data. We then utilise our novel cross-validated model selection procedure, based on the principle of maximum entropy, to select the HMM that best generalises across subjects (Jaynes 1957). We convert the selected HMM into a dynamic graph model by transforming the state covariance matrices into weighted, directed graphs based on the regional correlations within each state, with node attributes given by the state mean activity. The intralayer network, which we term the Markov Information Matrix of the states, is motivated by an interpretation of brain states as realisations of an underlying Markov blanket or network as in Hipólito et al. (2021).

Ranking the importance of networks within the brain

We perform two-level Louvain community detection to discover important communities of brain states (temporal communities) and brain regions within a state (spatial communities). We use community centrality statistics to identify the hub states of key activity in each network. Within each hub state we look at spatial community structure to determine the key actors that may be important to the overall dynamics of wakefulness across subjects.

Random walks provide an effective way to construct representative samples from a graph in a way that preserves local structure (Dupont et al. 2006; Leskovec and Faloutsos 2006). In complex interdependent data sets random walk sampling can be used to remove baseline levels of interdependence and discern the most robust relationships in a one dimensional model, by conditioning out local inhomogeneity in noisy activity (Luecken et al. 2018). Here, we extend this principle to network sampling across two dimensions, space and time. Our method is based on a non-parametric random walk statistic that combines a temporal walk between layers with a spatial walk between regions. We use random walks to sample plausible patterns of functional network activity from the local functional activity background. We then use the samples as a benchmark against which to score functional coordination in our spatial communities. This statistical score, termed the T-score, is simple to compute given the graph model and putative network and is inspired by a similar method for analysing large, complex protein graphs with metalayer information (Luecken et al. 2018).

Our method allows us to determine which spatial communities are highly co-activated or inactivated relative to the expected dynamics across states in that brain area, providing a generalisable procedure to determine functionally relevant brain state communities. Our within state community functional associations largely agree with macroscopic analysis of the state functional activity maps, but provide an additional layer of information in the form of networks that provide clarification and depth to our understanding of brain states at the mesoscale.

Metatextual and network analysis of brain state models

We use the powerful meta-analysis tool Neurosynth (Yarkoni et al. 2011) to determine functional associations between each brain state, its most important networks and important functional terms from the literature. Neurosynth provides scores based on either correlations between brain images and the occurrence of a predefined set of terms in the literature or, in conjunction with the NIMARE package (NiMARE 2019), a posteriori probabilities of associations between the image and an exhaustive list of literature terms. Using these tools and images derived from our brain states, termed functional activity maps, we provide evidence to corroborate the modular processing hypothesis in resting wakefulness (Reichardt and Bornholdt 2006). Key to our findings is that the states associated with resting state networks tend to self-associate while being anticorrelated with sensorimotor associated states.

Methods

Fig. 2

Flow diagram of the graph modelling and analysis pipeline. Following preprocessing of the fMRI data we obtain multivariate regional brain activity time series for all N subjects. Variational Bayes inference is then used to train HMMs (using the HMM-MAR package (Vidaurre et al. 2016)). A is a sketch of an HMM fitted to the \(X_{n,t}\) data for subject n and time point t. Each hidden brain state \(S_{n,t}\) has mean activity \(\mu (S_{n,t})\) and covariance \(\Sigma (S_{n,t})\) (after backprojection). State change from \(S_{n,t}\) is determined by the transition matrix P. B The number of hidden states, K, is determined using mean subjectwise cross-validated maximum entropy, which is calculated over the fractional occupancies, \(\kappa _{s,n,k}\) for each subject-state pair up to K states. C Adjacency matrix of the interlayer temporal directed transition graph determined by the Markov transition matrix of the HMM, with temporal communities in red along the diagonal. D Each state itself can be considered a layer with edges relating brain regions by their correlation in activity derived from their modelled covariance \(\Sigma (s)\), with node weights (regional mean activity) determined by \(\mu (s)\). Of the states, some are highly connected state hubs, h(U), belonging to a temporal community U (red shading). E Each hub state (layer) h(U) is analysed and internal spatial communities are determined. F Internal communities are ranked according to their level of coherent brain activity compared to many repeated random walk samples from the multiplex model. G The results of ranking are summarised by the community T-score. A high T-score corresponds to a higher than expected level of coherent community activity when compared to the rest of the multiplex graph in this brain area.
We propose functions for highly ranked communities by mapping these regions onto a 3D functional activity map and comparing them to maps and terms drawn from the neuroscience literature with NeuroSynth

In the following sections, “Acquisition and pre-processing of fMRI data for HMM modelling” and “Model specification and generalisability”, we explain the preprocessing of the data and define the state space (HMM) model and novel model selection criterion. We will see that each brain state s can be thought of as a pattern of activity represented by a weighted graph \(G(s)=\{V,a(s),W(s)\}\) in which each node is a brain region \(x\in V\) (with \(|V|=D\) nodes), each with a level of functional activity \(a(s)^x\) attributed to x. Similarly, each edge in G(s) is weighted by \(W(s)^{x,y}\in W(s)\), the level of information flow from region x to region \(y\in V\) (edge absence is represented by \(W(s)^{x,y}=0\)), with \(W(s)^{x,y}\ne W(s)^{y,x}\) in general.

As we shall show, Hidden Markov Modelling with our new model selection method, provides a means to construct a dynamic state space model from multi-subject fMRI time series data in a data driven way. We use inter-regional correlations to determine the state graphs and use the temporal relationships between states to determine the directed interlayer edges (see Fig. 1C). Lastly, in “Louvain and hierarchical temporal clustering”, “Community hub selection”, “Identifying functionally important spatial communities” and “Analysis of states and communities with NeuroSynth” sections we set out methods to explore the spatiotemporal modular and functional structure of these multiplex brain state models.

Acquisition and pre-processing of fMRI data for HMM modelling

Ten minutes of whole brain fMRI activity were recorded separately for each of \(N=15\) wakeful subjects (with eyes closed) as part of a previous study (Mhuircheartaigh et al. 2013). The brain volumes produced by the scanner were aligned to the MNI152 standard brain template (Fonov et al. 2011). This resulted in a high dimensional time series of each subject’s fMRI (BOLD) signal for each voxel, with a temporal resolution of 3 s and a spatial resolution of 2 mm\(^3\) (Woolrich et al. 2009).

Recordings were collected separately from each subject. Of the 200 volumes recorded per subject (each time point is one volume), four dummy volumes were removed to exclude any non-steady-state magnetisation effects. This was followed by motion correction with MCFLIRT (Motion Correction FMRIB’s Linear Image Registration Tool), spatial smoothing using a Gaussian kernel of 5 mm full width half-maximum, global intensity normalisation, and temporal high-pass filtering with a cutoff of 0.02 Hz to remove low frequency scanner drift. Automated removal of non-brain tissue was initially performed before statistical analysis using BET (Brain Extraction Tool), with further manual correction in FSLview. Further spatiotemporal artefact removal was carried out by independent component analysis in FSL MELODIC (Woolrich et al. 2009).

We selected regions of interest in our study based on the Harvard-Oxford (HO) probabilistic cortical and subcortical brain parcellations, which assign to each voxel a probability for each brain region. We assign each voxel a unique region identity according to the maximum probability across regions in the HO parcellation. Excluding white matter regions, the resulting parcellation of 63 Regions of Interest (ROIs) comprises 48 cortical and 15 subcortical brain regions (Caviness et al. 1996; Makris et al. 2006). ROI time series were calculated using the ROI spatial mean BOLD signal at each time point. This results in a \(D=63\) dimensional time series with \(T=196\) time points per subject. Each of the D constituent ROI time series were temporal mean subtracted and normalised by the standard deviation.

Model fitting presents two challenges: the first is that the time taken to fit the model scales with parametric complexity, and the second is that a poorly parametrised model may lead to overfitting or underfitting. To address these challenges, dimensionality reduction by principal components of the original D dimensional time series was performed to reduce parametric complexity while also reducing overall noise. This approach is justified by the generally low embedding dimension of most real world data, including neuroimaging data (Ma et al. 2018; Shen and Meyer 2008). In order to balance dimensionality reduction and retention of signal, Parallel Analysis is used (see Additional file 1: Section 1) to obtain a \(D\times d\) eigenmatrix A of the first \(d<D\) eigenvectors (Horn 1965). This method assumes roughly linear separability of uncorrelated noise from signal, but has been shown to outperform a number of methods, including maximum likelihood estimation, in simulation (Humphreys and Montanelli 1975). The reduced d dimensional time series \(\{X_{n,t}^*\}_{t\in {\mathbb {N}}_T}\) is then used to train a noise-reduced HMM model of the data.
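As a rough illustration of this step, the following is a minimal Parallel Analysis sketch in NumPy. The column-wise permutation scheme and the 95th-percentile cutoff are common implementation choices, not details specified here:

```python
import numpy as np

def parallel_analysis(X, n_shuffles=100, quantile=95, seed=0):
    """Horn's Parallel Analysis: retain the d components whose eigenvalues
    exceed those expected under no correlation structure.
    X is a (T, D) matrix (rows = time points, columns = ROIs)."""
    rng = np.random.default_rng(seed)
    T, D = X.shape
    # Eigenvalues of the observed correlation matrix, sorted descending
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    # Null eigenvalues from column-wise permutations, which destroy
    # cross-ROI correlations while keeping each ROI's marginal distribution
    null = np.empty((n_shuffles, D))
    for i in range(n_shuffles):
        Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(D)])
        null[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Xp, rowvar=False)))[::-1]
    threshold = np.percentile(null, quantile, axis=0)
    return int(np.sum(obs > threshold))
```

The retained count d then fixes the width of the eigenmatrix A used for the projection.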

Model specification and generalisability

We use the HMM-MAR package to train HMMs with multivariate normal observations by Variational Bayes (Vidaurre et al. 2016), whilst separating the data by subject into distinct trials of length T. For further details on model fitting see (Vidaurre et al. 2016). Figure 2A shows how observations of the fMRI BOLD signal at each time point are modelled across subjects. Dynamics for each subject are modelled and fitted using a shared finite set of states \({\mathcal {S}}=\{1,2, \ldots ,K\}\) and Markov transition matrix P.

We give a brief overview of HMM dynamics. We note that a key parameter of these dynamics, the number of brain states \(K=|{\mathcal {S}}|\) that best generalises across subjects, is unknown. Consequently, we introduce a novel framework for selecting K based on an information theoretic criterion that maximises generalisability by maximising entropy of the state dynamics across subjects.

In each HMM state trajectory, the initial state of each subject’s trial is selected independently at random. Under the Markov assumption of the model the resulting subject-specific state dynamics are assumed independent realisations of the same stochastic process, \(S_{n,t}\). For \(t>1\), \(S_{n,t}\) is conditionally dependent on the previous time step \(S_{n,t-1}\) so that

$$\begin{aligned} Pr(S_{n,t}=s|S_{n,t-1}=s',{\mathcal {M}})=P_{s,s'}, \end{aligned}$$
(1)

for \(s'\in {\mathcal {S}}\). Each brain state \(s\in {\mathcal {S}}\) is associated with an observation model \(O(s)\sim MVN(\mu ^*(s),\Sigma ^*(s))\). The \(O(S_{n,t})\) model the dimensionally reduced brain data \(X_{n,t}^*\). In order to obtain the full model, the reduced model is then back-projected into D dimensional brain region space [see Eq. (2)].
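The dynamics of Eq. (1) and the MVN observation model can be illustrated with a small simulation. All parameter values below are toy choices rather than fitted quantities, and rows of P are taken to index the previous state:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: K = 3 hidden states, d = 2 reduced dimensions, T = 500 time
# points for one subject (illustrative values only)
K, d, T = 3, 2, 500
P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.05, 0.90]])          # each row sums to 1
mu = np.array([[0.0, 0.0], [3.0, -1.0], [-2.0, 2.0]])
Sigma = np.stack([v * np.eye(d) for v in (0.5, 1.0, 0.8)])

# S_1 is drawn uniformly at random; thereafter S_t depends only on S_{t-1}
# via P, and the observation is X_t ~ MVN(mu(S_t), Sigma(S_t))
S = np.empty(T, dtype=int)
X = np.empty((T, d))
S[0] = rng.integers(K)
for t in range(T):
    if t > 0:
        S[t] = rng.choice(K, p=P[S[t - 1]])
    X[t] = rng.multivariate_normal(mu[S[t]], Sigma[S[t]])
```

Inference inverts this generative process, recovering posterior state probabilities from the observed series.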

Novel model selection criterion based on fractional occupancy

The Markov chain defined by P and any given initial state \(s_0\in {\mathcal {S}}\) has a unique stationary distribution \(\pi _s\) that is independent of \(s_0\), assuming the chain is irreducible and the states are positive recurrent. The probability \(\pi _s\) is the long run probability of the re-occurrence of state s. Selection of the number of these hidden states is carried out by cross-validated entropy maximisation over the related fractional occupancy distribution. The fractional occupancy distribution \(\kappa\) is defined by subject n for each state s and given by

$$\begin{aligned} \kappa (s,n|{\mathcal {M}},X) = \frac{1}{T}\sum \limits _{t=1}^T Pr(S_{n,t}=s|{\mathcal {M}},X) \end{aligned}$$

where \(Pr(S_{n,t}=s|{\mathcal {M}},X)\) is the posterior probability of state s occurring at time t given the model \({\mathcal {M}}\) and data X. The fractional occupancy is the probability of finding subject n in state s over the entire trial of length T. The distribution \(\kappa\) for subject n is related to the stationary distribution \(\pi _s\) by the well-known limit

$$\begin{aligned} \kappa (s,n|{\mathcal {M}},X) \xrightarrow [T\rightarrow \infty ]{} \pi _s. \end{aligned}$$

That is to say that \(\kappa\) asymptotically approximates the long run average state dynamics of the model as trial length increases. Knowing this, our goal is to select the model whose fractional occupancy maximises the entropy pooled across subjects by maximising the objective function

$$\begin{aligned} H(k) = - \sum \limits _{n=1}^N\sum \limits _{s=1}^{k}\kappa (s,n|{\mathcal {M}}(n,k),X_n)\log [\kappa (s,n|{\mathcal {M}}(n,k),X_n)] \end{aligned}$$

where the model \({\mathcal {M}}(n,k)\) is the model trained using all trials except the data from subject n assuming k hidden states, and \(X_n\) is the trial data from subject n (see Fig. 2B).

By selecting the initial number of states \(K={{\,\mathrm{arg\,max}\,}}H(k)\), we appeal to the information theoretic principle of maximum entropy which states that the model which maximises the uncertainty over the data tends to be the one that best approximates the true data distribution (Jaynes 1957). More specifically, our goal is to obtain a set of states with similar uncertainty about subject behaviour over the course of the experiment. We shall see in “Entropy relates to model selection” section that the goal of state-subject uncertainty maximisation relates closely to that of optimal model selection. We note that to the best of our knowledge this is the first application of such a subject-specific entropic criterion in state space model selection.
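A minimal sketch of the criterion, assuming the leave-one-subject-out fractional occupancies have already been computed (e.g. with HMM-MAR); the function names here are ours, not part of any package:

```python
import numpy as np

def pooled_entropy(kappa):
    """H(k) pooled across subjects; kappa[n, s] is the fractional occupancy
    of state s for held-out subject n under the model trained without n."""
    kappa = np.asarray(kappa, dtype=float)
    # Convention 0 * log(0) = 0 for states a subject never occupies
    terms = np.where(kappa > 0, kappa * np.log(np.where(kappa > 0, kappa, 1.0)), 0.0)
    return -terms.sum()

def select_K(kappas_by_k):
    """Given a dict mapping candidate k -> (N, k) occupancy array, return
    K = argmax_k H(k)."""
    return max(kappas_by_k, key=lambda k: pooled_entropy(kappas_by_k[k]))
```

With uniform occupancy over k states, H(k) grows like N log k, so in practice the curve is inspected for the k at which cross-validated entropy peaks rather than grows degenerately.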

The state Markov information graph

First, the model parameters \(\mu ^*(s)\) and \(\Sigma ^*(s)\) for state s from the HMM model \({\mathcal {M}}\) are backprojected using the eigenmatrix A to obtain a model in D dimensional brain space, so that the full D dimensional model has mean \(\mu (s)\) and variance \(\Sigma (s)\) defined over the ROIs and given by

$$\begin{aligned} \Sigma (s)=A\Sigma ^*(s)A^T \qquad \text {and} \qquad \mu (s)=\mu ^*(s)A^T. \end{aligned}$$
(2)

Using the full model, each state s has normally distributed observations with mean \(\mu (s)^x\) and covariance \(\Sigma (s)^{x,y}\), for brain regions \(x,y \in V\). We use these to define a graph \(G(s)=(V,a(s),W(s))\) over the set of D brain regions, node weights a(s) and edge weights W(s), which we take to be a proxy for the information flow between regions. More specifically, we estimate the weights W(s) by the correlation matrix \(|\rho (s)|\), as derived from the state covariance matrix \(\Sigma (s)\).

Here, \(a(s)^x=\mu (s)^x\) is the mean regional functional activity at brain region x in s. The weighted edge (directed information flow) from region x to region y is

$$\begin{aligned} W(s)^{x,y} = \frac{|\rho (s)^{x,y}|}{\sum \nolimits _{z=1}^D|\rho (s)^{x,z}|}. \end{aligned}$$
(3)

The resulting edge weight matrix W(s) defines a Markov transition matrix, a model of information flow between brain regions in state s, in which information flow between x and y is defined both into x from y, \(W(s)^{x,y}\), and out of x to y, \(W(s)^{y,x}\). Note this defines a potentially asymmetric and directed graph with edges (information flow) both into and out of x. The rationale for using such a Markov transition matrix to define edge weights is to convert the entire network into a dynamic Markov graph in which information is propagated probabilistically both in time and space. This is useful in particular in “Analysis of states and communities with NeuroSynth” section.
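Equation (3) amounts to row-normalising the absolute correlation matrix derived from \(\Sigma (s)\); a minimal sketch:

```python
import numpy as np

def markov_information_matrix(Sigma):
    """Eq. (3): row-normalised absolute correlations from a state covariance
    matrix Sigma, yielding a Markov transition matrix over brain regions."""
    sd = np.sqrt(np.diag(Sigma))
    rho = Sigma / np.outer(sd, sd)            # correlation matrix
    W = np.abs(rho)                           # |rho|, including the diagonal
    return W / W.sum(axis=1, keepdims=True)   # each row sums to 1
```

Because each row sums to one, W(s) can be iterated directly as the spatial step of the random walks used later in the pipeline.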

Louvain and hierarchical temporal clustering

We perform Louvain modularity detection on the directed Markov transition and information graphs (Blondel et al. 2008). Suppose \(G=(V,E,W)\) is a potentially directed and weighted graph with vertex set V, edge set E and weight matrix W. The Louvain algorithm involves the greedy optimisation of an objective function \(Q({\mathcal {U}})\), termed the modularity score for \({\mathcal {U}}\) a partition of V (see Additional file 1: Section 2) (Girvan and Newman 2002; Newman 2006). The algorithm allows for a resolution parameter \(\gamma\) which determines the relative size of communities and goes to one as \(\gamma \rightarrow \infty\) (Lambiotte et al. 2008).

We use a form of the Louvain optimisation algorithm originally designed for undirected networks but complement this with a version of the modularity \(Q({\mathcal {U}})\) which has been adapted for directed networks in Nicosia et al. (2009) and Leicht and Newman (2008). In order to assess the validity of this approach, a rough measure of the degree of symmetry in a weight matrix W can be given by the fraction of the energy of the adjacency matrix (as measured by the Frobenius norm) that is contributed by the symmetric part, \(\text {Sym}(W)\) (see Additional file 1: Section 3) (Aggarwal 2020).
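One plausible reading of this symmetry measure, using squared Frobenius norms so that the symmetric and antisymmetric contributions sum to one (the exact normalisation is our assumption, not stated here):

```python
import numpy as np

def symmetry_fraction(W):
    """Fraction of the Frobenius energy of W carried by its symmetric part
    Sym(W) = (W + W.T) / 2; equals 1 for a symmetric matrix and 0 for a
    purely antisymmetric one."""
    sym = (W + W.T) / 2
    return np.linalg.norm(sym, "fro") ** 2 / np.linalg.norm(W, "fro") ** 2
```

Values near one indicate that treating the graph as undirected for the Louvain optimisation loses little structure.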

In the case of temporal communities, we determine the significance of the community partitioning by comparing \(Q({\mathcal {U}})\) to an empirical distribution composed of modularity scores from 10,000 partitions constructed by random permutation of the community labels. In addition, in order to examine the state-subject relationships directly, we perform agglomerative hierarchical linkage clustering based on correlation in fractional occupancy \(\kappa\) using Ward’s method (Ward 1963).
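The clustering step can be sketched with SciPy on toy occupancy data; note that Ward's method is strictly defined for Euclidean distances, so applying it to correlation distances, as done here, is a pragmatic choice:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# kappa[n, s]: fractional occupancy of state s for subject n (toy data)
rng = np.random.default_rng(0)
kappa = rng.dirichlet(np.ones(8), size=15)       # 15 subjects, 8 states

# Correlation-based distance between subjects' occupancy profiles
C = np.corrcoef(kappa)                           # (15, 15) subject correlations
dist = 1.0 - C[np.triu_indices(15, k=1)]         # condensed distance vector
Z = linkage(dist, method="ward")                 # agglomerative, Ward's method
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
```

The resulting dendrogram Z can also be inspected directly to judge how strongly subjects group by their state usage.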

Community hub selection

State hubs are the states most central to the dynamics of the model and facilitate the switching dynamics within each community. These are selected by maximising the community centrality z-score, z(s), for each community \(U\subset {\mathcal {S}}\) (Guimera and Amaral 2005; Shine et al. 2016). This score measures the within community degree centrality of a node relative to the mean community connectivity (see Additional file 1: Section 4). Hubs are then analysed for their community structure, using the same Louvain algorithm as in “Louvain and hierarchical temporal clustering” section but this time on the directed brain state graph G(s).
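A sketch of hub selection via the within-community z-score, using node strength (total within-community edge weight) in place of degree for these weighted graphs:

```python
import numpy as np

def community_z_scores(W, labels):
    """Within-community z-score (after Guimera & Amaral): for node i in
    community U, z_i = (k_i - mean_U) / std_U, where k_i is i's total edge
    weight to other members of U."""
    labels = np.asarray(labels)
    z = np.zeros(len(labels))
    for U in np.unique(labels):
        members = np.where(labels == U)[0]
        k = W[np.ix_(members, members)].sum(axis=1)  # within-community strength
        std = k.std()
        z[members] = (k - k.mean()) / std if std > 0 else 0.0
    return z

def community_hubs(W, labels):
    """Hub of each community: the member with the maximal z-score."""
    labels = np.asarray(labels)
    z = community_z_scores(W, labels)
    return {U: int(max(np.where(labels == U)[0], key=lambda i: z[i]))
            for U in np.unique(labels)}
```

Applied to the temporal transition graph this yields one hub state per temporal community, which is then decomposed spatially.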

Identifying functionally important spatial communities

Some detected communities are more relevant to a state's functional role than others. Performance of these roles requires both functional activation and coordination of brain regions. To discern which communities are the most functionally cohesive, we rank communities by comparing them to samples of regional activity from the full multiplex graph model (see Fig. 2C, D). We use random walks to sample plausible patterns of functional network activity and employ them as a benchmark against which to measure the level of coordination within spatial communities. Controlling for the local level of background activity in space and time gives a more representative indication of functional cohesion within brain networks identified by community detection than naive comparison of community mean functional activity.

We introduce to neuroimaging the Functional Homogeneity, FH, as our community coherence measure, a statistic derived from the mean activity \(\mu (s)\) and covariance \(\Sigma (s)\) that is high when the community mean activity is most in agreement with the directions of maximum community functional connectivity and low otherwise. It is a measure of the alignment between the two key features of spatial communities, their level of shared information and activation. The measure is well established in computer vision and image classification, where it is known as the covariance metric and measures the agreement between and within image classes (Li et al. 2019). The FH for a community C in a state s is

$$\begin{aligned} FH(s,C) = {\mu (s)^C}^T\Sigma (s)^C \mu (s)^C, \end{aligned}$$
(4)

where the superscript C refers to the submatrix given by removal of all rows and columns not corresponding to regions in community C. This metric is key to the community ranking procedure which follows a six step process:

  1. Given a community \(C\subset V\) in state G(s), calculate FH(s, C).

  2. Sample a state \(s'\) from the stationary distribution \(\pi\).

  3. Select a region \(x\in C\) and sample |C| nodes from \(G(s')\) by a random walk starting at x.

  4. Repeat steps 2 and 3 to construct a representative sample of paired states and brain regions \((s_1,C_1),(s_2,C_2),\ldots ,(s_L,C_L)\).

  5. Calculate the T-score for functional cohesiveness of a subgraph

     $$\begin{aligned} T(s,C) = \frac{1}{L}\sum \limits _{l=1}^L I[FH(s,C)>FH(s_l,C_l)], \end{aligned}$$

     where I is the standard indicator function, and rank the communities in s by decreasing T-score.

  6. Determine whether the community represents a correlated or anticorrelated brain subgraph by the sign of \(E_C[\mu (s)]=\sum _{x\in C} \mu (s)^x\).

The T-scores of all the communities in a specific state can then be used to order the communities by how strongly they are likely to contribute to the functional cohesion of the state. Note that T(s, C) is a score between zero and one, with one implying that the community C is much more functionally cohesive than other comparable brain subgraphs in space and time. T-scores are not designed to be compared across states. These steps are summarised by steps E to F in Fig. 2.
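The six steps of the ranking procedure can be sketched as follows. This is an illustrative implementation under our own assumptions: `mu`, `Sigma` and `W` are hypothetical per-state containers of mean activity, covariance and edge weights, edge weights are assumed strictly positive so the walk always terminates, and the walk is a simple weighted random walk rather than the authors' exact sampler:

```python
import numpy as np

def fh(mu, Sigma, C):
    """Functional Homogeneity of community C (Eq. 4)."""
    C = np.asarray(C)
    return float(mu[C] @ Sigma[np.ix_(C, C)] @ mu[C])

def walk_sample(W, start, size, rng):
    """Collect `size` distinct nodes by a weighted random walk on W
    (assumes strictly positive edge weights)."""
    nodes, current = {int(start)}, int(start)
    while len(nodes) < size:
        p = W[current] / W[current].sum()
        current = int(rng.choice(len(p), p=p))
        nodes.add(current)
    return sorted(nodes)

def t_score(mu, Sigma, W, pi, s, C, L=1000, seed=0):
    """Fraction of random spatiotemporal samples whose FH falls below
    that of community C in state s (steps 1-5 of the procedure)."""
    rng = np.random.default_rng(seed)
    q = fh(mu[s], Sigma[s], C)                   # step 1: observed FH
    hits = 0
    for _ in range(L):
        sp = int(rng.choice(len(pi), p=pi))      # step 2: state from pi
        x = rng.choice(C)                        # step 3: seed region in C
        Cp = walk_sample(W[sp], x, len(C), rng)  # step 3: walk on G(s')
        hits += q > fh(mu[sp], Sigma[sp], Cp)    # step 5: indicator
    return hits / L                              # step 4 via the loop
```

Step 6 then amounts to checking the sign of `mu[s][C].sum()` to label the community as correlated or anticorrelated.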

Analysis of states and communities with NeuroSynth

NeuroSynth is a meta-analysis tool that takes in 3D images of brain activity (termed functional activity maps) in MNI152 standard space and returns a scored association (based on the Pearson correlation) between the activity maps and other images from published articles that directly reference a given term i (Yarkoni et al. 2011). We choose the six terms most clearly associated with resting state activity: default mode, salience and executive (the resting state network terms) and sensorimotor, auditory and visual (the sensory network terms). We use these to characterise the mean activity of a given state s by projecting the activity pattern \(\mu (s)\) back into 3D brain standard space (see Supplementary Figure S-2A) and inputting the resulting map into NeuroSynth.

The resulting score for a state s and term i is denoted \(\theta _{i,s}\in [-1,1]\), with 1 indicating perfect correlation between the state's mean functional activity map and i and -1 indicating perfectly anticorrelated activity. We note that these terms, while chosen to relate to known resting state patterns, are not equivalent to those networks and should be thought of as suggestive of a global pattern of activity (or its absence). We explore the activity of actual networks in our spatial community analysis (“Community rankings reveal spatiotemporal modules of functional activity” section).

We propose that the global score \(\theta\) can also be considered a dynamically changing property of the system. Given a score \(\theta _{i,s}\) for a term i and state s, the one step ahead predicted score is

$$\begin{aligned} E_{t+1}[\theta _{i,s}] = \sum \limits _{s'\in {\mathcal {S}}} P_{s,s'} \theta _{i,s'}. \end{aligned}$$
(5)

We use this predicted score to examine the global properties of the activity observed after reaching a given state.
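Equation 5 is a matrix–vector product, so the predicted scores for all current states can be computed at once; a minimal sketch:

```python
import numpy as np

def predicted_scores(P, theta):
    """One-step-ahead expected NeuroSynth scores (Eq. 5):
    E_{t+1}[theta_i | s] = sum_{s'} P[s, s'] * theta_i[s'],
    returned for every current state s simultaneously."""
    return P @ theta
```

For example, with a two-state chain that always switches state, the predicted score for each state is simply the other state's score.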

NeuroSynth can also be used in conjunction with the newly developed package NiMARE to directly calculate the posterior probability of terms, from a large corpus of neuroimaging journal abstracts and images, given a selection of brain voxels in standard space (NiMARE 2019). Due to the variability in brain region size, regions selected by community membership are downsampled by selecting 10,000 voxels with replacement from each community, which was found to produce posterior probabilities stable to the third decimal place.

We use NeuroSynth with NiMARE to determine a plausible function for each of our spatial brain region communities, selecting only those terms that are most a posteriori probable and that have a functional rather than anatomical interpretation (see Supplementary Figure S-2B). We pass each community from each hub state through our spatiotemporal community ranking method, producing a ranked list of communities of brain regions per state, and then pass each top ranked community through the NiMARE/NeuroSynth method to determine its most likely functional term associations. To be comparable with the global score \(\theta\), the NeuroSynth score is given a positive or negative sign depending on the mean activity of the regions, as suggested in the “Identifying functionally important spatial communities” section.

Validation of model framework

A detailed validation of key features of the modelling and analysis framework was carried out using synthetic data (see Additional file 1: Section 5). This includes validation of the dimensionality reduction method as a means to reduce the computational demand of modelling while retaining community structure using the Adjusted Rand Index (ARI) (Rand 1971). Validation is also performed for the Markov Information Graph-based community detection and model selection procedures. Other key components of the model such as the HMM inference procedure have already been validated using synthetic data with detailed simulations (Vidaurre et al. 2016, 2018).

Not all components of the modelling and analysis framework could be validated by simulation as it was considered beyond the scope of this document to generate realistic synthetic community functional homogeneity and NeuroSynth scores. The community importance ranking procedure is instead validated using real annotation metadata and the NeuroSynth tool.

Results

Results for our multisubject HMM model training and multiplex graph model analysis are given below.

Dimensionality reduction

We select the appropriate number of principal components using the method of parallel analysis outlined in Additional file 1: Section 1. This resulted in a reduced set of \(d=9\) dimensions that account for roughly \(75\%\) of the total variance, which are then used in fitting the model. Validation of this approach using synthetic data is explored in Additional file 1: Sections 5.1 and 5.2.
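A common form of Horn's parallel analysis (a sketch under our own assumptions, not necessarily the exact variant used here) retains those components whose correlation-matrix eigenvalues exceed the corresponding eigenvalues obtained from column-permuted, and hence decorrelated, copies of the data:

```python
import numpy as np

def parallel_analysis(X, n_perm=100, quantile=0.95, seed=0):
    """Horn's parallel analysis: keep components whose correlation-matrix
    eigenvalues exceed the chosen quantile of eigenvalues from
    column-wise permuted (decorrelated) copies of the data X."""
    rng = np.random.default_rng(seed)
    eig = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    null = np.empty((n_perm, X.shape[1]))
    for i in range(n_perm):
        # permute each column independently to destroy correlations
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(Xp, rowvar=False)))[::-1]
    return int((eig > np.quantile(null, quantile, axis=0)).sum())
```

The retained count plays the role of d, the number of principal components carried forward into model fitting.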

Entropy relates to model selection

Applying our cross-validated maximum entropy Hidden Markov Model selection criterion by maximising the cross-validated entropy H(k), we obtain an HMM with \(K=33\) initial states. Figure 3 shows that the entropy maximum also coincides with the maximisation of the cross-validated Bayesian log-likelihood, which is a general indicator of model fit. To further reduce the risk of overfitting, we exclude those states that occur in fewer than 25\(\%\) of subjects and renormalise P so that the rows again sum to one. The resulting model has a total of \(K=27\) brain states.
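The pruning and renormalisation step can be sketched as follows, where `occurs` is a hypothetical subject-by-state boolean matrix recording whether each state appears in each subject:

```python
import numpy as np

def prune_states(P, occurs, min_frac=0.25):
    """Drop states present in fewer than `min_frac` of subjects and
    renormalise the transition matrix rows to sum to one.
    `occurs[i, s]` is True if state s occurs in subject i.
    Returns the pruned matrix and the indices of retained states."""
    keep = occurs.mean(axis=0) >= min_frac
    P = P[np.ix_(keep, keep)]
    return P / P.sum(axis=1, keepdims=True), np.where(keep)[0]
```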

Fig. 3

Selection of the number of hidden states by minimising the negative cross-validated entropy. The axes show the negative cross-validated log-likelihood \(-\text {cvLL}\) (left) and negative cross-validated entropy \(-\text {cvH}\) (right). Qualitative similarities are evident between the two criteria suggesting deeper similarities between likelihood and entropy maximisation

Network dynamics indicate clustering of activity patterns in space and time

Table 1 shows that states positively correlated with resting state activity terms are significantly more likely to transition to states with similar associations and vice versa (see Supplementary Figure S-3 for linear model comparison). In contrast, states correlated with resting state terms tended to transition to states that are negatively correlated with the sensory terms. This suggests that states associated with the former resting state networks tend to co-occur to the exclusion of sensory and sensorimotor patterns of activity. These results indicate a spatiotemporal separation between resting state network activity and sensory activity.

States with high scores for sensory activity terms show a far weaker positive affinity for transitions to each other than do states associated with the resting state network terms. This suggests that concurrent activity in space and time is most likely between states with high resting state network activity; this pattern of concurrence is only weakly present for sensory modes of activity. In contrast, robust mutually antagonistic spatiotemporal relationships between sensory and resting state network associations are present. We shall see in the “Community rankings reveal spatiotemporal modules of functional activity” section that this pattern of mutual exclusivity is mirrored by the most central states in the network, or hub states, at both the global (functional activity map) and local (network community) levels. States show a general trend of transitioning from one global activity association to another state that scores highly for the same association, suggesting some level of brain state inertia in the global pattern of functional activity.

Table 1 This table shows the relationships between NeuroSynth terms scores, calculated using the mean activity brain map for each state and the one step ahead projected score for each term according to the model (see Eq. 5)

Evidence for metastate structure in wakefulness

In order to demonstrate the presence of temporal community structure, we performed hierarchical linkage clustering using the correlation in \(\kappa\) between subjects and states. We also calculated the normalised degree of symmetry in P, \(\text {Sym}(P)=0.9921\), indicating a high degree of symmetry (with \(\text {Sym}(P)=1\) when P is completely symmetric). Figure 4A suggests a temporally clustered pattern of state fractional occupancy in which certain states are more likely to co-occur in one subset of subjects than in another. Figure 4B shows the transition probability matrix P organised into communities by Louvain community detection, where \(\gamma =0.48\) (as selected by Variation of Information minimisation) (Lambiotte et al. 2008). Temporal communities indicate modules of clustered state transitions. This temporal community partition was tested for robustness by comparing the Q modularity statistic to 10,000 random permutations of the community labels (\(p=10^{-4}\)).

Each community, \(U\subset {\mathcal {S}}\), is characterised by a hub state h(U), the state with the highest community degree z-score, a measure of state centrality to the temporal network (see Supplementary Figure S-1). Figure 4C shows the long run probability \(\pi _s\) of state s re-occurrence. Re-occurrence and centrality to a community appear to be strongly correlated: states more central to their communities according to the z-score, z(s), also tended to have a higher stationary probability \(\pi _s\), with correlation coefficient \(\rho =0.537\) (\(p=0.004\)). This observation suggests that, as mediators of network dynamics, community hub states tend to re-occur, playing a central role in the overall network dynamics as well as in their own community.
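The stationary probabilities \(\pi _s\) can be obtained as the leading left eigenvector of the transition matrix P; a minimal sketch:

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution pi of a Markov chain with transition
    matrix P: the left eigenvector of P with eigenvalue 1,
    normalised to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()
```

For an ergodic chain this vector satisfies \(\pi P = \pi\), i.e. the long run occupancy is invariant under a further transition.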

Fig. 4

A summary of the subject state network dynamics. A Clustergram showing the relationships in Fractional Occupancy (FO), the proportion of time spent in a state, clustered by subjects (vertically) and by states (horizontally). B The log of the state transition matrix P, with states grouped along the diagonal according to their community membership. C Pie chart showing the state occupancy at equilibrium (the probability of finding a subject in a state in the limit as time goes to infinity). Wedges in this pie chart are the individual hub states in each community according to the community z-score. These two scores share a significant correlation of 0.537 (\(p=0.004\)), indicating the importance of community centrality to long run behaviour

Community rankings reveal spatiotemporal modules of functional activity

Louvain community detection was performed on each of the community hub state graphs G(h(U)) for each community U in partition \({\mathcal {U}}\). We assessed the degree of symmetry in the Markov Information graph of each hub state and found that \(\textit{Sym}(W(h(U)))>0.99\) for all communities U. Here, the Variation of Information was not used to select \(\gamma\), as differing recommended \(\gamma\) between hubs was found to produce communities of inconsistent and incomparable sizes; we thus set the resolution to \(\gamma =2\) for all hub states. This was found to produce median spatial community sizes that were small enough on average (roughly 4 regions per community) for our community ranking method to sample the graph efficiently while still being large enough to detect functionally conserved brain state networks.

We perform NeuroSynth analysis by taking the mean functional activity maps generated for the hub states as input, in combination with the resting state network terms default mode, salience and executive and the sensory network terms sensorimotor, auditory and visual (see Supplementary Figure S-2A for an algorithmic explanation). The results in Table 2 suggest a separation between sensory and resting state activity in space and time, with hub states scoring highly for either resting state or sensory terms but rarely both. Table 2 gives the highest ranked functional terms (filtering out purely anatomical terms) in each hub state for the top three ranked spatial network communities (using our ranking method). The top terms for each of the networks (communities) in the states largely coincide with the functional associations ascribed to each of the hub states themselves.

Exploring these relationships, we see that in some cases the connections between spatial community function and hubs are direct. State 23 shows a positive association with observation and action in dominant spatial communities and a strong association with all three sensory network terms. State 11 shows a clear association with auditory activity as well as a top ranked community association with the term voice. In state 15, which shows a strong correlation with \(\textit{visual}\) activity, the top ranked communities include positive associations with the face (a common object of visual processing).

In some states we see both strong positive and negative associations. Global negative associations are difficult to interpret in isolation as evidence of anticorrelated network behaviour within a state; however, when paired with mesoscale information from the top ranked communities, a stronger case is possible. State 32 appears mixed in activity but shows strong to moderate negative correlations with visual and auditory processing, the latter corroborated by the anticorrelated speech network. State 30 is another state with mixed associations based purely on global functional activity; however, we see both a moderate global negative correlation with visual activity and a specific negatively correlated community related to visual tasks or processing, suggesting a visual down state. A similar explanation can be used for state 5. State 23 is a sensory state, with sensory associations at both the global and network scales, and is negatively correlated with default mode activity. The default mode network is involved in language comprehension and reasoning, explaining the anticorrelated network associated with syntactic processing. Negatively associated communities may more generally reflect decreased metabolic or functional demand in these networks, leading to a coordinated down state.

Table 2 Summary of the NeuroSynth results for the hub states

Discussion

In this paper we present a fully unsupervised pipeline for characterising the spatiotemporal activity of neuronal brain states in terms of a multiplex brain state graph model. This pipeline involves the training of an HMM in order to obtain a multiplex spatiotemporal directed brain state graph that represents the dynamics of subjects in resting wakefulness. We present a method for obtaining a set of states (layers) that generalises well over subjects and use this method to determine key states in the network dynamics. Lastly, we identify the spatiotemporal components of the model that are most central and most functionally coherent, characterising these using metatextual image analysis of the neuroscience literature.

Our HMGM-based methodology reveals a rich array of complementary communities acting together to produce modes of neural behaviour during resting wakefulness. Crucially, we have shown that patterns of activity resembling the resting state networks tend to co-occur and that these patterns tend to preclude sensory and sensorimotor patterns of activity. This modularity of brain state function has been suggested by others (Smallwood et al. 2012; Vidaurre et al. 2017), but meta-analysis of terms associated with these functions allows us to characterise individual states and quantify their change in character through time.

Within each hub brain state the division between functions was not clearly partitioned, with many states featuring communities with memory or autobiographical associations, possibly suggesting an undercurrent of narrative thought which persists across numerous states. Alternatively, this may be due to artefacts caused by auditory memory-related task studies in the NeuroSynth database. It is important to note that spatiotemporal state-based activity analysis is novel, and so terms in the literature which derive from static models of activity may not map accurately onto dynamic patterns of activity. In particular, transient states may be smoothed out of these analyses, meaning that new studies will need to be performed focusing on dynamic functional activity change at much shorter time scales in order to build up an understanding of function in dynamic brain states.

Some of the state global functional activity term associations, particularly negative ones, remain difficult to interpret. In state 13, there is a strong association with the term sensorimotor; however, all of the top ranked communities for this state are negatively associated with functions that may have a closer association to resting state activity. This could be due to a putative link between central executive activity and reward observed in primates (Sigmund et al. 2001), but may also be due to ranking error or noise in our graph model. However, the roles of many states become clearer when combining functional information from either anticorrelated or correlated mesoscale communities with global tendencies in functional activity. We hypothesise that strongly cohesive anticorrelated networks may be entering a coordinated down state due to changes in metabolic or functional demand (Tomasi et al. 2017; Passow et al. 2015; Thompson 2018).

One issue with our approach is that the Louvain implementation we use with directed modularity does not fully capture the signal of edge directionality in community detection (see Additional file 1: Section 2). This problem may be partially mitigated by the fact that we found the edge weights in question to not be highly asymmetric when measured as a fraction of matrix energy. However, community detection methods that more directly account for directed edges, such as InfoMap (Rosvall et al. 2009), or for the Markov structure of the model, such as Jin et al. (2011), may identify other forms of community structure in our graph models that are worth investigating. In particular, we intend to investigate more general implementations of the Louvain algorithm that are optimised for directed networks (Li et al. 2018; Dugué and Perez 2015).

Presently, our framework also does not fully take advantage of the multiplex graph structure of the model, for example via multilayer community detection, which can be complex to parametrise (Hanteer and Magnani 2020). However, a potential advantage of the HMGM framework is that it provides a way to ground the interlayer coupling parameters used in some multilayer community detection in a natural property of the model, the probability of state transition. In future work we intend to investigate multilayer community detection approaches to look at dynamic changes in network membership, using coupling parameters based on the transition probabilities between state layers.

We plan to apply our multiplex analysis framework to conditions of altered consciousness in deep anaesthesia and determine novel spatiotemporal networks that characterise this condition with comparison to our current graph model for resting wakefulness. In this way we hope to elucidate the complex network dynamics underlying conscious brain activity (Huang et al. 2021).

Availability of data and materials

Data, community ranking, and model selection code is available from the authors upon request.

Abbreviations

BOLD:

Blood Oxygen Level Dependent signal

CEN:

Central Executive Network

DMN:

Default mode network

fMRI:

functional MRI (Magnetic Resonance Imaging)

FO:

Fractional occupancy

PCA:

Principal Component Analysis

HMM:

Hidden Markov model

ROI:

Region of interest

SM:

Sensorimotor

SN:

Salience network

References

  • Aggarwal CC (2020) Linear algebra and optimization for machine learning. Springer

  • Bassett DS, Sporns O (2017) Network neuroscience. Nat Neurosci 20(3):353–364


  • Biswal B, Zerrin Yetkin F, Haughton VM, Hyde JS (1995) Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnet Reson Med 34(4):537–541


  • Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech Theory Exp 2008(10):10008


  • Brown R (2006) What is a brain state? Philos Psychol 19(6):729–742


  • Calhoun VD, Adali T (2012) Multisubject independent component analysis of fMRI: a decade of intrinsic networks, default mode, and neurodiagnostic discovery. IEEE Rev Biomed Eng 5:60–73


  • Calhoun VD, Adalı T, Pekar JJ (2004) A method for comparing group fMRI data using independent component analysis: application to visual, motor and visuomotor tasks. Magn Reson Imaging 22(9):1181–1191


  • Caviness VS, Meyer J, Makris N, Kennedy DN (1996) MRI-based topographic parcellation of human neocortex: an anatomically specified method with estimate of reliability. J Cogn Neurosci 8(6):566–587


  • Chen S, Langley J, Chen X, Hu X (2016) Spatiotemporal modeling of brain dynamics using resting-state functional magnetic resonance imaging with gaussian hidden Markov model. Brain Connect 6(4):326–334


  • de Vos F, Koini M, Schouten TM, Seiler S, van der Grond J, Lechner A, Schmidt R, de Rooij M, Rombouts SA (2018) A comprehensive analysis of resting state fMRI measures to classify individual patients with Alzheimer’s disease. NeuroImage 167:62–72


  • Dugué N, Perez A (2015) Directed louvain: maximizing modularity in directed networks. Ph.D. thesis, Université d’Orléans

  • Dupont P, Callut J, Dooms G, Monette J-N, Deville Y, Sainte B (2006) Relevant subgraph extraction from random walks in a graph. Universite Catholique de Louvain, UCL/INGI, Number RR, vol 7

  • Fonov V, Evans AC, Botteron K, Almli CR, McKinstry RC, Collins DL, Group BDC et al (2011) Unbiased average age-appropriate atlases for pediatric studies. NeuroImage 54(1):313–327


  • Girvan M, Newman ME (2002) Community structure in social and biological networks. Proc Natl Acad Sci 99(12):7821–7826


  • Guimera R, Amaral LAN (2005) Functional cartography of complex metabolic networks. Nature 433(7028):895–900


  • Hanteer O, Magnani M (2020) Unspoken assumptions in multi-layer modularity maximization. Sci Rep 10(1):1–15


  • Hindriks R, Adhikari MH, Murayama Y, Ganzetti M, Mantini D, Logothetis NK, Deco G (2016) Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI? NeuroImage 127:242–256


  • Hipólito I, Ramstead MJ, Convertino L, Bhat A, Friston K, Parr T (2021) Markov blankets in the brain. Neurosci Biobehav Rev 125:88–97


  • Horn JL (1965) A rationale and test for the number of factors in factor analysis. Psychometrika 30(2):179–185


  • Huang X, Chen D, Ren T, Wang D (2021) A survey of community detection methods in multilayer networks. Data Min Knowl Discov 35(1):1–45


  • Humphreys LG, Montanelli RG Jr (1975) An investigation of the parallel analysis criterion for determining the number of common factors. Multivar Behav Res 10(2):193–205


  • Jalili M (2016) Functional brain networks: does the choice of dependency estimator and binarization method matter? Sci Rep 6(1):1–12


  • Jaynes ET (1957) Information theory and statistical mechanics. Phys Rev 106(4):620


  • Jin D, Liu D, Yang B, Liu J, He D (2011) Ant colony optimization with a new random walk model for community detection in complex networks. Adv Complex Syst 14(05):795–815


  • Karahanoğlu FI, Van De Ville D (2017) Dynamics of large-scale fMRI networks: deconstruct brain activity to build better models of brain function. Curr Opin Biomed Eng 3:28–36


  • Kirchhoff M, Parr T, Palacios E, Friston K, Kiverstein J (2018) The Markov blankets of life: autonomy, active inference and the free energy principle. J R Soc Interface 15(138):20170792


  • Kokkonen S-M, Nikkinen J, Remes J, Kantola J, Starck T, Haapea M, Tuominen J, Tervonen O, Kiviniemi V (2009) Preoperative localization of the sensorimotor area using independent component analysis of resting-state fMRI. Magn Reson Imaging 27(6):733–740


  • Lambiotte R, Delvenne J-C, Barahona M (2008) Laplacian dynamics and multiscale modular structure in networks. arXiv:0812.1770

  • Leicht EA, Newman ME (2008) Community structure in directed networks. Phys Rev Lett 100(11):118703


  • Leskovec J, Faloutsos C (2006) Sampling from large graphs. In: Proceedings of the ACM SIGKDD, vol 12, pp 631–636

  • Li L, He X, Yan G (2018) Improved louvain method for directed networks. In: International conference on intelligent information processing. Springer, pp 192–203

  • Li W, Xu J, Huo J, Wang L, Gao Y, Luo J (2019) Distribution consistency based covariance metric networks for few-shot learning. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 8642–8649

  • Liu F, Choi D, Xie L, Roeder K (2018) Global spectral clustering in dynamic networks. Proc Natl Acad Sci 115(5):927–932


  • Luecken MD, Page MJ, Crosby AJ, Mason S, Reinert G, Deane CM (2018) Commwalker: correctly evaluating modules in molecular networks in light of annotation bias. Bioinformatics 34(6):994–1000


  • Ma H, Leng S, Aihara K, Lin W, Chen L (2018) Randomly distributed embedding making short-term high-dimensional data predictable. Proc Natl Acad Sci 115(43):9994–10002


  • Makris N, Goldstein JM, Kennedy D, Hodge SM, Caviness VS, Faraone SV, Tsuang MT, Seidman LJ (2006) Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophr Res 83(2–3):155–171


  • Martinet L-E, Kramer M, Viles W, Perkins L, Spencer E, Chu C, Cash S, Kolaczyk E (2020) Robust dynamic community detection with applications to human brain functional networks. Nat Commun 11(1):1–13


  • Menon V (2011) Large-scale brain networks and psychopathology: a unifying triple network model. Trends Cogn Sci 15(10):483–506


  • Mhuircheartaigh RN, Warnaby C, Rogers R, Jbabdi S, Tracey I (2013) Slow-wave activity saturation and thalamocortical isolation during propofol anesthesia in humans. Sci Transl Med 5(208):208ra148


  • Michael AM, Anderson M, Miller RL, Adalı T, Calhoun VD (2014) Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA. Front Syst Neurosci 8:106


  • Mitra A, Snyder AZ, Tagliazucchi E, Laufs H, Elison J, Emerson RW, Shen MD, Wolff JJ, Botteron KN, Dager S et al (2017) Resting-state fMRI in sleeping infants more closely resembles adult sleep than adult wakefulness. PLoS ONE 12(11):0188122


  • Newman ME (2006) Modularity and community structure in networks. Proc Natl Acad Sci 103(23):8577–8582


  • Nicosia V, Mangioni G, Carchiolo V, Malgeri M (2009) Extending the definition of modularity to directed graphs with overlapping communities. J Stat Mech Theory Exp 2009(03):03024


  • NiMARE (2019) https://nimare.readthedocs.io/en/latest/about.html. Accessed 10 Sept 2021

  • Papo D (2019) Gauging functional brain activity: from distinguishability to accessibility. Front Physiol 10:509


  • Passow S, Specht K, Adamsen TC, Biermann M, Brekke N, Craven AR, Ersland L, Grüner R, Kleven-Madsen N, Kvernenes O-H et al (2015) Default-mode network functional connectivity is closely related to metabolic activity. Hum Brain Mapp 36(6):2027–2038


  • Poldrack RA, Kittur A, Kalar D, Miller E, Seppa C, Gil Y, Parker DS, Sabb FW, Bilder RM (2011) The cognitive atlas: toward a knowledge foundation for cognitive neuroscience. Front Neuroinform 5:17

  • Pullon RM, Yan L, Sleigh JW, Warnaby CE (2020) Granger causality of the electroencephalogram reveals abrupt global loss of cortical information flow during propofol-induced loss of responsiveness. Anesthesiology 133(4):774–786

  • Ran Q, Jamoulle T, Schaeverbeke J, Meersmans K, Vandenberghe R, Dupont P (2020) Reproducibility of graph measures at the subject level using resting-state fMRI. Brain Behav 10(8):2336–2351

  • Rand WM (1971) Objective criteria for the evaluation of clustering methods. J Am Stat Assoc 66(336):846–850

  • Reichardt J, Bornholdt S (2006) Statistical mechanics of community detection. Phys Rev E 74(1):016110

  • Rosazza C, Minati L (2011) Resting-state brain networks: literature review and clinical applications. Neurol Sci 32(5):773–785

  • Rosvall M, Bergstrom CT (2008) Maps of random walks on complex networks reveal community structure. Proc Natl Acad Sci 105(4):1118–1123

  • Rosvall M, Axelsson D, Bergstrom CT (2009) The map equation. Eur Phys J Special Top 178(1):13–23

  • Ryali S, Supekar K, Chen T, Kochalka J, Cai W, Nicholas J, Padmanabhan A, Menon V (2016) Temporal dynamics and developmental maturation of salience, default and central-executive network interactions revealed by variational Bayes hidden Markov modeling. PLoS Comput Biol 12(12):e1005138

  • Sämann PG, Wehrle R, Hoehn D, Spoormaker VI, Peters H, Tully C, Holsboer F, Czisch M (2011) Development of the brain’s default mode network from wakefulness to slow wave sleep. Cereb Cortex 21(9):2082–2093

  • Shen X, Meyer FG (2008) Low-dimensional embedding of fMRI datasets. NeuroImage 41(3):886–902

  • Shine JM, Bissett PG, Bell PT, Koyejo O, Balsters JH, Gorgolewski KJ, Moodie CA, Poldrack RA (2016) The dynamics of functional brain networks: integrated network states during cognitive task performance. Neuron 92(2):544–554

  • Shulman GL, Fiez JA, Corbetta M, Buckner RL, Miezin FM, Raichle ME (1997) Common blood flow changes across visual tasks: II. Decreases in cerebral cortex. J Cogn Neurosci 9(5):648–663

  • Sigmund K, Hauert C, Nowak MA (2001) Reward and punishment. Proc Natl Acad Sci 98(19):10757–10762

  • Smallwood J, Brown K, Baird B, Schooler JW (2012) Cooperation between the default mode network and the frontal-parietal network in the production of an internal train of thought. Brain Res 1428:60–70

  • Smith SM, Fox PT, Miller KL, Glahn DC, Fox PM, Mackay CE, Filippini N, Watkins KE, Toro R, Laird AR et al (2009) Correspondence of the brain’s functional architecture during activation and rest. Proc Natl Acad Sci 106(31):13040–13045

  • Smith K, Abásolo D, Escudero J (2017) Accounting for the complex hierarchical topology of EEG phase-based functional connectivity in network binarisation. PLoS ONE 12(10):e0186164

  • Sridharan D, Levitin DJ, Menon V (2008) A critical role for the right fronto-insular cortex in switching between central-executive and default-mode networks. Proc Natl Acad Sci 105(34):12569–12574

  • Stevner A, Vidaurre D, Cabral J, Rapuano K, Nielsen SFV, Tagliazucchi E, Laufs H, Vuust P, Deco G, Woolrich M et al (2019) Discovery of key whole-brain transitions and dynamics during human wakefulness and non-REM sleep. Nat Commun 10(1):1035

  • Suk H-I, Wee C-Y, Lee S-W, Shen D (2016) State-space model with deep learning for functional dynamics estimation in resting-state fMRI. NeuroImage 129:292–307

  • Thompson GJ (2018) Neural and metabolic basis of dynamic resting state fMRI. NeuroImage 180:448–462

  • Ting C-M, Samdin SB, Tang M, Ombao H (2020) Detecting dynamic community structure in functional brain networks across individuals: a multilayer approach. IEEE Trans Med Imaging 40(2):468–480

  • Tomasi DG, Shokri-Kojori E, Wiers CE, Kim SW, Demiral ŞB, Cabrera EA, Lindgren E, Miller G, Wang G-J, Volkow ND (2017) Dynamic brain glucose metabolism identifies anti-correlated cortical-cerebellar networks at rest. J Cereb Blood Flow Metab 37(12):3659–3670

  • Vidaurre D, Quinn AJ, Baker AP, Dupret D, Tejero-Cantero A, Woolrich MW (2016) Spectrally resolved fast transient brain states in electrophysiological data. NeuroImage 126:81–95

  • Vidaurre D, Smith SM, Woolrich MW (2017) Brain network dynamics are hierarchically organized in time. Proc Natl Acad Sci 114(48):12827–12832

  • Vidaurre D, Abeysuriya R, Becker R, Quinn AJ, Alfaro-Almagro F, Smith SM, Woolrich MW (2018) Discovering dynamic brain networks from big data in rest and task. NeuroImage 180:646–656

  • Ward JH Jr (1963) Hierarchical grouping to optimize an objective function. J Am Stat Assoc 58(301):236–244

  • Woolrich MW, Jbabdi S, Patenaude B, Chappell M, Makni S, Behrens T, Beckmann C, Jenkinson M, Smith SM (2009) Bayesian analysis of neuroimaging data in FSL. NeuroImage 45(1):173–186

  • Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, Wager TD (2011) Large-scale automated synthesis of human functional neuroimaging data. Nat Methods 8(8):665–670

Acknowledgements

We would like to thank the reviewers and editors for their helpful suggestions in restructuring and correcting this manuscript. We are grateful to the attendees and organisers of the Communities in Networks conference, where this work was originally presented, for the opportunity to contribute to this Special Issue. We are also grateful to Mark Woolrich, Angus Stevner, and the members of the Oxford Anaesthesia Neuroimaging and Protein Informatics Groups, for their insightful questions and comments.

Funding

JBW is supported by the Commonwealth Scholarship Commission UK, the Ernest Oppenheimer Memorial Trust (South Africa) and the Human Brain Project, Specific Grant Agreement 3 (award reference 945539); CEW is funded by the MRC Development Pathway Funding Scheme (award reference MR/R006423/1); and GDR is partially supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grants EP/R018472/1 and EP/T018445/1. This research is funded in part by the Wellcome Trust (grant 203139/Z/16/Z). For the purpose of open access, the authors have applied a CC-BY public copyright license to any Author Accepted Manuscript version arising from this submission. Data collection was funded by the National Institute for Academic Anaesthesia and the International Anaesthesia Research Society.

Author information

Contributions

JW prepared the draft manuscript and developed the analysis methods. CW, CD and GR edited the manuscript. CW provided the raw data and the interpretation of neuroscientific results. CD and GR supervised the development of the analytical methods. GR contributed to the interpretation of model results. All authors read and approved the final manuscript.

Corresponding author

Correspondence to James B. Wilsenach.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Local Research Ethics Committee (Oxford Research Ethics Committee B, Oxford, UK) and data collection was performed between October and December 2009. The study was performed in line with the Declaration of Helsinki and all subjects gave written informed consent.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Supplementary Information.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Wilsenach, J.B., Warnaby, C.E., Deane, C.M. et al. Ranking of communities in multiplex spatiotemporal models of brain dynamics. Appl Netw Sci 7, 15 (2022). https://doi.org/10.1007/s41109-022-00454-2


Keywords

  • Community ranking
  • Generative models
  • Model selection
  • Multiplex networks
  • Network neuroscience
  • Spatiotemporal networks