 Research
 Open Access
Sparse matrix computations for dynamic network centrality
 Francesca Arrigo^{1} and
 Desmond J. Higham^{1}
 Received: 28 February 2017
 Accepted: 1 June 2017
 Published: 24 June 2017
Abstract
Time-sliced networks describing human–human digital interactions are typically large and sparse. This is the case, for example, with pairwise connectivity describing social media, voice call or physical proximity, when measured over seconds, minutes or hours. However, if we wish to quantify and compare the overall time-dependent centrality of the network nodes, then we should account for the global flow of information through time. Because the time-dependent edge structure typically allows information to diffuse widely around the network, a natural summary of sparse but dynamic pairwise interactions will generally take the form of a large dense matrix. For this reason, computing nodal centralities for a time-dependent network can be extremely expensive in terms of both computation and storage; much more so than for a single, static network. In this work, we focus on the case of dynamic communicability, which leads to broadcast and receive centrality measures. We derive a new algorithm for computing time-dependent centrality that works with a sparsified version of the dynamic communicability matrix. In this way, the computation and storage requirements are reduced to those of a sparse, static network at each time point. The new algorithm is justified from first principles and then tested on a large-scale data set. We find that even with very stringent sparsity requirements (retaining no more than ten times the number of nonzeros in the individual time slices), the algorithm accurately reproduces the list of highly central nodes given by the underlying full system. This allows us to capture centrality over time with a minimal level of storage and with a cost that scales only linearly with the number of time points. We also describe and test three variants of the proposed algorithm that require fewer parameters and achieve a further reduction in the computational cost.
Keywords
 Dynamic network
 Sparsification
 Centrality
 Katz centrality
 Social network analysis
Introduction
In network science, centrality measures assign to each node a value that summarizes some aspect of its relative importance. Such measures arose in the social sciences, but have now become very widely used by researchers who wish to summarize important features of large, complex networks (Estrada 2010; Newman 2010; Wasserman and Faust 1994). Because matrix representations of networks are typically sparse, and because centrality measures typically involve the solution of linear systems or eigenvalue problems, it is feasible to compute centrality measures on a current desktop computer for networks with, say, a number of nodes in the millions.
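As a point of reference, static Katz centrality already has this linear-system structure: scores follow from one sparse solve, with no need to form a dense inverse. A minimal Python sketch (the graph and the choice of α are illustrative, not taken from the paper):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small illustrative directed graph: a_ij = 1 if there is an edge i -> j.
A = sp.csr_matrix(np.array([[0, 1, 1, 0],
                            [0, 0, 1, 0],
                            [0, 0, 0, 1],
                            [1, 0, 0, 0]], dtype=float))

rho = max(abs(np.linalg.eigvals(A.toarray())))  # spectral radius (dense is fine for a tiny example)
alpha = 0.5 / rho                               # any 0 < alpha < 1/rho is admissible

# Katz-style broadcast scores: solve the sparse system (I - alpha*A) x = 1,
# so that x = sum_p (alpha*A)^p 1 counts outgoing walks, downweighted by length.
n = A.shape[0]
x = spla.spsolve(sp.eye(n, format="csc") - alpha * A, np.ones(n))
```

Node 0, which initiates the most walks here, receives the largest score.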

dynamic broadcast centrality takes large values for nodes that are effective at distributing information,

dynamic receive centrality takes large values for nodes that are effective at gathering information.
In a case study on Twitter data, this approach was seen to be successful, in the sense of correlating well with the independent views of social media experts (Laflin et al. 2013). It was also found to outperform the crude alternative of simply aggregating all edges into a single static network that forgets the time-ordering of the interactions; see (Mantzaris and Higham 2016) for further discussion. Tests in (Chen et al. 2016; Mantzaris and Higham 2013) also showed that dynamic broadcast centrality can be effective at quantifying the potential for the spread of disease across time-ordered interactions. In (Fenu and Higham 2017) it is shown how to perform dynamic communicability computations via a large block matrix that is amenable to modern iterative techniques. A weaker version of dynamic broadcast communicability, essentially given by applying the sign function, was proposed in (Lentz et al. 2013) to quantify what those authors term accessibility.
As we explain in the next section, the computation of dynamic centrality can be expensive in terms of both storage and computational effort, as a result of inevitable matrix fill-in as temporal information accumulates. Our overall aim here is to address this issue by deriving a new algorithm that delivers good approximations to the original dynamic broadcast centrality measure while retaining the benefits of the sparsity present in the time slices.
We note that other approaches to computation of node centrality for time-dependent networks have been put forward. For example, (Tang et al. 2009; 2010a, b) made use of paths rather than walks, which, for our purposes, leads to an infeasibly expensive algorithm. In (Taylor et al. 2017) a block-matrix approach was suggested which allows centrality measures for static networks to be applied. However, as mentioned in (Mantzaris and Higham 2016), that formulation does not fully respect the arrow of time.
Compared with the earlier conference paper (Arrigo and Higham 2017), this article includes further computational results and accompanying discussions, and in particular has a new section (“Further reduction” section) that shows how performance can be improved by reducing the number of parameters.
Background and notation
In (Grindrod et al. 2011) the concept of a dynamic walk of length p was introduced to extend to the temporal case the well-known concept of a walk of length p in static networks. Loosely, we have a (possibly repeated) sequence of p+1 nodes connected by edges that appear in a suitable order. More precisely, a dynamic walk of length p from node i _{1} to node i _{ p+1} consists of a sequence of nodes i _{1},i _{2},…,i _{ p+1} and a sequence of times \(t_{r_{1}}\leq t_{r_{2}}\leq \cdots \leq t_{r_{p}}\) such that \(a^{[r_{m}]}_{i_{m}i_{m+1}} \neq 0\) for m=1,2,…,p. We stress that more than one edge can share a time slot, and that time slots must be ordered but do not need to be consecutive.
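For concreteness, the quantities referred to below as (1a) and (1b) are the resolvent product defining the dynamic communicability matrix of Grindrod et al. (2011) and its one-slice recurrence, which in the present notation read

\[
Q^{[k]} = \left(I - \alpha A^{[0]}\right)^{-1}\left(I - \alpha A^{[1]}\right)^{-1}\cdots\left(I - \alpha A^{[k]}\right)^{-1},
\qquad
Q^{[k]} = Q^{[k-1]}\left(I - \alpha A^{[k]}\right)^{-1},
\]

for k=0,1,…,M.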
where Q ^{[−1]}=I is the identity matrix of order n, 0<α<1/ρ ^{∗}, and \(\rho ^{*}=\max \limits _{k=0:M}\left \{\rho \left (A^{[k]}\right)\right \}\) is the largest spectral radius among the spectral radii of the matrices {A ^{[k]}}. Here the free parameter α plays the same role as in the classical Katz centrality measure for static networks (Estrada 2010; Katz 1953; Newman 2010). For simplicity, our notation does not explicitly record the dependence of Q upon α.
To avoid overflow in the computations, a normalization step Q↦Q/∥Q∥ should follow each iteration in (1b). Throughout this work we use the Euclidean norm.
The requirement α<1/ρ ^{∗} ensures that the resolvents in (1a) exist and can be expanded as \(\left (I-\alpha A^{[k]}\right)^{-1} = \sum _{p=0}^{\infty } \left (\alpha A^{[k]}\right)^{p}\). It follows that the entries of Q ^{[k]} provide a weighted count of the dynamic walks between any two nodes in the network using the ordered sequence of matrices A ^{[0]},A ^{[1]},…,A ^{[k]}, weighting walks of length p by a factor α ^{ p }. Hence, (Q ^{[k]})_{ ij } is an overall measure of the ability of node i to send messages to node j.
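The fill-in phenomenon is easy to reproduce. The sketch below (random time slices; sizes and parameters are illustrative, with α chosen small enough for these slices) runs iteration (1b) with dense linear algebra and measures how dense Q^{[k]} becomes even though every individual slice is sparse:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, alpha = 200, 50, 0.1

# Sparse random time slices: roughly two directed interactions per node per slice.
slices = [(rng.random((n, n)) < 2.0 / n).astype(float) for _ in range(M + 1)]

Q = np.eye(n)
for A in slices:
    Q = Q @ np.linalg.inv(np.eye(n) - alpha * A)   # iteration (1b), dense resolvent
    Q /= np.linalg.norm(Q, 2)                      # normalization step (Euclidean norm)

slice_density = np.mean([np.count_nonzero(A) for A in slices]) / n**2
q_density = np.count_nonzero(Q) / n**2             # Q fills in despite sparse slices
```

With these sizes, each slice has around 1% nonzeros while Q^{[M]} is essentially full.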

given a summary of how much information is flowing into each node, we can propagate this information forward when new edges emerge: receive centrality cares about where the information terminates, but

a summary of how much information is flowing out of each node cannot be straightforwardly updated when new edges emerge: broadcast centrality cares about where the information originates.
Our focus here is on the natural setting where data is processed sequentially, with the centrality scores being updated as each new time slice A ^{[k]} arrives. As confirmed in “Numerical tests” section on some real data sets, we then face a fundamental issue with the use of the dynamic communicability matrix: although the time slices are typically sparse, Q ^{[k]} generally evolves into a dense matrix. At this stage, computing dynamic communicability from (1b) requires us to store a full O(n ^{2}) matrix and solve at each subsequent time point a corresponding full linear system. In the next section, we therefore develop and justify an approximation where matrix fillin is controlled so that the benefits of sparse matrix storage and computation are recovered.
Sparsification
with \(\widehat {Q}^{[-1]} = I\) and α<1/ρ ^{∗}, as before. As discussed in (Grindrod et al. 2011), this matrix product can be interpreted in terms of network combinatorics; at each time step a dynamic traversal can either wait, as described by the identity matrix I, or take a current edge, as described by the latest adjacency matrix, A ^{[k]}. In the latter case, the length of the walk (i.e., the number of edges used) has increased by one, and thus we multiply the corresponding matrix by α. An alternative interpretation is that we are using a first order Taylor approximation for each of the resolvents appearing in (1a). This simplification is likely to be reasonable when either (a) α is chosen to be small, so that short walks are favoured, or (b) the powers of A ^{[k]} do not grow rapidly with k (which is typically the case for sparse matrices).
where \(\widehat {Q}^{[-1]} = I\) and for any nonnegative matrix C, the matrix \(\left \lfloor C \right \rfloor _{\theta _{k}}\) arises from setting to zero all entries where c _{ ij }≤θ _{ k }.
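A minimal sketch of this thresholded iteration with scipy.sparse. The sizes, the fixed choice of θ, and the function name are illustrative (in the text θ_k is chosen adaptively, and the normalization below uses the Frobenius norm as a stand-in, since a sparse 2-norm is not directly available):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def floor_theta(C, theta):
    """Threshold operator: zero out all entries c_ij <= theta of a nonnegative sparse matrix."""
    C = C.tocoo()
    keep = C.data > theta
    return sp.csr_matrix((C.data[keep], (C.row[keep], C.col[keep])), shape=C.shape)

rng = np.random.default_rng(1)
n, M, alpha, theta = 100, 20, 0.05, 1e-4          # illustrative values
slices = [sp.random(n, n, density=0.02, random_state=rng, format="csr")
          for _ in range(M + 1)]

Qhat = sp.eye(n, format="csr")
for A in slices:
    Qhat = floor_theta(Qhat @ (sp.eye(n) + alpha * A), theta)  # sparsified update
    Qhat = Qhat / spla.norm(Qhat)                 # normalization (Frobenius, illustrative)
```

All intermediate matrices stay sparse, so storage and work per step remain comparable to those of a single time slice.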
Remark 1
The matrices \(\left \{\widehat {Q}^{[k]}\right \}_{k=0}^{M}\) are nonnegative by construction.
A little twist
From a network science perspective, the approach just presented has a strong limitation. Imagine a user i of Twitter who remains inactive for a long time after each tweet. After such inactivity, the thresholding may zero out all entries in the ith row of one of the matrices \(\widehat {Q}^{[k]}\). From that time, the ith row of the matrices appearing in (3) will always be zero, and no subsequent activity of node i will be registered by this approach.
The matrix \({\mathcal {A}}^{[k]}\) keeps track of those edges that appear at step k and would otherwise get lost. Indeed, the matrix product W ^{[k]} A ^{[k]} returns a matrix that has nonzero entries (if any) only in the rows corresponding to those nodes that have either been inactive until step k or have broadcast very little information (which was thus thresholded in a previous iteration). The penalisation by α is applied because we are taking one hop in the network. Finally, the multiplication by m _{ k } guards against a poor choice of the parameter α compromising the results: the entries of \({\mathcal {A}}^{[k]}\) may otherwise be too large with respect to those appearing in \(\left \lfloor \widehat {Q}^{[k-1]}\left (I+\alpha A^{[k]}\right)\right \rfloor _{\theta _{k}}\), leading to a complete reshaping of the rankings. We refer the reader to “Numerical tests” section for an example of this issue.
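One step of the resulting iteration can be sketched as follows. The helper names are mine, and the particular choice of m_k as the smallest surviving entry of the thresholded matrix is an assumption made for illustration, in the spirit of the scaling discussed above:

```python
import numpy as np
import scipy.sparse as sp

def floor_theta(C, theta):
    """Zero out all entries c_ij <= theta of a nonnegative sparse matrix."""
    C = C.tocoo()
    keep = C.data > theta
    return sp.csr_matrix((C.data[keep], (C.row[keep], C.col[keep])), shape=C.shape)

def twist_update(Qhat, A, alpha, theta):
    """One step of the 'twist': threshold, then re-inject the current edges
    broadcast by nodes whose rows were zeroed out, scaled so they cannot dominate."""
    n = Qhat.shape[0]
    T = floor_theta(Qhat @ (sp.eye(n) + alpha * A), theta)
    zero_rows = np.asarray(T.sum(axis=1)).ravel() == 0   # rows lost to thresholding
    W = sp.diags(zero_rows.astype(float))                # diagonal indicator matrix
    cal_A = alpha * (W @ A)                              # edges that would otherwise be lost
    m = T.data.min() if T.nnz > 0 else 1.0               # assumed scaling m_k
    return (T + m * cal_A).tocsr()

# Node 2 has broadcast so little that thresholding wipes out its row, yet it
# sends a message at this step; the twist keeps that edge alive.
Qhat = sp.csr_matrix(np.diag([1.0, 1.0, 1e-6]))
A = sp.csr_matrix(([1.0], ([2], [0])), shape=(3, 3))
Qnew = twist_update(Qhat, A, alpha=0.1, theta=1e-3)
```

Without the correction term, row 2 of Qnew would be identically zero and node 2 would remain invisible from then on.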
Remark 2
It is possible for the contribution added by \(m_{k}{\mathcal {A}}^{[k]}\) to be zero. This happens when the zero rows in \(\left \lfloor \widehat {Q}^{[k-1]}\left (I+\alpha A^{[k]}\right)\right \rfloor _{\theta _{k}}\) correspond to nodes that are not broadcasting information at step k.
Remark 3
Note that if A ^{[k]}=0 for some k, then \(\widehat {Q}^{[k]} = \widehat {Q}^{[k-1]}\), just as Q ^{[k]}=Q ^{[k−1]}.
On the thresholding parameters

the value of α ^{ p } dominates the contribution given by the products of the adjacency matrices, i.e., there are not too many walks of length p between the two nodes under consideration;

the information has not moved from a certain node for a long time and the normalization step has made the corresponding contribution smaller than the other entries.
In both cases, we are dismissing information that has little potential, as it is not diffused much. Clearly, an over-stringent selection of the parameters θ _{ k } may lead to an excessive penalization of these two types of behaviour. Our strategy is to make an initial choice for the maximum number of nonzeros that we will allow in the matrices \(\widehat {Q}^{[k]}\), for k=0,1,…,M. Then, as the iteration proceeds, the thresholding value θ _{ k } is chosen so as to make \(\left \lfloor \widehat {Q}^{[k-1]}\left (I + \alpha A^{[k]}\right)\right \rfloor _{\theta _{k}}\) have approximately this desired level of sparsity.
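The budget-driven choice of θ_k can be sketched as follows: given the current product and a nonzero budget, take θ_k to be the next entry below the top values, so that thresholding keeps roughly the budgeted number of nonzeros. The function name and tie handling here are illustrative:

```python
import numpy as np
import scipy.sparse as sp

def adaptive_theta(C, max_nnz):
    """Smallest theta such that keeping entries strictly greater than theta
    leaves at most max_nnz nonzeros in the nonnegative sparse matrix C."""
    if C.nnz <= max_nnz:
        return 0.0                       # already sparse enough: drop nothing
    data = np.sort(C.tocoo().data)       # stored entries in ascending order
    return data[-(max_nnz + 1)]          # cutoff just below the top max_nnz values

rng = np.random.default_rng(2)
C = sp.random(50, 50, density=0.5, random_state=rng, format="csr")
theta = adaptive_theta(C, 100)
kept = int((C.data > theta).sum())       # about 100 entries survive the threshold
```

Sorting the stored entries costs O(nnz log nnz) per step, which is negligible next to the sparse matrix product itself.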
We point out that the maximum number of nonzeros one wants to allow has to be at least n+nnz(A ^{[0]}), where nnz(A ^{[0]}) is the number of nonzeros in the matrix A ^{[0]}; consequently, θ _{0}<α. Indeed, if this were not the case, then θ _{ k }≥α for all k and hence \(\widehat {Q}^{[k]} = I\) for all k.
Cost comparison

We have reduced storage requirements by a factor of n.

We have reduced the dominant computational task at each time step from solving a full linear system to solving a sparse linear system. For general complex networks with no exploitable structure, it is difficult to be precise about the resulting gain, but we note that if a standard iterative scheme is used, then the cost of each matrixvector multiplication is reduced by a factor of n.
Comparing top K lists
The main goal of this work is to match the broadcast ranking of the nodes in an evolving network using a sparse approximation to the dynamic communicability matrix. As usual in network science, we are not interested in matching exactly the rankings of all nodes in the network, but rather in accurately capturing the top K≪n most influential broadcasters. Although there is no perfect way to summarize and compare rankings, generic correlation coefficients such as Pearson’s correlation coefficient or Kendall’s tau have a major drawback in this context: they treat entire vectors, and hence account for all network nodes.
where Δ is the symmetric difference operator between two sets and |S| denotes the cardinality of the set S. When the sequences contained in x and y are completely different, the intersection similarity between the two is maximal and equals 1. On the other hand, if isim_{ K }(x,y)=0 for all K, then the two lists are identical.
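In code, the intersection similarity of two ranked lists follows directly from this definition (a sketch; x and y hold node identifiers in decreasing order of centrality):

```python
def isim(x, y, K):
    """Top-K intersection similarity (Fagin et al. 2003): the average, over
    prefix lengths k = 1..K, of the symmetric difference of the two top-k
    sets, normalized by 2k.  0 means identical prefixes, 1 means disjoint."""
    total = 0.0
    for k in range(1, K + 1):
        sym_diff = set(x[:k]) ^ set(y[:k])
        total += len(sym_diff) / (2 * k)
    return total / K

# Example: two rankings that agree except for a swap in positions 2 and 3.
s = isim([5, 8, 102, 26], [5, 102, 8, 26], 2)   # = (0 + 2/4) / 2 = 0.25
```

Because each prefix is compared as a set, a single swap near the top contributes only to the prefixes that separate the swapped items.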
Relationship with the Jaccard index
Numerical tests
Our tests were performed on three different datasets with various values of the parameter α. The dataset Enron is available at (Leskovec 2014) and contains daily information over 1138 days starting 11 May 1999 representing emails between 151 Enron employees, including to, cc, and bcc. Many of the directed adjacency matrices are empty, meaning that there are days during which no emails are sent. The largest spectral radius is ρ ^{∗}=4.17, thus the upper limit for α is 0.24.
The undirected Real dataset is from (Eagle and Pentland 2006). Here, we have 106 nodes representing people interacting over 365 days. On each of the 365 days, an interaction occurs when two nodes communicate by telephone at least once. Here ρ ^{∗}=8.22 and thus we have to impose α<0.12.
Finally, the dataset FBsoc (Opsahl 2009; Opsahl and Panzarasa 2009) represents a Facebook-like social network originating from an online community of students at the University of California. The directed dataset contains the 1899 users who sent or received at least one message over a period of 191 days starting 19 April 2004. The largest spectral radius is ρ ^{∗}=7.59 and hence α<0.13.
In all tests, unless otherwise specified we allowed for a number of nonzeros proportional to \(N = c\overline {n}\), where \(\overline {n} = n + \frac {1}{M+1}\sum _{k=0}^{M}{\texttt {nnz}}\left (A^{[k]}\right)\) and c=10. This is motivated by our aim to work only with matrices whose sparsity level is compatible with that of the individual network time slices.
All experiments were performed using MATLAB Version 9.1.0.441655 (R2016b) on an HP EliteDesk running Scientific Linux 7.3 (Nitrogen), with a 3.2 GHz Intel Core i7 processor and 4 GB of RAM.
Illustrative test with Enron dataset
Before testing the performance of (4), in this subsection we discuss the effect of including the multiplication by m _{ k }. In “Sparsification” section we argue that setting m _{ k }≡1 for all k=0,1,…,M in (4) may lead to poor results. Clearly, this is not always the case but, as we will see here, this choice, combined with an unfortunate choice of the down-weighting parameter α, may result in a complete misplacement of the top ranked broadcasters in the network.
These results show that when m _{ k }≡1 the intersection similarity and the value of ℓ _{ K } between the two vectors can be maximum even when comparing only a few top ranked nodes for α as small as 0.5/ρ ^{∗}. The right hand plots in the two figures show how an adaptive choice of m _{ k } can work successfully over a wide range of α choices.
Enron dataset
Enron: Top 10 ranked nodes: exact, approximate and with aggregate outdegree
Q ^{[M]} 1  48  67  147  73  13  50  137  49  9  139 
\(\widehat {Q}^{[M]}\mathbf {1}\)  48  67  147  73  13  50  137  49  9  139 
Outdegree  67  50  141  13  48  69  107  147  73  70 
Enron: intersection similarity between the top K=1,2,…,20 ranked nodes in Q ^{[M]} 1 and \(\widehat {Q}^{[M]}\mathbf {1}\)
K  1  2  3  4  5  6  7  8  9  10 
isim_{ K }  0  0  0  0  0  0  0  0  0  0 
K  11  12  13  14  15  16  17  18  19  20 
isim_{ K }  0  0.01  0.02  0.03  0.03  0.03  0.03  0.03  0.03  0.03 
Enron: evolution of \(\ell _{K}\left (Q^{[M]}\mathbf {1},\widehat {Q}^{[M]}\mathbf {1}\right)\) for K=2,3,…,20
K  1  2  3  4  5  6  7  8  9  10 
ℓ _{ K }    0  0  0  0  0  0  0  0  0 
K  11  12  13  14  15  16  17  18  19  20 
ℓ _{ K }  0  0.08  0.15  0.14  0.07  0  0.06  0  0.05  0 
Real dataset
We see in Fig. 6 that the highly ranked nodes are well approximated. Even though the original dynamic communicability matrix is full, we see from the zero rows in Fig. 8 that many nodes have no activity recorded after our approximation method is applied. Overall, \(\widehat {Q}^{[M]}\) has 2583 nonzeros, corresponding to 23% sparsity.
Real: Top 10 ranked nodes: exact, approximate and with aggregate outdegree
Q ^{[M]} 1  5  102  8  26  49  46  3  4  1  30 
\(\widehat {Q}^{[M]}\mathbf {1}\)  5  8  102  26  49  46  3  4  1  30 
outdegree  5  8  4  2  3  20  40  6  23  53 
Real: intersection similarity between the top K=1,2,…,20 ranked nodes in Q ^{[M]} 1 and \(\widehat {Q}^{[M]}\mathbf {1}\)
K  1  2  3  4  5  6  7  8  9  10 
isim_{ K }  0  0.25  0.17  0.13  0.10  0.08  0.07  0.06  0.06  0.05 
K  11  12  13  14  15  16  17  18  19  20 
isim_{ K }  0.05  0.04  0.04  0.04  0.04  0.04  0.03  0.03  0.03  0.03 
Real: evolution of \(\ell _{K}\left (Q^{[M]}\mathbf {1},\widehat {Q}^{[M]}\mathbf {1}\right)\) for K=2,3,…,20
K  1  2  3  4  5  6  7  8  9  10 
ℓ _{ K }    0.50  0  0  0  0  0  0  0  0 
K  11  12  13  14  15  16  17  18  19  20 
ℓ _{ K }  0  0  0  0.07  0  0  0  0  0  0.05 
FBsoc dataset
We note in Fig. 10 that, at least visually, the ranking of highly central nodes seems less successful than for the previous two data sets.
FBsoc: Top 10 ranked nodes: exact, approximate and with aggregate outdegree
Q ^{[M]} 1  9  103  212  41  263  321  400  372  281  36 
\(\widehat {Q}^{[M]}\mathbf {1}\)  9  103  41  212  400  321  36  372  44  713 
outdegree  32  598  372  1624  42  103  713  638  495  617 
FBsoc: intersection similarity between the top K=1,2,…,20 ranked nodes in Q ^{[M]} 1 and \(\widehat {Q}^{[M]}\mathbf {1}\)
K  1  2  3  4  5  6  7  8  9  10 
isim_{ K }  0  0  0.11  0.08  0.11  0.12  0.12  0.12  0.13  0.14 
K  11  12  13  14  15  16  17  18  19  20 
isim_{ K }  0.14  0.14  0.15  0.15  0.15  0.16  0.16  0.17  0.17  0.17 
FBsoc: evolution of \(\ell _{K}\left (Q^{[M]}\mathbf {1},\widehat {Q}^{[M]}\mathbf {1}\right)\) for K=2,3,…,20
K  1  2  3  4  5  6  7  8  9  10 
ℓ _{ K }    0  0.33  0  0.20  0.17  0.14  0.13  0.22  0.20 
K  11  12  13  14  15  16  17  18  19  20 
ℓ _{ K }  0.09  0.17  0.23  0.22  0.20  0.19  0.24  0.28  0.26  0.20 
Further reduction
followed by normalization, where \(\widehat {Q}^{[-1]}= I\) and \({\mathcal {A}}^{[k]} = \alpha W^{[k]}A^{[k]}\) as before.
followed by normalization, where \(\widehat {Q}^{[-1]}= I\) and \({\mathcal {A}}^{[k]} = \alpha W^{[k]}A^{[k]}\) as before. It is easy to see that the parameter θ cannot exceed the value of α. Indeed, if θ≥α, then \(\widehat {Q}^{[k]} = I\) for all k=0,1,…,M, since \(\widehat {Q}^{[0]} = I\) and thus \(\left \lfloor \widehat {Q}^{[k-1]}\left (I+\alpha A^{[k]}\right)\right \rfloor _{\theta } = I\) and W ^{[k]}=0 for all k=1,2,…,M. Therefore, θ<α.
returning an iteration that only requires the selection of one, fixed, thresholding parameter θ<α.
We tested the performance of the methods just described on the datasets Enron and Real. Before discussing the ranking performance of these variants of (4), we list the timings required for their computation. For the first dataset, computing the approximating matrices took 1.98 s for (6a), 1.97 s for (6b) and 1.83 s for (6c). For the Real dataset, the computations took 0.67 s for (6a), 0.32 s for (6b) and 0.28 s for (6c). As one would expect, the time required decreases with the number (and type) of parameters that need to be estimated at each iteration.
Overall, the method described in (6a) seems to be the one performing best, since it returns a matrix that has the same level of sparsity as the one obtained using (4), but the resulting ranking vector better matches Q ^{[M]} 1.
If we now look at the results obtained for the network Real, we observe again that the best performance, in terms of the intersection similarity between the top K entries of \(\widehat {Q}^{[M]}\mathbf {1}\) and Q ^{[M]} 1, is achieved using iterations (6b) and (6c); the iteration described in (6a) performs better than the original one. Turning to the level of sparsity of the final approximation matrices, \(\widehat {Q}^{[M]}\) computed using (6a) has 2581 nonzeros, corresponding to 23% sparsity, while both (6b) and (6c) return matrices with 4243 nonzeros, corresponding to 40.4% sparsity. Thus, as before, these two latter methods return a better approximation to the ranking vector because they retain more information in the matrices used in the computations.
Overall, a good compromise seems to be the use of (6a), which returns comparable results to those obtained by the original iteration while retaining the same level of sparsity.
Conclusions
Time-dependency adds an extra dimension to network science computations, potentially causing a dramatic increase in both storage requirements and computation time. In the case of Katz-style centrality measures, which are based on the solution of linear algebraic systems, allowing for the arrow of time leads naturally to full matrices that keep track of all possible routes for the flow of information. Such a build-up of intermediate data can make large-scale computations infeasible. In this work, we derived a sparsification technique that delivers accurate approximations to the full-matrix centrality rankings, while retaining the level of sparsity present in the network time slices. With the new algorithm, as we move forward in time the storage cost remains fixed and the computational cost scales linearly, so the overall task is equivalent to solving a single Katz-style problem at each new time point. We also proposed three variants of this algorithm that require the computation of a smaller number of parameters. In particular, one of these variants requires only one parameter and returns rankings that are comparable with those provided by the original algorithm.
Declarations
Funding
This study used pre-existing data that is publicly available from http://snap.stanford.edu/data/ and http://toreopsahl.com/datasets/#online_social_network. The work of the authors was supported by the Engineering and Physical Sciences Research Council under grant EP/M00158X/1.
Authors’ contributions
Both authors contributed equally. Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Acar, E, Dunlavy DM, Kolda TG (2009) Link prediction on evolving data using matrix and tensor factorizations. In: ICDMW’09: Proceedings of the 2009 IEEE International Conference on Data Mining Workshop, 262–269. doi:10.1109/ICDMW.2009.54.
 Achlioptas, D, Karnin ZS, Liberty E (2013) Near-optimal entrywise sampling for data matrices. In: Burges CJC, Bottou L, Welling M, Ghahramani Z, Weinberger KQ (eds) Advances in Neural Information Processing Systems, 1565–1573. Curran Associates, Inc., Red Hook, NY.
 Arora, S, Hazan E, Kale S (2006) A fast random sampling algorithm for sparsifying matrices. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 272–279. Springer-Verlag, Berlin.
 Arrigo, F, Higham DJ (2017) Preserving sparsity in dynamic network computations. In: Cherifi H, Gaito S, Sala A (eds), 147–157. Springer, Cham. doi:10.1007/978-3-319-50901-3_2.
 Chen, I, Benzi M, Chang HH, Hertzberg VS (2016) Dynamic communicability and epidemic spread: a case study on an empirical dynamic contact network. J Complex Netw. doi:10.1093/comnet/cnw17.
 Eagle, N, Pentland AS (2006) Reality mining: sensing complex social systems. Pers Ubiquitous Comput 10(4): 255–268.
 Estrada, E (2010) The Structure of Complex Networks. Oxford University Press, Oxford.
 Fagin, R, Kumar R, Sivakumar D (2003) Comparing top k lists. SIAM J Discrete Math 17(1): 134–160.
 Fenu, C, Higham DJ (2017) Block matrix formulation for evolving networks. SIAM J Matrix Anal Appl 38: 343–360.
 Grindrod, P, Parsons MC, Higham DJ, Estrada E (2011) Communicability across evolving networks. Phys Rev E 83(4): 046120.
 Holme, P, Saramäki J (2011) Temporal networks. Phys Rep 519: 97–125.
 Jaccard, P (1901) Étude comparative de la distribution florale dans une portion des Alpes et des Jura. Bulletin de la Société Vaudoise des Sciences Naturelles 37: 547–579.
 Katz, L (1953) A new status index derived from sociometric analysis. Psychometrika 18(1): 39–43.
 Laflin, P, Mantzaris AV, Grindrod P, Ainley F, Otley A, Higham DJ (2013) Discovering and validating influence in a dynamic online social network. Soc Netw Anal Mining 3: 1311–1323.
 Lentz, HHK, Selhorst T, Sokolov IM (2013) Unfolding accessibility provides a macroscopic approach to temporal networks. Phys Rev Lett 110: 118701.
 Leskovec, J (2014) SNAP: network dataset. https://snap.stanford.edu/data/.
 Mantzaris, AV, Higham DJ (2013) Dynamic communicability predicts infectiousness. In: Holme P, Saramäki J (eds) Temporal Networks, 283–294. Springer, Berlin.
 Mantzaris, AV, Higham DJ (2016) Asymmetry through time dependency. Eur Phys J B 89(3): 71.
 Newman, MEJ (2010) Networks: An Introduction. Oxford University Press, Oxford.
 Opsahl, T (2009) Online social networks dataset. https://toreopsahl.com/datasets/#online_social_network.
 Opsahl, T, Panzarasa P (2009) Clustering in weighted networks. Soc Netw 31(2): 155–163.
 Tang, J, Musolesi M, Mascolo C, Latora V (2009) Temporal distance metrics for social network analysis. In: Proceedings of the 2nd ACM SIGCOMM Workshop on Online Social Networks (WOSN09), Barcelona.
 Tang, J, Musolesi M, Mascolo C, Latora V (2010a) Characterising temporal distance and reachability in mobile and online social networks. SIGCOMM Comput Commun Rev 40: 118–124.
 Tang, J, Scellato S, Musolesi M, Mascolo C, Latora V (2010b) Small-world behavior in time-varying graphs. Phys Rev E 81: 05510.
 Taylor, D, Myers SA, Clauset A, Porter MA, Mucha PJ (2017) Eigenvector-based centrality measures for temporal networks. Multiscale Model Simul 15: 537–574.
 Wasserman, S, Faust K (1994) Social Network Analysis: Methods and Applications. Cambridge University Press, Cambridge.