An information-theoretic, all-scales approach to comparing networks

As network research becomes more sophisticated, it is more common than ever for researchers to find themselves not studying a single network but needing to analyze sets of networks. An important task when working with sets of networks is network comparison: developing a similarity or distance measure between networks so that meaningful comparisons can be drawn. The best means to accomplish this task remains an open area of research. Here we introduce a new measure to compare networks, the Network Portrait Divergence, that is mathematically principled, incorporates the topological characteristics of networks at all structural scales, and is general-purpose and applicable to all types of networks. An important feature of our measure, one that enables many of its useful properties, is that it is based on a graph invariant, the network portrait. We test our measure on both synthetic graphs and real-world networks taken from protein interaction data, neuroscience, and computational social science applications. The Network Portrait Divergence reveals important characteristics of multilayer and temporal networks extracted from data.

Networks have become a common and powerful framework for evaluating and understanding complex systems. It is now increasingly common to deal with multiple networks at once, from brain networks taken from multiple subjects or across longitudinal studies [4], to multilayer networks extracted from high-throughput experiments [5], to rapidly evolving social network data [6,7,8]. A common task researchers working in these areas will face is comparing networks: quantifying the similarities and differences between networks in a meaningful manner. Applications for network comparison include comparing brain networks for different subjects, or the same subject before and after a treatment, studying the evolution of temporal networks [9], classifying proteins and other sequences [10,11,12], classifying online social networks [11], or evaluating the accuracy of statistical or generative network models [13].
Combined with a clustering algorithm, a network comparison measure can be used to aggregate networks in a meaningful way, for coarse-graining data and revealing redundancy in multilayer networks [5]. Treating a network comparison measure as an objective function, optimization methods can be used to fit network models to data.
Approaches to network comparison can be roughly divided into two groups: those that consider or require two graphs defined on the same set of nodes, and those that do not. The former eliminates the need to discover a mapping between node sets, making comparison somewhat easier. Yet two networks with identical topologies may have no nodes or edges in common simply because they are defined on different sets of nodes. While there are scenarios where assuming the same node sets is appropriate (for example, when comparing the different layers of a multilayer network one wants to capture explicitly the correspondences of nodes between layers [5]), here we wish to relax this assumption and allow for comparison without node correspondence, where no nodes are necessarily shared between the networks.
A common approach for comparison without assuming node correspondence is to build a comparison measure using a graph invariant. Graph invariants are properties of a graph that hold for all isomorphs of the graph. Using an invariant mitigates any concerns with the encoding or structural representation of the graphs, and the comparison measure instead focuses entirely on the topology of the network. Graph invariants may be probability distributions. Suppose P and Q represent two graph-invariant distributions corresponding to graphs G_1 and G_2, respectively. Then a common approach to comparing G_1 and G_2 is to compare P and Q. Information theory provides tools for comparing distributions, such as the Jensen-Shannon divergence:

D_JS(P, Q) = (1/2) KL(P || M) + (1/2) KL(Q || M),    (1)

where KL(P || Q) is the Kullback-Leibler (KL) divergence (or relative entropy) between P and Q and M = (P + Q)/2 is the mixture distribution of P and Q. The Jensen-Shannon divergence has a number of nice properties, including that it is symmetric and normalized, making it a popular choice for applications such as ours [14,15]. In this work, we introduce a novel graph-invariant distribution that is general and free of assumptions; we can then use common information-theoretic divergences such as Jensen-Shannon to compare networks via these graph invariants.
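As a concrete illustration, the Jensen-Shannon divergence between two discrete distributions can be computed in a few lines. This is a minimal sketch with our own helper names, not code from any particular library:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: 0.5 KL(p || m) + 0.5 KL(q || m),
    with m = (p + q)/2 the mixture distribution.
    Symmetric and, with base-2 logs, bounded in [0, 1]."""
    m = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

For identical distributions the divergence is 0, and for distributions with disjoint support it attains its maximum of 1 (in base-2 logs); note that the mixture m is always positive wherever p or q is, so the KL terms are well defined.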
The rest of this paper is organized as follows. In Sec. 2 we describe network portraits [16], a graph-invariant matrix representation of a network that is useful for visualization purposes but also capable of comparing pairs of networks. Section 3 introduces the Network Portrait Divergence, a principled information-theoretic measure for comparing networks, building graph-invariant distributions from the information contained within portraits. The Network Portrait Divergence has a number of desirable properties for a network comparison measure. In Sec. 4 we apply this measure to both synthetic networks (random graph ensembles) and real-world datasets (multilayer biological and temporal social networks), demonstrating its effectiveness on practical problems of interest. Lastly, we conclude in Sec. 5 with a discussion of our results and future work.

Network portraits
Network portraits were introduced in [16] as a way to visualize and encode many structural properties of a given network. Specifically, the network portrait B is the matrix with (ℓ, k) elements

B_{ℓ,k} ≡ the number of nodes that have k nodes at distance ℓ.    (2)

This matrix encodes many structural features of the graph. Note that a distance ℓ = 0 is admissible, with two nodes i and j at distance 0 when i = j; this means the matrix B so defined has a zeroth row. It also has a zeroth column, as there may be nodes that have zero nodes at some distance ℓ, which occurs for nodes with eccentricity less than the graph diameter. The zeroth row stores the number of nodes N in the graph, B_{0,k} = N δ_{k,1}, since every node has exactly one node (itself) at distance zero. The first row captures the degree distribution P(k), B_{1,k} = N P(k), as neighbors are at distance ℓ = 1. The second row captures the distribution of next-nearest neighbors, and so forth for higher rows. The number of edges M satisfies Σ_{k=0}^{N} k B_{1,k} = 2M. The graph diameter d is the largest ℓ with a nonzero entry B_{ℓ,k} for some k > 0. The shortest path distribution is also captured: the number of shortest paths of length ℓ is (1/2) Σ_k k B_{ℓ,k}. Further, the portraits of random graphs present very differently from those of highly ordered structures such as lattices (Fig. 1), demonstrating how the dimensionality and regularity of the network are captured in the portrait [16].
One of the most important properties of portraits is that they are a graph invariant: a function f such that f(G) = f(H) whenever G and H are isomorphic graphs. Note that the converse is not necessarily true: f(G) = f(H) does not imply that G and H are isomorphic. As a counterexample, the non-isomorphic distance-regular dodecahedral and Desargues graphs have equal portraits [16].
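To make the definition concrete, the following sketch computes the portrait of an unweighted, undirected graph by breadth-first search from every node. It assumes a plain adjacency-dict representation, and the function name is ours:

```python
from collections import deque

def portrait(adj):
    """Network portrait B: B[l][k] = number of nodes with exactly k
    nodes at shortest-path distance l.
    `adj` maps each node to its set of neighbors."""
    all_shells = []
    for s in adj:
        dist = {s: 0}
        shells = [1]              # s itself sits at distance 0
        queue = deque([s])
        while queue:              # BFS from s, counting nodes per shell
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    if dist[v] == len(shells):
                        shells.append(0)
                    shells[dist[v]] += 1
                    queue.append(v)
        all_shells.append(shells)
    diameter = max(len(sh) for sh in all_shells) - 1
    N = len(adj)
    B = [[0] * (N + 1) for _ in range(diameter + 1)]
    for shells in all_shells:
        # pad: nodes with eccentricity < diameter contribute to column k = 0
        shells = shells + [0] * (diameter + 1 - len(shells))
        for l, k in enumerate(shells):
            B[l][k] += 1
    return B
```

On a path graph of three nodes, for example, the zeroth row is N δ_{k,1} and the first row recovers the degree counts, consistent with the identities above.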

Portraits of weighted networks
The original work defining network portraits [16] did not consider weighted networks, where a scalar quantity w_{ij} is associated with each edge (i, j) ∈ E. An important consideration is that path lengths for weighted networks are generally computed by summing edge weights along a path, leading to real-valued path lengths (typically) instead of integer-valued path lengths. To address this, in the Appendix (App. A) we generalize the portrait to weighted networks, specifically accounting for how real-valued path lengths must change the definition of the matrix B.

Comparing networks by comparing portraits
The fact that a graph G admits a unique B-matrix makes these portraits a valuable tool for network comparison.
Instead of directly comparing graphs G and G′, we may compute their portraits B and B′, respectively, and then compare these matrices. We briefly review the comparison method of our previous work [16]. First, compute for each portrait B the matrix C consisting of row-wise cumulative distributions of B:

C_{ℓ,k} = Σ_{k'=0}^{k} B_{ℓ,k'}.    (3)

The row-wise Kolmogorov-Smirnov (KS) test statistic K_ℓ between corresponding rows of C and C′,

K_ℓ = max_k | C_{ℓ,k} / N − C′_{ℓ,k} / N′ |,    (4)

allows a metric-like graph comparison. This statistic defines a two-sample hypothesis test for whether or not the corresponding rows of the portraits are drawn from the same underlying, unspecified distribution. If the two graphs have different diameters, the portrait of the smaller-diameter graph can be expanded to the same size as that of the larger-diameter graph by defining empty shells ℓ > d as B_{ℓ,k} = N δ_{k,0}. Lastly, aggregate the test statistics for all pairs of rows using a weighted average to define the similarity Δ(G, G′) between G and G′:

Δ(G, G′) = ( Σ_ℓ α_ℓ K_ℓ ) / ( Σ_ℓ α_ℓ ),    (5)

where α_ℓ is a weight chosen to increase the impact of the lower, more heavily occupied shells.
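A sketch of the row-wise KS statistic of Eq. (4), taking each portrait row as a list of counts over k (the function name and exact handling of unequal row lengths are our own choices):

```python
def ks_row_statistic(row1, row2):
    """Maximum absolute difference between the normalized cumulative
    distributions of two portrait rows (lists of counts over k)."""
    n1, n2 = sum(row1), sum(row2)
    c1 = c2 = 0.0
    K = 0.0
    for k in range(max(len(row1), len(row2))):
        # rows of different widths are implicitly zero-padded
        c1 += (row1[k] if k < len(row1) else 0) / n1
        c2 += (row2[k] if k < len(row2) else 0) / n2
        K = max(K, abs(c1 - c2))
    return K
```

Identical rows give a statistic of 0; rows concentrating their mass on disjoint values of k give the maximum of 1.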
While we did develop a metric-like quantity for comparing graphs based on the KS statistics (Eqs. (4) and (5)), we did not emphasize the idea. Instead, the main focus of the original portraits paper was on the use of the portrait for visualization. In particular, Eq. (4) is somewhat ad hoc. Here we propose a stronger means of comparison using network portraits, one that is interpretable and grounded in information theory.

An information-theoretic approach to network comparison
Here we introduce a new way of comparing networks based on portraits. This measure is grounded in information theory, unlike the previous, ad hoc comparison measure, and has a number of other desirable attributes we discuss below.
The rows of B may be interpreted as probability distributions:

P(k | ℓ) = (1/N) B_{ℓ,k}    (6)

is the (empirical) probability that a randomly chosen node has k nodes at distance ℓ. This invites an immediate row-by-row comparison of two portraits:

KL( P(k | ℓ) || Q(k | ℓ) ) = Σ_k P(k | ℓ) log_2 [ P(k | ℓ) / Q(k | ℓ) ],    (7)

where KL(p || q) is the Kullback-Leibler (KL) divergence between two distributions p and q, and Q is defined as per Eq. (6) for the second portrait (i.e., Q(k | ℓ) = (1/N′) B′_{ℓ,k}). The KL divergence admits an information-theoretic interpretation: it describes how many extra bits are needed to encode values drawn from the distribution P if we used the distribution Q to develop the encoding instead of P.
However, while this seems like an appropriate starting point for defining a network comparison, Eq. (7) has some drawbacks:
1. KL (P(k) || Q(k)) is undefined if there exists a value of k such that P(k) > 0 and Q(k) = 0. Given that rows of the portraits are computed from individual networks, which may have small numbers of nodes, this is likely to happen often in practical use.
2. The KL-divergence is not symmetric and does not define a distance.
3. Defining a divergence for each pair of rows of the two matrices gives max(d, d′) + 1 separate divergences, where d and d′ are the diameters of G and G′, respectively. To define a scalar comparison value (a similarity or distance measure) requires an appropriate aggregation of these values, just as in the original approach proposed in [16]; we return to this point below.
The first two drawbacks can be addressed by moving away from the KL divergence and instead using, e.g., the Jensen-Shannon divergence or Hellinger distance. However, the last concern, aggregating over max(d, d′) + 1 different quantities, remains for those measures as well.
Given these concerns, we propose the following, utilizing the shortest path distribution encoded by the network portraits. Consider choosing two nodes uniformly at random with replacement. The probability that they are connected is

P(connected) = (1/N²) Σ_c n_c²,

where n_c is the number of nodes within connected component c, the sum runs over the connected components, and the n_c satisfy Σ_c n_c = N. Likewise, the probability that the two nodes are at a distance ℓ from one another is

P(ℓ) = (1/N²) Σ_k k B_{ℓ,k}.

Lastly, the probability that one of the two nodes has k − 1 other nodes at distance ℓ, given that the pair is at distance ℓ, is

P(k | ℓ) = k B_{ℓ,k} / Σ_{k'} k' B_{ℓ,k'}.

We propose to combine these probabilities into a single distribution that encompasses the distances between nodes weighted by the "masses" or prevalences of other nodes at those distances, giving us the probability of choosing a pair of nodes at distance ℓ such that one of the two randomly chosen nodes has k nodes at that distance ℓ:

P(k, ℓ) = P(k | ℓ) P(ℓ) = (1/N²) k B_{ℓ,k},

and likewise for Q(k, ℓ) using B′ instead of B. However, this distribution is not normalized unless the graph G is connected. It will be advantageous for this distribution to be normalized in all instances; therefore, we condition this distribution on the two randomly chosen nodes being connected:

P(k, ℓ) = k B_{ℓ,k} / Σ_c n_c².    (13)

This now defines a single (joint) distribution P (Q) for all rows of B (B′), which can then be used to define a single KL divergence between two portraits:

KL(P || Q) = Σ_ℓ Σ_k P(k, ℓ) log [ P(k, ℓ) / Q(k, ℓ) ],    (14)

where the log is base 2.
We then define the Network Portrait Divergence between graphs G and G′ as the Jensen-Shannon divergence of these distributions (Definition 3.1):

D_JS(G, G′) ≡ (1/2) KL(P || M) + (1/2) KL(Q || M),

where M = (1/2)(P + Q) is the mixture distribution of P and Q. Here P and Q are defined by Eq. (13), and KL(· || ·) is given by Eq. (14). With base-2 logarithms, the Network Portrait Divergence is symmetric and satisfies 0 ≤ D_JS ≤ 1.
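Given two portrait matrices, the Network Portrait Divergence can be computed directly from Eq. (13). The following is a self-contained sketch (our own minimal implementation): it accepts portraits as nested lists of counts and does not require the two matrices to have the same shape.

```python
import math

def portrait_divergence(B1, B2):
    """Network Portrait Divergence: Jensen-Shannon divergence (base-2
    logs) between the joint distributions P(k, l) = k B[l][k] / sum_c n_c^2."""
    def joint(B):
        # sum of k * B[l][k] over all cells equals sum_c n_c^2
        # (ordered pairs of connected nodes, including self-pairs at l = 0)
        total = sum(k * B[l][k]
                    for l in range(len(B)) for k in range(len(B[l])))
        return {(l, k): k * B[l][k] / total
                for l in range(len(B)) for k in range(len(B[l]))
                if k * B[l][k] > 0}
    P, Q = joint(B1), joint(B2)
    M = {key: 0.5 * (P.get(key, 0.0) + Q.get(key, 0.0))
         for key in set(P) | set(Q)}
    def kl(A):  # KL(A || M); M > 0 on the support of A by construction
        return sum(a * math.log2(a / M[key]) for key, a in A.items())
    return 0.5 * kl(P) + 0.5 * kl(Q)
```

Comparing a graph's portrait with itself returns 0, and any pair of portraits yields a value in [0, 1], as expected for the Jensen-Shannon divergence with base-2 logs.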

Synthetic networks
To understand the performance of the Network Portrait Divergence, we begin here by examining how it relates different realizations of the following synthetic graphs:
1. Erdős-Rényi (ER) graphs G(N, p) [19], the random graph on N nodes where each possible edge exists independently with constant probability p;
2. Barabási-Albert (BA) graphs G(N, m) [20], where N nodes are added sequentially to a seed graph and each new node attaches to m existing nodes according to preferential attachment.
[Fig. 2: probability densities of the Network Portrait Divergence when comparing pairs of ER graphs, pairs of BA graphs, and ER vs. BA graphs with the same mean degree.]
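Both ensembles are available in standard libraries such as networkx; for completeness, here is a dependency-free sketch of the two models (our own minimal implementations, not the exact generators used in the experiments):

```python
import random

def erdos_renyi(N, p, rng):
    """G(N, p): each of the N(N-1)/2 possible edges exists
    independently with probability p."""
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def barabasi_albert(N, m, rng):
    """BA model: nodes arrive one at a time and attach m edges
    preferentially to high-degree nodes (seed: m isolated nodes)."""
    adj = {i: set() for i in range(N)}
    repeated = []                 # node list with multiplicity ~ degree
    targets = list(range(m))      # the first arrival links to the seed
    for new in range(m, N):
        for t in targets:
            adj[new].add(t)
            adj[t].add(new)
        repeated.extend(targets)
        repeated.extend([new] * m)
        chosen = set()
        while len(chosen) < m:    # m distinct, degree-proportional targets
            chosen.add(rng.choice(repeated))
        targets = list(chosen)
    return adj
```

In this construction each arriving BA node contributes exactly m edges, so the final graph has m(N − m) edges; sampling from the `repeated` list is the standard trick for drawing targets with probability proportional to degree.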

Measuring network perturbations with Network Portrait Divergence
Next, we ask how well the Network Portrait Divergence measures the effects of network perturbations.
We performed two kinds of rewiring perturbations to the links of a given graph G: (i) random rewiring, where each perturbation consists of deleting an existing link chosen uniformly at random and inserting a link between two nodes chosen uniformly at random; and (ii) degree-preserving rewiring [21], where each perturbation consists of removing a randomly chosen pair of links (i, j) and (u, v) and inserting links (i, u) and (j, v). The links (i, j) and (u, v) are chosen such that (i, u) ∉ E and (j, v) ∉ E, ensuring that the degrees of the nodes are constant under the rewiring.
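Both perturbations can be sketched as in-place operations on an adjacency-dict graph. These are our own minimal implementations; for simplicity they retry until a valid move is found, which is fine for typical sparse graphs but could stall on pathological inputs:

```python
import random

def random_rewire(adj, rng):
    """Delete a uniformly random existing edge, then insert an edge
    between a uniformly random non-adjacent pair of nodes."""
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    u, v = rng.choice(edges)
    adj[u].discard(v)
    adj[v].discard(u)
    while True:
        x, y = rng.sample(sorted(adj), 2)
        if y not in adj[x]:
            adj[x].add(y)
            adj[y].add(x)
            return

def degree_preserving_rewire(adj, rng):
    """Swap endpoints of two random edges (i, j), (u, v) -> (i, u), (j, v),
    keeping every node's degree fixed."""
    edges = [(a, b) for a in adj for b in adj[a] if a < b]
    while True:
        (i, j), (u, v) = rng.sample(edges, 2)
        if len({i, j, u, v}) == 4 and u not in adj[i] and v not in adj[j]:
            adj[i].discard(j); adj[j].discard(i)
            adj[u].discard(v); adj[v].discard(u)
            adj[i].add(u); adj[u].add(i)
            adj[j].add(v); adj[v].add(j)
            return
```

Each degree-preserving move removes two edges and inserts two, so both the edge count and every node's degree are unchanged, while a random rewire preserves only the edge count.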
We expect that random rewirings will lead to a stronger change in the network than degree-preserving rewirings. To test this, we generate an ER or BA graph G, apply a fixed number n of rewirings to a copy of G, and use the Network Portrait Divergence to compare the networks before and after rewiring. Figure 3 shows how D_JS changes on average as a function of the number of rewirings, for both types of rewirings and both ER and BA graphs. The Network Portrait Divergence increases with n, as expected. Interestingly, below n ≈ 100 rewirings the two types of rewirings are indistinguishable, but for n > 100 we see that random rewirings lead to a larger divergence from the original graph than degree-preserving rewirings. [Fig. 3 caption: A random rewiring is the deletion of an edge chosen uniformly at random followed by the insertion of a new edge between two nodes chosen uniformly at random. Degree-preserving rewiring chooses a pair of edges (u, v) and (x, y) and rewires them across nodes to (u, x) and (v, y) such that the degrees of the chosen nodes remain unchanged [21]. Error bars denote ±1 s.d.] This is especially evident for BA graphs, where the scale-free degree distribution is more heavily impacted by the random rewiring than for ER graphs. The overall D_JS is also higher in value for BA graphs than ER graphs. This is plausible because the ER graph is already maximally random, whereas many correlated structures exist in a random realization of the BA graph model that can be destroyed by perturbations [22].

Comparing real networks
We now apply the Network Portrait Divergence to real-world networks, to evaluate its performance when used for several common network comparison tasks. Specifically, we study two real-world multiplex networks, using D_JS to compare across the layers of these networks. We also apply D_JS to a temporal network, measuring how the network changes over time. This last network has associated edge weights, and we consider it as both an unweighted and a weighted network.
The datasets for the three real-world networks we study are as follows:

Arabidopsis GPI network The Genetic and Protein Interaction (GPI) network of Arabidopsis thaliana taken from BioGRID 3.2.108 [23,5]. This network consists of 6,980 nodes representing proteins and 18,654 links spread across seven multiplex layers. These layers represent different interaction modes, from direct interaction of proteins and physical associations of proteins within complexes to suppressive and synthetic genetic interactions. Full details of the interaction layers are described in [5].

C. elegans connectome The multilayer connectome of the nematode Caenorhabditis elegans, with layers for electrical junctions, monadic chemical synapses, and polyadic chemical synapses. C. elegans has many advantages as a model organism in general [26,27], and its neuronal wiring diagram is completely mapped experimentally [27,28], making its connectome an ideal test network dataset.

Open source developer collaboration network This network represents the software developers working
on open source projects hosted by IBM on GitHub (https://github.com/ibm). This network is temporal, evolving over the years 2013-2017, allowing us to compare its development across time. Aggregating all activity, this network consists of 679 nodes and 3,628 links. Each node represents a developer who has contributed to the source code of the project, as extracted from the git metadata logs [29,30,31].
Links occur between developers who have edited at least one source code file in common, a simple measure of collaboration. To study this network as a weighted network, we associate with each link (i, j) an integer weight w_{ij} equal to the number of source files edited in common by developers i and j.
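The weighted network construction can be sketched as follows, where a hypothetical `files_edited` mapping stands in for the parsed git metadata logs:

```python
from itertools import combinations
from collections import defaultdict

def collaboration_network(files_edited):
    """Given {developer: set of files edited}, link developers who edited
    at least one file in common; the weight w_ij is the number of
    shared files."""
    w = defaultdict(int)
    for d1, d2 in combinations(sorted(files_edited), 2):
        shared = len(files_edited[d1] & files_edited[d2])
        if shared:
            w[(d1, d2)] = shared
    return dict(w)
```

Dropping the weights (treating every key of the returned dict as an unweighted link) recovers the unweighted version of the network analyzed above.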
For these data, the Network Portrait Divergence reveals several interesting facets of the multilayer structure of the Arabidopsis network (Fig. 4). The multilayer C. elegans network, consisting of only three layers, is easier to understand than Arabidopsis. Here we find that the electrical junction layer is more closely related to the monadic synapse layer than it is to the polyadic synapse layer, while the polyadic layer is more closely related to the monadic synapse layer than to the electrical junction layer. The C. elegans data aggregate all polyadic synapses together into one layer accounting for over half of the total links in the network, but it would be especially interesting to determine what patterns for dyadic, triadic, etc. synapses can be revealed with the Network Portrait Divergence.
The third real-world network we investigate is a temporal network (Fig. 5). This network encodes the collaboration activities between software developers who have contributed to open source projects owned by IBM on GitHub.com. Here each node represents a developer, and links exist between two developers when they have both edited at least one file in common among the source code hosted on GitHub. This network grows over time as more projects are open-sourced by IBM and more developers join and contribute to those projects. We draw the IBM developer network for each year from 2013 to 2017 in Fig. 5A. When we analyze these data as weighted networks, the results do not depend strongly on the choice of binning, and we capture patterns across time similar to, though not identical to, the patterns found analyzing the unweighted networks (shown in Fig. 5).

Conclusion
In this paper we have introduced a measure, the Network Portrait Divergence, for comparing networks, and validated its performance on both synthetic and real-world network data. The Network Portrait Divergence provides an information-theoretic interpretation that naturally encompasses all scales of structure within networks. It does not require the networks to be connected, nor does it make any assumptions as to how the two networks being compared are related or indexed, or even that their node sets are equal. Further, the Network Portrait Divergence can naturally handle unweighted networks, whether undirected or directed, and we have introduced a generalization for weighted networks. The Network Portrait Divergence is based on a graph invariant, the network portrait. Comparison measures based on graph invariants are desirable as they will only be affected by the topology of the networks being studied, and not other externalities such as the format or order in which the networks are recorded or analyzed. The computational complexity of the Network Portrait Divergence compares favorably to many other graph comparison measures, particularly spectral measures, but it remains a computation that is quadratic in the number of nodes of the graph. Scaling to very large networks will likely require further efficiency gains, probably from approximation strategies to efficiently infer the shortest path distributions [32].
Our approach bears superficial similarities with other methods. Graph distances and shortest path length distributions are components common to many network comparison methods, including our own, although the Network Portrait Divergence utilizes a unique composition of all the shortest path length distributions of the networks being compared. At the same time, other methods, including ours, use the Jensen-Shannon divergence to build comparison measures. For example, the recent work of Chen et al. [15] uses the Shannon entropy and Jensen-Shannon divergence of a probability distribution computed by normalizing e^A, the exponential of the adjacency matrix, also known as the communicability matrix. This is an interesting approach, as are other approaches that examine powers of the adjacency matrix, but it suffers from a drawback: when comparing networks of different sizes, the underlying probability distributions must be modified in an ad hoc manner [15]. The Network Portrait Divergence, in contrast, does not need such modification.
The Network Portrait Divergence, like other methods, is based upon the Jensen-Shannon divergence between graph-invariant probability distributions, but many other information-theoretic tools exist for comparing distributions, including f-divergences such as the Hellinger distance or total variation distance, the Bhattacharyya distance, and more. Using different measures for comparison may yield different interpretations and insights, and thus it is fruitful to better understand their use across different network comparison problems.
The Network Portrait Divergence lends itself well to developing statistical procedures when combined with suitable graph null models. For example, one could quantify the randomness of a structural property of a network by comparing the real network to a random model that controls for that property. Further, to estimate the strength or significance of an observed divergence between graphs G_1 and G_2, one could generate a large number of random graph null proxies for either G_1 or G_2 (or both) and compare the divergences found between those nulls with the divergence between the original graphs. These comparisons could be performed using standard statistical procedures, but we consider a full treatment to be beyond the scope of our current work. Instead, we have focused our method on highlighting several areas, particularly within real-world applications but also using some intuitive synthetic scenarios, where the method is effective.
As network datasets increase in scope, network comparison becomes an increasingly common and important task, and we anticipate that the Network Portrait Divergence will prove a useful tool for it.

A Portraits and Network Portrait Divergences for weighted networks
The portrait matrix B (Eq. (2)) is most naturally defined for unweighted networks since the path lengths for unweighted networks count the number of edges traversed along the path to get from one node to another.
Since the number of edges is always integer-valued, these lengths can be used to define the rows of B. For weighted networks, on the other hand, path lengths are generally computed by summing edge weights along a path and will generally be continuous rather than integer-valued.
To generalize the portrait to weighted networks requires (i) an algorithm for finding shortest paths that accounts for edge weights (here we use Dijkstra's algorithm [33], with total complexity O(MN + N² log N) over all pairs of nodes) and (ii) a means of aggregating the resulting real-valued path lengths into the rows of B. The all-pairs computation is more costly than the total complexity for the unweighted portrait, O(MN + N²), but this is unavoidable, as finding minimum-cost paths is generically more computationally intensive than finding minimum-length paths.
The simplest choice for aggregating shortest paths by length is to introduce a binning strategy for the continuous path lengths. Let d_0 = 0 < d_1 < ⋯ < d_b = L_max define a set of b intervals or bins, where L_max is the length of the longest shortest path. Then the weighted portrait B can be defined such that B_{i,k} ≡ the number of nodes with k nodes at distances d_i ≤ ℓ < d_{i+1}. That is, the i-th row of the weighted portrait accounts for all shortest paths with lengths falling inside the i-th bin [d_i, d_{i+1}). (We also take the last bin to be inclusive on both sides, [d_{b−1}, L_max].)
To compute B using a binning requires determining the b + 1 bin edges. Here we consider a simple, adaptive binning based on quantiles of the shortest path distribution, but a researcher is free to adopt a different binning strategy as needed. Let L(G) = { ℓ_{ij} | i, j ∈ V and ℓ_{ij} < ∞ } be the set of all unique shortest path lengths between connected pairs of nodes in graph G. We then define our binning to be the b contiguous intervals that partition L into subsets of (approximately) equal size. Taking b = 100, for example, ensures that each bin contains approximately 1% of the shortest path lengths. The number of bins b can be chosen by the researcher to suit her needs, or automatically using any of a number of histogram binning rules such as the Freedman-Diaconis rule [35] or Sturges' rule [36].

Figure 7 shows the portrait for a weighted network, in this case taken from the IBM developer collaboration network. Edge (i, j) in this network has associated non-negative edge weight w_{ij} = the number of files edited in common by developers i and j. The network is the union of the networks shown in Fig. 5A; we draw the giant connected component of this network in Fig. 7A. For this network, we consider shortest paths found using Dijkstra's algorithm with reciprocal edge weights, i.e., the "length" of a path (i = i_0, i_1, i_2, . . . , i_{n+1} = j) is ℓ_{ij} = Σ_{t=0}^{n} 1/w_{i_t, i_{t+1}}, as larger edge weights denote more closely related developers. However, this choice is not necessary in general. The cumulative distribution of shortest path lengths, which we computed on all components of the network, is shown in Fig. 7B. Lastly, Fig. 7C shows the portrait B for this network. For illustration, we draw the vertical positions of the rows in this matrix using the bin edges; these bin edges are highlighted on the cumulative distribution shown in Fig. 7B.
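The quantile binning can be sketched as follows (helper names are ours; note that if there are fewer distinct path lengths than bins, duplicate edges would need to be merged):

```python
def quantile_bin_edges(path_lengths, b):
    """b + 1 edges d_0 = 0 < d_1 < ... < d_b = L_max chosen so each bin
    [d_i, d_{i+1}) holds roughly 1/b of the unique path lengths."""
    L = sorted(set(path_lengths))
    return [0.0] + [L[i * len(L) // b] for i in range(1, b)] + [L[-1]]

def bin_index(length, edges):
    """Index i of the bin [d_i, d_{i+1}) containing `length`;
    the last bin is closed on both sides."""
    for i in range(len(edges) - 1):
        if length < edges[i + 1]:
            return i
    return len(edges) - 2
```

With these helpers, each node's shell counts are accumulated per bin index rather than per integer distance, and the weighted portrait is assembled exactly as in the unweighted case.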
With a new definition for B now in place for weighted networks, the Network Portrait Divergence can be computed exactly as before (Definition 3.1). However, to compare portraits for two graphs G and G′, it is important for the path length binning to be the same for both. We do this here by computing b bins as quantiles of L = L(G) ∪ L(G′) and then computing B(G) and B(G′) as before. This ensures the rows of B and B′ are compatible in the distributions used within Definition 3.1.