A family of tractable graph metrics

Important data mining problems such as nearest-neighbor search and clustering admit theoretical guarantees when restricted to objects embedded in a metric space. Graphs are ubiquitous, and clustering and classification over graphs arise in diverse areas, including image processing and social networks. Unfortunately, popular distance scores used in these applications, which scale to large graphs, are not metrics and thus come with no guarantees. Classic graph distances such as the chemical distance and the Chartrand-Kubicki-Schultz distance are arguably natural and intuitive, and are indeed also metrics, but they are intractable: as such, their computation does not scale to large graphs. We define a broad family of graph distances that includes both the chemical and the Chartrand-Kubicki-Schultz distances, and prove that all its members are metrics. Crucially, we show that this family includes metrics that are tractable. Moreover, we extend these distances to incorporate auxiliary node attributes, which is important in practice, while maintaining both the metric property and tractability.


Introduction
Graph similarity and the related problem of graph isomorphism have a long history in data mining, machine learning, and pattern recognition [1][2][3]. Graph distances naturally arise in this literature: intuitively, given two (unlabeled) graphs, their distance is a score quantifying their structural differences. A highly desirable property for such a score is that it is a metric, i.e., it is non-negative, symmetric, positive-definite, and, crucially, satisfies the triangle inequality. Metrics exhibit significant computational advantages over non-metrics. For example, operations such as nearest-neighbor search [4][5][6], clustering [7], outlier detection [8], and diameter computation [9] admit fast algorithms precisely when performed over objects embedded in a metric space. Proposing tractable graph metrics is therefore of paramount importance in applying such algorithms to graphs.
Unfortunately, graph metrics of interest are often computationally expensive. A well-known example is the chemical distance [10]. Formally, given graphs $G_A$ and $G_B$, represented by their adjacency matrices $A, B \in \{0,1\}^{n \times n}$, the chemical distance $d_{P_n}(A, B)$ is defined in terms of a mapping between the two graphs that minimizes their edge discrepancies, i.e.:

$$d_{P_n}(A, B) = \min_{P \in P_n} \|AP - PB\|_F, \qquad (1)$$

where $P_n$ is the set of permutation matrices of size $n$ and $\|\cdot\|_F$ is the Frobenius norm (see Sec. 2 for definitions). The Chartrand-Kubicki-Schultz (CKS) distance [11] is an alternative: CKS is again given by (1) but, instead of edges, matrices $A$ and $B$ contain the pairwise shortest-path distances between any two nodes. The chemical and CKS distances have important properties. First, they are zero if and only if the graphs are isomorphic, which appeals to both intuition and practice; second, as desired, they are metrics over the quotient space defined by graph isomorphism (see Sec. 2); third, they have a natural interpretation, capturing global structural similarities between graphs. However, finding an optimal permutation $P$ is notoriously hard; graph isomorphism, which is equivalent to deciding whether there exists a permutation $P$ such that $AP = PB$ (for both adjacency and path matrices), is famously a problem that is neither known to be in P nor shown to be NP-hard [12]. There is a large and expanding literature on scalable heuristics that estimate the optimal permutation $P$ [13][14][15][16]. Despite their computational advantages, using these heuristics to approximate $d_{P_n}(A, B)$ unfortunately breaks the metric property.
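To make the definition concrete, the following Python sketch (ours, for illustration only) evaluates (1) by brute force, enumerating all $n!$ permutations; this is feasible only for very small graphs, which is precisely the intractability the rest of the paper addresses.

```python
import itertools
import numpy as np

def chemical_distance(A, B):
    """Brute-force evaluation of Eq. (1): minimize ||AP - PB||_F over
    all n! permutation matrices P. Feasible only for very small n."""
    n = A.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]  # permutation matrix for this ordering
        best = min(best, np.linalg.norm(A @ P - P @ B, "fro"))
    return best

# Two relabelings of the 3-node path graph: the distance is 0.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
B = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])
print(chemical_distance(A, B))  # 0.0
```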
Losing the metric property significantly degrades the performance of many important tasks that rely on computing distances between graphs. For example, there is a clear separation in the approximability of clustering over metric and non-metric spaces [7]. We also demonstrate this empirically in Section 6 (cf. Fig. 1): attempting to cluster graphs sampled from well-known families based on non-metric distances significantly increases the misclassification rate, compared to clustering using metrics.
An additional issue that arises in practice is that nodes often have attributes not associated with adjacency. For example, in social networks, nodes may contain profiles with a user's age or gender; similarly, nodes in molecules may be labeled by atomic numbers. Such attributes are not captured by the chemical or CKS distances. However, in such cases, only label-preserving permutations $P$ may make sense (e.g., mapping females to females, oxygens to oxygens, etc.). Incorporating attributes while preserving the metric property is thus important from a practical perspective.

Contributions
We seek generalizations of the chemical and CKS distances that (a) satisfy the metric property and (b) are tractable: by this, we mean that they can be computed either by solving a convex optimization problem or by a polynomial-time algorithm. Specifically, we study generalizations of (1) of the form:

$$d_S(A, B) = \min_{P \in S} \|AP - PB\|, \qquad (2)$$

where $S \subset \mathbb{R}^{n \times n}$ is closed and bounded, $\|\cdot\|$ is a matrix norm, and $A, B \in \mathbb{R}^{n \times n}$ are arbitrary real matrices (representing adjacency, path distances, weights, etc.). We make the following contributions:

• We prove sufficient conditions on $S$ and norm $\|\cdot\|$ under which (2) is a pseudometric, i.e., a metric over a quotient space defined by the equivalence relation $d_S(A, B) = 0$. In particular, we show that $d_S$ is a pseudometric when: (i) $S = P_n$ and $\|\cdot\|$ is any entry-wise or operator norm; (ii) $S = W_n$, the set of doubly stochastic matrices, $\|\cdot\|$ is an arbitrary entry-wise norm, and $A, B$ are symmetric; a modification of $d_S$ extends this result to both operator norms and arbitrary matrices (capturing, e.g., directed graphs); and (iii) $S = O_n$, the set of orthogonal matrices, and $\|\cdot\|$ is the operator or entry-wise 2-norm. We also characterize the corresponding equivalence classes (see Sec. 3). Relaxations (ii) and (iii) are very important from a practical standpoint. For all matrix norms, computing (2) with $S = W_n$ is tractable, as it is a convex optimization problem. For $S = O_n$, (2) is non-convex but still tractable, as it reduces to a spectral decomposition. This was known for the Frobenius norm [17]; we prove this is also the case for the operator 2-norm.
• We include node attributes in a natural way in the definition of $d_S$, as either soft constraints (i.e., penalties in the objective) or hard constraints in Eq. (2). Crucially, we do this without affecting the pseudometric property or tractability. This allows us to explore label- or feature-preserving permutations that incorporate both (a) exogenous node attributes, such as user age or gender in a social network, and (b) endogenous, structural features of each node, such as its degree or the number of triangles that pass through it. We numerically show that adding these constraints can speed up the computation of $d_S$. From an experimental standpoint, we extensively compare our tractable metrics to several existing heuristic approximations. We also demonstrate the tractability of our metrics by parallelizing their execution using the Alternating Direction Method of Multipliers (ADMM) [18], which we implement over a compute cluster using Apache Spark [19].

Related Work
Graph distance (or similarity) scores find applications in varied fields such as image processing [1], chemistry [10,20], and social network analysis [2,3]. Graph distances are easy to define when, contrary to our setting, the correspondence between graph nodes is known, i.e., graphs are labeled [3,21,22]. Beyond the chemical distance, classic examples of distances between unlabeled graphs are the edit distance [23,24] and the maximum common subgraph distance [25,26], both of which also have versions for labeled graphs. Both are pseudometrics and are hard to compute, while existing heuristics [27,28] do not satisfy the triangle inequality. The reaction distance [29] is also a pseudometric, and is directly related to the chemical distance [10] when edits are restricted to edge additions and deletions. Jain [30] also considers an extension of the chemical distance, limited to the Frobenius norm, that incorporates edge attributes. However, it is not immediately clear how to relax the above pseudometrics [29,30] to attain tractability while keeping the pseudometric property.
A pseudometric can also be induced by embedding graphs in a metric space and measuring the distance between embeddings [31][32][33]. Several works follow such an approach, mapping graphs, e.g., to spaces determined by their spectral decomposition [34][35][36]. In contrast to our pseudometrics, such approaches are generally less discriminative, as embeddings summarize graph structure. Continuous relaxations of graph isomorphism, both convex and non-convex [15,17,37], have found applications in a variety of contexts, including social networks [38], computer vision [39], shape detection [40,41], and neuroscience [42]. Lyzinski et al. [15] in particular show (both theoretically and experimentally) that a non-convex relaxation is advantageous over one of the relaxations we consider here (namely, $d_S$ with $S = W_n$ and $\|\cdot\| = \|\cdot\|_F$) in recovering the optimal permutation $P$. They also incorporate features via a trace penalty, as we do in Sec. 3.2 (cf. Eq. (17)). None of the above works, however, focus on the metric properties of the resulting relaxations, which several fail to satisfy [15,38,[40][41][42]].
Metrics naturally arise in data mining tasks, including clustering [43,44], Nearest Neighbor (NN) search [4][5][6], and outlier detection [8]. Some of these tasks become tractable, or admit formal guarantees, precisely when performed over a metric space. For example, finding the nearest neighbor [4][5][6] or the diameter of a dataset [9] becomes polylogarithmic under metric assumptions; similarly, approximation algorithms for clustering (which is NP-hard) rely on metric assumptions, whose absence leads to a deterioration of known bounds [7]. Our search for metrics is motivated by these considerations.
The present paper is an extended version of a paper by the same authors that appeared in the 2018 SIAM International Conference on Data Mining [45], which did not contain any proofs. In addition to the material included in the conference version, the present paper contains (a) proofs of all main theorems, establishing sufficient conditions under which a solution to (2) yields a pseudometric, (b) a polynomial-time spectral algorithm for computing (2) over the Stiefel manifold, (c) extensions of our metrics to graphs of unequal sizes, and (d) an extended experiments section.

Notation and Preliminaries
We begin by introducing some terminology that we use throughout the paper. A summary of our notation can be found in Table 1.
Graphs. We represent an undirected unweighted graph $G(V, E)$ with node set $V = [n] \equiv \{1, \ldots, n\}$ and edge set $E \subseteq [n] \times [n]$ by its adjacency matrix $A = [a_{ij}]_{i,j \in [n]} \in \{0,1\}^{n \times n}$, where $a_{ij} = a_{ji} = 1$ if and only if $(i, j) \in E$. In particular, $A$ is symmetric, i.e., $A = A^\top$. We denote the set of all real, symmetric $n \times n$ matrices by $S_n$. Directed unweighted graphs are represented by (possibly non-symmetric) binary matrices $A \in \{0,1\}^{n \times n}$, and weighted graphs by real matrices $A \in \mathbb{R}^{n \times n}$.
Matrix Norms. Given a matrix $A = [a_{ij}]_{i,j \in [n]} \in \mathbb{R}^{n \times n}$ and a $p \in \mathbb{N}_+ \cup \{\infty\}$, its induced or operator $p$-norm is defined in terms of the vector $p$-norm through $\|A\|_p = \sup_{x \in \mathbb{R}^n : \|x\|_p = 1} \|Ax\|_p$, while its entry-wise $p$-norm is given by $\|A\|_p = \big(\sum_{i=1}^n \sum_{j=1}^n |a_{ij}|^p\big)^{1/p}$ for $p \in \mathbb{N}_+$, and $\|A\|_\infty = \max_{i,j} |a_{ij}|$. We denote the entry-wise 2-norm (i.e., the Frobenius norm) by $\|\cdot\|_F$. Recall that all matrix norms on $\mathbb{R}^{n \times n}$ are equivalent: given two norms $\|\cdot\|$ and $\|\cdot\|'$, there exist constants $c_1, c_2 > 0$ such that

$$c_1 \|A\|' \leq \|A\| \leq c_2 \|A\|' \quad \text{for all } A \in \mathbb{R}^{n \times n}. \qquad (3)$$

Permutation, Doubly Stochastic, and Orthogonal Matrices. We denote the set of permutation matrices by

$$P_n = \{P \in \{0,1\}^{n \times n} : P\mathbf{1} = P^\top \mathbf{1} = \mathbf{1}\}, \qquad (4)$$

the set of doubly stochastic matrices (i.e., the Birkhoff polytope) by

$$W_n = \{W \in [0,1]^{n \times n} : W\mathbf{1} = W^\top \mathbf{1} = \mathbf{1}\}, \qquad (5)$$

and the set of orthogonal matrices (i.e., the Stiefel manifold) by

$$O_n = \{U \in \mathbb{R}^{n \times n} : UU^\top = U^\top U = I\}. \qquad (6)$$

Note that

$$P_n = W_n \cap O_n, \qquad (7)$$

i.e., permutation matrices are precisely the matrices that are both doubly stochastic and orthogonal. Moreover, the Birkhoff-von Neumann Theorem [46] states that

$$W_n = \operatorname{conv}(P_n), \qquad (8)$$

i.e., the Birkhoff polytope is the convex hull of $P_n$. Hence, every doubly stochastic matrix can be written as a convex combination of permutation matrices.
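As an aside, the distinction between operator and entry-wise norms is easy to see numerically; the snippet below (an illustration, not part of the paper's code) computes both families for a small matrix with numpy.

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

# Operator (induced) norms.
op1   = np.linalg.norm(A, 1)       # max absolute column sum = 6
op2   = np.linalg.norm(A, 2)       # largest singular value
opinf = np.linalg.norm(A, np.inf)  # max absolute row sum = 7

# Entry-wise norms: vector norms applied to the flattened matrix.
ent1   = np.abs(A).sum()           # entry-wise 1-norm = 10
fro    = np.linalg.norm(A, "fro")  # entry-wise 2-norm (Frobenius)
entinf = np.abs(A).max()           # entry-wise infinity-norm = 4

print(op1, op2, opinf, ent1, fro, entinf)
```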
Metrics. Given a set $\Omega$, a function $d : \Omega \times \Omega \to \mathbb{R}$ is called a metric, and the pair $(\Omega, d)$ is called a metric space, if for all $x, y, z \in \Omega$:

$$d(x, y) \geq 0, \qquad \text{(non-negativity)} \qquad (9a)$$
$$d(x, y) = 0 \iff x = y, \qquad \text{(positive definiteness)} \qquad (9b)$$
$$d(x, y) = d(y, x), \qquad \text{(symmetry)} \qquad (9c)$$
$$d(x, z) \leq d(x, y) + d(y, z). \qquad \text{(triangle inequality)} \qquad (9d)$$

A function $d$ is called a pseudometric if it satisfies (9a), (9c), and (9d), but the positive definiteness property (9b), also known as the identity of indiscernibles, is replaced by the (weaker) property:

$$d(x, x) = 0 \quad \text{for all } x \in \Omega. \qquad (9e)$$

A function satisfying all of the above except symmetry (9c) is a quasimetric; its symmetric extension

$$\tilde{d}(x, y) = \tfrac{1}{2}\left(d(x, y) + d(y, x)\right) \qquad (10)$$

then satisfies (9c) as well.

Graph Isomorphism, Chemical Distance, and CKS Distance. Let $A, B \in \mathbb{R}^{n \times n}$ be the adjacency matrices of two graphs $G_A$ and $G_B$. Then, $G_A$ and $G_B$ are isomorphic if and only if there exists $P \in P_n$ s.t.

$$A = PBP^\top, \quad \text{or, equivalently,} \quad AP = PB. \qquad (11)$$

The chemical distance, given by (1), extends the second relationship in (11) to capture distances between graphs. More generally, let $\|\cdot\|$ be a matrix norm on $\mathbb{R}^{n \times n}$. For some $\Omega \subseteq \mathbb{R}^{n \times n}$, define $d_S : \Omega \times \Omega \to \mathbb{R}_+$ as:

$$d_S(A, B) = \min_{P \in S} \|AP - PB\|, \qquad (12)$$

where $S \subset \mathbb{R}^{n \times n}$ is a closed and bounded set, so that the infimum is indeed attained. Note that $d_S$ is the chemical distance (1) when $\Omega = \mathbb{R}^{n \times n}$, $S = P_n$, and $\|\cdot\| = \|\cdot\|_F$. The Chartrand-Kubicki-Schultz (CKS) distance [11] can also be defined in terms of (12), with matrices $A, B$ containing pairwise path distances between any two nodes; equivalently, CKS is the chemical distance of two weighted complete graphs with path distances as edge weights.
The Weisfeiler-Lehman (WL) Algorithm. The WL algorithm [47] is a heuristic for solving the graph isomorphism problem. We use this algorithm (a) to describe the quotient space over which (12) is a metric when $S = W_n$ (see Sec. 3), and (b) to generate node embeddings in our experiments (see Sec. 6).
To gain some intuition on the algorithm, note that two isomorphic graphs must have the same degree distribution. More broadly, the distributions of $k$-hop neighborhoods in the two graphs must also be identical. Building on this observation, to test if two undirected, unweighted graphs are isomorphic, WL colors the nodes of a graph $G(V, E)$ iteratively. At iteration 0, each node $v \in V$ receives the same color $c_0(v) := 1$. Colors at iteration $k + 1 \in \mathbb{N}$ are defined recursively via

$$c_{k+1}(v) := \operatorname{hash}\big(c_k(v), \operatorname{clist}_v^k\big), \qquad (13)$$

where hash is a perfect hash function, and

$$\operatorname{clist}_v^k = \big[c_k(u) : u \in N(v)\big] \qquad (14)$$

is a (sorted) list containing the colors of all of $v$'s neighbors $u \in N(v)$ at iteration $k$.
Intuitively, two nodes in $V$ share the same color after $k$ iterations if their $k$-hop neighborhoods are isomorphic. WL terminates when the partition of $V$ induced by the colors is stable from one iteration to the next. This coloring extends to weighted directed graphs by appending weights and directions to the colors in $\operatorname{clist}_v^k$. After coloring two graphs $G_A$, $G_B$, WL declares a non-isomorphism if their color distributions differ. If not, the graphs may be isomorphic, and WL yields a set of constraints on candidate isomorphisms: a permutation $P$ under which $AP = PB$ must map nodes in $G_A$ to nodes in $G_B$ of the same color.
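A minimal Python sketch of the refinement just described (our illustration; the dictionary-based canonical relabeling stands in for the perfect hash) is the following:

```python
from collections import Counter

def wl_colors(adj, max_iters=100):
    """Weisfeiler-Lehman color refinement. `adj` maps each node to its
    list of neighbors; a canonical relabeling of (color, neighbor-color)
    signatures stands in for a perfect hash."""
    color = {v: 0 for v in adj}                        # c_0(v): same color
    for _ in range(max_iters):
        # c_{k+1}(v) := hash(c_k(v), sorted colors of v's neighbors)
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        table = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: table[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(color.values())):
            return new                                 # partition is stable
        color = new
    return color

# Isomorphism pre-test: color the disjoint union so color IDs are shared,
# then compare the per-graph color histograms.
G1 = {0: [1], 1: [0, 2], 2: [1]}                 # path 0-1-2
G2 = {3: [4, 5], 4: [3], 5: [3]}                 # same path, relabeled
c = wl_colors({**G1, **G2})
h1 = Counter(c[v] for v in G1)
h2 = Counter(c[v] for v in G2)
print(h1 == h2)                                  # True: may be isomorphic
```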

Main Results
Motivated by the chemical and CKS distances, we establish general conditions on $S$ and $\|\cdot\|$ under which $d_S$ is a metric over $\Omega$, for arbitrary weighted graphs. For concreteness, we focus here on distances between graphs of equal size. Extensions to graphs of unequal size are described in Sec. 5.

A Family of Graph Metrics
Optimization over Permutation Matrices. Our first result establishes that $d_{P_n}$ is a pseudometric over all weighted graphs when $\|\cdot\|$ is an arbitrary entry-wise or operator norm.
Theorem 1 If $S = P_n$ and $\|\cdot\|$ is an arbitrary entry-wise or operator norm, then $d_S$ given by (12) is a pseudometric over $\Omega = \mathbb{R}^{n \times n}$.
Hence, $d_{P_n}$ is a pseudometric under any entry-wise or operator norm over arbitrary directed, weighted graphs.
Optimization over the Birkhoff Polytope. Our second result states that the pseudometric property extends to the relaxed version of the chemical distance, in which permutations are replaced by doubly stochastic matrices.
Theorem 2 If $S = W_n$ and $\|\cdot\|$ is an arbitrary entry-wise norm, then $d_S$ given by (12) is a pseudometric over $\Omega = S_n$. If $\|\cdot\|$ is an arbitrary entry-wise or operator norm, then its symmetric extension $\tilde{d}_S(A, B) = \tfrac{1}{2}\left(d_S(A, B) + d_S(B, A)\right)$ is a pseudometric over $\Omega = \mathbb{R}^{n \times n}$.

Hence, if $S = W_n$ and $\|\cdot\|$ is an arbitrary entry-wise norm, then (12) defines a pseudometric over undirected graphs. The symmetry property (9c) breaks if $\|\cdot\|$ is an operator norm or graphs are directed. In both of these cases $d_S$ is a quasimetric over the quotient space $\Omega/\!\sim_{d_S}$, and symmetry is attained via the symmetric extension $\tilde{d}_S$.
Theorem 2 has significant practical implications. In contrast to $d_{P_n}$ and its extensions implied by Theorem 1, computing $d_{W_n}$ under any operator or entry-wise norm is tractable, in the sense that it involves minimizing a convex function subject to linear constraints [48][49][50].
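For illustration, a minimal Python/cvxpy sketch of this convex program (ours, not the paper's CVX/C/Spark implementations described in Sec. 6) is:

```python
import cvxpy as cp
import numpy as np

def d_birkhoff(A, B, norm="fro"):
    """Sketch of the convex relaxation d_{W_n}(A, B) of Theorem 2:
    minimize ||AP - PB|| over the Birkhoff polytope W_n."""
    n = A.shape[0]
    P = cp.Variable((n, n), nonneg=True)
    cons = [cp.sum(P, axis=0) == 1,  # columns sum to 1
            cp.sum(P, axis=1) == 1]  # rows sum to 1
    R = A @ P - P @ B
    obj = cp.norm(R, "fro") if norm == "fro" else cp.sum(cp.abs(R))
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.value

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
B = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # isomorphic to A
print(round(d_birkhoff(A, B), 6))  # ~0
```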

Optimization over the Stiefel Manifold.
A more limited result applies to the case where $S$ is the Stiefel manifold $O_n$:

Theorem 3 If $S = O_n$ and $\|\cdot\|$ is either the operator (i.e., spectral) or the entry-wise (i.e., Frobenius) 2-norm, then $d_S$ given by (12) is a pseudometric over $\Omega = \mathbb{R}^{n \times n}$.
Though (12) is not a convex problem when $S = O_n$, it is still tractable. Umeyama [17] shows that the optimization can be solved exactly when $\|\cdot\| = \|\cdot\|_F$ and $\Omega = S_n$ (i.e., for undirected graphs) by performing a spectral decomposition of $A$ and $B$. We extend this result, showing that the same procedure also applies when $\|\cdot\|$ is the operator 2-norm (see Thm. 7 in Sec. 4.3). In the general case of directed graphs, (12) is a classic example of a problem that can be solved through optimization on manifolds [51].
Equivalence Classes. Observe that the equivalence of matrix norms, as stated by Eq. (3), implies that if $d_S(A, B) = 0$ for one matrix norm $\|\cdot\|$ in (12), it is zero for all. As a result, the pseudometrics $d_S$ defined through (12) for a given $S$ share the same quotient space $\Omega/\!\sim_{d_S}$, irrespective of the norm $\|\cdot\|$. We therefore turn our attention to characterizing this quotient space in the three cases where $S$ is the set of permutation, doubly stochastic, and orthogonal matrices.
When $S = P_n$, $\Omega/\!\sim_{d_{P_n}}$ is the quotient space defined by graph isomorphism: any two adjacency matrices $A, B \in \mathbb{R}^{n \times n}$ satisfy $d_{P_n}(A, B) = 0$ if and only if their (possibly weighted) graphs are isomorphic.
When $S = W_n$, the quotient space $\Omega/\!\sim_{d_{W_n}}$ has a connection to the Weisfeiler-Lehman (WL) algorithm [47] described in Section 2: Ramana et al. [52] show that $d_{W_n}(A, B) = 0$ if and only if $G_A$ and $G_B$ receive identical colors by the WL algorithm (see also [53] for another characterization of this quotient space). This equivalence relation is sometimes called fractional linear isomorphism [52].
Finally, if $S = O_n$ and $\Omega = S_n$, i.e., graphs are undirected, the corresponding quotient space is a refinement of the quotient space of co-spectrality.

Incorporating Metric Embeddings
We have seen that the chemical distance $d_{P_n}$ can be relaxed to $d_{W_n}$ or $d_{O_n}$, gaining tractability while still maintaining the metric property. In practice, nodes in a graph often carry additional attributes that one might wish to leverage when computing distances. In this section, we show that such attributes can be seamlessly incorporated into $d_S$ as either soft or hard constraints, without violating the metric property.

Metric Embeddings. Given a graph $G_A$ of size $n$, a metric embedding of $G_A$ is a mapping $\psi_A : [n] \to \tilde{\Omega}$ from the nodes of the graph to a metric space $(\tilde{\Omega}, \tilde{d})$. That is, $\psi_A$ maps nodes of the graph to $\tilde{\Omega}$, where $\tilde{\Omega}$ is endowed with a metric $\tilde{d}$. We refer to a graph endowed with an embedding $\psi_A$ as an embedded graph, and denote it by $(A, \psi_A)$, where $A \in \mathbb{R}^{n \times n}$ is the adjacency matrix of $G_A$. We list two examples:

Example 1: Node Attributes. Consider an embedding of a graph into $(\mathbb{R}^k, \|\cdot\|_2)$ in which every node $v \in V$ is mapped to a $k$-dimensional vector describing "local" attributes. These can be exogenous, e.g., features extracted from a user's profile (age, binarized gender, etc.) in a social network. Alternatively, attributes may be endogenous or structural, extracted from the adjacency matrix $A$, e.g., the node's degree, the size of its $k$-hop neighborhood, its page-rank, etc.
Example 2: Node Colors. Let $\tilde{\Omega}$ be an arbitrary finite set endowed with the Kronecker delta as a metric, that is, for $s, s' \in \tilde{\Omega}$,

$$\tilde{d}(s, s') = \begin{cases} 0, & \text{if } s = s', \\ 1, & \text{otherwise.} \end{cases} \qquad (15)$$

Given a graph $G_A$, a mapping $\psi_A : [n] \to \tilde{\Omega}$ is then a metric embedding. The values of $\tilde{\Omega}$ are invariably called colors or labels, and a graph embedded in $\tilde{\Omega}$ is a colored or labeled graph. Colors can again be exogenous or structural: e.g., if the graph represents an organic molecule, colors can correspond to atoms, while structural colors can be, e.g., the output of the WL algorithm (see Section 2) after $k$ iterations.
As discussed below, node attributes translate to soft constraints in metric (12), while node colors correspond to hard constraints. The unified view through embeddings allows us to establish metric properties for both simultaneously (cf. Theorems 4 and 5).
Embedding Distance. Consider two graphs $(A, \psi_A)$, $(B, \psi_B)$ of size $n$ that are embedded in the same metric space $(\tilde{\Omega}, \tilde{d})$. For $u \in [n]$, a node in the first graph, and $v \in [n]$, a node in the second graph, the embedded distance between the two nodes is given by $\tilde{d}(\psi_A(u), \psi_B(v))$. We denote by $D_{\psi_A, \psi_B} = [\tilde{d}(\psi_A(u), \psi_B(v))]_{u,v \in [n]} \in \mathbb{R}_+^{n \times n}$ the corresponding matrix of embedded distances. After mapping nodes to the same metric space, it is natural to seek a $P \in P_n$ that preserves the embedding distance. This amounts to finding a $P \in P_n$ that minimizes:

$$\operatorname{tr}\left(P^\top D_{\psi_A, \psi_B}\right) = \sum_{u,v \in [n]} P_{uv}\, \tilde{d}(\psi_A(u), \psi_B(v)). \qquad (16)$$

Note that, in the case of colored graphs and the Kronecker delta distance, minimizing (16) finds a $P \in P_n$ that maps nodes in $A$ to nodes in $B$ of equal color. It is not hard to verify that $\min_{P \in P_n} \operatorname{tr}(P^\top D_{\psi_A, \psi_B})$ induces a metric between graphs embedded in $(\tilde{\Omega}, \tilde{d})$. In fact, this follows from the more general theorem we prove below (Theorem 4) for $A = B = 0$, i.e., for distances between embedded graphs with no edges. Despite the combinatorial nature of $P_n$, minimizing (16) over $P_n$ is an assignment (maximum weight matching) problem, which can be solved through, e.g., the Hungarian algorithm [54] in $O(n^3)$ time [55]. We note that this metric is not as expressive as (12): depending on the definition of the embeddings $\psi_A$, $\psi_B$, attributes may only capture "local" similarities between nodes, as opposed to the "global" view of a mapping attained by (12).
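For instance, assuming embeddings into $(\mathbb{R}^k, \|\cdot\|_2)$, the following sketch (ours) minimizes (16) with SciPy's Hungarian-algorithm implementation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def embedding_distance(psi_A, psi_B):
    """Minimize (16): min over P in P_n of tr(P^T D), i.e., a
    minimum-cost assignment over the matrix D of pairwise embedding
    distances. Each row of psi_A / psi_B is one node's attribute vector."""
    D = np.linalg.norm(psi_A[:, None, :] - psi_B[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(D)      # Hungarian algorithm
    return D[rows, cols].sum(), cols           # cost and node mapping

# Toy example: 3 nodes per graph, 2-dimensional attribute vectors.
psi_A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
psi_B = np.array([[0.9, 0.1], [0.1, 1.1], [0.0, 0.0]])
cost, mapping = embedding_distance(psi_A, psi_B)
print(cost, mapping)  # maps each node in A to its closest match in B
```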
A Unified, Tractable Metric. Motivated by the above considerations, we focus on unifying the "global" metric (12) with the "local" metrics induced by arbitrary graph embeddings. Given a metric space $(\tilde{\Omega}, \tilde{d})$, let $\Psi^n_{\tilde{\Omega}} = \{\psi : [n] \to \tilde{\Omega}\}$ be the set of all mappings from $[n]$ to $\tilde{\Omega}$. Then, given two embedded graphs $(A, \psi_A), (B, \psi_B) \in \mathbb{R}^{n \times n} \times \Psi^n_{\tilde{\Omega}}$, we define:

$$d_S\left((A, \psi_A), (B, \psi_B)\right) = \min_{P \in S} \left[\|AP - PB\| + \operatorname{tr}\left(P^\top D_{\psi_A, \psi_B}\right)\right], \qquad (17)$$

for some compact set $S \subset \mathbb{R}^{n \times n}$ and matrix norm $\|\cdot\|$. Our next result states that incorporating this linear term does not affect the pseudometric property of $d_S$.
Theorem 4 If $S = P_n$ and $\|\cdot\|$ is an arbitrary entry-wise or operator norm, then $d_S$ given by (17) is a pseudometric over the set of embedded graphs $\Omega = \mathbb{R}^{n \times n} \times \Psi^n_{\tilde{\Omega}}$.

We stress here that this result is non-obvious: it is not true that adding any linear term to $d_S$ leads to a quantity that satisfies the triangle inequality. It is precisely because $D_{\psi_A, \psi_B}$ contains pairwise distances that Theorem 4 holds. We can similarly extend Theorem 2:

Theorem 5 If $S = W_n$ and $\|\cdot\|$ is an arbitrary entry-wise norm, then $d_S$ given by (17) is a pseudometric over $\Omega = S_n \times \Psi^n_{\tilde{\Omega}}$, the set of symmetric graphs embedded in $(\tilde{\Omega}, \tilde{d})$. Moreover, if $\|\cdot\|$ is an arbitrary entry-wise or operator norm, then the symmetric extension $\tilde{d}_S$ of (17) is a pseudometric over $\Omega = \mathbb{R}^{n \times n} \times \Psi^n_{\tilde{\Omega}}$.

Adding the linear term (16) to $d_S$ has significant practical advantages. Beyond expressing exogenous attributes, a linear term involving colors, combined with a Kronecker distance, translates into hard constraints: any permutation attaining a finite objective value must map nodes in one graph to nodes of the same color. Theorem 5 therefore implies that such constraints can be added to the optimization problem while maintaining the metric property. In practice, as the number of variables in optimization problem (12) is $n^2$, incorporating such hard constraints can significantly reduce the problem's computation time; we illustrate this in Section 6. Note that adding (16) to $d_{O_n}$ does not preserve the metric property.
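As an illustration of Theorem 5, the cvxpy sketch below (ours; the `hard` flag is a hypothetical convenience) augments the Birkhoff-polytope relaxation with the linear term of (17), optionally turning color mismatches into hard constraints $P_{uv} = 0$:

```python
import cvxpy as cp
import numpy as np

def d_embedded(A, B, D, hard=False):
    """Sketch of (17) over the Birkhoff polytope: Frobenius-norm
    objective plus the linear embedding-distance term tr(P^T D).
    With a Kronecker-delta (color) distance, D[u, v] = 1 marks a
    color mismatch; hard=True forbids such mappings outright."""
    n = A.shape[0]
    P = cp.Variable((n, n), nonneg=True)
    cons = [cp.sum(P, axis=0) == 1, cp.sum(P, axis=1) == 1]
    if hard:  # P[u, v] = 0 wherever the embedded distance is positive
        cons += [P[u, v] == 0 for u, v in zip(*np.nonzero(D))]
    obj = cp.norm(A @ P - P @ B, "fro") + cp.sum(cp.multiply(P, D))
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.value, P.value
```

Note that the hard version simply removes variables from the problem, which is the source of the speed-ups reported in Section 6.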

Proof of Theorems 1-3.
We define several properties that play a crucial role in our proofs.

Definition 1 We say that a set $S \subseteq \mathbb{R}^{n \times n}$ is closed under multiplication if $P, P' \in S$ implies that $PP' \in S$. We say that $S \subseteq \mathbb{R}^{n \times n}$ is closed under transposition if $P \in S$ implies that $P^\top \in S$, and closed under inversion if $P \in S$ implies that $P^{-1} \in S$. Given a matrix norm $\|\cdot\|$, we say that $S \subseteq \mathbb{R}^{n \times n}$ is contractive w.r.t. $\|\cdot\|$ if $\|AP\| \leq \|A\|$ and $\|PA\| \leq \|A\|$ for all $P \in S$ and $A \in \mathbb{R}^{n \times n}$. Put differently, $S$ is contractive if and only if every linear transform $P \in S$ is a contraction w.r.t. $\|\cdot\|$.

The proofs of Theorems 1-3 rely on several common lemmas. The first three establish conditions under which (12) satisfies the triangle inequality (9d), symmetry (9c), and the weak property (9e), respectively.

Lemma 1 Given a matrix norm $\|\cdot\|$, suppose that the set $S \subseteq \mathbb{R}^{n \times n}$ is (a) contractive w.r.t. $\|\cdot\|$, and (b) closed under multiplication. Then, for any $A, B, C \in \mathbb{R}^{n \times n}$, $d_S$ given by (12) satisfies $d_S(A, C) \leq d_S(A, B) + d_S(B, C)$.

Proof Consider $P \in \arg\min_{P \in S} \|AP - PB\|$ and $P' \in \arg\min_{P \in S} \|BP - PC\|$. Then, from closure under multiplication, $PP' \in S$. Hence,

$$d_S(A, C) \leq \|APP' - PP'C\| = \|APP' - PBP' + PBP' - PP'C\| \leq \|(AP - PB)P'\| + \|P(BP' - P'C)\| \leq \|AP - PB\| + \|BP' - P'C\| = d_S(A, B) + d_S(B, C),$$

where the last inequality follows from the fact that $P$ and $P'$ are contractions.

Lemma 2 Given a matrix norm $\|\cdot\|$, suppose that $S \subset \mathbb{R}^{n \times n}$ is (a) contractive w.r.t. $\|\cdot\|$, and (b) closed under inversion. Then, for all $A, B \in \mathbb{R}^{n \times n}$, $d_S(A, B) = d_S(B, A)$.

Proof Observe that property (b) implies that, for all $P \in S$, $P$ is invertible and $P^{-1} \in S$. Hence, $\|AP - PB\| = \|P(P^{-1}A - BP^{-1})P\| \leq \|BP^{-1} - P^{-1}A\|$, as $P$ is a contraction w.r.t. $\|\cdot\|$. We can similarly show that $\|BP^{-1} - P^{-1}A\| \leq \|AP - PB\|$, hence $\|AP - PB\| = \|BP^{-1} - P^{-1}A\|$. As $S$ is closed under inversion, $\min_{P \in S} f(P) = \min_{P : P^{-1} \in S} f(P)$ for every $f : S \to \mathbb{R}$. Hence, $d_S(A, B) = \min_{P \in S} \|BP^{-1} - P^{-1}A\| = \min_{P \in S} \|BP - PA\| = d_S(B, A)$.

Lemma 3 If $I \in S$, then $d_S(A, A) = 0$ for all $A \in \mathbb{R}^{n \times n}$.

Proof Indeed, if $I \in S$, then $0 \leq d_S(A, A) \leq \|AI - IA\| = 0$.

Both the set of permutation matrices $P_n$ and the Stiefel manifold $O_n$ are groups w.r.t. matrix multiplication: they are closed under multiplication, contain the identity $I$, and are closed under inversion. Hence, if they are also contractive w.r.t. a matrix norm $\|\cdot\|$, the distances $d_{P_n}$ and $d_{O_n}$ defined in terms of this norm satisfy all assumptions of Lemmas 1-3. We therefore turn our attention to this property.

Lemma 4 Let $\|\cdot\|$ be any operator or entry-wise norm. Then, $S = P_n$ is contractive w.r.t. $\|\cdot\|$.

Proof Observe first that all vector $p$-norms are invariant to permutations of a vector's entries; hence, for any vector $x \in \mathbb{R}^n$ and any $P \in P_n$, $\|Px\|_p = \|x\|_p$. Hence, if $\|\cdot\|$ is an operator $p$-norm, $\|P\| = 1$ for all $P \in P_n$. Every operator norm is submultiplicative; as a result, $\|PA\| \leq \|P\|\|A\| = \|A\|$ and, similarly, $\|AP\| \leq \|A\|$, so the lemma follows for operator norms. On the other hand, if $\|\cdot\|$ is an entry-wise norm, then $\|A\|$ is invariant to permutations of either $A$'s rows or columns. The matrices $PA$ and $AP$ amount precisely to such permutations, so $\|PA\| = \|AP\| = \|A\|$, and the lemma follows also for entry-wise norms.
Hence, Theorem 1 follows as a direct corollary of Lemmas 1-4. Indeed, $d_{P_n}$ is non-negative, symmetric by Lemmas 2 and 4, satisfies the triangle inequality by Lemmas 1 and 4, as well as property (9e) by Lemma 3; hence $d_{P_n}$ is a pseudometric over $\mathbb{R}^{n \times n}$. Our next lemma shows that the Stiefel manifold $O_n$ is contractive for 2-norms:

Lemma 5 Let $\|\cdot\|$ be the operator (i.e., spectral) or the entry-wise (i.e., Frobenius) 2-norm. Then, $S = O_n$ is contractive w.r.t. $\|\cdot\|$.

Proof Any $U \in O_n$ is an orthogonal matrix; hence $\|U\|_2 = 1$, and since the operator 2-norm is submultiplicative, $\|UA\|_2 \leq \|U\|_2 \|A\|_2 = \|A\|_2$ and, similarly, $\|AU\|_2 \leq \|A\|_2$. For the Frobenius norm, orthogonality yields $\|UA\|_F^2 = \operatorname{tr}(A^\top U^\top U A) = \operatorname{tr}(A^\top A) = \|A\|_F^2$, and similarly $\|AU\|_F = \|A\|_F$.
Note that an alternative proof can be obtained from the fact that both norms are unitarily invariant (see Lemma 12). Theorem 3 therefore follows from Lemmas 1-3 and Lemma 5, along with the fact that $O_n$ is a group. Note that $O_n$ is not contractive w.r.t. other norms, e.g., $\|\cdot\|_1$ or $\|\cdot\|_\infty$.
To prove Theorem 2, we first show that Lemma 4, along with the Birkhoff-von Neumann theorem, implies that $W_n$ is also contractive:

Lemma 6 Let $\|\cdot\|$ be any operator or entry-wise norm. Then, $W_n$ is contractive w.r.t. $\|\cdot\|$.
Proof By the Birkhoff-von Neumann theorem [46], $W_n = \operatorname{conv}(P_n)$. Hence, for any $W \in W_n$ there exist $P_i \in P_n$ and $\theta_i > 0$, $i = 1, \ldots, k$, such that $W = \sum_{i=1}^k \theta_i P_i$ and $\sum_{i=1}^k \theta_i = 1$. Both operator and entry-wise $p$-norms are convex functions; hence, for any $A \in \mathbb{R}^{n \times n}$:

$$\|WA\| = \Big\|\sum_{i=1}^k \theta_i P_i A\Big\| \leq \sum_{i=1}^k \theta_i \|P_i A\| = \|A\|,$$

where the last equality follows from Lemma 4. The statement $\|AW\| \leq \|A\|$ follows similarly.
Unfortunately, the Birkhoff polytope $W_n$ is not a group, as it is not closed under inversion. Nevertheless, it is closed under transposition; in establishing the (partial) symmetry of $d_{W_n}$, we leverage the following lemma:

Lemma 7 Suppose that $\|\cdot\|$ is transpose-invariant, and $S \subseteq \mathbb{R}^{n \times n}$ is closed under transposition. Then, $d_S(A, B) = d_S(B, A)$ for all $A, B \in S_n$.

Proof By transpose invariance and the symmetry of $A$ and $B$, we have $\|AP - PB\| = \|(AP - PB)^\top\| = \|BP^\top - P^\top A\|$. Moreover, as $S$ is closed under transposition, $\min_{P \in S} f(P) = \min_{P : P^\top \in S} f(P)$ for every $f : S \to \mathbb{R}$. Hence, $d_S(A, B) = \min_{P \in S} \|BP^\top - P^\top A\| = \min_{P \in S} \|BP - PA\| = d_S(B, A)$.

The first part of Theorem 2 therefore follows from Lemmas 1, 3, 6, and 7: this is because $W_n$ contains the identity $I$ and is closed under both transposition and multiplication, while all entry-wise norms are transpose-invariant.
To prove the second part, observe that operator norms are not, in general, transpose-invariant. However, if $\|\cdot\|$ is an operator norm, or $\Omega = \mathbb{R}^{n \times n}$, Lemmas 6 and 1 imply that $d_{W_n}$ satisfies non-negativity (9a) and the triangle inequality (9d), while Lemma 3 implies that it satisfies (9e). These properties are inherited by the extension $\tilde{d}_S$, given by (10), which also satisfies symmetry (9c); the second part of Theorem 2 follows.

Proof of Theorems 4 and 5
We begin by establishing conditions under which $d_S$ given by (17) satisfies the triangle inequality (9d). We note that, in contrast to Lemma 1, we require the additional condition that $S \subseteq W_n$, which is not satisfied by $O_n$.

Lemma 8 Given a norm $\|\cdot\|$, suppose that $S \subseteq \mathbb{R}^{n \times n}$ is (a) contractive w.r.t. $\|\cdot\|$, (b) closed under multiplication, and (c) a subset of $W_n$, i.e., contains only doubly stochastic matrices. Then, for any embedded graphs $(A, \psi_A)$, $(B, \psi_B)$, $(C, \psi_C)$, $d_S$ given by (17) satisfies $d_S((A, \psi_A), (C, \psi_C)) \leq d_S((A, \psi_A), (B, \psi_B)) + d_S((B, \psi_B), (C, \psi_C))$.

Proof Consider $P \in \arg\min_{P \in S} \|AP - PB\| + \operatorname{tr}(P^\top D_{\psi_A, \psi_B})$ and $P' \in \arg\min_{P \in S} \|BP - PC\| + \operatorname{tr}(P^\top D_{\psi_B, \psi_C})$. Then, from closure under multiplication, $PP' \in S$, and

$$d_S((A, \psi_A), (C, \psi_C)) \leq \|APP' - PP'C\| + \operatorname{tr}\left((PP')^\top D_{\psi_A, \psi_C}\right).$$

As in the proof of Lemma 1, $\|APP' - PP'C\| \leq \|AP - PB\| + \|BP' - P'C\|$, using the fact that both $P$ and $P'$ are contractions. On the other hand, as $\tilde{d}$ is a metric, and $P$, $P'$ are non-negative,

$$\operatorname{tr}\left((PP')^\top D_{\psi_A, \psi_C}\right) = \sum_{u,v,w} P_{uv} P'_{vw}\, \tilde{d}(\psi_A(u), \psi_C(w)) \leq \sum_{u,v,w} P_{uv} P'_{vw} \left[\tilde{d}(\psi_A(u), \psi_B(v)) + \tilde{d}(\psi_B(v), \psi_C(w))\right] \leq \operatorname{tr}\left(P^\top D_{\psi_A, \psi_B}\right) + \operatorname{tr}\left(P'^\top D_{\psi_B, \psi_C}\right),$$

where the last inequality follows as the rows and columns of $P$ and $P'$ are $\|\cdot\|_1$-norm bounded by 1 for every $P \in S$. Summing the two bounds yields the lemma.
The weak property (9e) is again satisfied provided the identity is included in S.

Lemma 9 If $I \in S$, then $d_S((A, \psi_A), (A, \psi_A)) = 0$ for all $(A, \psi_A) \in \mathbb{R}^{n \times n} \times \Psi^n_{\tilde{\Omega}}$.
To attain symmetry over $\Omega = \mathbb{R}^{n \times n} \times \Psi^n_{\tilde{\Omega}}$, we again rely on closure under inversion, as in Lemma 2; nonetheless, in contrast to Lemma 2, due to the linear term, we also need to assume orthogonality of the elements of $S$.

Lemma 10 Given a norm $\|\cdot\|$, suppose that $S$ (a) is contractive w.r.t. $\|\cdot\|$, (b) is closed under inversion, and (c) is a subset of $O_n$, i.e., contains only orthogonal matrices. Then, $d_S((A, \psi_A), (B, \psi_B)) = d_S((B, \psi_B), (A, \psi_A))$ for all embedded graphs $(A, \psi_A), (B, \psi_B) \in \mathbb{R}^{n \times n} \times \Psi^n_{\tilde{\Omega}}$.
Proof As in the proof of Lemma 2, we can show that contractiveness w.r.t. $\|\cdot\|$ along with closure under inversion imply that $\|AP - PB\| = \|BP^{-1} - P^{-1}A\|$. As $S$ is closed under inversion, $\min_{P \in S} f(P) = \min_{P : P^{-1} \in S} f(P)$ for all $f : S \to \mathbb{R}$, while orthogonality implies $P^{-1} = P^\top$ for all $P \in S$. Since $D_{\psi_B, \psi_A} = D_{\psi_A, \psi_B}^\top$, we also have $\operatorname{tr}(P^\top D_{\psi_A, \psi_B}) = \operatorname{tr}\left((P^{-1})^\top D_{\psi_B, \psi_A}\right)$. Hence, $d_S((A, \psi_A), (B, \psi_B))$ equals

$$\min_{P \in S} \|AP - PB\| + \operatorname{tr}\left(P^\top D_{\psi_A, \psi_B}\right) = \min_{P \in S} \|BP^{-1} - P^{-1}A\| + \operatorname{tr}\left((P^{-1})^\top D_{\psi_B, \psi_A}\right) = \min_{P \in S} \|BP - PA\| + \operatorname{tr}\left(P^\top D_{\psi_B, \psi_A}\right) = d_S((B, \psi_B), (A, \psi_A)).$$

Theorem 4 therefore follows from the above lemmas, as $S = P_n$ contains $I$, is closed under multiplication and inversion, is a subset of $W_n \cap O_n$ by (7), and is contractive w.r.t. all operator and entry-wise norms. Theorem 5 also follows by using the following lemma, along with Lemmas 8 and 9.
Lemma 11 Suppose that $\|\cdot\|$ is transpose-invariant, and $S$ is closed under transposition. Then, $d_S((A, \psi_A), (B, \psi_B)) = d_S((B, \psi_B), (A, \psi_A))$ for all symmetric $A, B \in S_n$.

Proof By the transpose invariance of $\|\cdot\|$ and the symmetry of $A$ and $B$, we have $\|AP - PB\| = \|BP^\top - P^\top A\|$. Moreover, as $S$ is closed under transposition, $\min_{P \in S} f(P) = \min_{P : P^\top \in S} f(P)$ for any $f : S \to \mathbb{R}$, while $\operatorname{tr}(P^\top D_{\psi_A, \psi_B}) = \operatorname{tr}\left((P^\top)^\top D_{\psi_B, \psi_A}\right)$, as $D_{\psi_B, \psi_A} = D_{\psi_A, \psi_B}^\top$. The lemma then follows as in Lemma 10.

Metric Computation Over the Stiefel Manifold

In this section, we describe how to compute the metric $d_S$ in polynomial time when $S = O_n$ and $\|\cdot\|$ is the Frobenius norm or the operator 2-norm. The algorithm for the Frobenius norm, and the proof of its correctness, is due to Umeyama [17]; we reprove it for completeness, along with its extension to the operator norm.
Both cases make use of the following lemma:

Lemma 12 For any matrix $M \in \mathbb{R}^{n \times n}$ and any $P \in O_n$, we have $\|PM\| = \|MP\| = \|M\|$, where $\|\cdot\|$ is either the Frobenius or the operator 2-norm.
Proof Recall that the operator 2-norm satisfies $\|M\|_2 = \sup_{x \neq 0} \|Mx\|_2 / \|x\|_2 = \sigma_{\max}(M)$, where $\sigma_{\max}$ denotes the largest singular value. Hence, as $P^\top P = I$,

$$\|PM\|_2^2 = \sup_{x \neq 0} \frac{x^\top M^\top P^\top P M x}{x^\top x} = \sup_{x \neq 0} \frac{x^\top M^\top M x}{x^\top x} = \|M\|_2^2.$$

Using the fact that $\|M^\top\|_2 = \|M\|_2$ for all $M \in \mathbb{R}^{n \times n}$, as well as $PP^\top = I$, we can show that $\|MP\|_2 = \|P^\top M^\top\|_2 = \|M\|_2$. For the Frobenius norm, $\|M\|_F^2 = \operatorname{tr}(M^\top M) = \operatorname{tr}(MM^\top)$; hence $\|PM\|_F^2 = \operatorname{tr}(M^\top P^\top P M) = \operatorname{tr}(M^\top M) = \|M\|_F^2$ and, as in the case of the operator norm, we can similarly show $\|MP\|_F = \|M\|_F$.

In both norm cases, for $A, B \in S_n$, we can compute $d_S$ using a simple spectral decomposition, which dominates the computation and can be performed in $O(n^3)$ time. Let $A = U \Sigma_A U^\top$ and $B = V \Sigma_B V^\top$ be the spectral decompositions of $A$ and $B$. As $A$ and $B$ are real and symmetric, we can assume $U, V \in O_n$. Recall that $U^{-1} = U^\top$ and $V^{-1} = V^\top$, while $\Sigma_A$ and $\Sigma_B$ are diagonal and contain the eigenvalues of $A$ and $B$ sorted in increasing order; this ordering matters for the computations below.
The following theorem establishes that this decomposition readily yields the distance $d_S$, as well as the optimal orthogonal matrix $P^*$, when $S = O_n$ and $\|\cdot\| = \|\cdot\|_F$:

Theorem 6 (Umeyama [17]) If $S = O_n$, $\|\cdot\| = \|\cdot\|_F$, and $A, B \in S_n$, then $d_S(A, B) = \|\Sigma_A - \Sigma_B\|_F$, and the minimum is attained by $P^* = UV^\top$.

Proof The proof makes use of the following lemma by Hoffman and Wielandt [56]:

Lemma 13 (Hoffman-Wielandt [56]) If $A$ and $B$ are normal matrices with eigenvalues $a_1, \ldots, a_n$ and $b_1, \ldots, b_n$, respectively, then $\min_{\pi} \sum_{i=1}^n |a_{\pi(i)} - b_i|^2 \leq \|A - B\|_F^2$, where the minimum is over all permutations $\pi$ of $[n]$.

Note that if $\Sigma_A$ and $\Sigma_B$ are diagonal matrices with the ordered eigenvalues of $A$ and $B$ on the diagonal, then the conclusion of Lemma 13 can be written as

$$\|\Sigma_A - \Sigma_B\|_F \leq \|A - B\|_F.$$

Define $\Delta \equiv U^\top P V$. As a product of orthogonal matrices, $\Delta \in O_n$. Notice that, by Lemma 12,

$$\|AP - PB\|_F = \|U^\top (AP - PB) V\|_F = \|\Sigma_A \Delta - \Delta \Sigma_B\|_F.$$

Therefore, for any $P \in O_n$,

$$\|AP - PB\|_F = \|A - PBP^{-1}\|_F \geq \|\Sigma_A - \Sigma_B\|_F,$$

where the inequality follows by Lemma 13 if we notice that $\|AP - PB\|_F = \|(A - PBP^{-1})P\|_F = \|A - PBP^{-1}\|_F$ and that $PBP^{-1}$ and $B$ have the same spectrum for any $P \in O_n$. If we choose $P = UV^\top$, then $\Delta = I$ and $\|AP - PB\|_F = \|\Sigma_A - \Sigma_B\|_F$; the lower bound is attained and the result follows.
We can compute $d_S$ when $S = O_n$ and $\|\cdot\|$ is the operator norm in the exact same way:

Theorem 7 If $S = O_n$, $\|\cdot\| = \|\cdot\|_2$ (the operator 2-norm), and $A, B \in S_n$, then $d_S(A, B) = \|\Sigma_A - \Sigma_B\|_2 = \max_j |a_j - b_j|$, where $a_j$ and $b_j$ are the increasingly ordered eigenvalues of $A$ and $B$, and the minimum is attained by $P^* = UV^\top$.
Proof The proof follows the same steps as the proof of Theorem 6, using Lemma 14 below instead of Lemma 13.

Lemma 14 If $A$ and $B$ are Hermitian matrices with eigenvalues $a_1 \leq \cdots \leq a_n$ and $b_1 \leq \cdots \leq b_n$, respectively, then $\|A - B\|_2 \geq \max_j |a_j - b_j|$.
Proof This is the second exercise following Corollary 6.3.4 in Horn and Johnson [57]. We reprove it here for completeness.
Let $c_1 \leq \cdots \leq c_n$ denote the eigenvalues of $A - B$, and let $\bar{b}_1 \leq \cdots \leq \bar{b}_n$ denote the eigenvalues of $-B$. Corollary 6.3.4 in [57] implies that, for Hermitian $X$ and $Y$ with increasingly ordered eigenvalues, $\lambda_{\max}(X + Y) \geq \lambda_j(X) + \lambda_{n+1-j}(Y)$ for every $j$. If we choose $X = A$ and $Y = -B$, we get $a_j + \bar{b}_{n+1-j} \leq c_n$ for all $j = 1, \ldots, n$. Since $\bar{b}_{n+1-j} = -b_j$, we get $a_j - b_j \leq c_n$ for any $j$. Similarly, by exchanging the roles of $A$ and $B$, we can lower-bound the largest eigenvalue of $B - A$, say $d_n$, by $b_j - a_j$ for any $j$. Notice that, by the definition of the operator norm and the fact that the eigenvalues of $B - A$ are the negatives of the eigenvalues of $A - B$,

$$\|A - B\|_2 = \max\{|c_1|, |c_n|\} = \max\{c_n, d_n\} \geq \max\{a_j - b_j,\, b_j - a_j\} = |a_j - b_j|.$$

Taking the maximum over $j$, we get $\|A - B\|_2 \geq \max_j |a_j - b_j|$, and the lemma follows.
Note again that if $\Sigma_A$ and $\Sigma_B$ are diagonal matrices with the ordered eigenvalues of $A$ and $B$ on the diagonal, then the conclusion of Lemma 14 can be written as $\|\Sigma_A - \Sigma_B\|_2 \leq \|A - B\|_2$. The proof of Thm. 7 then proceeds along the same steps as the proof of Theorem 6, using again the fact that, by Lemma 12, $\|M\|_2 = \|MP\|_2 = \|PM\|_2$ for any $P \in O_n$ and any matrix $M$, along with Lemma 14.
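In code, Theorems 6 and 7 amount to a single symmetric eigendecomposition per graph; the following sketch (ours, assuming symmetric inputs) computes $d_{O_n}$ for both norms:

```python
import numpy as np

def d_stiefel(A, B, norm="fro"):
    """Theorems 6-7: for symmetric A and B, d_{O_n} reduces to
    comparing sorted eigenvalues; the minimizer is P* = U V^T."""
    a, U = np.linalg.eigh(A)  # eigenvalues in increasing order
    b, V = np.linalg.eigh(B)
    P_star = U @ V.T
    if norm == "fro":                   # Theorem 6 (Umeyama)
        return np.linalg.norm(a - b), P_star
    return np.abs(a - b).max(), P_star  # operator 2-norm (Theorem 7)

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # co-spectral
print(d_stiefel(A, B)[0])  # ~0
```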

Graphs of Different Sizes
For simplicity, we have so far described distances over graphs of equal size. There are several applications [15,[58][59][60]] where by design we want to compare (and align the nodes of) equal-sized graphs. E.g., in computer vision, one might want to establish a correspondence among the nodes of two graphs, each representing a geometric relation among $m$ special points in two images of objects of the same type. When the poses of the objects do not differ significantly, the same number, $m$, of special points will be extracted from each image, and hence the graphs being compared will have the same size.
We can nevertheless extend our approach to graphs of different sizes. We can do so by extending the two graphs, $G_A$ and $G_B$, with dummy nodes such that the new graphs $G_{A'}$ and $G_{B'}$ have the same number of nodes. Many papers follow this approach, e.g., [61][62][63][64][65][66][67][68][69]. If $G_A$ has $n_A$ nodes and $G_B$ has $n_B$ nodes, we can, for example, add $n_B$ dummy nodes to $G_A$ and $n_A$ dummy nodes to $G_B$. Once $G_{A'}$ and $G_{B'}$ have equal size, we can use the methods already described to compute a distance between $G_{A'}$ and $G_{B'}$, and return this distance as the distance between $G_A$ and $G_B$.
Possible graph extensions differ in how the dummy nodes connect to existing graph nodes, how dummy nodes connect to each other, and what kind of penalty we introduce for associating dummy nodes with existing graph nodes.

Method 1. One way of extending the graphs is to add dummy nodes and leave them isolated, i.e., with no edges to either existing nodes or other dummy nodes. Although this might work when both graphs are dense, it can lead to undesirable results when one of the graphs is sparse. For example, let $G_A$ be 3 isolated nodes and $G_B$ be the complete graph on 4 nodes minus the edges forming the triangle $\{(1,2), (2,3), (3,1)\}$. Assume that $S = P_n$, so that, when we compute the distance between $G_A$ and $G_B$, we produce an alignment between the graphs. One desirable outcome would be for $G_A$ to be aligned with the three nodes in $G_B$ that have no edges among them. This is basically solving the problem of finding a sparse subgraph inside a dense graph. However, computing $d_S(A', B')$, where $A'$ and $B'$ are the extended adjacency matrices, could equally well align $G_A$ with the 3 dummy nodes of $G_{B'}$.

Method 2. Alternatively, one could add dummy nodes and connect each dummy node to all existing nodes and all other dummy nodes. This avoids the issue described for Method 1. However, it creates a similar undesirable situation: since the dummy nodes in each extended graph form a clique, we might align $G_A$, or $G_B$, with just dummy nodes, instead of producing an alignment between existing nodes in $G_A$ and existing nodes in $G_B$.

Method 3. If both $G_A$ and $G_B$ are unweighted graphs, a method that avoids both issues above (aligning a sparse graph with isolated dummy nodes, or aligning a dense graph with cliques of dummy nodes) is to connect each dummy node to all existing nodes and all other dummy nodes with edges of weight 1/2. This works because, when $S = P_n$, it discourages aligning edges between existing nodes in $G_A$ to dummy-dummy edges or dummy-existing edges in $G_B$, and vice versa; a sketch of this padding appears after this list.

Method 4. One can also discourage aligning existing nodes with dummy nodes by introducing a soft linear term, as in (17), penalizing mappings between dummy and existing nodes.

Method 5. Finally, one method of ensuring that the graphs have equal size is repeating them, i.e., creating "super" graphs that consist of multiple replicas of the same graph as connected components, resulting in two graphs of size equal to the least common multiple (LCM) of the sizes of the two original graphs. This is of particular use when a spectral approach, like the ones used to optimize over $O_n$, is employed: repetition, in effect, only changes the multiplicity of each value in the spectrum, which can be done (a) without affecting the spectrum's structure, and (b) efficiently, once the LCM is computed.
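A sketch of the Method 3 padding (ours; it assumes unweighted inputs and returns weighted extended matrices):

```python
import numpy as np

def pad_half_weight(A, B):
    """Method 3: extend each unweighted graph with dummy nodes connected
    to everything by weight-1/2 edges, so both matrices reach size
    n_A + n_B."""
    nA, nB = A.shape[0], B.shape[0]
    n = nA + nB
    def extend(M, m):
        E = np.full((n, n), 0.5)  # dummy edges of weight 1/2
        np.fill_diagonal(E, 0.0)  # no self-loops
        E[:m, :m] = M             # original graph in the top-left block
        return E
    return extend(A, nA), extend(B, nB)
```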

Experiments
We experimentally study the properties of different graph distance measures, including metrics from our family, over several graph classes. Our main observation is that computing a heuristic estimate $\hat{P}$ of $P^* = \arg\min_{P \in P_n} \|AP - PB\|$, and using $\hat{P}$ to estimate $d_{P_n}(A, B)$, leads to violations of the metric property. In contrast, our proposed approach of computing $d_S(A, B)$ for some $S$ for which $d_S$ is a metric, and for which its computation is tractable, yields significantly improved performance in tasks such as clustering graphs (see Fig. 1).

Experimental Setup
Graphs. We use synthetic graphs from six classes, summarized in Table 3: Barabási-Albert with degree $d$ (Bd), Erdős-Rényi with probability $p$ (Ep), Power-Law Tree (P), Regular with degree $d$ (Rd), Small World (S), and Watts-Strogatz with degree $d$ (Ws). In addition, we use a dataset of small graphs, comprising all 853 connected graphs of 7 nodes [70]. Finally, we use a collaboration graph with 5242 nodes and 14496 edges, representing author collaborations [71].
Algorithms. We compare our metrics to several competitors, outlined in Table 2. All receive only two unlabeled undirected simple graphs $A$ and $B$, and output a matching matrix $\hat{P}$, either in $W_n$ or in $P_n$, estimating $P^*$. If $\hat{P} \in P_n$, we compute $\|A\hat{P} - \hat{P}B\|_1$. If $\hat{P} \in W_n$, then we compute both $\|A\hat{P} - \hat{P}B\|_1$ and $\|A\hat{P} - \hat{P}B\|_F$; all norms are entry-wise. We also implement our two relaxations $d_{W_n}$ and $d_{O_n}$, each for two different matrix norms.
We briefly review here additional implementation details about the algorithms summarized in Table 2.
• NetAlignBP, IsoRank, SparseIsoRank, and NetAlignMR, for which code is publicly available [72], are described in [14]. Natalie is described in [16]; code is again available [73]. All five algorithms output $\hat{P} \in P_n$.

• The algorithm in [15] outputs one $\hat{P} \in P_n$ and one $\hat{P} \in W_n$. We use $\hat{P} \in P_n$ to compute $\|A\hat{P} - \hat{P}B\|_1$ and call this InnerPerm. We use $\hat{P} \in W_n$ to compute $\|A\hat{P} - \hat{P}B\|_1$ and $\|A\hat{P} - \hat{P}B\|_F$, and call these algorithms InnerDSL1 and InnerDSL2, respectively. We use our own CVX-based projected gradient descent solver for the non-convex optimization problem the authors propose.

• DSL1 and DSL2 denote $d_S(A, B)$ when $S = W_n$ and $\|\cdot\|$ is the entry-wise $\|\cdot\|_1$ and $\|\cdot\|_F$, respectively. We implement them in Matlab (using CVX) as well as in C, aimed at medium-size graphs and multi-core use. We also implemented a distributed version in Apache Spark [19] that scales to very large graphs over multiple machines, based on the Alternating Direction Method of Multipliers [18].

• ORTHOP and ORTHFR denote $d_S(A, B)$ when $S = O_n$ and $\|\cdot\|$ is the operator $\|\cdot\|_2$ and $\|\cdot\|_F$, respectively. We compute them using an eigendecomposition (see Sec. 4.3).

• For small graphs, we compute $d_{P_n}(A, B)$ (EXACT) using our brute-force GPU-based code, available at [74]. For a single pair of graphs with $n \geq 15$ nodes, EXACT already takes several days to finish. For the entry-wise $\|\cdot\|_1$ in $d_S$, we have also implemented the chemical distance as an integer linear program and solved it using branch-and-cut; it did not scale well for $n \geq 15$ either.

• We implemented the WL algorithm over Spark to run, multithreaded, on a machine with 40 CPUs.

We use all public algorithms as black boxes with their default parameters, as provided by the authors.

Experimental Results
Clustering Graphs. The difference between our metrics and non-metrics is striking when clustering graphs. This is illustrated by the clustering experiment shown in Fig. 1. Graphs of size $n = 50$ from the 6 classes in Table 3 are clustered through hierarchical agglomerative clustering. We compute distances between them using nine different algorithms; only the distances in our family (DSL1, DSL2, ORTHOP, and ORTHFR) are metrics. The quality of the clusters induced by our metrics is far superior to that of clusters induced by non-metrics; in fact, ORTHOP and ORTHFR can lead to no misclassifications. This experiment strongly suggests that our metrics correctly capture the topology of the metric space of these larger graphs.
Triangle Inequality Violations. Given graphs $A$, $B$, and $C$ and a distance $d$, a Triangle Inequality Violation (TIV) occurs when $d(A, C) > d(A, B) + d(B, C)$. Being metrics, none of our distances induce TIVs; this is not the case for the remaining algorithms in Table 2. Fig. 2 shows the TIV fraction across the synthetic graphs of Table 3, while Fig. 3 shows the fraction of TIVs found on the 853 small graphs ($n = 7$). NetAlignMR also produces no TIVs on the small graphs, but it does induce TIVs on the synthetic graphs. We observe that it is easier to find TIVs when graphs are close: in synthetic graphs, TIVs abound for $n = 10$. No algorithm performs well across all categories of graphs.
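For reference, the TIV fraction we report can be computed by exhaustively checking ordered triples, as in the following sketch (ours):

```python
import itertools
import numpy as np

def tiv_fraction(D, tol=1e-9):
    """Fraction of ordered triples (A, B, C) that violate the triangle
    inequality, given a symmetric matrix D of pairwise graph distances."""
    n = D.shape[0]
    violations = total = 0
    for a, b, c in itertools.permutations(range(n), 3):
        total += 1
        if D[a, c] > D[a, b] + D[b, c] + tol:
            violations += 1
    return violations / total
```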
Effect of TIVs on Clustering. Next, to investigate the effect of TIVs on clustering, we artificially introduce triangle inequality violations into the pairwise distances between graphs. We then re-evaluate clustering performance for hierarchical agglomerative clustering using the Ward method, which performed best in Fig. 1. Fig. 4 shows the fraction of misclassified graphs as the fraction of TIVs introduced increases. To incur as small a perturbation on distances as possible, we introduce TIVs as follows: for every three graphs $A$, $B$, $C$, with probability $p$, we set $d(A, C) = d(A, B) + d(B, C)$. Although this does not introduce a TIV w.r.t. $A$, $B$, and $C$, this distortion does introduce TIVs w.r.t. other triplets involving $A$ and $C$. We repeat this 20 times for each algorithm and each value of $p$, and compute the average fraction of TIVs, shown on the x-axis, and the average fraction of misclassified graphs, shown on the y-axis. As little as 1% TIVs significantly deteriorates clustering performance. Note that the fraction of TIVs is computed over the total number of TIVs possible, which grows cubically with the number of graphs being clustered. We also see that, even after introducing TIVs, clustering based on metrics outperforms clustering based on non-metrics.
Comparison to the Chemical Distance. We compare how different distance scores relate to the chemical distance (EXACT) through two experiments on the small graphs (computation on larger graphs is prohibitive). In Figure 5(a), we compare the distances between small graphs with 7 nodes produced by the different algorithms and EXACT, using the DISTATIS method of [75]. Let $D \in \mathbb{R}_+^{853 \times 853}$ be the matrix of distances between graphs under an algorithm. DISTATIS computes the normalized Laplacian of this matrix, given by $L = -UDU/\|UDU\|_2$, where $U = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$.
The DISTATIS score is the cosine similarity of such (vectorized) Laplacians. We see that our metrics produce distances attaining high similarity with EXACT, though NetAlignBP has the highest similarity. We measure proximity to EXACT with an additional test. Given $D$, we compute the nearest neighbor (NN) meta-graph by connecting each graph in $D$ to every graph at distance less than its average distance to the other graphs. This results in a (labeled) meta-graph, which we can compare to the NN meta-graph induced by another algorithm, measuring the fraction of distinct edges. Fig. 5(b) shows that our algorithms perform quite well, though Natalie yields the smallest distance to EXACT.
Incorporating Constraints. Computation costs can be reduced through metric embeddings, as in (17). To show this, we produce a copy of the 5242-node collaboration graph with permuted node labels. We then run the WL algorithm [47] to produce structural colors, which induce coloring constraints on $P \in W_n$. The WL algorithm reaches a fixed point after $k = 5$ iterations. The support of $P$ (i.e., the number of variables in optimization (12)), the support of $AP - PA$ (i.e., the number of non-zero summation terms in the objective of (12)), as well as the execution time $\tau$ of the WL algorithm, are summarized in Table 4. The original unconstrained problem involves $5242^2 \approx 27.4$M variables. However, after applying WL and the induced constraints, the effective dimension of optimization problem (12) is reduced considerably. This, in turn, speeds up convergence, as shown in Fig. 6: including the time to compute the constraints, a solution is found 110 times faster after the introduction of the constraints.
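Concretely, WL colors induce a support mask on $P$; a sketch (ours) of how such a mask prunes the variables of (12):

```python
import numpy as np

def support_mask(colors_A, colors_B):
    """WL colors as hard constraints: P[u, v] may be nonzero only if
    node u in G_A and node v in G_B share a color. The True entries
    are the variables that remain in optimization (12)."""
    cA = np.asarray(colors_A)[:, None]
    cB = np.asarray(colors_B)[None, :]
    mask = cA == cB
    print(f"variables kept: {mask.sum()} of {mask.size}")
    return mask

# Toy example: 4 nodes per graph, two color classes.
mask = support_mask([0, 0, 1, 1], [1, 0, 0, 1])  # keeps 8 of 16 variables
```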

Conclusion
Our work suggests that incorporating soft and hard constraints has great potential to further improve the efficiency of our metrics. In future work, we intend to investigate and characterize the resulting equivalence classes under different soft and hard constraints, and to quantify these gains in efficiency. We also plan to develop scalable distributed solvers for our family of metrics. A good starting point is the Alternating Direction Method of Multipliers [76,77], which enjoys several useful properties. Specifically, under proper tuning and mild convexity assumptions, it achieves the convergence rate of the fastest-possible first-order method [78,79]; it can be less affected by the topology of the communication network in a cluster than, e.g., gradient descent [80,81]; and it parallelizes well on shared-memory multiprocessor systems, GPUs, and compute clusters [18,82,83]. Determining the necessity of the conditions used in proving that $d_S$ is a metric is also an open problem. Finally, we are investigating generalizations of our family of metrics to multi-metrics, i.e., we want to define a tractable closeness score for a set of $n > 2$ graphs that satisfies a generalization of the properties of metrics to more than two elements [84].




Figure 1: Clustering Misclassification Error. A clustering experiment using metrics and non-metrics (y-axis) for different clustering parameters (x-axis) is shown in (a), left. We sample graphs with n = 50 nodes from the six classes, shown in the adjacent table in (d), bottom-center. We compute distances between them using nine different algorithms from Table 2. Only the distances in our family (DSL1, DSL2, ORTHOP, and ORTHFR) are metrics. The resulting graphs are clustered using hierarchical agglomerative clustering [44] with Average, Centroid, Complete, Median, Single, Ward, and Weighted as means of merging clusters. Colors represent the fraction of misclassified graphs, with the minimal misclassification rate per distance labeled explicitly. Metrics outperform other distance scores across all clustering methods. The error rate of a random guess is ≈ 0.8.

Figure 2: Fraction of triangle inequality violations (TIVs) across the synthetic graphs of Table 3.

Figure 3: Fraction of TIVs found on the 853 small graphs (n = 7).


Table 4: Effect of coloring/hard constraints on the number of variables ($\|P\|_0$) and of terms in the objective ($\|AP - PA\|_0$) using $k$ iterations of the WL coloring algorithm. The last column shows the execution time of WL on a 40-CPU machine using Apache Spark [19].