A generalized configuration model with degree correlations and its percolation analysis

In this paper we present a generalization of the classical configuration model. Like the classical configuration model, the generalized configuration model allows users to specify an arbitrary degree distribution. In our generalized configuration model, we partition the stubs of the configuration model into b blocks of equal size and choose a permutation function h for these blocks. In each block, we randomly designate a fraction q of the stubs as type 1 stubs, where q is a parameter in the range [0, 1]. All other stubs are designated as type 2 stubs. To construct a network, we randomly select an unconnected stub. Suppose that this stub is in block i. If it is a type 1 stub, we connect it to a randomly selected unconnected type 1 stub in block h(i). If it is a type 2 stub, we connect it to a randomly selected unconnected type 2 stub. We repeat this process until all stubs are connected. Under an assumption on the degree sequence, we derive a closed form for the joint degree distribution of two random neighboring vertices in the constructed graph. Based on this joint degree distribution, we show that the Pearson degree correlation function is linear in q for any fixed b. By properly choosing h, our construction algorithm can create assortative networks as well as disassortative networks. We also present a percolation analysis of this model and verify our results by extensive computer simulations.


Introduction
Recent advances in the study of networks arising in computer communications, social interactions, biology, economics, information systems, and other fields indicate that these seemingly very different networks share a few common properties. Perhaps the most extensively studied properties are power-law degree distributions [1], the small-world property [29], and network transitivity or "clustering" [29]. Other important research subjects include network resilience, the existence of community structures, synchronization, and the spreading of information or epidemics. A fundamental issue relevant to all of these topics is the correlation between properties of neighboring vertices. In the ecology and epidemiology literature, this correlation between neighboring vertices is called assortative mixing.
In general, assortative mixing describes the correlation between properties of two connected vertices. Take social networks for example. Vertices may have age, weight, or wealth as their properties. It has been found that friendships between individuals are strongly affected by age, race, income, or the languages spoken by the individuals. If vertices with similar properties are more likely to be connected, we say that the network shows assortative mixing. On the other hand, if vertices with dissimilar properties are more likely to be connected, we say that the network shows disassortative mixing. Social networks tend to show assortative mixing, while technological networks, information networks and biological networks tend to show disassortative mixing [21]. The assortativity level of a network is commonly measured by a quantity proposed by Newman [20] called the assortativity coefficient. If the property to be measured is degree, the assortativity coefficient reduces to the standard Pearson correlation coefficient [20]. Specifically, let X and Y be the degrees of a pair of randomly selected neighboring vertices; the Pearson degree correlation function is the correlation coefficient of X and Y, i.e.

ρ(X, Y) = (E(XY) − E(X)E(Y)) / (σ_X σ_Y),    (1)
where σ_X and σ_Y denote the standard deviations of X and Y, respectively. We refer the reader to [20,21,12,30,22,3,17] for more information on the assortativity coefficient and other related measures. In this paper we focus on degree as the vertex property.
Researchers have found that assortative mixing plays a crucial role in dynamic processes, such as information or disease spreading, taking place on the topology defined by the network [18,3,7,17,26]. Assortativity also has a fundamental impact on network resilience as the network loses vertices or edges [28]. In order to study information propagation or network resilience, researchers may need to build models with assortative or disassortative mixing. Newman [20] and Xulvi-Brunet et al. [30] proposed algorithms that generate networks with assortative or disassortative mixing by rewiring edges. Boguñá et al. [3] proposed a class of random networks in which hidden variables are associated with the vertices, and the establishment of edges is controlled by these hidden variables. Ramezanpour et al. [23] proposed a graph transformation method that converts a configuration model into a graph with degree correlations and non-vanishing clustering coefficients as the network grows in size; however, the degree distribution does not remain the same under the transformation. Zhou [31] proposed a method to generate networks with assortative or disassortative mixing using Monte Carlo sampling. Compared with these methods, our method has the advantage that the specified degree distribution is preserved in the constructed networks. In addition, our method allows us to derive a closed form for the Pearson degree correlation function of two random neighboring vertices.
In this paper we propose a method to generate random networks that possess either the assortative mixing property or the disassortative mixing property. Our method is based on a modified construction method of the configuration model proposed by Bender et al. [2] and Molloy et al. [15]. The modified construction method is as follows. Given a degree distribution, generate a sequence of degrees. Each vertex is associated with a set of "stubs", where the number of stubs is equal to the degree of the vertex. We sort the stubs of all vertices in ascending order (descending order works equally well) and divide them into b blocks. We associate each block with another block. This association forms a permutation, i.e. no two distinct blocks are associated with a common block. We randomly designate a fixed number of stubs in each block as type-1 stubs. The remaining stubs are designated as type-2 stubs. To connect stubs, we randomly select a stub. If it is a type-1 stub, we connect it to a randomly selected type-1 stub in the associated block. If it is a type-2 stub, we connect it to a randomly selected type-2 stub out of all type-2 stubs. We repeat this process until all stubs are connected. To generate a generalized configuration model with the assortative mixing property, we select the permutation of blocks such that a block of stubs with large degrees is associated with another block of large degrees. To generate a network with disassortative mixing, we select the permutation such that a block of large degrees is associated with a block of small degrees. We present the details of the construction algorithm in Section 2. For this model, we derive a closed form for the Pearson correlation coefficient of the degrees of two neighboring vertices. From this coefficient we show that the constructed network can be assortative or disassortative as desired.
We also present an application of the proposed random graph model: a percolation analysis of the generalized configuration model. Percolation has been a powerful tool to study network resilience under breakdowns or attacks. Cohen et al. [6] studied the resilience of networks with scale-free degree distributions, in particular the stability of such networks, including the Internet, subject to random crashes. Percolation has also been used to study disease spreading in epidemic networks [5,16,25], as well as the effectiveness of immunization or quarantine in confining a disease. Schwartz et al. [27] studied percolation in a directed scale-free network. Newman [19] and Vázquez et al. [28] studied percolation in networks with degree correlation. Vázquez et al. assumed general random networks; their solution involves the eigenvalues of a D × D matrix, where D is the total number of degrees in the network. The percolation analysis of our model involves solving the roots of a system of b simultaneous nonlinear equations, where b is the number of blocks in the generalized configuration model. Since b is typically a small integer, we significantly reduce the complexity.
The rest of this paper is organized as follows. In Section 2 we present our construction method of a random network. In Section 3 we derive a closed form for the joint degree distribution of two randomly selected neighboring vertices of a network constructed by the algorithm in Section 2. In Section 4, we show that the Pearson degree correlation function of two neighboring vertices is linear in q. We then show how the permutation function h should be selected so that the constructed random graph is assortatively or disassortatively mixed. In Section 5 we present a percolation analysis of this model. Numerical examples and simulation results are presented in Section 6. Finally, we give conclusions in Section 7.

Construction of a Random Network
Research on random networks was pioneered by Erdős and Rényi [8]. Although the Erdős–Rényi model allows researchers to study many network problems, it is limited in that the vertex degree has a Poisson distribution asymptotically as the network grows in size. The configuration model [2,15] can be considered an extension of the Erdős–Rényi model that allows general degree distributions. Configuration models have been used successfully to study the size of giant components, network resilience when vertices or edges are removed, and epidemic spreading on networks. We refer the readers to [21] for more details. In this paper we propose an extension of the classical configuration model. This model generates networks with specified degree sequences. In addition, one can specify a positive or a negative degree correlation for the model. Let there be n vertices and let p_k be the probability that a randomly selected vertex has degree k. We sample the degree distribution {p_k} n times to obtain a degree sequence k_1, k_2, ..., k_n for the n vertices. We give each vertex i a total of k_i stubs. There are 2m = Σ_{i=1}^{n} k_i stubs in total, where m is the number of edges of the network. In a classical configuration model, we randomly select an unconnected stub, say s, and connect it to another randomly selected unconnected stub in [1, 2m] − {s}. We repeat this process until all stubs are connected. The resulting network can be viewed as a matching of the 2m stubs, in which each possible matching occurs with equal probability. A consequence of this construction is that the degree correlation of a randomly selected pair of neighboring vertices is zero. To achieve nonzero degree correlation, we arrange the 2m stubs in ascending order (descending order also works) according to the degrees of the vertices to which the stubs belong, and label the stubs accordingly. We partition the 2m stubs evenly into b blocks, selecting the integer b such that 2m is divisible by b. Each block has 2m/b stubs. Block i, where i = 1, 2, ..., b, contains stubs (i − 1)(2m/b) + j for j = 1, 2, ..., 2m/b. Next, we choose a permutation function h of {1, 2, ..., b}. If h(i) = j, we say that block j is associated with block i. In this paper we select h such that h(h(i)) = i, i.e., blocks i and h(i) are mutually associated with each other. In each block, we randomly designate 2mq/b stubs as type 1 stubs, where q is a parameter in the range [0, 1). All other stubs are designated as type 2 stubs. Randomly select an unconnected stub. Suppose that this stub is in block i. If it is a type 1 stub, connect it to a randomly selected unconnected type 1 stub in block h(i). If it is a type 2 stub, connect it to a randomly selected unconnected type 2 stub in [1, 2m]. We repeat this process until all stubs are connected. The construction algorithm is shown in Algorithm 1.
Inputs: degree sequence {k_i : i = 1, 2, ..., n};
Outputs: graph G = (V, E);
Create 2m stubs arranged in descending order;
Divide the 2m stubs evenly into b blocks. Initially, all stubs are unconnected. For each block, randomly designate 2mq/b stubs as type 1 stubs; all other stubs are designated as type 2 stubs;
while there are unconnected stubs do
    Randomly select an unconnected stub. Assume that the stub is in block i;
    if it is a type 1 stub then
        connect this stub with a randomly selected unconnected type 1 stub in block h(i);
    else
        connect this stub with a randomly selected unconnected type 2 stub in [1, 2m];
    end
end

Algorithm 1: Construction Algorithm
We make a few remarks. First, note that in networks constructed by this algorithm, there are mq edges whose two endpoints are both type-1 stubs. These edges create degree correlation in the network. On the other hand, there are m(1 − q) edges whose two endpoints are both type-2 stubs; these edges do not contribute to degree correlation. Second, random networks constructed by this algorithm possess the following property: a randomly selected stub connects to another randomly selected stub in the associated block with probability q, and with probability 1 − q it connects to a randomly selected stub in [1, 2m]. Finally, note that standard configuration models can have multiple edges connecting two particular vertices, as well as edges connecting a vertex to itself; these are called multiedges and self edges. In our constructed networks, multiedges and self edges can also exist. However, it is not difficult to show that the expected density of multiedges and self edges approaches zero as n becomes large. Due to space limitations, we do not address this issue further in this paper.
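To make the construction concrete, the following Python sketch implements Algorithm 1 under simplifying assumptions. It is an illustration only, not the authors' reference implementation: the function name is ours, blocks are 0-indexed, h is assumed to satisfy h(h(i)) = i, and stubs are paired block-pair by block-pair (which yields the same distribution over matchings as selecting one stub at a time).

```python
import random

def generalized_config_model(degrees, b, q, h, seed=0):
    """Sketch of Algorithm 1: type-1 stubs pair within associated blocks,
    type-2 stubs pair globally. h is a 0-indexed involution on {0,...,b-1}."""
    rng = random.Random(seed)
    # one stub per unit of degree, sorted by the degree of the owning vertex
    stubs = sorted((v for v, k in enumerate(degrees) for _ in range(k)),
                   key=lambda v: degrees[v])
    two_m = len(stubs)
    assert two_m % b == 0, "2m must be divisible by b"
    size = two_m // b
    n1 = int(size * q)                  # type-1 stubs per block (2mq/b)
    type1, type2 = [[] for _ in range(b)], []
    for i in range(b):
        block = stubs[i * size:(i + 1) * size]
        rng.shuffle(block)
        type1[i].extend(block[:n1])     # randomly designated type-1 stubs
        type2.extend(block[n1:])        # the rest are type-2 stubs
    edges = []
    # pair type-1 stubs of block i with those of block h(i)
    for i in range(b):
        j = h[i]
        if i < j:
            rng.shuffle(type1[i]); rng.shuffle(type1[j])
            edges += list(zip(type1[i], type1[j]))
        elif i == j:                    # self-associated block: pair internally
            rng.shuffle(type1[i])
            edges += [(type1[i][k], type1[i][k + 1])
                      for k in range(0, len(type1[i]) - 1, 2)]
    # pair all type-2 stubs uniformly at random
    rng.shuffle(type2)
    edges += [(type2[k], type2[k + 1]) for k in range(0, len(type2), 2)]
    return edges
```

As in the text, multiedges and self edges may appear; their expected density vanishes for large n, so the sketch does not filter them out.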

Joint Distribution of Degrees
Consider a randomly selected edge in a random network constructed by the algorithm described in Section 2. In this section we analyze the joint degree distribution of the two vertices at the two ends of the edge.
We randomly select a vertex and let Z be the degree of this vertex. Since the selection of vertices is uniform, Pr(Z = k) = p_k and E(Z) = Σ_k k p_k. The expectation of Z can also be expressed as E(Z) = 2m/n. The expected number of stubs in the network is E(Z) · n. We would like to evenly allocate these stubs into b blocks such that each block has nE(Z)/b stubs on average. We make the following assumption.
Assumption 1. There exist mutually disjoint sets of degrees H_1, H_2, ..., H_b such that

Σ_{k∈H_i} k p_k = E(Z)/b

for all i = 1, 2, ..., b. In addition, we assume that the degree sequence k_1, k_2, ..., k_n sampled from the distribution {p_k} can be evenly placed in b blocks. Specifically, the stubs of the vertices whose degrees lie in H_i constitute exactly block i.
• Note that the construction algorithm described in Section 2 works without Assumption 1. However, this assumption allows us to derive a very simple expression for the joint probability mass function (pmf) of X and Y. This simple expression allows us to analyze the assortativity and disassortativity of the model. For degree sequences that do not satisfy Assumption 1, the analyses in Sections 3, 4 and 5 are only approximate. In Section 6, we compare simulation results of models constructed without Assumption 1 with the analytical results.
• Assumption 1 is quite restrictive. In Section 6 we discuss how one can modify a degree distribution to make it satisfy Assumption 1. We also remark that from (2) one can view {k p_k /E(Z)} as a probability mass function, and Eq. (3) can equivalently be expressed in terms of this pmf for all i = 1, 2, ..., b. In this case we say that the distribution {p_k} satisfies Assumption 1.
• Finally, we remark that a common way to generate stubs from a degree distribution is to first generate a sequence of uniform pseudo-random variables over [0, 1] and then transform them using the inverse cumulative distribution function of the degree distribution [4]. This approach encounters difficulties as far as Assumption 1 is concerned, because the stubs produced are unlikely to be evenly allocated among blocks. If the network is large, the following approach based on proportionality can be used. Specifically, for degree k with probability mass p_k, create np_k vertices and nkp_k corresponding stubs. If n is large, the strong law of large numbers ensures that this approach and the inversion method produce approximately the same number of stubs. Using this approach, the probability masses of the degree distribution and the stubs sampled from the degree distribution both satisfy Assumption 1 and can be placed evenly in blocks at the same time.
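The proportional approach in the last remark can be sketched as follows. This is a simplified illustration (the function name is ours, and rounding effects for small n are ignored):

```python
def proportional_degree_sequence(pmf, n):
    """Proportional allocation: create about n*p_k vertices of degree k,
    so that degree-k vertices contribute about n*k*p_k stubs."""
    degrees = []
    for k in sorted(pmf):
        degrees.extend([k] * round(n * pmf[k]))
    return degrees
```

For example, with pmf {1: 0.5, 3: 0.5} and n = 4, the sequence is [1, 1, 3, 3], giving 8 stubs in total.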
We randomly select a stub in the range [1, 2m]. Denote this stub by t. Let v be the vertex with which stub t is associated, and let Y be the degree of v. Now connect stub t to a randomly selected stub according to the construction algorithm in Section 2. Let this stub be denoted by s, let u be the vertex with which s is associated, and let X be the degree of u. Since stub t is randomly selected from the range [1, 2m], the distribution of Y is

Pr(Y = y) = y p_y / E(Z),    (4)

where Z is the degree of a randomly selected vertex.
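The distribution in (4) is the size-biased version of {p_k}. A quick numeric check of this marginal (the function name is ours):

```python
def edge_end_degree_pmf(pmf):
    """Pr(Y = y) = y * p_y / E(Z): the degree distribution of the vertex
    owning a uniformly random stub (equivalently, a random edge endpoint)."""
    ez = sum(k * p for k, p in pmf.items())  # E(Z)
    return {k: k * p / ez for k, p in pmf.items() if k > 0}
```

For pmf {1: 0.5, 3: 0.5}, E(Z) = 2, so Pr(Y = 1) = 0.25 and Pr(Y = 3) = 0.75: following an edge, one is three times as likely to land on a degree-3 vertex.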
To study the joint pmf of X and Y, we first study the conditional pmf of X given Y, and the marginal pmf of X. In the rest of this section, we assume that Assumption 1 holds. Suppose x is a degree in set H_i. The total number of stubs associated with vertices of degree x is nxp_x. By Assumption 1, all nxp_x stubs are in block i. We consider two cases in which stub t connects to stub s. In the first case, stub t is of type 1, which occurs with probability q. In this case, stub t must belong to a vertex with a degree in block h(i), and the construction algorithm in Section 2 connects t to stub s with the probability given in (5), where δ() is the delta function. In the second case, stub t is of type 2, which occurs with probability 1 − q. In this case, stub t can be associated with a degree in any block, and the construction algorithm connects stub t to stub s with the probability given in (6). Combining the two cases in (5) and (6), we obtain (7) for y ∈ H_{h(i)} and (8) for y ∈ H_j with j ≠ h(i). Now assume that the network is large. That is, we consider a sequence of constructed graphs in which n → ∞ and m → ∞ while keeping 2m/n = E(Z). Under this asymptotic regime, Eqs. (7) and (8) converge to

Pr(X = x | Y = y) = [q b δ(x ∈ H_{h(j)}) + (1 − q)] x p_x / E(Z)    for y ∈ H_j.    (9)

From the law of total probability we have

Pr(X = x) = Σ_y Pr(X = x | Y = y) Pr(Y = y).    (10)

Substituting (4) and (9) into (10), and noting that the partition of stubs is uniform, we have

Pr(X = x) = x p_x / E(Z).    (12)

From (9) we derive the joint pmf of X and Y:

Pr(X = x, Y = y) = [q b δ(x ∈ H_{h(j)}) + (1 − q)] x p_x y p_y / E(Z)^2    for y ∈ H_j.    (13)

We summarize the results in the following theorem.
Theorem 1. Let G be a graph generated by the construction algorithm described in Section 2 based on a sequence of degrees k_1, k_2, ..., k_n. Randomly select an edge from G. Let X and Y be the degrees of the two vertices at the two ends of the edge. Then the marginal pmfs of X and Y are given in (12) and (4), respectively. The joint pmf of X and Y is given in (13).

Assortativity and Disassortativity
In this section, we present an analysis of the Pearson degree correlation function of two random neighboring vertices. The goal is to find permutation functions h such that the numerator of (1) is non-negative (resp. non-positive) for the constructed network.
From (12), we obtain the expected value of X, given in (15) and (16). Now we consider the expected value of the product XY; from (13) we obtain (17). Note from (15) and (17) that (18) follows. Based on (18), we summarize the Pearson degree correlation function in the following theorem.
Theorem 2. Let G be a graph generated by the construction algorithm in Section 2. Randomly select an edge from the graph. Let X and Y be the degrees of the two vertices at the two ends of this edge. Then the Pearson degree correlation function of X and Y is given in (19), where c is the constant defined in (20), and σ_X and σ_Y are the standard deviations of the pmfs in (12) and (4).
In view of (19), the sign of ρ(X, Y) depends on the constant c. To generate assortative (resp. disassortative) mixing random graphs, we sort the u_i's in descending order and then choose the permutation h that maps the block with the largest u_i to the block with the largest (resp. smallest) u_i. This is formally stated in the following corollary.
Corollary 1. Let π be a permutation of {1, 2, ..., b} such that u_{π(1)} ≥ u_{π(2)} ≥ ... ≥ u_{π(b)}. (i) If we choose the permutation h with h(π(i)) = π(i) for all i, then the constructed random graph is assortative mixing. (ii) If we choose the permutation h with h(π(i)) = π(b + 1 − i) for all i, then the constructed random graph is disassortative mixing.

The proof of Corollary 1 is based on the Hardy, Littlewood and Pólya rearrangement inequality, stated in (21).
Proof (Corollary 1). (i) Consider the circular shift permutation σ_j(·) with σ_j(i) = ((i + j − 1) mod b) + 1 for j = 1, 2, ..., b. By symmetry, we have (21). Using the upper bound of the Hardy, Littlewood and Pólya rearrangement inequality in (21) and h(π(i)) = π(i), we obtain (22). In view of (18) and (21), we conclude that the generated random graph is assortative mixing.
(ii) Using the lower bound of the Hardy, Littlewood and Pólya rearrangement inequality in (21) and h(π(i)) = π(b + 1 − i), we obtain (23). In view of (18) and (21), we conclude that the generated random graph is disassortative mixing.
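The two permutation choices in Corollary 1 are straightforward to generate in code. A sketch (the function name is ours; blocks are 1-indexed as in the paper and assumed already sorted by degree, so π is the identity, as in the experiments of Section 6):

```python
def mixing_permutation(b, assortative=True):
    """Return h as a 1-indexed dict: each block paired with itself for
    assortative mixing, or with its mirror block for disassortative mixing.
    Both choices satisfy h(h(i)) = i, as required by the construction."""
    if assortative:
        return {i: i for i in range(1, b + 1)}
    return {i: b + 1 - i for i in range(1, b + 1)}
```

For b = 3, the disassortative choice maps block 1 (small degrees) to block 3 (large degrees), leaving the middle block self-associated.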

An Application: Percolation
In this section we present a percolation analysis of the generalized configuration model.
We consider node percolation of a random network with n vertices. Recall that we define Z to be the degree of a randomly selected vertex in the network. Let p_k = Pr(Z = k) be given and let E(Z) be the expected value of Z.
Let φ be the probability that a node stays in the network after percolation; that is, 1 − φ is the probability that a node is removed from the network. In the percolation literature, φ is called the occupation probability. We assume that φ ∈ (0, 1). Let α_i be the probability that, along an edge with one end attached to a stub in block i, one cannot reach a giant component. Let η_i be the probability that a randomly selected vertex from block i is in a giant component after the random removal of vertices; this gives (24). Let η be the probability that a randomly selected vertex is in a giant component after the random removal of vertices; this gives (25). We now derive a set of equations for α_i, i = 1, 2, ..., b. We randomly select an edge and call it e. Let D be the event that e does not connect to a giant component. Let B_i be the event that one end of this edge is attached to a stub in block i. Suppose that the other end of e is attached to a vertex called v. Then by the law of total probability we have (26), where Y is the degree of v and B_j is the event that vertex v is in block j.
According to (9), we have (27). If vertex v is removed from the network through percolation, then edge e does not lead to a giant component; this occurs with probability 1 − φ. With probability φ, vertex v is not removed. Conditioning on Y = k, edge e does not lead to a giant component if and only if none of the other k − 1 edges of v does. In addition, conditioning on B_j, event D is independent of event B_i. Combining these facts, we obtain (28). Substituting (27) and (28) into (26), we have (29). Let g_i be defined as in (30) for i = 1, 2, ..., b. Combining constant terms, we rewrite (29) in terms of g_i(z), i.e. as (31).
Expressing (31) in vector form, we have (32), α = f(α), where α is a vector in [0, 1]^b and f is a vector function that maps [0, 1]^b to [0, 1]^b. In this section, we use boldface letters to denote vectors. The i-th entry of f(α) is denoted by f_i(α) and given in (33). Solutions of (32) are called the fixed points of the function f. Note that α_i = 1 for all i = 1, 2, ..., b is always a root of (32). Denote the point (1, 1, ..., 1) by 1. We are searching for a condition under which α = 1 is the only solution of (32) in [0, 1]^b, and a condition under which (32) has additional solutions. Define the Jacobian matrix J(a) of the function f as in (34), where a = (a_1, a_2, ..., a_b) is a point in [0, 1]^b. For the function f defined in (33), the Jacobian matrix has the form (35), where 1_{b×b} is the b × b matrix of all ones and D{g'_1(a_1), g'_2(a_2), ..., g'_b(a_b)} is a diagonal matrix. In (35), matrix H is a permutation matrix whose (i, j) entry is one if j = h(i), and zero otherwise. Let φλ_1, φλ_2, ..., φλ_b be the eigenvalues of J(1), ordered as in (36). Since g_j is a power series with non-negative coefficients for all j, g'_j is strictly increasing and g'_j(1) > 0. Thus, J(1) is a positive matrix. According to the Perron–Frobenius theorem [14,11], φλ_1 is real, positive and strictly larger than φλ_2 in absolute value. In addition, there exists an eigenvector v associated with the dominant eigenvalue that is positive component-wise.
The existence of roots of (32) is summarized in the following main result.
Theorem 3. Let φ* = 1/λ_1, where λ_1 is defined through the ordering in (36). The solution of (32) falls into one of two cases.

1. If 0 < φ < φ*, point 1 is an attracting fixed point. In addition, it is the only fixed point in [0, 1]^b.
2. If φ* < φ < 1, point 1 is either a repelling fixed point or a saddle point of the function f in (32). There exists another fixed point in [0, 1)^b, and this additional fixed point is attracting.
The proof of Theorem 3 is presented in the appendix. Note that in case 1 of Theorem 3, the only root is α = 1. From (24), η_i = 0 for all i = 1, 2, ..., b. It follows that η = 0 and the network has no giant component. In case 2, the network has a giant component whose size is determined by the additional fixed point.
We first study the behavior of f in the neighborhood of 1. We consider the iteration x_{n+1} = f(x_n) in (37), where the initial vector x_0 is in the neighborhood of the fixed point 1. Assume that g_i(x) can be linearized, i.e. g_i(x) can be approximated by keeping two terms in its Taylor expansion around one, as in (38), for all i = 1, 2, ..., b. Substituting (38) into (37) and noting that 1 is a fixed point of f, we obtain the matrix equation (39), where we recall that J(1) is the Jacobian matrix stated in (35). Substituting (39) repeatedly into itself, we obtain (40). If the dominant eigenvalue satisfies φλ_1 < 1, then x_n − 1 → 0 and 1 is an attracting fixed point. If all eigenvalues are greater than one in absolute value, x_n moves away from 1; in this case, 1 is a repelling fixed point. Suppose that some eigenvalues are greater than one and some are less than one in absolute value. In this case, point 1 is called a saddle point. Point x_n is attracted to 1 if x_0 − 1 is a linear combination of the eigenvectors associated with eigenvalues smaller than one in absolute value; otherwise, x_n moves away from 1.
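In practice, the attracting fixed point of (32) can be found by exactly this iteration: start from an interior point of [0, 1]^b and iterate x ← f(x) until convergence. The sketch below is generic (the function name is ours, and f is supplied by the user, since the closed form of the g_i depends on the degree distribution); the example uses a toy scalar case b = 1 with g(z) = z², i.e. α = 1 − φ + φα².

```python
def solve_fixed_point(f, b, x0=0.5, tol=1e-12, max_iter=100_000):
    """Iterate x <- f(x) in [0, 1]^b from an interior start; if the
    iteration converges, the limit is an attracting fixed point of f."""
    x = [x0] * b
    for _ in range(max_iter):
        x_new = f(x)
        if max(abs(a - c) for a, c in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# Toy example (b = 1): alpha = 1 - phi + phi * alpha^2 has roots 1 and
# (1 - phi)/phi; for phi = 0.8 the attracting root is 0.25.
phi = 0.8
alpha = solve_fixed_point(lambda x: [1 - phi + phi * x[0] ** 2], b=1)
```

Starting strictly inside [0, 1)^b matters: starting at 1 itself would return the trivial fixed point even when a giant component exists.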

Numerical and Simulation Results
We report our simulation results in this section. Recall that we derived the degree covariance of two neighboring vertices based on Assumption 1, which is somewhat restrictive. For degree sequences that do not satisfy Assumption 1, the analyses in Sections 3 and 4 are only approximate. In this section, we compare simulation results with the analytical results in Section 4.
We simulated the construction of networks with 4000 vertices. We use the batch mean method to control the simulation variance. Specifically, each simulation is repeated 100 times to obtain 100 graphs. Eq. (1) is applied to compute the assortativity coefficient of each graph. One average is computed for every twenty repetitions, and ninety-percent confidence intervals are computed from the five averages. We have carried out an extensive number of simulations for uniform and Poisson degree distributions and found that the simulated Pearson degree correlation coefficients agree extremely well with (19) for a wide range of b and q. Due to space limitations, we do not present these results here. We have also simulated power-law degree distributions. Specifically, we assume that the exponent of the power-law distribution is negative two, i.e., p_k ≈ k^{−2} for large k. We first fix b at six. The degree correlations for power-law degree distributions are shown in Figure 1 and Figure 2 for an assortatively mixed network and a disassortatively mixed network, respectively. The discrepancy between the simulation results and the analytical results is quite noticeable in Figure 1 when q is large, while the two agree very well in Figure 2. This is because power-law distributions can generate very large sample values for degrees; as a result, Assumption 1 may fail in this case. We then decrease b to two, which increases the block size. The corresponding Pearson degree correlation function for a disassortatively mixed network is presented in Figure 3. One can see that the approximation accuracy increases dramatically as the block size is increased. For the percolation analysis, we study the critical value of φ. We assume that degrees are geometrically distributed. However, geometric distributions do not satisfy Assumption 1.
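The per-graph quantity computed in these simulations is the sample version of (1). A sketch of that computation (the function name is ours; both orientations of each edge are counted, so the measure is symmetric in X and Y):

```python
import math

def pearson_degree_correlation(edges, degrees):
    """Sample Pearson correlation of the degrees at the two ends of a
    uniformly random edge, as in Eq. (1)."""
    # count each edge in both orientations so X and Y are exchangeable
    xs = [degrees[u] for u, v in edges] + [degrees[v] for u, v in edges]
    ys = [degrees[v] for u, v in edges] + [degrees[u] for u, v in edges]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy) if sx > 0 and sy > 0 else 0.0
```

On a toy graph whose edges only join equal-degree vertices the function returns 1, and on one whose edges only join vertices of different degrees it returns a negative value, matching the sign convention of assortative versus disassortative mixing.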
Assumption 1 is essential here: without it, 1 is not a fixed point and the numerical calculations would fail. We can adjust the probability masses to make Assumption 1 hold. We illustrate this modification for the b = 2 case. We start with a geometric degree distribution (1 − p)p^k, where k = 0, 1, ..., and p = 2/3; the corresponding E(Z) = 2. We thus have p̃_k = k(1 − p)p^k/2, k = 0, 1, .... We move part of the probability mass from p̃_4 to p̃_5 so that the modified distribution satisfies Assumption 1. We study b = 2 and b = 3. In both cases, we study the two permutations of blocks suggested in Section 4 for assortativity and disassortativity: for assortative networks, h(i) = i; for disassortative networks, h(i) = b + 1 − i. In the case of b = 3, we have also studied a rotational permutation, i.e., h(i) = ((i + 1) mod b) + 1. The critical values of φ obtained using (36) are shown in Table 1. We also calculate the critical values of φ numerically by gradually decreasing φ until (32) fails to have a solution in the interior of [0, 1)^b. From these results, we see that the critical values of φ obtained from (36) agree very well with those obtained numerically.
Finally, we study the giant component sizes of the generalized configuration models. We numerically solve (32) to obtain the vector α, and then compute η using (25). In this study, we continue to assume that degrees are geometrically distributed, as in the study of Table 1. The giant component sizes are shown in Figure 4. From this figure, we see that assortative networks have smaller percolation thresholds than disassortative networks; hence, giant components emerge more easily in assortative networks. However, disassortative networks tend to have larger giant components than assortative networks for large φ. The effects of assortativity and disassortativity on the giant component sizes and the percolation thresholds observed in this example agree with those observed in Newman [19]. As for the effect of q, larger values of q decrease the percolation thresholds and the giant component sizes of assortative networks, while larger values of q increase the percolation thresholds and the giant component sizes of disassortative networks.

Conclusions
In this paper we have presented an extension of the classical configuration model. Like a classical configuration model, the extended configuration model allows users to specify an arbitrary degree distribution. In addition, the model allows users to specify a positive or a negative assortativity coefficient. We derived a closed form for the assortativity coefficient of this model and verified our results with simulations.

Appendix

We now prove Theorem 3.
Proof (Theorem 3). We first analyze case 1 of Theorem 3. We have shown that the fixed point 1 is attracting. We now show that there is no other fixed point in [0, 1]^b. Suppose not, and assume that there is another distinct fixed point, denoted by x. From Lemma 1, we have (41). Since g_i is a power series with non-negative coefficients, g_i is monotonically increasing and differentiable, and g'_i is also increasing. Thus, J(a) ≤ J(1) component-wise for any a in [0, 1]^b. Substituting the last inequality repeatedly into itself, we have x − 1 → 0 as n → ∞, since the dominant eigenvalue of J(1) is strictly less than one. We thus reach a contradiction to the assumption that x is distinct from 1. Now we consider the second case. We first show that there exists a point x in [0, 1]^b such that x − f(x) ≥ 0.
Denote such a point by η. We choose η = 1 − εv, where ε is a small positive number and v is the eigenvector of J(1) associated with the dominant eigenvalue φλ_1. For small ε, we have (46). It follows from the above equation that (47) holds, where I is the b × b identity matrix. Since v is an eigenvector of J(1) associated with φλ_1, (47) reduces to η − f(η) = (φλ_1 − 1)εv.
We now show that the fixed point z is attracting. From (41), since both 1 and z are fixed points, we have (51), where z_i < c_i < 1. From (51), unity is an eigenvalue of J(c) and 1 − z is the associated eigenvector. Since J(c) is a positive matrix and 1 − z is positive component-wise, by the Perron–Frobenius theorem, unity is the dominant eigenvalue of J(c) [14]. By the definition in (30), g'_i is strictly increasing for all i. It follows that g'_i(c_i) > g'_i(z_i), and from (35) we obtain (52), i.e., J(c) dominates J(z) component-wise. Recall that we assume q < 1. With φ > 0, it is clear that both J(c) and J(z) are irreducible matrices. From Theorem 9 of [24] (see also [9]), (52) implies that the spectral radius of J(z) is strictly less than that of J(c). This implies that z is an attracting fixed point.

Fig. 3. Degree correlation of a disassortative model. Power-law degree distribution and b = 2