
A generalized configuration model with degree correlations and its percolation analysis

Abstract

In this paper we present a generalization of the classical configuration model. Like the classical configuration model, the generalized configuration model allows users to specify an arbitrary degree distribution. In our generalized configuration model, we partition the stubs in the configuration model into b blocks of equal sizes and choose a permutation function h for these blocks. In each block, we randomly designate a fraction q of the stubs as type 1 stubs, where q is a parameter in the range [0,1]; the other stubs are designated as type 2 stubs. To construct a network, we randomly select an unconnected stub, say in block i. If it is a type 1 stub, we connect it to a randomly selected unconnected type 1 stub in block h(i); if it is a type 2 stub, we connect it to a randomly selected unconnected type 2 stub. We repeat this process until all stubs are connected. Under an assumption, we derive a closed form for the joint degree distribution of two random neighboring vertices in the constructed graph. Based on this joint degree distribution, we show that the Pearson degree correlation function is linear in q for any fixed b. By properly choosing h, we show that our construction algorithm can create assortative networks as well as disassortative networks. We also present a percolation analysis of this model and verify our results by extensive computer simulations.

Introduction

Recent advances in the study of networks that arise in computer communications, social interactions, biology, economics, information systems, etc., indicate that these seemingly very different networks share a few common properties. Perhaps the most extensively studied of these are power-law degree distributions (Barabási and Albert 1999), the small-world property (Watts and Strogatz 1998), and network transitivity or “clustering” (Watts and Strogatz 1998). Other important research subjects on networks include network resilience, the existence of community structures, synchronization, and the spreading of information or epidemics. A fundamental issue relevant to all of these research topics is the correlation between properties of neighboring vertices. In the ecology and epidemiology literature, this correlation between neighboring vertices is called assortative mixing.

In general, assortative mixing describes the correlation between properties of two connected vertices. Take social networks for example: vertices may have age, weight, or wealth as their properties. It has been found that friendships between individuals are strongly affected by age, race, income, or the languages spoken by the individuals. If vertices with similar properties are more likely to be connected, we say that the network shows assortative mixing. On the other hand, if vertices with dissimilar properties are more likely to be connected, we say that the network shows disassortative mixing. It has been found that social networks tend to show assortative mixing, while technological networks, information networks and biological networks tend to show disassortative mixing (Newman 2010). The assortativity level of a network is commonly measured by a quantity proposed by Newman (2003) called the assortativity coefficient. If the property being measured is the degree, the assortativity coefficient reduces to the standard Pearson correlation coefficient (Newman 2003). Specifically, let X and Y be the degrees of a pair of randomly selected neighboring vertices; the Pearson degree correlation function is the correlation coefficient of X and Y, i.e.

$$ \rho(X,Y)\overset{\text{def}}{=} \frac{\mathbf{\textsf{E}}(XY)-\mathbf{\textsf{E}}(X)\mathbf{\textsf{E}}(Y)}{\sigma_{X} \sigma_{Y}}, $$
(1)

where σX and σY denote the standard deviations of X and Y, respectively. We refer the reader to (Newman 2003; 2010; Litvak and van der Hofstad 2012; Xulvi-Brunet and Sokolov 2005; Nikoloski et al. 2005; Boguñá et al. 2003; Moreno et al. 2003) for more information on the assortativity coefficient and other related measures. In this paper we focus on degree as the vertex property.

Researchers have found that assortative mixing plays a crucial role in dynamic processes, such as information or disease spreading, taking place on the topology defined by a network. Boguñá et al. (2002) studied epidemic spreading in correlated networks. Boguñá et al. (2003) and Eguíluz (2002) studied the epidemic threshold in correlated networks with scale-free degree distributions. Moreno et al. (2003) proposed a numerical method to solve epidemic models in large correlated networks, where Monte Carlo simulations are difficult. Schläpfer et al. (2012) studied the propagation speed in a correlated scale-free network. Johnson et al. (2010) showed that disassortative networks arise naturally from maximum entropy principles. Braha et al. (2007, 2016) showed the important dynamic role of node-node correlations and assortativity in engineering directed networks. Pomerance et al. (2009) examined the role of assortative mixing in the context of genetic Boolean networks and the effect of assortativity on the stability of Boolean networks. Assortativity also has a fundamental impact on network resilience as the network loses vertices or edges (Vázquez and Moreno 2003). In order to study information propagation or network resilience, researchers may need to build models with assortative or disassortative mixing. Callaway et al. (2001) proposed a growing network in which, at each time step, a vertex is added and, with some probability, two randomly chosen vertices are connected by an edge. The authors showed that the network possesses positive degree correlation. Catanzaro et al. (2004) proposed a growing model based on preferential attachment and showed that the resulting network also possesses positive degree correlation. Zhou et al. (2008) proposed another growing network model, generated as assortatively as possible based on a greedy algorithm. The methods in Callaway et al. (2001), Catanzaro et al. (2004) and Zhou et al. (2008) have time complexity O(n), where n is the number of vertices. However, they cannot produce disassortatively mixed networks, and users have no explicit control over the level of correlation. Ramezanpour et al. (2005) analyzed the edge-dual graphs of configuration models. They showed that the edge-dual graphs possess non-zero degree correlations and large clustering coefficients. The time complexity of transforming a graph to its edge-dual graph is O(m), where m is the number of edges in the network. However, it does not seem possible to determine analytically whether the degree correlation is positive or negative. In addition, the degree distribution of the edge-dual graph cannot be specified independently. Newman (2003) and Xulvi-Brunet et al. (2005) proposed algorithms that generate networks with assortative or disassortative mixing by rewiring edges. These algorithms are iteration based and their execution times are difficult to control. Bassler et al. (2015) proposed an algorithm to construct a random graph for a given joint degree matrix. The time complexity of that algorithm is O(nm).

In this paper we propose a method to generate random networks that possess either the assortative mixing property or the disassortative mixing property. Our method is based on a modified construction method of the configuration model. Bender et al. (1978), Bollobás (1980) and Molloy et al. (1995; 1998) laid down a mathematical foundation for random networks with a given degree distribution. Newman et al. (2001) proposed a construction algorithm for this class of random graphs. Graphs so constructed are now commonly referred to as configuration models. Our modified construction method is as follows. Recall that in the construction of configuration models each edge has two “stubs”. We sort and arrange the stubs of all vertices according to their degrees. We divide the stubs into blocks and choose a permutation among the blocks. To connect stubs, each stub is either randomly connected to a stub in the associated block or randomly connected to any available stub. The details of our construction algorithm are presented in “Construction of a random network” section. An advantage of our method is that the specified degree distribution is preserved in the constructed networks. In addition, our method allows us to derive, under an assumption, a closed form for the Pearson degree correlation function of two random neighboring vertices. The time complexity of our construction algorithm is O(m).

In this paper we also present an application of the proposed random graph model: a percolation analysis of the generalized configuration model. Percolation has been a powerful tool to study network resilience under breakdowns or attacks. Cohen et al. (2000) studied the resilience of networks with scale-free degree distributions. In particular, Cohen et al. studied the stability of such networks, including the Internet, subject to random crashes. Percolation has also been used to study disease spreading in epidemic networks (Cardy and Grassberger 1985; Moore and Newman 2000; Sander et al. 2002) and to study the effectiveness of immunization or quarantine to confine a disease. Schwartz et al. (2002) studied percolation in a directed scale-free network. Newman (2002) and Vázquez et al. (2003) studied percolation in networks with degree correlation. Vázquez et al. considered general random networks; their solution involves the eigenvalues of a D×D matrix, where D is the number of distinct degrees in the network. The percolation analysis of our model involves solving a system of b nonlinear equations, where b is the number of blocks in the generalized configuration model. Since b is typically a small integer, this significantly reduces the complexity.

The rest of this paper is organized as follows. In “Construction of a random network” section we present our construction method for a random network. In “Joint distribution of degrees” section we derive a closed form for the joint degree distribution of two randomly selected neighboring vertices in a network constructed by the algorithm in “Construction of a random network” section. In “Assortativity and disassortativity” section, we show that the Pearson degree correlation function of two neighboring vertices is linear in q. We then show how the permutation function h should be selected so that a constructed random graph is assortatively or disassortatively mixed. In “An application: percolation” section we present a percolation analysis of this model. Numerical examples and simulation results are presented in “Numerical and simulation results” section. Finally, we give conclusions in “Conclusions” section.

Construction of a random network

Research on random networks was pioneered by Erdős and Rényi (1959). Although the Erdős-Rényi model allows researchers to study many network problems, it is limited in that the vertex degree asymptotically has a Poisson distribution as the network grows in size. The configuration model (Bender and Canfield 1978; Molloy and Reed 1995) can be considered an extension of the Erdős-Rényi model that allows general degree distributions. Configuration models have been used successfully to study the size of giant components, network resilience when vertices or edges are removed, and epidemic spreading on networks. We refer the readers to (Newman 2010) for more details. In this paper we propose an extension of the classical configuration model. This model generates networks with specified degree sequences. In addition, one can specify a positive or a negative degree correlation for the model.

Let there be n vertices and let pk be the probability that a randomly selected vertex has degree k. We sample the degree distribution {pk} n times to obtain a degree sequence k1,k2,…,kn for the n vertices. We give each vertex i a total of ki stubs. There are \(2m=\sum _{i=1}^{n} k_{i}\) stubs in total, where m is the number of edges of the network. In a classical configuration model, we randomly select an unconnected stub, say s, and connect it to another randomly selected unconnected stub in [1,2m]−{s}. We repeat this process until all stubs are connected. The resulting network can be viewed as a matching of the 2m stubs, and each possible matching occurs with equal probability. A consequence of this construction is that the degree correlation of a randomly selected pair of neighboring vertices is zero. To achieve nonzero degree correlation, we arrange the 2m stubs in ascending order (descending order would also work) according to the degrees of the vertices to which the stubs belong, and label the stubs accordingly. We partition the 2m stubs evenly into b blocks, where the integer b is selected such that 2m is divisible by b. Each block has 2m/b stubs; block i, where i=1,2,…,b, contains stubs (i−1)(2m/b)+j for j=1,2,…,2m/b. Next, we choose a permutation function h of {1,2,…,b}. If h(i)=j, we say that block j is associated with block i. In this paper we select h such that

$$h(h(i))=i, $$

i.e., blocks i and h(i) are mutually associated with each other. In each block, we randomly designate 2mq/b stubs as type 1 stubs, where q is a parameter in the range [0,1). The other stubs are designated as type 2 stubs. Randomly select an unconnected stub and suppose that this stub is in block i. If it is a type 1 stub, connect it to a randomly selected unconnected type 1 stub in block h(i). If it is a type 2 stub, connect it to a randomly selected unconnected type 2 stub in [1,2m]. We repeat this process until all stubs are connected. The construction algorithm is shown in Algorithm 1.
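
Algorithm 1 is not reproduced here. The following Python sketch illustrates the construction steps described above; the function name, the stub representation, and the handling of leftover stubs are our own choices and simplifications, not the authors' exact Algorithm 1.

```python
import random

def generalized_configuration_model(degrees, b, q, h):
    """Sketch of the construction described above.  `degrees` is the degree
    sequence, `b` the number of blocks, `q` the type-1 fraction, and `h` a
    permutation of {0,...,b-1} with h[h[i]] == i.  Returns an edge list;
    multi-edges and self edges may occur, as in the model."""
    # One entry per stub, sorted in ascending order of the owning vertex's degree.
    stubs = sorted((v for v, k in enumerate(degrees) for _ in range(k)),
                   key=lambda v: degrees[v])
    two_m = len(stubs)
    assert two_m % b == 0, "2m must be divisible by b"
    size = two_m // b
    block = lambda s: s // size                     # block index of stub s

    # Randomly designate about q*(2m/b) type-1 stubs in each block.
    type1 = [set(random.sample(range(i * size, (i + 1) * size), int(q * size)))
             for i in range(b)]

    free, edges = set(range(two_m)), []
    while len(free) >= 2:
        s = random.choice(tuple(free))
        if s in type1[block(s)]:
            # type-1 stub: connect to a free type-1 stub in the associated block h(i)
            pool = [t for t in free if t != s and t in type1[h[block(s)]]]
        else:
            # type-2 stub: connect to any free type-2 stub
            pool = [t for t in free if t != s and t not in type1[block(t)]]
        if not pool:                                # parity corner case: give up
            break
        t = random.choice(pool)
        free -= {s, t}
        edges.append((stubs[s], stubs[t]))
    return edges
```

For example, with b=2, q=0.5 and the identity permutation h=[0,1], stubs of low-degree vertices are preferentially matched to other low-degree stubs, which is the assortative case discussed in “Assortativity and disassortativity” section. The linear scans over the free set make this sketch quadratic in m; a production implementation would keep per-block pools of unconnected stubs to reach the O(m) complexity claimed above.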

We make a few remarks.

Remark 1

  1.

    First, note that in networks constructed by this algorithm, there are mq edges that have type-1 stubs at both ends. These edges create degree correlation in the network. On the other hand, there are m(1−q) edges that have type-2 stubs at both ends. These edges do not contribute to the degree correlation. We remark that the graphs produced by our construction algorithm preserve the user-specified degree distribution. Under an assumption to be stated in “Joint distribution of degrees” section, we shall derive a simple expression (Eq. (19) in Theorem 2) for the degree correlation coefficient of the constructed graphs. This expression allows users to specify a targeted level of correlation: q controls the magnitude of the correlation, while the permutation function h controls its sign, i.e., assortative mixing versus disassortative mixing.

  2.

    Recall that the ensemble of a random graph consists of all matchings of stubs. For configuration models, all matchings of stubs are equally likely, which implies that all graphs in the ensemble occur with the same probability. In contrast, the distribution of graphs produced by our construction algorithm is quite complicated and requires further investigation in future work. The ensemble of the generalized configuration model consists of groups of random networks, where each group corresponds to a particular assignment of types to stubs. Within a particular group of the ensemble, a randomly selected stub connects to another randomly selected stub in the associated block with probability q; with probability 1−q, it connects to a randomly selected stub in [1,2m].

  3.

    Note that standard configuration models can have multiple edges connecting two particular vertices, as well as edges connecting a vertex to itself. These are called multi-edges and self edges. In our constructed networks, multi-edges and self edges can also exist. However, it is not difficult to show that the expected density of multi-edges and self edges approaches zero as n becomes large. Due to space limitations, we do not address this issue in this paper.

    We also remark that, since we allow multi-edges and self edges, our construction algorithm is simple, efficient, and unbiased. If multi-edges and self edges were not allowed, the construction could either take an extremely long time or introduce a bias. Klein-Hennig et al. (2012) showed that such a bias can persist even as the network grows in size.

Joint distribution of degrees

Consider a randomly selected edge in a random network constructed by the algorithm described in “Construction of a random network” section. In this section we analyze the joint degree distribution of the two vertices at the two ends of the edge.

We randomly select a vertex and let Z be the degree of this vertex. Since the selection of vertices is random,

$$\Pr(Z=k_{i})=\frac{1}{n} $$

for i=1,2,…,n. Thus, the expectation of Z is

$$\mathbf{\textsf{E}}(Z)=\frac{\sum_{i=1}^{n} k_{i}}{n}= \frac{2m}{n}. $$

The expectation of Z can also be expressed as

$$ \mathbf{\textsf{E}}(Z)=\sum_{k=0}^{\infty} k p_{k}. $$
(2)

The expected total number of stubs in the network is nE(Z). We would like to allocate these stubs evenly into b blocks such that each block has nE(Z)/b stubs on average. To provide a rigorous mathematical analysis, we make the following assumption.

Assumption 1

The degree distribution {pk} is said to satisfy this assumption if one can find mutually disjoint sets H1,H2,…,Hb, such that

$$\bigcup_{i=1}^{b} H_{i}=\{0, 1, 2, \ldots\} $$

and

$$ \sum_{k\in H_{i}} k p_{k} =\mathbf{\textsf{E}}(Z)/b $$
(3)

for all i=1,2,…,b. In addition, we assume that the degree sequence k1,k2,…,kn sampled from the distribution {pk} can be evenly placed in b blocks. Specifically, there exist mutually disjoint sets H1,H2,…,Hb that satisfy

  1.

    \(\bigcup _{i=1}^{b} H_{i}=\{1, 2, \ldots, n\}\),

  2.

    \(k_{i}\ne k_{j}\) for any \(i\in H_{\ell _{1}}, j\in H_{\ell _{2}}, \ell _{1}\ne \ell _{2}\), and

  3.

    \(\sum _{j\in H_{i}} k_{j}=2m/b\) for all i=1,2,…,b.

We randomly select a stub in the range [1,2m] and denote this stub by t. Let v be the vertex with which stub t is associated, and let Y be the degree of v. Now connect stub t to a randomly selected stub according to the construction algorithm in “Construction of a random network” section, and denote this stub by s. Let u be the vertex with which s is associated, and let X be the degree of u. Since stub t is randomly selected from the range [1,2m], the distribution of Y is

$$ \Pr(Y=k)=\frac{n k p_{k}}{2m} = \frac{k p_{k}}{\mathbf{\textsf{E}}(Z)}, $$
(4)

where Z is the degree of a randomly selected vertex.

To study the joint pmf of X and Y, we first study the conditional pmf of X given Y and the marginal pmf of X. In the rest of this section, we assume that Assumption 1 holds. Suppose x is a degree in set Hi. The total number of stubs associated with vertices of degree x is nxpx, and by Assumption 1 all of these stubs are in block i. We consider two cases in which stub t connects to stub s. In the first case, stub t is of type 1, which occurs with probability q. In this case, stub t must belong to a vertex whose degree is in block h(i). With probability

$$ \frac{qnxp_{x}}{2mq/b-\delta_{i, h(i)}}, $$
(5)

the construction algorithm in “Construction of a random network” section connects stub t to stub s. In (5), δi,j is the Kronecker delta, which is equal to one if i=j and zero otherwise. In the second case, stub t is of type 2, which occurs with probability 1−q. In this case, stub t can be associated with a degree in any block. With probability

$$ \frac{(1-q)nx p_{x}}{2m(1-q)-1} $$
(6)

the construction algorithm connects stub t to stub s. Combining the two cases in (5) and (6), we have

$$ \Pr(X=x | Y=y) =\frac{q^{2} {nxp}_{x}}{2mq/b-\delta_{i, h(i)}}+ \frac{(1-q)^{2} nx p_{x}}{2m(1-q)-1}, $$
(7)

for y∈Hh(i). If y∈Hj for j≠h(i),

$$ \Pr(X=x | Y=y)= \frac{(1-q)^{2} {nxp}_{x}}{2m(1-q)-1}. $$
(8)

Now assume that the network is large. That is, we consider a sequence of constructed graphs in which n,m→∞ while keeping 2m/n=E(Z) fixed. Under this asymptotic regime, Eqs. (7) and (8) converge to

$$ \Pr(X=x | Y=y)\to \left\{\begin{array}{ll} \frac{qb+(1-q)}{\mathbf{\textsf{E}}(Z)}{xp}_{x}, & \quad y\in H_{h(i)}\\ \frac{1-q}{\mathbf{\textsf{E}}(Z)}{xp}_{x}, & \quad y\in H_{j}, j\ne h(i). \end{array}\right. $$
(9)

From the law of total probability we have

$$\begin{array}{@{}rcl@{}} \Pr(X=x)&=&\sum_{y\in H_{h(i)}} \Pr(X=x | Y=y)\Pr(Y=y)\\ &&\quad+\sum_{j\ne h(i)}\sum_{y\in H_{j}} \Pr(X=x | Y=y)\Pr(Y=y). \end{array} $$
(10)

Substituting (4) and (9) into (10), we have

$$ \Pr(X=x) =\sum_{y\in H_{h(i)}} \frac{qb+(1-q)}{\mathbf{\textsf{E}}(Z)}{xp}_{x} \frac{y p_{y}}{\mathbf{\textsf{E}}(Z)} +\sum_{j\ne h(i)}\sum_{y\in H_{j}} \frac{1-q}{\mathbf{\textsf{E}}(Z)}{xp}_{x}\frac{y p_{y}}{\mathbf{\textsf{E}}(Z)}. $$
(11)

Since the partition of stubs is uniform,

$$\sum_{y\in H_{j}} n y p_{y}=2m/b $$

and thus,

$$\sum_{y\in H_{j}} y p_{y}=\mathbf{\textsf{E}}(Z)/b $$

for any j=1,2,…,b. Substituting this into (11), we have

$$ \Pr(X=x)=\frac{x p_{x}}{\mathbf{\textsf{E}}(Z)}. $$
(12)

From (9) we derive the joint pmf of X and Y

$$\begin{array}{@{}rcl@{}} \Pr(X=x, Y=y) &=&\Pr(X=x | Y=y)\Pr(Y=y)\\ &=&\left\{\begin{array}{ll} \left(b q+1-q\right)\frac{x p_{x}}{\mathbf{\textsf{E}}(Z)} \frac{y p_{y}}{\mathbf{\textsf{E}}(Z)}, & \quad x\in H_{i}, y\in H_{h(i)}\\ (1-q)\frac{x p_{x}}{\mathbf{\textsf{E}}(Z)}\frac{y p_{y}}{\mathbf{\textsf{E}}(Z)}, & \quad x\in H_{i}, y\in H_{j}, j\ne h(i) \end{array}\right. \\ &=&C_{ij}\frac{xy p_{x} p_{y}}{(\mathbf{\textsf{E}}(Z))^{2}}, \end{array} $$
(13)

where

$$ C_{ij}=\left\{\begin{array}{ll} b q+1-q, & \quad h(i)=j\\ 1-q, & \quad h(i)\ne j. \end{array}\right. $$
(14)

We summarize the results in the following theorem.

Theorem 1

Let \({\mathcal {G}}\) be a graph generated by the construction algorithm described in “Construction of a random network” section based on a sequence of degrees k1,k2,…,kn. Randomly select an edge from \({\mathcal {G}}\). Let X and Y be the degrees of the two vertices at the two ends of the edge. Then, the marginal pmf of X and Y are given in (12) and (4), respectively. The joint pmf of X and Y is given in (13).
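
As a quick sanity check of Theorem 1, the following sketch evaluates the joint pmf (13) for a toy degree distribution that satisfies Assumption 1 with b=2 and verifies that it sums to one and that its marginal reduces to (12); the distribution, blocks and parameters are our own illustrative choices.

```python
from itertools import product

p = {1: 0.75, 3: 0.25}            # toy distribution: E(Z) = 1.5
H = [{1}, {3}]                    # degree blocks; each carries k*p_k mass 0.75 = E(Z)/2
b, q = 2, 0.5
h = [0, 1]                        # h(i) = i, so each block is associated with itself
EZ = sum(k * pk for k, pk in p.items())

def block_of(k):
    return next(i for i, Hi in enumerate(H) if k in Hi)

def C(i, j):                      # Eq. (14)
    return b * q + 1 - q if h[i] == j else 1 - q

def joint(x, y):                  # Eq. (13)
    return C(block_of(x), block_of(y)) * x * y * p[x] * p[y] / EZ ** 2

print(sum(joint(x, y) for x, y in product(p, p)))     # 1.0: a proper pmf
print({x: sum(joint(x, y) for y in p) for x in p})    # equals x*p_x/E(Z), Eq. (12)
```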

Finally, we present some remarks on Assumption 1.

Remark 2

  1.

    We first note that Assumption 1 is very restrictive and is assumed only for the sake of mathematical cleanness; distribution functions rarely satisfy it. Without Assumption 1, the probability masses of some degrees may straddle the boundaries of blocks, that is, part of a probability mass can be in one block and part in a neighboring block, and one needs to keep track of how the masses are split across boundaries. Without Assumption 1, all of the analyses reported in this paper can still be carried out; however, the results become very messy. This additional complexity not only offers no further insight, but may also hamper the readability of this paper. For degree sequences that do not satisfy Assumption 1, the analyses in “Joint distribution of degrees”, “Assortativity and disassortativity” and “An application: percolation” sections are only approximate. In “Numerical and simulation results” section, we compare simulation results of models constructed without Assumption 1 with the analytical results; we shall see that the difference is very small.

  2.

    We also remark that from (2) one can view

    $$ \tilde p_{k}=\frac{k p_{k}}{\mathbf{\textsf{E}}(Z)} $$

    as a probability mass function. Eq. (3) can equivalently be expressed as

    $$ \sum_{k\in H_{i}} \tilde p_{k}=1/b $$

    for all i=1,2,…,b. We can equivalently say that distribution \(\{\tilde p_{k}\}\) satisfies Assumption 1.

  3.

    Finally, we remark that a common way to generate stubs from a degree distribution is to first generate a sequence of uniform pseudo-random variables over [0,1] and then transform them using the inverse cumulative distribution function of the degree distribution (Bratley et al. 1987). This approach encounters difficulties as far as Assumption 1 is concerned, because the stubs produced are unlikely to be evenly allocated among the blocks. If the network is large, the following approach based on proportionality can be used instead. Specifically, for each degree k with probability mass pk, create npk vertices and the corresponding nkpk stubs. If n is large, the strong law of large numbers ensures that this approach and the inversion method produce approximately the same number of stubs. Using this approach, the probability masses of the degree distribution and the stubs sampled from it both satisfy Assumption 1 and can be placed evenly in blocks (a short sketch of this procedure follows this remark).
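
A minimal sketch of this proportionality-based generation, assuming that rounding n·pk to an integer is acceptable for the illustration (the toy distribution is our own choice):

```python
# For each degree k with mass p_k, create round(n * p_k) vertices, hence about
# n * k * p_k stubs.  For large n this allocates the stubs (nearly) evenly
# among the blocks of Assumption 1.
def degree_sequence_by_proportion(p, n):
    degrees = []
    for k, pk in sorted(p.items()):
        degrees.extend([k] * round(n * pk))
    return degrees

# Toy distribution with H1 = {1}, H2 = {3}: each block gets half of the stubs.
seq = degree_sequence_by_proportion({1: 0.75, 3: 0.25}, n=1000)
print(len(seq), sum(seq))   # 1000 vertices, 1500 stubs -> 750 stubs per block when b = 2
```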

Assortativity and disassortativity

In this section, we present an analysis of the Pearson degree correlation function of two random neighboring vertices. The goal is to find a permutation function h such that the numerator of (1) is non-negative (resp. non-positive) for the networks constructed by the algorithm in “Construction of a random network” section.

From (12), we obtain the expected value of X

$$ \mathbf{\textsf{E}}(X)=\sum_{x} x \Pr(X=x) =\sum_{i=1}^{b} \sum_{x\in H_{i}}\frac{x^{2} p_{x}}{\mathbf{\textsf{E}}(Z)} =\frac{1}{\mathbf{\textsf{E}}(Z)}\sum_{i=1}^{b} u_{i}, $$
(15)

where

$$ u_{i} \stackrel{\scriptstyle\rm def}{=} \sum_{x\in H_{i}} x^{2} p_{x}. $$
(16)

Now we consider the expected value of the product XY. We have from (13) that

$$\begin{array}{*{20}l} &\mathbf{\textsf{E}}(XY)=\sum_{x}\sum_{y} xy\Pr(X=x, Y=y) =\sum_{i=1}^{b}\sum_{j=1}^{b} \sum_{x\in H_{i}} \sum_{y\in H_{j}} \frac{C_{ij} x^{2} y^{2} p_{x} p_{y}}{(\mathbf{\textsf{E}}(Z))^{2}}\\ &=\sum_{i}\sum_{j} \frac{C_{ij}u_{i} u_{j}}{(\mathbf{\textsf{E}}(Z))^{2}} =\frac{1}{(\mathbf{\textsf{E}}(Z))^{2}}\left((1-q)\sum_{i}\sum_{j} u_{i} u_{j} + qb \sum_{i} u_{i} u_{h(i)}\right). \end{array} $$
(17)

Note from (15) and (17) that

$$ \mathbf{\textsf{E}}(XY)-\mathbf{\textsf{E}}(X)\mathbf{\textsf{E}}(Y) =\frac{q}{(\mathbf{\textsf{E}}(Z))^{2}}\Big (b\sum_{i} u_{i} u_{h(i)}- \sum_{i}\sum_{j} u_{i} u_{j} \Big). $$
(18)

Based on (18), we summarize the Pearson degree correlation function in the following theorem.

Theorem 2

Let \({\mathcal {G}}\) be a graph generated by the construction algorithm in “Construction of a random network” section. Randomly select an edge from the graph. Let X and Y be the degrees of the two vertices at the two ends of this edge. Then, the Pearson degree correlation function of X and Y is

$$ \rho(X, Y)=cq, $$
(19)

where

$$c=\frac{b\sum_{i} u_{i} u_{h(i)}- \sum_{i}\sum_{j} u_{i} u_{j}}{\sigma_{X}\sigma_{Y}(\mathbf{\textsf{E}}(Z))^{2}}, $$

and σX and σY are the standard deviations of the pmfs in (12) and (4), respectively.

In view of (19), the sign of ρ(X,Y) depends on the constant c. To generate assortatively (resp. disassortatively) mixed random graphs, we first sort the ui’s in descending order and then choose the permutation h that pairs the largest ui’s with the largest (resp. smallest) ui’s. This is formally stated in the following corollary.

Corollary 1

Let π(·) be the permutation such that uπ(i) is the ith largest number among ui,i=1,2,…,b, i.e.,

$$u_{\pi(1)} \ge u_{\pi(2)} \ge \ldots \ge u_{\pi(b)}. $$

(i) If we choose the permutation h with h(π(i))=π(i) for all i, then the constructed random graph is assortative mixing. (ii) If we choose the permutation h with h(π(i))=π(b+1−i) for all i, then the constructed random graph is disassortative mixing.

The proof of Corollary 1 is based on the famous Hardy, Littlewood and Pólya rearrangement inequality (see e.g., the book Marshall et al. (2011), pp. 141).

Proposition 1

(Hardy, Littlewood and Pólya rearrangement inequality) Let ui,vi,i=1,2,…,b be two sets of real numbers, and let u[i] (resp. v[i]) be the ith largest number among ui,i=1,2,…,b (resp. vi,i=1,2,…,b). Then

$$ \sum_{i=1}^{b} u_{[i]} v_{[b-i+1]} \le \sum_{i=1}^{b} u_{i} v_{i} \le \sum_{i=1}^{b} u_{[i]} v_{[i]}. $$
(20)

Proof

(Corollary 1) (i) Consider the circular shift permutation σj(·) with σj(i)=(i+j−1 mod b)+1 for j=1,2,…,b. From symmetry, we have σj(i)=σi(j). Thus,

$$ \sum_{i=1}^{b} \sum_{j=1}^{b} u_{i} u_{j}=\sum_{i=1}^{b} \sum_{j=1}^{b} u_{i} u_{\sigma_{i}(j)}= \sum_{j=1}^{b} \sum_{i=1}^{b} u_{i} u_{\sigma_{j}(i)}. $$
(21)

Using the upper bound of the Hardy, Littlewood and Pólya rearrangement inequality in (21) and h(π(i))=π(i) yields

$$ \sum_{i=1}^{b} u_{i} u_{\sigma_{j}(i)} \le \sum_{i=1}^{b} u_{[i]} u_{[i]} = \sum_{i=1}^{b} u_{\pi(i)} u_{h(\pi(i))} = \sum_{i=1}^{b} u_{i} u_{h(i)}. $$
(22)

In view of (18) and (21), we conclude that the generated random graph is assortative mixing.

(ii) Using the lower bound of the Hardy, Littlewood and Pólya rearrangement inequality in (21) and h(π(i))=π(b+1−i) yields

$$ \sum_{i=1}^{b} u_{i} u_{\sigma_{j}(i)}\ge \sum_{i=1}^{b} u_{[i]} u_{[b+1-i]} =\sum_{i=1}^{b} u_{\pi(i)} u_{h(\pi(i))} = \sum_{i=1}^{b} u_{i} u_{h(i)}. $$
(23)

In view of (18) and (21), we conclude that the generated random graph is disassortative mixing. □
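
To make Theorem 2 and Corollary 1 concrete, the sketch below computes the ui of (16) for a toy distribution, builds the assortative and disassortative permutations of Corollary 1, and evaluates ρ(X,Y)=cq; the example distribution and parameters are our own choices.

```python
p = {1: 0.75, 3: 0.25}                 # toy distribution, E(Z) = 1.5
H = [{1}, {3}]                         # blocks satisfying Assumption 1
b, q = 2, 0.5
EZ = sum(k * pk for k, pk in p.items())

u = [sum(k ** 2 * p[k] for k in Hi) for Hi in H]       # Eq. (16)

# Corollary 1: pair the largest u_i with the largest (assortative) or with the
# smallest (disassortative) u_i.
order = sorted(range(b), key=lambda i: -u[i])          # pi: blocks by descending u_i
h_assort = {i: i for i in range(b)}
h_disassort = {order[i]: order[b - 1 - i] for i in range(b)}

# sigma_X = sigma_Y: standard deviation of the size-biased pmf in (12)/(4).
tilde = {k: k * pk / EZ for k, pk in p.items()}
mean = sum(k * w for k, w in tilde.items())
var = sum(k ** 2 * w for k, w in tilde.items()) - mean ** 2

def rho(h):                                            # Eqs. (18)-(19)
    num = b * sum(u[i] * u[h[i]] for i in range(b)) - sum(u) ** 2
    return q * num / (var * EZ ** 2)

print(rho(h_assort), rho(h_disassort))                 # +0.5 and -0.5 for this example
```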

An application: percolation

In this section we present a percolation analysis of the generalized configuration model.

We consider node percolation of a random network with n vertices. Recall that we define Z to be the degree of a randomly selected vertex in the network. Let pk= Pr(Z=k) be given and let E(Z) be the expected value of Z.

Let ϕ be the probability that a node stays in the network after the percolation. That is, 1−ϕ is the probability that a node is removed from the network. In the percolation literature, ϕ is called the occupation probability. We assume that ϕ∈(0,1). Let αi be the probability that, along an edge with one end attached to a stub in block i, one cannot reach a giant component. Let ηi be the probability that a randomly selected vertex from block i is in a giant component after the random removal of vertices. Then,

$$ \eta_{i}=\phi\sum_{k\in H_{i}} p_{k} \left(1-\alpha_{i}^{k}\right). $$
(24)

Let η be the probability that a randomly selected vertex is in a giant component after the random removal of vertices. Then,

$$ \eta=\sum_{i=1}^{b} \eta_{i} \sum_{k\in H_{i}} p_{k}. $$
(25)

We now derive a set of equations for αi,i=1,2,…,b. We randomly select an edge. Call this edge e. Let D be the event that e does not connect to a giant component. Let Bi be the event that one end of this edge is associated with a stub in block i. Suppose that the other end of e is attached to a vertex called v. Then by the law of total probability we have

$$ \Pr(D | B_{i})=\sum_{j=1}^{b}\sum_{k=1}^{\infty} \Pr(D | Y=k, B_{j}, B_{i}) \Pr(Y=k, B_{j} | B_{i}), $$
(26)

where Y is the degree of v and Bj is the event that vertex v is in block j. According to (9), we have

$$ \Pr(Y=k, B_{j} | B_{i})=\left\{\begin{array}{ll} \frac{qb+(1-q)}{\mathbf{\textsf{E}}(Z)}k p_{k}, & \quad k\in H_{h(i)},\\ \frac{1-q}{\mathbf{\textsf{E}}(Z)}k p_{k}, & \quad k\in H_{j}, j\ne h(i). \end{array}\right. $$
(27)

If vertex v is removed from the network through percolation, then edge e does not lead to a giant component; this occurs with probability 1−ϕ. With probability ϕ, vertex v is not removed. Conditioning on Y=k, edge e does not lead to a giant component if none of the other k−1 edges of v leads to a giant component. In addition, conditioning on Bj, event D is independent of event Bi. Combining these facts, we have

$$\begin{array}{@{}rcl@{}} \Pr(D | Y=k, B_{j}, B_{i})&=& \Pr(D | Y=k, B_{j}) \\ &=& 1-\phi+\phi \alpha_{j}^{k-1}. \end{array} $$
(28)

Substituting (27) and (28) into (26), we have

$$\begin{array}{@{}rcl@{}} \alpha_{i} &=&\sum_{k\in H_{h(i)}}\left(1-\phi+\phi \alpha_{h(i)}^{k-1}\right) \frac{(bq+1-q)k p_{k}}{\mathbf{\textsf{E}}(Z)}\\ &&\quad+\sum_{j=1,j\ne h(i)}^{b} \sum_{k\in H_{j}}\left(1-\phi+\phi \alpha_{j}^{k-1}\right) \frac{(1-q)k p_{k}}{\mathbf{\textsf{E}}(Z)}. \end{array} $$
(29)

Let

$$ g_{i}(x)=\sum_{k\in H_{i}}\frac{k p_{k} x^{k-1}}{\mathbf{\textsf{E}}(Z)} $$
(30)

for i=1,2,…,b. Collecting the constant terms, we rewrite (29) in terms of the functions gi, i.e.

$$ \alpha_{i} = 1-\phi+\phi\left((bq+1-q)g_{h(i)}(\alpha_{h(i)}) +(1-q)\sum_{j=1,j\ne h(i)}^{b} g_{j}(\alpha_{j})\right). $$
(31)

Expressing (31) in the form of vectors, we have

$$ {\boldsymbol{\alpha}}={\boldsymbol{{f}}}({\boldsymbol{\alpha}}), $$
(32)

where α is a vector in [0,1]b and f is a vector function that maps from [0,1]b to [0,1]b. In this section, we use boldface letters to denote vectors. The i-th entry of f(α) is denoted by

$$ f_{i}({\boldsymbol{\alpha}})= 1-\phi+\phi\left((bq+1-q)g_{h(i)}(\alpha_{h(i)}) +(1-q)\sum_{j=1, j\ne h(i)}^{b} g_{j}(\alpha_{j})\right). $$
(33)

Solutions of (32) are called the fixed points of the function f.

Note that αi=1 for all i=1,2,…,b, is always a root of (32). Denote point (1,1,…,1) by 1. We are searching for a condition under which α=1 is the only solution of (32) in [0,1]b, and a condition under which (32) has additional solutions. Define

$$ {\boldsymbol{{J}}}({\boldsymbol{{a}}})= \left.\left(\begin{array}{cccc} \frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{1}} & \frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{2}} & \ldots & \frac{\partial f_{1}(\boldsymbol{x})}{\partial x_{b}}\\ \frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{1}} & \frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{2}} & \ldots & \frac{\partial f_{2}(\boldsymbol{x})}{\partial x_{b}}\\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial f_{b}(\boldsymbol{x})}{\partial x_{1}} & \frac{\partial f_{b}(\boldsymbol{x})}{\partial x_{2}} & \ldots & \frac{\partial f_{b}(\boldsymbol{x})}{\partial x_{b}} \end{array}\right)\right|_{\boldsymbol{x}={\boldsymbol{a}}}, $$
(34)

where a=(a1,a2,…,ab) is a point in [0,1]b. Matrix J(a) is called the Jacobian matrix of function f(x). For function f defined in (33), the Jacobian matrix has the following form

$$ {\boldsymbol{{J}}}({\boldsymbol{{a}}})=\phi (b q {\boldsymbol{{H}}}+(1-q){\boldsymbol{1}}_{b\times b}) {\boldsymbol{{D}}}\{g_{1}'(a_{1}), g_{2}'(a_{2}), \ldots, g_{b}'(a_{b})\}, $$
(35)

where 1b×b is the b×b matrix of all ones, and D{g1′(a1),g2′(a2),…,gb′(ab)} is a diagonal matrix. In (35), matrix H is a permutation matrix whose (i,j) entry is one if j=h(i), and is zero otherwise. Let ϕλ1,ϕλ2,…,ϕλb be the eigenvalues of J(1) with

$$|\lambda_{1}|\ge |\lambda_{2}|\ge \ldots\ge |\lambda_{b}|. $$

Since gj is a power series with non-negative coefficients for all j, gj′ is strictly increasing and gj′(1)>0. Thus, J(1) is a positive matrix. According to the Perron-Frobenius theorem (Meyer 2000; Lancaster and Tismenetsky 1985), ϕλ1 is real, positive and strictly larger than ϕλ2 in absolute value. In addition, there exists an eigenvector v associated with the dominant eigenvalue that is positive component-wise.

The existence of roots of (32) is summarized in the following main result.

Theorem 3

Let

$$ \phi^{\star}=1/\lambda_{1}. $$
(36)

The solution of (32) can be in one of two cases.

  1.

    If 0<ϕ<ϕ⋆, point 1 is an attracting fixed point. In addition, it is the only fixed point in [0,1]b.

  2.

    If ϕ⋆<ϕ<1, point 1 is either a repelling fixed point or a saddle point of the function f in (32). There exists another fixed point in [0,1)b. This additional fixed point is an attracting fixed point.

The proof of Theorem 3 is presented in the Appendix. Note that in case 1 of Theorem 3, the only root is α=1. From (24), ηi=0 for all i=1,2,…,b. It follows that η=0 and the network has no giant component. In case 2, the network has a giant component whose size is determined by the additional fixed point.
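
The computation behind Theorem 3 can be sketched as follows for a toy distribution satisfying Assumption 1: build the gi of (30), form the Jacobian of (35), obtain ϕ⋆=1/λ1 as in (36), and, for ϕ>ϕ⋆, iterate (37) from α=0 to the attracting fixed point and evaluate the ηi of (24). The distribution, parameters and iteration count are our own choices, and numpy is assumed for the eigenvalue computation.

```python
import numpy as np

p = {0: 1/3, 2: 1/2, 6: 1/6}            # toy distribution, E(Z) = 2
H = [{0, 2}, {6}]                       # blocks: each carries k*p_k mass 1 = E(Z)/2
b, q = 2, 0.5
h = [1, 0]                              # disassortative permutation, h(h(i)) = i
EZ = sum(k * pk for k, pk in p.items())

def g(i, x):                            # Eq. (30)
    return sum(k * p[k] * x ** (k - 1) for k in H[i] if k >= 1) / EZ

def g_prime(i, x):
    return sum(k * (k - 1) * p[k] * x ** (k - 2) for k in H[i] if k >= 2) / EZ

def f(alpha, phi):                      # Eq. (33), one entry per block
    return np.array([1 - phi + phi * ((b * q + 1 - q) * g(h[i], alpha[h[i]])
                     + (1 - q) * sum(g(j, alpha[j]) for j in range(b) if j != h[i]))
                     for i in range(b)])

# J(1)/phi from Eq. (35); its largest eigenvalue is lambda_1, so phi* = 1/lambda_1 (Eq. (36)).
Hmat = np.zeros((b, b))
Hmat[np.arange(b), h] = 1               # permutation matrix of h
J1_over_phi = (b * q * Hmat + (1 - q) * np.ones((b, b))) @ np.diag([g_prime(i, 1.0) for i in range(b)])
phi_star = 1 / max(abs(np.linalg.eigvals(J1_over_phi)))
print("phi* =", phi_star)               # 0.4 for this example

# Above the threshold, iterate (37) from alpha = 0 to the attracting fixed point,
# then evaluate eta_i of Eq. (24).
phi = 0.9
alpha = np.zeros(b)
for _ in range(2000):
    alpha = f(alpha, phi)
eta_i = [phi * sum(p[k] * (1 - alpha[i] ** k) for k in H[i]) for i in range(b)]
print("alpha =", alpha, "eta_i =", eta_i)
```

For this toy example, repeating the computation with the assortative permutation h=[0,1] gives a smaller ϕ⋆, consistent with the observation in “Numerical and simulation results” section that giant components emerge more easily in assortative networks.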

We first study the behavior of f in the neighborhood of 1. We consider the following iteration

$$ {\boldsymbol{{x}}}_{n+1}={\boldsymbol{{f}}}({\boldsymbol{{x}}}_{n}),\qquad n=0,1, 2, \ldots $$
(37)

where the initial vector x0 is in the neighborhood of the fixed point 1. Assume that gi(x) can be linearized, i.e. gi(x) can be approximated by keeping two terms in its Taylor expansion around one

$$ g_{i}(x)\approx g_{i}(1)+g_{i}'(1)(x-1) $$
(38)

for all i=1,2,…,b. Now substituting (38) into (37) and noting that

$$\begin{array}{@{}rcl@{}} &&g_{i}(1) =1/b\\ && (bq+1-q)g_{h(i)}(1)+(1-q)\sum_{j=1, j\ne h(i)}^{b} g_{j}(1)=1. \end{array} $$

for all i=1,2,…,b, we obtain the following matrix equation

$$ {\boldsymbol{{x}}}_{n+1}-{\boldsymbol{1}} = {\boldsymbol{{J}}}({\boldsymbol{1}}) ({\boldsymbol{{x}}}_{n} -{\boldsymbol{1}}), $$
(39)

where we recall that J(1) is the Jacobian matrix stated in (35). Substituting (39) repeatedly into itself, we obtain

$${\boldsymbol{{x}}}_{n}-{\boldsymbol{1}}=({\boldsymbol{{J}}}({\boldsymbol{1}}))^{n} ({\boldsymbol{{x}}}_{0}-{\boldsymbol{1}}). $$

If the dominant eigenvalue satisfies ϕλ1<1, then xn−1→0 and 1 is an attracting fixed point. If all eigenvalues are greater than one in absolute value, xn moves away from 1; in this case, 1 is a repelling fixed point. Suppose that some eigenvalues are greater than one and some are less than one in absolute value. In this case, point 1 is called a saddle point. Point xn is attracted to 1 if x0−1 is a linear combination of the eigenvectors associated with eigenvalues smaller than one in absolute value; otherwise, xn moves away from 1.

Numerical and simulation results

We report our simulation results in this section. Recall that we derive the degree covariance of two neighboring vertices based on Assumption 1. Assumption 1 is extremely restrictive. For degree sequences that do not satisfy Assumption 1, the analyses in “Joint distribution of degrees” and “Assortativity and disassortativity” sections are only approximate. In this section, we compare simulation results with the analytical results in “Assortativity and disassortativity” section.

We have simulated the construction of networks with 4000 vertices. We use the batch mean simulation method to control the simulation variance. Specifically, each simulation is repeated 100 times to obtain 100 graphs, and Eq. (1) is applied to compute the assortativity coefficient of each graph. One average is computed for every twenty repetitions, and ninety percent confidence intervals are computed from the five resulting averages. We have performed an extensive number of simulations for uniform and Poisson degree distributions and found that the simulated Pearson degree correlation coefficients agree extremely well with (19) for a wide range of b and q. Due to space limitations, we do not present these results in the paper. We have also simulated power-law degree distributions. Specifically, we assume that the exponent of the power-law distribution is negative two, i.e., pk∝k−2 for large k. We first fix b at six. The degree correlations for power-law degree distributions are shown in Figs. 1 and 2 for an assortatively mixed network and a disassortatively mixed network, respectively. The discrepancy between the simulation result and the analytical result is quite noticeable in Fig. 1 when q is large, while the two results agree very well in Fig. 2. This is because power-law distributions can generate very large sample values for degrees, so Assumption 1 may fail in this case. We then decrease b to two, which increases the block size. The corresponding Pearson degree correlation function for an assortatively mixed network is presented in Fig. 3. One can see that the approximation accuracy improves dramatically as the block size increases.
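
A sketch of this estimation procedure, assuming a graph generator that returns an edge list (for instance the construction sketch given earlier); the function names and the t-quantile are our own choices.

```python
import statistics

def pearson_degree_correlation(edges, degrees):
    """Estimate the assortativity coefficient (1) from an edge list: X and Y are
    the degrees at the two ends of a randomly chosen edge, counted in both
    orientations so that the estimate is symmetric."""
    xs = [degrees[u] for u, v in edges] + [degrees[v] for u, v in edges]
    ys = [degrees[v] for u, v in edges] + [degrees[u] for u, v in edges]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

def batch_mean_ci(coefficients, batches=5, t=2.132):
    """90% confidence interval from batch means; with 100 per-graph coefficients
    this reproduces the five averages of twenty repetitions described above
    (t = 2.132 is the 95th percentile of the t distribution with 4 dof)."""
    size = len(coefficients) // batches
    means = [statistics.mean(coefficients[i * size:(i + 1) * size]) for i in range(batches)]
    center = statistics.mean(means)
    half = t * statistics.stdev(means) / batches ** 0.5
    return center - half, center + half
```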

Fig. 1

Degree correlation of an assortative model. Power-law degree distribution and b=6

Fig. 2

Degree correlation of a disassortative model. Power-law degree distribution and b=6

Fig. 3

Degree correlation of an assortative model. Power-law degree distribution and b=2

For the percolation analysis, we study the critical value of ϕ. We assume that degrees are geometrically distributed. However, geometric distributions do not satisfy Assumption 1, and Assumption 1 is essential here: without it, 1 is not a fixed point and the numerical calculations would fail. We therefore adjust the probability masses to make Assumption 1 hold. We illustrate this modification for the b=2 case. We start with a geometric degree distribution (1−p)pk, where k=0,1,…, and p=2/3. The corresponding E(Z)=2. We thus have

$$\tilde p_{k}=k (1-p)p^{k}/2,\quad k=0, 1, \ldots $$

We move part of the probability mass from \(\tilde p_{4}\) to \(\tilde p_{5}\). After this modification, the distribution \(\{\tilde p_{k}\}\) becomes

$$ \tilde p_{k}=\left\{\begin{array}{ll} k(1-p)p^{k}/2 & \text{\(k\ge 0, k\ne 4, k\ne 5\);}\\ 2(1-p)p^{4}-0.0782 & \text{if \(k=4\);}\\ 5(1-p)p^{5}/2 +0.0782 & \text{if \(k=5\)}. \end{array}\right. $$
(40)

Let H1={0,1,2,3,4} and H2={k:k≥5}. It is easy to verify that \(\{\tilde p_{k}\}\) satisfies Assumption 1.
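
The adjustment can also be checked numerically. The sketch below truncates the geometric distribution, computes how much the k·pk mass of H1={0,…,4} exceeds its target E(Z)/2 in Eq. (3), and shifts that excess from degree 4 to degree 5; the truncation point is our own choice.

```python
p_geo = 2 / 3
K = 200                                              # truncation of the support
pk = {k: (1 - p_geo) * p_geo ** k for k in range(K)}
EZ = sum(k * v for k, v in pk.items())               # very close to 2

# Excess of block H1 = {0,...,4} over its target share E(Z)/b in Eq. (3).
excess = sum(k * pk[k] for k in range(5)) - EZ / 2   # about 0.0782
kpk = {k: k * v for k, v in pk.items()}
kpk[4] -= excess                                     # shift the excess k*p_k mass
kpk[5] += excess                                     # from degree 4 to degree 5

print(round(excess, 4))                              # 0.0782
print(round(sum(kpk[k] for k in range(5)) / EZ, 4))  # 0.5: Eq. (3) now holds for H1
```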

We study b=2 and b=3. Note that (40) is the modification for b=2; for b=3, one needs to adjust two probability masses, and we omit the details for space reasons. In both cases, we study the two permutations of blocks suggested in “Assortativity and disassortativity” section for assortative and disassortative mixing. For assortative networks, h(i)=i. For disassortative networks, h(i)=b+1−i. In the case of b=3, we have also studied a rotational permutation, i.e., h(i)=((i+1) mod b)+1. The critical values of ϕ obtained using (36) are shown in Table 1. We also calculate the critical values of ϕ numerically: we gradually decrease ϕ until (32) fails to have a solution in the interior of [0,1)b. From these results, we see that the critical values of ϕ obtained from (36) agree very well with those obtained numerically.

Table 1 Critical values of ϕ

Finally, we study the giant component sizes of the generalized configuration models. We numerically solve (32) to obtain the vector α, and then compute η using (25). In this study, we continue to assume that degrees are geometrically distributed, as in the study of Table 1. The giant component sizes are shown in Fig. 4. From this figure, we see that assortative networks have smaller percolation thresholds than disassortative networks; hence, giant components emerge more easily in assortative networks. However, disassortative networks tend to have larger giant component sizes than assortative networks for large ϕ. The effect of assortativity and disassortativity on the giant component sizes and the percolation thresholds observed in this example agrees with that observed in Newman (2002). As for the effect of q, larger values of q decrease the percolation thresholds and the giant component sizes of assortative networks, while larger values of q increase the percolation thresholds and the giant component sizes of disassortative networks.

Fig. 4

Size of the giant component vs. ϕ

Conclusions

In this paper we have presented an extension of the classical configuration model. Like the classical configuration model, the extended configuration model allows users to specify an arbitrary degree distribution. In addition, the model allows users to specify a positive or a negative assortativity coefficient, and we derived a closed form for the assortativity coefficient of this model. We also presented a percolation analysis of the model. We verified our results with simulations.

Appendix

In this appendix we prove Theorem 3. To achieve this, we need a matrix version of the mean value theorem. We state the result in the following lemma.

Lemma 1

Suppose that x and y are two points in [0,1]b. Then, there exist constants ci in the open intervals (min(xi,yi), max(xi,yi)), such that

$$ {\boldsymbol{{f}}}({\boldsymbol{{x}}})-{\boldsymbol{{f}}}({\boldsymbol{{y}}})={\boldsymbol{{J}}}({\boldsymbol{{c}}})({\boldsymbol{{x}}} - {\boldsymbol{{y}}}), $$
(41)

where c=(c1,c2,…,cb).

Proof of Lemma 1. Suppose that x and y are two points in [0,1]b. Consider

$$\begin{array}{@{}rcl@{}} f_{i}({\boldsymbol{{x}}})-f_{i}({\boldsymbol{{y}}})&=&\phi(bq+1-q)\left(g_{h(i)}(x_{h(i)})-g_{h(i)}(y_{h(i)})\right) \\ &&\ +\phi(1-q)\sum_{j=1, j\ne h(i)}^{b} \left(g_{j}(x_{j})-g_{j}(y_{j})\right). \end{array} $$
(42)

Since function gj is continuous and differentiable in (0,1), by the mean value theorem there is a cj, where min(xj,yj)<cj< max(xj,yj), such that

$$ g_{j}'(c_{j})=\frac{g_{j}(x_{j})-g_{j}(y_{j})}{x_{j}- y_{j}} $$
(43)

for all j. Substituting (43) into (42) and expressing (42) in matrix form, we immediately prove (41).

The proof of Theorem 3 also needs the Poincaré-Miranda Theorem, which is a generalization of the intermediate value theorem. We quote the Poincaré-Miranda Theorem from (Kulpa 1997). Let Ib=[0,1]b be the b-dimensional cube of the Euclidean space Rb. For each i≤b denote

$$I_{i}^{-} \overset{\text{def}}{=} \{{\boldsymbol{{x}}}\in I^{b} : x_{i}=0\},\qquad I_{i}^{+}\overset{\text{def}}{=}\{{\boldsymbol{{x}}}\in I^{b} : x_{i}=1\} $$

the i-th opposite faces.

Proposition 2

(Poincaré-Miranda Theorem) Let f:IbRb,f=(f1,f2,…,fb), be a continuous map such that for each \(i\le b, f_{i}(I_{i}^{-}) \subset (-\infty, 0]\) and \(f_{i}(I_{i}^{+}) \subset [0, +\infty)\). Then, there exists a point cIb such that f(c)=0.

Now we prove Theorem 3.

Proof

(Theorem 3) We first analyze the first case in Theorem 3. We have shown that fixed point 1 is attracting. We now show that there is no other fixed point in [0,1]b. Suppose, on the contrary, that there is another distinct fixed point, and denote it by x. From Lemma 1, we have

$$ {\boldsymbol{1}}-{\boldsymbol{{x}}}={\boldsymbol{{J}}}({\boldsymbol{{c}}})({\boldsymbol{1}}-{\boldsymbol{{x}}}). $$
(44)

Since gi is a power series with non-negative coefficients, gi is monotonically increasing, differentiable and gi′ is also increasing. Thus,

$$\begin{array}{@{}rcl@{}} {\boldsymbol{{J}}}({\boldsymbol{{c}}}) &=& \phi(bq{\boldsymbol{{H}}}+(1-q){\boldsymbol{1}}_{b\times b}){\boldsymbol{{D}}}\{g_{1}'(c_{1}), g_{2}'(c_{2}), \ldots, g_{b}'(c_{b})\} \\ &\le& \phi(bq\boldsymbol{H}+(1-q)\boldsymbol{1}_{b\times b})\boldsymbol{D}\{g_{1}'(1), g_{2}'(1), \ldots, g_{b}'(1)\}\\ &=& \boldsymbol{J}(\boldsymbol{1}) \end{array} $$
(45)

component-wise. Inequality (45) is due to the fact that H,1b×b and the two diagonal matrices are all non-negative. Substituting the inequality above into (44), we have

$${\boldsymbol{1}}-{\boldsymbol{{x}}} \le {\boldsymbol{{J}}}({\boldsymbol{1}})({\boldsymbol{1}}-{\boldsymbol{{x}}}). $$

Substituting the last inequality repeatedly into itself, we have

$${\boldsymbol{1}}-{\boldsymbol{{x}}}\le {\boldsymbol{{J}}}({\boldsymbol{1}})^{n} ({\boldsymbol{ 1}}-{\boldsymbol{{x}}}) \to {\boldsymbol{0}}, $$

as n→∞, since the dominant eigenvalue of J(1) is strictly less than one. We thus reach a contradiction to the assumption that x is distinct from 1.

Now we consider the second case. We first show that there exists a point x in [0,1]b such that

$${\boldsymbol{{x}}}-{\boldsymbol{{f}}}({\boldsymbol{{x}}})\ge {\boldsymbol{0}}. $$

Denote such a point by η. We choose

$$ {\boldsymbol{\eta}}={\boldsymbol{1}}-\epsilon{\boldsymbol{{v}}}, $$
(46)

where ε is a small positive number and v is the eigenvector of J(1) associated with the dominant eigenvalue ϕλ1. For small ε, we have

$${\boldsymbol{{f}}}({\boldsymbol{\eta}})={\boldsymbol{{f}}}({\boldsymbol{1}}-\epsilon{\boldsymbol{{v}}})\approx {\boldsymbol{{f}}}({\boldsymbol{1}})-{\boldsymbol{{J}}}({\boldsymbol{1}})(\epsilon{\boldsymbol{{v}}}) ={\boldsymbol{1}}-{\boldsymbol{{J}}}({\boldsymbol{1}})(\epsilon{\boldsymbol{{v}}}). $$

It follows from the above equation that

$$ {\boldsymbol{\eta}}-{\boldsymbol{{f}}}({\boldsymbol{\eta}})\approx ({\boldsymbol{{J}}}({\boldsymbol{ 1}})-{\boldsymbol{{I}}})(\epsilon{\boldsymbol{{v}}}), $$
(47)

where I is the b×b identity matrix. Since v is an eigenvector of J(1) associated with ϕλ1, (47) reduces to

$${\boldsymbol{\eta}}-{\boldsymbol{{f}}}({\boldsymbol{\eta}})=(\phi\lambda_{1}-1)\epsilon{\boldsymbol{{v}}}. $$

Since ϕλ1>1 and v>0 entry-wise, we have

$$ {\boldsymbol{\eta}}-{\boldsymbol{{f}}}({\boldsymbol{\eta}})>{\boldsymbol{0}} $$
(48)

for some ε>0.

Next we shall show that (32) has another fixed point in [0,1)b. To apply Proposition 2, we transform system (32) by a change of variables. Specifically, for any xi∈[0,ηi], where ηi is the i-th entry of η defined in (46), we define yi=xi/ηi for i=1,2,…,b. We then define the function F:[0,1]b→Rb, whose i-th entry is

$$F_{i}({\boldsymbol{{y}}})=\eta_{i} y_{i} - f_{i}(\eta_{1} y_{1}, \eta_{2} y_{2}, \ldots, \eta_{b} y_{b}). $$

We now show that for any \({\boldsymbol {{y}}}\in I_{i}^{-}\),

$$\begin{array}{@{}rcl@{}} &&F_{i}({\boldsymbol{{y}}})\\ &&=-f_{i}(\eta_{1} y_{1}, \eta_{2} y_{2}, \ldots, \eta_{b} y_{b})\\ &&=-(1-\phi)-\phi\left((bq+1-q)g_{h(i)}(\eta_{h(i)}y_{h(i)})+(1-q)\sum_{j\ne h(i)} g_{j}(\eta_{j} y_{j})\right)\\ &&\le 0, \end{array} $$

since gj(ηjyj)≤1/b for all j. Next, consider y in \(I_{i}^{+}\). In this case,

$$\begin{array}{@{}rcl@{}} F_{i}({\boldsymbol{{y}}})&=&\eta_{i}-f_{i}(\eta_{1} y_{1}, \ldots,\eta_{i-1} y_{i-1}, \eta_{i}, \eta_{i+1} y_{i+1},\ldots, \eta_{b} y_{b})\\ &\ge&\eta_{i} -f_{i}(\eta_{1},\ldots, \eta_{i-1},\eta_{i}, \eta_{i+1},\ldots, \eta_{b}) \end{array} $$
(49)
$$\begin{array}{@{}rcl@{}} &\ge& 0, \end{array} $$
(50)

where (49) follows from the monotonicity of gj for all j, and (50) follows from (48). From Proposition 2, F(y)=0 has a root in [0,1]b. Equivalently, (32) has a root in [0,1)b. We denote this root by z.


We now show that fixed point z is attracting. From (41) since both 1 and z are fixed points, we have

$$ {\boldsymbol{1}}-{\boldsymbol{{z}}} = {\boldsymbol{{J}}}({\boldsymbol{{c}}})({\boldsymbol{1}}-{\boldsymbol{{z}}}), $$
(51)

where zi<ci<1. From (51), the unity is an eigenvalue of J(c) and 1z is the associated eigenvector. Since J(c) is a positive matrix and 1z is a positive vector component-wise, by the Perron-Frobenius theorem, the unity is the dominant eigenvalue of J(c) (Meyer 2000). By the definition in (30), gi′ is strictly increasing for all i. It follows that gi′(ci)>gi′(zi) and from (35) we have

$$\begin{array}{@{}rcl@{}} {\boldsymbol{{J}}}({\boldsymbol{{c}}}) &=& \phi(bq{\boldsymbol{{H}}}+(1-q){\boldsymbol{1}}_{b\times b}){\boldsymbol{{D}}}\{g_{1}'(c_{1}), g_{2}'(c_{2}), \ldots, g_{b}'(c_{b})\} \\ &> & \phi(bq{\boldsymbol{{H}}}+(1-q){\boldsymbol{1}}_{b\times b}){\boldsymbol{{D}}}\{g_{1}'(z_{1}), g_{2}'(z_{2}), \ldots, g_{b}'(z_{b})\} \\ &=& {\boldsymbol{{J}}}({\boldsymbol{{z}}}) \end{array} $$
(52)

component-wise. Recall that we assume q<1. With ϕ>0, it is clear that both J(c) and J(z) are irreducible matrices. From Theorem 9 of Rheinboldt and Vandergraft (1973) (see also Guiver (2018)), (52) implies that the spectral radius of J(z) is strictly less than that of J(c). This implies that z is an attracting fixed point. □

References

  • Barabási, A-L, Albert R (1999) Emergence of scaling in random networks. Science 286:509–512.


  • Bassler, KE, Genio CID, Erdős PL, Miklós I, Toroczkai Z (2015) Exact sampling of graphs with prescribed degree correlations. New J Phys 17:083052.


  • Bender, EA, Canfield ER (1978) The asymptotic number of labelled graphs with given degree sequences. J Comb Theory Ser A 24:296–307.


  • Boguñá, M, Pastor-Satorras R (2002) Epidemic spreading in correlated complex networks. Phys Rev E 66:047104.


  • Boguñá, M, Pastor-Satorras R, Vespignani A (2003) Absence of epidemic threshold in scale-free networks with degree correlations. Phys Rev Lett 90:028701.


  • Bollobás, B (1980) A probabilistic proof of an asymptotic formula for the number of labelled regular graphs. Eur J Comb 1:311–316.


  • Braha, D (2016) The complexity of design networks: Structure and dynamics. In: Cash P, Mario TS, Štorga (eds)Experimental Design Research, 129–151. https://doi.org/10.1007/978-3-319-33781-4_8.


  • Braha, D, Bar-Yam Y (2007) The statistical mechanics of complex product development: Empirical and analytical results. Manag Sci 53(7):1127–1145.


  • Bratley, P, Fox BL, Schrage LE (1987) A Guide to Simulation, 2nd edn. Springer, New York.


  • Callaway, DS, Hopcroft JE, Kleinberg JM, Newman MEJ, Strogatz SH (2001) Are randomly grown graphs really random? Phys Rev E 64:041902.


  • Cardy, JL, Grassberger P (1985) Epidemic models and percolation. J Phys A Math Gen 18(6):L267–L271. https://doi.org/10.1088/0305-4470/18/6/001.


  • Catanzaro, M, Caldarelli G, Pietronero L (2004) Assortative model for social networks. Phys Rev E 70:037101.


  • Cohen, R, Erez K, ben-Avraham D, Havlin S (2000) Resilience of the internet to random breakdowns. Phys Rev Lett 85:4626–4628.


  • Eguíluz, VM, Klemm K (2002) Epidemic threshold in structured scale-free networks. Phys Rev Lett 89:108701. https://doi.org/10.1103/PhysRevLett.89.108701.


  • Erdős, P, Rényi A (1959) On random graphs. Publ Math 6:290–297.


  • Guiver, C (2018) On the strict monotonicity of spectral radii for classes of bounded positive linear operators. Positivity 22:1173–1190.


  • Johnson, S, Torres JJ, Marro J, Munoz MA (2010) The entropic origin of disassortativity in complex networks. Phys Rev Lett 104:108702.


  • Klein-Hennig, H, Hartmann AK (2012) Bias in generation of random graphs. Phys Rev E 85:026101.


  • Kulpa, W (1997) The Poincaré-Miranda theorem. Am Math Mon 104(6):545–550.


  • Lancaster, P, Tismenetsky M (1985) The Theory of Matrices. Academic Press, New York.


  • Litvak, DN, van der Hofstad R (2012) Degree-degree correlations in random graphs with heavy-tailed degrees. Department of Applied Mathematics, University of Twente, Enschede, the Netherlands. http://doc.utwente.nl/84367/.


  • Marshall, AW, Olkin I, Arnold BC (2011) Inequalities: Theory of Majorization and Its Applications. Springer, New York.


  • Meyer, C (2000) Matrix Analysis and Applied Linear Algebra. SIAM, Philadelphia, USA.


  • Molloy, M, Reed B (1995) A critical point for random graphs with a given degree sequence. Random Struct Alg 6:161–179.


  • Molloy, M, Reed B (1998) The size of the giant component of a random graph with a given degree sequence. Comb Probab Comput 7:295–306.


  • Moore, C, Newman MEJ (2000) Epidemics and percolation in small-world networks. Phys Rev E 61:5678.


  • Moreno, Y, Gómez JB, Pacheco AF (2003) Epidemic incidence in correlated complex networks. Phys Rev E 68:035103.


  • Newman, MEJ (2001) Clustering and preferential attachment in growing networks. Phys Rev E 64:025102.


  • Newman, MEJ (2002) Assortative mixing in networks. Phys Rev Lett 89:208701.


  • Newman, MEJ (2003) Mixing patterns on networks. Phys Rev E 67:026126.


  • Newman, M (2010) Networks: An Introduction. Oxford University Press, New York.


  • Nikoloski, Z, Deo N, Kucera L (2005) Degree-correlation of a scale-free random graph process. In: Stefan F (ed)2005 European Conference on Combinatorics, Graph Theory and Applications (EuroComb ’05), 239–244. http://www.dmtcs.org/proceedings/html/dmAE0148.abs.html.

  • Pomerance, A, Ott E, Girvan M, Losert W (2009) The effect of network topology on the stability of discrete state models of genetic control. Proc Natl Acad Sci 106:8209–8214.


  • Ramezanpour, A, Karimipour V, Mashaghi A (2005) Generating correlated networks from uncorrelated ones. Phys Rev E 67:046107.


  • Rheinboldt, WC, Vandergraft JS (1973) A simple approach to the Perron-Frobenius theory for positive operators on general partially-ordered finite-dimensional linear spaces. Math Comput 27:139–145.


  • Sander, LM, Warren CP, Sokolov IM, Simon C, Koopman J (2002) Percolation on heterogeneous networks as a model for epidemics. Math Biosci 80:293–305.


  • Schwartz, N, Cohen R, ben-Avraham D, Barabasi A-L, Havlin S (2002) Percolation in directed scale-free networks. Phys Rev E 66:015104.


  • Schläpfer, M, Buzna L (2012) Decelerated spreading in degree-correlated networks. Phys Rev E 85:015101.


  • Vázquez, A, Moreno Y (2003) Resilience to damage of graphs with degree correlations. Phys Rev E 67:015101.


  • Watts, DJ, Strogatz SH (1998) Collective dynamics of ’small-world’ networks. Nature 393:440–442.


  • Xulvi-Brunet, R, Sokolov IM (2005) Changing correlations in networks: assortativity and dissortativity. Acta Phys Pol B 36(5):1431–1455.


  • Zhou, J, Xu X, Zhang J, Sun J, Small M, Lu J-A (2008) Generating an assortative network with a given degree distribution. Int J Bifurcation Chaos 18(11):3495–3502.



Author information


Contributions

DL, CC analyzed the degree correlation and the percolation of the generalized configuration model. MZ performed numerical study on the percolation analysis. HL developed a computer program to simulate the generalized configuration model. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Duan-Shin Lee.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lee, DS., Chang, CS., Zhu, M. et al. A generalized configuration model with degree correlations and its percolation analysis. Appl Netw Sci 4, 124 (2019). https://doi.org/10.1007/s41109-019-0240-2

