
Network reconstruction via density sampling

Abstract

Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large, or small, link densities. We then describe in detail our core technique for reconstructing both the topology and the link weights of the unknown network. When tested on real economic and financial data sets, our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets, thus representing a reliable practical tool whenever the available topological information is restricted to small portions of nodes.

Introduction

Reconstructing a weighted, directed network means providing an algorithm to estimate the presence and the weight of all links in the network, making optimal use of the available information (Wells 2004; Upper 2011; Mastromatteo et al. 2012; Baral and Fique 2012; Drehmann and Tarashev 2013; Hałaj and Kok 2013; Anand et al. 2014; Montagna and Lux 2014; Peltonen et al. 2015; Cimini et al. 2015b). Since several networks are in general compatible with the known information, the output of such a procedure cannot identify a unique network but rather an ensemble of possible ones. This leads to a (large) set of candidate networks to be sampled with a certain probability, where the latter has to be specified in such a way that the resulting ensemble average is as close as possible to the empirical, unknown network. Maximum-entropy is a powerful method to construct probability distributions that realise a certain set of constraints on average. Treating the available pieces of information as empirical constraints in the maximum-entropy procedure ensures that the statistical inference carried out via the resulting distribution is maximally unbiased.

In many situations, e.g. for economic, interbank or other financial networks, the strength sequence (i.e. the list of strengths of all nodes) is known while there is little or no information available about the topology (i.e. the binary structure) of the network. Exploiting the strength sequence as the only constraint of the maximum-entropy procedure leads to an unrealistic ensemble where the likely networks are (almost) completely connected (Mastrandrea et al. 2014). This occurs because, when replicating the empirical strengths in the absence of topological information, the method tends to distribute non-zero link weights as evenly as possible (i.e. between all pairs of nodes). When such unrealistically dense networks are used as proxies to measure, e.g., the level of systemic risk in a financial network, the resulting estimates are completely unreliable. By contrast, it has been shown that, if the degree sequence is known in addition to the strength sequence, the network reconstruction procedure improves tremendously and achieves a remarkable accuracy, as a result of a much more faithful replication of the topology (Mastrandrea et al. 2014; Cimini et al. 2015a). Notice that, if the link weights are specified by the matrix W, whose entry \(w_{ij}\geq 0\) represents the weight of the directed link from node i to node j, the topology is specified by the binary adjacency matrix A, whose entry \(a_{ij}=1\) if \(w_{ij}\) is strictly positive and zero otherwise.

Although complete information on the degree sequence is rarely available, this kind of information can be retrieved from the strength sequence, provided that the latter is complemented with some kind of topological information: in (Musmeci et al. 2013) this information consists of the degree sequence of only a subset I of nodes, \(\{k_{i}\}_{i\in I}\), while in (Cimini et al. 2015b) the information used is the total number of links, L, of the network.

In this paper we face the problem of reconstructing weighted, directed networks for which the only available information is represented by the set of out-strengths \(s^{out}_{i}=\sum _{j(\neq i)}w_{ij}\) and in-strengths \(s^{in}_{i}=\sum _{j(\neq i)}w_{ji}\) (i.e. the row and column totals of the weight matrix), as well as the link density of a subset I of nodes, i.e. \(c_{I}=\frac {L_{I}}{n_{I}(n_{I}-1)}\), with \(L_{I}=\sum _{i\in I}\sum _{j(\neq i)\in I}a_{ij}\) being the observed number of links internal to the subset I. By doing so, we do not require information which is either too detailed (such as the degree sequence of even a small subset of nodes) or simply inaccessible (such as the total number of links). However, in order to accurately reconstruct a given network, the information encoded in the link density of the chosen subset must be representative of the global one: for this reason, we also propose a recipe for properly sampling the node set of our network. As we will show, the random-nodes sampling scheme provides the best way to draw representative subsets out of the whole node set.

Concerning the reconstruction of the weighted structure, we will employ the degree-corrected gravity model (Cimini et al. 2015b) with a correction term ensuring that the strengths are reproduced even in the absence of self-loops, i.e. of diagonal terms indicating self-interactions. As we will show, such a correction becomes increasingly important as the strength of the considered node increases, whence the need to account for it properly.

The rest of the paper is organized as follows. In “Methods” section we illustrate the two steps characterizing our reconstruction method and provide measures to test the effectiveness of the algorithm; in “Results” section we apply our method to two real networks, an economic one and a financial one, and in “Conclusions” section we discuss the results.

Methods

Inferring the topological structure

In order to reconstruct the topological structure of a network W, whenever the node strengths \(\{s^{out}_{i}\}_{i=1}^{N}\) and \(\{s^{in}_{i}\}_{i=1}^{N}\) and the total number of links L are known, one can follow the algorithm proposed in (Cimini et al. 2015b), which prescribes solving the equation

$$ L=\langle L\rangle $$
(1)

with \(L=\sum _{i}\sum _{j(\neq i)}a_{ij}\), \(\langle L\rangle =\sum _{i}\sum _{j(\neq i)}p_{ij}\) and \(p_{ij}=(zs^{out}_{i}s^{in}_{j})/(1+zs^{out}_{i}s^{in}_{j})\), in order to estimate the unknown parameter z and quantify the probability \(p_{ij}\) that a directed link from i to j exists. However, a global (yet very simple) piece of information such as L may not always be available. In these cases, an algorithm relying on local information has to be employed. In this paper we propose an algorithm to infer the unknown parameter z whenever only information about a subset I of nodes is accessible. Notice that a possible solution to this problem has already been provided in (Musmeci et al. 2013), where the supposedly known piece of information is represented by the degree sequence of the nodes in I, i.e. \(\{k_{i}\}_{i\in I}\), a hypothesis leading to the equation

$$ \sum_{i\in I}\left(k^{out}_{i}+k^{in}_{i}\right)=\sum_{i\in I}\left(\langle k^{out}_{i}\rangle+\langle k^{in}_{i}\rangle\right) $$
(2)

(with \(\langle k^{out}_{i}\rangle =\sum _{j(\neq i)\in V}p_{ij}\) and \(\langle k^{in}_{i}\rangle =\sum _{j(\neq i)\in V}p_{ji}\), V indicating the whole node set). However, the knowledge of the number of neighbors of even a small subset of nodes may be unavailable as well. For this reason, here we make use of a simpler, more easily accessible piece of information and suppose that only the link density within the subset I is known. Our recipe thus reads

$$ c_{I}=\langle c_{I}\rangle $$
(3)

where \(c_{I}=L_{I}/[n_{I}(n_{I}-1)]\), \(n_{I}=|I|\) is the number of nodes constituting the subset I, \(L_{I}=\sum _{i\in I}\sum _{j(\neq i)\in I}a_{ij}\) is the observed number of links within it and \(\langle L_{I}\rangle =\sum _{i\in I}\sum _{j(\neq i)\in I}p_{ij}\) is the expected value of \(L_{I}\).
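In practice, since the expected density \(\langle c_{I}\rangle\) is monotonically increasing in z, Eq. (3) can be solved with any standard one-dimensional root finder. The following Python code is a minimal sketch of this calibration step, assuming the strength sequences are stored as NumPy arrays; function and variable names are illustrative and not part of the original specification.

```python
import numpy as np
from scipy.optimize import brentq

def link_probabilities(z, s_out, s_in):
    """p_ij = z*s_out_i*s_in_j / (1 + z*s_out_i*s_in_j), with zero diagonal."""
    x = z * np.outer(s_out, s_in)
    p = 1.0 - 1.0 / (1.0 + x)          # numerically safe form of x / (1 + x)
    np.fill_diagonal(p, 0.0)
    return p

def solve_z_from_subset_density(c_I, subset, s_out, s_in):
    """Calibrate z so that the expected link density of `subset` equals c_I (Eq. 3)."""
    idx = np.asarray(subset)
    n_I = len(idx)
    assert 0.0 < c_I < 1.0

    def density_gap(z):
        p = link_probabilities(z, s_out, s_in)
        return p[np.ix_(idx, idx)].sum() / (n_I * (n_I - 1)) - c_I

    # the expected density grows monotonically with z: bracket the root adaptively
    lo, hi = 1e-20, 1.0
    while density_gap(lo) > 0:
        lo /= 10.0
    while density_gap(hi) < 0:
        hi *= 10.0
    return brentq(density_gap, lo, hi)
```

The same routine can also be used to solve the generalized condition of Eqs. (4)-(5) below, by simply multiplying the expected density by the proportionality factor f.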

Remarkably, Eq. (3) can be easily extended to infer the structure of a different subset (say \(I'\)), provided that the link density of the latter can be guessed from the known value \(c_{I}\). As an example, let us assume the existence of a linear proportionality between the two values \(c_{I'}\) and \(c_{I}\): in this case, the equation to be solved would be

$$ c_{I}=f\langle c_{I'}\rangle. $$
(4)

More explicitly, such a condition translates into the equation

$$ c_{I}=\frac{f}{n_{I'}(n_{I'}-1)}\sum_{i\in I'}\sum_{j(\neq i)\in I'}\frac{z_{I'}s^{out}_{i}s^{in}_{j}}{1+z_{I'}s^{out}_{i}s^{in}_{j}} $$
(5)

which shows that the observed quantity tuning the parameter \(z_{I'}\) is \(c_{I}\cdot n_{I'}(n_{I'}-1)\), i.e. the link density of the known subset, corrected by a volume term.

The value f=1 corresponds to the assumption that the network is homogeneous. This is equivalent to requiring that any two different subsets have exactly the same link density and that, in turn, any subset provides a representative value of the global network density. As we will show in what follows, a random sampling of the set of nodes indeed ensures that this assumption is verified with high accuracy, for the networks considered here.

Inferring the weighted structure

Besides reconstructing a network's topological features, the approach proposed in (Cimini et al. 2015b) also satisfactorily reproduces its weighted structure. This approach is based on the degree-corrected gravity model prescription, which reads

$$ w_{ij} = \left\{ \begin{array}{cl} 0 & \mathrm{with~ probability }~1-p_{ij},\\ \frac{s_{i}^{out}s_{j}^{in}}{Wp_{ij}} & \mathrm{with~ probability }~p_{ij} \end{array} \right. $$
(6)

leading to the expectations \(\langle w_{ij}\rangle =s^{out}_{i}s^{in}_{j}/W\) and ensuring that \(s^{out}_{i}=\langle s^{out}_{i}\rangle =\sum _{j}\langle w_{ij}\rangle,\:\forall \:i\) and \(s^{in}_{i}=\langle s^{in}_{i}\rangle =\sum _{j}\langle w_{ji}\rangle,\:\forall \:i\) (i.e. that the in-strength and out-strength sequences are, on average, reproduced), as long as all entries are summed over.
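For illustration, a single network can be drawn from the ensemble defined by Eq. (6) as follows. This is a minimal sketch: the matrix p of link probabilities is assumed to come from the calibration step above, and the function name is illustrative.

```python
import numpy as np

def sample_degree_corrected_gravity(p, s_out, s_in, rng=None):
    """Draw one weighted network according to Eq. (6):
    a_ij ~ Bernoulli(p_ij) and, if the link is present, w_ij = s_out_i*s_in_j / (W*p_ij)."""
    rng = np.random.default_rng() if rng is None else rng
    W = s_out.sum()                     # total network weight (equal to s_in.sum())
    a = rng.random(p.shape) < p         # Bernoulli draw of the adjacency matrix
    w = np.zeros_like(p)
    w[a] = np.outer(s_out, s_in)[a] / (W * p[a])
    return a.astype(int), w
```

Averaging w over many such draws reproduces the expectation \(\langle w_{ij}\rangle =s^{out}_{i}s^{in}_{j}/W\) quoted above.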

However, in many real-world networks self-loops are either absent or explicitly excluded: this implies that either the diagonal terms of the adjacency matrix are equal to zero or that our sums should run over \(j\neq i\). As a consequence, the expectations coming from the degree-corrected gravity model need an extra term to restore the correct value. More explicitly,

$$ \langle s_{i}^{out}\rangle=\sum_{j(\neq i)}\langle w_{ij}\rangle=\frac{s_{i}^{out}(W-s_{i}^{in})}{W}=s_{i}^{out}-\frac{s_{i}^{out}s_{i}^{in}}{W}, $$
(7)
$$ \langle s_{i}^{in}\rangle=\sum_{j(\neq i)}\langle w_{ji}\rangle=\frac{s_{i}^{in}(W-s_{i}^{out})}{W}=s_{i}^{in}-\frac{s_{i}^{out}s_{i}^{in}}{W} $$
(8)

and the missing term to be added to our expectations is precisely the diagonal term, i.e. \(\langle w_{ii}\rangle =s^{out}_{i}s^{in}_{i}/W\).

Here we provide a solution to the problem above, by redistributing the diagonal term \(\langle w_{ii}\rangle\) across the N−1 entries of the ith row and the N−1 entries of the ith column. In order to implement this, a procedure inspired by the iterative proportional fitting (IPF) algorithm (Bishop et al. 2007) can be devised. More specifically, redistributing the diagonal terms across the corresponding rows and columns amounts to spreading the quantities \(s^{out}_{i}s^{in}_{i}/W\) over the off-diagonal entries, starting from a uniform configuration in which all such entries are equal to 1. Notice that we need to explicitly distinguish the strengths along rows and columns, since the generic weight \(w_{ij}\) needs a correction affecting both i and j.

In order to achieve the aforementioned redistribution, one can compute the iterations of the IPF algorithm

$$ \left\{ \begin{array}{cl} w_{ij}^{(n)}&=\frac{s^{out}_{i}s^{in}_{i}}{W}\left(\frac{w_{ij}^{(n-1)}}{\sum_{k(\neq i)}w_{ik}^{(n-1)}}\right)\\ w_{ij}^{(n+1)}&=\frac{s^{out}_{j}s^{in}_{j}}{W}\left(\frac{w_{ij}^{(n)}}{\sum_{k(\neq j)}w_{kj}^{(n)}}\right) \end{array} \right. $$
(10)

upon setting the matrix defined by \(w_{ij}^{(0)}=1,\:\forall \:i\neq j\) as the initial configuration. As a consequence, we need to correct our probabilistic recipe as

$$ w_{ij} = \left\{ \begin{array}{cl} 0 & \mathrm{with~ probability}~1-p_{ij},\\ \left(\frac{s_{i}^{out}s_{j}^{in}}{W}+w_{ij}^{(\infty)}\right)\frac{1}{p_{ij}} & \mathrm{with~ probability}~p_{ij}. \end{array} \right. $$
(11)

For all practical purposes, a small number of iterations is often enough to achieve a satisfactory degree of accuracy. Here we explicitly report the analytical functional form of the first three IPF algorithm iterations only:

$$\begin{array}{@{}rcl@{}} w_{ij}^{(1)}&=&\frac{s^{out}_{i}s^{in}_{i}}{W}\left[\frac{1}{N-1}\right];\\ w_{ij}^{(2)}&=&\frac{s^{out}_{i}s^{in}_{i}}{W}\left[\frac{s^{out}_{j}s^{in}_{j}}{\sum_{l(\neq j)}s^{out}_{l}s^{in}_{l}}\right];\\ w_{ij}^{(3)}&=&\frac{s^{out}_{i}s^{in}_{i}}{W}\left[\frac{s^{out}_{j}s^{in}_{j}}{\sum_{l(\neq j)}s^{out}_{l}s^{in}_{l}}\right]\left[\frac{1}{\sum_{k(\neq i)}\frac{s^{out}_{k}s^{in}_{k}}{\sum_{m(\neq k)}s^{out}_{m}s^{in}_{m}}}\right]. \end{array} $$
(12)
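The iterations of Eq. (10) can be computed compactly as alternating row and column rescalings of the correction matrix; the sketch below (starting from the uniform off-diagonal configuration \(w^{(0)}_{ij}=1\); the function name is illustrative) reproduces the expressions of Eq. (12) for one, two and three iterations.

```python
import numpy as np

def ipf_correction(s_out, s_in, n_iter=3):
    """Off-diagonal correction term of Eq. (10): alternately rescale rows and columns
    so that row i (resp. column j) sums to s_out_i*s_in_i/W (resp. s_out_j*s_in_j/W)."""
    W = s_out.sum()
    target = s_out * s_in / W           # diagonal term <w_ii> to be redistributed
    w = np.ones((len(s_out), len(s_out)))
    np.fill_diagonal(w, 0.0)            # w^(0)_ij = 1 for i != j
    for n in range(n_iter):
        if n % 2 == 0:                  # row step of Eq. (10)
            w *= (target / w.sum(axis=1))[:, None]
        else:                           # column step of Eq. (10)
            w *= (target / w.sum(axis=0))[None, :]
    return w
```

The corrected conditional weights of Eq. (11) are then obtained as \(\left(\frac{s_{i}^{out}s_{j}^{in}}{W}+w_{ij}^{(\infty)}\right)/p_{ij}\).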

A pseudo-code summarizing the two main steps of our algorithm (i.e. Eqs. (3) and (11)) is provided in Appendix 1.

Testing our reconstruction algorithm

An algorithm aiming at reconstructing the topological structure of a network is an example of a binary classifier, which tries to infer whether each link is present or not. In order to test the performance of our reconstruction method we thus consider four indicators: the number of true positives, true negatives, false positives and false negatives. In network terms, the expected values of these indicators read \(\langle TP\rangle =\sum _{i}\sum _{j(\neq i)}a_{ij}p_{ij}\), \(\langle TN\rangle =\sum _{i}\sum _{j(\neq i)}(1-a_{ij})(1-p_{ij})\), \(\langle FP\rangle =\sum _{i}\sum _{j(\neq i)}(1-a_{ij})p_{ij}\) and \(\langle FN\rangle =\sum _{i}\sum _{j(\neq i)}a_{ij}(1-p_{ij})\). However, the information provided by these indicators is often condensed into four alternative indices. The first one is the sensitivity (or true positive rate), \(\langle TPR\rangle =\frac {\langle TP\rangle }{L}\), which quantifies the percentage of 1s that are correctly recovered by our method. The second is the specificity (or true negative rate), \(\langle SPC\rangle =\frac {\langle TN\rangle }{N(N-1)-L}\), which quantifies the percentage of 0s that are correctly recovered by our method. The third is the precision (or positive predictive value), \(\langle PPV\rangle =\frac {\langle TP\rangle }{\langle L\rangle }\), which measures the performance of our method in correctly placing the 1s with respect to the total number of predicted 1s. The fourth is the accuracy, \(\langle ACC\rangle =\frac {\langle TP\rangle +\langle TN\rangle }{N(N-1)}\), which quantifies the overall performance of our method in correctly placing both the 1s and the 0s.
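These expected indicators can be evaluated directly from the matrix of link probabilities, without sampling the ensemble; a possible implementation (a sketch, with names of our own choosing) is:

```python
import numpy as np

def expected_scores(a, p):
    """Expected confusion-matrix counts and derived indices, given the observed
    adjacency matrix a (0/1, zero diagonal) and the link probabilities p."""
    N = a.shape[0]
    off = ~np.eye(N, dtype=bool)        # off-diagonal entries only
    a_v, p_v = a[off].astype(float), p[off]
    L = a_v.sum()
    TP = (a_v * p_v).sum()
    TN = ((1.0 - a_v) * (1.0 - p_v)).sum()
    return {
        "TPR": TP / L,                           # sensitivity
        "SPC": TN / (N * (N - 1) - L),           # specificity
        "PPV": TP / p_v.sum(),                   # precision, with <L> = sum of p_ij
        "ACC": (TP + TN) / (N * (N - 1)),        # accuracy
    }
```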

To test the effectiveness of the weighted reconstruction, instead, we use the cosine similarity, which quantifies the overlap between the observed weights \(\{w_{ij}\}_{i,j=1}^{N}\) and the conditional expected weights under our model, \(\{\langle w_{ij}|a_{ij}=1\rangle \}_{i,j=1}^{N}\), by treating the corresponding matrices as vectors of real numbers. In formulas,

$$ \theta=\frac{\mathbf{W}\cdot \langle\mathbf{W}\rangle}{||\mathbf{W}||\:||\langle\mathbf{W}\rangle||} $$
(13)

with θ=−1 indicating maximum dissimilarity, θ=0 indicating absence of correlations and θ=1 indicating perfect overlap.
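In matrix terms, θ is simply the cosine of the angle between the two weight matrices flattened into vectors; a one-function sketch:

```python
import numpy as np

def cosine_similarity(w_obs, w_rec):
    """Cosine similarity of Eq. (13) between observed and reconstructed weight matrices."""
    u, v = w_obs.ravel(), w_rec.ravel()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```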

Results

World Trade Web

The first network we have analyzed is the World Trade Web (WTW), i.e. the network whose nodes are the world countries and whose links represent the trade volumes between them: in other words, \(w_{ij}\) quantifies the volume of exports from i to j. We refer the reader to (Gleditsch 2002) for more details on the dataset. For the sake of illustration, we show detailed results for the snapshot of the WTW in the year 2000. We have however analyzed other temporal snapshots as well and found comparable results (see Appendix 2).

Table 1 sums up the results of our analysis when the node subset I is chosen at random. We see that the performance of our algorithm is not affected by the cardinality of I upon which the estimation of z is carried out, providing remarkably good results for all the chosen values. In particular, our method is overall very accurate, being able to correctly recover 80% of the 1s and 73% of the 0s, a result to be compared with the performance of a perfect classifier, for which \(\langle TPR\rangle =\langle SPC\rangle =1\), and with that of a random classifier, for which \(\langle TPR\rangle =1-\langle SPC\rangle =c\) (c being the link density of the whole network). The high accuracy of our reconstruction method is also witnessed by its low rate of false positives, due to the accurate estimation of the actual link density. As discussed in (Squartini et al. 2016), overestimating the link density would have increased the expected TPR (a method predicting a complete network is characterized by \(\langle TPR\rangle =1\)), at the price of increasing the rate of false positives as well, thus decreasing the predictive power of the method itself.

Table 1 Statistical indicators used to evaluate the performance of our sampling-based reconstruction method, for different cardinalities n of the known subset I. Results are shown together with the 95% confidence intervals (not shown whenever they affect significant digits beyond the third one)

Our method performs well also in reproducing the weighted structure of the WTW: upon adding the correction term up to the third iteration of the IPF algorithm, the largest expected in-strength (reading \(\langle s^{in}_{i,\:corr}\rangle =\sum _{j(\neq i)}\left (\frac {s^{out}_{j}s^{in}_{i}}{W}+w_{ji}^{(3)}\right)\)) accounts for 95% of the observed value. On the other hand, the non-corrected value \(\langle s^{in}_{i}\rangle =\sum _{j(\neq i)}\left (\frac {s^{out}_{j}s^{in}_{i}}{W}\right)\) accounts for only 82%. Better results are obtained for the out-strength sequence: the corrected value for the node characterized by the maximum out-strength amounts to 99% of the corresponding observed value (the non-corrected value accounts for 88%).

Overall, we obtain a value \(\theta_{WTW}\simeq 0.712\) for all the considered cardinalities \(n_{I}\), indicating a satisfactorily high level of similarity between our weight predictions and the observed values.

e-MID interbank network

The second network we have tested our method upon is the electronic Market for Interbank Deposits (e-MID), i.e. the network whose nodes are banks and whose generic link from i to j represents the loan granted from i to j. We refer the reader to (Iori et al. 2006) for more details on the dataset.

Table 1 summarizes the results of our analysis on e-MID in the year 1999 only (again, similar results hold for the other years in our data set; see Appendix 2). As for the WTW, the performance of our algorithm is not affected by \(n_{I}\), providing again very good results for the whole range of subset cardinalities. In particular, our method is again very accurate, being able to correctly recover 64% of the 1s and 86% of the 0s. Even if the predictive power of our method is lower than in the WTW case, the accuracy values are comparable, amounting to 80%.

Our method also performs very well in reproducing the e-MID weighted structure: the corrected value \(\langle s^{out}_{i,\:corr}\rangle =\sum _{j(\neq i)}\left (\frac {s^{out}_{i}s^{in}_{j}}{W}+w_{ij}^{(3)}\right)\), computed for the node with the maximum out-strength, accounts for 99% of the observed value, whereas the non-corrected value \(\langle s^{out}_{i}\rangle =\sum _{j(\neq i)}\left (\frac {s^{out}_{i}s^{in}_{j}}{W}\right)\) accounts for only 88%. A comparable result is obtained for the in-strength sequence: the corrected value for the node characterized by the maximum in-strength still amounts to 99% of the corresponding observed value (the non-corrected value accounts for 96%).

The value \(\theta_{e\text{-MID}}\simeq 0.82\) indicates that, on average, a very high level of similarity between observed and predicted weights is again obtained, confirming the degree-corrected gravity model as a good predictor of the link weights.

Random-nodes sampling scheme

The sampling-based reconstruction algorithm we have proposed in the present paper rests upon the homogeneity assumption, according to which any subset of nodes picked at random provides a representative value of the density of the whole network. Table 2 collects the estimates of the link density, averaged over all sampled subsets of a given cardinality: remarkably, the obtained values are accurate even for low cardinalities. In order to assess the magnitude of fluctuations, we have also explicitly computed the empirical probability distributions of the link density estimates obtained by randomly sampling our node subsets. These distributions are shown in Fig. 1 (right panels). Naturally, the smaller the cardinality of the considered node subsets, the more widely spaced the possible values of the observed link density and the less smooth the corresponding probability distribution. These findings suggest that our homogeneity assumption is indeed verified, provided that nodes are sampled according to the random selection scheme (Genois et al. 2015).
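The empirical distributions shown in Fig. 1 can be obtained by repeatedly drawing random subsets of a given cardinality and measuring their internal link density; a minimal sketch follows (assuming, for testing purposes, that the full adjacency matrix is available; the function name is illustrative).

```python
import numpy as np

def sampled_densities(a, n, n_samples=1000, rng=None):
    """Link densities c_I of `n_samples` random subsets of n nodes,
    drawn without replacement from the (0/1) adjacency matrix a."""
    rng = np.random.default_rng() if rng is None else rng
    N = a.shape[0]
    c = np.empty(n_samples)
    for k in range(n_samples):
        idx = rng.choice(N, size=n, replace=False)
        sub = a[np.ix_(idx, idx)]
        c[k] = (sub.sum() - np.trace(sub)) / (n * (n - 1))
    return c
```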

Fig. 1

Left panels: scatter plots of the link density c I versus the internal total strength \(s^{tot}_{I}\) of the subset I. Nodes characterized by large values of the total strength tend to form densely-connected groups, while nodes characterized by small values of the total strength tend, on the contrary, to form loosely-connected groups. Right panels: empirical probability distributions of the link density c I , when nodes belonging to I are chosen randomly. Each distribution is peaked around the density value of the whole network. Top panels refer to the WTW, bottom panels to e-MID

Table 2 Link density estimation for different cardinalities n of the randomly sampled subset I. Results are based on 1000 samples and are shown together with the 95% confidence intervals

As a comparison, we have also sampled nodes sequentially, i.e. by first ordering nodes according to their total strength \(s^{tot}_{i}=s^{out}_{i}+s^{in}_{i}\) and then considering groups of n consecutive nodes (again, for each value of n). For each subset of nodes we have calculated the corresponding internal link density and plotted it versus the total internal strength of its nodes, i.e. \(s^{tot}_{I}=\sum _{i\in I}\left (s^{out}_{i}+s^{in}_{i}\right)\). As shown in Fig. 1 (left panels), such a procedure provides insights into the structural organization of both the WTW and e-MID: nodes characterized by large values of the total strength tend to form densely-connected groups, whereas nodes characterized by small values of the total strength tend to form loosely-connected groups. This evidence confirms the presence of a core-periphery structure, with nodes having a smaller total strength establishing connections with nodes having a large total strength which, in turn, tend to connect preferentially with each other (a sort of “rich club”) (Fagiolo et al. 2010; De Masi et al. 2006). Our analysis suggests that a sampling-based reconstruction procedure must rest upon a “balanced” sampling of the nodes, biased neither towards the “core” portion of nodes (which would lead to severely overestimating the overall network density) nor towards the “periphery” portion of nodes (which would lead to severely underestimating it). Interestingly, a recent paper comparing several network sampling techniques found that the least biased sampling scheme for estimating a network's density is precisely the random-nodes one (Blagus et al. 2016).

Conclusions

The present contribution proposes a recipe to reconstruct a network from a very limited amount of information. In particular, we address the problem of inferring the binary and weighted structure of a given network from the knowledge of the node strengths and of the link density of only a subset of nodes. As we have shown, the best sampling scheme is the random-nodes selection scheme, which ensures that an accurate estimation of the whole network density can indeed be achieved. On the contrary, selecting nodes on the basis of more informative structural properties (such as the degree, or the strength) could bias the estimation of the connectance towards unrealistically large, or small, values. The role played by the available piece of topological information is fundamental to achieve an accurate reconstruction not only of the purely binary structure but also of the weighted one, as is evident upon inspecting Table 1.

The aforementioned results have been obtained by estimating the link density of the whole network upon considering only subsets of nodes: in other words, we have verified that different random subsets (even with different cardinalities) are characterized by very similar densities, in turn implying that the whole network density can be estimated (with a high degree of accuracy) by considering a subset randomly drawn from the whole set of nodes. However, the proposed algorithm can also be used to reconstruct networks with a modular structure, upon tuning the link densities of the different modules via Eq. (4): examples are provided by interbank networks structured into jurisdictions, the latter playing the role of the subsets to be reconstructed.

Appendix 1

A pseudo-code summarizing the main steps of the reconstruction algorithm presented in the paper follows.
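What follows is a minimal Python sketch combining the two steps of Eqs. (3) and (11); it reuses the helper functions sketched in the Methods section (solve_z_from_subset_density, link_probabilities, ipf_correction), whose names are illustrative rather than part of the original listing.

```python
import numpy as np

def reconstruct_network(s_out, s_in, subset, c_I, n_ipf=3, rng=None):
    """Density-sampling reconstruction.
    Step 1: calibrate z on the observed subset density (Eq. 3) and compute p_ij.
    Step 2: draw the topology and assign the IPF-corrected weights (Eq. 11)."""
    rng = np.random.default_rng() if rng is None else rng
    # Step 1: topology
    z = solve_z_from_subset_density(c_I, subset, s_out, s_in)
    p = link_probabilities(z, s_out, s_in)
    # Step 2: weights, with the IPF term redistributing the diagonal contributions
    W = s_out.sum()
    corr = ipf_correction(s_out, s_in, n_iter=n_ipf)
    a = rng.random(p.shape) < p
    w = np.zeros_like(p)
    w[a] = (np.outer(s_out, s_in)[a] / W + corr[a]) / p[a]
    return a.astype(int), w
```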

Appendix 2

Additional years have been analysed for both the WTW and e-MID (see Tables 3 and 4).

Table 3 Statistical indicators used to evaluate the performance of our sampling-based reconstruction method, for different cardinalities n of the known subset I. Results are shown together with the 95% confidence intervals (not shown whenever they affect significant digits beyond the third one)
Table 4 Statistical indicators used to evaluate the performance of our sampling-based reconstruction method, for different cardinalities n of the known subset I. Results are shown together with the 95% confidence intervals (not shown whenever they affect significant digits beyond the third one)


Acknowledgments

This work was supported by the EU projects CoeGSS (grant num. 676547), Multiplex (grant num. 317532), Shakermaker (grant num. 687941), SoBigData (grant num. 654024), GrowthCom (grant num. 611272) and the FET projects SIMPOL (grant num. 610704) and DOLFINS (grant num. 640772). AG acknowledges the CNR Strategic Project CRISISLAB funded by Italian Government. DG acknowledges support from the Econophysics foundation (Stichting Econophysics, Leiden, the Netherlands).

Authors’ contributions

TS, GC, AG and DG participated in the design of the analysis. TS and GC performed the statistical analysis. All authors wrote, read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Tiziano Squartini.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Squartini, T., Cimini, G., Gabrielli, A. et al. Network reconstruction via density sampling. Appl Netw Sci 2, 3 (2017). https://doi.org/10.1007/s41109-017-0021-8
