Fig. 2 | Applied Network Science

From: Learning compact graph representations via an encoder-decoder network

(a) If we simply count or consider the sub-structures independently, ignoring their co-occurrence relationships (Kong et al. 2011; Natarajan and Ranu 2018; Vishwanathan et al. 2010; Wang et al. 2017), the representations will not be very similar. (b) On the other hand, if only node co-occurrence is considered (Grover and Leskovec 2016; Perozzi et al. 2014; Tang et al. 2015), graphs \(\mathcal {G}_{2}\) and \(\mathcal {G}_{4}\) (similarly, \(\mathcal {G}_{1}\) and \(\mathcal {G}_{3}\)) end up with more similar representations, due to similar nodes, even though they do not share sub-structures that co-occur often. In (c), the approach considered in this paper, the representations for graphs \(\mathcal {G}_{1}\) and \(\mathcal {G}_{2}\) (similarly, \(\mathcal {G}_{3}\) and \(\mathcal {G}_{4}\)) end up being more similar because both graphs contain sub-structures that co-occur frequently in the dataset.
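The contrast between (a) and (c) can be sketched with toy feature vectors. The sketch below is purely illustrative and is not the paper's encoder-decoder method: the substructure labels (A–E), the four multisets, and the pairwise co-occurrence features are all hypothetical assumptions chosen to mirror the figure.

```python
from collections import Counter
from itertools import combinations
import math


def cosine(u, v):
    """Cosine similarity between two sparse feature vectors (dicts)."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0


# Hypothetical substructure multisets for four toy graphs,
# loosely mirroring G1..G4 in the figure.
graphs = {
    "G1": ["A", "B"],
    "G2": ["A", "B", "C"],
    "G3": ["D", "E"],
    "G4": ["D", "E", "C"],
}


def bag(subs):
    """(a) Independent counting: a plain bag-of-substructures vector."""
    return Counter(subs)


def bag_with_cooccurrence(subs):
    """(c)-style featurization: also emit a feature for every pair of
    substructures that appear together in the same graph, so graphs
    sharing co-occurring substructures move closer in feature space."""
    feats = Counter(subs)
    for pair in combinations(sorted(set(subs)), 2):
        feats[pair] += 1
    return feats


# Under (a), similarity only reflects shared individual substructures;
# under (c), shared co-occurring pairs (e.g. {A, B}) add extra weight.
sim_a = cosine(bag(graphs["G1"]), bag(graphs["G2"]))
sim_c = cosine(bag_with_cooccurrence(graphs["G1"]),
               bag_with_cooccurrence(graphs["G2"]))
```

In this sketch the co-occurrence features are simple pair counts; the paper instead learns such relationships with an encoder-decoder network, so the code only illustrates why modeling co-occurrence changes which graphs look similar.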
