Fig. 5 | Applied Network Science

From: Learning compact graph representations via an encoder-decoder network

(a) An encoder-decoder can be used to learn similar representations for sub-structures that share the same function. Here, the sub-structures “C-C-C”, “D-A-D”, and “E-D-E” are structurally dissimilar, yet they serve the same function of connecting similar regions of the graph. If these patterns appear frequently, the encoder-decoder learns to capture this functional similarity by assigning similar representations to all three sub-structures. The learned representations are more compact because the co-occurrence dependencies between sub-structures are taken into account. (b) In contrast, simply counting the occurrences of sub-structures to create a graph representation treats the different sub-structures independently, which can result in representations that are less compact and less useful.
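The intuition in panel (a) can be sketched with a toy encoder-decoder that is trained to predict the contexts a sub-structure co-occurs with: sub-structures that appear in the same contexts end up with similar codes. This is a minimal illustrative sketch, not the paper's method; the vocabulary, co-occurrence pairs, dimensions, and hyperparameters below are all assumptions made up for the example.

```python
import numpy as np

# Hypothetical sub-structure vocabulary (names taken from the caption,
# contexts "X-X", "Y-Y", "Z-Z" are invented for this toy example).
vocab = ["C-C-C", "D-A-D", "E-D-E", "F-F-F", "X-X", "Y-Y", "Z-Z"]
idx = {s: i for i, s in enumerate(vocab)}

# Assumed co-occurrences: the three "connector" sub-structures all appear
# in the same contexts, while "F-F-F" appears in a different one.
pairs = ([(c, ctx) for c in ("C-C-C", "D-A-D", "E-D-E")
          for ctx in ("X-X", "Y-Y")]
         + [("F-F-F", "Z-Z")] * 2)

rng = np.random.default_rng(0)
V, D = len(vocab), 3
W_enc = rng.normal(scale=0.1, size=(V, D))  # encoder: sub-structure -> code
W_dec = rng.normal(scale=0.1, size=(D, V))  # decoder: code -> context logits
lr = 0.3

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Train with cross-entropy on (sub-structure, context) pairs.
for _ in range(500):
    for sub, ctx in pairs:
        i, j = idx[sub], idx[ctx]
        h = W_enc[i].copy()              # encode the sub-structure
        p = softmax(h @ W_dec)           # decode to a context distribution
        grad_logits = p.copy()
        grad_logits[j] -= 1.0            # gradient of cross-entropy loss
        W_enc[i] -= lr * (W_dec @ grad_logits)
        W_dec -= lr * np.outer(h, grad_logits)

def cos(a, b):
    """Cosine similarity between the learned codes of two sub-structures."""
    va, vb = W_enc[idx[a]], W_enc[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Functionally similar sub-structures get similar codes, unlike a raw
# count vector (panel (b)), which treats them as unrelated dimensions.
print(cos("C-C-C", "D-A-D"), cos("C-C-C", "F-F-F"))
```

With the fixed seed above, the codes for the three connector sub-structures cluster together, while "F-F-F" is placed elsewhere; a count-based representation would keep all of them orthogonal by construction.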