Fig. 8 | Applied Network Science
From: Selective network discovery via deep reinforcement learning on embedded spaces

Accuracy index versus time step for all tested embeddings and configurations. The results are arranged from top to bottom by embedding type: PPR, MOD, GLEE, Laplacian, PCA, and Node2Vec. The anomaly density value for each plot gives the edge density of the anomalous (target) subnetwork, and P gives the edge density of the background network. The anomalies become sparser from left to right, making the discovery task more complex and, in general, diminishing the performance of each embedding. Conversely, for a fixed anomaly density, decreasing the background density P makes the discovery task easier. We observe that the PPR, MOD, GLEE, and Laplacian embeddings all perform well on the task, with PPR maintaining the best performance. Note that the sharp dips in the plots correspond to regions where the agent must travel from the first anomalous subnetwork to the second and no target nodes remain in the boundary set