
Table 8 Results of direct attacks against each classifier with the PubMed dataset, with attacker budgets of 5, 10, and 20 edge perturbations

From: Complex network effects on the robustness of graph convolutional networks

  

| Defense | Training | Budget 5 Net | Budget 5 FGA | Budget 5 IG | Budget 10 Net | Budget 10 FGA | Budget 10 IG | Budget 20 Net | Budget 20 FGA | Budget 20 IG |
|---|---|---|---|---|---|---|---|---|---|---|
| Jaccard | Rand. | 0.784 | N/A | N/A | 0.952 | N/A | N/A | 0.992 | N/A | N/A |
| Jaccard | SD | 0.208 | N/A | N/A | 0.248 | N/A | N/A | 0.328 | N/A | N/A |
| Jaccard | GC | 0.208 | N/A | N/A | 0.328 | N/A | N/A | 0.456 | N/A | N/A |
| GCN | Rand. | 0.92 | N/A | N/A | 1.0 | N/A | N/A | 1.0 | N/A | N/A |
| GCN | SD | 0.952 | N/A | N/A | 1.0 | N/A | N/A | 1.0 | N/A | N/A |
| GCN | GC | 0.936 | N/A | N/A | 0.984 | N/A | N/A | 1.0 | N/A | N/A |
| Cheb | Rand. | 0.056 | N/A | N/A | 0.088 | N/A | N/A | 0.088 | N/A | N/A |
| Cheb | SD | 0.072 | N/A | N/A | 0.088 | N/A | N/A | 0.104 | N/A | N/A |
| Cheb | GC | 0.088 | N/A | N/A | 0.112 | N/A | N/A | 0.152 | N/A | N/A |
| GAT | Rand. | 0.792 | N/A | N/A | 0.896 | N/A | N/A | 0.992 | N/A | N/A |
| GAT | SD | 0.936 | N/A | N/A | 0.992 | N/A | N/A | 1.0 | N/A | N/A |
| GAT | GC | 0.92 | N/A | N/A | 0.984 | N/A | N/A | 0.992 | N/A | N/A |

  1. Results are shown for Nettack (Net), FGA, and IG-FGSM (IG). For each classifier, we train with random selection (Rand.), StratDegree (SD), and GreedyCover (GC). Each entry is the probability of attack success, so higher is better for the attacker and lower is better for the defender. To obtain the most robust classifier, the defender picks the classifier/training-method combination that minimizes the worst-case attack probability; these entries are listed in bold. Entries representing the most robust case with random training are in italic. Entries listed as N/A correspond to attacks that did not finish in the allotted time (24 h per trial). While StratDegree and GreedyCover work well in conjunction with the Jaccard classifier, a disparity in the classification margin hinders their performance in the best case, which uses a ChebNet classifier.
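
The defender's selection rule described in the note can be made concrete with a short sketch. The Python snippet below (an illustrative sketch, not the authors' code) applies the rule to the Nettack columns of this table: compute each classifier/training combination's worst-case attack success probability across budgets, then pick the combination that minimizes it. The FGA and IG-FGSM entries are N/A here and are therefore left out of the worst case.

```python
# Sketch of the defender's rule: minimize the worst-case attack success probability.
# Values are the Nettack (Net) entries from Table 8 at budgets 5, 10, and 20.

# (defense, training) -> attack success probabilities at budgets 5, 10, 20
results = {
    ("Jaccard", "Rand."): [0.784, 0.952, 0.992],
    ("Jaccard", "SD"):    [0.208, 0.248, 0.328],
    ("Jaccard", "GC"):    [0.208, 0.328, 0.456],
    ("GCN",     "Rand."): [0.920, 1.000, 1.000],
    ("GCN",     "SD"):    [0.952, 1.000, 1.000],
    ("GCN",     "GC"):    [0.936, 0.984, 1.000],
    ("Cheb",    "Rand."): [0.056, 0.088, 0.088],
    ("Cheb",    "SD"):    [0.072, 0.088, 0.104],
    ("Cheb",    "GC"):    [0.088, 0.112, 0.152],
    ("GAT",     "Rand."): [0.792, 0.896, 0.992],
    ("GAT",     "SD"):    [0.936, 0.992, 1.000],
    ("GAT",     "GC"):    [0.920, 0.984, 0.992],
}

# Worst case for each combination is its maximum success probability.
worst_case = {combo: max(rates) for combo, rates in results.items()}

# The most robust choice minimizes that worst case.
best_combo = min(worst_case, key=worst_case.get)
print(best_combo, worst_case[best_combo])  # ('Cheb', 'Rand.') 0.088
```

Under this rule, ChebNet with random training is the most robust combination for PubMed, consistent with the note's observation that the best case uses a ChebNet classifier.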