
Table 1 Comparison of the performance of our method with direct nonparametric approaches, namely local linear regression and kNN regression, for predicting a response at an unlabeled node

From: Semisupervised regression in latent structure networks on unknown manifolds

 

        d         n       s   true    sub        llr    kNN
 1   4.00    500.00    5.00   0.00   0.52    2266.60   1.39
 2   4.00    750.00    5.00   0.00   0.27    5995.41   1.01
 3   4.00   1000.00    5.00   0.00   0.23     197.57   1.43
 4   4.00   1250.00    5.00   0.00   0.19   10914.72   1.13
 5   4.00   1500.00    5.00   0.00   0.20     484.39   1.35
 6   4.00    500.00    8.00   0.00   0.31     117.90   0.48
 7   4.00    750.00    8.00   0.00   0.20       3.98   0.60
 8   4.00   1000.00    8.00   0.00   0.25       0.92   0.55
 9   4.00   1500.00    8.00   0.00   0.13      14.55   0.64
10   4.00    500.00   15.00   0.00   0.32       0.08   0.34
11   4.00    750.00   15.00   0.00   0.15       0.09   0.26
12   4.00   1000.00   15.00   0.00   0.10       0.05   0.30
13   4.00   1500.00   15.00   0.00   0.08       0.03   0.24

  1. The manifold is \({\mathcal {M}}=\psi ([0,1])\), where \(\psi (t)=\frac{1}{\sqrt{2}}(\cos (t),\sin (t),\cos (t),\sin (t))\), and the mean squared errors of the predicted responses are computed over 100 Monte Carlo samples. The regression model is \(y_i=\alpha +\beta t_i+\epsilon _i\), \(\epsilon _i {\mathop {\sim }\limits ^{iid}} N(0,10^{-4})\), \(i \in [s]\), with \(\alpha =2.0\) and \(\beta =5.0\). Following our notation, d denotes the dimension of the ambient space, n the total number of nodes, and s the number of labeled nodes. The columns titled “true”, “sub”, “llr”, and “kNN” report the sample mean squared errors of the predicted responses obtained, respectively, from linear regression on the true regressors, from linear regression on the raw-stress embeddings (our suggested method), from local linear regression on the adjacency spectral estimates, and from kNN regression (with \(k=3\) in our simulations) on the adjacency spectral estimates. The “true” column is all zeros because the sample mean squared errors from linear regression on the true regressors are on the order of \(10^{-4}\) or \(10^{-5}\), which round to 0.00 at two decimal places
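A minimal sketch of one Monte Carlo replicate along these lines is given below (Python with NumPy and scikit-learn). It is not the implementation behind Table 1: the choice of embedding dimension, the use of scikit-learn's SMACOF-based metric MDS as a stand-in for raw-stress minimization, the omission of the local linear regression baseline, and all variable names are assumptions made for illustration only.

```python
# Sketch of ONE Monte Carlo replicate (Table 1 averages 100); assumptions noted above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.manifold import MDS
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n, s, d = 500, 15, 4                 # total nodes, labeled nodes, ambient dimension
alpha, beta, sigma = 2.0, 5.0, 1e-2  # y = alpha + beta*t + noise, noise std 1e-2 (variance 1e-4)

# Latent positions on the curve psi(t) = (cos t, sin t, cos t, sin t)/sqrt(2), t in [0, 1].
t = rng.uniform(0.0, 1.0, size=n)
X = np.column_stack([np.cos(t), np.sin(t), np.cos(t), np.sin(t)]) / np.sqrt(2)

# Random dot product graph: P_ij = <psi(t_i), psi(t_j)> = cos(t_i - t_j), which lies in [0, 1].
P = X @ X.T
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                          # symmetric, hollow adjacency matrix

# Adjacency spectral embedding: top-d eigenpairs of A, scaled by sqrt of the eigenvalues.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[-d:]
Xhat = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))

# Responses, observed only on the first s (labeled) nodes.
y = alpha + beta * t + rng.normal(scale=sigma, size=n)
lab, unlab = np.arange(s), np.arange(s, n)

# "true": linear regression on the true regressors t_i.
lr_true = LinearRegression().fit(t[lab, None], y[lab])
mse_true = np.mean((lr_true.predict(t[unlab, None]) - y[unlab]) ** 2)

# "sub": 1-d raw-stress-style embedding of the spectral estimates (here via metric MDS),
# followed by simple linear regression on the 1-d embedding.
z = MDS(n_components=1, dissimilarity="euclidean", random_state=0).fit_transform(Xhat)
lr_sub = LinearRegression().fit(z[lab], y[lab])
mse_sub = np.mean((lr_sub.predict(z[unlab]) - y[unlab]) ** 2)

# "kNN": k-nearest-neighbor regression (k = 3) directly on the spectral estimates.
knn = KNeighborsRegressor(n_neighbors=3).fit(Xhat[lab], y[lab])
mse_knn = np.mean((knn.predict(Xhat[unlab]) - y[unlab]) ** 2)

print(f"true {mse_true:.2f}  sub {mse_sub:.2f}  kNN {mse_knn:.2f}")
```

Averaging such replicates over 100 independent draws, and adding the local linear regression baseline on the spectral estimates, would reproduce the quantities tabulated above for a given (d, n, s) configuration.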