Semisupervised regression in latent structure networks on unknown manifolds

Random graphs are increasingly becoming objects of interest for modeling networks in a wide range of applications. Latent position random graph models posit that each node is associated with a latent position vector, and that these vectors follow some geometric structure in the latent space. In this paper, we consider random dot product graphs, in which an edge is formed between two nodes with probability given by the inner product of their respective latent positions. We assume that the latent position vectors lie on an unknown one-dimensional curve and are coupled with a response covariate via a regression model. Using the geometry of the underlying latent position vectors, we propose a manifold learning and graph embedding technique to predict the response variable on out-of-sample nodes, and we establish convergence guarantees for these responses. Our theoretical results are supported by simulations and an application to Drosophila brain data.


Introduction
Random graphs have long been an area of interest for scientists from different disciplines, primarily because of their applicability in modeling networks ([1], [2]). Latent position random graphs ([3]) constitute a category of random graphs in which each node is associated with an unobserved vector, known as its latent position. One popular model, the random dot product graph model ([4]), comprises a subcategory of network models in which the probability of edge formation between a pair of nodes is given by the inner product of their respective latent position vectors. This model was further generalized to the generalized random dot product graph model ([5]), which replaces the inner product with an indefinite inner product. A survey of inference problems under the random dot product graph model can be found in [6]. In [7], it is shown that under certain regularity conditions, latent position random graphs can be equivalently viewed as generalized random dot product graphs whose latent positions lie on a low dimensional manifold, which motivates the model we study in this work. Consider observing a random dot product graph whose latent positions lie on an unknown one-dimensional manifold in ambient space $\mathbb{R}^d$, and suppose responses are recorded at some of these nodes. We choose to work in a semisupervised setting because, in realistic scenarios, collecting observations is easier than obtaining labels corresponding to those observations. It is assumed that the responses are linked to the scalar pre-images of the corresponding latent positions via a regression model. In this semisupervised setting, we aim to predict the responses at the out-of-sample nodes.
The semisupervised learning framework has been considered in a number of previous works on network analysis. In [8], a framework for regularization on graphs with labeled and unlabeled nodes was developed to predict the labels of the unlabeled nodes. In [9], a graph-based dimensionality reduction technique was proposed to represent data lying on a low dimensional manifold in a high dimensional ambient space. In the context of latent position networks with underlying low dimensional manifold structure, [10] discusses the problem of carrying out inference on the distribution of the latent positions of a random dot product graph, which are assumed to lie on a known low dimensional manifold in a high dimensional ambient space. Moreover, [11] studies the problem of two-sample hypothesis testing for equality of means in a random dot product graph whose latent positions lie on a one-dimensional manifold in a high dimensional ambient space, where the manifold is unknown and hence must be estimated. To be more precise, [11] proposes a methodology to learn the underlying manifold, and proves that the power of the statistical test based on the resulting embeddings can approach the power of the test based on knowledge of the true manifold.
In this paper, we study the problem of predicting a response covariate in a semisupervised setting, for a random dot product graph whose latent positions lie on an unknown one-dimensional manifold in ambient space $\mathbb{R}^d$. Our main result establishes a convergence guarantee for the predicted responses when the manifold is learned using a particular manifold learning procedure (see Section 2.3). As a corollary to our main result, we derive a convergence guarantee for the power of a test for model validity based on the resulting embeddings. To help develop intuition, we first consider the problem of regression parameter estimation assuming the underlying manifold is known, and we show that a particular estimator is consistent in this setting. We also present an illustrative example of an application of our theoretical results. A connectome dataset consisting of a network of 100 Kenyon cell neurons in larval Drosophila (details in [12]) indicates the presence of an underlying low dimensional manifold structure. Each node (that is, each Kenyon cell) is coupled with a response covariate, and the latent position of each node is estimated by a six-dimensional vector using adjacency spectral embedding (see Section 2.2). A scatterplot is obtained for each pair of dimensions of the estimated latent positions, yielding a 6 × 6 matrix of scatterplots (Figure 5). Each dimension is seen to be approximately related to every other, and hence it is assumed that the latent positions lie on a one-dimensional manifold in six-dimensional ambient space. In order to capture the underlying structure, we construct a localization graph on the estimated latent positions and embed the dissimilarity matrix of shortest path distances into one dimension (see Section 2.3 for a description of the method of embedding). A scatterplot of the first two dimensions of the estimated latent positions is presented in Figure 1, where the size of each point varies with the value of the associated response covariate. A scatterplot of the responses $y_i$ against the one-dimensional embeddings $\hat{z}_i$ is also presented, along with the fitted regression line indicating a significant effect. These results suggest that it is reasonable to posit that the responses are linked to the embeddings via a simple linear regression model.
Figure 1: Illustrative application of response prediction in latent structure networks on unknown manifolds. Our methodology is applied to the connectome of the right hemisphere of the Drosophila larval mushroom body. Left panel: scatterplot of two dimensions of the estimated latent positions for the 100 Kenyon cell neurons, obtained from spectral embedding of the network; the dot size represents the response variable $y_i$ (the distance in microns between the bundle entry point of neuron i and the mushroom body neuropil). Right panel: plot of the responses $y_i$ against the learnt one-dimensional embeddings $\hat{z}_i$ approximating geodesic distances along this curve, for the 100 Kenyon cell neurons, together with the regression line. In the left panel we observe that a one-dimensional curve captures nonlinear structure in the spectral embedding. In the right panel we observe that the response regressed against geodesic distance indicates a significant effect (p < 0.01 for $H_0 : a = 0$ in $y_i = a\hat{z}_i + b + \eta_i$).
Our analysis raises the question of testing the validity of a simple linear regression model linking the nodal responses to the scalar pre-images of the latent positions. Our theory shows that the power of the test for validity of the model based on the raw-stress embeddings approximates the power of the test based on the true regressors.
In Section 2, we discuss key points about random dot product graphs and manifold learning. Section 3 discusses the models and the methods used to carry out inference on the corresponding regression models. Section 4 presents our theoretical results in both the known-manifold and unknown-manifold settings. Section 5 presents our findings from simulations. Section 6 revisits our connectome application. Section 7 discusses the results and poses some questions that require further investigation. The proofs of our theoretical results are given in Section 8.

Background notations, definitions and results
In this section, we introduce and explain the notation used throughout the paper. We also state relevant definitions and results pertaining to random dot product graphs (in Section 2.2) and manifold learning (in Section 2.3).

Notations
We shall denote a vector by a bold lower-case letter, for instance x. Bold upper-case letters such as P will be used to represent matrices. The Frobenius norm and the maximum row-norm of a matrix B will be denoted by $\|B\|_F$ and $\|B\|_{2,\infty}$ respectively. The i-th row of a matrix B will be denoted by $B_{i*}$, and the j-th column by $B_{*j}$. We will denote the n × n identity matrix by $I_n$, and $J_n$ will denote the n × n matrix each of whose entries equals one. Also, $H_n = I_n - \frac{1}{n} J_n$ will denote the n × n centering matrix. Unless otherwise mentioned, $\|x\|$ will represent the Euclidean norm of a vector x. The set of all orthogonal d × d matrices will be denoted by O(d). The set of all positive integers will be denoted by $\mathbb{N}$, and for any $n \in \mathbb{N}$, [n] will denote the set {1, 2, . . . , n}.

Preliminaries on Random Dot Product Graphs
A graph is an ordered pair (V, E), where V is the set of vertices (or nodes) and E ⊂ V × V is the set of edges connecting vertices. The adjacency matrix A of a graph is defined by $A_{ij} = 1$ if $(i, j) \in E$, and $A_{ij} = 0$ otherwise. Here, we deal with hollow and undirected graphs; hence A is symmetric and $A_{ii} = 0$ for all i. Latent position random graphs are those for which each node is associated with a vector called its latent position, denoted by $x_i$, and the probability of formation of an edge between the i-th and j-th nodes is given by $\kappa(x_i, x_j)$, where κ is a suitable kernel.
Random vectors drawn from an arbitrary probability distribution cannot be latent positions of a random dot product graph, as their magnitudes can be unbounded whereas probabilities must lie in the interval [0, 1]. The following definition allows us to work with a restricted class of distributions more amenable to random dot product graphs.
Definition 1 (Inner product distribution): If F is a probability distribution on $\mathbb{R}^d$ such that for any $x, y \in \mathrm{supp}(F)$, $x^T y \in [0, 1]$, then F is called an inner product distribution on $\mathbb{R}^d$.
Next, we define random dot product graphs, the basis of the models considered in this paper.
Definition 2 (Random dot product graph): Suppose G is a hollow, undirected random graph with latent positions $x_1, \ldots, x_n \in \mathbb{R}^d$. Let $X = [x_1 \,|\, x_2 \,|\, \cdots \,|\, x_n]^T$ be its latent position matrix and A be its adjacency matrix. The graph G is called a random dot product graph if, for all $i < j$, $A_{ij} \sim \mathrm{Bernoulli}(x_i^T x_j)$ independently. The distribution of the adjacency matrix conditional upon $x_1, \ldots, x_n$ is
$$P(A \mid X) = \prod_{i < j} (x_i^T x_j)^{A_{ij}} (1 - x_i^T x_j)^{1 - A_{ij}}.$$
If $x_1, \ldots, x_n \sim_{iid} F$ are the latent positions, where F is an inner product distribution, then we write $(A, X) \sim \mathrm{RDPG}(F)$.

The latent positions of a random dot product graph are typically unknown and need to be estimated in practice. The following definition puts forth an estimate of these latent positions via adjacency spectral embedding.

Definition 3 (Adjacency spectral embedding): Suppose $\lambda_i$ is the i-th largest (in magnitude) eigenvalue of A, and let $v_i$ be the corresponding orthonormal eigenvector. The adjacency spectral embedding of A into $\mathbb{R}^d$ is defined as
$$\hat{X} = \left[ \sqrt{|\lambda_1|}\, v_1 \,\middle|\, \cdots \,\middle|\, \sqrt{|\lambda_d|}\, v_d \right].$$
Now, we present two results from the literature which give us the consistency and asymptotic normality of suitably rotated adjacency spectral estimates of the latent positions of a random dot product graph.
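As a computational aside before these results, the embedding of Definition 3 can be sketched in a few lines of Python. This is a minimal sketch: the function name is ours, and we take absolute values of the eigenvalues for numerical safety.

```python
import numpy as np

def adjacency_spectral_embedding(A, d):
    """Adjacency spectral embedding of a symmetric adjacency matrix into R^d."""
    vals, vecs = np.linalg.eigh(A)                 # real eigenpairs (A symmetric)
    order = np.argsort(np.abs(vals))[::-1][:d]     # top d eigenvalues in magnitude
    # Rows of the returned matrix are the estimated latent positions.
    return vecs[:, order] * np.sqrt(np.abs(vals[order]))
```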
Theorem 1 (Theorem 8 from [6]): Suppose $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$ denote the latent positions of a random dot product graph with n nodes, and let $X^{(n)} = [x_1 \,|\, \cdots \,|\, x_n]^T$ be the latent position matrix. Assume that for all n sufficiently large, $\mathrm{rank}(X^{(n)}) = d$, and that there exists $a > 0$ such that $\delta_n > (\log n)^{4+a}$, where $\delta_n$ denotes the maximum expected degree of the graph. Suppose $A^{(n)}$ denotes the corresponding adjacency matrix and let $\hat{X}^{(n)}$ be the adjacency spectral embedding of $A^{(n)}$ in $\mathbb{R}^d$. Then there exist a constant $C > 0$ and a sequence $W^{(n)} \in O(d)$ such that, with high probability,
$$\left\| \hat{X}^{(n)} W^{(n)} - X^{(n)} \right\|_{2,\infty} \le \frac{C \sqrt{d}\, \log n}{\sqrt{\delta_n}}. \tag{1}$$
Observe that under the assumption $\delta_n > (\log n)^{4+a}$, Eq. (1) implies $\max_{i \in [n]} \| \tilde{x}_i - x_i \| \to 0$, where $\tilde{x}_i$ denotes the i-th row of $\hat{X}^{(n)} W^{(n)}$. Henceforth, we will denote this optimally rotated adjacency spectral embedding by $\tilde{X}^{(n)}$, that is, $\tilde{X}^{(n)} = \hat{X}^{(n)} W^{(n)}$. To simplify notation we will often omit the superscript n, and we will use $\tilde{x}_i$ to denote the i-th row of $\tilde{X}$. Observe that in attempting to estimate the true latent positions from the adjacency matrix, we encounter an inherent non-identifiability issue: for any $W \in O(d)$, $E(A \mid X) = XX^T = (XW)(XW)^T$. This is why the adjacency spectral embedding needs to be suitably rotated so that it can approximate the true latent position matrix.
Theorem 2 (Theorem 9 from [6]): Suppose $(A^{(n)}, X^{(n)}) \sim \mathrm{RDPG}(F)$ is a sequence of adjacency matrices and latent position matrices of random dot product graphs whose latent positions are generated from an inner product distribution F on $\mathbb{R}^d$. Let
$$\Sigma(x) = \Delta^{-1}\, E\!\left[ (x^T \xi)(1 - x^T \xi)\, \xi \xi^T \right] \Delta^{-1}, \qquad \text{where } \xi \sim F \text{ and } \Delta = E[\xi \xi^T].$$
Then there exists a sequence $W^{(n)} \in O(d)$ such that for all $u \in \mathbb{R}^d$ and any fixed index i,
$$P\!\left[ \sqrt{n}\left( W^{(n)T} \hat{x}_i - x_i \right) \le u \right] \to \int \Phi_{0,\Sigma(x)}(u)\, dF(x),$$
where $\Phi_{0,\Sigma(x)}(\cdot)$ denotes the distribution function of the multivariate normal $N(0, \Sigma(x))$ distribution.
Under suitable regularity conditions, a combined use of Theorem 2 and the delta method gives us the asymptotic distribution of $\sqrt{n}(\gamma(\tilde{x}_i) - \gamma(x_i))$ for a function $\gamma : \mathbb{R}^d \to \mathbb{R}$, which depends on the true distribution F of the latent positions. Therefore, $\mathrm{var}(\gamma(\tilde{x}_i) - \gamma(x_i))$ can be approximated from the optimally rotated adjacency spectral estimates $\tilde{x}_i$ and their empirical distribution function. In a random dot product graph for which the latent positions lie on a known one-dimensional manifold in ambient space $\mathbb{R}^d$, and the nodal responses are linked to the scalar pre-images of the latent positions via a simple linear regression model, we can use the approximated variance of $\gamma(\tilde{x}_i) - \gamma(x_i)$ (motivated by work in Chapters 2 and 3 of [13]) to improve the performance (in terms of mean squared error) of the naive estimators of the regression parameters, which are obtained by replacing $x_i$ with $\tilde{x}_i$ in the least squares estimates of the regression parameters. We demonstrate this method in detail in Section 7 (see Figure 7).
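Concretely, a sketch under the assumption that γ is continuously differentiable at $x_i$ with nonvanishing gradient, and with Σ as in Theorem 2: the delta method yields
$$\sqrt{n}\left(\gamma(\tilde{x}_i) - \gamma(x_i)\right) \xrightarrow{\ d\ } N\!\left(0,\ \nabla\gamma(x_i)^T\, \Sigma(x_i)\, \nabla\gamma(x_i)\right), \quad \text{so} \quad \mathrm{var}\!\left(\gamma(\tilde{x}_i) - \gamma(x_i)\right) \approx \frac{1}{n}\, \nabla\gamma(\tilde{x}_i)^T\, \hat{\Sigma}(\tilde{x}_i)\, \nabla\gamma(\tilde{x}_i),$$
where $\hat{\Sigma}$ is the plug-in estimate of Σ computed from the empirical distribution of the $\tilde{x}_i$.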
Remark 1: Suppose F is a probability distribution satisfying $\mathrm{supp}(F) = K \subset \mathbb{R}^d$, and $z_1, \ldots, z_n \sim_{iid} F$ are the latent positions of a hollow, symmetric latent position random graph with associated kernel κ. Extensive work presented in [7] shows that if $\kappa \in L^2(\mathbb{R}^d \times \mathbb{R}^d)$, then there exists a mapping $q : \mathbb{R}^d \to L^2(\mathbb{R}^d)$ such that the graph can be equivalently represented as a generalized random dot product graph with latent positions $x_i = q(z_i) \in L^2(\mathbb{R}^d)$. If κ is assumed to be Hölder continuous with exponent c, then the Hausdorff dimension of q(K) can be bounded by d/c, as shown in [7]. In [14], it has been shown that if K is a Riemannian manifold, then stronger assumptions lead to the conclusion that $q(K) \subset L^2(\mathbb{R}^d)$ is also a Riemannian manifold diffeomorphic to K. Thus, under suitable regularity assumptions, any latent position graph can be treated as a generalized random dot product graph with latent positions on a low dimensional manifold.
Having stated the relevant definitions and results pertaining to random dot product graphs, in the following section we introduce the manifold learning technique used in this paper. For the sake of clarity, we note that manifold learning in general has nothing to do with the random dot product graph model; hence Section 2.2 and Section 2.3 can be read independently.

Manifold learning by raw-stress minimization
Our main model is based on a random dot product graph whose latent positions lie on a one-dimensional Riemannian manifold. Since one-dimensional Riemannian manifolds are isometric to subsets of one-dimensional Euclidean space, we wish to represent the latent positions as points on the real line. This is the motivation behind the use of the following manifold learning technique, which relies upon approximation of geodesics by shortest path distances on localization graphs ([15], [16], [17]). Given points $x_1, \ldots, x_n \in M$, where M is an unknown one-dimensional manifold in ambient space $\mathbb{R}^d$, the goal is to find $\hat{z}_1, \ldots, \hat{z}_n \in \mathbb{R}$ such that the interpoint Euclidean distances between the $\hat{z}_i$ approximately equal the interpoint geodesic distances between the $x_i$. However, the interpoint geodesic distances between the $x_i$ are unknown. The following result shows how to estimate these unknown geodesic distances under suitable regularity assumptions.
Theorem 3 (Theorem 3 from [17]): Suppose M is a one-dimensional compact Riemannian manifold in ambient space $\mathbb{R}^d$. Let $r_0$ and $s_0$ be the minimum radius of curvature and the minimum branch separation of M. Suppose ν > 0 is given and suppose λ > 0 is chosen such that $\lambda < s_0$ and $\lambda < \frac{2}{\pi} r_0 \sqrt{24\nu}$. Additionally, assume $x_1, \ldots, x_n \in M$ are such that for every $u \in M$ there is some i with $d_M(u, x_i) < \delta$. A localization graph is constructed with the $x_i$ as nodes under the following rule: two nodes $x_i$ and $x_j$ are joined by an edge if $\|x_i - x_j\| < \lambda$. When $\delta < \nu\lambda/4$, the following condition holds for all $i, j \in [n]$:
$$(1 - \nu)\, d_M(x_i, x_j) \le d_{n,\lambda}(x_i, x_j) \le (1 + \nu)\, d_M(x_i, x_j),$$
where $d_{n,\lambda}(x_i, x_j)$ denotes the shortest path distance between $x_i$ and $x_j$ in the localization graph.

Given the dissimilarity matrix $D = (d_{n,\lambda}(x_i, x_j))_{i,j=1}^n$, the raw-stress criterion at $(z_1, \ldots, z_n) \in \mathbb{R}^n$ is
$$\sigma(z_1, \ldots, z_n) = \sum_{i=1}^n \sum_{j=1}^n w_{ij} \left( |z_i - z_j| - D_{ij} \right)^2,$$
where $w_{ij} \ge 0$ are weights. For the purpose of learning the manifold M, we set $w_{ij} = 1$ for all i, j, and compute
$$(\hat{z}_1, \ldots, \hat{z}_n) = \arg\min_{z_1, \ldots, z_n \in \mathbb{R}} \sum_{i=1}^n \sum_{j=1}^n \left( |z_i - z_j| - D_{ij} \right)^2.$$
Since the scalars $\hat{z}_i$ are obtained by embedding D into one dimension by minimizing the raw-stress, we shall henceforth refer to the $\hat{z}_i$ as the one-dimensional raw-stress embeddings of D.
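The following is a minimal Python sketch of this pipeline. The function name is ours; we use scikit-learn's SMACOF-based metric MDS as a stand-in for the raw-stress minimizer (playing the role of the smacof package in R used in Section 5), and SciPy for the shortest path distances.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS
from sklearn.neighbors import radius_neighbors_graph

def raw_stress_embedding_1d(points, lam, random_state=0):
    """Localization graph -> shortest-path dissimilarities -> 1-d raw-stress MDS."""
    # Join points at Euclidean distance < lam, edges weighted by that distance.
    graph = radius_neighbors_graph(points, radius=lam, mode="distance")
    # Shortest path distances approximate geodesic distances (Theorem 3).
    D = shortest_path(graph, directed=False)
    if np.isinf(D).any():
        raise ValueError("localization graph is disconnected; increase lam")
    # Metric MDS into one dimension minimizes a stress criterion by SMACOF.
    mds = MDS(n_components=1, dissimilarity="precomputed",
              metric=True, random_state=random_state)
    return mds.fit_transform(D).ravel()
```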
Remark 2: In practice, the raw-stress is minimized numerically by iterative majorization (Chapter 8 of [18]). Standard algorithms can sometimes be trapped in nonglobal minima. However, nearly optimal values can be obtained by repeated iterations of the Guttman transform (Chapter 8 of [18]) when the configurations are initialized by classical multidimensional scaling. In this paper, for the theoretical results, we assume that the global minimum is achieved.

Model and Methodology
Here we describe our models under the assumptions of a known and of an unknown manifold. In each case, we assume that we observe a random dot product graph for which the latent positions of the nodes lie on a one-dimensional manifold in d-dimensional ambient space. Under the assumption that the underlying manifold is known, each node is coupled with a response linked to the scalar pre-image of the corresponding latent position via a regression model, and our goal is to estimate the regression parameters. When the underlying manifold is assumed to be unknown, our model involves a network with a small number of labeled nodes and a large number of unlabeled nodes, and our objective is to predict the response at a given unlabeled node, assuming that the responses for the labeled nodes are linked to the scalar pre-images of the respective latent positions via a regression model. In the setting of an unknown manifold, we can approximate the true regressors only up to scale and location transformations, and due to this non-identifiability issue we carry out prediction of responses instead of estimation of regression parameters when the manifold is unknown.
Remark 3: We would like to remind the reader that the setting of a known manifold is not realistic. We consider it to familiarize the reader with the best case scenario, for the sake of comparison with the results obtained in the realistic setting of an unknown manifold.

Regression parameter estimation on known manifold
Suppose $\psi : [0, L] \to \mathbb{R}^d$ is a known bijective function and let $M = \psi([0, L])$ be a one-dimensional compact Riemannian manifold. Consider a random dot product graph for which the nodes (with latent positions $x_i$) lie on the known one-dimensional manifold M in d-dimensional ambient space. Let $t_1, \ldots, t_n$ be the scalar pre-images of the latent positions, that is, $x_i = \psi(t_i)$ for all $i \in [n]$, where n is the number of nodes of the graph. Suppose, for each i, the i-th node is coupled with a response $y_i$ which is linked to the latent position via the following regression model:
$$y_i = \alpha + \beta t_i + \epsilon_i,$$
where $\epsilon_i \sim_{iid} N(0, \sigma^2)$ for all $i \in [n]$. Our goal is to estimate α and β. If the true regressors $t_i$ were known, we could estimate α and β by their ordinary least squares estimates, given by
$$\hat{\beta}_{true} = \frac{\sum_{i=1}^n (t_i - \bar{t})(y_i - \bar{y})}{\sum_{i=1}^n (t_i - \bar{t})^2}, \qquad \hat{\alpha}_{true} = \bar{y} - \hat{\beta}_{true}\, \bar{t}.$$
Since the true latent positions $x_i$ are unknown, we estimate the true regressors $t_i$ by $\hat{t}_i = \arg\min_t \| \tilde{x}_i - \psi(t) \|$, where $\tilde{x}_i$ is the optimally rotated adjacency spectral estimate of the i-th latent position $x_i$. The existence of $\hat{t}_i$ is guaranteed by the compactness of the manifold $M = \psi([0, L])$. We then substitute $t_i$ by $\hat{t}_i$ in $\hat{\alpha}_{true}$ and $\hat{\beta}_{true}$ to obtain the substitute (or plug-in) estimators $\hat{\alpha}_{sub}$ and $\hat{\beta}_{sub}$. The steps to compute $\hat{\alpha}_{sub}$ and $\hat{\beta}_{sub}$ are formally stated in Algorithm 1a.
Algorithm 1a:
1: Compute the adjacency spectral embedding $\hat{X}$ of the adjacency matrix A into $\mathbb{R}^d$.
2: Use the given rotation matrix W to get the optimally rotated adjacency spectral embedding $\tilde{X} = \hat{X} W$.
3: Obtain the pre-images of the projections of the estimated latent positions on the manifold by $\hat{t}_i = \arg\min_t \| \tilde{x}_i - \psi(t) \|$.
4: Compute $\hat{\alpha}_{sub}$ and $\hat{\beta}_{sub}$ by substituting $\hat{t}_i$ for $t_i$ in the ordinary least squares estimates.
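A minimal Python sketch of Algorithm 1a follows. The function name and the grid-search projection step are ours (we assume ψ can be evaluated on an array of parameter values); it is a sketch, not a definitive implementation.

```python
import numpy as np

def algorithm_1a(x_tilde, y, psi, L, grid_size=2000):
    """Substitute estimators (alpha_sub, beta_sub) on a known curve psi:[0,L]->R^d.

    x_tilde : (n, d) optimally rotated adjacency spectral estimates
    y       : (n,) nodal responses
    psi     : vectorized map; psi(t) has shape (len(t), d)
    """
    t_grid = np.linspace(0.0, L, grid_size)
    curve = psi(t_grid)                                   # points on the manifold
    # Step 3: pre-image of the projection of each row of x_tilde onto the curve.
    d2 = ((x_tilde[:, None, :] - curve[None, :, :]) ** 2).sum(-1)
    t_hat = t_grid[d2.argmin(axis=1)]
    # Step 4: ordinary least squares with t_hat substituted for the true regressors.
    beta_sub = np.cov(t_hat, y, bias=True)[0, 1] / t_hat.var()
    alpha_sub = y.mean() - beta_sub * t_hat.mean()
    return alpha_sub, beta_sub

# Example with the Hardy-Weinberg curve used in Section 5:
# psi = lambda t: np.stack([t**2, 2*t*(1-t), (1-t)**2], axis=1)
```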

Prediction of responses on unknown manifold
Here, we assume that $\psi : [0, L] \to \mathbb{R}^d$ is unknown and arclength parameterized, that is, $\|\psi'(t)\| = 1$ for all t. Additionally, assume that $M = \psi([0, L])$ is a compact Riemannian manifold. Consider an n-node random dot product graph whose nodes (with latent positions $x_i$) lie on the unknown manifold $M = \psi([0, L])$ in ambient space $\mathbb{R}^d$. Assume that the first s nodes of the graph are coupled with a response covariate, and that the response $y_i$ at the i-th node is linked to the latent position via the linear regression model
$$y_i = \alpha + \beta t_i + \epsilon_i,$$
where $\epsilon_i \sim_{iid} N(0, \sigma^2)$ for all $i \in [s]$. Our goal is to predict the response for the r-th node, where r > s.
We first compute the one-dimensional raw-stress embeddings $\hat{z}_1, \ldots, \hat{z}_l$ of the dissimilarity matrix of shortest path distances between the adjacency spectral estimates (see Algorithm 1b), where l is such that s < r < l ≤ n. We then fit a simple linear regression model to the bivariate data $(y_i, \hat{z}_i)_{i=1}^s$ and use it to predict the response at $\hat{z}_r$, corresponding to the r-th node. The above-mentioned procedure to predict the response at the r-th node from the given observations is formally described in Algorithm 1b, and a code sketch is given below.
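A minimal Python sketch of Algorithm 1b, reusing the hypothetical raw_stress_embedding_1d sketch from Section 2.3 (indices are 0-based in the code):

```python
import numpy as np

def algorithm_1b(xhat, y, r, lam, l):
    """Predict the response at unlabeled node r.

    xhat : (n, d) adjacency spectral estimates (no rotation needed: shortest
           path distances are invariant to orthogonal transformations)
    y    : (s,) responses observed at the first s nodes, with s <= r < l
    """
    s = len(y)
    # One-dimensional raw-stress embeddings of the first l nodes.
    z = raw_stress_embedding_1d(xhat[:l], lam)
    # Simple linear regression of y_i on z_i over the labeled nodes ...
    b, a = np.polyfit(z[:s], y, deg=1)
    # ... and prediction at the embedding of node r.
    return a + b * z[r]
```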

Main results
In this section we present our theoretical results showing consistency of the estimators of the regression parameters on a known manifold, and convergence guarantees for the predicted responses based on the raw-stress embeddings on an unknown manifold. In the setting of an unknown manifold, as a corollary to the consistency of the predicted responses, we also derive a convergence guarantee for a test of the validity of a simple linear regression model based on an approximate F-statistic.

The case of known manifold
Recall that we observe a random dot product graph with n nodes for which the latent positions $x_i$ lie on a one-dimensional manifold. The following result shows that we can consistently estimate (α, β) by $(\hat{\alpha}_{sub}, \hat{\beta}_{sub})$.
Theorem 4: Suppose $\psi : [0, L] \to \mathbb{R}^d$ is bijective and its inverse γ satisfies $\|\nabla\gamma(w)\| < K$ for all $w \in \psi([0, L])$, for some K > 0. Let $x_i = \psi(t_i)$ be the latent position of the i-th node of a random dot product graph with n nodes, and assume the responses follow the regression model $y_i = \alpha + \beta t_i + \epsilon_i$ with $\epsilon_i \sim_{iid} N(0, \sigma^2)$. Let $X^{(n)} = [x_1 \,|\, \cdots \,|\, x_n]^T$ be the latent position matrix, suppose $\hat{X}^{(n)}$ is the adjacency spectral embedding of the adjacency matrix $A^{(n)}$, and suppose the manifold $M = \psi([0, L])$ is known. Then, as n → ∞, we have $\hat{\alpha}_{sub} \to_P \alpha$ and $\hat{\beta}_{sub} \to_P \beta$.

A rough sketch of the proof of Theorem 4 is as follows. Note that $\hat{t}_i = \arg\min_t \|\tilde{x}_i - \psi(t)\|$, where $\tilde{x}_i$ is the optimally rotated adjacency spectral estimate of $x_i$, and $t_i = \arg\min_t \|x_i - \psi(t)\|$. Recall that Theorem 1 tells us that $\tilde{x}_i$ is consistent for $x_i$. This enables us to prove that $\hat{t}_i$ is consistent for $t_i$, which ultimately leads us to the consistency of $\hat{\alpha}_{sub}$ and $\hat{\beta}_{sub}$, because $\hat{\alpha}_{sub}$ and $\hat{\beta}_{sub}$ are computed by replacing the true regressors $t_i$ with $\hat{t}_i$ in the expressions for $\hat{\alpha}_{true}$ and $\hat{\beta}_{true}$, which are consistent for α and β respectively. Theorem 4 gives us the consistency of the substitute estimators of the regression parameters under the assumption of boundedness of the gradient of the inverse function. Since continuously differentiable functions have bounded gradients on compact subsets of their domain, we can apply Theorem 4 whenever $\gamma = \psi^{-1}$ can be expressed as the restriction of a function with continuous partial derivatives. As a direct consequence of Theorem 4, our next result demonstrates that the substitute estimators have optimal asymptotic variance amongst all linear unbiased estimators.

The case of unknown manifold
Recall that our goal is to predict responses in a semisupervised setting in a random dot product graph on an unknown one-dimensional manifold in ambient space $\mathbb{R}^d$. We provide justification for the use of Algorithm 1b for this purpose by showing that the predicted response $\tilde{y}_r$ at the r-th node based on the raw-stress embeddings approaches the predicted response based on the true regressors $t_i$ as n → ∞.
Intuition suggests that in order to carry out inference on the regression model, we must learn the unknown manifold M. We exploit the availability of a large number of unlabeled nodes, whose latent positions lie on the one-dimensional manifold, to learn the manifold. Observe that since the underlying manifold ψ([0, L]) is arclength parameterized, the geodesic distance between any two points on it equals the distance between their corresponding pre-images.
Results from the literature ([16]) show that if an appropriate localization graph is constructed on a sufficiently large number of points on a manifold, then the shortest path distance between two points approximates the geodesic distance between them. Therefore, on a localization graph with an appropriately chosen neighbourhood parameter λ, constructed on the adjacency spectral estimates $\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n$, the shortest path distance $d_{n,\lambda}(\hat{x}_i, \hat{x}_j)$ between $\hat{x}_i$ and $\hat{x}_j$ is expected to approximate the geodesic distance $d_M(x_i, x_j) = |t_i - t_j|$. Note that there is no need to rotate the adjacency spectral estimates $\hat{x}_i$, because the shortest path distance is a sum of Euclidean distances, and Euclidean distances are invariant to orthogonal transformations. Thus, when the dissimilarity matrix $D = (d_{n,\lambda}(\hat{x}_i, \hat{x}_j))_{i,j=1}^l$ is embedded into one dimension by minimization of the raw-stress, we obtain embeddings $\hat{z}_1, \ldots, \hat{z}_l$ for which the interpoint distance $|\hat{z}_i - \hat{z}_j|$ approximates the interpoint distance $|t_i - t_j|$. In other words, the estimated distances from the raw-stress embeddings of the adjacency spectral estimates of the latent positions approximate the true geodesic distances, as demonstrated by the following result. This argument is the basis for the construction of a sequence of predicted responses based on the raw-stress embeddings which approach the predicted responses based on the true regressors as the number of auxiliary nodes goes to infinity.
Theorem 5 (Theorem 4 from [11]): Suppose the function $\psi : [0, L] \to \mathbb{R}^d$ is arclength parameterized, that is, $\|\psi'(t)\| = 1$ for all $t \in [0, L]$. Let $x_i \in \mathbb{R}^d$ be the latent position of the i-th node of the random dot product graph, let $t_i \in \mathbb{R}$ be such that $x_i = \psi(t_i)$ for all i, and assume $t_i \sim_{iid} g$, where g is an absolutely continuous probability density function satisfying $\mathrm{supp}(g) = [0, L]$ and $g_{min} > 0$.
Let $\hat{x}_i$ be the adjacency spectral estimate of the true latent position $x_i$ for all i. Then there exist sequences $\{n_K\}_{K=1}^\infty$ of numbers of nodes and $\{\lambda_K\}_{K=1}^\infty$ of neighbourhood parameters satisfying $n_K \to \infty$, $\lambda_K \to 0$ as K → ∞, such that for any fixed integer l satisfying $s < l < n_K$ for all K,
$$\max_{i,j \in [l]} \Big| \, |\hat{z}_i - \hat{z}_j| - |t_i - t_j| \, \Big| \to_P 0 \quad \text{as } K \to \infty$$
holds, where $(\hat{z}_1, \ldots, \hat{z}_l)$ is a minimizer of the raw-stress criterion, that is,
$$(\hat{z}_1, \ldots, \hat{z}_l) = \arg\min_{z_1, \ldots, z_l \in \mathbb{R}} \sum_{i=1}^l \sum_{j=1}^l \left( |z_i - z_j| - d_{n_K, \lambda_K}(\hat{x}_i, \hat{x}_j) \right)^2.$$
Theorem 5 shows that the one-dimensional raw-stress embeddings $\hat{z}_1, \ldots, \hat{z}_l$ satisfy $|\hat{z}_i - \hat{z}_j| \approx |t_i - t_j|$ for all $i, j \in [l]$. This means that for every $i \in [l]$, $\hat{z}_i$ approximates $t_i$ up to scale and location transformations. Since in simple linear regression the accuracy of the predicted response is unaffected by scale and location transformations of the regressors, we can expect the predicted response at a particular node based on the $\hat{z}_i$ to approach the predicted response based on the true regressors $t_i$. The following theorem, our key result in this paper, demonstrates that this is in fact the case.
Theorem 6: Consider a random dot product graph for which each latent position lies on an arclength parameterized one-dimensional manifold ψ([0, L]), where ψ is unknown. Let $x_i = \psi(t_i)$ be the latent position of the i-th node for all i. Assume the responses at the first s nodes satisfy $y_i = \alpha + \beta t_i + \epsilon_i$ with $\epsilon_i \sim_{iid} N(0, \sigma^2)$, where s is a fixed integer. The predicted response at the r-th node based on the true regressors is $\hat{y}_r = \hat{\alpha}_{true} + \hat{\beta}_{true}\, t_r$. Then there exist sequences $n_K \to \infty$ of numbers of nodes and $\lambda_K \to 0$ of neighbourhood parameters such that for every r > s, $|\hat{y}_r - \tilde{y}_r^{(K)}| \to_P 0$ as K → ∞, where $\tilde{y}_r^{(K)}$ is the predicted response based on the raw-stress embeddings computed from the adjacency matrix $A^{(K)}$ when the number of nodes is $n_K$, and l is a fixed natural number satisfying l > r > s.
Recall that the validity of a simple linear regression model can be tested by an F-test whose test statistic depends on the predicted responses based on the true regressors. Since we have a way to approximate the predicted responses based on the true regressors by predicted responses based on the raw-stress embeddings, we can also devise a test whose power approximates the power of the F-test based on the true regressors, as shown by the following result.
Corollary 2: In the setting of Theorem 6, suppose $\tilde{y}^{(K)} = (\tilde{y}_1^{(K)}, \ldots, \tilde{y}_s^{(K)})$ is the sequence of vectors of predicted responses at the first s nodes of the random dot product graph, based on the raw-stress embeddings $\hat{z}_1, \ldots, \hat{z}_s$. Define the approximate F-statistic
$$F^{(K)} = \frac{\sum_{i=1}^s \left( \tilde{y}_i^{(K)} - \bar{y} \right)^2}{\sum_{i=1}^s \left( y_i - \tilde{y}_i^{(K)} \right)^2 / (s - 2)},$$
and let $F^*$ denote the analogous statistic computed from the predicted responses $\hat{y}_i$ based on the true regressors $t_i$. Consider testing the null hypothesis $H_0 : \beta = 0$ against $H_1 : \beta \ne 0$ in the absence of the true regressors $t_i$, with the decision rule: reject $H_0$ in favour of $H_1$ at level of significance α if $F^{(K)} > c_\alpha$, where $c_\alpha$ is the $(1 - \alpha)$-th quantile of the $F_{1, s-2}$ distribution. If the power of this test is denoted by $\pi^{(K)}$, then $\lim_{K \to \infty} \pi^{(K)} = \pi^*$, where $\pi^*$ is the power of the test whose decision rule is to reject $H_0$ in favour of $H_1$ at level of significance α if $F^* > c_\alpha$.
Thus, if one wants to perform a test of model validity in the absence of the true regressors $t_i$, then a test of approximately equal power, based on the raw-stress embeddings $\hat{z}_i$ for a graph with a sufficiently large number of auxiliary nodes, can be used instead.
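For concreteness, such a test can be sketched as follows (a minimal Python sketch with our own function name; y and z_hat are arrays over the s labeled nodes):

```python
import numpy as np
from scipy.stats import f as f_dist

def model_validity_test(y, z_hat, level=0.05):
    """Approximate F-test of H0: beta = 0, with raw-stress embeddings
    z_hat standing in for the unknown regressors t_i."""
    y, z_hat = np.asarray(y, float), np.asarray(z_hat, float)
    s = len(y)
    b, a = np.polyfit(z_hat, y, deg=1)           # fitted slope and intercept
    fitted = a + b * z_hat
    ssr = ((fitted - y.mean()) ** 2).sum()       # regression sum of squares (1 df)
    sse = ((y - fitted) ** 2).sum()              # residual sum of squares (s-2 df)
    F = ssr / (sse / (s - 2))
    c = f_dist.ppf(1.0 - level, 1, s - 2)        # critical value of F_{1, s-2}
    return F, bool(F > c)                        # statistic and rejection decision
```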

Simulation
In this section, we present simulation results supporting our theorems. We conducted simulations on 100 Monte Carlo samples of graphs on known and unknown one-dimensional manifolds.

The case of known manifold
We take the manifold to be ψ([0, 1]), where $\psi : [0, 1] \to \mathbb{R}^3$ is the Hardy-Weinberg curve, given by $\psi(t) = (t^2, 2t(1 - t), (1 - t)^2)$. The number of nodes, n, varies from 600 to 2500 in steps of 100. For each n, we repeat the following procedure over 100 Monte Carlo samples. A sample $t_1, \ldots, t_n \sim_{iid} U[0, 1]$ is generated, and responses $y_i$ are sampled via the regression model $y_i = \alpha + \beta t_i + \epsilon_i$, where α = 2.0, β = 5.0, σ = 0.1. An undirected, hollow random dot product graph with latent positions $x_i = \psi(t_i)$, i ∈ [n], is generated. More specifically, the (i, j)-th element of the adjacency matrix A satisfies $A_{ij} \sim \mathrm{Bernoulli}(x_i^T x_j)$ for all i < j, with $A_{ij} = A_{ji}$ for all i, j ∈ [n], and $A_{ii} = 0$ for all i ∈ [n]. We denote the true latent position matrix by $X = [x_1 \,|\, x_2 \,|\, \cdots \,|\, x_n]^T$, and its adjacency spectral estimate by $\hat{X}$. We compute $\hat{W} = \arg\min_{W \in O(d)} \| \hat{X} W - X \|_F$ and finally set $\tilde{X} = \hat{X} \hat{W}$ to be the optimally rotated adjacency spectral estimate of the latent position matrix X. Then we obtain $\hat{t}_i = \arg\min_t \| \tilde{x}_i - \psi(t) \|$ for i ∈ [n], and compute $\hat{\alpha}_{sub}$ and $\hat{\beta}_{sub}$. Setting θ = (α, β), $\hat{\theta}_{true} = (\hat{\alpha}_{true}, \hat{\beta}_{true})$ and $\hat{\theta}_{sub} = (\hat{\alpha}_{sub}, \hat{\beta}_{sub})$, we compute the sample mean squared errors (MSE) of $\hat{\theta}_{true}$ and $\hat{\theta}_{sub}$ over the 100 Monte Carlo samples and plot them against n. The plot is given in Figure 2, and a sketch of one replicate of the data-generating step is given below.
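The data-generating step of one Monte Carlo replicate can be sketched as follows (function name ours; parameter values as stated above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_known_manifold_replicate(n, alpha=2.0, beta=5.0, sigma=0.1):
    t = rng.uniform(0.0, 1.0, size=n)                     # scalar pre-images
    y = alpha + beta * t + rng.normal(0.0, sigma, n)      # nodal responses
    X = np.stack([t**2, 2 * t * (1 - t), (1 - t)**2], 1)  # Hardy-Weinberg curve
    P = X @ X.T                                           # edge probability matrix
    U = rng.uniform(size=(n, n))
    A = np.triu((U < P).astype(float), k=1)               # independent Bernoulli draws
    A = A + A.T                                           # undirected and hollow
    return A, X, t, y
```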

Remark 4
The fact that the optimal rotation matrix Ŵ needs to be computed from the true latent position matrix X is what makes inference on the regression model in the scenario of known manifold unrealistic, because X is typically unknown.
Figure 2: Plot showing consistency of the substitute estimator of the regression parameter vector on a known manifold. For 100 Monte Carlo samples, substitute estimates are computed using the projections of the optimally rotated adjacency spectral estimates of the latent positions onto the manifold, and then the sample MSEs of the estimator based on the true regressors and of the substitute estimator are computed. For graphs of moderate size (n ≤ 2000), the substitute estimator performs significantly worse than the estimator based on the true regressors. However, as the number of nodes increases, the difference in performance between the estimators diminishes and the mean squared errors of both estimators approach zero.

The case of unknown manifold
We assume that the underlying arclength parameterized manifold is ψ([0, 1]), where $\psi : [0, 1] \to \mathbb{R}^4$ is given by ψ(t) = (t/2, t/2, t/2, t/2). We take the number of nodes at which responses are recorded to be s = 20. Here, m denotes the number of auxiliary nodes, and n = m + s denotes the total number of nodes. We vary n over the set {500, 750, 1000, . . . , 3000}. For each n, we repeat the following procedure over 100 Monte Carlo samples. A sample $t_1, \ldots, t_s \sim_{iid} U[0, 1]$ is generated, and responses $y_i$ are generated from the regression model $y_i = \alpha + \beta t_i + \epsilon_i$, where α = 2.0, β = 5.0, σ = 0.01. Additionally, we sample the pre-images of the auxiliary nodes, $t_{s+1}, \ldots, t_n \sim_{iid} U[0, 1]$. Thus, a random dot product graph with latent positions $x_i = \psi(t_i)$, i ∈ [n], is generated, and the adjacency spectral estimates $\hat{x}_i$ of its latent positions are computed. A localization graph is constructed with the first n/10 of the adjacency spectral estimates as nodes under the following rule: two nodes $\hat{x}_i$ and $\hat{x}_j$ of the localization graph are joined by an edge if and only if $\| \hat{x}_i - \hat{x}_j \| < \lambda$, where $\lambda = 0.8 \times 0.99^{K-1}$ when n is the K-th term of the vector (500, 750, 1000, . . . , 3000). Then, the shortest path distance matrix $D = (d_{n,\lambda}(\hat{x}_i, \hat{x}_j))_{i,j=1}^l$ of the first l = s + 1 estimated latent positions is embedded into one dimension by raw-stress minimization, using the mds function of the smacof package in R, and one-dimensional embeddings $\hat{z}_i$ are obtained. The responses $y_i$ are regressed upon the one-dimensional raw-stress embeddings $\hat{z}_i$ for i ∈ [s], and the predicted response $\tilde{y}_{s+1}$ for the (s + 1)-th node is computed. The predicted response $\hat{y}_{s+1}$ for the (s + 1)-th node based on the true regressors $t_i$ is also obtained. Because the true regressors $t_i$ are identically distributed, the distribution of each predicted response is invariant to the label of the node; hence we drop the subscript and use $\hat{y}_{true}$ to denote the predicted response based on the true regressors $t_i$, and $\hat{y}_{sub}$ to denote the predicted response based on the raw-stress embeddings $\hat{z}_i$ (since the raw-stress embeddings are used as substitutes for the true regressors in predicting the response). Finally, the sample mean of the squared differences $(\hat{y}_{sub} - \hat{y}_{true})^2$ over all the Monte Carlo samples is computed and plotted against n. The plot is given in Figure 3.
Next, we focus on the issue of hypothesis testing based on the raw-stress embeddings $\hat{z}_i$. In order to test the validity of the model $y_i = \alpha + \beta t_i + \epsilon_i$, where $\epsilon_i \sim_{iid} N(0, \sigma^2)$, i ∈ [s], one would test $H_0 : \beta = 0$ against $H_1 : \beta \ne 0$. If the true regressors $t_i$ were known, we would use the test statistic $F^*$; Corollary 2 tells us that, with a sufficiently large number of auxiliary latent positions, one can instead use a test based on the one-dimensional raw-stress embeddings $\hat{z}_i$ whose power approximates the power of the test based on the true F-statistic $F^*$. We present a plot in Figure 4 that supports Corollary 2: the power of the test based on the raw-stress embeddings approaches the power of the test based on the true regressors for a chosen value of the pair of regression parameters. The setting is almost the same as the previous one, except that here the number n of nodes varies from 100 to 1000 in steps of 50. For each n, the true F-statistic (based on the true regressors $t_i$) and the estimated F-statistic (based on the raw-stress embeddings $\hat{z}_i$) are computed for 100 Monte Carlo samples, and the powers of the two tests are estimated by the proportion of Monte Carlo samples for which the statistics exceed a particular threshold, as sketched below. Then, for each n, the difference between the empirically estimated powers of the two tests (one based on the raw-stress embeddings and the other based on the true regressors) is computed and plotted against n. The plot, given in Figure 4, shows that the difference between the empirically estimated powers of the tests based on the true regressors and on the raw-stress embeddings approaches zero as the number of auxiliary nodes grows.
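The empirical power estimate used here is just a rejection frequency; a minimal sketch (variable names hypothetical):

```python
import numpy as np

def empirical_power(f_statistics, critical_value):
    # Proportion of Monte Carlo replicates whose statistic exceeds the threshold.
    return float(np.mean(np.asarray(f_statistics) > critical_value))

# Plotted quantity for a given n:
# gap = empirical_power(F_true_samples, c) - empirical_power(F_embed_samples, c)
```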

Application
In this section, we demonstrate the application of our methodology to real world data. Howard Hughes Medical Institute Janelia reconstructed the complete wiring diagram of the higher order
parallel fibre system for associative learning in the larval Drosophila brain, the mushroom body. There are n = 100 datapoints corresponding to 100 Kenyon cell neurons forming a network in a latent space. The distance (in microns) between the bundle entry point of a Kenyon cell neuron and the mushroom body neuropil is treated as the response corresponding to that neuron. We carry out hypothesis testing to test whether a simple linear regression model links the responses on the neurons of the right hemisphere of the larval Drosophila ([12], [19], [6]) to some dimension-reduced version of the latent positions of the neurons.
A directed graph representing a network of the 100 Kenyon cell neurons is observed. Since the graph under consideration is directed, the adjacency spectral embedding is formed by taking into account both the left and right singular vectors of the adjacency matrix. The latent position of each node is estimated by a six-dimensional vector formed by augmenting the top 3 left singular vectors, scaled by the diagonal matrix of the corresponding singular values, with the top 3 right singular vectors, scaled by the diagonal matrix of the corresponding singular values (a code sketch of this construction is given below). For each pair of components, we obtain a scatterplot of the bivariate dataset of all 100 points, thus obtaining a 6 × 6 matrix of scatterplots, shown in Figure 5. Figure 5 shows that every component is approximately related to every other component, indicating the possibility that the six-dimensional datapoints lie on a one-dimensional manifold. We construct a localization graph with neighbourhood parameter λ = 0.50 on the six-dimensional estimates of the latent positions and embed the dissimilarity matrix $D = (d_{100, 0.5}(\hat{x}_i, \hat{x}_j))$ of shortest path distances into one dimension by minimizing the raw-stress criterion, obtaining the one-dimensional embeddings $\hat{z}_i$. Figure 6 presents the plot of the responses $y_i$ against the one-dimensional raw-stress embeddings $\hat{z}_i$.
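A minimal Python sketch of the directed spectral embedding described above (function name ours):

```python
import numpy as np

def directed_spectral_embedding(A, d=3):
    """Concatenate scaled leading left and right singular vectors of a
    directed adjacency matrix; d=3 yields the 6-dimensional estimates."""
    U, S, Vt = np.linalg.svd(A)
    left = U[:, :d] * np.sqrt(S[:d])     # scaled leading left singular vectors
    right = Vt[:d].T * np.sqrt(S[:d])    # scaled leading right singular vectors
    return np.hstack([left, right])      # (n, 2d) estimated latent positions
```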
The plot in Figure 6 suggests that a simple linear regression model links the responses with the raw-stress embeddings. We wish to check the validity of the model
$$y_i = a + b\, \hat{z}_i + \eta_i, \qquad \eta_i \sim_{iid} N(0, \sigma^2).$$
For that purpose, we test $H_0 : b = 0$ vs $H_1 : b \ne 0$ at level of significance 0.01. The value of the F-statistic, with degrees of freedom 1 and 98, is found to be 9.815. This yields a p-value of $P[F_{1,98} > 9.815] = 0.0023$, which is lower than our level of significance 0.01. Therefore, we conclude that a significant simple linear regression relationship exists between the responses $y_i$ (as values of the dependent variable) and the embeddings $\hat{z}_i$ (as values of the independent variable). Using Corollary 2, we conclude that the responses on the neurons are linked to the scalar pre-images of the latent positions via a simple linear regression model. Moreover, if the distance between the bundle entry point and the mushroom body neuropil is not recorded for some Kenyon cell neurons, then these values can be predicted using the one-dimensional raw-stress embeddings $\hat{z}_i$ as proxies for the true regressors.

Conclusion
In the present work, theoretical and numerical results are derived for models in which the latent positions of random dot product graphs lie on a one-dimensional manifold in a high dimensional ambient space. We demonstrated that for a known manifold, the parameters of a simple linear regression model, linking the response variable recorded at each node of the graph to the scalar pre-images of the latent positions of the nodes, can be estimated consistently even though the true regressors are unknown. However, a key result of our work is to show that even when the manifold is unknown (the more realistic scenario), one can learn it well enough under favourable conditions to obtain predicted responses that are close to the predicted responses based on the true regressors.
We use the convergence guarantees for raw-stress embeddings ([11]) to obtain consistent estimators of the interpoint distances of the regressors when the underlying manifold is unknown. We demonstrate that as the number of auxiliary latent positions grows to infinity, at every auxiliary node the predicted response based on the raw-stress embeddings approaches the predicted response based on the true regressors.
Observe that while the substitute estimators of the regression parameters (or the predicted responses based on the raw-stress embeddings) can deliver asymptotic performance close to that of their counterparts based on the true regressors, in real-life scenarios we may be dealing with small samples, where the substitute estimators (or the predicted responses) are likely to perform poorly. When the underlying manifold is known, we can overcome this issue by taking into account the measurement errors, that is, the differences between the estimated regressors and the true regressors, and making corresponding adjustments in the estimators of the regression parameters ([13]). We conduct a simulation to compare the performances of the estimator based on the true regressors, the substitute (or naive) estimator, and the measurement error adjusted estimators on a known manifold. For a regression model $y_i = \beta t_i + \epsilon_i$, $\epsilon_i \sim_{iid} N(0, \sigma^2)$, where the regressors $t_i$ are estimated by $\hat{t}_i$, the measurement error adjusted estimator is given by
$$\hat{\beta}_{adj,\sigma} = \frac{\sum_{i=1}^n y_i \hat{t}_i}{\sum_{i=1}^n \hat{t}_i^2 - \sum_{i=1}^n \Gamma_i}, \qquad \text{where } \Gamma_i = \mathrm{var}(\hat{t}_i - t_i).$$
In most realistic scenarios, it is not possible to know the true values of $\Gamma_i$. However, if they admit consistent estimates $\hat{\Gamma}_i$, then we can use the proxy given by
$$\hat{\beta}_{adj,\hat{\sigma}} = \frac{\sum_{i=1}^n y_i \hat{t}_i}{\sum_{i=1}^n \hat{t}_i^2 - \sum_{i=1}^n \hat{\Gamma}_i}.$$
In order to compare the performances of these estimators, we sample a random dot product graph whose nodes lie on a one-dimensional curve in a high dimensional ambient space. We compute the adjacency spectral estimates of the latent positions, project them onto the manifold, and obtain estimates of the regressors, which are then used to compute the values of $\hat{\beta}_{true}$, $\hat{\beta}_{naive}$, $\hat{\beta}_{adj,\sigma}$ and $\hat{\beta}_{adj,\hat{\sigma}}$ for 100 Monte Carlo samples. A boxplot of the values of these estimators computed over the 100 Monte Carlo samples is shown in Figure 7.
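The adjusted estimator is a one-line computation; a minimal sketch (function name ours; pass the true $\Gamma_i$ for $\hat{\beta}_{adj,\sigma}$ or the estimates $\hat{\Gamma}_i$ for the proxy $\hat{\beta}_{adj,\hat{\sigma}}$):

```python
import numpy as np

def beta_adjusted(y, t_hat, gamma):
    # beta_adj = sum(y_i * t_hat_i) / (sum(t_hat_i^2) - sum(Gamma_i))
    y, t_hat, gamma = map(np.asarray, (y, t_hat, gamma))
    return float((y * t_hat).sum() / ((t_hat ** 2).sum() - gamma.sum()))
```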
Figure 7 clearly shows that the measurement error adjusted estimators $\hat{\beta}_{adj,\sigma}$ and $\hat{\beta}_{adj,\hat{\sigma}}$ outperform the naive estimator $\hat{\beta}_{naive}$. Moreover, the performances of $\hat{\beta}_{adj,\sigma}$ and $\hat{\beta}_{adj,\hat{\sigma}}$ do not differ by a significant amount. This assures us that the use of the measurement error adjusted estimator $\hat{\beta}_{adj,\hat{\sigma}}$ is pragmatic and effective, since the computation of $\hat{\beta}_{adj,\sigma}$ is not possible in many realistic scenarios, owing to the lack of knowledge of the true values of $\Gamma_i$.
Unfortunately, in the case of unknown manifolds, one cannot readily apply this same methodology, since only the interpoint distances, and not the embeddings themselves, are preserved, as discussed in Section 4. We believe this can be an interesting problem to approach in the future. Another direction of future investigation is to see whether the results for one-dimensional manifolds can be generalized to k-dimensional manifolds with k > 1.
Algorithm 1b:
1: Compute the adjacency spectral estimates $\hat{x}_i$ of the latent positions of all n nodes.
2: Construct a localization graph on the adjacency spectral estimates $\hat{x}_i$ under the following rule: join two nodes $\hat{x}_i$ and $\hat{x}_j$ if and only if $\| \hat{x}_i - \hat{x}_j \| < \lambda$, for some predetermined λ > 0 known as the neighbourhood parameter.
3: Denoting the shortest path distance between $\hat{x}_i$ and $\hat{x}_j$ by $d_{n,\lambda}(\hat{x}_i, \hat{x}_j)$, embed the dissimilarity matrix $D = (d_{n,\lambda}(\hat{x}_i, \hat{x}_j))_{i,j=1}^l$ into one dimension by minimizing the raw-stress criterion, thus obtaining
$$(\hat{z}_1, \ldots, \hat{z}_l) = \arg\min_{z_1, \ldots, z_l \in \mathbb{R}} \sum_{i=1}^l \sum_{j=1}^l \left( |z_i - z_j| - d_{n,\lambda}(\hat{x}_i, \hat{x}_j) \right)^2.$$
4: Regress the responses $y_i$ on $\hat{z}_i$ for i ∈ [s], and predict the response at the r-th node from the fitted line evaluated at $\hat{z}_r$.

Figure 3
Figure 3: Plot showing consistency of the predicted responses based on raw-stress embeddings on an unknown manifold. The arclength parameterized manifold is taken to be ψ([0, 1]), where ψ(t) = (t/2, t/2, t/2, t/2). The sample mean of the squared difference between the predicted response $\hat{y}_{sub}$ based on the raw-stress embeddings and the predicted response $\hat{y}_{true}$ based on the true regressors is plotted against the total number of nodes in the graph. The substitute predicted response $\hat{y}_{sub}$ performs significantly worse than $\hat{y}_{true}$ for a moderate number of auxiliary nodes (n ≤ 1000). However, as the number of auxiliary nodes increases further, the squared difference between $\hat{y}_{true}$ and $\hat{y}_{sub}$ goes to zero.

Figure 4
Figure 4: Plot of the difference between the empirical powers of tests for model validity based on the one-dimensional raw-stress embeddings and on the true regressors. The arclength parameterized manifold is taken to be ψ([0, 1]), where ψ(t) = (t/2, t/2, t/2, t/2). For a small fixed number (s = 20) of nodes, responses $y_i$ are generated from $y_i = \alpha + \beta t_i + \epsilon_i$, $\epsilon_i \sim_{iid} N(0, \sigma^2)$. A large number (n − s) of auxiliary nodes are generated on ψ([0, 1]), and a localization graph is constructed on the adjacency spectral estimates corresponding to the first n/2 nodes. When n is the K-th term of the vector (100, 150, 200, . . . , 1000), the neighbourhood parameter is taken to be $\lambda = 0.9 \times 0.99^{K-1}$. The dissimilarity matrix of the shortest path distances is embedded into one dimension by minimization of the raw-stress criterion. In order to test $H_0 : \beta = 0$ vs $H_1 : \beta \ne 0$, the test statistic $F^*$ based on the true regressors $t_i$ and the test statistic $\hat{F}$ based on the one-dimensional raw-stress embeddings $\hat{z}_i$ are compared, where n is the total number of nodes in the graph. The corresponding powers are empirically estimated by the proportion of times, in a collection of 100 Monte Carlo samples, that the test statistics reject $H_0$, for every n varying from 100 to 1000 in steps of 50. The plot shows that the difference between the estimated powers of the two tests goes to zero, indicating that tests based on the raw-stress embeddings are almost as good as tests based on the true regressors for a sufficiently large number of auxiliary nodes.

Figure 5
Figure 5: Matrix of scatterplots indicating an underlying low-dimensional structure in the network of 100 Kenyon cell neurons in larval Drosophila. A directed graph is considered, in which every node represents a neuron. A six-dimensional adjacency spectral estimate is obtained for every node by augmenting the 3 leading left singular vectors, scaled by the corresponding singular values, with the 3 leading right singular vectors, scaled by the corresponding singular values. A scatterplot is then obtained for every pair of these 6 components. Since each dimension appears to be approximately related to every other dimension via a function, the presence of an underlying one-dimensional structure is assumed.

Figure 6
Figure 6: Scatterplot indicating that the responses and the one-dimensional raw-stress embeddings are linked via a simple linear regression model. From the six-dimensional estimates of the latent positions corresponding to the 100 Kenyon cell neurons forming a directed network in larval Drosophila, one-dimensional embeddings $\hat{z}_i$ are obtained by raw-stress minimization of the shortest path distances. The distance $y_i$ between the bundle entry point of the i-th neuron and the mushroom body neuropil is treated as the response corresponding to the i-th neuron. The scatterplot of $(y_i, \hat{z}_i)$, with fitted regression line $y = 4356.1 + 1296.6\, \hat{z}$, indicates a significant effect (p < 0.01 for $H_0 : b = 0$ vs $H_1 : b \ne 0$ in $y_i = a + b\, \hat{z}_i + \eta_i$).

Figure 7
Figure 7: Boxplot of squared errors of the four estimators of the regression slope parameter, where the intercept term of the regression model is zero. On each of 100 Monte Carlo samples, a random graph of n = 800 nodes is generated for which the latent position of the i-th node is given by $x_i = (t_i^2, 2t_i(1 - t_i), (1 - t_i)^2)$, where $t_i \sim_{iid} U[0, 1]$. The response $y_i$ is generated at the i-th node via the regression model $y_i = \beta t_i + \epsilon_i$, $\epsilon_i \sim_{iid} N(0, \sigma^2)$, where β = 5.0, σ = 0.1. The naive estimator is computed by plugging in the pre-images of the projections of the optimally rotated adjacency spectral estimates of the latent positions. In order to compute $\hat{\beta}_{adj,\sigma}$, we plug in the sum of sample variances obtained from another set of Monte Carlo samples in which the graphs are generated from the same model. We obtain $\sum_{i=1}^n \hat{\Gamma}_i$ by using the delta method on the asymptotic variance (see Theorem 2) of the optimally rotated adjacency spectral estimates of the latent positions, and thus compute $\hat{\beta}_{adj,\hat{\sigma}}$.