A General Deep Learning Framework for Network Reconstruction and Dynamics Learning

Many complex processes can be viewed as dynamical systems on networks. In real cases, however, only the behavior of the system is observed; the network structure and the dynamical rules are not. Recovering the latent network structure and dynamics from observed time series data is therefore an important task, because it may help us open the black box and even build a model of a complex system automatically. Although this problem hosts a wealth of potential applications in biology, earth science, epidemiology, and beyond, conventional methods have limitations. In this work, we introduce a new framework, the Gumbel Graph Network (GGN), a model-free, data-driven deep learning framework that reconstructs both the network connections and the dynamics on them. Our model consists of two jointly trained parts: a network generator that generates a discrete network with the Gumbel-softmax technique, and a dynamics learner that uses the generated network and one-step trajectory values to predict the states at future steps. We exhibit the universality of our framework on different kinds of time series data: with the same structure, our model can be trained to accurately recover the network structure and predict future states for continuous, discrete, and binary dynamics, and it outperforms competing network reconstruction methods.


Introduction
Many complex processes can be viewed as dynamical systems on an underlying network structure. A network together with the dynamics on it is a powerful approach for modeling a wide range of phenomena in real-world systems, where the elements are regarded as nodes and the interactions as edges [1][2][3]. One particular interest in the field of network science is the interplay between the network topology and its dynamics [4], and much attention has been paid to how collective dynamics on networks are determined by the topology of the graph. In real cases, however, only the behavior of the system, i.e., the time series of node states, is observed, while the network structure and the dynamical rules are unknown. Thus, the inverse problem, i.e., inferring the network topology and dynamical rules from the observed dynamics data, is even more significant. It may pave a new way to detect the internal structure of a system according to its behavior, and it can help us build a dynamical model of a complex system automatically from the observed behavior.
For example, inferring gene regulatory networks from expression data can help us identify the major genes and reveal the functional properties of genetic networks [5]; in the study of climate change, network reconstruction may help us reveal atmospheric teleconnection patterns and understand their underlying mechanisms [6]; and it can also find applications in reconstructing epidemic spreading processes in social networks, which is essential to identifying the source and preventing further spreading [7]. Furthermore, if not only the network structure but also the dynamics can be learned well for these systems, surrogate models of the original problems can be obtained, on which many experiments that are hard to implement on the original systems can be carried out. Another potential application is automated machine learning (AutoML) [8,9]. At present, the main research problem of Neural Architecture Search (NAS), a sub-area of AutoML, is to find the optimal neural network architecture in a search space via a search strategy; this is essentially a network reconstruction problem, in which the optimal neural network and the dynamical rules on it can be learned from the observed training samples as time series. In a word, reconstruction of networks and dynamical rules is pivotal to a wide span of applications.
A considerable number of methods have been proposed for reconstructing networks from time series data. One class is based on statistical inference, such as Granger causality [10,11] and correlation measurements [12][13][14]. These methods, however, can usually only discover functional connectivity and may fail to reveal structural connections [15]: they require that areas strongly correlated in function also be directly connected in structure, a requirement seldom satisfied in many real-world systems such as the brain [16] and climate systems [6]. Another class of methods was developed for reconstructing structural connections directly under certain assumptions. For example, methods such as driving response [17] or compressed sensing [7,[18][19][20] require either the functional form of the differential equations, specific target dynamics, or sparsity of the time series data. Although the model-free framework presented by Casadiego et al. [21] does not have these limitations, it can only be applied to dynamical systems with continuous variables, so that derivatives can be calculated. Thus, a general framework for reconstructing network topology and learning dynamics from time series data of various types, including continuous, discrete, and binary dynamics, is necessary.
Recently, deep learning has achieved success in many areas such as image classification [22] and speech recognition [23]. Can we apply this state-of-the-art technique to the network reconstruction problem? This is possible because the graph network framework [24] has enabled deep learning techniques to be applied to graph structures successfully by mapping graph-structured data onto Euclidean space with update and aggregation functions [25]. With a wealth of different avenues available, GNs can be tailored to perform various tasks, such as node or graph classification [26,27], graph generation [28][29][30][31], and spatio-temporal forecasting [32][33][34][35]. Recently, the topic of recovering interactions and predicting physical dynamics under given interaction networks has attracted much attention. The most used approach was introduced by Battaglia et al. [36]: representing particles as nodes and interactions as edges, then reconstructing the trajectories in an inference process on the given graph. However, most works in this field have focused on physical reasoning tasks, while few address the inverse problem of network science: revealing network topology from observed dynamics. Some related works [37,38] attempted to infer the implicit interactions of the system to aid state prediction from observation, but they did not interpret the implicit interactions as the network topology of the system, so the network reconstruction task remained unaddressed. To the best of our knowledge, only the NRI (Neural Relational Inference) model [39] works toward this goal. Nevertheless, it studies only a few continuous dynamics such as the spring model and the Kuramoto model, and discrete processes were never considered. In the rest of this article, we will therefore take NRI as one of our baselines and compare it against our own model.
In this work we introduce the Gumbel Graph Network (GGN), a model-free, data-driven method that can simultaneously reconstruct network topology and perform dynamics prediction from time series data of node states. It is able to attain high accuracy on both tasks for various dynamical systems, as well as multiple types of network topology. We first introduce our architecture, the Gumbel Graph Network, in Section 2, and then give a brief overview of our experiments on three typical dynamics in Section 3. In Section 4, we show our results. Finally, some concluding remarks and discussions are given in Section 5.

Problem Overview
The goal of our Gumbel Graph Network is to reconstruct the interaction graph and simulate the dynamics from the observational data of N interacting objects.
Typically, we assume that the system dynamics we are interested in can be described by a differential equation dX/dt = ψ(X^t, A) or a discrete iteration X^t = ψ(X^{t-1}, A), where X^t = (X^t_1, ..., X^t_N) denotes the states of the N objects at time t, and X^t_i is the state of object i. ψ is the dynamical function, and A is the adjacency matrix of an unweighted directed graph. However, ψ and A are unknown to us and must be inferred or reconstructed from a segment of time series data X = (X^t, ..., X^{t+P}), where P is the number of prediction steps.
Thus, our algorithm aims to learn the network structure (specifically, the adjacency matrix) and the dynamical model ψ simultaneously in an unsupervised way.

Framework
The general framework of our model is shown in Figure 1. The input of the model is the features of all nodes at time step t, and the output is their features over the following P steps. The model consists of two modules: a network generator and a dynamics learner. The generator's job is to produce an adjacency matrix, and the learner uses the generated adjacency matrix and X^t (the features of all nodes at time t) to predict X^{t+1}, ..., X^{t+P} (the features of all nodes from time t+1 to t+P).
The Network Generator module uses the Gumbel-softmax trick [40] to generate the adjacency matrix; details are explained in subsection 3. The goal of the Dynamics Learner is to map the features of all nodes from time t to time t+1 through the generated adjacency matrix. Similar to NRI's design [39], our GGN comprises four mapping processes between nodes and edges, which can be implemented with MLP, CNN, or RNN modules; in this article, we use MLPs. Details are further explained in subsection 4. To learn the complex non-linear process, we use a Graph Neural Network instead of a Graph Convolutional Network [41], since the latter does not model the nonlinear coupling between nodes, which is sometimes present (for example, in the Kuramoto model).

Figure 1: Basic structure of GGN. Our framework contains two main parts. First, the adjacency matrix is generated by the Network Generator via Gumbel-softmax sampling; then the adjacency matrix and X^t (the node states at time t) are fed to the Dynamics Learner to predict the node states over the future P time steps. The backpropagation process runs back through all the computations.
The temporal complexity and the spatial complexity are both O(N^2).

Network Generator
One of the difficulties in reconstructing a network from data is the discreteness of the graph, which prevents the use of the back-propagation technique, since back-propagation requires differentiable functions.
To overcome this problem, we apply the Gumbel-softmax trick to reconstruct the adjacency matrix of the network directly. This technique simulates sampling from a discrete distribution with a continuous function, such that the distribution produced by the simulated sampling process is identical to that of the real one. Because the simulated process is differentiable, it allows for back-propagation.
The network generator is a parameterized module that generates the adjacency matrix. Specifically, for a network of N nodes, it uses an N × N parameterized matrix to determine the N × N elements of the adjacency matrix A, with α_ij denoting the probability that A_ij takes value 1.
Specifically, the adjacency matrix is generated as

a_ij = exp((log α_ij + ξ_ij)/τ) / [exp((log α_ij + ξ_ij)/τ) + exp((log(1 − α_ij) + ξ'_ij)/τ)],

where ξ_ij and ξ'_ij are i.i.d. random numbers following the Gumbel distribution [42]. This calculation uses a continuous function with random noise to simulate the discontinuous sampling process, and the temperature parameter τ adjusts the sharpness of the output: when τ → 0, a_ij takes value 1 with probability α_ij and 0 with probability 1 − α_ij. Since the α_ij are all trainable parameters, they can be adjusted by the back-propagation algorithm. Thanks to the Gumbel-softmax trick, gradient information can be back-propagated through the whole computation graph even though the sampling of random numbers itself is non-differentiable.
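As a concrete illustration, the sampling step above can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's code: the function name, the logit parameterization (log_alpha as the edge logit, with the no-edge logit fixed at 0), and the two-class softmax layout are our assumptions.

```python
import numpy as np

def gumbel_softmax_adjacency(log_alpha, tau=1.0, rng=None):
    """Sample a soft adjacency matrix with the binary Gumbel-softmax trick.

    log_alpha: (N, N) array of logits; sigmoid(log_alpha) is the edge probability.
    tau: temperature; as tau -> 0 the samples approach hard 0/1 values.
    """
    rng = np.random.default_rng(rng)
    # Two independent Gumbel(0, 1) noise matrices, one per class (edge / no edge).
    g1 = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))
    g0 = -np.log(-np.log(rng.uniform(size=log_alpha.shape)))
    # Softmax over the two classes; the result is differentiable in log_alpha.
    e1 = np.exp((log_alpha + g1) / tau)
    e0 = np.exp(g0 / tau)
    return e1 / (e1 + e0)
```

In a deep learning framework, the soft samples would feed directly into the dynamics learner so that gradients flow back to log_alpha; frameworks such as PyTorch also provide a built-in gumbel_softmax for this purpose.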

Dynamics Learner
Learning with graph-structured data is a hot topic in deep learning research. Recently, graph networks (GNs) [24] have been widely investigated and have achieved compelling performance in node classification, link prediction, etc. In general, a GN takes the graph structure A and X^t, the features of all nodes at time t, as its input to learn a representation of each node. Here, the graph information used is the adjacency matrix constructed by the generator. The whole dynamics learner can be represented as a function

X^{t+1} = f(X^t, A),

where X^t is the state vector of all N nodes at time step t and A is the adjacency matrix constructed by the network generator. Similar to the work [39], we realize this function through four mappings operating in succession: Node to Edge, Edge to Edge, Edge to Node, and Node to Node. Details are explained in the caption of Figure 2.
Here the H are hidden layers, and the operation ⊗ is pair-wise concatenation, producing a matrix in which each element is a node pair; it is similar to the Kronecker product except that the internal multiplication is replaced with concatenation. The element-wise product * of the adjacency matrix and the result of the Edge to Edge mapping sets elements to 0 where there is no connection between two nodes, and a reduced-sum operation then aggregates the edge information to the nodes.

Figure 2: The structure of the Dynamics Learner. The Dynamics Learner takes the graph structure (here, the adjacency matrix) and the node states X as input to predict the node states at the next time step(s). Four main parts operate in succession to accomplish the whole process: Node to Edge, Edge to Edge, Edge to Node, and Node to Node.
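The four mappings can be sketched as follows in NumPy. This is a simplified illustration under our own assumptions: the helper mlp stands in for each learned mapping (the paper's actual learner trains a separate MLP per stage), and the single-hidden-layer parameter layout is ours.

```python
import numpy as np

def mlp(x, w, b):
    # one-layer perceptron with ReLU, standing in for a learned mapping
    return np.maximum(x @ w + b, 0.0)

def dynamics_learner_step(x, adj, params):
    """One forward pass: Node to Edge, Edge to Edge, Edge to Node, Node to Node."""
    n, d = x.shape
    # Node to Edge: pairwise concatenation of sender and receiver features -> (N, N, 2d)
    pairs = np.concatenate(
        [np.repeat(x[:, None, :], n, axis=1), np.repeat(x[None, :, :], n, axis=0)],
        axis=-1)
    # Edge to Edge: an MLP applied to every node pair
    h_edge = mlp(pairs, *params["e2e"])
    # Edge to Node: mask out non-edges with the adjacency matrix, then reduce-sum
    h_node = (adj[:, :, None] * h_edge).sum(axis=1)
    # Node to Node: map aggregated edge information back to a node-state update
    return mlp(h_node, *params["v2v"])
```

The pairwise concatenation realizes the ⊗ operation described above, and the masked sum realizes the element-wise product with A followed by the reduced-sum aggregation.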
Finally, we introduce the skip-connection from ResNet [43] to improve gradient flow through the network, which enhances the performance of the Dynamics Learner. This process can be represented (schematically) as

X^{t+1} = f_output([H, X^t]) + X^t,

where H is the output of the mappings above, X^t denotes the nodes' states at time t, f_output is another MLP, and [·, ·] denotes the concatenation operator. Note that this concatenation, as well as the skip-connection trick, is optional; we use these methods only in the experiments on the Kuramoto model. To make multi-step predictions, we feed the output states back in and reiterate until we obtain the prediction sequence X_predict = (X^1_predict, ..., X^T_predict). We then back-propagate the loss between the model prediction and the ground truth.

Training
Having introduced all the components, we now present the training process in the algorithm below. In the training process, we feed the one-step trajectory values X^t as input, and their succeeding states (X^{t+1}, ..., X^{t+P}) as the targets.
The dynamics learner and the network generator are alternately optimized in each epoch. We first optimize the dynamics learner for S_d rounds with the network generator fixed, back-propagating the loss to the dynamics learner in each round.
Then the network generator is trained with the same loss function for S_n rounds; in each round, the trained dynamics learner makes predictions with the newly generated adjacency matrix. As for the loss function, when the time series to be predicted is a discrete sequence over finitely many symbols, the cross-entropy objective is adopted; otherwise, the mean square error is used. Note that instead of training the dynamics learner and the network generator simultaneously, we train them for different numbers of rounds per epoch. The main reason behind this training scheme is the observation that dynamics and network structures evolve at different paces in real systems: networks usually change more slowly than dynamics, while small changes in network structure may lead to dramatic changes in node dynamics. By alternating the training processes with different numbers of rounds per epoch, we can adjust both learning processes to appropriate paces. In practice, S_d and S_n vary case by case. Although we chose them mainly through hyper-parameter tuning, a general observation is that the more complex the dynamics, the larger the required S_d. For example, for the Boolean Network model mentioned below, which exhibits binary dynamics, S_d is 10; while for the Kuramoto model, which is highly nonlinear, S_d needs to be around 30 to achieve a good result.
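The alternating schedule described above can be sketched generically. The callables dyn_step and gen_step are hypothetical stand-ins for one optimizer update of the dynamics learner and the network generator, respectively; they are our illustrative abstraction, not the paper's interface.

```python
def train_alternating(dyn_step, gen_step, epochs, s_d, s_n):
    """Alternate optimization: S_d dynamics-learner updates, then S_n
    generator updates, per epoch. Each callable returns its loss value."""
    history = []
    for _ in range(epochs):
        # phase 1: update the dynamics learner with the generator fixed
        for _ in range(s_d):
            history.append(("dyn", dyn_step()))
        # phase 2: update the generator with the trained dynamics learner
        for _ in range(s_n):
            history.append(("gen", gen_step()))
    return history
```

In a real implementation, dyn_step and gen_step would each sample an adjacency matrix, run the dynamics learner forward, compute the loss, and step only their own module's optimizer.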

An Example
To show how GGN works and how accurate it is, we first use a 10-body mass-spring interaction system as an example. Suppose that in a two-dimensional plane there are 10 masses linked to each other by springs, with a connection density of 0.2. Given initial positions and velocities, the masses move according to the spring dynamics. We use the positions and velocities of each particle generated by simulation to reconstruct their connections and predict their future positions.
In this experiment, we set S_n = 5 and S_d = 50, and use 5k training samples, 1k validation samples, and 1k test samples, each containing a 10-step trajectory of every mass. Each sample uses one initial condition. The results show that GGN can reconstruct the adjacency matrix with 90% accuracy, and can predict the next-step positions with a fairly small error of 2.97e−5. Figure 3 visualizes the trajectories of the simulation model and the predictions.
It can be seen that the GGN model can reconstruct this many-body problem in the two-dimensional plane with high accuracy, and that it can accurately predict future node states.

Experiments on Simulated Models
To systematically test the power of GGN, we evaluated it on three types of simulated models: Boolean Networks [44], the Kuramoto model [45], and Coupled Map Lattices [46,47], which exhibit binary, continuous, and discrete trajectories, respectively. In all three settings, we train our model to learn the dynamics and to reconstruct the interactions between elements, i.e., the adjacency matrices.

Boolean Network
The Boolean Network is a widely studied model that is often used to model gene regulatory networks. In a Boolean Network, every variable takes a value of 0 or 1, and a Boolean function is assigned to each node. The function takes the states of the node's neighbors as inputs and returns a binary value determining the state of the current node.
In simulation, we set the structure of the network as a directed graph with the in-degree of each node equal to K; different values of K determine whether the network evolves chaotically or non-chaotically. All nodes follow the same randomly generated table of dynamical rules. The training data we generated contains 5k pairs of state transition sequences; we also simulated a 1k validation set and a 1k test set.
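A synchronous Boolean network update with a shared rule table, as used in these simulations, might look like the following NumPy sketch. The array encoding of the rule table (a lookup table indexed by the binary pattern of a node's K inputs) is our assumption about the implementation.

```python
import numpy as np

def boolean_network_step(state, neighbors, rule_table):
    """One synchronous update of a Boolean network where every node has K inputs.

    state: (N,) array of 0/1 values.
    neighbors: (N, K) array; row i lists the K input nodes of node i.
    rule_table: (2**K,) array of 0/1 values, shared by all nodes, indexed by
        the binary pattern of a node's inputs.
    """
    k = neighbors.shape[1]
    # encode each node's K input bits as an integer index into the rule table
    idx = (state[neighbors] * (2 ** np.arange(k))).sum(axis=1)
    return rule_table[idx]
```

Iterating this map from random initial states produces the state transition pairs used as training data.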

Kuramoto Model
The Kuramoto model [45] is a nonlinear system of phase-coupled oscillators, often used to describe synchronization. Specifically, we study the system

dφ_i/dt = ω_i + k Σ_j A_ij sin(φ_j − φ_i),

where the ω_i are intrinsic frequencies sampled from a given distribution g(ω), here a uniform distribution on [1, 10); k is the coupling strength; and A_ij ∈ {0, 1} are the elements of the N × N adjacency matrix (for the undirected random networks we study, A_ij = A_ji). The Kuramoto network has two types of dynamics, synchronized and non-synchronized. According to studies by Restrepo et al. [48], the transition from coherence to incoherence is captured by a critical coupling strength k_c = k_0/λ, where k_0 = 2/(π g(5.5)) in our case and λ is the largest eigenvalue of the adjacency matrix. The network synchronizes if k > k_c, and otherwise fails to synchronize. We simulate and study both coherent and incoherent cases.
For simulation, we solve the 1D differential equation with the 4th-order Runge-Kutta method with a step size of 0.01. Our training set includes 5k samples, the validation set 1k, and the test set 1k; each sample covers dφ_i/dt and sin(φ_i) over 10 time steps.
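The simulation step can be sketched as follows, assuming the standard Kuramoto coupling written above; the function names are ours.

```python
import numpy as np

def kuramoto_rhs(phi, omega, k, adj):
    # dphi_i/dt = omega_i + k * sum_j A_ij sin(phi_j - phi_i)
    return omega + k * (adj * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)

def rk4_step(phi, omega, k, adj, dt=0.01):
    """Classic 4th-order Runge-Kutta step, matching the integrator used here."""
    f = lambda p: kuramoto_rhs(p, omega, k, adj)
    k1 = f(phi)
    k2 = f(phi + 0.5 * dt * k1)
    k3 = f(phi + 0.5 * dt * k2)
    k4 = f(phi + dt * k3)
    return phi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Repeated calls to rk4_step generate the phase trajectories, from which dφ_i/dt and sin(φ_i) are recorded as the training features.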

Coupled Map Lattice
Coupled map lattices represent a dynamical model with discrete time and continuous state variables [46,47]; they are widely used to study the chaotic dynamics of spatially extended systems. The model is originally defined on a chain with a periodic boundary condition, but can easily be extended to any type of topology:

x_{t+1}(i) = (1 − s) f(x_t(i)) + (s / deg(i)) Σ_{j ∈ N(i)} f(x_t(j)),

where s is the coupling constant, deg(i) is the degree of node i, and N(i) is the set of neighbors of node i. We choose the logistic map f(x) = r x (1 − x). We simulated N ∈ {10, 30} coupled map lattices on random regular graphs, with initial states x_0(i) sampled uniformly from [0, 1]. Notice that when the coupling constant is set to s = 0, the system reduces to N independent logistic maps. The training set again includes 5k samples, with a 1k validation set and a 1k test set; each sample covers x_i over 10 time steps.
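The CML update on a general graph can be sketched as below, assuming the standard degree-averaged coupling form with the logistic map; the function names are ours.

```python
import numpy as np

def logistic(x, r):
    # the logistic map f(x) = r * x * (1 - x)
    return r * x * (1.0 - x)

def cml_step(x, adj, s, r):
    """One step of a coupled map lattice on an arbitrary graph:
    x_{t+1}(i) = (1 - s) f(x_t(i)) + (s / deg(i)) * sum over neighbors j of f(x_t(j))
    """
    fx = logistic(x, r)
    deg = adj.sum(axis=1)  # assumes every node has at least one neighbor
    return (1.0 - s) * fx + s * (adj @ fx) / deg
```

Setting s = 0 recovers N independent logistic maps, matching the remark above.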

Results
In each experiment listed below, we set the hyper-parameters S_n and S_d of the Boolean Network model to 20 and 10, respectively, while in the Coupled Map Lattice and Kuramoto models they are 5 and 30. In the Coupled Map Lattice and Kuramoto models, the number of prediction steps P is 9, meaning the current state is used to predict the node states of the next 9 time steps, while in the Boolean Network it is set to 1. In all experiments, we set the hidden size in all MLP networks of the GGN's dynamics learner module to 256. All presented results are mean values over five repeated experiments. The horizontal lines "-" in the tables indicate that the amount of data exceeds the model's processing limit: the model becomes so unstable that outputs may appear as "nan" during training.
We compare our model with the following baseline algorithms:

• LSTM (Long Short-Term Memory network) is a well-known recurrent neural network that has been shown to be very suitable for sequence prediction problems. To do network reconstruction with LSTM, previous work [39] used a thresholded correlation matrix to represent the adjacency matrix. According to our experiments, however, this method only yields all-zero or all-one matrices and therefore cannot serve as a satisfactory way of deriving adjacency matrices. Hence, we use LSTM only for node state prediction and ignore its network reconstruction accuracy.

• NRI (Neural Relational Inference) is able to reconstruct the underlying network structure and predict the node states at future time steps simultaneously by observing the node state sequence. We compare our model against it on both tasks. We use settings similar to those in Kipf's original paper [39]: all our experiments use MLP decoders; the Kuramoto model uses a CNN encoder and the other models an MLP encoder.

We use the following indicators to evaluate the results of the experiments:

• TPR (true positive rate) measures the proportion of positive instances that are correctly identified. We consider a 1 in the adjacency matrix as a positive element and a 0 as a negative one.

• FPR (false positive rate) computes the proportion of negative instances that are incorrectly identified in the generated adjacency matrix.

• MSE (mean square error) measures the average of the squared differences between the estimated values and the data. The MSE shown below is the average mean square error over the next P time steps.

• ACC(net) is the proportion of correctly identified elements of the adjacency matrix.

• ACC(dyn): in our experiment on the Boolean Network, we use ACC(dyn) to measure the proportion of nodes whose states are predicted accurately at the next time step.
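The reconstruction indicators can be computed directly from the true and predicted binary adjacency matrices; the following is a straightforward sketch (function name ours).

```python
import numpy as np

def reconstruction_metrics(a_true, a_pred):
    """TPR, FPR, and ACC(net) for a predicted binary adjacency matrix."""
    a_true, a_pred = np.asarray(a_true), np.asarray(a_pred)
    tp = np.sum((a_true == 1) & (a_pred == 1))  # correctly identified edges
    fp = np.sum((a_true == 0) & (a_pred == 1))  # spurious edges
    tpr = tp / max(np.sum(a_true == 1), 1)
    fpr = fp / max(np.sum(a_true == 0), 1)
    acc = np.mean(a_true == a_pred)             # ACC(net)
    return tpr, fpr, acc
```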

Experiments with Different Dynamics
In our experiments, we set the network topology of the Boolean Network as a directed graph with the in-degree of each node being k, where k determines whether the system is chaotic or not. For all systems, we set k = 2 for the non-chaotic cases; for systems with 10, 30, and 100 nodes, k is set to 7, 5, and 4, respectively, to obtain chaotic dynamics. As shown in Table 1, the GGN model recovers the ground-truth interaction graph with an accuracy significantly higher than the competing method, and the recovery rate in the non-chaotic regime is better than in the chaotic regime.

We present our results on the coupled map lattice model in Table 2. In these experiments, the network topology is a random 4-regular graph, and we fix the coupling constant at s = 0.2. Because r ≈ 3.56995 is the onset of chaos in the logistic map, we chose r = 3.5 and r = 3.6 to represent non-chaotic and chaotic dynamics, respectively. For a random 4-regular graph with 10 nodes, our GGN model obtains approximately 100% accuracy on the network reconstruction task. For a system with 30 nodes, it is still able to achieve high accuracy, and the performance on non-chaotic dynamics is better than that on chaotic dynamics.

In the experiment on the Kuramoto model, we used Erdos-Renyi random graphs with a connection probability of 0.5. As the onset of synchronization is at k = k_c (in our cases, k_c = 1.20 for the 10-node system and k_c = 0.41 for the 30-node system), we chose k = 1.1 k_c and k = 0.9 k_c to represent coherent and incoherent dynamics, respectively. Here we used data of only two dimensions (speed and amplitude), as opposed to the four dimensions in NRI's original setting (speed, phase, amplitude, and intrinsic frequency), so the performance of the NRI model here is lower than that presented in Kipf's original paper [39]. As with the Boolean Network and CML, our GGN model attains better accuracy in the coherent cases, which are more regular than the incoherent ones.
To sum up, the above experiments clearly demonstrate the competence and generality of our GGN model. As shown in the tables, GGN achieves high accuracy on all three network simulations, and its performance remains good as the number of nodes increases and the system transitions from non-chaotic to chaotic. Although our model achieves relatively lower accuracy in chaotic cases, and it is not perfectly stable at the current data size (in the CML and Kuramoto experiments, there is a 1/5 chance that performance decreases), the overall results are satisfactory.

Reconstruction Accuracy with Network Structure
In this section, we used a 100-node Boolean Network with the Voter dynamics [49] (for a node with degree k and m neighbours in state 1, at the next time step it is in state 1 with probability m/k and in state 0 with probability (k − m)/k) to study how network structure affects the network reconstruction performance of our GGN model. Specifically, we studied WS networks and BA networks, and examined how the reconstruction accuracy changes under different network parameters. We also experimented with two different data sizes, 500 and 5000 pairs of state transition sequences, to see how network structure affects the amount of data needed.
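One step of the Voter dynamics described above can be sketched as follows (function name ours; the probability of state 1 is the fraction of neighbours currently in state 1).

```python
import numpy as np

def voter_step(state, adj, rng=None):
    """One step of the Voter dynamics: a node with m of its k neighbours in
    state 1 moves to state 1 with probability m / k."""
    rng = np.random.default_rng(rng)
    deg = adj.sum(axis=1)          # assumes every node has at least one neighbour
    p_one = (adj @ state) / deg    # fraction of neighbours in state 1
    return (rng.uniform(size=state.shape) < p_one).astype(int)
```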
In the first experiment, we studied WS networks with different rewiring probabilities p. We note that the reconstruction accuracy declines slowly for p ∈ [10^−4, 10^−2], but drops sharply when p is larger than 10^−2. As the average distance of the network drops quickly before 10^−2 while our reconstruction accuracy remains roughly the same, the reconstruction accuracy appears insensitive to it. On the other hand, the clustering coefficient of the network drops quickly when p is larger than 10^−2, while declining slowly when p is smaller [50], which corresponds with our accuracy curves. We may therefore conclude that the reconstruction accuracy is directly affected by the clustering coefficient of the network. Moreover, as the data size increases, the performance improves significantly across all values of p, and the slope is greatly reduced; so increasing the data size can effectively mitigate the problem brought by an increasing rewiring probability.
In the second experiment, we studied WS networks with different numbers of neighbours. Here the situation is much simpler: as the number of neighbours increases, the complexity of the network also increases, which in turn makes learning more difficult. The amount of data needed therefore grows with the number of neighbours.
In the third experiment, we studied BA networks with different numbers of connections per node. The result is similar to the second experiment, but here increasing the data size has a smaller effect than in the WS networks. This is probably because in BA networks a few nodes can greatly affect the dynamics of the whole network, which makes the complexity even higher; therefore the need for data is greater for BA networks.

Reconstruction Accuracy with Data Size
In this section we study the dependency between the amount of data and the accuracy of reconstruction. We performed these experiments on the CML model with chaotic dynamics. As illustrated in Figure 6, the accuracy of reconstruction improves significantly when more data is fed to the model. We also noticed that an insufficient amount of data can lead to high deviation, meaning our method can either produce results with high accuracy or fail.

Conclusion
In this work we introduced GGN, a model-free, purely data-driven method that can simultaneously reconstruct network topology and perform dynamics prediction from time series data of node states. Without any prior knowledge of the network structure, it is able to complete both tasks with high accuracy. In a series of experiments, we demonstrated that GGN can be applied to a variety of dynamical systems, including continuous, discrete, and even binary ones, and we found that in most cases GGN reconstructs the network better from non-chaotic data. To further explore GGN's properties and probe its upper limit, we conducted experiments under different network topologies and different data volumes. The results show that the network reconstruction ability of our model is strongly correlated with the complexity of the dynamics and the clustering coefficient of the network. It is also demonstrated that increasing the data size can significantly improve GGN's network reconstruction performance, while a small data size can result in large deviation and unstable performance.

Figure 6: Accuracy of reconstruction versus the size of training data. The amount of training data ranges from about 10^2 to 10^3, with each sample evolving for T = 100 time steps. The results are obtained on the CML model with 10 nodes, with random 4-regular graph topology. We set s = 0.2 and r = 3.6 to generate chaotic dynamics.

However, we are well aware that the current work has some limitations. It currently handles only static graphs and Markovian dynamics. Besides, due to the limitation of our computational power, the maximum network size we can process is limited to 100 nodes. Several possible approaches may help us overcome these problems. First, if we parameterize the network generator in a dynamical way, that is, allow the generator parameters to change over time, we can break through the limitation of static graphs. Second, if we replace the MLP network with an RNN in the framework, learning of non-Markovian dynamics becomes possible. Third, to improve the scalability of our framework, node-by-node reconstruction of the network can be adopted to reduce the space complexity. Another way to address the size limitation is to use a graph convolutional network (GCN) to model the dynamics learner; GCNs have proved very useful in a large variety of fields, although our experiments suggest they cannot simulate some complex nonlinear processes well. In future work, we will further enhance the capacity of our model so as to break through these limitations.

Availability of data and material
The datasets generated and/or analysed during the current study are available in https://github.com/bnusss/GGN.

Competing interests
The authors declare that they have no competing interests.

Funding
The research is supported by the National Natural Science Foundation of China (NSFC) under grant number 61673070.

Authors' contributions
JZ conceived and designed the study. ZZ, YZ, SW, JL, RT and RX performed the experiments. ZZ, YZ, SW, JL, RT and RX wrote the paper. JZ reviewed and edited the manuscript. All authors read and approved the manuscript.

Acknowledgements
We thank professor Wenxu Wang and Qinghua Chen from School of Systems Science, Beijing Normal University for discussion and their generous help.