Research | Open Access

Classes of random walks on temporal networks with competing timescales

Applied Network Science, volume 4, Article number: 72 (2019)
Abstract
Random walks find applications in many areas of science and are at the heart of essential network analytic tools. When defined on temporal networks, even basic random walk models may exhibit a rich spectrum of behaviours, due to the coexistence of different timescales in the system. Here, we introduce random walks on general stochastic temporal networks allowing for lasting interactions, with up to three competing timescales. We then compare the mean resting time and stationary state of different models. We also discuss the accuracy of the mathematical analysis depending on the random walk model and the structure of the underlying network, and pay particular attention to the emergence of non-Markovian behaviour, even when all dynamical entities are governed by memoryless distributions.
Introduction
Diffusion on static networks is a well-known, extensively studied problem. Its archetype is the study of random walk processes on networks, which are essentially equivalent to Markov chains, and are used as a baseline model for the diffusion of items or ideas on networks (Masuda et al. 2017), but also as a tool to characterise certain aspects of their organisation such as the centrality of nodes (Brin and Page 1998; Langville and Meyer 2004; Lambiotte and Rosvall 2012) or the presence of communities (Rosvall et al. 2009; Delvenne et al. 2010). Random walks find applications in biology, particle physics, financial markets, and many other fields. This diversity contributes to the many existing variants of random walks, including Lévy flights (Klafter and Sokolov 2011), correlated walks (Renshaw and Henderson 1981), elephant walks (Schütz and Trimper 2004), random walks in heterogeneous media (Grebenkov and Tupikina 2018), in crowded conditions (Asllani et al. 2018), or even quantum walks (Kempe 2003) (see Klafter and Sokolov (2011); Hughes (1995) and the many references therein). In real-world scenarios, a core assumption of random walk models, i.e. that the network is a static entity, is often violated (Holme 2015; Masuda and Lambiotte 2016). Empirical evidence shows instead that the network should be regarded as a dynamical entity, and part of the research on random walks is devoted to their dynamics on temporal networks (Perra et al. 2012; Starnini et al. 2012; Petit et al. 2018). The distinction between the dynamics on the network and the dynamics of the network is not always clear, which has naturally led to a standard classification of the different types of random-walk processes (Masuda et al. 2017). According to this classification, there are two dominant facets of random walks on temporal networks.
Firstly, it is relevant for many dynamical systems on networks, including random walks, to distinguish between node-based and edge-based dynamics (Porter and Gleeson 2016). In a node-centric walk, a stochastic process occurring at the level of the node determines the duration before the next jump. At that moment, the walker moves to a new destination, selecting any of the outgoing edges. By contrast, in an edge-centric model, the links are the driving units. In other words, the links become available for transport, then vanish, according to their own stochastic process. The edge-centric model is also known under the intuitive denomination of fluid model.
Secondly, one draws a line between active and passive models. The walk is called active when the waiting time before the next jump is reset by each jump of the walker. Active walks are common models for animal trajectories. In passive models, on the other hand, the motion of the walker is constrained by the temporal patterns of (typically the edges of) the network. An example would be that of a person randomly exploring a public transportation network, taking every available ride.
In short, there are clocks, either on the nodes or on the edges (node-centric vs edge-centric models), and either the clocks are reset following each jump of the walker or they evolve in an independent manner (active vs passive models). However, this classification is too restrictive when it comes to scenarios where the walker has its own dynamics, decoupled from the transport layer represented by the network. More precisely, a first objective of this work is to formulate natural extensions of existing random walks, which arise when the network is such that the duration of the links between the nodes is not instantaneous (Gauvin et al. 2013; Sekara et al. 2016). This is in contrast with the majority of approaches, which assume that the network evolution can be modelled as a point process (Hoffmann et al. 2012). On top of that, the walker does not necessarily jump through an available link. Instead, it is constrained by its own waiting time after each jump. By doing so, we allow for the possibility to investigate the importance of a timescale associated with edge duration, which may appear in certain empirical scenarios, e.g. for phone calls versus text message communication. This importance is assessed by making a comparison with the timescale of the walker’s self-imposed waiting time. We will show that the diffusive process may effectively be a combination of active and passive diffusion, with clocks both on the edges (determining the network’s contribution to the overall dynamics) and on the nodes (determining the walker’s contribution). As a next step, we study the mean resting time, that is, the mean of the total waiting time on the nodes, with the motivation that this is an appropriate measure for the speed of spreading processes in the different models. Because random walks can be used to determine the most central nodes in a network, a final application will be the analysis of the rankings of the nodes resulting from the different models.
Overall, our main message is that diffusion on temporal networks is a question of timescales, and that simplified existing models may emerge and provide accurate predictions in certain regimes when some processes are neglected. But we also show that in other regimes, the models that go beyond the usual binary classification (active or passive, node-centric or edge-centric) are relevant. The interaction between dynamical parameters (which determine the waiting times associated with the walker and the edges) and topology parameters (such as vertex degrees) induces nontrivial effects on, for instance, the steady state. Our results also show that the temporality of the underlying network may lead to a loss of Markovianity, in other words the emergence of memory, for the dynamics of the walker. This non-Markovianity may reveal itself in the properties of the timings at which events take place or in trajectories that can thus only be approximated by first-order Markov processes.
Our approach is based on the generalised integro-differential master equation obtained for non-Poisson random walks on temporal networks (Hoffmann et al. 2012). Using this equation, it becomes apparent that we need to determine the distribution of the resting time on the nodes. By direct integration, we then obtain the mean resting time. We also compare the different models, and evaluate the dominating timescales in regimes of extreme values of the dynamical parameters. Finally, combining an asymptotic analysis of the master equation and the resting times leads to the steady state for the different models. The analysis up to that point does not take into account a possible memory effect arising from walker-network interaction. As stated above, this memory effect means that the trajectories followed by the walker cannot be described by a Markov chain because of correlations between the timings or directions of the jumps. This will be discussed based on the approach of Petit et al. (2018).
The structure of the paper reflects this approach. In “Classification of models with two or three timescales” section we introduce the models with lasting edges that complement the classical active or passive, node-centric or edge-centric models, which all have instantaneous links. The master equation from which the analysis is drawn follows in “Master equation” section. The mean resting time of the models and an analysis of competing timescales is the subject of “Mean resting times” section, followed by the steady states in “Steady state” section. “Memory through walker-network interaction” section addresses the memory effect due to walker-network interaction before we conclude in “Conclusion” section.
Classification of models with two or three timescales
The goal of this section is to present and to classify random walk models with up to three timescales: one associated with the walker, and two with the network. Let us first briefly review the three classical models of continuous-time random walks. Their respective sets of microscopic motion rules are described by the three panels in the left column of Fig. 1.

The first walk, labelled (Mod. 1), is an active node-centric model where the walker resets the clock of a node upon arrival on the node. One may think of the clock as being attached to the walker, and as obeying the walker’s dynamics, whereas the network is then static. The waiting time on a node corresponds to the random variable X_{w}. In the simplest case, X_{w} follows an exponential distribution with probability density function (PDF) ψ(t)=μe^{−μt}, and the sequence of samples of X_{w} follows a Poisson process. In general cases however, the distribution of X_{w} is not exponential and the sequence of durations is a renewal process. In this work, we will concentrate on the simpler, exponential case allowing us to bring out effects due to walker-network interaction. Summing up, the variable X_{w} represents the active, node-level feature in all models.
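These microscopic rules are straightforward to simulate. The sketch below is our own illustration, not code from the paper: the graph, the rate `mu` and all identifiers are choices made for this example. It draws exponential waiting times for the walker and uniform destinations among the outgoing edges.

```python
import random

def simulate_mod1(adj, start, n_jumps, mu, rng):
    """Mod. 1 (active node-centric): wait Exp(mu) on each node, then jump
    to a uniformly chosen out-neighbour; the clock resets at every jump."""
    t, node = 0.0, start
    trajectory = [(t, node)]
    for _ in range(n_jumps):
        t += rng.expovariate(mu)      # walker's clock X_w ~ Exp(mu)
        node = rng.choice(adj[node])  # unbiased choice among outgoing edges
        trajectory.append((t, node))
    return trajectory

# A small strongly connected directed graph (an assumption of this sketch).
adj = {0: [1, 2], 1: [2], 2: [0]}
rng = random.Random(42)
traj = simulate_mod1(adj, start=0, n_jumps=1000, mu=2.0, rng=rng)
```

The jump times form a Poisson process of rate μ, so the sequence of visited nodes is an ordinary Markov chain on the static graph.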

The second walk, labelled (Mod. 2), is the active edge-centric model. The walker accordingly resets the downtime of the edges leaving a node upon arrival on that node. This downtime is the random variable X_{d} and determines the period of unavailability of the edge.

The third walk, (Mod. 3), is again an edge-centric model, with a passive walker who simply follows edge activations.

The fourth combination arising from the two classification criteria, the passive node-centric walk, appears to be irrelevant for practical applications (Masuda et al. 2017). We won’t discuss it further.
Those three models possess a single timescale, associated with the clock of the walker or of the edges. We now introduce three natural extensions, each featuring an active walker carrying its own clock. This clock is reset after each jump, and determines the walker’s self-imposed waiting time. We consider different behaviours for the transport layer, which will define the edge-level dynamics of the models.

In a first variant, called model 4, the network functions independently of the walker. Hence, the edge-level dynamics is passive. In this model, each edge cycles through periods of availability followed by periods of unavailability. The duration of an uptime is the random variable X_{u}, and the duration of a downtime is the already-introduced random variable X_{d}. Again, we assume both are exponentially distributed, with respective PDFs U(t)=ηe^{−ηt} and D(t)=λe^{−λt}. As depicted in Fig. 1, when the walker is ready to jump at the end of X_{w}, there are two possibilities. If one or more outgoing edges are available, a new destination node is selected randomly without bias. The total waiting time on the node is thus X_{w}. If no outgoing edge is available, then the walker is trapped on the node and will wait until the next activation of an edge. Observe that a similar^{Footnote 1} behaviour for the trapped walker was assumed, and some consequences analysed, in Petit et al. (2018). This model has three timescales^{Footnote 2}, one for the walker (X_{w}) and two for the network (X_{u} and X_{d}).
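Under the exponential assumptions, one step of this model can be sketched as follows. This is our own illustration with arbitrary parameter values; it samples the edge states from their stationary law at the instant the walker is ready, an approximation that is justified later in the paper for networks without cycles, and compares the empirical mean resting time with its expected value.

```python
import random

def resting_time_mod4(k, mu, lam, eta, rng):
    """One resting time in a model-4 step (sketch): the walker waits
    X_w ~ Exp(mu); if every one of the k outgoing edges is down, it is
    trapped until the first of the k memoryless downtimes ends."""
    t = rng.expovariate(mu)               # walker's own wait X_w
    r = lam / (lam + eta)                 # stationary availability of an edge
    if not any(rng.random() < r for _ in range(k)):
        # trapped: residual downtimes are Exp(lam) by memorylessness
        t += min(rng.expovariate(lam) for _ in range(k))
    return t

rng = random.Random(7)
k, mu, lam, eta = 3, 1.0, 2.0, 2.0        # eta = lam, so r = 1/2
n = 200_000
mean = sum(resting_time_mod4(k, mu, lam, eta, rng) for _ in range(n)) / n
# mean approaches 1/mu plus a trapped-case correction (probability (1-r)**k)
```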

In a second variant, called model 5, the walker this time plays an active role with respect to the network. At the beginning of the walker’s waiting time X_{w}, the state of each outgoing edge is reset to unavailable, for a duration X_{d}. Then follows a period of availability X_{u}, then a downtime, and so on, until the walker is ready and an edge is available. This model is active at both node and edge levels, and also has three timescales.

The third variant, model 6, is similar to model 5 in the sense that the walker also actively resets the transport layer, but this reset occurs at the end of the walker’s waiting time X_{w}. Therefore, the walker is always trapped for a duration X_{d}, and the dynamics only has two timescales, one for the walker and one for the downtimes.
The motion rules for three versions of walks on networks with edge activations of finite duration have now been presented. The mathematical modelling is expectedly more involved than in the single-timescale models 1, 2 or 3. However, the models need not always be considered in their full complexity. Indeed, a close look at Fig. 1 shows that, in limiting regimes, the models approach the classical node- or edge-centric walks. This qualitative observation is presented in Fig. 2. The analysis of “Mean resting times” section will provide a quantitative justification of this figure.
We proceed with the analysis based on the master equations in the next section.
Master equation
Starting from the microscopic mobility rules defining the model, a master equation for the evolution of the walker’s position across the network is derived. This position is encoded in the row vector
\(n(t) = \left( n_{1}(t), n_{2}(t), \ldots, n_{N}(t) \right),\)
describing the residence probabilities on the nodes. Here, N is the number of nodes of the network and each component n_{i}(t) gives the probability that the walker is located on node i at time t. Once the equation is obtained, the analysis of various properties of the walk becomes accessible.
The master equation differs based on whether there is memory in the process or not, justifying the following distinction.
Markovian vs nonMarkovian walks
If the process is Markovian, the trajectory starting from a given time only depends on the state of the system at that time, and the equation simplifies, typically taking the form of a differential equation. In that case, the distribution of the time that a walker still remains on a node is not influenced by the time spent so far on the node. Otherwise, in the non-Markovian case, the master equation has a more complex but still useful form. Two different types of non-Markovianity may emerge in the system: a non-Markovianity in time, when memory affects the timings of future jumps, or in trajectory, when memory impacts the choice of the next destination node. Non-Markovianity in time means that the number of jumps of the process deviates from a Poisson process. For instance, this happens in the node-centric walk of model 1 when the waiting time before activation is non-exponential. Non-Markovianity in trajectory means that the paths followed by the walker cannot be described by a Markov chain because of correlations between edge activations along the trajectory, even if edges are statistically independent.
Markovian random walks
The active node-centric walk (Mod. 1) is arguably the simplest to study among continuous-time random walks. It is Markovian when the waiting time on the node is exponentially distributed. To fix notations, consider a directed, strongly connected^{Footnote 3} graph \(\mathcal G = (V,E)\) on N nodes, where E is the set of allowed directed edges between pairs of nodes of the set V. Let V_{j} be the set of nodes reachable from node j in \(\mathcal {G}\). When the outgoing edges are chosen uniformly by the walker, the walk on \(\mathcal G\) is governed by the well-known master equation for the residence probabilities (Angstmann et al. 2013):
\(\dot n_{i}(t) = \sum_{j=1}^{N} \mu_{j}\, \frac{A_{ji}}{k_{j}}\, n_{j}(t) - \mu_{i}\, n_{i}(t), \qquad (1)\)
where A_{ij} is the adjacency matrix (A_{ij}=1 if i→j is an edge of E, and is zero otherwise), k_{j}=|V_{j}| is the cardinality of V_{j}, namely, the positive out-degree of node j, and where μ_{j} is the exponential rate in the node. If the rate is the same on all nodes, μ_{j}=μ for all j, and after scaling of the time variable t↦μt, Eq. 1 reads
\(\dot n = n \left( D^{-1} A - I \right), \qquad (2)\)
where D denotes the diagonal matrix of the out-degrees. The matrix L^{rw}=D^{−1}A−I is the so-called random-walk Laplacian of the graph.
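The stationary state of Eq. 2 can be checked numerically. The following sketch (our own illustration, on an assumed small symmetric, hence balanced, graph) verifies that a distribution proportional to degree is a left null vector of L^{rw}.

```python
import numpy as np

# Symmetric (hence balanced) adjacency matrix of a small connected graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
k = A.sum(axis=1)                       # out-degrees
L_rw = np.diag(1.0 / k) @ A - np.eye(4)

n = k / k.sum()                         # candidate steady state, n_j ∝ k_j
residual = n @ L_rw                     # row-vector convention: dn/dt = n L_rw
```

The residual vanishes because, on a balanced graph, the in-degree of each node equals its out-degree.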
Observe that an alternative modelling of the process of model 1 is to consider that the walker is always ready to jump, and that the edges activate as in the edge-centric walk (Mod. 2). To illustrate this fact, let us start from model 1 and assume that the rates on the nodes are proportional to degree: μ_{j}=λk_{j}, so that the rate of jump across each edge of the graph is the same, independently of the degrees of the nodes. The resting time on node j is the random variable X^{(j)}, which is here written as
\(X^{(j)} = \mathbb{X}_{d}^{(j)} := \min \left( X_{d,1}, \ldots, X_{d,k_{j}} \right), \qquad (3)\)
namely the minimum of k_{j} exponential distributions with rate λ, which again follows an exponential distribution with rate k_{j}λ. Eq. 1 becomes
\(\dot n_{i} = \lambda \left( \sum_{j=1}^{N} A_{ji}\, n_{j} - k_{i}\, n_{i} \right). \qquad (4)\)
In matrix form, recalling that n is a row vector, we have \(\dot n = \lambda\, n(A-D)\), or again, after scaling of time with respect to 1/λ,
\(\dot n = n L, \qquad (5)\)
where this time L=A−D is known as the graph or combinatorial Laplacian. Equation 5 is well known to correspond to model 2, where λ would be the rate of the exponential distribution governing the time before activation of each edge. This shows that the same process, in terms of trajectories, can be seen as happening atop a static graph or a temporal graph. The walk on a static graph generates a temporal network where jumps across edges are considered as edge activations. Note again that this freedom in the choice of modelling exists thanks to the restrictive framework of exponential distributions.
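A companion check to the node-centric case (our own sketch, under the same assumed symmetric graph): for the combinatorial Laplacian L=A−D of Eq. 5, the uniform distribution is stationary on a balanced graph, i.e. the edge-centric walk spreads walkers evenly.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
L = A - np.diag(A.sum(axis=1))          # combinatorial Laplacian A - D

n_uniform = np.full(4, 0.25)            # uniform row vector, n_j = 1/N
residual = n_uniform @ L                # vanishes on a balanced graph
```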
Remark 1.
(On the time-dependent Laplacian) In previous works dealing with synchronisation (Stilwell et al. 2006; Belykh et al. 2004), desynchronisation (Lucas et al. 2018), instabilities in reaction-diffusion systems (Petit et al. 2017) and other works on dynamical systems on time-varying networks (Zanette and Mikhailov 2004; Holme 2015), a time-dependent Laplacian L(t) has replaced the usual graph Laplacian L in the equations. Here we want to comment on (5) in this time-varying setting, that is,
\(\dot n(t) = n(t)\, L(t). \qquad (6)\)
In our framework, we consider this equation as associated to a passive edge-centric walk on switched networks, where the underlying network of possible links varies in time. The rewiring occurs for several edges simultaneously at discrete time steps, as opposed to the continuous-time process we have considered so far, where no two edges change states at the same time. The adjacency matrix is A(t)=A^{[χ(t)]}, where \(\chi : \mathbb { R}^{+} \rightarrow I \subset \mathbb { N}\) selects one of the possible graph configurations in the set {A^{[i]}}_{i∈I}. The definition of the Laplacian is then L(t)=A(t)−D(t), where D(t) contains the time-dependent degrees on its diagonal. Note that for simplicity, we have again assumed that the rate λ is the same for all edges of all configurations of the underlying graph, allowing us to use the time-scaled Eq. 5 between any two switching times. This remark applies in the context of discrete switching, but can be extended to a continuously varying, weighted adjacency matrix, where L(t) is no longer a piecewise constant matrix function.
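As a toy illustration of this remark (our own sketch; the two symmetric configurations and the switching schedule are assumptions), the walk driven by a piecewise-constant L(t) can be integrated exactly by composing matrix exponentials between switching times:

```python
import numpy as np

def laplacian(A):
    return A - np.diag(A.sum(axis=1))

def expm_sym(M, tau):
    """Matrix exponential exp(M * tau) for a symmetric matrix M,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w * tau)) @ V.T

# Two assumed symmetric configurations of the switched network.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
A2 = np.array([[0, 0, 1],
               [0, 0, 1],
               [1, 1, 0]], dtype=float)

n = np.array([1.0, 0.0, 0.0])           # walker starts on node 0
for A, tau in [(A1, 0.5), (A2, 0.3), (A1, 0.7)]:
    n = n @ expm_sym(laplacian(A), tau)  # n(t + tau) = n(t) exp(L tau)
```

Probability is conserved at every switch because each L has zero row sums.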
Non-Markovian random walks
In general, when either the waiting times on the nodes or the inter-activation times on the edges are no longer exponentially distributed, the Markov property is lost^{Footnote 4}, and the differential Eqs. (2) and (5) are replaced by generalised integro-differential versions. Such generalisations have been developed from a node-centric perspective for instance in Angstmann et al. (2013), and for the edge-centric approach in Hoffmann et al. (2012), arriving at essentially the same form of equation, the only difference being the underlying mechanism regulating the resting times on the nodes. This lets us choose between the two approaches. We will mainly follow Hoffmann et al. (2012), in which the generalised master equation valid for arbitrary distributions of X_{d} in walk (Mod. 2) is derived. This master equation will be an important ingredient in the next two sections. It is therefore worth introducing the equation, starting with some preliminary notations.
The building block of the models is the resting time on a node j, namely the duration between the arrival time on the node and a jump to any other node. This duration is a random variable X^{(j)} with PDF T_{∙j}(t). This resting time density (also known as transition density) satisfies the normalisation condition
\(\int_{0}^{\infty} T_{\bullet j}(t)\, dt = 1, \qquad (7)\)
meaning that a jump will eventually occur since the outdegree in the underlying graph is positive. The diagonal matrix of the resting time densities is D_{T}(t), so that the elements are given by [D_{T}(t)]_{ij}=T_{∙j}(t)δ_{ij}.
The resting time density is written as the sum \(T_{\bullet j} (t) = \sum _{i \in V_{j}} T_{ij}(t) \), where T_{ij}(t) refers to a jump across edge j→i. If A_{ji}=1, the integral
\(\int_{0}^{\infty} T_{ij}(t)\, dt \qquad (8)\)
is thus the probability that at the time of the jump, the walker located on node j selects node i as destination. Let the matrix function T̲(t) have entries T̲_{ij}(t)=T_{ij}(t). Finally, recall that the Laplace transform of the function f(t) is the map \(s \mapsto \int _{0}^{\infty } f(t)\, e^{-st}\, dt\), and is written \(\widehat {f}(s)\) or \(\mathcal L \left \{ f(t) \right \}\).
We are now in position to write the master equation. In the Laplace domain it reads
where \(\widehat K (s) = \frac {s\, \widehat D_{T} (s) }{1 - \widehat D_{T} (s) }\) is the memory kernel. The time-domain version of (9) is
This equation is more involved than in the Markovian case, and the analysis is better pursued in the s-domain. In particular, it will allow us to obtain a compact expression for the steady state of the walk in “Steady state” section.
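As a consistency check (our own sketch, with an arbitrary rate), one can verify that for an exponential resting-time density the scalar kernel reduces to a constant, so that the generalised equation collapses to the memoryless differential equation of the previous subsection:

```python
import numpy as np

mu = 1.7

def psi_hat(s):
    """Laplace transform of psi(t) = mu * exp(-mu t), in closed form."""
    return mu / (s + mu)

s = np.array([0.1, 1.0, 5.0, 20.0])
K = s * psi_hat(s) / (1.0 - psi_hat(s))   # scalar memory kernel
# K equals mu for every s: the kernel is concentrated at t = 0, hence no memory.
```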
At this point, it is worth observing that Eq. 10 is generally non-Markovian in time, but results in a sequence of visited nodes that is indeed captured by a Markov chain. This means that it is Markovian in trajectory. Therefore, it cannot be used without further assumptions for model 4, which is generally Markovian neither in time nor in trajectory. Indeed, we will see in “Memory through walker-network interaction” section that if there are cycles in the network, the next jump in model 4 along a cycle is conditioned on stochastic realisations in previous steps, hence affecting the choice of the next destination node.
The work ahead is now to compute the resting time densities T_{∙j}(t), from which the average time spent on a node for a given model follows directly, as shown in the next section.
Mean resting times
The mathematical expectation of the resting time X^{(j)} on node j is given by
\(\langle X^{(j)} \rangle = \int_{0}^{\infty} t\, T_{\bullet j}(t)\, dt,\)
and will also be referred to by 〈T_{∙j}〉. It is naturally called the mean resting time (or mean residence time) and is relevant in many scenarios, as it will for instance directly determine the relaxation time on tree-like structures^{Footnote 5}. We compute and interpret this quantity starting from the agent- and edge-level rules of the different models, to obtain a macroscopic interpretation. Our analysis is restricted to exponential densities for X_{w}, X_{u} and X_{d}, since this will allow us to shed light on the effect of having up to three timescales, and not on complications arising from otherwise possibly fat-tailed distributions for these three random variables. As a result, the two edge-centric models 2 and 3 generate statistically equivalent trajectories, and the analysis for (Mod. 2) holds for (Mod. 3).
Derivation of the mean resting times
In this section we handle the models by increasing level of complexity.
Models 1, 2 and 3. In the active node-centric walk one is allowed to write directly \(\langle T_{\bullet j} \rangle _{\mathrm {model \ 1}} = \frac {1}{\mu }\). When it comes to the active edge-centric walk (Mod. 2), where the instantaneous activation times follow a Poisson process, we have
\(T_{ij}(t) = \lambda e^{-\lambda t} \left( e^{-\lambda t} \right)^{k_{j}-1}.\)
The interpretation is that the edge j→i must activate after a time t, whereas all competing edges must remain unavailable at least up to that point. Performing the integration and multiplying by k_{j} gives \( T_{\bullet j}(t) = k_{j} \lambda\, e^{-k_{j} \lambda t},\) a result already found in Hoffmann et al. (2012). This is again an exponential distribution with rate k_{j}λ. It follows that
\(\langle T_{\bullet j} \rangle_{\mathrm{model\ 2}} = \frac{1}{k_{j}\lambda} = E\left(\mathbb{X}_{d}^{(j)}\right),\)
where \(\mathbb { X}_{d}^{(j)}\) was introduced by Eq. 3.
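This normalisation and mean are easily confirmed numerically. The sketch below (our own illustration, with arbitrary values for k_{j} and λ) integrates the density k_{j}λe^{−k_{j}λt} on a truncated grid with a hand-rolled trapezoidal rule:

```python
import numpy as np

def trap(y, dx):
    """Composite trapezoidal rule on a uniform grid."""
    return (y[:-1] + y[1:]).sum() * dx / 2.0

k, lam = 4, 0.5
t = np.linspace(0.0, 80.0, 400_001)    # truncation error ~ e^{-160}, negligible
T = k * lam * np.exp(-k * lam * t)     # resting-time density of Mod. 2

norm = trap(T, t[1] - t[0])            # should be 1
mean = trap(t * T, t[1] - t[0])        # should be 1/(k*lam) = 0.5
```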
Model 6. The walker residing on node j will jump along edge j→i at time t if all competing edges are unavailable at least up until then, that is, their period of unavailability will last for at least t−x, where x marks the time at which the walker is ready to jump. Moreover, edge j→i needs to activate exactly after the duration t−x. With this in mind, and when all distributions are exponential, it was shown in Petit et al. (2018) that:
\(T_{ij}(t) = \int_{0}^{t} \mu e^{-\mu x}\; \lambda e^{-\lambda (t-x)} \left( e^{-\lambda (t-x)} \right)^{k_{j}-1} dx \qquad (12)\)
\(= \mu \lambda\, e^{-k_{j} \lambda t} \int_{0}^{t} e^{(k_{j} \lambda - \mu)\, x}\, dx. \qquad (13)\)
Note that (12) is merely the convolution between the waiting time of the walker and the minimum of k_{j} independent downtimes for the edges, reflecting the fact that the process results in an addition of random variables. To proceed, we observe that the integral in (13) depends on whether μ=k_{j}λ or μ≠k_{j}λ. In the former case, the integral is equal to t, and multiplying (13) by k_{j} yields \( T_{\bullet j}(t) = \mu k_{j} \lambda\, t\, e^{-k_{j} \lambda t}.\) Hence, the mean resting time is
\(\langle T_{\bullet j} \rangle = \int_{0}^{\infty} \mu k_{j} \lambda\, t^{2}\, e^{-k_{j} \lambda t}\, dt. \qquad (14)\)
Recalling that the nth moment of an exponential distribution with rate λ is E(X^{n})=n!/λ^{n}, we have 〈T_{∙j}〉=2/μ=1/μ+1/(k_{j}λ). We will show that we get the same expression in the second case as well, i.e. when μ≠k_{j}λ. Indeed, (13) becomes
\(T_{ij}(t) = \frac{\mu \lambda}{k_{j} \lambda - \mu} \left( e^{-\mu t} - e^{-k_{j} \lambda t} \right). \qquad (15)\)
The mean resting time follows from (15):
\(\langle T_{\bullet j} \rangle_{\mathrm{model\ 6}} = k_{j}\, \frac{\mu \lambda}{k_{j}\lambda - \mu} \left( \frac{1}{\mu^{2}} - \frac{1}{(k_{j}\lambda)^{2}} \right) = \frac{1}{\mu} + \frac{1}{k_{j}\lambda}, \qquad (16)\)
or also, \( \langle T_{\bullet j} \rangle _{\mathrm {model \ 6}} = E(X_{w}) + E (\mathbb X^{(j)}_{d})\), which again justifies considering (Mod. 6) an additive model.
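The additive structure lends itself to a direct Monte-Carlo check. The sketch below is our own illustration with arbitrary parameter values: it samples a walker wait plus the minimum of k_{j} reset downtimes and compares the empirical mean with 1/μ + 1/(k_{j}λ).

```python
import random

rng = random.Random(123)
k, mu, lam = 3, 2.0, 1.5

def resting_time_mod6():
    """Mod. 6 resting time: own wait, then first of k reset downtimes."""
    x_w = rng.expovariate(mu)                          # walker's wait X_w
    x_d = min(rng.expovariate(lam) for _ in range(k))  # first reactivation
    return x_w + x_d

n = 200_000
mean = sum(resting_time_mod6() for _ in range(n)) / n
# theory: 1/mu + 1/(k*lam)
```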
Model 4. We have mentioned at the end of “Non-Markovian random walks” section that this model is generally non-Markovian in time, and also not Markovian in trajectories, a fact that will be further discussed in “Memory through walker-network interaction” section. However, note already that no memory will arise in the choice of the next destination node if there are no cycles in the network. Therefore, the following derivation assumes a directed acyclic graph (DAG), which restores Markovianity in the trajectories, and Eq. 9 is valid. Let us now determine the resting time density. In model 4, two possible scenarios face the walker ready to jump: either an edge is available (probability r), or an extra wait is needed before an outgoing edge becomes available (probability 1−r). We have \(r = \frac {\lambda }{\lambda + \eta }\) and \(1-r = \frac {\eta }{\lambda + \eta }\). It was shown in Petit et al. (2018) that T_{ij}(t) has two terms, such that the transition density from node j reads
The two terms reflect a weighted combination of models 1 and 6. The weight \((1-r)^{k_{j}}\) is the probability that all outgoing edges are unavailable at a random time. It follows that
\(\langle T_{\bullet j} \rangle_{\mathrm{model\ 4}} = \frac{1}{\mu} + (1-r)^{k_{j}}\, \frac{1}{k_{j}\lambda}.\)
Under this form, we see the model is conditionally (depending on r) additive.
Model 5. When the walker is ready to jump, the availability of network edges depends on the duration since the walker arrived on the node. This makes the analysis somewhat more involved. Assume the walker is ready after s time units. Let p^{∗}(s) be the probability that an edge is in the same state it was at time t=0, namely, unavailable. Let also q^{∗}(s)=1−p^{∗}(s) be the probability that the edge is available for transport. These two quantities were computed in Petit et al. (2018), by accounting for all possible on-off switches of the edge in the interval [0,s]. The resulting expression has a strikingly simple form when U(t) and D(t) have the same (exponential) rate η=λ, our working hypothesis in what follows:
\(p^{*}(s) = \frac{1 + e^{-2\lambda s}}{2}, \qquad q^{*}(s) = \frac{1 - e^{-2\lambda s}}{2}.\)
If the walker is ready after a short time s, the edge will probably still be down, p^{∗}(0)=1, while for large s, the state of the edge is up or down with equal probability, \({\lim }_{s \rightarrow \infty } p^{*}(s) = \frac 1 2 \).
So now, we have an expression similar to (17) except that r and 1−r are essentially replaced by the time-dependent q^{∗} and p^{∗}. Let us begin by first writing an expression for T_{ij}(t):
where we have set β_{m}=−μ+k_{j}λ−2mλ. The resting time density therefore reads:
The mean resting time follows as
Regrouping the terms, we get
Discussion
All models have a mean resting time that we cast under the form
\(\langle T_{\bullet j} \rangle = a_{\mathrm{model}}\, \frac{1}{\mu} + b_{\mathrm{model}}(k_{j}, \mu, \lambda)\, \frac{1}{k_{j}\lambda},\)
where a_{model}=1 for all models but (Mod. 2) for which it is 0, and b_{model}(k_{j},μ,λ) accounts for the probability that all outgoing edges are unavailable when the walker is ready to jump. Summing up the results of this section, we have
Recall that (28) was derived under the assumption that η=λ, for which \(r = \frac {1}{2}\) and thus \(b_{\mathrm {model \ 4}}(k_{j},\mu, \lambda) = \frac {1}{2^{k_{j}}}\). Using standard algebra, it is straightforward to check that
for all \(k_{j} \in \mathbb { N}_{0}\) and all positive reals μ and λ. The smaller this coefficient, the larger the expected number of jumps along the trajectories of the walk, all other parameters being chosen equal.
We want to compare the three models with nonzero b_{model}, since these are the ones where there is a dynamical walkernetwork interaction. To this end, let us define the ratios of mean resting times
These quantities depend only on the degree k_{j}, and on a new variable \(\xi := \frac {\lambda }{\mu }\). Indeed, we write
The above expressions are plotted in Fig. 3 for various values of the degree. The reduction in mean resting time for model 4 is very pronounced for small ξ, especially for the large degrees for which the relatively slow network timescales have less effect. With model 5 however, the reduction factor never goes below \(\frac {3}{4}\). In terms of convergence of the models, we observe that for all degrees, R_{1}≈1 for large ξ, and R_{2}≈1 for both large and small ξ. This behaviour for large ξ is a direct consequence of the convergence of the resting time PDF’s of models 4, 5 and 6 to that of model 1 when \(\xi ^{-1} = \frac {\mu }{\lambda } \rightarrow 0\). This is represented by the three blue dotted arrows in Fig. 4. On the other hand, the value of R_{2} for small ξ results from the convergence indicated by the two purple dash-dotted arrows of the figure. The other arrows further indicate the convergence between the PDF’s T_{ij}(t) of the different models in asymptotic regimes of the dynamical parameters μ,η,λ. These results can be verified by direct computation from the expressions for the densities obtained in this section. Obviously, convergence of the densities implies convergence of the expectations. For instance, consider the blue arrow from (Mod. 5) to (Mod. 1). In terms of mean resting time, we have that when \(\lambda \rightarrow \infty\) with \(\mu \in \mathbb{R}\) fixed, 〈T_{∙j}〉_{model 5}→E(X_{w}). If we instead let \(\mu \rightarrow \infty\) with \(\lambda \in \mathbb{R}\) fixed, the mean tends to \( \frac {1}{2^{k_{j}} k_{j} \lambda } \sum _{m = 0}^{k_{j}} \binom {k_{j}}{m} = \frac {1}{k_{j} \lambda } \), that is, \(E(\mathbb { X}_{d}^{(j)}) \) (purple arrow from (Mod. 5) to (Mod. 3)). In both cases, this is the expected outcome.
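Assuming, as the comparison above suggests, that R_{1} denotes the ratio of the model-4 to the model-6 mean resting times (with η=λ, so that the trapped-case weight is 2^{−k_{j}}), both means can be rewritten in terms of ξ=λ/μ and the two limits just described can be checked numerically. This is our own sketch; the closed form of R_{1} below follows from the means derived in this section under that assumption.

```python
def R1(xi, k):
    """Ratio <T>_mod4 / <T>_mod6 expressed in xi = lam/mu, under eta = lam:
    (1/mu + 2**-k/(k*lam)) / (1/mu + 1/(k*lam)) = (k*xi + 2**-k)/(k*xi + 1)."""
    return (k * xi + 2.0 ** (-k)) / (k * xi + 1.0)

small = {k: R1(1e-9, k) for k in (1, 2, 5, 10)}   # slow network: -> 2**-k
large = {k: R1(1e9, k) for k in (1, 2, 5, 10)}    # fast network: -> 1
```

The small-ξ limit 2^{−k} shows why the reduction is most pronounced for large degrees, while the large-ξ limit recovers the convergence of model 4 to model 1.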
Steady state
The steady state of the walk, \( n(\infty) := {\lim }_{t \rightarrow \infty } n(t)\), is a useful quantity, for instance in ranking applications. This motivates the content of this section. We obtain the steady state based on the mean resting time of the preceding section. Let D_{〈T〉} be the diagonal matrix containing the mean waiting times on the nodes, such that [D_{〈T〉}]_{ij}=〈T_{∙j}〉δ_{ij}. In Hoffmann et al. (2012), a small-s analysis of the generalised master Eq. 9 showed the steady state of the walk to be
\(n_{j}(\infty) \propto \langle T_{\bullet j} \rangle\, v_{j},\)
where v is the eigenvector associated to the unit eigenvalue of the effective transition matrix \(\mathbb {T}\). This matrix has elements
\(\mathbb{T}_{ij} = \int_{0}^{\infty} T_{ij}(t)\, dt = \frac{A_{ji}}{k_{j}}.\)
One straightforwardly checks that v_{j}=k_{j} satisfies \((\mathbb{T} v)_{i} = \sum_{\ell = 1}^{N} A_{\ell i} = k_{i}\), where the last equality assumes that the network is balanced, namely, the in-degree of node i is equal to its out-degree k_{i}. In other words, when the underlying network is balanced, the steady state is (Hoffmann et al. 2012)
where α is the normalisation factor. As mentioned before, one interesting application of random walks is to rank the nodes of a network according to the probability of finding a walker on each node, information directly accessible from the steady state. We are thus interested in understanding how the asymptotic state depends on the modelling scheme, and hence how the ranking changes accordingly. We compute the steady state for each model in the sequel, and report the results graphically in Fig. 5.
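On a balanced network, this ranking computation amounts to applying D_{〈T〉} to v = k and normalising, i.e. n_{j}(∞) ∝ 〈T_{∙j}〉k_{j}. A minimal sketch follows; the adjacency matrix and the mean resting times are arbitrary illustrative values, not derived from any of the models.

```python
import numpy as np

# Sketch of the steady-state ranking on a small symmetric (hence balanced)
# network: n_j(inf) is proportional to <T_.j> * k_j, then normalised.
# The mean resting times below are hypothetical illustrative values.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
k = A.sum(axis=0)                          # degrees (in = out: balanced)
mean_T = np.array([0.5, 1.0, 2.0, 0.25])   # hypothetical <T_.j> per node
n_inf = mean_T * k
n_inf /= n_inf.sum()                       # normalise to a probability vector
ranking = np.argsort(-n_inf)               # nodes by residence probability
print(n_inf, ranking)
```

Note that with equal degrees, as here, the ranking is driven entirely by the mean resting times.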
Models 1, 2 and 3. For the sake of completeness and for later comparisons, we recall some standard results. For the active node-centric model 1, \(\langle T_{\bullet j} \rangle = \frac 1 \mu \) and the steady state is proportional to degree,
The active edge-centric model 2 yields
As already pointed out, the same expression is valid for the passive edge-centric walk (Mod. 3) when the down-time distributions of the network edges are exponential. One easily verifies that the right-hand side of \(\dot n = n L^{rw}\) vanishes for \(n = n_{j}^{(Mod. \ 1)} (\infty)\), while the same holds true with \(\dot n = n L\) for \(n = n_{j}^{(Mod. \ 2)} (\infty)\).
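These two stationarity statements can be checked numerically on a small symmetric network. The sign conventions below (row-vector n, \(L^{rw} = D^{-1}A - I\), \(L = A - D\)) are assumptions of this sketch and may differ from the paper's, although the null vectors are the same.

```python
import numpy as np

# Stationarity checks on a small symmetric (hence balanced) network.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
k = A.sum(axis=0)                              # degrees
# Model 1: n ∝ k annihilates the random-walk Laplacian L_rw = D^{-1} A - I.
L_rw = np.diag(1.0 / k) @ A - np.eye(len(k))
assert np.allclose(k @ L_rw, 0)
# Model 2: the uniform vector annihilates the combinatorial Laplacian L = A - D.
L = A - np.diag(k)
assert np.allclose(np.ones(len(k)) @ L, 0)
print("stationarity checks pass")
```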
Model 6. We have \( n_{j}^{(Mod. \ 6)} (\infty) = \alpha ^{(Mod. \ 6)} \left (\frac {1}{\mu } + \frac {1}{k_{j} \lambda } \right) k_{j} = \frac { \alpha ^{(Mod. \ 6)} }{\lambda \mu } (\mu + k_{j} \lambda), \) and after normalisation we get
or under a different form, after division by k_{j}λμ,
We recover the expressions of the active node-centric (Mod. 1) and edge-centric (Mod. 4) walks in the respective limits λ→∞ and μ→∞.
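A minimal numerical sketch of these limits, with an illustrative degree sequence and rates: the λ→∞ limit returns the degree-proportional steady state of model 1, while in the μ→∞ limit the expression above becomes uniform over the nodes.

```python
import numpy as np

# Model 6 steady state, n_j(inf) proportional to (1/mu + 1/(k_j*lam)) * k_j,
# and its two asymptotic regimes. Degrees and rates are illustrative values.
def n_model6(k, mu, lam):
    n = (1.0 / mu + 1.0 / (k * lam)) * k
    return n / n.sum()

k = np.array([1.0, 2.0, 4.0, 8.0])     # hypothetical degree sequence
# lam -> inf: degree-proportional steady state, n_j ∝ k_j (model 1)
assert np.allclose(n_model6(k, mu=1.0, lam=1e9), k / k.sum(), atol=1e-6)
# mu -> inf: n_j ∝ k_j / (k_j * lam) = 1/lam, i.e. uniform over the nodes
assert np.allclose(n_model6(k, mu=1e9, lam=1.0), np.full(4, 0.25), atol=1e-6)
print("model 6 limits check out")
```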
Model 4. This model calls for a preliminary observation concerning our method. We use the transition density derived in the preceding section, which is only an approximation when the graph has cycles (see "Memory through walker-network interaction" section). We also rely on the steady-state formula (36) obtained for balanced networks. Hence, the steady state we obtain with Eq. 39 is an approximation whenever the balanced network has cycles, for instance when the network is symmetric. In Fig. 5, we have indeed selected a symmetric network. The figure shows that the steady state computed with the theoretical formula is in good agreement with Monte Carlo simulations. The formula proved mostly accurate throughout our numerical investigation, certainly in terms of the ranking of the nodes. The network topologies and ranges of values of the dynamical parameters for which this observation holds are further discussed in the "Memory through walker-network interaction" section.
Let us now proceed with the analysis. We have
and through normalisation we obtain
As expected, \(n_{j}^{(Mod. \ 4)}(\infty)\) tends to \(n_{j}^{(Mod. \ 1)}(\infty)\) when λ→∞. But more importantly, in the limit of a very fast walker we have
It follows that smaller residence probabilities are associated with nodes of larger degree. Conversely, for typically large walker waiting times, a larger degree means a larger probability. Indeed, observe that
It was reported before that fat-tailed resting times on a portion of the nodes of a network can lead to accumulation on these nodes in spite of their low degree (Fedotov and Stage 2017). In our case, the renewal process ruling the jump times arises from the interaction between walker and network, without explicitly resorting to long-tailed distributions of the resting time on certain nodes.
Model 5. Resulting directly from the transition density given by (24), we have
with
Memory through walker-network interaction
In general, neither the passive model 2 nor the passive-at-edge-level model 4 is analytically tractable, in the sense that without further assumptions each jump (time and destination) depends on the full trajectory of the walker. Model 2 becomes tractable assuming exponential distributions for the walker and the edges, which allowed us to find the resting time density in the "Mean resting times" section. This assumption is not enough for model 4, however (unless we make the extra assumption of a directed acyclic network).
Let us elaborate on this. We assume that edges are statistically independent, in the sense that no correlation exists between the states (available or not) of different edges. In this way, in all models but model 4, we avoid preferred diffusion paths arising from correlations, as captured by the concepts of betweenness preference (Scholtes et al. 2014) or memory networks (Rosvall et al. 2014). Model 4 is different. Indeed, if the walker chose to use an outgoing edge in the past, this gives information on the state of the same outgoing edge later in time. This holds true even with exponential distributions for the up- and down-times. More precisely, let us consider two cases:
1. A jump across an edge some time in the past (meaning it was available back then) increases the probability that the same edge is again available to the walker who, having returned to the node after a cycle, is ready for another jump. This memory affecting the state of the edge was described in detail in Petit et al. (2018), and is captured by the duration-dependent probability p^{∗} in Eq. 19.
2. Conversely, if an outgoing edge was not selected because it lost the competition to another edge, this raises the probability that the edge was actually not available at the time of the jump (hence explaining why it lost the competition). Therefore, when the walker returns to the node after a cycle, the same edge is more likely to be unavailable when the walker wants to jump. Again, an expression similar to that of p^{∗} was obtained in Petit et al. (2018) to quantify this phenomenon.
Combining these two cases shows that, in the presence of cycles, preferred diffusion paths can emerge.
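The first of these two effects can be illustrated in isolation with a small Monte Carlo experiment on a single two-state Markov edge: the probability of finding the edge available a time t after it was last known to be available exceeds the stationary availability, and relaxes to it exponentially. The rates a, b and the lag t below are illustrative assumptions, and this is only the edge-state part of the story; the paper's p^{∗} additionally conditions on the walker's trajectory.

```python
import math
import random

# For a two-state Markov edge (up -> down at rate a, down -> up at rate b),
# estimate P(up at time t | up at time 0) and compare with the closed form
#   p(t) = b/(a+b) + (a/(a+b)) * exp(-(a+b) t),
# which exceeds the stationary availability b/(a+b) for all finite t.
def up_given_up(a, b, t, trials=100_000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s, clock = 1, 0.0                  # start in the "up" state at time 0
        while True:
            rate = a if s == 1 else b      # exponential holding time
            clock += rng.expovariate(rate)
            if clock > t:
                break                      # s is the state occupied at time t
            s = 1 - s
        hits += s
    return hits / trials

a, b, t = 1.0, 2.0, 0.5                    # illustrative rates and lag
exact = b / (a + b) + (a / (a + b)) * math.exp(-(a + b) * t)
est = up_given_up(a, b, t)
print(f"simulated {est:.3f} vs exact {exact:.3f}, stationary {b/(a+b):.3f}")
```

The gap between the conditional and stationary availabilities is precisely the kind of memory that, combined with cycles, biases the trajectories of the model-4 walker.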
The impact of cycles may depend on many factors (rates, topology of the network, presence of communities, initial condition of the walk), and is difficult to analyse and predict. For instance, from numerical experiments (data not reported here) we found a dependence on the number of cycles in the network, the mean degree, and the number of nodes. It is safe to say, however, that the effect decreases for long cycles, because the walker is expected to take longer to complete the cycle. Along the same lines, the effect is less pronounced if the rate μ of the walker is relatively small, because again more time elapses between the jumps, thereby reducing the memory. In general, larger networks have more possible diffusion paths, and tend to be less sensitive to the effect.
Let us consider Fig. 5 again. The difference between the analytical curves of panel (A) and the outcome of the Monte Carlo simulations of panel (B) is a consequence of the presence of cycles. It is not due to the variance in the simulation of the stochastic process, and was moreover observed for various topologies. As anticipated, the effect in this example is more pronounced for larger values of μ. This particular example demonstrates the existence of a memory effect, that is, a bias in the trajectories and in the jump rates, due to the interaction of a memoryless walker with a network governed by memoryless distributions. Markovianity, both in time and in trajectories, is lost due to the microscopic rules of the model, and not due to the choice of the distributions or to any explicit bias by the walker. We point to our previous work (Petit et al. 2018) for a mathematical framework further enabling an analytic approach.
Conclusion
Many types of random walks have been defined on static networks, from correlated walks (Renshaw and Henderson 1981) to variants of elephant random walks and random walks with memory (Rosvall et al. 2014). The main purpose of this paper was to show that even the simplest model of random walk, where the walker has no memory and is unbiased, may generate complex trajectories when defined on temporal networks. As we have discussed, there exist different ways to interconnect the dynamics of the walker and of the network, and this interplay may break the Markovianity of the system (in time), even in purely active models or in passive models without cycles. Note that there is no need for a long-tailed walker waiting time on (a subset of) the nodes, such as in Fedotov and Stage (2017), to observe a dramatic departure from the steady states of the classical random walk models. We also showed that the mean resting time may be impacted, resulting in slowed-down diffusion on tree-like structures. The Markovianity of the trajectories of the walkers may also be broken when the underlying graph is not acyclic, as certain jumps are preferred based on the past trajectory of a walker, even if edges are statistically independent.
Overall, our work shows the importance of the different timescales associated to random walks on temporal networks, and unveils the importance of the duration of contacts, that is, of link activations, for diffusion. We have also enriched the taxonomy of random walk processes (Masuda et al. 2017), adding to the known "active" walks, appropriate for human or animal trajectories, and "passive" walks, typically used for virus/information spreading on temporal networks, new combinations of active and passive processes that are relevant in situations where an active agent is constrained by the dynamical properties of the underlying network. Typical examples include the mobility of individuals on public transportation networks. Despite its richness, our model neglects certain aspects of real-life networks that could lead to interesting research directions. In particular, the implicit assumption that the network can be described as a stationary process calls for generalisations including circadian rhythms (Jo et al. 2012; Kobayashi and Lambiotte 2016). Another interesting generalisation would be to open up the modelling framework to situations where the number of diffusing entities is not conserved and evolves in time, as in epidemic spreading on contact networks, where an additional temporal process is associated to the distribution of the recovery times of infected nodes (Keeling and Grenfell 1997).
Availability of data and materials
Not applicable since no datasets were generated or analysed in the study.
Notes
 1.
Another possible behaviour would be that the ready-to-jump but trapped walker waits for another period, drawn again from the distribution of X_{w}, before attempting another jump. In this scenario, the induced delay before the jump also depends on the dynamics of the walker and not only on that of the network through the availability of edges at the end of the prolonged stay (Figueiredo et al. 2012). With this choice, the analysis of the model would follow the same steps.
 2.
We implicitly assume that the timescales are well-defined and correctly represented by the expectations of the random variables. This assumption holds for exponential distributions, but would not, for example, apply to power-law distributions.
 3.
This means that there exists a path along connected edges allowing to reach any node of the graph from any other node (Newman 2018).
 4.
In this work, we are interested in the loss of the Markov property resulting from the interaction between dynamical entities, some or all of them following exponential distributions. Therefore, we do not elaborate on the converse idea that Markovianity could emerge out of the interaction between a non-Markovian walker and edges.
 5.
Our choice of studying the resting time is certainly restrictive, and quantities typically associated with random walks, such as the mean first-passage time or the dispersal distribution, are natural next steps.
Abbreviations
PDF: Probability density function.
References
Angstmann, CN, Donnelly IC, Henry BI (2013) Pattern formation on networks with reactions: A continuous-time random-walk approach. Phys Rev E 87(3):032804.
Asllani, M, Carletti T, Di Patti F, Fanelli D, Piazza F (2018) Hopping in the crowd to unveil network topology. Phys Rev Lett 120(15):158301.
Belykh, IV, Belykh VN, Hasler M (2004) Blinking model and synchronization in small-world networks with a time-varying coupling. Phys D Nonlinear Phenom 195(1–2):188–206.
Brin, S, Page L (1998) The anatomy of a large-scale hypertextual web search engine. Comput Netw ISDN Syst 30(1–7):107–117.
Delvenne, JC, Yaliraki SN, Barahona M (2010) Stability of graph communities across time scales. Proc Natl Acad Sci 107(29):12755–12760.
Fedotov, S, Stage H (2017) Anomalous metapopulation dynamics on scale-free networks. Phys Rev Lett 118(9):098301.
Figueiredo, D, Nain P, Ribeiro B, de Souza e Silva E, Towsley D (2012) Characterizing continuous time random walks on time varying graphs In: ACM SIGMETRICS Performance Evaluation Review, 307–318. ACM.
Gauvin, L, Panisson A, Cattuto C, Barrat A (2013) Activity clocks: spreading dynamics on temporal networks of human contact. Sci Rep 3:3099.
Grebenkov, DS, Tupikina L (2018) Heterogeneous continuous-time random walks. Phys Rev E 97(1):012148.
Hughes, BD (1995) Random Walks and Random Environments: Random Walks, Vol. 1. Oxford University Press, Oxford.
Hoffmann, T, Porter MA, Lambiotte R (2012) Generalized master equations for non-Poisson dynamics on networks. Phys Rev E 86(4):046102.
Holme, P (2015) Modern temporal network theory: a colloquium. Eur Phys J B 88(9):1–30.
Jo, HH, Karsai M, Kertész J, Kaski K (2012) Circadian pattern and burstiness in mobile phone communication. New J Phys 14(1):013055.
Keeling, MJ, Grenfell B (1997) Disease extinction and community size: modeling the persistence of measles. Science 275(5296):65–67.
Kempe, J (2003) Quantum random walks: an introductory overview. Contemp Phys 44(4):307–327.
Klafter, J, Sokolov IM (2011) First Steps in Random Walks: from Tools to Applications. Oxford University Press, Oxford.
Kobayashi, R, Lambiotte R (2016) TiDeH: Time-dependent Hawkes process for predicting retweet dynamics In: Tenth International AAAI Conference on Web and Social Media. AAAI Press, California.
Lambiotte, R, Rosvall M (2012) Ranking and clustering of nodes in networks with smart teleportation. Phys Rev E 85(5):056107.
Langville, AN, Meyer CD (2004) Deeper inside pagerank. Internet Math 1(3):335–380.
Lucas, M, Fanelli D, Carletti T, Petit J (2018) Desynchronization induced by time-varying network. EPL (Europhys Lett) 121(5):50008.
Masuda, N, Lambiotte R (2016) A Guide to Temporal Networks. World Scientific, London.
Masuda, N, Porter MA, Lambiotte R (2017) Random walks and diffusion on networks. Phys Rep 716–717:1–58.
Newman, M (2018) Networks, 2nd edn. Oxford University Press, Oxford.
Perra, N, Baronchelli A, Mocanu D, Gonçalves B, PastorSatorras R, Vespignani A (2012) Random walks and search in timevarying networks. Phys Rev Lett 109(23):238701.
Petit, J, Gueuning M, Carletti T, Lauwens B, Lambiotte R (2018) Random walk on temporal networks with lasting edges. Phys Rev E 98(5):052307.
Petit, J, Lauwens B, Fanelli D, Carletti T (2017) Theory of Turing patterns on time varying networks. Phys Rev Lett 119(14):148301.
Porter, M, Gleeson J (2016) Dynamical Systems on Networks. Frontiers in Applied Dynamical Systems: Reviews and Tutorials. Springer, Cham.
Renshaw, E, Henderson R (1981) The correlated random walk. J Appl Probab 18(2):403–414.
Rosvall, M, Axelsson D, Bergstrom CT (2009) The map equation. Eur Phys J Spec Top 178(1):13–23.
Rosvall, M, Esquivel AV, Lancichinetti A, West JD, Lambiotte R (2014) Memory in network flows and its effects on spreading dynamics and community detection. Nat Commun 5:4630.
Scholtes, I, Wider N, Pfitzner R, Garas A, Tessone CJ, Schweitzer F (2014) Causality-driven slow-down and speed-up of diffusion in non-Markovian temporal networks. Nat Commun 5:5024.
Schütz, GM, Trimper S (2004) Elephants can always remember: Exact long-range memory effects in a non-Markovian random walk. Phys Rev E 70(4):045101.
Sekara, V, Stopczynski A, Lehmann S (2016) Fundamental structures of dynamic social networks. Proc Natl Acad Sci 113(36):9977–9982.
Starnini, M, Baronchelli A, Barrat A, PastorSatorras R (2012) Random walks on temporal networks. Phys Rev E 85(5):056115.
Stilwell, DJ, Bollt EM, Roberson DG (2006) Sufficient conditions for fast switching synchronization in time-varying network topologies. SIAM J Appl Dyn Syst 5(1):140–156.
Zanette, DH, Mikhailov AS (2004) Dynamical systems with time-dependent coupling: clustering and critical behaviour. Phys D Nonlinear Phenom 194(3–4):203–218.
Acknowledgements
Not applicable.
Funding
TC presents research results of the Belgian Network DYSCO, funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office.
Author information
Affiliations
Contributions
All authors conceived the project. JP derived the analytical results and performed the numerical simulations. All authors contributed to the writing of the manuscript. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Petit, J., Lambiotte, R. & Carletti, T. Classes of random walks on temporal networks with competing timescales. Appl Netw Sci 4, 72 (2019). https://doi.org/10.1007/s41109-019-0204-6
Received:
Accepted:
Published:
Keywords
 Random walk
 Temporal network
 Memory