The mathematical expectation of the resting time \(X^{(j)}\) on node j is given by
$$ E \big(X^{(j)}\big) = \int_{0}^{\infty} t T_{\bullet j} (t) dt, $$
(11)
and will also be referred to as \(\langle T_{\bullet j} \rangle\). It is naturally called the mean resting time (or mean residence time) and is relevant in many scenarios; for instance, it directly determines the relaxation time on tree-like structures (Footnote 5). We compute and interpret this quantity starting from the agent- and edge-level rules of the different models, in order to obtain a macroscopic interpretation. Our analysis is restricted to exponential densities for \(X_{w}\), \(X_{u}\) and \(X_{d}\), since this allows us to shed light on the effect of having up to three timescales, without the complications arising from otherwise possibly fat-tailed distributions for these three random variables. As a result, the two edge-centric models 2 and 3 generate statistically equivalent trajectories, and the analysis for (Mod. 2) holds for (Mod. 3).
Derivation of the mean resting times
In this section we treat the models in order of increasing complexity.
Models 1, 2 and 3. In the active node-centric walk (Mod. 1) one can directly write \(\langle T_{\bullet j} \rangle _{\mathrm {model \ 1}} = \frac {1}{\mu }\). For the active edge-centric walk (Mod. 2), where the instantaneous activation times follow a Poisson process, we have
$$T_{ij}(t) = D (t) \left[ \int_{t}^{\infty} D(\tau) d \tau \right]^{k_{j}-1}. $$
The interpretation is that the edge j→i must activate after a time t, whereas all competing edges must remain unavailable at least up to that point. Performing the integration and multiplying by \(k_{j}\) gives \( T_{\bullet j}(t) = k_{j} \lambda e^{-k_{j} \lambda t}\), a result already found in Hoffmann et al. (2012). This is again an exponential distribution, with rate \(k_{j}\lambda\). It follows that
$$\langle T_{\bullet j} \rangle_{\mathrm{model \ 2}} = E(\mathbb{ X}_{d}^{(j)}), $$
where \(\mathbb{X}_{d}^{(j)}\) was introduced in Eq. 3.
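As a quick numerical sanity check (a sketch of ours, not part of the original derivation; the parameter values are illustrative), one can sample the minimum of \(k_{j}\) i.i.d. exponential activation times with rate \(\lambda\) and compare its empirical mean to \(E(\mathbb{X}_{d}^{(j)}) = 1/(k_{j}\lambda)\):

```python
import numpy as np

rng = np.random.default_rng(0)
k_j, lam = 4, 2.0  # illustrative degree and activation rate (assumed values)
n = 100_000

# Model 2: the walker leaves along whichever of the k_j competing edges
# activates first; each activation time is Exp(lambda).
resting = rng.exponential(1.0 / lam, size=(n, k_j)).min(axis=1)

print(resting.mean())     # Monte Carlo estimate of the mean resting time
print(1.0 / (k_j * lam))  # predicted E(X_d^(j)) = 1/(k_j * lambda)
```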
Model 6. The walker residing on node j will jump along edge j→i at time t if all competing edges are unavailable at least up until then; that is, their period of unavailability must last for at least t−x, where x marks the time at which the walker is ready to jump. Moreover, edge j→i needs to activate exactly after the duration t−x. With this in mind, and when all distributions are exponential, it was shown in Petit et al. (2018) that:
$$ T_{ij}(t) = \int_{0}^{t} \psi_{j}(x) \left[ \int_{t-x}^{\infty} D(s)\, ds \right]^{k_{j}-1} D(t-x)\, dx = \int_{0}^{t} \mu e^{-\mu x}\, \lambda e^{-k_{j} \lambda (t-x)}\, dx, $$
(12)
where we have used that \(\left[ \int_{t-x}^{\infty} D(s)\, ds \right]^{k_{j}-1}\) simplifies to \(e^{-\lambda (k_{j}-1)(t-x)}\). Hence,
$$ T_{ij}(t) = \mu \lambda e^{-k_{j} \lambda t} \int_{0}^{t} e^{(-\mu + k_{j} \lambda)x}\, dx. $$
(13)
Note that (12) is merely the convolution between the waiting time of the walker and the minimum of \(k_{j}\) independent down-times for the edges, reflecting the fact that the process results in an addition of random variables. To proceed, we observe that the value of the integral in (13) depends on whether \(\mu = k_{j}\lambda\) or \(\mu \neq k_{j}\lambda\). In the former case, the integral is equal to t, and multiplying (13) by \(k_{j}\) yields \( T_{\bullet j}(t) = \mu k_{j} \lambda t\, e^{-k_{j} \lambda t}\). Hence, the mean resting time is
$$ \langle T_{\bullet j} \rangle_{\mathrm{model \ 6}} = \int_{0}^{\infty} \mu t^{2} k_{j} \lambda e^{-k_{j} \lambda t} dt. $$
(14)
Recalling that the n-th moment of an exponential distribution with rate \(\lambda\) is \(E(X^{n}) = n!/\lambda^{n}\), we obtain \(\langle T_{\bullet j} \rangle = 2/\mu = 1/\mu + 1/(k_{j}\lambda)\), where the last equality uses \(\mu = k_{j}\lambda\). We will show that the same expression is obtained in the second case as well, i.e. when \(\mu \neq k_{j}\lambda\). Indeed, (13) becomes
$$ T_{\bullet j}(t) = \frac{k_{j} \lambda \mu}{k_{j} \lambda - \mu} \left(e^{-\mu t} - e^{-k_{j} \lambda t} \right). $$
(15)
The mean resting time follows from (15):
$$\begin{array}{*{20}l} \langle T_{\bullet j} \rangle_{\mathrm{model \ 6}} &= \frac{k_{j} \lambda \mu}{k_{j} \lambda - \mu} \int_{0}^{\infty} \tau \left(e^{-\mu \tau} - e^{-k_{j} \lambda \tau} \right) d\tau \\ &= \frac{k_{j} \lambda \mu}{k_{j} \lambda - \mu} \left(\frac{1}{\mu^{2}} - \frac{1}{(k_{j} \lambda)^{2}} \right) \\ &=\frac 1 \mu + \frac{1}{k_{j} \lambda}, \end{array} $$
(16)
or equivalently, \( \langle T_{\bullet j} \rangle _{\mathrm {model \ 6}} = E(X_{w}) + E (\mathbb{X}^{(j)}_{d})\), which again justifies considering (Mod. 6) an additive model.
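The additive structure lends itself to a direct Monte Carlo check. The following sketch (ours; the parameter values are assumed for illustration) samples \(X_{w} \sim \mathrm{Exp}(\mu)\) followed by the minimum of \(k_{j}\) exponential down-times, and compares the empirical mean to \(1/\mu + 1/(k_{j}\lambda)\):

```python
import numpy as np

rng = np.random.default_rng(1)
k_j, mu, lam = 3, 1.5, 2.0  # illustrative parameters (assumed)
n = 100_000

# Model 6: the resting time is the walker's own waiting time plus the
# time until the first of k_j exponential down-times elapses.
x_w = rng.exponential(1.0 / mu, size=n)
x_d = rng.exponential(1.0 / lam, size=(n, k_j)).min(axis=1)

print((x_w + x_d).mean())            # empirical mean resting time
print(1.0 / mu + 1.0 / (k_j * lam))  # E(X_w) + E(X_d^(j))
```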
Model 4. We mentioned at the end of the “Non-markovian random walks” section that this model is in general non-Markovian in time, and also not Markovian in trajectories, a fact that will be further discussed in the “Memory through walker-network interaction” section. Note, however, that no memory arises in the choice of the next destination node if there are no cycles in the network. The following derivation therefore assumes a directed acyclic graph (DAG), which restores Markovianity in the trajectories, so that Eq. 9 is valid. Let us now determine the resting time density. In model 4, the walker ready to jump faces two possible scenarios: either an edge is available (probability r), or an extra wait is needed before an outgoing edge becomes available (probability 1−r). We have \(r = \frac {\lambda }{\lambda + \eta }\) and \(1-r = \frac {\eta }{\lambda + \eta }\). It was shown in Petit et al. (2018) that \(T_{ij}(t)\) accordingly has two terms, such that the transition density from node j reads
$$ T_{\bullet j}(t) = \left[ 1-(1-r)^{k_{j}} \right] \psi_{j} (t) + (1-r)^{k_{j}} k_{j}\lambda \mu e^{-k_{j} \lambda t} \int_{0}^{t} e^{(-\mu + k_{j} \lambda)x}dx. $$
(17)
The two terms reflect a weighted combination of models 1 and 6. The weight \((1-r)^{k_{j}}\) is the probability that all outgoing edges are unavailable at a random time. It follows that
$$\begin{array}{*{20}l} \langle T_{\bullet j} \rangle_{\text{model} \ 4} &= \left[ 1-(1-r)^{k_{j}} \right] \langle T_{\bullet j} \rangle_{\text{model} \ 1} + (1-r)^{k_{j}} \langle T_{\bullet j} \rangle_{\text{model} \ 6} \\ &= \left[ 1-(1-r)^{k_{j}} \right] E(X_{w}) + (1-r)^{k_{j}} \left(E(X_{w}) + E(\mathbb{X}_{d}^{(j)}) \right) \\ &= E(X_{w}) + (1-r)^{k_{j}} E(\mathbb{X}_{d}^{(j)}). \end{array} $$
(18)
In this form, the model is seen to be conditionally additive, with a weight depending on r.
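Equation (18) can be checked in the same spirit. In the sketch below (ours, with assumed parameter values), the walker always waits \(X_{w}\); with probability \((1-r)^{k_{j}}\) all outgoing edges are down when it is ready, in which case an extra wait equal to the minimum of \(k_{j}\) fresh exponential down-times is added (a valid shortcut thanks to memorylessness):

```python
import numpy as np

rng = np.random.default_rng(2)
k_j, mu, lam, eta = 3, 1.5, 2.0, 2.0  # illustrative parameters (assumed)
r = lam / (lam + eta)                 # probability an edge is available
n = 100_000

x_w = rng.exponential(1.0 / mu, size=n)
# With probability (1-r)^k_j every outgoing edge is down when the walker
# is ready, and an extra wait (min of k_j down-times) is incurred.
blocked = rng.random(n) < (1.0 - r) ** k_j
extra = rng.exponential(1.0 / lam, size=(n, k_j)).min(axis=1)

resting = x_w + np.where(blocked, extra, 0.0)
print(resting.mean())                             # empirical mean
print(1.0 / mu + (1.0 - r) ** k_j / (k_j * lam))  # Eq. (18)
```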
Model 5. When the walker is ready to jump, the availability of the network edges depends on the time elapsed since the walker arrived on the node. This makes the analysis somewhat more involved. Assume the walker is ready after s time units. Let \(p^{*}(s)\) be the probability that an edge is in the same state it was in at time t=0, namely unavailable, and let \(q^{*}(s) = 1 - p^{*}(s)\) be the probability that the edge is available for transport. These two quantities were computed in Petit et al. (2018) by accounting for all possible on-off switches of the edge in the interval [0,s]. The resulting expressions take a strikingly simple form when U(t) and D(t) have the same (exponential) rate \(\eta = \lambda\), our working hypothesis in what follows:
$$\begin{array}{*{20}l} p^{*}(s) & = \frac{1}{2} \left(1+ e^{-2\lambda s}\right), \end{array} $$
(19)
$$\begin{array}{*{20}l} q^{*}(s) & = \frac{1}{2} \left(1- e^{-2\lambda s}\right). \end{array} $$
(20)
If the walker is ready after a short time s, the edge will most likely still be down, with \(p^{*}(0)=1\), while for large s the state of the edge is up or down with equal probability, \({\lim }_{s \rightarrow \infty } p^{*}(s) = \frac 1 2 \).
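Equations (19)-(20) can be verified by simulating a single edge as a two-state (up/down) Markov chain with equal switching rates \(\eta = \lambda\), started in the down state, and recording its state at time s. This is a sketch of ours with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, s, n = 2.0, 0.4, 100_000  # illustrative rate and observation time (assumed)

down_at_s = 0
for _ in range(n):
    t, down = 0.0, True  # the edge starts in the down (unavailable) state
    while True:
        t += rng.exponential(1.0 / lam)  # eta = lam: same rate in both states
        if t > s:
            break
        down = not down  # the edge flips state at each switching event
    down_at_s += down

print(down_at_s / n)                     # empirical p*(s)
print(0.5 * (1 + np.exp(-2 * lam * s)))  # Eq. (19)
```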
We now have an expression similar to (17), except that r and 1−r are essentially replaced by the time-dependent \(q^{*}\) and \(p^{*}\). Let us first write an expression for \(T_{ij}(t)\):
$$\begin{array}{*{20}l} T_{ij}(t) &= \frac{1}{k_{j}} \left[ 1-p^{*}(t)^{k_{j}} \right]\psi_{j}(t) + \int_{0}^{t}\psi_{j}(x) \left[ p^{*}(x) \int_{t-x}^{\infty} D(s)\, ds\right]^{k_{j}-1} p^{*}(x)\, D(t-x)\, dx \\ &= \frac{1}{k_{j}} \left[ 1-p^{*}(t)^{k_{j}} \right]\psi_{j}(t) + \lambda \mu e^{-k_{j} \lambda t} \int_{0}^{t} p^{*}(x)^{k_{j}} e^{(-\mu + k_{j} \lambda) x}\, dx \\ &= \frac{1}{k_{j}} \left[ 1-p^{*}(t)^{k_{j}} \right]\psi_{j}(t) + \frac{\lambda \mu }{2^{k_{j}}}e^{-k_{j} \lambda t} \int_{0}^{t} \left(1+e^{-2\lambda x}\right)^{k_{j}}e^{(-\mu + k_{j} \lambda) x}\, dx \\ &= \frac{1}{k_{j}} \left[ 1- \frac{1}{2^{k_{j}}} \sum_{m=0}^{k_{j}} \binom{k_{j}}{m} e^{-2m\lambda t} \right]\psi_{j}(t) + \frac{\lambda \mu }{2^{k_{j}}}e^{-k_{j} \lambda t} \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\beta_{m}} \left(e^{\beta_{m} t} -1 \right), \end{array} $$
(21)
where the last step uses Newton's binomial formula in both terms, and where we have set \(\beta_{m} = -\mu + k_{j}\lambda - 2m\lambda\). The resting time density therefore reads:
$$ \begin{aligned} T_{\bullet j} (t) = \mu e^{-\mu t} &- \frac{\mu }{2^{k_{j}}} \sum_{m=0}^{k_{j}} \binom{k_{j}}{m} e^{-(\mu+2m\lambda) t} \\ &+ \frac{\mu}{2^{k_{j}}} k_{j} \lambda \sum_{m=0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\beta_{m}} e^{-(\mu +2 m\lambda) t} - \frac{\mu}{2^{k_{j}}} k_{j} \lambda\, e^{-k_{j} \lambda t} \sum_{m=0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\beta_{m}}. \end{aligned} $$
(22)
The mean resting time follows as
$$ \begin{aligned} \langle T_{\bullet j} \rangle_{\text{model} \ 5} = E(X_{w}) &- \frac{\mu }{2^{k_{j}}} \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{(\mu + 2 m \lambda)^{2}} \\ &+ \frac{\mu}{2^{k_{j}} } k_{j} \lambda \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\beta_{m}} \frac{1}{(\mu + 2 m \lambda)^{2}} - \frac{\mu}{2^{k_{j}} } \frac{1}{k_{j} \lambda} \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\beta_{m}}. \end{aligned} $$
(23)
Regrouping the terms, we get
$$\begin{array}{*{20}l} \langle T_{\bullet j} \rangle_{\text{model} \ 5} &= E(X_{w}) + \frac{\mu}{2^{k_{j}}} \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \left(\frac{1}{(\mu + 2m \lambda)^{2}} \left[ \frac{k_{j} \lambda}{\beta_{m}} -1 \right] - \frac{1}{\beta_{m}} \frac{1}{k_{j} \lambda} \right) \\ &= E(X_{w}) + \frac{\mu}{2^{k_{j}}} \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\beta_{m}} \left(\frac{1}{\mu + 2 m \lambda} - \frac{1}{k_{j} \lambda} \right) \\ &= E(X_{w}) + \frac{\mu}{2^{k_{j}} } \sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\mu +2 m \lambda} E(\mathbb{ X}_{d}^{(j)}). \end{array} $$
(24)
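As a self-consistency check (a sketch of ours; the parameters are illustrative and chosen so that no \(\beta_{m}\) vanishes), one can numerically integrate \(t\, T_{\bullet j}(t)\) with the density (22) and compare the result to the closed form (24):

```python
import numpy as np
from math import comb
from scipy.integrate import quad

k, mu, lam = 3, 0.7, 1.0  # illustrative; chosen so that no beta_m is zero
beta = [-mu + k * lam - 2 * m * lam for m in range(k + 1)]

def T(t):
    """Resting time density of model 5, Eq. (22)."""
    pref = mu / 2 ** k
    s1 = sum(comb(k, m) * np.exp(-(mu + 2 * m * lam) * t) for m in range(k + 1))
    s2 = sum(comb(k, m) / beta[m] * np.exp(-(mu + 2 * m * lam) * t) for m in range(k + 1))
    s3 = sum(comb(k, m) / beta[m] for m in range(k + 1))
    return (mu * np.exp(-mu * t) - pref * s1
            + pref * k * lam * s2 - pref * k * lam * np.exp(-k * lam * t) * s3)

mean_quad, _ = quad(lambda t: t * T(t), 0, np.inf)
mean_closed = 1 / mu + (mu / 2 ** k) * sum(
    comb(k, m) / (mu + 2 * m * lam) for m in range(k + 1)) / (k * lam)
print(mean_quad, mean_closed)  # the two values should agree
```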
Discussion
All models have a mean resting time that we can cast in the form
$$ \langle T_{\bullet j} \rangle_{\text{model}} = a_{\text{model}} E(X_{w}) + b_{\text{model}}(k_{j},\mu, \lambda) E(\mathbb{ X}_{d}^{(j)}), $$
(25)
where \(a_{\text{model}}=1\) for all models except (Mod. 2), for which it is 0, and \(b_{\text{model}}(k_{j},\mu, \lambda)\) accounts for the probability that all outgoing edges are unavailable when the walker is ready to jump. Summing up the results of this section, we have
$$\begin{array}{*{20}l} &b_{\mathrm{model \ 1}} (k_{j},\mu, \lambda) = 0 \end{array} $$
(26)
$$\begin{array}{*{20}l} &b_{\mathrm{model \ 4}} (k_{j},\mu, \lambda) = (1-r)^{k_{j}} \end{array} $$
(27)
$$\begin{array}{*{20}l} &b_{\mathrm{model \ 5}} (k_{j},\mu, \lambda) = \frac{1}{2^{k_{j}}} \mu \sum_{m =0}^{k_{j}} \binom{k_{j}}{m} \frac{1}{\mu + 2m \lambda} \end{array} $$
(28)
$$\begin{array}{*{20}l} &b_{\mathrm{model \ 6}} (k_{j},\mu, \lambda) = 1. \end{array} $$
(29)
Recall that (28) was derived under the assumption that \(\eta = \lambda\), for which \(r = \frac {1}{2}\) and thus \(b_{\mathrm {model \ 4}}(k_{j},\mu, \lambda) = \frac {1}{2^{k_{j}}}\). Using standard algebra, it is straightforward to check that
$$ 0 = b_{\mathrm{model \ 1}} < b_{\mathrm{model \ 4}} < b_{\mathrm{model \ 5}} < b_{\mathrm{model \ 6}} = 1, $$
(30)
for all \(k_{j} \in \mathbb{N}_{0}\) and all positive reals \(\mu\) and \(\lambda\). The smaller this coefficient, the smaller the mean resting time, and hence the larger the expected number of jumps per unit time along the trajectories of the walk, all other parameters being equal.
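The ordering (30) is easy to probe numerically. A minimal sketch (ours; the sampled parameter values are arbitrary) evaluates the coefficients under the assumption \(\eta = \lambda\) used above:

```python
from math import comb

def b_model4(k, lam, eta):
    r = lam / (lam + eta)  # probability an edge is available at a random time
    return (1 - r) ** k

def b_model5(k, mu, lam):  # Eq. (28), derived under eta = lam
    return mu / 2 ** k * sum(comb(k, m) / (mu + 2 * m * lam) for m in range(k + 1))

for k in (1, 2, 5, 10):
    for mu, lam in ((0.5, 2.0), (3.0, 0.1)):
        b4 = b_model4(k, lam, eta=lam)  # eta = lam, consistent with (28)
        b5 = b_model5(k, mu, lam)
        assert 0.0 < b4 < b5 < 1.0, (k, mu, lam)
print("ordering of Eq. (30) confirmed on all sampled parameters")
```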
We now compare the three models with nonzero \(b_{\text{model}}\), since these are the ones featuring a dynamical walker-network interaction. To this end, let us define the ratios of mean resting times
$$\begin{array}{*{20}l} R_{1} &:= \frac{\langle T_{\bullet j} \rangle_{\mathrm{model\ 4}}}{\langle T_{\bullet j} \rangle_{\mathrm{model \ 6}}}, \end{array} $$
(31)
$$\begin{array}{*{20}l} R_{2} & := \frac{\langle T_{\bullet j} \rangle_{\mathrm{model\ 5}}}{\langle T_{\bullet j} \rangle_{\mathrm{model \ 6}}}. \end{array} $$
(32)
These quantities depend only on the degree \(k_{j}\) and on a new variable \(\xi := \frac {\lambda }{\mu }\). Indeed, we can write
$$\begin{array}{*{20}l} R_{1}(k_{j}, \xi) & = \frac{ k_{j} 2^{k_{j}}\xi + 1}{k_{j} 2^{k_{j}}\xi + 2^{k_{j}}} \, \end{array} $$
(33)
$$\begin{array}{*{20}l} R_{2}(k_{j}, \xi) & = \frac{ k_{j} 2^{k_{j}}\xi +\sum_{m = 0}^{k_{j}} \binom{k_{j}}{m} \frac{ 1}{1+2m \xi} }{k_{j} 2^{k_{j}}\xi + 2^{k_{j}}}. \end{array} $$
(34)
The above expressions are plotted in Fig. 3 for various values of the degree. The reduction in mean resting time for model 4 is very pronounced for small \(\xi\), especially for large degrees, for which the relatively slow network timescales have less effect. With model 5, however, the reduction factor never goes below \(\frac {3}{4}\).

In terms of convergence of the models, we observe that for all degrees, \(R_{1} \approx 1\) for large \(\xi\), and \(R_{2} \approx 1\) for both large and small \(\xi\). The behaviour for large \(\xi\) is a direct consequence of the convergence of the resting time PDFs of models 4, 5 and 6 to that of model 1 when \(\xi ^{-1} = \frac {\mu }{\lambda } \rightarrow 0\). This is represented by the three blue dotted arrows in Fig. 4. On the other hand, the value of \(R_{2}\) for small \(\xi\) results from the convergence indicated by the two purple dash-dotted arrows of the figure. The other arrows further indicate the convergence between the PDFs \(T_{ij}(t)\) of the different models in asymptotic regimes of the dynamical parameters \(\mu, \eta, \lambda\). These results can be verified by direct computation from the expressions for the densities obtained in this section. Clearly, convergence of the densities implies convergence of the expectations. For instance, consider the blue arrow from (Mod. 5) to (Mod. 1): in terms of mean resting time, when \(\lambda \rightarrow \infty\) with \(\mu\) fixed, \(\langle T_{\bullet j} \rangle_{\mathrm{model \ 5}} \rightarrow E(X_{w})\). If instead we let \(\mu \rightarrow \infty\) with \(\lambda\) fixed, the mean tends to \( \frac {1}{2^{k_{j}} k_{j} \lambda } \sum _{m = 0}^{k_{j}} \binom {k_{j}}{m} = \frac {1}{k_{j} \lambda } \), that is, \(E(\mathbb{X}_{d}^{(j)}) \) (purple arrow from (Mod. 5) to (Mod. 3)). In both cases, this is the expected outcome.
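To make these limiting regimes concrete, a short sketch of ours evaluates (33) and (34) at extreme values of \(\xi\) (the chosen degrees and \(\xi\) values are arbitrary); \(R_{1}\) and \(R_{2}\) approach 1 for large \(\xi\), while only \(R_{2}\) also approaches 1 for small \(\xi\):

```python
from math import comb

def R1(k, xi):  # Eq. (33)
    return (k * 2 ** k * xi + 1) / (k * 2 ** k * xi + 2 ** k)

def R2(k, xi):  # Eq. (34)
    num = k * 2 ** k * xi + sum(comb(k, m) / (1 + 2 * m * xi) for m in range(k + 1))
    return num / (k * 2 ** k * xi + 2 ** k)

for k in (2, 5, 10):
    print(k, R1(k, 1e-4), R2(k, 1e-4), R1(k, 1e4), R2(k, 1e4))
```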