Using a novel genetic algorithm to assess peer influence on willingness to use preexposure prophylaxis in networks of Black men who have sex with men
Applied Network Science volume 6, Article number: 22 (2021)
Abstract
The DeGroot model for opinion diffusion over social networks dates back to the 1970s and models the mechanism by which information or disinformation spreads through a network, changing the opinions of the agents. Extensive research exists about the behavior of the DeGroot model and its variations over theoretical social networks; however, research on how to estimate parameters of this model using data collected from an observed network diffusion process is much more limited. Existing algorithms require large data sets that are often infeasible to obtain in public health or social science applications. In order to expand the use of opinion diffusion models to these and other applications, we developed a novel genetic algorithm capable of recovering the parameters of a DeGroot opinion diffusion process using small data sets, including those with missing data and more model parameters than observed time steps. We demonstrate the efficacy of the algorithm on simulated data and data from a social network intervention leveraging peer influence to increase willingness to take preexposure prophylaxis in an effort to decrease transmission of human immunodeficiency virus among Black men who have sex with men.
Background
The use of interventions for epidemic control requires both an effective intervention and sufficient uptake by the population. In the case of human immunodeficiency virus (HIV), Black men who have sex with men (BMSM) are disproportionately affected by \(\text {HIV}\) infection throughout the United States [1]. Preexposure prophylaxis (PrEP) has been shown to reduce lifetime infection risk and increase mean life expectancy; however, \(\text {PrEP}\) uptake is much lower for \(\text {BMSM}\). Negative \(\text {PrEP}\)-related stereotypes are prevalent and awareness of \(\text {PrEP}\) is low among \(\text {BMSM}\), particularly those outside of large cities and among men who have sex with men who are not gay-identified or easily reached through \(\text {PrEP}\) campaigns directed toward the gay community [1,2,3]. An ongoing study seeks to assess the feasibility of increasing \(\text {PrEP}\) uptake for \(\text {BMSM}\) through the use of a social network intervention: training network leaders to communicate the benefits of \(\text {PrEP}\) within their social networks [4]. In this paper, we analyze data from the pilot study of the intervention and present a methodological innovation needed to facilitate evaluation of the possible population-level impact of the intervention on HIV incidence.
Most \(\text {BMSM}\) are connected with other men who have sex with men (MSM) of color in their personal social and sexual networks. For that reason, it is possible to reach men through their network connections. In addition to serving as a vehicle for reaching high-risk \(\text {BMSM}\) in the community, networks are social environments that can be harnessed for interventions to increase \(\text {PrEP}\) awareness, correct \(\text {PrEP}\) misconceptions, and strengthen norms, attitudes, benefit perceptions, and skills for \(\text {PrEP}\) use. In the \(\text {HIV}\) epidemiology literature, the networks of \(\text {BMSM}\) have too often been studied only as drivers of disease transmission [5]. However, from a strengths-based perspective, social networks also carry positive, adaptive, and protective functions. \(\text {BMSM}\) confront stigma and exclusion due to homophobia in the Black community and racism in predominantly white gay communities, leading some to develop social institutions such as constructed families and house ball communities for support [6,7,8,9,10,11].
\(\text {HIV}\) prevention advice from personally known and trusted sources is likely to have greater impact than messages from impersonal sources. For that reason, recommendations that come from influential members of one’s close personal social network are especially powerful. Well-liked peers influence the actions and beliefs of their friends. Peers within the close personal networks of \(\text {BMSM}\) provide acceptance, trusted information, and guidance on courses of action, including in matters related to \(\text {HIV}\) prevention [9]. Messages that provide information but also target the recipient’s \(\text {PrEP}\)-related perceived norms, attitudes, intentions, and self-efficacy are likely to have the greatest impact because these theory-based domains influence the adoption of protective actions [12, 13].
The intervention is also grounded in principles of innovation diffusion theory [14]. After recruiting networks of \(\text {BMSM}\) in the community, the intervention involved selection of a cadre of members within each network who were most socially interconnected with others, most trusted for advice, and most open to \(\text {PrEP}\). These network leaders together attended sessions in which they learned about \(\text {PrEP}\) and its benefits and were systematically engaged to talk with friends about these topics, correct misconceptions and counter negative stereotypes about \(\text {PrEP}\), instill interest in \(\text {PrEP}\), and guide interested friends in accessing \(\text {PrEP}\) providers. Thus, the intervention engaged trusted and socially interconnected network leaders to function as agents who diffuse messages to others.
A preliminary analysis of the pilot study data demonstrates more favorable opinions of \(\text {PrEP}\) after the network leader intervention, both across all subjects and among only those subjects who did not attend leadership training [4]. This analysis supports the continuing use of this intervention from an individual risk perspective. The question remains, however, how impactful the intervention might be if implemented at scale, and how the benefits to HIV prevention compare to similarly intensive interventions. To make this assessment, we plan to use an agent-based epidemic model. In order to evaluate the intervention, it is necessary to translate the results from the pilot and ongoing study to network and intervention parameters in the epidemic modeling framework. For this, we need to understand the structure of influence observed in the local networks. Hence, we wish to estimate the parameters of the opinion diffusion process. Since existing methods for fitting appropriate opinion diffusion models vastly exceed the data available in both the pilot and full study, we developed the parameter estimation method presented here.
We first detail classes of models for opinion diffusion, assessing the appropriateness of each method for modeling the diffusion of opinions about \(\text {PrEP}\) through the observed networks and describing our chosen model. We then detail the genetic algorithm we developed to estimate the parameters of the selected opinion diffusion model. Finally, we assess the performance of the algorithm on simulated data and demonstrate its practical application using the pilot study data, discussing behavior of the algorithm and interesting features of the estimated opinion diffusion process.
Opinion diffusion models
A variety of models exist for opinion diffusion: the process through which opinions change and spread through a network. They vary in complexity, underlying assumptions, and the precision or structure of opinions generated. In this section, we outline the main classes of models and justify our choice of the DeGroot model [15] for our intended application.
Modeling considerations
Our primary considerations for selecting a model are the limited number of time steps available, small observed networks, expected features of \(\text {BMSM}\) networks, the structure of data collected, and a focus on agent-level assessments. The pilot study consisted of observations collected at two time steps (initial opinions and one measurement of opinions in a follow-up assessment after the intervention) and the full study will include three time steps (initial opinions and two follow-up assessments). These limited numbers of observations mean an appropriate model will not rely on the system reaching equilibrium. They also limit our ability to estimate a large number of parameters or assess the appropriateness of our selected model using data, informing our preference for simple models that have already been validated on the small networks (\(N=4\) to \(N=12\)) present in the pilot study. Since the networks in the study are themselves clusters from larger networks and contain disinformation, an appropriate model will allow for disinformation under that structure. Because the data consist of Likert-scale measures, the chosen model must make use of the precision available in the data, especially in the absence of more observations. Finally, given our interest in the influence of particular agents within the network, an appropriate model will involve agent-level parameters as opposed to network-level parameters.
Statistical physics
Statistical physics traditionally focuses on modeling the movement of particles but has increasingly been applied to other fields, including opinion diffusion, where agents take the place of particles [16]. Since these models typically involve taking the thermodynamic limit, which, in the case of network models, means assuming a network with infinitely many agents, statistical physics models are applied to large networks where each agent interacts with a negligible number of agents relative to the size of the network [17, 18]. Even for networks with hundreds or thousands of agents, this assumption is problematic, as behaviors of the diffusion process due to finite-size effects are absent from models using the thermodynamic limit [18]. Given the very small networks included in our data set, models that assume an infinite network are not appropriate. These models also focus on explaining the overall behavior of the diffusion process through the actions of individual agents [16, 17] while our goal is to explain the behavior of individual agents through their interactions. Finally, while other candidate models have been validated using data, model validation is largely absent from the statistical physics literature [17].
\(\text {SIR}\) model
Banerjee, Chandrasekhar, Duflo, and Jackson successfully modeled the diffusion of information about microfinance loans between households using a modification of the Susceptible-Infected-Recovered (SIR) epidemic model in which information about microfinance loans takes on the role of the disease [19]. The model was fit to population-level uptake data and does not incorporate agent-level parameters. Though this method would be appropriate for modeling the impact of the intervention on uptake of \(\text {PrEP}\) over the limited number of time steps collected, it does not allow for an assessment of how opinions about \(\text {PrEP}\) change within the network and would not make full use of the more precise opinion data collected.
Bayesian and naive learning
Unlike \(\text {SIR}\) models, Bayesian learning directly models opinions, rather than uptake; however, these models require distributional assumptions about each agent’s prior belief about the state of the world and the signals—or information—received from other agents, both marginally and conditional on the state of the world. Bayesian learning models also assume agents are able to calculate the likelihood of the signals they receive under their current worldview according to Bayes’ rule [20]. These assumptions are problematic both from a modeling perspective and because they imply an unrealistic level of sophistication in the learning mechanisms of each agent. Additionally, this sophistication makes modeling of disinformation difficult [20, 21].
Two different experiments conducted on networks of seven agents using binary signals compared Bayesian learning to non-Bayesian naive learning, in which agents adopt the majority belief expressed by themselves and their contacts. These experiments demonstrate that naive learning predicts the behaviors of individual agents better than Bayesian learning, especially in highly clustered or insular networks; however, they also indicate that agents behave with more sophistication than is implied by the naive model [21, 22]. Specifically, agents account for dependencies between signals received from agents who are connected to each other, though not to the extent a Bayesian learner would. A slight modification of the naive learning model where agents can place varying importance on the signals of other agents allows for more sophistication in the learning behavior of agents and can even approximate the behavior of a Bayesian learner, especially when the importance can vary with time [22]. This modification brings us to the DeGroot model for opinion diffusion.
DeGroot model
The DeGroot model is the foundational opinion diffusion model and the most influential non-Bayesian model, with the majority of non-Bayesian models being modifications of the DeGroot model [15, 20,21,22]. Under the DeGroot opinion diffusion model, agents update their opinions at each time step to be a weighted average of their own current opinion and the opinions of everyone with whom they interact. This process is described on a network of N agents by

\(X (t)=WX (t-1)\)

where X (t) is a vector of length N with \(x_i (t)\in [0,1]\) representing the opinion of agent i at time t and W is an \(N\times N\) matrix of weights with \(w_{ij}\) representing the weight that agent i places on the opinion of agent j. The elements in the weight matrix W are restricted so that \(0\le w_{ij}\le 1\) and \(\sum _{j=1}^Nw_{ij}=1\).
W is further restricted based on the social network as represented by the adjacency matrix A, in which \(a_{ij}=a_{ji}=1\) if agents i and j are connected in the network and \(a_{ij}=a_{ji}=0\) otherwise. Since agents can only be directly influenced by the opinions of agents with whom they interact, \(w_{ij}=0\) if agents are not connected in the network (\(a_{ij}=0\)). We set \(a_{ii}=1\) to allow agents to update their opinions based on their own current opinions [15]. While we considered extensions of the DeGroot model that include bounded confidence or decaying weight placed on the opinions of others, we lack the time steps to assess whether either extension is appropriate or to estimate the relevant parameters. Based on our intended application, the DeGroot model is the clear choice due to its simplicity, ability to model disinformation, capacity for using precise opinion data, and validation on small networks [21, 22].
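To make the update rule concrete, here is a minimal Python sketch of one DeGroot step (the study's implementation is in Julia; the network and weights below are purely illustrative):

```python
import numpy as np

# Illustrative 3-agent network in which everyone interacts with everyone
# (a_ii = 1, so agents also weigh their own current opinions).
W = np.array([
    [0.6, 0.2, 0.2],   # agent 1's weights on agents 1, 2, 3
    [0.3, 0.5, 0.2],
    [0.1, 0.1, 0.8],
])

def degroot_step(W, x):
    """One DeGroot update: X(t) = W X(t-1), a weighted average of the
    current opinions for each agent."""
    return W @ x

x0 = np.array([0.9, 0.5, 0.1])   # initial opinions in [0, 1]
x1 = degroot_step(W, x0)
```

Because each row of W sums to 1 and initial opinions lie in [0, 1], updated opinions remain in [0, 1].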
Methods
Available methods to estimate parameters of a DeGroot model are quite limited. Castro and Shaikh were able to estimate the parameters of a Stochastic Opinion Dynamics Model (SODM)—a variant of the DeGroot model which includes information from external sources and normal measurement error on observed opinions—using data collected from an observed opinion diffusion process over an online social network. While they developed both maximum-likelihood-based and particle-learning-based algorithms for estimating the parameters of the \(\text {SODM}\), these algorithms require more time steps than agents and at least 100 time steps, respectively [23, 24]. Since the requirements for the existing algorithms vastly exceed the available data for the \(\text {PrEP}\) studies and similar opinion diffusion applications, we developed a novel genetic algorithm capable of recovering the parameters of a DeGroot opinion diffusion process—the elements in the weight matrix W—using small data sets, including those with missing data and more model parameters than observed time steps.
Objective function
We propose a squared deviation function summed across all N agents and T time steps which measures how closely the predicted opinions \({\hat{x}}_i (t)\) match the observed opinions \(x_i (t)\):

\(f (W)=\sum _{i=1}^N\sum _{t=1}^T\left (x_i (t)-{\hat{x}}_i (t)\right)^2\)

Since lower values indicate a better fit, the optimal solution is one that minimizes the value of the objective function. To assess fit at the agent-level, we simply exclude the sum over all agents and instead compute a separate value of the objective function for each agent using

\(f_i (W)=\sum _{t=1}^T\left (x_i (t)-{\hat{x}}_i (t)\right)^2\)
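A sketch of this computation in Python, simulating the DeGroot process forward from the initial opinions. One assumption here: missing observations are encoded as NaN and simply skipped, which is one reasonable way to handle the missing data the algorithm supports.

```python
import numpy as np

def predict_opinions(W, x0, T):
    """DeGroot-predicted opinions for time steps 0 .. T-1, shape (T, N)."""
    preds = [np.asarray(x0, dtype=float)]
    for _ in range(T - 1):
        preds.append(W @ preds[-1])
    return np.array(preds)

def objective(W, observed, x0):
    """Squared deviation between observed and predicted opinions, summed
    over all N agents and T time steps; lower values indicate better fit."""
    pred = predict_opinions(W, x0, observed.shape[0])
    return np.nansum((observed - pred) ** 2)   # NaN entries (missing data) are skipped

def objective_by_agent(W, observed, x0):
    """Agent-level fit: one squared-deviation sum per agent (per gene)."""
    pred = predict_opinions(W, x0, observed.shape[0])
    return np.nansum((observed - pred) ** 2, axis=0)
```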
Genetic algorithm
Genetic algorithms, which mimic the natural processes through which the fittest genes survive to subsequent generations, are an ideal choice for fitting a DeGroot model; they have fewer assumptions and a lower chance of becoming stuck in local optima compared to other optimization algorithms [25]. Genetic algorithms consist of a population of chromosomes, or complete solutions to the optimization problem, which are each composed of genes, subsets of the solution consisting of either individual values or collections of values. In each iteration of the algorithm, these parent chromosomes undergo a variety of operators which modify the genes, producing a population of offspring chromosomes that differ from the population of parent chromosomes. This process repeats with the offspring chromosomes becoming the parent chromosomes of the subsequent iteration until some stopping criterion is met [25]. In the case of the simple DeGroot model, a chromosome is defined as the weight matrix W and a gene as a single row of the weight matrix, denoted \(W_i\). This means each gene corresponds to an individual agent and represents the weights that agent places on the opinions of other agents.
We adapt a genetic algorithm developed for generating D-optimal designs for constrained mixture experiments by Limmun, Borkowski, and Boonorm: a useful starting point since both the design matrix for mixture experiments and the weight matrix for a DeGroot model are row-stochastic, with the elements in each gene summing to 1 [25]. Though this common constraint on the matrices means the operators developed for mixture experiments are well suited for estimating the parameters of the DeGroot model, there is an important difference in the way the objective functions are used to assess the fitness of chromosomes and the genes within them. Under D-optimality, the fitness of a gene within the design matrix is dependent on the other genes and cannot be assessed separately from the fitness of the entire chromosome. In contrast, since each gene corresponds to an agent, the objective function for a DeGroot opinion diffusion process can be assessed at the gene-level by assessing how well the model predicts the opinions of a particular agent, as demonstrated in the above objective function. We leverage this ability to assess fit at the gene-level by incorporating a gene-swapping process into the algorithm along with the selection, blending, crossover, mutation, and survival operators described below.
Gene-swapping
While the objective function can be assessed at the agent- or gene-level, since agents update their opinions based on the opinions of others, the fitness of a gene depends on the predicted opinions of the other agents, which are, in turn, dependent on the other genes within the chromosome. Consider a case where agent 1 has an unfavorable view of \(\text {PrEP}\) which changes to a favorable view after speaking with their contacts, including agent 2. A gene corresponding to agent 1 which places high weight on agent 2 will fit well if the model predicts a favorable view for agent 2 but fit poorly if the model predicts an unfavorable view. In essence, when presented with a pair of genes corresponding to the same agent between two chromosomes, it is possible to assess which gene better predicts the opinions of the agent within its current chromosome, but the fitter gene is not guaranteed to continue to produce better predicted opinions when swapped with a less fit gene in an otherwise fitter chromosome.
For example, consider the following population of chromosomes (B, C, and D) for a network of \(N=3\) agents whose opinions are recorded across \(T=6\) time steps. The value of the objective function for each chromosome, broken down by gene, is given to the right of the chromosome. Genes of interest, along with their contributions to the value of the objective function, are bolded.
Compared to chromosome D, the second gene in chromosome B and the third gene in chromosome C produce predicted opinions closer to the observed opinions (objective function contributions of \(0.017<0.044\) and \(0.001<0.005\)); however, when the fitter genes are swapped with the corresponding genes in chromosome D, the value of the objective function for the new chromosome \(D^*\) increases, indicating the swapped genes perform worse than the original genes within chromosome D. This is due to changes in the predicted opinions over time when modified weights are placed on others’ opinions.
Within the algorithm, the gene-swapping process behaves in a manner very similar to the above example. The fittest chromosome, which we again call D, is identified and the fitness of each chromosome is assessed at the gene-level. We then compare the fitness of each gene within D to the fitness of the corresponding genes in all other chromosomes. In all cases where a fitter gene is present, the corresponding genes are swapped between D and the chromosomes containing fitter genes, resulting in \(D^*\) and modified versions of all other chromosomes that previously contained fitter genes. After all swaps are completed, we compare the fitness of D to the fitness of \(D^*\). All swaps are retained if \(D^*\) is the fitter chromosome and all swaps are rejected, returning all chromosomes to their previous versions, if D is the fitter chromosome.
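The process can be sketched as follows. The helper functions `fitness` and `gene_fitness` stand in for the chromosome-level and gene-level objective values; as a simplification, gene fitness is assessed once, before any swaps.

```python
import numpy as np

def gene_swap(population, fitness, gene_fitness):
    """Gene-swapping sketch. `fitness(W)` scores a whole chromosome and
    `gene_fitness(W)` returns one score per gene (row); lower is fitter.
    Returns a new population with all swaps kept, or the original
    population if the swaps do not improve the fittest chromosome."""
    pop = [W.copy() for W in population]
    d = min(range(len(pop)), key=lambda k: fitness(pop[k]))  # fittest chromosome D
    D_old = pop[d].copy()
    base = gene_fitness(D_old)
    for k in range(len(pop)):
        if k == d:
            continue
        other = gene_fitness(pop[k])
        for i in range(len(base)):
            if other[i] < base[i]:               # fitter gene found in chromosome k
                pop[d][i], pop[k][i] = pop[k][i].copy(), pop[d][i].copy()
    if fitness(pop[d]) < fitness(D_old):         # D* must beat D to keep the swaps
        return pop
    return [W.copy() for W in population]        # reject all swaps
```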
Selection
The purpose of the selection operator is to preserve the current optimal solution if the subsequent iteration of the algorithm is unable to find a better one. To this end, we implement selection with elitism, where the best chromosome is identified prior to each iteration of the algorithm. This elite chromosome undergoes gene swapping with non-elite chromosomes and then remains unchanged for the remainder of the current iteration of the algorithm, while the remaining chromosomes undergo the subsequent operators.
Blending
For blending, all non-elite chromosomes are randomly paired. Blending occurs for each pair of genes between paired parent chromosomes independently with probability \(p_b\) and produces new genes which are weighted averages of the chosen gene pairs. Suppose chromosomes B and C are paired and that \(B_i\) and \(C_i\) are the ith genes in chromosomes B and C, respectively. Assuming blending occurs for row i, the offspring genes \(B^*_i\) and \(C^*_i\) are defined as

\(B^*_i=\beta B_i+ (1-\beta)C_i \quad \text {and}\quad C^*_i= (1-\beta)B_i+\beta C_i\)

where \(\beta \sim Unif (0,1)\) is a randomly selected blending factor. Because each offspring gene is a convex combination of genes whose weights sum to 1, the offspring weights also sum to 1.
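A sketch of the blending operator, assuming offspring are the complementary weighted averages \(\beta B_i+(1-\beta)C_i\) and \((1-\beta)B_i+\beta C_i\):

```python
import numpy as np

def blend(B, C, p_b, rng):
    """Blending sketch: for each gene (row) pair between paired chromosomes,
    with probability p_b replace the pair with complementary weighted
    averages using a blending factor beta ~ Unif(0, 1)."""
    B2, C2 = B.copy(), C.copy()
    for i in range(B.shape[0]):
        if rng.random() < p_b:
            beta = rng.uniform()
            B2[i] = beta * B[i] + (1 - beta) * C[i]
            C2[i] = (1 - beta) * B[i] + beta * C[i]
    return B2, C2
```

Note that blending preserves row sums, so offspring remain valid weight matrices without renormalization.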
Crossover
We use a modified version of the within-parent crossover operator developed by Limmun, Borkowski, and Boonorm [25]. Modifications are required to adequately explore the relatively large parameter space of the DeGroot model, in contrast to the smaller design space in constrained mixture experiments. Within-chromosome crossover occurs with probability \(p_c\) independently for all genes in the non-elite chromosomes. For genes selected for crossover, all non-fixed weights are randomly permuted within the gene.
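A sketch of crossover for a single gene; `fixed_mask` marks the weights held fixed (for example, zeros for unconnected agents):

```python
import numpy as np

def crossover_gene(row, fixed_mask, rng):
    """Within-chromosome crossover sketch: randomly permute the non-fixed
    weights within a gene (row), leaving fixed weights in place."""
    row = row.copy()
    free = np.flatnonzero(~fixed_mask)
    row[free] = rng.permutation(row[free])
    return row
```

Since the same values are rearranged, the row still sums to 1 after crossover.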
Mutation
Mutation of a gene occurs with probability \(p_m\) independent of all other genes in the non-elite chromosomes. For mutation, a weight w is selected randomly from all non-fixed weights within the gene and a random perturbation \(\varepsilon \sim N (0,\sigma ^2)\) is added to the selected weight to produce \(w^*=w+\varepsilon\). All other non-fixed weights within the row are then scaled by \(\frac{1-w_{fixed}-w^*}{1-w_{fixed}-w}\), where \(w_{fixed}\) is the sum of all fixed weights within the row, so that the row sums to 1. We provide more details on fixed weights below in the Other Features section. To avoid division by zero and maintain weights in the interval [0,1], we implement the following special cases:

- If \(w^*<0\), \(w^*\) is set to 0.
- If \(w^*>1-w_{fixed}\), \(w^*\) is set to \(1-w_{fixed}\) and all other non-fixed weights in the row are set to 0.
- If the selected weight \(w=1-w_{fixed}\), the scaling factor is undefined, so the excess weight of \(1-w_{fixed}-w^*\) is evenly distributed between all other non-fixed weights within the row.
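A sketch of the mutation step for a single gene, assuming the rescaling factor \((1-w_{fixed}-w^*)/(1-w_{fixed}-w)\) and the edge-case handling described above:

```python
import numpy as np

def mutate_gene(row, fixed_mask, sigma, rng):
    """Mutate one gene (row of W): perturb a randomly chosen non-fixed weight
    by N(0, sigma^2), then rescale the remaining non-fixed weights so the row
    still sums to 1, with special cases to keep all weights in [0, 1]."""
    row = row.copy()
    free = np.flatnonzero(~fixed_mask)
    w_fixed = row[fixed_mask].sum()
    j = rng.choice(free)
    w = row[j]
    # Clipping enforces the first two special cases: w* < 0 becomes 0 and
    # w* > 1 - w_fixed becomes 1 - w_fixed.
    w_star = float(np.clip(w + rng.normal(0.0, sigma), 0.0, 1.0 - w_fixed))
    others = free[free != j]
    if len(others) == 0:
        row[j] = 1.0 - w_fixed     # only one non-fixed weight: it keeps all the mass
        return row
    if np.isclose(w, 1.0 - w_fixed):
        # Selected weight held all non-fixed mass: spread the excess evenly.
        row[others] += (1.0 - w_fixed - w_star) / len(others)
    elif np.isclose(w_star, 1.0 - w_fixed):
        row[others] = 0.0          # mutated weight takes all non-fixed mass
    else:
        # Rescale so the other non-fixed weights absorb the change.
        row[others] *= (1.0 - w_fixed - w_star) / (1.0 - w_fixed - w)
    row[j] = w_star
    return row
```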
Survival
The survival operator is a means to retain only offspring chromosomes that are an improvement over the corresponding parent chromosomes. It occurs after mutation, blending, and crossover, comparing each pair of parent and offspring chromosomes and retaining the fitter of each pair. The retained chromosomes are then subject to gene swapping with their corresponding rejected chromosomes. The final retained chromosomes become the parent chromosomes for the subsequent iteration of the algorithm.
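A minimal sketch of the survival comparison, where chromosomes are opaque objects scored by a `fitness` function (lower is fitter):

```python
def survive(parents, offspring, fitness):
    """Keep the fitter of each parent/offspring pair; the rejected member of
    each pair then becomes the partner for gene swapping (not shown here)."""
    return [o if fitness(o) < fitness(p) else p
            for p, o in zip(parents, offspring)]
```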
Other features
The fixed values mentioned in the mutation operator allow for specification of elements in the weight matrix W whose values are either known or assumed. Most often, weights are fixed at 0 because agents are not linked in the network (\(w_{ij}=0\) where \(a_{ij}=0\)).
For early iterations of the algorithm, the purpose of the crossover and mutation operators is to explore the parameter space of the DeGroot model. To this end, we use relatively high values of the probabilities (\(p_c,p_m\)) and of \(\sigma\). In later iterations, the purpose shifts to refining an existing solution. Since the blending operator is better suited to this purpose, we progressively reduce the probabilities of crossover and mutation and the mutation variance \(\sigma ^2\) while increasing the probability of blending (\(p_b\)). These changes are implemented using a multiplicative adjustment each time a specified number of iterations with no improvement is reached (until the probabilities and \(\sigma\) attain specified minimum or maximum values). The hyperparameters of the algorithm used in both the simulation study and data analysis are given in Table 1. The ranges of operator probabilities (\(p_b,p_c,p_m\)) and \(\sigma\) are informed by the work of Limmun, Borkowski, and Boonorm [25] with adjustments to accommodate searching a larger space. The remainder of the parameters were selected based on our experience using the algorithm to heavily favor convergence over computational efficiency with the intention of the values having minimal impact on results.
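The multiplicative schedule can be sketched as follows; the constants here are illustrative assumptions, not the values in Table 1:

```python
def adjust_rates(p_b, p_c, p_m, sigma, k=1.25, p_min=0.05, p_max=0.9, sigma_min=0.01):
    """Illustrative multiplicative schedule: after a stall, shift effort from
    exploration (crossover, mutation) toward refinement (blending), bounded
    by specified minimum and maximum values."""
    return (min(p_b * k, p_max),
            max(p_c / k, p_min),
            max(p_m / k, p_min),
            max(sigma / k, sigma_min))
```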
We also implement a chromosome reintroduction process that serves either to support a prior belief that agents will place high weight on their own opinions or to aid the refinement process. In either case, after a specified number of iterations without improvement, the worst chromosome—as identified by the highest value of the objective function—is removed from the population and replaced with a reintroduced chromosome. To support a prior belief about high self-weight, the reintroduced chromosome has \(1-w_{fixed}\) along the diagonal and zero for all other non-fixed weights. To aid the refinement process, the reintroduced chromosome is a clone of the elite chromosome, which would otherwise be exempted from the remaining operators by the selection operator. We reintroduce the elite chromosome in the simulation study and data analysis for this paper. A chromosome with \(1-w_{fixed}\) on the diagonal is always included in the population of initial chromosomes.
Simulation study
We first demonstrate the performance of the algorithm implemented in Julia [26] on simulated data varying the factors described in Table 2. The values chosen are informed by the intended application of the algorithm to the pilot and full \(\text {PrEP}\) studies as well as possible features of other network studies ill-suited to existing methods: those with few observations per agent on smaller networks. Since the smallest network in the pilot study had four agents, we use \(N=4\) as the smallest network in the simulation study. Though data collection methods that can practically be applied to networks of 50 agents should allow for the collection of a sufficient number of time steps to use alternate algorithms, we include a network of size \(N=50\) to ensure the range of network sizes considered includes all networks where this algorithm is the most practical option.
The numbers of time steps \(T=2\) and \(T=3\) are based on the number of time steps in the pilot and full study, respectively. While the motivation for the development of this method is the ability to fit opinion diffusion models using small data sets, the numbers of time steps available in the \(\text {PrEP}\) studies are extremely small. We include time steps of \(T=6\) and \(T=11\) (initial opinions plus five and ten follow-up assessments, respectively) in order to assess the impact of these extremely small samples on algorithm performance compared to other small samples. Though at most \(T=11\) time steps are ever provided to the algorithm, we simulate data over 21 time steps so that the time steps provided to the algorithm and the time steps withheld serve as pseudo training and testing sets, respectively, allowing us to assess the ability of the algorithm to project opinions past the time steps on which data were collected. We use 21 time steps so that the number of time steps over which we extrapolate is the same as the number of time steps past the initial provided to the algorithm for the largest number of time steps used (\(T=11\)).
In each simulation, we generate an Erdős–Rényi random network of size N with connection probability \(p=\frac{d}{N-1}\) based on a target degree of d, excluding mathematically impossible combinations. While the overall structure of the social networks being leveraged is unlikely to be a random network, the use of relatively dense random networks reflects the attempt to sample clusters in the \(\text {BMSM}\) social network for intervention. We enforce reachability between all pairs of nodes by rejecting any generated networks that are not connected. This rejection does inflate the average degree beyond the target, with the inflation being worse for smaller degrees as seen in Table 3. We address this in our analysis of the simulation study results by using the observed degree in place of the categorical target degree where reasonable. We use the same approach with self-weight, though the mean self-weight within each target category is equal to the target when rounded to two decimal places.
We then randomly generate a weight matrix subject to a target self-weight. Each \(w_{ii}\) is drawn from a beta distribution with concentration \(\kappa =\alpha +\beta =10\) with \(\alpha\) and \(\beta\) derived from the concentration and the target mean. Weights are fixed as indicated by the adjacency matrix of the social network so that \(w_{ij}=0\) if \(a_{ij}=0\). The remaining \(1-w_{ii}\) in each row is randomly distributed between all weights not fixed at 0. This weight matrix is then used to simulate the opinion diffusion process according to the DeGroot model over twenty time steps past the initial (a total of 21). Initial opinions are drawn from a Unif (0, 1) distribution. Finally, we create the “observed” data set by restricting to T time steps and randomly removing a specified percentage of observations, ensuring that no initial observations are missing and at least one other observation is non-missing for each agent. Again, combinations of network size and missing data which are incompatible with the above requirements are excluded. We then use the algorithm to fit the DeGroot model to the simulated data set, repeating the process ten times for each generated data set.
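The data-generating process can be sketched as follows. Two assumptions in this sketch: the remaining weight is split over neighbors with a uniform Dirichlet draw (one reasonable reading of "randomly distributed"), and the missing-data step is omitted.

```python
import numpy as np

def connected(A):
    """Breadth-first check that every agent is reachable from agent 0."""
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in np.flatnonzero(A[i]):
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == A.shape[0]

def simulate(N, d, target_self_weight, T, rng, kappa=10.0):
    """Generate a connected Erdos-Renyi network with p = d/(N-1), draw
    self-weights from a beta distribution with concentration kappa and mean
    equal to the target, distribute the remaining weight over neighbors,
    and run the DeGroot diffusion from Unif(0, 1) initial opinions."""
    p = d / (N - 1)
    while True:                                   # reject disconnected networks
        A = (rng.random((N, N)) < p).astype(float)
        A = np.triu(A, 1)
        A = A + A.T + np.eye(N)                   # symmetric, with a_ii = 1
        if connected(A):
            break
    alpha = kappa * target_self_weight
    beta = kappa - alpha
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i] = rng.beta(alpha, beta)
        nbrs = np.flatnonzero(A[i])
        nbrs = nbrs[nbrs != i]
        W[i, nbrs] = (1.0 - W[i, i]) * rng.dirichlet(np.ones(len(nbrs)))
    X = [rng.random(N)]                           # initial opinions
    for _ in range(T - 1):
        X.append(W @ X[-1])
    return A, W, np.array(X)
```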
Performance is assessed for correct prediction of opinions across the time steps used to fit the weight matrix (T), correct prediction of opinions across the time steps extrapolated past those used to fit the weight matrix (\(21-T\)), and recovery of weights. We used the root-mean-square error (RMSE) for all assessments:

\(\text {RMSE}_{fit}=\sqrt{\frac{1}{NT}\sum _{i=1}^N\sum _{t=1}^T\left (x_i (t)-{\hat{x}}_i (t)\right)^2}\)

and

\(\text {RMSE}_{recovery}=\sqrt{\frac{1}{P}\sum _{p=1}^P\left (w_p-{\hat{w}}_p\right)^2}\)

where P is the number of elements not fixed at zero in the weight matrix (the number of parameters to be estimated) and \(w_p\) and \({\hat{w}}_p\) represent the true and estimated values of the pth such element, respectively.
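Both assessments can be sketched directly from their definitions, with missing observations again encoded as NaN:

```python
import numpy as np

def rmse_opinion(observed, predicted):
    """RMSE between observed and predicted opinions over agents and time steps."""
    return float(np.sqrt(np.nanmean((observed - predicted) ** 2)))

def rmse_recovery(W_true, W_hat, free_mask):
    """RMSE over the P weights not fixed at zero (the estimated parameters)."""
    diff = (W_true - W_hat)[free_mask]
    return float(np.sqrt(np.mean(diff ** 2)))
```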
Results
Since possible proportions of missing data depend on the number of time steps and possible degrees depend on the network size, we assess the effect of each pair of variables together. We also make use of the mean number of observations per agent, which incorporates information about both missingness and time steps. For each variable or combination of variables, we assess the ability of the algorithm to recover the model parameters and investigate whether high recovery RMSE is the result of the algorithm identifying a solution that fits the data poorly or identifying a solution that fits the data well without recovering the model parameters. We also explore whether fit on the time steps used for estimation is indicative of fit on time steps extrapolated past those used to estimate the weight matrix. Because RMSE is bounded below and many of the plots used show right skew, we present all summary statistics for RMSE using the median and IQR.
Network size and degree
Figure 1 and Table 4 show a clear decrease in the variability of the weight-recovery RMSE with increasing degree and network size. They also indicate that recovery tends to improve with increasing degree and size, though the differences are small and the relationship is inconsistent for low-degree networks. As noted previously, the method for generating networks inflates the actual degree beyond the target for low-degree networks. This effect is worse for larger networks, as indicated in Fig. 1 and confirmed by Table 5. The high variability for low-degree networks, combined with the observed degree being inflated beyond the target, may explain the trend in observed medians for networks with a target degree of \(d=2\).
Initially, both of these relationships seem counterintuitive since increasing either network size or degree increases the number of model parameters to be estimated. We propose that the observed relationship is due to the dependencies between weights within a row, resulting from the restriction that the weights must sum to one. For a network where agents have a degree of two, there are at most three nonzero weights within each row. If one of these weights differs from the true weight, either one or both of the others must also differ from the true weight, inflating the recovery RMSE. As the degree increases, if one weight differs, it becomes possible for other weights in the row to closely match the truth since the excess weight can be distributed between or taken from the many other weights within the row. For networks with low degree, once a single weight within a row differs, this effect cascades through the rest of the row and produces high recovery RMSE, explaining both the higher median and variability for networks with low degree.
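The cascade described above can be checked with a small calculation. The numbers below are hypothetical: one weight in an otherwise uniform row is overestimated by a fixed excess, and the row is re-normalized by spreading that excess evenly over the remaining entries.

```python
import numpy as np

def row_rmse_after_error(degree, excess=0.15):
    """RMSE of one row after a single weight is off by `excess`, with the row
    kept summing to one by adjusting the other entries evenly. Each row has
    degree + 1 free weights (self-weight plus one per neighbor)."""
    k = degree + 1
    true_row = np.full(k, 1.0 / k)
    est_row = true_row.copy()
    est_row[0] += excess
    est_row[1:] -= excess / (k - 1)          # row must still sum to one
    assert np.isclose(est_row.sum(), 1.0)
    return np.sqrt(np.mean((est_row - true_row) ** 2))

print(row_rmse_after_error(2))    # degree 2: the error concentrates in 3 weights
print(row_rmse_after_error(10))   # degree 10: the same excess dilutes over 11
```

Under this even-spread assumption the row RMSE works out to excess/\(\sqrt{d}\), so the identical single-weight error costs \(\sqrt{5}\) times more recovery RMSE at degree 2 than at degree 10.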
This explanation does not address whether the algorithm struggles to identify good solutions for networks with low degree, since the cascading effect occurs whether or not these weights produce predicted opinions close to those observed. In fact, we would expect model parameter recovery to be easiest for low-degree networks since there are fewer model parameters to recover. This is supported by the presence of near-perfect model parameter recovery for small networks with low degree, as demonstrated in Fig. 1 and Table 6. Figure 2 confirms that these low-degree networks with high recovery RMSE are not the result of the algorithm failing to identify an adequate solution but are instead the result of solutions that fail to recover the model parameters while still predicting opinions with reasonable accuracy. This is especially true for small networks (which we consider in more detail below).
Though the effect of network size is slightly complicated by self-weight—as we will discuss later—we postulate that the decrease in the median and IQR of recovery RMSE for larger networks is a result of larger networks mitigating the dependency induced between weights by the degree of each agent. While low degree ensures that a single poorly recovered weight will result in more poorly recovered weights within the row, in larger networks that problematic row has less influence on other rows. This is supported by Fig. 3, which demonstrates that—within a target degree—larger networks result in less variability in model parameter recovery for similar values of fit RMSE. This figure also confirms that the higher median and variability in recovery RMSE for smaller networks is not the result of the algorithm failing to identify a solution that fits the data, since the marginal distribution of fit RMSE is comparable across network sizes.
Self-weight
Figure 4 and Table 7 demonstrate that lower self-weights are more difficult to recover and produce greater variability in the RMSE. We theorize that this discrepancy is partially explained by the fact that the same target self-weight was used for all agents. Agents with high self-weight necessarily place low weight on the opinions of other agents, so their predicted opinions will remain fairly stable over time and as the opinions of other agents change during estimation. As a result, the fit of a row with high self-weight is robust to incorrect predicted opinions of other agents, implying robustness to incorrect estimated weights for other agents. The opposite is true for agents with low self-weight: the fit of a row is highly dependent on the estimated weights of other agents and the predicted opinions they generate, making estimation less stable.
However, Fig. 5 demonstrates that networks with high recovery RMSE do not have particularly high fit RMSE, implying that the above explanation alone does not explain the patterns observed. We suggest that the high recovery RMSE for networks with low self-weight is caused by a phenomenon similar to the one suggested in our discussion of the effect of degree: the strong dependencies between rows within networks with low self-weight result in a single incorrect weight affecting the rows for agents on whom incorrect weight is placed. Including self-weight in our assessment of size and degree supports this idea, as demonstrated by Fig. 6, which shows that all of the highest recovery RMSE values are from networks with low self-weight. This is consistent with the hypothesized effects of both degree and self-weight. When a single weight within a row of low degree is incorrect, the other weights in the row must also be incorrect. When agents have low self-weight, these incorrect weights in a single row spread to other rows, which must also contain multiple incorrect weights because of the low degree. As the geodesic distance between the agent with incorrect weights and other agents increases, the rows corresponding to those other agents become progressively less influenced by the incorrect weights. This allows size to mitigate the effect of low self-weight as we suggested it did for low degree.
Missingness and time steps
Figure 7 shows that model parameter recovery improves with more time steps of data used for estimation and lower proportions of missing data, though both Fig. 7 and Table 8 indicate diminishing improvement as the number of time steps increases. While Table 8 also demonstrates that variability in RMSE tends to increase with more time steps and higher proportions of missing data, it is unclear whether this relationship holds for low proportions of missing data. Figure 8 confirms that the decreased recovery ability for lower numbers of time steps is not the result of poor fit on the time steps used for estimation; instead, the algorithm recovers parameters better when more data are available, as expected.
The relationships between model parameter recovery and both missingness and time steps are intuitive in that fewer observations result in worse recovery of model parameters. However, neither Fig. 7 nor Table 8 makes clear whether missing data is problematic solely because it reduces the amount of information available for fitting the model. To address this, we present Fig. 9, which demonstrates that, for a given mean number of observations per agent, the algorithm is more accurate and precise when provided with more complete data from fewer time steps (see also Table 9). We suggest this is because the data are missing at random rather than distributed evenly between agents or across time points, making some observations more valuable than others.
Figure 9 confirms that the distribution of missing data between time steps is an issue, based on the comparison of networks with two time steps to those with three time steps past the initial observation and 50% missing data. Because all agents are required to have at least one nonmissing observation past the initial one, all agents in networks with three time steps and 50% missing data have two observations, but the second observation could occur at either the first or second time step. While both setups result in the same number of observations per agent, recovery RMSE is higher for the networks with missing data, indicating that missing data increases recovery RMSE beyond what would be expected from the decreased number of observations per agent.
As shown in Fig. 10, with the exception of simulations with only two time steps, recovery RMSE roughly increases with fit RMSE, and larger values of both measures tend to come from networks with more missing data. The lack of a relationship for simulations with two time steps indicates that fit on the observed time steps is a poor indicator of parameter recovery when only two time steps are available. While Fig. 8 also indicates the presence of unusually high recovery RMSE relative to fit RMSE for some networks with high missingness, high missingness alone does not explain this. In Fig. 11 we can see that all such networks have either low self-weight, low degree, or both.
We see a similar pattern with fit on the observed time steps and fit on extrapolated time steps, where networks with only two time steps behave differently from all others. Figure 12 shows the relationship between fit on the time steps provided to the algorithm (observed time steps) and fit on those not provided to it (extrapolated time steps). With the exception of simulations where only two time steps are provided to the algorithm, fit on the observed time steps is indicative of fit on extrapolated time steps, with the strength of this relationship increasing with the number of time steps provided to the algorithm. Since RMSE for the time steps provided to the algorithm ignores the presence of missing data, Fig. 12 includes clusters with higher \(RMSE_{fit}\) and \(RMSE_{ext}\) for higher proportions of missingness.
The collection of points in a vertical line for the simulations with only two time steps demonstrates the lack of a relationship, which is consistent with worse parameter recovery for fewer time steps even when the estimated parameters fit the data well, as seen in Figs. 8 and 10. Fit on observed time steps is indicative of fit on extrapolated time steps across network size, degree, self-weight, and missingness, as demonstrated by Figs. 13, 14, 15, and 16. The notable feature in all of these plots is the collection of points in a vertical line discussed above.
Application: diffusion of willingness to use \(\text {PrEP}\) among \(\text {BMSM}\)
Study overview
The pilot intervention study was conducted in Milwaukee, WI in 2016–2017 with social networks of \(\text {BMSM}\) enrolled in the study. Network leaders—members of each network who were most socially interconnected with others in the same network, as well as those whose friendships linked them to network members who would not otherwise be reached—were selected to attend a group intervention that met weekly for five weeks, 2 h per session. Intervention sessions provided \(\text {PrEP}\) education and skills training in how to endorse \(\text {PrEP}\) to friends. All participants (network leaders and other network members) completed assessments at enrollment and three months later. The research was approved by the Medical College of Wisconsin Institutional Review Board (IRB), and written informed consent was provided by all study participants. Further information on procedures is available elsewhere [4].
Recruitment of social networks
Five social networks were enrolled in the study. Recruitment of each network began by identifying, approaching, and recruiting an initial seed in a community venue. Entry criteria for seeds were reporting male sex at birth; describing oneself as African American, Black, or multiracial; being age 18 or older; reporting sex with males in the past year; and not reporting being \(\text {HIV}\) positive. Once consented and enrolled in the study, seeds were asked to identify friends who were \(\text {MSM}\). First names were recorded by the study staff, and the seed was provided with study invitation packets to distribute to the listed friends. Interested friends of the seed were then enrolled following the same procedures as for the seed, with the same entry criteria except that study eligibility was not restricted based on serostatus. This first ring of friends surrounding the seed was then asked to identify their own \(\text {MSM}\) friends and to give an invitation packet to each named friend, with enrolled friends constituting the second and final ring extending outward from the seed. The recruited networks of the five seeds had a total of 40 unique members, and networks were composed of between 4 and 12 participating members.
Assessments
Assessment measures were completed using self-administered questionnaires during individual sessions at the time of the baseline and follow-up visits. Key measures for this analysis were \(\text {PrEP}\) self-efficacy and \(\text {PrEP}\) willingness. \(\text {PrEP}\) self-efficacy was assessed with eight items. Each item asked participants to use a 4-point scale to indicate how difficult, from very hard to very easy, it would be to engage in an action (sample item: “How difficult or easy would it be for you to visit a doctor who can provide \(\text {PrEP}\)?”). \(\text {PrEP}\) willingness was assessed with three items. Each item asked participants to indicate their strength of agreement using a 5-point Likert scale (from “strongly disagree” to “strongly agree”; sample item: “I would be willing to go on \(\text {PrEP}\) if I had a casual sex partner who was \(\text {HIV}\)-positive”).
Methods
We selected willingness and self-efficacy for this analysis since increasing willingness to take \(\text {PrEP}\) is a key study aim and previous work has demonstrated a direct association between self-efficacy and \(\text {PrEP}\) use [27]. A single value per agent was created for each measure by summing all the component Likert-scale items. Possible values ranged from 8 to 32 for self-efficacy and from 3 to 15 for willingness. Since initial observations for both willingness and self-efficacy were missing for agent 11 in network 2 and agent 4 in network 3, we imputed these values using follow-up data for those agents where available or the median initial response of all other agents.
As the DeGroot model uses continuous data on the interval [0, 1], both scale measures were converted to continuous values for model fitting and back-transformed for evaluation of model fit using the following processes:
Forward transformation

1. Begin with data on an n-point composite scale.
2. Divide the interval [0, 1] into n subintervals of equal width.
3. An opinion of x on the composite scale takes on the middle value, y, of the xth subinterval on the continuous scale.

Back transformation

1. Begin with data on a continuous [0, 1] interval to be converted to an n-point composite scale.
2. Multiply the continuous opinion y by n.
3. Round the result to an integer to produce an opinion on the composite scale.
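The two transformations can be written compactly. The sketch below assumes composite-scale opinions are indexed 1 through n, rounds half-up (with a small tolerance so floating-point noise at the subinterval midpoints cannot flip a bin), and clamps the result to the valid range; these details are our reading of the steps above rather than the study's exact implementation.

```python
def forward(x, n):
    """Opinion x in {1, ..., n} -> midpoint of the xth of n equal-width
    subintervals of [0, 1]."""
    return (x - 0.5) / n

def back(y, n):
    """Continuous opinion y in [0, 1] -> n-point composite scale: multiply by n
    and round half-up, clamping to {1, ..., n}. The 1e-9 tolerance keeps exact
    midpoints (y * n = x - 0.5) from rounding down due to floating-point error."""
    return min(n, max(1, int(y * n + 0.5 + 1e-9)))
```

A round trip through both transformations returns the original composite-scale opinion, e.g. for the 13-point willingness and 25-point self-efficacy composites.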
The transformation to a continuous scale would allow us to use the objective function used for the simulation study,
but the optimal solution in the case of Likert-scale data is the one that best predicts observed opinions on the original scale. Specifically, predicted opinions that differ from the observed opinions on the continuous scale should only be penalized if they also differ on the original scale after the rounding step in the back transformation. We accomplish this through the use of the objective function
where \(B ({\hat{x}}_i (t),x_i (t))\) measures the absolute deviation between the observed and predicted opinions on the Likert scale. We refer to predicted opinions where \(B ({\hat{x}}_i (t),x_i (t))=0\) as being in the correct bin, or as correctly predicted opinions. The inclusion of the absolute deviation on the continuous scale serves to penalize only estimates outside of the correct bin. Since the absolute deviation on the Likert scale as measured by \(B ({\hat{x}}_i (t),x_i (t))\) is already a second penalty for predicted and observed opinions that differ greatly, we do not square the absolute deviation on the continuous scale as in the continuous version of the objective function. The fit can be assessed at the row or agent level by summing only across time steps, obtaining an agent-level value of the objective function using
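One plausible form of such an objective is sketched below, assuming B is the absolute bin deviation on the composite scale and the unsquared continuous deviation is added only when a prediction falls outside the correct bin; the study's exact functional form may differ. Predictions and observations are passed as flat sequences of opinion values over all agents and time steps.

```python
def bin_of(y, n):
    """Bin on the n-point composite scale for continuous opinion y
    (multiply by n, round half-up, clamp to {1, ..., n})."""
    return min(n, max(1, int(y * n + 0.5 + 1e-9)))

def objective(pred, obs, n):
    """Sum over observations of the absolute bin deviation B plus the absolute
    continuous deviation, with no penalty at all when B = 0 (the prediction
    lands in the correct bin)."""
    total = 0.0
    for y_hat, y in zip(pred, obs):
        b = abs(bin_of(y_hat, n) - bin_of(y, n))
        if b > 0:
            total += b + abs(y_hat - y)
    return total
```

Under this form, a prediction that lands in the correct bin contributes nothing, so the optimizer is free to move within a bin without penalty.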
Although there are only two assessment time points, the follow-up assessment was conducted three months after the initial assessment, meaning agents likely engaged in multiple interactions between the initial and follow-up assessments. To address this, we define a time step to be one month and treat time steps \(t=1\) and \(t=2\) as missing for all agents. This approach allows information and opinions shared by the network seed to spread to their friends, who can then share the information with their own friends, instead of assuming information from the seed has not yet reached the second ring of recruitment.
Since the recruitment process does not result in complete information about connections within the network, we construct two different adjacency matrices for each network: one where we begin with a matrix of zeros and add only connections we can be certain exist (denoted Build in reporting of results) and another where we begin with a matrix of ones and remove any connections we are certain do not exist (denoted Remove in reporting of results).
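The two constructions amount to opposite starting points updated with the same partial information. In the sketch below the helper and its tie-list arguments are hypothetical stand-ins for whatever connection information the recruitment data provide.

```python
import numpy as np

def build_and_remove(n, known_ties, known_non_ties):
    """Bounding adjacency matrices for a partially observed network of n agents:
    'build' starts with no ties and adds only ties known to exist, while
    'remove' starts with all ties and deletes only ties known to be absent."""
    build = np.zeros((n, n), dtype=int)
    remove = np.ones((n, n), dtype=int)
    np.fill_diagonal(remove, 0)              # self-weights are handled separately
    for i, j in known_ties:
        build[i, j] = build[j, i] = 1
    for i, j in known_non_ties:
        remove[i, j] = remove[j, i] = 0
    return build, remove
```

As long as the two lists are consistent, the build matrix is elementwise a subset of the remove matrix, so the true network lies between them.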
For all four combinations of outcome measure and adjacency matrix construction, we run the algorithm ten times. Though mathematically derived error estimates for either parameter estimates or predicted opinions are impractical, conducting multiple runs of the algorithm allows us to generate estimates of algorithmic variability for both. While we do not present point estimates for opinions as part of our results, we include standard deviations for all estimated weights. In order to assess algorithmic variability and predictive performance on a network level, we use modified forms of the RMSE measures used for the simulation study adapted to account for the unknown model parameter values and scale data:
and
where C represents the number of possible bins and is included to allow comparison of variability in estimated weights between the willingness and self-efficacy measures, \(\bar{w}_{ij}\) is the mean estimated weight agent i places on the opinion of agent j across the ten runs, and \(\bar{w}_p\) is the pth such mean, indexing over the weights not fixed at zero.
Results
To provide context for the following results, we include Table 10, which identifies both the seed and leader(s) in each social network. Note that some networks had a higher density of network leaders (range: one-eighth to one-third of network members).
Accuracy of predicted opinions
We present Fig. 17, which demonstrates that the models are able to predict opinions with reasonable accuracy, while also highlighting some limitations. It is worth noting that, while deviation between observed and predicted opinions is measured in bins—units on the composite scale—for both willingness and self-efficacy, the size of a bin is not the same between measures: a deviation of one bin is a larger difference for willingness than for self-efficacy.
The presence of the 10 observations that are seven bins away from the observed opinions using the build matrix for self-efficacy is particularly interesting. These observations are all from agent 5 in network 1. As can be seen in Table 11, agent 5 has only one connection in the build matrix, to agent 2. Since agent 2 has a lower self-efficacy score at both observed time steps than agent 5, their connection to agent 2 is unable to explain the improvement in self-efficacy at follow-up. In contrast, the model using the remove matrix has no deviations exceeding four bins. We see evidence of this effect across both measures, with improved predicted opinions for the remove matrix. Table 11 also shows that agent 5 is allowed to update their opinion based on more agents within the network, explaining the improvement in predicted opinions. This is not to say that the remove matrix is a more accurate representation of the underlying network, only that information sources are missing from the build matrix. A missing source could be a connection that is present in the remove matrix, but it could also be a connection to an individual not included in the network or an external source such as social media, advertisements, or individual research.
Weight estimates and variability
We present Tables 11, 12, 13, 14, 15, 16, 17 to summarize the estimated weights for all five networks across ten runs of the algorithm for both willingness and self-efficacy on adjacency matrices built from known ties or created by removing connections known not to exist. Elements fixed at 0 according to the adjacency matrix are indicated with a 0 without a standard deviation. Weights placed on leaders are indicated with an asterisk (*), and weights that are structurally zero in the build matrix are italicized in the remove matrix. As discussed previously, there are notable differences in Table 11 for agent 5 between the build and remove matrices, with estimated weights between 0.12 and 0.21 placed on agents whose weights were fixed at 0 for the build matrix. We also see evidence that the algorithm is able to identify connections present in the adjacency matrix that may not be present in the network, in the form of zero or nearly zero estimated weights for both build and remove adjacency matrices.
While Tables 11, 12, 13, 14, 15, 16, 17 do provide information about variability between runs of the algorithm, Table 18 presents this information in condensed form. It shows that algorithmic variability is much lower for self-efficacy than for willingness. Within measures, algorithmic variability is higher on the remove matrix for willingness and higher on the build matrix for self-efficacy. The assessment of how well the predicted opinions fit the data confirms the relationship seen in Fig. 17, with models on the remove matrix tending to produce predicted opinions closer to those observed. Again, this supports the idea that information is missing from the build matrices, though the additional information in the remove matrices is not necessarily correct.
Comparison of leaders to other agents
In order to assess whether leaders are better able to influence their friends, we calculate the mean weight placed on leaders versus non-leaders for each combination of network, adjacency matrix, and measure. We also determine, out of all possible nonzero weights placed on leaders or non-leaders, both the number and proportion that exceed 0.005 for all combinations. All of these measures exclude agents not connected to a leader. We also include mean self-weight for leaders and non-leaders for all agents within the networks. Table 19 shows these summary statistics for willingness.
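These summaries can be sketched as below, using plain Python lists. The exclusion rule and the 0.005 threshold follow the text; the function name and input format are our own, and the sketch omits the separate self-weight summary.

```python
def leader_weight_summary(w_bar, adj, leaders, threshold=0.005):
    """Mean estimated weight placed on leaders vs. non-leaders, plus the count
    and proportion of those weights exceeding `threshold` ("practically
    nonzero"). Agents with no tie to a leader are excluded."""
    n = len(w_bar)
    leaders = set(leaders)
    on_leaders, on_others = [], []
    for i in range(n):
        if not any(adj[i][j] and j in leaders for j in range(n) if j != i):
            continue                         # agent i has no tie to any leader
        for j in range(n):
            if j != i and adj[i][j]:
                (on_leaders if j in leaders else on_others).append(w_bar[i][j])
    def summarize(ws):
        nz = sum(1 for w in ws if w > threshold)
        return {"mean": sum(ws) / len(ws), "n_nonzero": nz,
                "prop_nonzero": nz / len(ws)}
    return {"leaders": summarize(on_leaders), "non_leaders": summarize(on_others)}
```

Passing the mean estimated weight matrix from the ten runs, the adjacency matrix in use, and the list of leader indices yields the per-network entries of a table like Table 19.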
For willingness, on networks 1, 2, 3, and 5 the mean estimated weight for leaders is higher for both adjacency matrices. It is higher for non-leaders across both adjacency matrices for network 4. There is no clear pattern in the proportion of practically nonzero weights (weights greater than 0.005) between leaders and non-leaders. We note that the uncertainty in the adjacency matrices could obscure any relationship between leadership training and the proportion of possible nonzero weights that are present, and that network 4 is the only network where the seed did not attend leadership training, providing a possible explanation for the inefficacy of the leader in that network. For both measures, mean self-weight tends to be higher for the build matrix than for the remove matrix, indicating the algorithm identifies a solution where weight is distributed over either more or different agents when available.
Table 20 presents the same statistics for self-efficacy. For self-efficacy, whether higher weight was placed on leaders or non-leaders is only consistent between the build and remove matrices for networks 1 and 5, with higher weight placed on non-leaders. With network 4 again being a notable exception, both the weights and the differences between them tend to be small relative to those estimated for willingness. This necessarily results in higher estimated self-weights than for willingness, indicating agents were more open to changing their willingness to use \(\text {PrEP}\) than their beliefs about the difficulty of engaging in \(\text {PrEP}\) behaviors. Again, there is no clear pattern in the proportion of practically nonzero weights between leaders and non-leaders, and self-weight tends to be higher for the build matrix.
Conclusions
In order to expand the use of opinion diffusion models to public health and social science applications, we developed a novel genetic algorithm capable of recovering the parameters of a DeGroot opinion diffusion process using small data sets, including those with missing data and more model parameters than observed time steps. We assessed the efficacy of the algorithm on simulated data, considering a variety of features of the networks and data sets where this method could reasonably be used. We also demonstrated the performance of the algorithm on the \(\text {PrEP}\) pilot study data, producing estimated weights that result in predicted opinions close to those observed. This serves as a first step in the development of an epidemic model informed by the opinion diffusion process to assess the network leader intervention, an option not previously available with the limited size of the data set.
Simulation study
The simulation study demonstrates that the algorithm is able to recover the model parameters of the opinion diffusion process and correctly predict opinions, though the accuracy of both types of estimates depends on degree, network size, self-weight, number of time steps observed, and proportion of missing data. Small, low-degree networks are the only networks capable of nearly perfect model parameter recovery but can also produce inaccurate model parameter estimates even when the solution fits the observed opinions well. Increased network size mitigates this issue, decreasing both the median and IQR of the recovery RMSE at the expense of increasing the minimum recovery RMSE. Networks composed of agents who place little weight on their own opinions also result in poorer recovery of weights even when predicted opinions closely match the observed ones. Low degree may exacerbate this problem, but larger network size mitigates the effect of low self-weight as it does for low degree. Both lower proportions of missing data and more time steps improve recovery by increasing the number of observations per agent; however, applying the algorithm to data with a few completely observed time points will result in better parameter recovery than applying it to data with a comparable number of observations per agent spread over more follow-up time points with substantial missing data.
Opinion diffusion modeling
Analysis of the estimated weights demonstrates that agents generally place more weight on the opinions of leaders than non-leaders when updating their willingness to use \(\text {PrEP}\), though evidence is mixed for self-efficacy. The estimates also suggest the network leader intervention is more effective for changing willingness than self-efficacy. While these results agree with previous research and indicate the use of opinion leaders may be an effective intervention for increasing \(\text {PrEP}\) usage to reduce transmission of \(\text {HIV}\), this is not the intended use of the estimated weights. Instead, the application of this algorithm to the pilot study data is the first step in a detailed investigation of the opinion diffusion process underlying the use of network leaders as an intervention to increase uptake of \(\text {PrEP}\) for \(\text {BMSM}\).
Though other methods can be used to assess the effectiveness of the intervention for each agent, this method allows for an exploration of why the intervention was or was not effective for each agent in the network. Since the model includes an estimate of the weight each agent places on the opinions of all other agents, we are able to identify agents who were particularly receptive or resistant to the opinions of the network leaders. Identifying demographic or relational differences that explain the varying receptivity of agents within the network allows for better prediction of the change in opinions about \(\text {PrEP}\) that could be expected when applying the network leader intervention to other networks.
While only having two time steps is a limitation that results in higher variability in model parameter estimates, both the simulation study and application of the algorithm to the pilot study data show the algorithm can be used to estimate the model parameters of the diffusion process using very few time points. In addition, the simulation study demonstrates a substantial improvement in the performance of the algorithm on three time steps as compared to two time steps. This suggests the algorithm will be able to produce estimates that are more precise and accurate using the three time steps included in the full study. These estimates can then be used to inform parameters of an epidemic model for evaluating the use of a network leader intervention as a means to reduce incidence of \(\text {HIV}\) for \(\text {BMSM}\).
Availability of data and materials
The data generated and analysed during the simulation study are available in the corresponding author’s GitHub repository, https://github.com/karajohnson4/DeGrootGeneticAlgorithm. The data from the pilot study are available from Jeffrey A. Kelly, PhD (cairdirector@mcw.edu), but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are, however, available from the authors upon reasonable request and with the permission of Jeffrey A. Kelly. The genetic algorithm code is also available in the corresponding author’s GitHub repository, https://github.com/karajohnson4/DeGrootGeneticAlgorithm, under the name AlgorithmCode; the ANSArchive branch will serve as an archived version. The code is written in Julia, is platform independent, requires Julia 1.5 or higher, and uses the GNU General Public License [26]. Analysis was performed in R using RStudio, with data imported using haven and restructured using tidyr [28,29,30,31]. Plots were generated with ggplot2 [32].
Notes
Naive learning can be expressed as a DeGroot model with \(w_{ij}=\frac{1}{N_i+1}\) for \(a_{ij}\ne 0\) where \(w_{ij}\) and \(a_{ij}\) are as defined in the DeGroot Model subsection below and \(N_i\) is the number of agents in the neighborhood of agent i.
Within this paper, the observed and predicted initial opinions are equal, meaning that the contribution to the value of the objective function and other fit assessments from opinions at time \(t=0\) is always 0. For use of extensions to this method when observed and predicted initial opinions differ, we present all relevant equations in the form that incorporates the deviation between observed and predicted initial opinions.
Abbreviations
BMSM: Black men who have sex with men
HIV: Human immunodeficiency virus
IRB: Institutional Review Board
MSM: Men who have sex with men
PrEP: Preexposure prophylaxis
RMSE: Root-mean-square error
SIR: Susceptible-infected-recovered
SODM: Stochastic opinion dynamics model
References
Schneider JA, Bouris A, Smith DK (2015) Race and the public health impact potential of preexposure prophylaxis in the United States. JAIDS J Acquir Immune Defic Syndr 70 (1):30–32
Paltiel AD, Freedberg KA, Scott CA, Schackman BR, Losina E, Wang B, Seage GR, Sloan CE, Sax PE, Walensky RP (2009) HIV preexposure prophylaxis in the United States: impact on lifetime infection risk, clinical outcomes, and cost-effectiveness. Clin Infect Dis 48 (6):806–815
Golub SA, Gamarel KE, Surace A (2017) Demographic differences in PrEP-related stereotypes: implications for implementation. AIDS Behav 21 (5):1229–1235
Kelly JA, Amirkhanian YA, Walsh JL, Brown KD, Quinn KG, Petroll AE, Pearson BM, Rosado AN, Ertl T (2020) Social network intervention to increase preexposure prophylaxis (PrEP) awareness, interest, and use among African American men who have sex with men. AIDS Care 32 (sup2):40–46
Hernández-Romieu AC, Sullivan PS, Rothenberg R, Grey J, Luisi N, Kelley CF, Rosenberg ES (2015) Heterogeneity of HIV prevalence among the sexual networks of Black and White MSM in Atlanta: illuminating a mechanism for increased HIV risk for young Black MSM. Sex Transm Dis 42 (9):505
Garcia J, Colson PW, Parker C, Hirsch JS (2015) Passing the baton: communitybased ethnography to design a randomized clinical trial on the effectiveness of oral preexposure prophylaxis for HIV prevention among black men who have sex with men. Contemp Clin Trials 45:244–251
Quinn K, Dickson-Gomez J, Kelly JA (2016) The role of the Black Church in the lives of young Black men who have sex with men. Cult Health Sex 18 (5):524–537
Quinn K, Dickson-Gomez J (2016) Homonegativity, religiosity, and the intersecting identities of young black men who have sex with men. AIDS Behav 20 (1):51–64
Dickson-Gomez J, Owczarzak J, Lawrence JS, Sitzler C, Quinn K, Pearson B, Kelly JA, Amirkhanian YA (2014) Beyond the ball: implications for HIV risk and prevention among the constructed families of African American men who have sex with men. AIDS Behav 18 (11):2156–2168
Kipke MD, Kubicek K, Supan J, Weiss G, Schrager S (2013) Laying the groundwork for an HIV prevention intervention: a descriptive profile of the Los Angeles House and Ball communities. AIDS Behav 17 (3):1068–1081
Phillips G, Peterson J, Binson D, Hidalgo J, Magnus M, YMSM of color SPNS Initiative Study Group et al (2011) House/ball culture and adolescent African-American transgender persons and men who have sex with men: a synthesis of the literature. AIDS Care 23 (4):515–520
Bandura A (1986) Social foundations of thought and action. Englewood Cliffs, NJ, pp 23–28
Fishbein M, Ajzen I (1977) Belief, attitude, intention, and behavior: an introduction to theory and research
Rogers EM (1983) Diffusion of innovations, 3rd edn. Free Press, New York, p 247
DeGroot MH (1974) Reaching a consensus. J Am Stat Assoc 69 (345):118–121
Castellano C, Fortunato S, Loreto V (2009) Statistical physics of social dynamics. Rev Mod Phys 81 (2):591
Sîrbu A, Loreto V, Servedio VD, Tria F (2017) Opinion dynamics: models, extensions and external effects. Participatory sensing, opinions and collective awareness. Springer, Berlin, pp 363–401
Toral R, Tessone CJ (2006) Finite size effects in the dynamics of opinion formation. arXiv preprint physics/0607252
Banerjee A, Chandrasekhar AG, Duflo E, Jackson MO (2013) The diffusion of microfinance. Science 341 (6144)
Acemoglu D, Ozdaglar A (2011) Opinion dynamics and learning in social networks. Dyn Games Appl 1 (1):3–49
Chandrasekhar AG, Larreguy H, Xandri JP (2020) Testing models of social learning on networks: evidence from two experiments. Econometrica 88 (1):1–32
Grimm V, Mengel F (2020) Experiments on belief formation in networks. J Eur Econ Assoc 18 (1):49–82
Castro LE, Shaikh NI (2018) Influence estimation and opiniontracking over online social networks. Int J Bus Anal (IJBAN) 5 (4):24–42
Castro LE, Shaikh NI (2018) A particle-learning-based approach to estimate the influence matrix of online social networks. Comput Stat Data Anal 126:1–18
Limmun W, Borkowski JJ, Chomtee B (2013) Using a genetic algorithm to generate D-optimal designs for mixture experiments. Qual Reliab Eng Int 29 (7):1055–1068
Bezanson J, Edelman A, Karpinski S, Shah VB (2017) Julia: a fresh approach to numerical computing. SIAM Rev 59 (1):65–98
Walsh JL (2019) Applying the informationmotivationbehavioral skills model to understand PrEP intentions and use among men who have sex with men. AIDS Behav 23 (7):1904–1916
R Core Team (2020) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/
RStudio Team (2020) RStudio: integrated development environment for R. RStudio, PBC, Boston, MA. http://www.rstudio.com/
Wickham H, Miller E (2020) haven: import and export ’SPSS’, ’Stata’ and ’SAS’ files. R package version 2.3.1. https://CRAN.R-project.org/package=haven
Wickham H (2020) tidyr: tidy messy data. R package version 1.1.2. https://CRAN.R-project.org/package=tidyr
Wickham H (2016) ggplot2: elegant graphics for data analysis. Springer, New York. https://ggplot2.tidyverse.org
Acknowledgements
We would like to thank Jeffrey A. Kelly, PhD for allowing use of the PrEP pilot study data.
Funding
This work was partially funded by NIH Grants R01AI147441 and R01NR017574.
Author information
Authors and Affiliations
Contributions
KLJ developed and coded the algorithm, conducted the simulation study and analysis, and wrote the relevant sections. JLW developed and validated the scales used to assess willingness and selfefficacy and contributed to the relevant sections. YAA developed and implemented the recruitment of social networks and contributed to the relevant background material. JJB contributed to the development of the algorithm. NBC oversaw the selection of a model, development of the algorithm, and writing. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Johnson, K.L., Walsh, J.L., Amirkhanian, Y.A. et al. Using a novel genetic algorithm to assess peer influence on willingness to use preexposure prophylaxis in networks of Black men who have sex with men. Appl Netw Sci 6, 22 (2021). https://doi.org/10.1007/s41109-020-00347-2
DOI: https://doi.org/10.1007/s41109-020-00347-2
Keywords
 Intervention uptake
 Opinion diffusion
 Parameter estimation
 DeGroot model
 Genetic algorithm
 Preexposure prophylaxis (PrEP)