
Opinion Dynamic Modeling of News Perception


During the last decade, the advent of the Web and online social networks has rapidly changed the way we search for, gather, and discuss information of any kind. These tools have given everyone the chance to become a news medium. While promoting more democratic access to information, direct and unfiltered communication channels also increase our exposure to malicious/misleading behavior. Fake news diffusion represents one of the most pressing issues of our online society. In recent years, fake news has been analyzed from several perspectives; among such vast literature, an important theme is the analysis of fake news’ perception. In this work, moving from such observation, I propose a family of opinion dynamics models to understand the role of specific social factors in the acceptance/rejection of news content. In particular, I model and discuss the effect that stubborn agents, different levels of trust among individuals, open-mindedness, attraction/repulsion phenomena, and similarity between agents have on the population dynamics of news perception. To discuss the peculiarities of the proposed models, I tested them on two synthetic network topologies, thus underlining when/how they affect the stable states reached by the performed simulations.


The advent of the web and online social networks (OSNs) has created an unprecedented amount of accessible information. These tools have given everyone the chance to become a news medium by setting up a website, a blog, or simply creating an account on an OSN, thus enlarging the news offer with a significant number of uncontrolled individual contributions. News generation and circulation have become easier than ever; moreover, the lack of structured control over user-generated content has facilitated the diffusion of misleading, bogus, and inaccurate information, often grouped under the umbrella term “fake news”.

Fake news proliferation is a significant threat to democracy, journalism, and freedom of expression; the debate around it and its potentially damaging effects on public opinion and democratic decision making is complex and multifaceted (Watts et al. 2021; Lazer et al. 2018; Wardle and Derakhshan 2017). Its spread has weakened the population’s confidence in governments: vivid examples of such effects can be seen, for instance, in the impact it had on the “Brexit” referendum and the 2016 US presidential elections (Hoferer et al. 2020; Bovet and Makse 2019). Like all controversial pieces of information, fake news usually polarizes the public debate—both online and offline—with the side effect of radicalizing population opinions, thus reducing the chances of reaching a synthesis of opposing views. Moreover, the existence of stubborn agents—as first introduced in Wu and Huberman (2004), Mobilia (2003) and Galam (2016)—amplifies such phenomena: these agents foster their point of view—either for personal gain, lack of knowledge, or excessive ego—disregarding the existence of sound opposing arguments or even debunking evidence.

So far, the leading efforts to understand and counter the effects of misleading information have been devoted to (1) identifying fake news and its sources, (2) debunking it, and (3) studying how it spreads. Indeed, the analysis of how fake news diffuses is probably the most difficult task to address. Even when restricting the analysis to the online world, tracing the path of content shared by users of online platforms is not always feasible (at least extensively). Moreover, such an analysis becomes nearly impossible when considering that news can diffuse across multiple services—of which we usually have only a partial view.

However, two different aspects need to be addressed to properly understand how news (fake or legitimate) spreads: (1) how different individuals get in touch with it, and (2) how the population reached by such content perceives it. Effective news reaches a broad audience and is also able to convince that audience of its message. The latter component goes beyond the mere spreading process that allows news to become viral: it strictly relates to individuals’ perceptions, opinions whose consolidation is due not only to the news’ content but also to the social context in which it diffuses.

In this work, moving from my previous work (Toccaceli et al. 2020) and from such observation, I propose a family of opinion dynamics models to understand the role of specific social factors in the acceptance/rejection of news content. Assuming a population composed of agents aware of a given piece of information—each starting with a predefined attitude toward it—I study how different social interaction patterns lead to the consensus or polarization of opinions. In particular, I model and discuss the effect that stubborn agents, different levels of trust among individuals, open-mindedness, attraction/repulsion phenomena [as previously discussed in Flache et al. (2017)], and similarity between agents have on the population dynamics of news perception. Even if applicable to any controversial news content, my study is framed in the fake news diffusion setting, as this particular context offers a vivid example of the relevance of opinion dynamics to societal effects.

The paper is organized as follows. In “Related Works” section, I discuss the literature relevant to my work; subsequently, in “Opinion Dynamic Modeling of News Perception” section, I describe the opinion dynamics models I designed to study the evolution of news perception. In “Experiments” section, I present an analysis of the proposed models on synthetic networks having heterogeneous characteristics. Finally, “Conclusion” section concludes the paper by summarizing my results and outlining future research directions.

Related Works

To better contextualize my study, two different yet related topics need to be reviewed: (1) research on fake news and its characterization, and (2) opinion dynamics modeling.

Fake News Characterization

Indeed, since I am using fake news as an example of content that generates peculiar opinion dynamics, it is important to briefly discuss what it is and which lines of research focus on it. Unfortunately, the scientific literature does not converge on a unique and universal definition of fake news; rather, it provides several contextualized descriptions and taxonomies. As an example, Allcott and Gentzkow (2017) define “fake news” as “news articles that are intentionally and verifiably false, and could mislead readers”; instead, Lazer et al. (2018) picture them as “fabricated information that mimics news media content in form but not in organizational process or intent”. Finally, the authors of Zhang and Ghorbani (2020) specify that “fake news refers to all kinds of false stories or news that are mainly published and distributed on the Internet, in order to purposely mislead, befool or lure readers for financial, political or other gains.” In this work I align with the first of the reported definitions—fake news as news articles that are intentionally false, verifiable as such, and able to mislead readers. Indeed, identifying the components that characterize fake news is an open and challenging issue (Zhang and Ghorbani 2020). Moreover, several approaches have been designed to address unreliable content online: most of them propose detecting bogus content or its creators. Focusing on the analysis of fake news, we can identify different fields of research: content analysis [e.g., fake news identification (Sharma et al. 2019; Alam et al. 2021)], creator analysis [e.g., bot detection (Cresci et al. 2016; Caldarelli et al. 2020)], evaluation of the presence of dis-/misinformation campaigns (Hernon 1995), propaganda (Nakov and Da San Martino 2020), and echo chambers (Cinelli et al. 2021; Garimella et al. 2018; Ge et al. 2020; Hilbert et al. 2018; Morales et al. 2021), and social context analysis [e.g., the effect of fake news and its spread on society (Visentin et al. 2019)].

Opinion Dynamics

Opinion-forming processes have attracted the interest of interdisciplinary experts. Humans have opinions on everything that surrounds them: opinions that are influenced by several factors, such as individual predisposition, information possessed, and interaction with other subjects. In Si and Li (2018), opinion dynamics is defined as the process that “attempts to describe how individuals exchange opinions, persuade each other, make decisions, and implement actions, employing diverse tools furnished by statistical physics, e.g., probability and graph theory”. Opinion dynamics models are often devised to understand how certain assumptions on human behaviors can explain alternative scenarios, namely consensus, polarization, or fragmentation. In a population, we say that consensus is reached when we obtain a single and homogeneous cluster of opinions; polarization occurs when several well defined and separated groups of opinions, of adequate size, are simultaneously present; finally, fragmentation corresponds to a disordered state with a large number of small opinion clusters.

To understand how those stable states can be reached as a result of a diffusive process—given a precise set of preconditions—agent-based models are often proposed. In these models, each agent has a variable that corresponds to her opinion. Opinion dynamics models can be categorized into discrete or continuous models, according to how opinion variables are defined. The former class includes the Majority rule (Galam 2002), Sznajd (Sznajd-Weron and Sznajd 2000), Voter (Holley and Liggett 1975), and Q-Voter (Castellano et al. 2009) models: discrete models that define scenarios in which agents have to decide between two options on a given theme (e.g., true/false, yes/no, iPhone/Samsung). In the latter class, some of the field’s milestones are the works of DeGroot (1974) and Friedkin (1986), where opinions are continuous and agents update their beliefs based on those of all their neighbors simultaneously. The so-called “bounded confidence models”, where agents are influenced only by peers having an opinion sufficiently close to theirs, also belong to the second class. This characteristic is justified by sociological theories such as homophily, the tendency to associate and bond with similar individuals. Two milestones of this kind are the models by Deffuant et al. (2000) and Hegselmann et al. (2002), which are usually applied in contexts where an opinion can be modeled as a real value within a given range, such as an agent’s political orientation.

Nowadays, the field attracts much attention from researchers, and several surveys have recently been published to give a comprehensive outlook on the state of the art, such as Noorazar (2020), Noorazar et al. (2020) and Sirbu et al. (2016).

Opinion Dynamic Modeling of News Perception

To model the opinion dynamics of news perception, let us consider a set of agents that share their opinion w.r.t. a given piece of news—that, for the sake of example, I assume to be fake—posted on a social platform. Agents can interact only with their friends, updating their point of view on the news to account for their distance in opinions. Thus, my work does not aim to evaluate how fake news spreads but, instead, to understand how agents relate to it as a function of their social environment. To such an extent, and without loss of generality, I assume that the piece of news of interest is known to all agents of the observed population, and that each agent has already formed an initial opinion about it at the beginning of the simulation.

Due to the peculiar nature of the phenomena I am analyzing—i.e., how a fake news item is perceived by individuals and how such perception reflects on their peers—I decided to adopt a continuous modeling framework, extending the well-known Hegselmann–Krause (HK) model.

Definition 1

In the HK model, every agent i has an internal status \(x_i\) expressing its opinion, represented in the continuous range \([-\,1,1]\). The model considers only interactions that occur during discrete time events, \(T = \{0,1,2,\dots \}\). Agent pairs can interact if their opinions differ up to a user-specified threshold (\(\epsilon\)), which we refer to as the users’ confidence level. During each interaction event \(t\in T\), a random agent i is selected and the set \(\Gamma _{\epsilon }(i)\) of its neighbors whose opinions differ by at most \(\epsilon\) (\(d_{i,j}=|x_i(t)-x_j(t)|\le \epsilon\)) is identified. The selected agent i changes its opinion based on the following update rule:

$$\begin{aligned} x_i(t+1)=\frac{\sum _{j\in \Gamma _{\epsilon }(i)}a_{i,j}x_j(t)}{\sum _{j\in \Gamma _{\epsilon }(i)}a_{i,j}} \end{aligned}$$

where \(a_{i,j}\) is 1 if there is an edge between i and j, 0 otherwise. At time \(t+1\), i’s opinion becomes the average of its \(\epsilon\)-neighbors’ opinions.
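As a reference for the extensions that follow, the HK update step can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation; the dictionary-based opinion and adjacency structures are assumptions.

```python
def hk_update(i, x, neighbors, epsilon):
    """One HK step for agent i: average the opinions of the epsilon-neighbors
    (neighbors whose opinion differs from x_i by at most epsilon)."""
    gamma = [j for j in neighbors[i] if abs(x[i] - x[j]) <= epsilon]
    if not gamma:
        return x[i]  # no compatible neighbor: opinion unchanged
    return sum(x[j] for j in gamma) / len(gamma)
```

For instance, with `x = {0: 0.1, 1: 0.2, 2: 0.9}` and `neighbors = {0: [1, 2]}`, agent 0 averages only agent 1's opinion when `epsilon = 0.3`, since agent 2 lies outside the confidence bound.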

The HK model converges in polynomial time, and its behavior is closely related to the expressed confidence level: lowering \(\epsilon\), the model tends to stabilize on fragmented opinion clusters, while raising it, agents tend toward reaching a consensus. Considering its definition, the HK model takes into account neither the strength of the agents’ ties, nor the fact that agents embedded in a social context tend to relate with peers having similar interests or social status—as shown in Fig. 2. To overcome such limitations, conversely from HK, in the following I assume that when an agent i discusses a news item A, the trustworthiness she attributes to her peer’s opinion depends on the strength of the relation between the two—as exemplified in Fig. 1.

In “Modeling Group Interactions” and “Modeling Pairwise Interactions” sections I formally extend the HK model to exploit weighted interactions (modeling ties strengths) following two different agents communication patterns: (1) group interaction, and (2) pair-wise interaction. Finally, in “Stubbornness and Homophily” section I discuss how stubborn agents and agents’ homophilic behaviors can be integrated in the proposed models.

Based on the classification of Flache et al. (2017), I implemented models that belong to the class of “models with similarity biased influence”, where only sufficiently similar individuals (within the bounded confidence \(\epsilon\)) can influence each other, reducing their opinion differences. Differently, the Repulsion Weighted HK model belongs to the class of “models with repulsive influence”, where individuals that are too dissimilar influence each other by increasing their mutual opinion differences.

Modeling Group Interactions

In this HK variant, during a generic iteration the agent’s opinion \(x_i\) changes as a weighted function of the opinions of her neighbors.

Definition 2

At each iteration, an agent i is randomly selected together with all her neighbors. These agents are filtered, taking into account only those whose opinions lie at a distance less than or equal to \(\epsilon\) (\(|x_i-x_j|\le \epsilon\)). This particular set of neighbors is denoted by \(\Gamma _{\epsilon }\). Each edge \((i,j) \in E\) has an associated strength value \(w_{i,j}\in [0,1]\). At the end of the interaction with all compatible neighbors, \(x_i\) changes as follows:

$$\begin{aligned} x_i(t+1)={\left\{ \begin{array}{ll} x_i(t) + \frac{\sum _{j \in \Gamma _{\epsilon }} x_j(t)w_{ij}}{\#\Gamma _{\epsilon }} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0\\ x_i(t) + \frac{\sum _{j \in \Gamma _{\epsilon }} x_j(t)w_{ij}}{\#\Gamma _{\epsilon }} (1+x_i(t)) &{} \text {if } x_i(t) < 0 \end{array}\right. } \end{aligned}$$

If \(\Gamma _{\epsilon }\) is empty, \(x_i(t+1) = x_i(t)\).

\(WHK_G\)’s opinion update rule states that the new opinion of agent i, \(x_i(t + 1)\), is given by the combined effect of i’s previous opinion, \(x_i(t)\), and the weighted average opinion of her compatible neighbors \(\Gamma _{\epsilon }\). This model assumes that opinion evolution is the result of a (weighted) aggregation of all (reachable) influences expressed by an agent’s peers.
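Under the same illustrative assumptions as before (dict-based opinions, weights indexed by edge), the \(WHK_G\) rule can be sketched as:

```python
def whk_g_update(i, x, neighbors, w, epsilon):
    """One WHK_G step: agent i aggregates the weighted opinions of her
    epsilon-compatible neighbors; the (1 -/+ x_i) factor damps the update
    for agents holding extreme opinions."""
    gamma = [j for j in neighbors[i] if abs(x[i] - x[j]) <= epsilon]
    if not gamma:
        return x[i]  # empty neighborhood: x_i(t+1) = x_i(t)
    influence = sum(x[j] * w[(i, j)] for j in gamma) / len(gamma)
    if x[i] >= 0:
        return x[i] + influence * (1 - x[i])
    return x[i] + influence * (1 + x[i])
```

Note how the damping factor vanishes as \(x_i\) approaches an extreme: an agent already holding a radical opinion is barely moved by her neighbors.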

Fig. 1

Weight example. Opinion \(x_i\) is influenced by the opinions of the agents whose opinions are most similar to its own; e.g., the agents in the yellow ellipse. At the end of the interaction, \(x_i\) approaches the opinions of the agents with heavier weights (as visually shown by \(x_i\)’s change of position)

Modeling Pairwise Interactions

In this HK variant, previously introduced in Toccaceli et al. (2020), during a generic iteration the agent’s opinion \(x_i\) changes as a function of the opinion held by one of her neighbors, accounting for the strength of their social connection.

Definition 3

During each iteration, a random agent pair (i, j) is selected with the constraint that \(w_{i,j}>0\) and that \(|x_i-x_j| \le \epsilon\). To account for the heterogeneity of agent pairs’ interaction strengths, \(WHK_B\) leverages edge weights, thus capturing the effect of different social bonds’ strength. As in \(WHK_G\), each edge \((i,j) \in E\) has a value \(w_{i,j}\in [0,1]\). At the end of the interaction, \(x_i\) changes as follows:

$$\begin{aligned} x_i(t+1)= \left\{ \begin{array}{ll} x_i(t) + \frac{x_i(t) + x_j(t)w_{i,j}}{2} (1-x_i(t)) &{} \quad \text{ if } x_i(t) \ge 0\\ x_i(t) + \frac{x_i(t) + x_j(t)w_{i,j}}{2} (1+x_i(t)) &{} \quad \text{ if } x_i(t) < 0 \end{array} \right. \end{aligned}$$

In \(WHK_B\), the opinion of agent i at time \(t + 1\) is given by the compound effect of her previous belief, \(x_i(t)\), and the weighted opinion of a neighbor j selected from the set \(\Gamma _{\epsilon }\), where \(w_{i,j}\) accounts for i’s perceived influence/trust of j.
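The pairwise rule is compact enough to be sketched directly (an illustrative sketch, with opinions passed as plain floats):

```python
def whk_b_update(xi, xj, wij):
    """One WHK_B pairwise step: agent i's opinion moves according to the
    trust-weighted pair average (x_i + x_j * w_ij) / 2, damped near the
    extremes by the (1 -/+ x_i) factor."""
    shift = (xi + xj * wij) / 2
    if xi >= 0:
        return xi + shift * (1 - xi)
    return xi + shift * (1 + xi)
```

For example, a neutral agent (`xi = 0.0`) interacting with a supporter (`xj = 0.8`) through a tie of strength `0.5` moves to `0.2`.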

Indeed, \(WHK_B\) can be easily extended to account for more complex interaction patterns. In particular, when considering the opinion formation process involving controversial news, an interesting effect to account for is the attraction-repulsion of agents’ beliefs.

Definition 4

With the term “attraction” I identify the effect of those interactions in which an agent’s opinion moves toward that of her peer. At the end of an “attraction” event, agent i begins to doubt her position and to share part of j’s. For this reason, i’s opinion approaches that of her interlocutor: their distance at \(t+1\) becomes lower than at time t, \(d_{i,j}(t+1)\le d_{i,j}(t)\).

At the end of the interaction \(x_i\) changes as follows:

$$\begin{aligned} x_i(t+1)={\left\{ \begin{array}{ll} x_i(t) - \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t) \ge 0, x_i(t)> x_j(t) \\ x_i(t) + \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t) \ge 0, x_i(t)< x_j(t) \\ x_i(t) + \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t)< 0, x_i(t)> x_j(t) \\ x_i(t) - \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t)< 0, x_i(t)< x_j(t) \\ x_i(t) - \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t)< 0, sum_{op}> 0\\ x_i(t) + \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t)< 0, sum_{op}< 0\\ x_i(t) + \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t) \ge 0, sum_{op} > 0\\ x_i(t) - \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t) \ge 0, sum_{op} < 0\\ \end{array}\right. } \end{aligned}$$

where \(sum_{op} = x_i(t) + x_j(t)w_{i,j}\).

The criterion used to evaluate opinions’ evolution is the same as in \(WHK_B\): the difference lies in the identification of different cases according to whether the opinions of i and j are discordant/concordant. Following such a strategy, AWHK ensures that the difference between the pair’s opinions is reduced after the interaction.
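Reading the eight cases together, the change always has magnitude \(|sum_{op}|/2\) scaled by \(1-|x_i(t)|\), with the sign chosen so that \(x_i\) moves toward \(x_j\). A minimal sketch under this reading:

```python
import math

def awhk_update(xi, xj, wij):
    """AWHK attraction step: x_i moves toward x_j by |sum_op| / 2,
    damped by (1 - |x_i|); a compact rewrite of the eight-case rule."""
    sum_op = xi + xj * wij
    if xi == xj:
        return xi  # identical opinions: nothing to attract
    direction = math.copysign(1.0, xj - xi)
    return xi + direction * abs(sum_op) / 2 * (1 - abs(xi))
```

For example, `awhk_update(0.8, 0.2, 1.0)` matches the first piecewise case: \(0.8 - (1.0/2)(1-0.8) = 0.7\), reducing the pair's opinion distance.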

However, while observing real phenomena, we often witness interactions in which people influence each other notwithstanding their initial opinions: interactions that result in approaching like-minded individuals while moving away from those holding opposite opinions.

Definition 5

With the term “repulsion” I identify the effect of those interactions resulting in agents’ opinions moving apart. For example, “repulsive” interactions are those involving agents starting from opposite beliefs that conclude with opinion radicalization. In this scenario, at the end of the interaction, i’s opinion will have moved away from j’s; agent i will be more convinced of her own position. At the end of the interaction, \(x_i\) changes as follows:

$$\begin{aligned} x_i(t+1)={\left\{ \begin{array}{ll} x_i(t) + \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t) \ge 0, x_i(t)> x_j(t) \\ x_i(t) - \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t) \ge 0, x_i(t)< x_j(t) \\ x_i(t) - \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t)< 0, x_i(t)> x_j(t) \\ x_i(t) + \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t)< 0, x_i(t)< x_j(t) \\ x_i(t) + \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t)< 0, sum_{op}> 0\\ x_i(t) - \frac{sum_{op}}{2} (1-x_i(t)) &{} \text {if } x_i(t) \ge 0, x_j(t)< 0, sum_{op}< 0\\ x_i(t) - \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t) \ge 0, sum_{op} > 0\\ x_i(t) + \frac{sum_{op}}{2} (1+x_i(t)) &{} \text {if } x_i(t)< 0, x_j(t) \ge 0, sum_{op} < 0\\ \end{array}\right. } \end{aligned}$$

with \(sum_{op} = x_i(t) + x_j(t)w_{i,j}\).

As for AWHK, the opinion evolution rule is defined by multiple cases, each of which describes a particular configuration produced by the signs of the agents’ opinions. The update rule ensures that \(d_{i,j}(t) \le d_{i,j}(t+1)\).
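Reading the cases as before, the repulsion rule mirrors the attraction one with the direction of the shift flipped (an illustrative sketch):

```python
import math

def rwhk_update(xi, xj, wij):
    """RWHK repulsion step: x_i moves away from x_j by |sum_op| / 2,
    damped by (1 - |x_i|); a compact rewrite of the eight-case rule."""
    sum_op = xi + xj * wij
    if xi == xj:
        return xi
    direction = math.copysign(1.0, xi - xj)  # away from j
    return xi + direction * abs(sum_op) / 2 * (1 - abs(xi))
```

For example, `rwhk_update(0.8, 0.2, 1.0)` gives \(0.8 + (1.0/2)(1-0.8) = 0.9\): agent i radicalizes away from j.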

Indeed, AWHK and RWHK can be combined to obtain a holistic model integrating both attraction and repulsion behaviors.

Definition 6

During each iteration, the model randomly selects an agent i together with one of its neighbors, j—regardless of the \(\epsilon\) threshold. Once the agent pair is identified, the model computes the absolute value of the difference between the opinions of i and j, \(\theta _{i,j} = |x_i(t)-x_j(t)|\). If \(\theta _{i,j} \le \epsilon\), AWHK is applied to compute \(x_i(t+1)\); otherwise, RWHK is applied.

ARWHK describes several complex opinion dynamics scenarios; e.g., the changes of mind that an agent experiences while discussing news, either fake or not, shared by a trusted/untrusted peer.
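Combining the two compact rewrites above, the ARWHK step can be sketched as a single self-contained function (illustrative, not the paper's implementation):

```python
import math

def arwhk_update(xi, xj, wij, epsilon):
    """ARWHK step: attract (AWHK) when |x_i - x_j| <= epsilon,
    repel (RWHK) otherwise."""
    if xi == xj:
        return xi
    shift = abs(xi + xj * wij) / 2 * (1 - abs(xi))  # |sum_op|/2, damped
    toward_j = math.copysign(1.0, xj - xi)
    if abs(xi - xj) <= epsilon:
        return xi + toward_j * shift  # attraction
    return xi - toward_j * shift      # repulsion
```

With `xi = 0.8`, `xj = 0.2`, and `wij = 1.0`, the pair attracts for `epsilon = 0.85` (result 0.7) but repels for `epsilon = 0.5` (result 0.9).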

Stubbornness and Homophily

Indeed, opinion exchange/dynamics is affected by several environmental and contextual peculiarities. Among them, the presence of stubborn agents and the increased likelihood of interactions among similar agents are the ones that can often be observed in online social platforms (Yildiz et al. 2013; Sheykhali et al. 2019; McPherson et al. 2001).

Stubborn Agents

The models described so far do not account for the presence of stubborn agents, i.e., agents holding strong opinions that, despite peers’ discussions, they are not willing to reconsider. Stubborn agents can be seen as those who deliberately spread/support misinformation, as well as the radical supporters of a given idea. Such agents may coincide with notable actors in society, such as companies, media outlets, or politicians.

I integrate stubborn agents in my models by introducing an agent-wise binary flag to denote her willingness (or not) to change opinion upon peers’ interactions. The opinion update rules expressed by the proposed models change accordingly following a conservative strategy: if the randomly selected agent i is a stubborn one, she will not update her opinion and, therefore, \(x_i(t)=x_i(t+1)\); otherwise, the standard opinion evolution rule for the considered model is applied.
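The conservative strategy amounts to a guard in front of any update rule; e.g., for the pairwise \(WHK_B\) rule (an illustrative sketch, reusing the float-based formulation):

```python
def stubborn_whk_b(xi, xj, wij, is_stubborn):
    """WHK_B step with the conservative stubbornness strategy: a stubborn
    agent keeps her opinion unchanged (x_i(t+1) = x_i(t))."""
    if is_stubborn:
        return xi
    shift = (xi + xj * wij) / 2
    return xi + shift * (1 - xi) if xi >= 0 else xi + shift * (1 + xi)
```

The flag only shields the selected agent i: a stubborn j still influences her non-stubborn neighbors through the unchanged rule.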


Agents’ Similarity

In our everyday life, we mainly relate with people who share our interests or social status. Another aspect that influences social relationships is individuals’ character: it is typical for individuals to show greater trust and harmony towards others sharing similar character traits. To integrate this idea into the proposed models, I enrich each agent’s description with a vector encoding her “profile”. For the sake of my experiments, each agent’s vector is composed of five binary elements representing the “big five” personality traits (McCrae and Costa 2003): openness, conscientiousness, extroversion, agreeableness, and neuroticism. Each element is assigned a value in the set \(\{0,1\}\): a value of 1 in the kth position of agent i’s profile vector implies that she possesses the associated personality trait (as shown in Fig. 2). Once agents’ profile vectors are built, I measure the similarity among them so as to weight the previously defined opinion update rules. In detail, once agent i and a compatible neighbor j are selected, I calculate their profiles’ similarity through the Jaccard coefficient (\(J(A, B) =\frac{\left| A \cap B \right| }{\left| A \cup B \right| }\)). The Jaccard index assumes, by definition, a value between 0 and 1: in my scenario, it is maximized iff the profile vectors of the two agents are the same and minimized if they do not overlap. Once the similarity scores are computed, the opinion \(x_j\) of each neighbor j of i is weighted by the similarity \(sim_{ij}\) between i and j. In practice, the component \(x_i(t)+x_j(t)w_{ij}\) of the previous equations (called \(sum_{op}\) in AWHK and RWHK) becomes \(x_i(t)+x_j(t)w_{ij}sim_{ij}\). With such a change, the more similar agents i and j are, the more j’s opinion will weigh on i’s updated one.
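The similarity weighting can be sketched as follows (an illustrative sketch; the trait vectors and function names are mine, not the paper's):

```python
def jaccard(profile_a, profile_b):
    """Jaccard similarity between two binary trait vectors, comparing the
    sets of personality traits each agent possesses."""
    a = {k for k, v in enumerate(profile_a) if v == 1}
    b = {k for k, v in enumerate(profile_b) if v == 1}
    if not (a | b):
        return 0.0  # two all-zero profiles: no measurable overlap
    return len(a & b) / len(a | b)

def sum_op_with_similarity(xi, xj, wij, profile_i, profile_j):
    """Homophily-extended influence term: x_i + x_j * w_ij * sim_ij."""
    return xi + xj * wij * jaccard(profile_i, profile_j)
```

For instance, profiles `[1,0,1,0,0]` and `[1,1,1,0,0]` share two of three possessed traits, giving a similarity of 2/3 that scales down j's influence accordingly.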

Fig. 2

Similarity example. Each agent is assigned the vector of binary elements representing personality traits


Experiments

This section describes my experimental analysis, focusing on three aspects: the selected network datasets, the implemented experimental protocol, and the obtained results. To foster experiment reproducibility, I integrated the introduced models into the NDlib Python library (Rossetti et al. 2018, 2017).


Network Datasets

I simulate the proposed models on a scale-free network (Barabási–Albert model) (Barabási and Albert 1999). Since I am not interested in analyzing the proposed models’ scalability, I generate networks composed of 500 nodes for all scenarios.

Moreover, to simulate a more realistic network topology (e.g., one integrating meso-scale structures), I also tested my models against a 500-node network generated with the LFR benchmark (Lancichinetti et al. 2008). This latter network was generated imposing the following parameter values: (1) power law exponent for the community size distribution, \(\beta = 1.5\); (2) power law exponent for the degree distribution, \(\gamma = 3\); (3) average node degree, \(<k>= 12\); (4) mixing parameter, i.e., the fraction of inter-community edges incident to each node, \(\mu = 0.1\); (5) minimum community size, \(min_{s} = 80\). My LFR instance is composed of five non-overlapping communities.
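Both topologies can be generated with networkx; note that the BA attachment parameter `m` below is an assumption (the text does not report it), and that networkx's LFR implementation takes the degree exponent as `tau1`, the community-size exponent as `tau2`, and `mu` as the fraction of inter-community edges per node:

```python
import networkx as nx

def make_scale_free(n=500, m=6, seed=42):
    """Barabasi-Albert scale-free network; m is an assumed value."""
    return nx.barabasi_albert_graph(n, m, seed=seed)

def make_lfr(n=500, seed=42):
    """LFR benchmark with the parameters reported in the text:
    degree exponent gamma = 3, community-size exponent beta = 1.5,
    mu = 0.1, <k> = 12, minimum community size 80."""
    return nx.LFR_benchmark_graph(n, tau1=3, tau2=1.5, mu=0.1,
                                  average_degree=12, min_community=80,
                                  seed=seed)
```

LFR generation is stochastic and may require a few attempts (or a different seed) to satisfy all constraints simultaneously.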

Experimental Protocol

All opinion dynamics models are analyzed while varying the bounded confidence \(\epsilon\) (\(\epsilon \in \{0.05, 0.25, 0.45, 0.65, 0.85\}\)) and the percentage of stubborn agents (\(\kappa \in \{0, 0.1, 0.3, 0.5, 0.7, 0.9\}\)). Simulation results are discussed using opinion evolution plots, i.e., plots representing individual agents’ opinions through time. For the sake of result interpretation, I assume that, given an agent i and its opinion at time t, \(x_i(t)\):

  • if \(x_i(t)>0\) then i, at time t, accepts the (fake) news and supports it while involved in discussions with her peers;

  • if \(x_i(t)<0\) then i, at time t, rejects the (fake) news and tries to debunk it during the discussions she is involved in;

  • if \(x_i(t)=0\) then i, at time t, is not interested in the (fake) news or she considers it debunked and is unwilling to advocate for either side of the dispute.


Results

First, I report the results obtained with the \(WHK_{G}\) model; then, I focus on the results obtained considering pair-wise interactions on the base synthetic scenario; finally, I discuss the impact of community structure on the observed dynamics. Edge weights, representing tie strengths, are randomly sampled from a normal distribution.
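The weight initialization can be sketched as follows; the mean and standard deviation are assumptions (the text does not report them), and values are clipped so that \(w_{i,j}\in[0,1]\):

```python
import random
import networkx as nx

def assign_tie_strengths(G, mu=0.5, sigma=0.15, seed=42):
    """Sample each edge weight from a normal distribution and clip it
    into [0, 1] so it can act as a tie strength w_ij."""
    rng = random.Random(seed)
    for u, v in G.edges():
        G[u][v]["weight"] = min(1.0, max(0.0, rng.gauss(mu, sigma)))
    return G
```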

Group Interactions

Figure 3 shows the results obtained by the \(WHK_{G}\) model on the scale-free network for different \(\epsilon\) values. Different line colors represent the agents’ initial opinions (positive, negative, or neutral). My simulations result in a system reaching a polarization equilibrium: a status in which agents are well separated into two large clusters of opposing opinions. Therefore, the final equilibrium has a cluster composed of agents accepting and supporting the fake news, while the second cluster contains agents that reject it. The final status is independent of the \(\epsilon\) value: in my simulations, such a parameter only affects the convergence speed, making it quicker as it grows. It is worth noticing that for small values of \(\epsilon\) only a small portion of agents is involved in opinion exchange (as shown in Fig. 3a): such an expected result makes the polarization extreme, actually describing an opinion fragmentation scenario.

Fig. 3

Opinion evolution varying \(\epsilon\) for \(WHK_{G}\). The system reaches a polarization equilibrium and the final status is independent of the \(\epsilon\) value. Such a parameter only affects the convergence speed, making it quicker as it grows

For this model, I omit the results obtained when introducing stubborn agents and similarity between agents, since the resulting opinion evolution trends do not show noticeable differences from the ones in Fig. 3.

Pair-Wise Interactions

Fig. 4

Opinion evolution for the models with pair-wise interactions with \(\epsilon\) equal to 0.85. The system does not reach an equilibrium with the \(WHK_{B}\) and RWHK models

Figure 4 shows the opinion evolution for all the models considering pair-wise interactions, with the value of \(\epsilon\) fixed to 0.85. Differently from the previous model, with the \(WHK_{B}\) and RWHK models, the system does not reach an equilibrium (Fig. 4a, c). In the former scenario, after 20 iterations, the opinions begin to fluctuate in the space of negative opinions without ever reaching convergence; for the RWHK model, there is a fluctuation around 0. Instead, for the other two models (AWHK and ARWHK), the system reaches an equilibrium: for the model considering attraction (Fig. 4b), the system converges to a consensus on a neutral opinion (the agents are not interested in the fake news); for the ARWHK model (Fig. 5c), the system converges to two well separated states, corresponding to positive and negative opinions.

Fig. 5

Opinion evolution varying \(\epsilon\) for ARWHK. Low values of \(\epsilon\) will favor the application of RWHK model—thus leading to higher fragmentation—while high values will favor the application of AWHK model—thus leading to convergence

Figure 5 shows the opinion evolution for the ARWHK model for different \(\epsilon\) values. A thorough analysis of the simulation results highlights that \(\epsilon\) acts as a razor that implicitly divides the probability of observing attractive or repulsive pair-wise interactions: low values of \(\epsilon\) favor the application of RWHK—thus leading to higher fragmentation—while high values result in a more likely application of AWHK—thus leading to convergence.

Fig. 6

Effect of stubborn agents, varying their percentage, in the AWHK model with \(\epsilon\) equal to 0.85. The stubborn agents’ opinion is fixed to an extreme positive value. Stubborn population opinion evolution lines are omitted. The stubborn presence leads to a more chaotic regime drifting towards the stubborn agents’ opinion

Attraction and Stubbornness

Figure 6 presents the AWHK model results for various percentages of stubborn agents while keeping \(\epsilon\) fixed at 0.85. In this figure, the stubborn agents’ opinion is fixed to an extreme positive value, thus supporting the fake news’ acceptance.

As underlined by the outcomes of the selected scenarios’ simulations, increasing the percentage of stubborn agents leads to a more chaotic regime, formed by a subset of agents whose opinions fluctuate heavily towards the stubborn agents’ opinion. The presence of stubborn agents influences the evolution of opinions, as they act as a pivot for those who are open to changing their minds. I performed the same simulations varying the initial set of stubborn agents’ opinions. As expected, I observe a similar result when stubborn agents have negative opinions, and an even more chaotic regime when that class of agents is equally distributed across the opinion spectrum. Opinions approach positive or negative values based on how the stubborn agents are distributed in the population: if the stubborn agents are negative, the changing opinions fluctuate towards negative values; the opposite occurs for stubborn agents with positive opinions. Thus, stubborn agents act as persuaders, bringing the population’s opinion closer to theirs. The higher the number of stubborn agents, the more evident their effect on the remaining population.

Fig. 7

Effect of stubborn agents, varying their percentage, in the RWHK model with \(\epsilon = 0.85\). The stubborn agents' opinion is fixed to the extreme positive value. Stubborn population opinion evolution lines are omitted. The stubborn presence leads to a more chaotic regime towards the opposite of the stubborn agents' opinion

Repulsion and Stubbornness

Figure 7 presents the results obtained with the RWHK model for various percentages of stubborn agents, keeping \(\epsilon\) fixed at 0.85. In this case as well, the stubborn agents' opinion is fixed to the extreme positive value. Unlike the AWHK model, where the stubborn presence drives the chaotic regime towards the stubborn agents' opinion, in the RWHK model the opposite behavior occurs: if the stubborn agents are positive, the opinions fluctuate towards negative values, and vice versa for stubborn agents with negative opinions. This is due to the update rule, which ensures that \(d_{i,j}(t) \le d_{i,j}(t+1)\).
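The non-decreasing distance property \(d_{i,j}(t) \le d_{i,j}(t+1)\) is easy to check on a toy repulsive move. The sketch below is a hypothetical simplification of the RWHK rule (the step size `gamma` is an assumed parameter): shifting \(x_i\) away from \(x_j\) can only grow the pair-wise distance, up to the clipping at the boundaries of \([-1, 1]\):

```python
def repulsive_update(xi, xj, gamma=0.1):
    """Toy repulsive move: i shifts away from j's opinion, so the
    pair-wise distance never shrinks (modulo [-1, 1] clipping)."""
    return max(-1.0, min(1.0, xi - gamma * (xj - xi)))
```

This monotone growth of distances is what makes a positive stubborn bloc push the rest of the population toward the negative extreme.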

Attraction/Repulsion, Stubbornness and Similarity

I performed the same simulations while introducing agents' profile similarity. Such an extension mainly impacts the time required to reach a stable state: enforcing homophilic interactions slows down convergence, while not significantly affecting the scenarios in which polarization/fragmentation arise.
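The paper's similarity mechanism is defined in the model section; as an illustration only, a homophily bias can be implemented by scoring agents' feature profiles and letting that score gate the interaction probability. The profile representation below (vectors in \([0,1]^k\)) and the linear score are illustrative assumptions:

```python
def profile_similarity(p_i, p_j):
    """Hypothetical similarity score between two agents' feature
    profiles (vectors in [0, 1]^k): 1.0 for identical profiles,
    0.0 for maximally different ones."""
    k = len(p_i)
    return 1.0 - sum(abs(a - b) for a, b in zip(p_i, p_j)) / k
```

Accepting an interaction with probability proportional to this score makes dissimilar pairs interact rarely, which explains why convergence slows down without changing which stable state is reached.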

Community Structure

To understand the effect that the presence of dense meso-scale topologies has on the unfolding of the opinion process, Fig. 9 shows a visual example. There, the color spectrum covers negative (blue) to positive (red) node opinions: the darker the color, the more extreme the node opinion. As previously done, I study the opinion spreading process while varying the distribution of initial opinions in the communities and the number of stubborn agents.

I designed two different scenario configurations varying the stubborn agents and the communities’ distribution of opinions. In particular:

  • configuration A: the opinions of agents belonging to the various communities are randomly distributed with values in the range \([-1, 1]\).

  • configuration B: in this case, I set different opinions for each community. For example, I can set negative opinions for some communities and positive for others.

For all configurations, I selected the stubborn agents in two different ways: (1) at random; (2) by identifying the nodes that are less embedded within their communities (i.e., the ones having the highest ratio of external community degree w.r.t. their total degree).
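Selection strategy (2) can be sketched as follows; the adjacency-dict representation, the function name, and the `fraction` parameter are illustrative assumptions, not the paper's implementation:

```python
def bridge_stubborn(adj, community, fraction=0.3):
    """Pick stubborn agents as the least community-embedded nodes,
    i.e. those with the highest external-degree / total-degree ratio.
    `adj` maps each node to its set of neighbors; `community` maps
    each node to its community label."""
    def ext_ratio(n):
        deg = len(adj[n])
        if deg == 0:
            return 0.0
        ext = sum(1 for m in adj[n] if community[m] != community[n])
        return ext / deg
    ranked = sorted(adj, key=ext_ratio, reverse=True)
    return set(ranked[: int(fraction * len(adj))])
```

Nodes selected this way sit on inter-community bridges, which is why they can export their opinion beyond their own community.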

Group Interactions

Initially, I consider configuration A without stubborn agents. Figure 8 shows the opinion evolution of the \(WHK_{G}\) model varying the \(\epsilon\) parameter. As we can see, with a low value of \(\epsilon\) I obtain different results with respect to Fig. 3, obtained on the scale-free network. In this case, I do not obtain a polarization into two distinct groups: many nodes assume an intermediate opinion. Therefore, the community structure inhibits polarization.

Considering configuration B, the network topologies considered in this analysis are exemplified in the toy example of Fig. 9. In this configuration, the network nodes are clustered in five loosely interconnected blocks—three formed of agents with opinions in the negative spectrum, the other two characterized by agents with positive opinions. Figure 9a shows the initial condition assigned to both simulations. I set the same value of \(\epsilon = 0.45\) for both simulations; I also fix the percentage of stubborn agents to 30%. In both cases, the stubborn agents' opinion is fixed to the extreme positive value. Figure 9b presents the final configuration when the stubborn agents are chosen at random; in Fig. 9c the stubborn agents are chosen among the nodes with the highest ratio of inter-community degree to total degree. When the simulation involves such “bridge” stubborn agents, the resulting final equilibrium converges to a common spectrum with a majority of positive nodes (as we can see in Fig. 9c). In particular, we can observe how stubborn agents can make their opinion prevail, even outside their community. Conversely, when the stubborn agents are chosen at random, I get a different result, as shown in Fig. 9b: the final equilibrium is characterized by a majority of nodes with negative opinions.
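A configuration-B style initialization can be sketched as below, assigning each community an opinion range according to its intended polarity; the function name, the `polarity` map, and the uniform draw are illustrative assumptions:

```python
import random

def init_opinions(community, polarity):
    """Configuration-B style sketch: each node draws its initial
    opinion from the positive half-range [0, 1] or the negative
    half-range [-1, 0], according to its community's polarity."""
    x = {}
    for n, c in community.items():
        lo, hi = (0.0, 1.0) if polarity[c] > 0 else (-1.0, 0.0)
        x[n] = random.uniform(lo, hi)
    return x
```

With three communities given negative polarity and two positive, this reproduces the initial condition of Fig. 9a.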

Stubborn agents play an important role in opinion dynamics when the network structure is clustered into communities: my experiments underlined that, for the \(WHK_{G}\) model, the stubborn agents' presence fosters the convergence toward the stubborn agents' opinion.

Fig. 8

Opinion evolution on the LFR network with \(WHK_{G}\), varying \(\epsilon\), with configuration A. There is no polarization into two distinct groups, so the community structure inhibits polarization

Fig. 9

Network visualizations with configuration B. a Nodes' initial conditions—five communities, three prevalently negative (blue nodes), two positive (red nodes); b \(WHK_{G}\) final equilibrium with random stubborn agents; c \(WHK_{G}\) final equilibrium with stubborn agents as bridges

Pair-Wise Interactions

I observed that the stubborn agents' presence plays a relevant role, especially with high \(\epsilon\) confidence values and when they reach a high critical mass. Such behavior can be explained by the analyzed network's modular structure, which acts as a boundary for the diffusion between clusters.

Fig. 10

Effect of stubborn agents chosen at random, varying their percentage, in the AWHK model with \(\epsilon = 0.85\) in configuration A. The stubborn agents' opinion is fixed to the extreme positive value. Stubborn population opinion evolution lines are omitted

Fig. 11

Effect of stubborn agents chosen at random, varying their percentage, in the AWHK model with \(\epsilon = 0.85\) in configuration B. The stubborn agents' opinion is fixed to the extreme positive value. Stubborn population opinion evolution lines are omitted

Considering configuration A, the stubborn agents' effect on the opinion dynamics is evident for the AWHK model, especially if we set their opinions at the extremes of the opinion range. For configuration A, I choose stubborn agents at random and fix their opinion to the positive extreme. The effect of stubbornness leads to a fluctuating trend towards the positive extreme (Fig. 10). Conversely, for configuration B, when the percentage of stubborn agents is less than 0.5, the system converges to two states (middle and positive); when the percentage is greater than 0.5, the opinion of the population polarizes on the opinion of the stubborn agents (Fig. 11).


In this paper, I modeled individuals' responses to fake news as a dynamic opinion process. By modeling some of the different patterns governing the exchange of views regarding a given news item—namely, attraction/repulsion, trust, similarity, and the existence of stubborn agents—I was able to derive some interesting observations on this complex, often not adequately considered, context. My simulations showed that (1) the differences in the topological interaction layer are reflected in the convergence times of the proposed models; (2) the presence of stubborn agents significantly affects the final equilibrium of the system, especially when high confidence limits regulate pair-wise interactions; (3) the mechanisms of attraction favor convergence toward a common opinion, while those of repulsion facilitate polarization; (4) adding similarity between agents acts as an attenuation factor.

As future work, I plan to extend the experimental analysis to real data to understand the extent to which the proposed models can replicate observed ground truths.

Availability of data and materials

The datasets analyzed during the current study are synthetic networks. The implementation of the introduced models is available in NDlib: Network Diffusion library, a Python library.





Abbreviations

HK :

Hegselmann–Krause model

\(WHK_G\) :

Weighted Hegselmann–Krause group model

\(WHK_B\) :

Weighted Hegselmann–Krause binary model

AWHK :

Attraction weighted Hegselmann–Krause model

RWHK :

Repulsion weighted Hegselmann–Krause model

ARWHK :

Attraction–repulsion weighted Hegselmann–Krause model


References

  1. Alam F, Cresci S, Chakraborty T, Silvestri F, Dimitrov D, Martino GDS, Shaar S, Firooz H, Nakov P (2021) A survey on multimodal disinformation detection. arXiv preprint arXiv:2103.12541

  2. Allcott H, Gentzkow M (2017) Social media and fake news in the 2016 election. J Econ Perspect 31(2):211–236

  3. Barabási A-L, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512

  4. Bovet A, Makse HA (2019) Influence of fake news in Twitter during the 2016 US presidential election. Nat Commun 10(1):1–14

  5. Caldarelli G, De Nicola R, Del Vigna F, Petrocchi M, Saracco F (2020) The role of bot squads in the political propaganda on Twitter. Commun Phys 3(1):1–15

  6. Castellano C, Muñoz MA, Pastor-Satorras R (2009) Nonlinear \(q\)-voter model. Phys Rev E 80(4):041129

  7. Cinelli M, Morales GDF, Galeazzi A, Quattrociocchi W (2021) The echo chamber effect on social media. Proc Natl Acad Sci 118(9):e2023301118

  8. Cresci S, Di Pietro R, Petrocchi M, Spognardi A, Tesconi M (2016) DNA-inspired online behavioral modeling and its application to spambot detection. IEEE Intell Syst 31(5):58–64

  9. Deffuant G, Neau D, Amblard F, Weisbuch G (2000) Mixing beliefs among interacting agents. Adv Complex Syst 3(01n04):87–98

  10. DeGroot M (1974) Reaching a consensus. J Am Stat Assoc 69:118–121

  11. Flache A, Mäs M, Feliciani T, Chattoe-Brown E, Deffuant G, Huet S, Lorenz J (2017) Models of social influence: towards the next frontiers. J Artif Soc Soc Simul 20(4):2

  12. Friedkin NE (1986) A formal theory of social power. J Math Sociol 12:103–126

  13. Galam S (2002) Minority opinion spreading in random geometry. Eur Phys J B Condens Matter Complex Syst 25(4):403–406

  14. Galam S (2016) Stubbornness as an unfortunate key to win a public debate: an illustration from sociophysics. Mind Soc 15(1):117–130

  15. Garimella K, Morales GDF, Gionis A, Mathioudakis M (2018) Quantifying controversy on social media. ACM Trans Soc Comput 1(1):1–27

  16. Ge Y, Zhao S, Zhou H, Pei C, Sun F, Ou W, Zhang Y (2020) Understanding echo chambers in e-commerce recommender systems. In: Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, pp 2261–2270

  17. Hegselmann R, Krause U et al (2002) Opinion dynamics and bounded confidence models, analysis, and simulation. J Artif Soc Soc Simul 5(3)

  18. Hernon P (1995) Disinformation and misinformation through the internet: findings of an exploratory study. Gov Inf Q 12(2):133–139

  19. Hilbert M, Ahmed S, Cho J, Liu B, Luu J (2018) Communicating with algorithms: a transfer entropy analysis of emotions-based escapes from online echo chambers. Commun Methods Meas 12(4):260–275

  20. Hoferer M, Böttcher L, Herrmann HJ, Gersbach H (2020) The impact of technologies in political campaigns. Physica A 538:122795

  21. Holley RA, Liggett TM (1975) Ergodic theorems for weakly interacting infinite systems and the voter model. Ann Probab 3(4):643–663

  22. Lancichinetti A, Fortunato S, Radicchi F (2008) Benchmark graphs for testing community detection algorithms. Phys Rev E 78(4):046110

  23. Lazer DM, Baum MA, Benkler Y, Berinsky AJ, Greenhill KM, Menczer F, Metzger MJ, Nyhan B, Pennycook G, Rothschild D et al (2018) The science of fake news. Science 359(6380):1094–1096

  24. McCrae RR, Costa PT (2003) Personality in adulthood: a five-factor theory perspective. Guilford Press

  25. McPherson M, Smith-Lovin L, Cook JM (2001) Birds of a feather: homophily in social networks. Ann Rev Sociol 27(1):415–444

  26. Mobilia M (2003) Does a single zealot affect an infinite group of voters? Phys Rev Lett 91(2):028701

  27. Morales GDF, Monti C, Starnini M (2021) No echo in the chambers of political interactions on Reddit. Sci Rep 11(1):1–12

  28. Nakov P, Da San Martino G (2020) Fact-checking, fake news, propaganda, and media bias: truth seeking in the post-truth era. In: Proceedings of the 2020 conference on empirical methods in natural language processing: tutorial abstracts, pp 7–19

  29. Noorazar H (2020) Recent advances in opinion propagation dynamics: a 2020 survey. Eur Phys J Plus

  30. Noorazar H, Vixie KR, Talebanpour A, Hu Y (2020) From classical to modern opinion dynamics. Int J Mod Phys C 31(07):2050101

  31. Rossetti G, Milli L, Rinzivillo S, Sîrbu A, Pedreschi D, Giannotti F (2018) NDlib: a python library to model and analyze diffusion processes over complex networks. Int J Data Sci Anal 5(1):61–79

  32. Rossetti G, Milli L, Rinzivillo S, Sirbu A, Pedreschi D, Giannotti F (2017) NDlib: studying network diffusion dynamics. In: 2017 IEEE international conference on data science and advanced analytics (DSAA), IEEE, pp 155–164

  33. Sharma K, Qian F, Jiang H, Ruchansky N, Zhang M, Liu Y (2019) Combating fake news: a survey on identification and mitigation techniques. ACM Trans Intell Syst Technol (TIST) 10(3):1–42

  34. Sheykhali S, Darooneh AH, Jafari GR (2019) Instability of social network dynamics with stubborn links. arXiv preprint arXiv:1907.00352

  35. Si X-M, Li C (2018) Bounded confidence opinion dynamics in virtual networks and real networks. J Comput 29(3):220–228

  36. Sirbu A, Loreto V, Servedio VDP, Tria F (2016) Opinion dynamics: models, extensions and external effects. In: Participatory sensing, opinions and collective awareness, pp 363–401

  37. Sznajd-Weron K, Sznajd J (2000) Opinion evolution in closed community. Int J Mod Phys C 11(06):1157–1165

  38. Toccaceli C, Milli L, Rossetti G (2020) Opinion dynamic modeling of fake news perception. In: International conference on complex networks and their applications, Springer, pp 370–381

  39. Visentin M, Pizzi G, Pichierri M (2019) Fake news, real problems for brands: the impact of content truthfulness and source credibility on consumers behavioral intentions toward the advertised brands. J Interact Mark 45:99–112

  40. Wardle C, Derakhshan H (2017) Information disorder: toward an interdisciplinary framework for research and policy making. Council of Europe 27

  41. Watts DJ, Rothschild DM (2021) Measuring the news and its impact on democracy. Proc Natl Acad Sci 118(15):e1912443118

  42. Wu F, Huberman BA (2004) Social structure and opinion formation

  43. Yildiz E, Ozdaglar A, Acemoglu D, Saberi A, Scaglione A (2013) Binary opinion dynamics with stubborn agents. ACM Trans Econ Comput (TEAC) 1(4):1–30

  44. Zhang X, Ghorbani AA (2020) An overview of online fake news: characterization, detection, and discussion. Inf Process Manag 57(2):102025



This work is supported by the scheme’ INFRAIA-01-2018-2019: Research and Innovation action’, Grant Agreement No. 871042 ’SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics’.



Author information




All authors read and approved the final manuscript.

Corresponding author

Correspondence to Letizia Milli.

Ethics declarations

Competing interests

The author declares that she has no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Milli, L. Opinion Dynamic Modeling of News Perception. Appl Netw Sci 6, 76 (2021).



Keywords

  • Opinion dynamics
  • Polarization
  • Fake news