
Investigating the effect of selective exposure, audience fragmentation, and echo-chambers on polarization in dynamic media ecosystems


The degree of polarization in many societies has become a pressing concern in media studies. Typically, it is argued that the internet and social media have created more media producers than ever before, allowing individual, biased media consumers to expose themselves only to what already confirms their beliefs, leading to polarized echo-chambers that further deepen polarization. This work introduces extensions to the recent Cognitive Cascades model of Rabb et al. to study this dynamic, allowing for simulation of information spread between media and networks of variably biased citizens. Our results partially confirm the above polarization logic, but also reveal several important enabling conditions for polarization to occur: (1) the distribution of media belief must be more polarized than the population; (2) the population must be at least somewhat open to changing their beliefs according to new messages they hear; and (3) the media must statically continue to broadcast more polarized messages rather than, say, adjust to appeal more to the beliefs of their current subscribers. Moreover, and somewhat counter-intuitively, under these conditions we find that polarization is more likely to occur when media consumers are exposed to more diverse messages, and that polarization occurs most often when there are low levels of echo-chambers and fragmentation. These results suggest that polarization is not simply due to biased individuals responding to an influx of media sources in the digital age, but also a consequence of polarized media conditions within an information ecosystem that supports more diverse exposure than is typically thought.


The media ecosystem is defined by Benkler, Faris, and Roberts as “the outlets and influencers who form networks, the structure of networks, and the flow of information in networks,” and includes social media, blogs, major news channels, and newspapers (Benkler et al. 2018). It describes which media organizations are serving information to media consumers, and how they interface with each other. The concept evokes an array of media providers who have influence on information distribution and the subsequent opinion formation of a public.

Media studies, with the advent of social media and the internet, have increasingly focused on analyzing the effect of audience fragmentation [the partition of media consumers into many small audiences (Webster and Ksiazek 2012)], and echo-chamber formation [ideologically homogeneous communities who circulate and consume information that agrees with community members’ held beliefs (Guess et al. 2018)] through the process of selective exposure (the tendency of media consumers to consume messages in a systematically biased manner diverging from the composition of available messages) (Cardenal et al. 2019; Messing and Westwood 2014; Karlsen et al. 2020; Knobloch-Westerwick 2014; Arendt et al. 2019). Social media, by making more media sources available and having them serve bias-confirming content, is argued to facilitate selective exposure and subsequent fragmentation and echo-chambers—called the high-choice news avoidance thesis by Karlsen et al. (2020).

Ultimately, this process is argued to lead to political polarization. There are three frequently cited versions of this story: (1) that media follow financial incentives to cater to fragmented audiences, thus pushing their subscribers further into their own views (Iyengar and Hahn 2009); (2) that discussion within echo-chambers moves group members towards the dominant view of the group (Sunstein 2001, 1999), pushing individuals to (relative) “extremes”; or (3) no explanation is given, but polarized attitudes are attributed to echo-chambers (often the argument in mainstream media) (Guess et al. 2018). In this work, we focus on the first and third versions of polarization processes, and study whether these, combined with selective exposure dynamics, lead to a polarized public.

To do this, we extend a simple, but cognitively- and socially-informed, model of public opinion formation that we developed in Rabb et al. (2022), to model both a static and dynamic media ecosystem. Both models allow for the simulation of media producers and consumers, the flow of information between them, and media consumers’ formation of opinion based on the media messages they hear and share. In particular, these models allow us to vary different levels of selectivity among media consumers (how willing they are to adopt beliefs that differ from theirs), different tactics for media producers, more or less fragmented audiences, and more, to evaluate their effect on polarization dynamics. The static ecosystem tests polarization trends under conditions where media consumers do not unsubscribe from media or other consumers who send them disconfirming information, and the dynamic ecosystem allows media consumers to unsubscribe and subscribe to both media sources and other individuals. We simulate opinion formation dynamics within these models and draw conclusions about the conditions necessary to lead to a polarized population.

We find, in both our static and dynamic models, that polarization does not follow the logic that echo-chambers alone cause it. Even when we simulate media consumers who have strong preferences for believing messages that agree with their prior belief, most simulated populations fail to polarize. Since polarization requires agents to adopt beliefs more extreme than their own, some openness to differing beliefs is a necessary condition for polarization in the presence of polarizing media. Further contradicting the thesis, when we measure polarization, the presence of echo-chambers, and audience fragmentation over time as simulations play out, we find that increases in echo-chambers and fragmentation do not correlate with increases in polarization; rather, the opposite. Specifically, 83% of simulations that polarized saw a simultaneous decrease of audience fragmentation, and 82% saw a decrease in echo-chambers.

The largest effect we find is a striking difference in polarization trends between models where media sources continuously broadcast a fixed belief and models where media sources instead try to appeal to the average belief of their subscribers. When media sources appeal to subscribers, the population almost never polarizes—even when subscribers are assumed to only subscribe to news sources in concordance with their initial beliefs, and when both media and citizens are initially polarized. We further find that, when media sources continually broadcast a fixed message, polarization is much more likely to occur, but notably only when the media producers are more polarized than media consumers. Surprisingly, many of our simulations resulted in depolarization of the population when media was less polarized than the population. These effects hold even when there are very few media producers, demonstrating that polarization is not simply a matter of high versus low availability of media: polarized media can drive population polarization even with very few media sources.

In sum, our results partially confirm the theories explaining polarization, but only in the presence of several enabling conditions: (1) the distribution of media belief must be more polarized than the population; (2) the population must be at least somewhat open to changing their beliefs according to new messages they hear; and (3) the media must statically continue to broadcast more polarized messages rather than, say, adjust to appeal more to the beliefs of their current subscribers. Moreover, and somewhat counter-intuitively, under these conditions we find that polarization is more likely to occur when media consumers are exposed to more diverse messages, and that polarization occurs most often when there are low levels of echo-chambers and fragmentation. These results suggest that polarization is not only a consequence of biased individuals responding to a proliferation of media sources on the internet, but also a result of polarized media conditions within an information ecosystem that supports more diverse exposure than is typically thought.

While these models are simplifications of the complex dynamics of opinion formation in reality, they help us arrive at a more nuanced argument about the mechanisms leading to social polarization. Our results suggest that the typical polarization logic can be bounded: certain enabling conditions of people’s belief process, and of the media ecosystem, make the complex process of polarization possible. These models thus offer a framework in which to study a rich and subtle set of dynamics in order to test theories about the causes of political opinion polarization.


Media ecosystem

Media scholars describe the media ecosystem as interacting with media consumers in two ways: through top–down and bottom-up processes. Top–down processes communicate a discursive structure of society and ideology relayed by media organizations and political elites (Webster and Ksiazek 2012; Jost et al. 2009). Traditional media are argued to be top–down, one-way communication regimes where information is selected, curated, and distributed by professional organizations with certain norms and motivations (Shehata and Strömbäck 2021; Benkler et al. 2018). Media organizations are driven by the motivations and goals of those who run them, and may differ in tactics, but are generally interested in maximizing their audience (Benkler et al. 2018; Jost et al. 2009).

Bottom-up processes involve the formation of opinion through interaction with media that involves motivated cognition on the part of individuals and groups (Jost et al. 2009; Benkler et al. 2018). Individuals have social and psychological motives that drive their belief or rejection of certain information. These may involve socioeconomic position, identity, or psychological traits (Jost et al. 2009; Marwick 2018).

Two major developments in the media ecology that have sparked much research are the widespread adoption of social media and the perceived rise of misinformation and disinformation. Since the rise of social media, many have argued that these platforms have led to the rise of misinformation, and for specific reasons. Three media phenomena frequently cited as contributing to misinformation are selective exposure (the tendency of media consumers to consume messages in a systematically biased manner diverging from the composition of available messages (Cardenal et al. 2019; Messing and Westwood 2014; Karlsen et al. 2020; Knobloch-Westerwick 2014)), its resultant audience fragmentation (the partition of media consumers into many small audiences (Webster and Ksiazek 2012)), and echo-chambers (ideologically homogeneous communities who circulate and consume information that agrees with community members’ held beliefs (Guess et al. 2018)).

Social media and the internet have been argued to increase the ability to selectively expose oneself to bias-confirming media (Arendt et al. 2019), thus leading to fragmentation and echo-chambers (Guess et al. 2018; Donsbach and Mothes 2013; Strömbäck et al. 2020; Metzger et al. 2020), and ultimately polarization of political opinion (Iyengar and Hahn 2009; Sunstein 2001, 1999; Guess et al. 2018). Because these platforms allow more people and organizations to become sources of news, anyone can simply follow sources that agree with their prior beliefs, which Karlsen et al. (2020) called the “high-choice news avoidance thesis.”

Separate but related research has theorized that echo-chambers subsequently lead to polarization (Sunstein 2001; Negroponte 1995), a view which has been spread through mainstream media and culture (Guess et al. 2018). There are three frequently cited versions of this story: (1) that media follow financial incentives to cater to fragmented audiences, thus pushing their subscribers further into their own views (Iyengar and Hahn 2009); (2) that discussion within echo-chambers moves group members towards the dominant view of the group (Sunstein 2001, 1999), pushing individuals to (relative) “extremes”; or (3) no explanation is given, but polarized attitudes are attributed to echo-chambers (often the argument in mainstream media) (Guess et al. 2018).

Yet interestingly, countervailing empirical evidence exists against perceived fragmentation (Webster and Ksiazek 2012; Benkler et al. 2018) and the presence of echo-chambers (Messing and Westwood 2014; Cardenal et al. 2019): namely, that audiences are exposed to opposing information either through social networks or a diverse media diet, but choose to believe only information that agrees with their prior beliefs. Even though evidence of ideological and affective polarization, especially in countries like the United States, is widespread, the mechanisms underlying its formation are contested. In light of these facts and challenges, the causal logic of polarization needs to be further investigated.

Opinion diffusion models

Computational models of opinion formation can be used to understand the interplay of complicated mechanisms governing message belief and spreading. Opinion diffusion models are often based on what are called social contagion models (Christakis and Fowler 2013). We focus, in this work, on a form of cognitive contagion model: our name for models that give simulated, networked agents belief-adoption rules informed by some sort of cognitive or psychological process. Several models in this category have been used to try to understand complicated social phenomena such as polarization (Dandekar et al. 2013; DellaPosta et al. 2015; Goldberg and Stein 2018; Sikder et al. 2020), and contagion governed by cognitive dissonance (Li et al. 2020; Rabb et al. 2022).

Our previous work introduced the cognitive cascade class of models that combined a simple cognitive model with a concept in network science called network cascades (Del Vicario et al. 2016). As stories spread through a network like social media, they are said to “cascade” as they are shared. Our cognitive cascade model captured this cascading behavior by modeling institutions, message-passing behavior, and individual cognitive models in each of the networked agents (Rabb et al. 2022). This model was formed around describing the spread of misinformation, and provided insights as well as a simple framework to further investigate the phenomenon.

Review of the cognitive cascade model

The cognitive cascade model that we introduced in Rabb et al. (2022) operates on a graph \(G = (V,E)\) of citizen agents \(v \in V\) who are connected in a network. This model differed from typical network science models, which treat information spread like disease spread, in that each citizen in the graph is assigned its own belief model that both influences which messages the citizen chooses to pass on to their connections and may update in response to new messages they receive. An additional difference was the introduction of institutional agents into the model that are the originators of new messages, to model the media.

In the paper that introduced the model, we focused on a very simplistic belief update model based on cognitive dissonance, whereas in the general model, any cognitive model can be chosen. Dissonance-driven belief update is frequently cited as a factor in studies of misinformation and polarization (Donsbach and Mothes 2013; Flynn et al. 2017; Knobloch-Westerwick 2014; Stroud 2011; Arendt et al. 2019; Guess et al. 2018; Cardenal et al. 2019; Messing and Westwood 2014; Karlsen et al. 2020), so we chose to model this cognitive process for agents. Each citizen in the graph holds beliefs in propositions \(B_i \subseteq {\mathcal {B}}\) from the universe of possible propositions to believe in \({\mathcal {B}}\), that each span the range [0, 1]. We initially studied a discrete belief function, representing belief in a proposition \(B_i\) (e.g. “Covid is real” or “masks keep you safe”) as an integer value \(b \in \{0, 1, \ldots , 6\}\), where i indexes different propositions, \(b=0\) represents strong disbelief in \(B_i\), and \(b=6\) represents strong belief (see Rabb et al. 2022). Messages, which encode belief values in the same propositions \(B_i\) modeled in citizen cognitive models, then interact with agents. Citizens will both update their own beliefs and propagate the messages they receive if they believe them, based on a cognitive function \(\beta\). In our previous work, we selected \(\beta\) so that agents have a high probability of believing a message if it is within distance \(\gamma\) of what they currently believe, where \(\gamma\) is a tunable parameter. The specific function we chose for \(\beta\), which we called the Defensive Cognitive Contagion (DCC) model, was:

$$\begin{aligned} \beta (b_{u,(t+1)}, b_v) = \frac{1}{1+e^{\alpha (|b_{u,t}-b_v|-\gamma )}}. \end{aligned}$$

For more details on the function parameters, see (Rabb et al. 2022). While this model allows for multiple different institutional agents, our cognitive dissonance simulations in that work focused on a single institutional point of origin for all messages, and looked at whether that agent was able to sway the opinions of a heterogeneous group of citizens with different initial beliefs. We tested these dynamics on several random graph topologies: the Erdős-Rényi random graph (Erdős and Rényi 1960), the Watts-Strogatz small world network (Watts and Strogatz 1998), the Barabási-Albert preferential attachment network (Barabási and Albert 1999), and the multiplicative attribute graph (Kim and Leskovec 2011). We found that our model qualitatively replicated polarization of opinion that we observed in polling data: initially uniform distributions of belief were swayed to one opinion, and then could not be swayed by messages from the opposite belief pole. The only way that opinion could change after being initially swayed to one value was to send messages to that group that gradually changed in value from the held belief to another. We also found that, even in network topologies that were highly homophilic (agents only connected to other agents with similar held beliefs), messages from across the belief spectrum reached all agents. This was surprising, as one could imagine that such a network structure, embodying the concept of echo-chambers, would prohibit a group at one belief pole from hearing messages from the opposite pole.
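
As a concrete illustration, the DCC function above can be written in a few lines of Python. The function name is ours, and alpha = 4 and gamma = 1 are illustrative parameter choices taken from the text:

```python
import math

def dcc_belief_probability(b_u, b_v, alpha=4.0, gamma=1.0):
    """Probability that an agent holding belief b_u believes (and adopts)
    a message encoding belief b_v: a logistic drop-off in the belief
    distance |b_u - b_v|, centered at the tolerance gamma."""
    return 1.0 / (1.0 + math.exp(alpha * (abs(b_u - b_v) - gamma)))

# An agreeing message (distance 0) is believed with high probability,
# while a message from the opposite pole (distance 6) almost never is.
p_close = dcc_belief_probability(3, 3)
p_far = dcc_belief_probability(0, 6)
```

With these parameters, p_close is roughly 0.98 while p_far is vanishingly small, reproducing the “defensive” rejection of distant messages.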

In this work, we extend the cognitive cascade model to build out a more sophisticated modeling of the institutional agents so that we can better model the media ecosystem and understand the conditions that might promote or discourage audience fragmentation, echo-chamber formation, and subsequent opinion polarization. Because our focus is on building out the media ecosystem piece of the model, we will retain the simple cognitive dissonance belief model for the citizen agents we studied in Rabb et al. (2022) for our simulations in this work. We extend our media ecosystem model in two steps. As an intermediate step, we first define a static media ecosystem, which has multiple media agents with different rules for broadcasting messages to their subscribers. We then present a model of a fully dynamic media ecosystem, that allows citizen agents to subscribe or unsubscribe to media agents and other citizen agents based on the history of messages that these agents have broadcast or shared in the past.

Static media ecosystem model

Our static media ecosystem model is a natural extension of the cognitive cascade model from (Rabb et al. 2022). As in the base model, the static media ecosystem model consists of N citizen agents (\(v_0,v_1,...,v_{N-1}\)) in a graph \(G=(V,E)\), as well as a set of institutional agents \(I=\{i_0,i_1,...,i_{|I|-1}\}\) to serve as producers in the media ecosystem. Citizen agents are connected to each other in a social network structure (see below for the network topologies we test), as well as to media agents outside of the social graph. Citizen agents hold a set of beliefs in propositions \(B_i \in {\mathcal {B}}\), where \({\mathcal {B}}\) is the universe of all propositions, and have a cognitive belief update function \(\beta\) that governs how they change beliefs.

Institutional agents are initialized with beliefs drawn from distributions \({\mathcal {I}}_{j}\) over a proposition \(B_j\). We also extend (Rabb et al. 2022)’s initial uniform distribution of citizen beliefs to include any general distribution over propositions \(B_j\), as \({\mathcal {C}}_{j}\) for citizen agents \(u \in V\).

At each time step \(t \le T\), institutional agents begin the spread of messages m, which encode a subset of beliefs in the same propositions \(B_i\), through the citizen network by sending messages to their citizen agent subscribers, \(S_i\), where \(S_i\) is a set of citizens connected to institution i. Each institutional agent is equipped with a messaging tactics function \(\varphi : S_{i} \rightarrow M, M \in {\mathcal {M}}\) that governs what messages they send to subscribers at each time step t. These tactics can be simple or complicated processes, though in the present work, the media tactics functions we simulate are only very simple decision processes, as described below.

As messages flow through the network, citizens who they are sent to have a chance to believe them, with a probability governed by \(\beta\). Each citizen agent who believes a message updates their belief in the propositions encoded by m, adopting the values in m, and subsequently shares m with their connections in G. The spread process thus continues as citizens believe and share messages, and update their beliefs accordingly.
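
This spread process can be sketched as a breadth-first cascade. All names here are our illustration rather than the authors' implementation, and for simplicity the sketch assumes each citizen processes a given message at most once:

```python
import random
from collections import deque

def spread_message(graph, beliefs, believe_prob, subscribers, b_message, rng=random):
    """One message cascade. The institution's subscribers are exposed first;
    each citizen who believes the message adopts its encoded belief and
    shares it with their neighbors in the social graph. For simplicity,
    each citizen processes a given message at most once."""
    queue = deque(subscribers)
    exposed = set(subscribers)
    while queue:
        u = queue.popleft()
        if rng.random() < believe_prob(beliefs[u], b_message):
            beliefs[u] = b_message           # update belief per beta
            for v in graph[u]:               # share with connections
                if v not in exposed:
                    exposed.add(v)
                    queue.append(v)
    return beliefs
```

Passing the DCC function described earlier as `believe_prob` recovers the dissonance-driven dynamics; any other cognitive function can be swapped in.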

In order to begin to explore audience fragmentation in the present work, we extend the simple discrete belief model studied in Rabb et al. (2022) in the most straightforward way to include the presence of media agents. We assume that citizen agents subscribe to media agents whose beliefs are close to their initial beliefs using \(\epsilon\), the belief distance threshold, where a smaller value of \(\epsilon\) creates a more fragmented audience. We look at varying \(\epsilon\) as we vary graph topology, distribution of media agent beliefs, and sensitivity of citizen agents to disconfirming messages (controlled by \(\gamma\), as described above). We will also describe below how we used graph topology to simulate the presence of echo-chambers (see “Static model experiments”).
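
The epsilon-based subscription rule can be sketched as follows (function and variable names are hypothetical):

```python
def initial_subscriptions(citizen_beliefs, media_beliefs, epsilon):
    """Subscribe each citizen to every media agent whose initial belief is
    within epsilon of the citizen's own. Smaller epsilon yields narrower,
    more fragmented audiences; larger epsilon yields broad, overlapping ones."""
    subscribers = {m: set() for m in media_beliefs}
    for u, b_u in citizen_beliefs.items():
        for m, b_m in media_beliefs.items():
            if abs(b_u - b_m) <= epsilon:
                subscribers[m].add(u)
    return subscribers
```

For example, with citizens at beliefs 0, 3, and 6 and media agents at 0 and 6, epsilon = 1 leaves the middle citizen subscribed to nothing, while epsilon = 6 subscribes everyone to everything.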

The static media ecosystem model is so named because it does not allow citizen agents to drop or create connections to other citizen or media agents based on any criteria. This cuts out part of the polarization process, as a key part of the logic is that individuals who receive too much disconfirming information—from either media or other people—cut ties with them to avoid the discomfort. However, to build a model that allows individuals to subscribe or unsubscribe to media agents based on belief and message history requires a much more complex model of internal belief state. We describe our attempt at capturing a more general model next, as well as the simplifications we will use in our experiments.

Dynamic media ecosystem model

To allow citizen agents to form and break connections with other agents based on the messages they receive, we added the notion of internal representations to the model. Each citizen agent u has an internal representation of other agents v formed by keeping a memory of messages that they receive from that agent. If an agent has a memory of length r, an internal representation \(\phi _{u,v}\) can span belief propositions \(B_i \in {\mathcal {B}}\), and be represented as a matrix where columns are propositions \(B_i\) and rows are the last r values of \(B_i\) an agent has been exposed to via messages from v:

$$\begin{aligned} \begin{bmatrix} b_{0,0} &{} b_{0,1} &{} ... &{} b_{0,|{\mathcal {B}}|} \\ b_{1,0} &{} b_{1,1} &{} ... &{} b_{1,|{\mathcal {B}}|} \\ ... \\ b_{r-1,0} &{} b_{r-1,1} &{} ... &{} b_{r-1,|{\mathcal {B}}|} \end{bmatrix} \end{aligned}$$

Each value in row i column j in the matrix therefore represents the ith most recent value of proposition \(B_j\) that agent u has heard from agent v.
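
One straightforward way to implement this rolling memory is a fixed-length deque per proposition (a sketch; class and method names are ours):

```python
from collections import deque

class InternalRepresentation:
    """Rolling memory phi_{u,v}: the last r belief values heard from a
    particular agent v, kept separately for each proposition. Row 0 of the
    conceptual matrix corresponds to the most recent value heard."""

    def __init__(self, propositions, r):
        self.memory = {p: deque(maxlen=r) for p in propositions}

    def observe(self, message_beliefs):
        """Record a received message; a value of -1 marks a proposition
        the message does not encode, and is skipped."""
        for p, value in message_beliefs.items():
            if value != -1:
                self.memory[p].appendleft(value)
```

The `maxlen` bound makes old observations fall off automatically once r values have been stored.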

This required that we augment the idea of a message from simply containing an encoding of beliefs in propositions from \({\mathcal {B}}\) to one containing beliefs, a source, and a set of senders. A message object can now be described as \(m_i=\langle {{\textbf {b}}}_m, o, D \rangle\), where \({{\textbf {b}}}_m = [\ b_0\ b_1\ldots \ b_{|{\mathcal {B}}|}\ ]\) is a vector of belief values \(b_i\) across propositions \(B_{i}\) (where \(b_i\) takes a value from \(B_i\) if the message encodes the proposition \(B_i\), and is otherwise set to \(-1\)), o is the source (\(o \in V \cup I\)), and \(D \subseteq V\) is the set of agents who have shared the message thus far.

Agents’ internal representations of institutions are thus constructed from the source of messages, since institutions are the originators of all messages in our model. Their internal representations of other citizens are constructed from the senders of messages. Keeping track of the original message source allows citizens to additionally connect to institutions that they may never have been connected to, but have heard about from messages that reach them through the social network. The same is true for connecting to other citizen agents, as keeping an internal representation of each citizen agent in the entire chain of message spread (the set of senders) allows citizens to “discover” others with whom they may never have had a connection.

Each agent can then connect or disconnect based on an evaluation of their internal representation of another agent. For example, a function \(X(u,\phi _{u,v})\) could evaluate the internal representation by summing the distances between values in \(\phi _{u,v}\) and u’s held beliefs about \(B_i \in {\mathcal {B}}\). Using a threshold \(\zeta\), an agent u connects to v when its evaluation of the internal representation meets or exceeds the threshold (\(X(u,\phi _{u,v}) \ge \zeta\)), and disconnects when it falls below. We call the function X and the threshold \(\zeta\) together the selection criteria for a given agent. Different citizen agents could have differing selection criteria for types of institutions, types of other citizens, or any variation.
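
A sketch of one possible selection criterion follows. We use a closeness score in [0, 1] so that higher evaluations favor connection; the specific score is our illustrative choice, not a function specified in the model:

```python
def representation_score(own_beliefs, memory):
    """Evaluation X of an internal representation: mean closeness (in [0, 1])
    between the agent's held beliefs and the remembered values heard from v.
    A score of 1 means every remembered value matched exactly."""
    distances = [abs(own_beliefs[p] - b)
                 for p, values in memory.items() for b in values]
    if not distances:
        return 0.0
    return 1.0 / (1.0 + sum(distances) / len(distances))

def should_connect(own_beliefs, memory, zeta):
    # Connect when the evaluation meets or exceeds the threshold zeta.
    return representation_score(own_beliefs, memory) >= zeta
```

An agent remembering only agreeing values scores 1.0 and connects under any reasonable zeta, while one remembering values from the opposite belief pole scores low and disconnects.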


To test the polarization logic, we chose certain model parameters to serve as corollaries of selective exposure, audience fragmentation, and echo-chambers, and varied these with different initial conditions and media tactics to determine what configurations led to opinion polarization. Details can be found in Sections “Static model experiments” and “Dynamic model experiments.”

These provided the basis for simulation experiments that were developed and run with NetLogo 6.1 (Wilensky 1999) and Python 3.8 scripts that interfaced with the simulation. Experiments were designed and run with NetLogo’s BehaviorSpace extension. Source code and replication instructions are publicly available on GitHub. Results were analyzed using Python 3.8, and the analysis code is also available on GitHub.

Citizen belief function

As in Rabb et al. (2022), we use a \(\beta\) function that mimics cognitive dissonance resolution when agents receive messages. For the mathematical details, refer to Eq. 1. But for intuition, this function, parameterized with \(\alpha =4\) and a variable \(\gamma\) (described below), yields a high probability of citizen agents adopting a belief if its distance from their prior belief is \(\gamma\) or less. We follow the extensive literature citing dissonance-driven selective exposure in media consumers (Donsbach and Mothes 2013; Flynn et al. 2017; Knobloch-Westerwick 2014; Stroud 2011; Arendt et al. 2019; Guess et al. 2018; Cardenal et al. 2019; Messing and Westwood 2014; Karlsen et al. 2020) to motivate our choice of this function. For a more detailed motivation for this function, please refer to Rabb et al. (2022).

Belief distributions

For simplicity of modeling and analysis, in our experiments we mirror (Rabb et al. 2022) and model only one proposition, B. Though in reality beliefs interact with each other (changing one may affect another) and may be composed of several propositions, modeling just one proposition allows for a simple, preliminary investigation of the dynamics under simple conditions.

We also mimic the discrete distributions studied in Rabb et al. (2022), setting B to take integer values in \(\{0, 1, \ldots , 6\}\) and setting \({\mathcal {I}}_{j}\) and \({\mathcal {C}}_{j}\) to be discrete distributions over those same values. Since we only model one proposition, our experiments do not differentiate between different initial belief distributions across propositions. Moreover, for simplicity, we draw all institutional agent beliefs from the same distribution, \({\mathcal {I}}\), and all citizen agent beliefs from the same distribution, \({\mathcal {C}}\).

Though the distributions can be arbitrary, in the experiments detailed below we considered three types of distributions from which to draw initial beliefs for citizen and institutional agents: we set \({\mathcal {I}}\) and \({\mathcal {C}}\) to be uniform; truncated normal \({\mathcal {N}}(\mu _i, \sigma _i, a, b)\) (where a and b are the lower and upper limits); or polarized, where the distribution draws from two truncated normal distributions \({\mathcal {P}} = {\mathcal {N}}_l(1, 1, 0, 6) \cup {\mathcal {N}}_u(5, 1, 0, 6)\). Note that all belief values were drawn from these continuous distributions and converted into integers to discretize the distributions (Fig. 1).

Fig. 1: An example of initial distributions over B (from left to right: uniform \({\mathcal {U}}\), normal \({\mathcal {N}}\), polarized \({\mathcal {P}}\))
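
The three initial-belief distributions can be sampled with simple rejection sampling. The polarized mixture below uses the parameters given above; the choice of mu = 3, sigma = 1 for the normal case and rounding as the discretization step are illustrative assumptions:

```python
import random

def truncated_normal(mu, sigma, a, b, rng=random):
    """Sample N(mu, sigma) restricted to [a, b] by rejection sampling."""
    while True:
        x = rng.gauss(mu, sigma)
        if a <= x <= b:
            return x

def draw_belief(kind, rng=random):
    """Draw one discretized belief value in {0, ..., 6}."""
    if kind == "uniform":
        x = rng.uniform(0, 6)
    elif kind == "normal":                 # mu = 3, sigma = 1 assumed
        x = truncated_normal(3, 1, 0, 6, rng)
    elif kind == "polarized":              # equal-weight two-mode mixture
        x = truncated_normal(rng.choice([1, 5]), 1, 0, 6, rng)
    else:
        raise ValueError(kind)
    return int(round(x))                   # discretize onto the 7-point scale
```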

Institutional messaging patterns

We modeled the messaging patterns in our experiments off of simple heuristics and tactics observed in media studies. One pattern, broadcast, makes institutional agents send a message encoding their belief \(b_i\) at each time step (\(\varphi (S_{i,t}) = \langle [\ b_i\ ], i, \{ i \} \rangle\)). The heuristic patterns allow institutional agents to assess the mean or median belief of their subscribers at time step t and send a message encoding that belief (e.g. mean: \(\varphi (S_{i,t}) = \langle [\ b\ ], i, \{ i \} \rangle\) where \(b = \sum _{s \in S_{i,t}}{b_s} / |S_{i,t}|\)). The former messaging pattern may represent media organizations that are attempting to push a certain view (Benkler et al. 2018; Marwick 2018), and the latter, organizations that try to measure audience metrics to maximize the reception of their messages (Webster and Ksiazek 2012).
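
These tactics are simple to express in code. The sketch below returns only the belief value to be encoded in the outgoing message, and the function names are ours:

```python
from statistics import mean, median

def broadcast_tactic(b_i):
    """Institution i always sends its own fixed belief b_i."""
    return lambda subscriber_beliefs: b_i

def mean_tactic(subscriber_beliefs):
    """Appeal-to-audience heuristic: send the mean subscriber belief,
    rounded back onto the discrete belief scale."""
    return round(mean(subscriber_beliefs))

def median_tactic(subscriber_beliefs):
    """As above, but using the median subscriber belief."""
    return round(median(subscriber_beliefs))
```

Note how the heuristic tactics track the audience: if a station's subscribers sit at beliefs 0, 6, and 6, the mean tactic sends 4 while the median tactic sends 6.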

Simulation specifications

For both the static and dynamic model simulations, at each time step, institutional agents send messages in the same order. In other words, if there are 10 institutional agents, \(i_0\) sends a message first, then \(i_1\), then \(i_2\), etc. until \(i_9\), and that order is the same for each t.

Moreover, in our simulations, citizen agents update their belief (or not) immediately after being exposed to a message. Therefore, a citizen may update its beliefs several times during one time step t.

Our dynamic model employs a slightly different method for having citizen agents update their connections, not doing so immediately upon changing belief, but rather at the end of a time step. The details are discussed below in the section “Dynamic model experiments.”

Static model experiments

We ran simulations on a graph of citizen agents, \(N=100\), connected in a social network by the Barabási-Albert preferential attachment process. The Barabási-Albert network was parameterized by \(m=3\), and seeded with a star graph with \(m+1\) nodes. In contrast to the experiments in Rabb et al. (2022), where dynamics were tested across four different graph topologies, we instead focused on only the Barabási-Albert network. As we are simulating interactions of a social network, the Barabási-Albert process leads to networks that most closely resemble human social networks, with low diameters, power law degree distributions, and more (Leskovec and Faloutsos 2007).

We simulated \(|I|=20\) media agents sending messages to citizen agents over \(T=100\) time steps. As the simulations quickly become computationally intensive as the number of citizen and institutional agents increases, we chose relatively small numbers for our experiments. Though small, the choice of 100 citizen agents allows for a diverse spread of opinion and connection among citizen agents. Similarly, 20 institutional agents were enough to allow the initial institutional belief distribution, chosen from parameterized distributions, to reliably span B while allowing for variation that makes results more robust. Below, in our analysis section, we test for the resilience of our results to our small N, repeating some experiments with larger values.

Our experiments aggregate simulation results across 5 trials per parameter combination (with 486 combinations across parameters in Tables 1 and 2) and analyze the mean time series of the measures over time. Additionally, each unique parameter combination was tested on two random graphs (5 trials on each). The trends we report are observed across all such results.

Graph topologies and homophily

Our citizen network construction method, via preferential attachment, was tuned to be more or less homophilic with a scalar parameter \(h_G\). This scalar, \(h_G\), enters the agent connection process that yields the probability of connection p between agents u and v. In conjunction with the Barabási-Albert process, while u is being added to \(G=(V,E)\), \(h_G\) modifies the connection probability as follows:

$$\begin{aligned} p((u,v) \in E) = \frac{h_G \cdot k_v}{\sum _{w \in V}{k_w}} \end{aligned}$$

If homophily in the graph is desired, \(h_G\) could be calculated with a function \(h(b_u,b_v)\), a scalar between 0 and 1 based on the similarity of agent beliefs. Setting \(h_G=1\) would yield a graph with no added homophily. For our homophilic graphs, we set \(h(b_u,b_v)\) as a linear combination of agent beliefs:

$$\begin{aligned} h(b_u, b_v) = \frac{1}{1 + (|b_u-b_v|)} \end{aligned}$$

Importantly, the seed graph for the homophilic Barabási-Albert process was not a star graph, as is the case for a graph constructed without homophily. In the homophilic case, the seed graph is a complete graph on \(m\) nodes.
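The homophilic construction process can be sketched as follows. This is an illustrative reconstruction, not the paper's actual code: we interpret the modified attachment rule as making the probability of a new node u attaching to an existing node v proportional to \(h(b_u,b_v) \cdot k_v\), and seed with a complete graph on m nodes as described above.

```python
import random

def homophily_weight(b_u, b_v):
    """h(b_u, b_v) = 1 / (1 + |b_u - b_v|): closer beliefs, higher weight."""
    return 1.0 / (1.0 + abs(b_u - b_v))

def homophilic_ba_graph(beliefs, m=3):
    """Homophily-weighted Barabasi-Albert sketch. `beliefs` maps each
    node to its belief; the first m nodes seed a complete graph, and each
    later node attaches to m distinct existing nodes with probability
    proportional to homophily_weight * degree (our interpretation of the
    modified attachment rule)."""
    nodes = list(beliefs)
    edges = set()
    seed = nodes[:m]
    for i, u in enumerate(seed):          # complete seed graph on m nodes
        for v in seed[i + 1:]:
            edges.add((u, v))
    degree = {n: 0 for n in nodes}
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    for u in nodes[m:]:                   # homophily-weighted attachment
        existing = [v for v in nodes if degree[v] > 0 and v != u]
        weights = [homophily_weight(beliefs[u], beliefs[v]) * degree[v]
                   for v in existing]
        targets = set()
        while len(targets) < min(m, len(existing)):
            targets.add(random.choices(existing, weights=weights)[0])
        for v in targets:
            edges.add((u, v))
            degree[u] += 1
            degree[v] += 1
    return edges
```

Setting `homophily_weight` to a constant 1 recovers the plain (non-homophilic) preferential attachment probabilities.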

Selective exposure, fragmentation, and echo-chambers

To test the logic of polarization in our experiments, we varied certain model parameters we viewed as related to selective exposure, fragmentation, and echo-chambers to find conditions that led to polarization. Variations in parameters can be found in Table 1.

Table 1 Experimental parameter values for those chosen to represent audience fragmentation, selective exposure, and echo-chambers

The model parameter we chose to tune audience fragmentation was \(\epsilon\), the threshold that agents use to subscribe to different media. A lower value of \(\epsilon\) means that agents subscribe only to media increasingly similar to what they already believe. Lower values thus lead to an initial condition in the system that represents a fragmented audience.

To tune selective exposure, we varied the translation parameter \(\gamma\) in the DCC function governing belief update. Because cognitive dissonance reduction is argued to underlie selective exposure (Iyengar and Hahn 2009), our previous dissonance-based update function can serve this purpose. The DCC translation parameter represents where the fuzzy threshold lies given a belief difference: a lower value (e.g. 1) means that agents increasingly believe only messages close to their prior belief (e.g. within a distance of 1).

Echo-chambers were simulated using the homophily of the graph itself through the parameter \(h_G\). Increasing the homophily in the graph structure increases the presence of echo-chamber-like structures in the network—neighborhoods of agents who already agree with each other.

Additionally, as described above, we varied citizens’ initial belief distribution, media’s initial belief distribution, and media messaging patterns to determine whether the effects of the fragmentation and exposure parameters were robust across different initial conditions. These parameter choices are described in Table 2.

Table 2 Experimental parameter values for those chosen to vary initial model conditions: the media messaging tactic \(\varphi\), initial media distribution \({\mathcal {I}}\), and initial citizen belief distribution \({\mathcal {C}}\)

Dynamic model experiments

Just as with the static ecosystem model, we chose certain parameterizations of the dynamic media ecosystem model that we saw as useful for testing the logic of polarization. By running simulations of these model dynamics while varying parameter values, we investigated the resultant dynamics to try to learn about the mechanisms underlying audience fragmentation, selective exposure, echo-chambers, and polarization.

For these simulations, the citizen network was initialized as a Barabási-Albert preferential attachment network (Barabási and Albert 1999) with \(N=50\) agents and \(|I|=15\) institutions. The Barabási-Albert network was parameterized by \(m=3\), and seeded with a star graph with \(m+1\) nodes. The Barabási-Albert graph was chosen as the base network topology for these experiments for the same reason as in the static model experiments, because it captures essential aspects of real social networks: low diameter, a power law degree distribution, and more (Leskovec and Faloutsos 2007).

Initial citizen beliefs were drawn from \({\mathcal {C}}\), and institutional stances were drawn from \({\mathcal {I}}\). Because citizen connections to institutions were governed by \(X(u,\phi _{u,i})\), initial subscribers \(S_i, i \in I\) were not determined with \(\epsilon\) as in the static model, but by \(X(u,\phi _{u,i})\) where \(\phi _{u,i}\) was a \(r \times |{\mathcal {B}}|\) matrix initialized with values drawn from \({\mathcal {I}}\). This allowed each citizen agent to initially subscribe to institutional agents whose leaning sufficiently matched their initial belief as governed by \(\zeta _i\).

Since this model contains two types of agents (citizens and institutions), selection criteria are defined for each group: two thresholds, \(\zeta _c\) and \(\zeta _i\), and two functions, \(X_c(u,\phi _{u,v})\) and \(X_i(u,\phi _{u,v})\), governing social connections (for citizens) and subscriptions (to institutions). This allows the model to capture both audience fragmentation (citizens subscribing only to certain institutions) and echo-chambers (citizens having only certain social friends).

Simulations were run, for each combination of experimental parameters described below, across 100 time steps. Each combination was additionally simulated 5 times on 5 randomly generated graphs, for a total of 25 simulations per parameter combination. Again, only a limited number of agents and runs was chosen because of computational power limitations. The time to run simulations increases drastically as these values increase, so we were limited in our ability to model large networks with many simulation iterations. We do, however, also test the dynamic model below with a different ratio of media to citizens to test the robustness of our results to different network conditions.

We note that for nearly all update schema that one might want to consider, there is no real way to make the simulations order invariant: there is always a chance that re-ordering the updates for belief in the message, in the information source itself, and in subscription preferences will change the outcome of the simulation. In the interests of computational efficiency, in the below simulations we adopted a news day cycle model where new messages are spread and beliefs and internal representations are updated first, then only at the end of the “day” (the time step) are new decisions to subscribe and unsubscribe made. Then the cycle starts again with institutional agents broadcasting new messages.

We varied several experimental parameters governing the processes of selective exposure and echo-chamber formation. The complete list of experimental parameters and their associated phenomena from the thesis is detailed in Table 3.

Table 3 Experimental parameter values for those chosen to represent audience fragmentation and selective exposure

Selection criteria

We also chose a simple function \(X(u,\phi _{u,i})\) to govern subscription and social connections for this model experiment:

$$\begin{aligned} X(u,\phi _{u,i}) = \frac{\sum _{h \in \phi _{u,i}}{\beta (u,h)}}{|\phi _{u,i}|}. \end{aligned}$$

Each evaluation X is the mean of the values in the internal representation \(\phi\), where each value is the output of the belief function \(\beta\) applied to agent u’s belief and an element of \(\phi\). In essence, agent u’s evaluation of another agent v is u’s average probability of believing (governed by the dissonance function) the messages it has heard from v (as not all messages v shares would reach u).
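A minimal sketch of this evaluation follows. The exact sigmoid form of \(\beta\) here is an assumption for illustration (the paper's Eq. 1 is not reproduced in this section); we chose a logistic form that matches the properties stated below: at \(\gamma =0\), a message at distance 0 yields 0.5, and the output never reaches 1.

```python
import math

def beta(distance, gamma):
    """Assumed logistic stand-in for the paper's sigmoid belief function:
    probability of believing a message at `distance` from the agent's
    belief, translated by gamma. beta(0, 0) == 0.5, and beta never
    reaches 1, matching the properties described in the text."""
    return 1.0 / (1.0 + math.exp(distance - gamma))

def selection_X(own_belief, internal_repr, gamma):
    """X(u, phi_{u,i}): the mean of beta over the remembered message
    beliefs in u's internal representation of another agent, i.e. u's
    average probability of believing what it has heard from that agent."""
    return sum(beta(abs(own_belief - h), gamma)
               for h in internal_repr) / len(internal_repr)
```

An agent then connects (or stays connected) to another agent only when `selection_X` meets the relevant threshold \(\zeta _c\) or \(\zeta _i\).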

By varying \(\zeta _c\) and \(\zeta _i\), the selection criteria for connecting to citizen and institutional agents, respectively, were changed. As each \(\zeta\) value increased, the likelihood of connecting to other agents decreased, and any agent u would more strictly connect with agents with whom u’s internal representation matched u’s held beliefs. Similarly, changes in \(\gamma\) also affected both belief update and connection. For belief update, decreasing \(\gamma\) leads agents to only believe messages that are increasingly close to their held beliefs. But this also affects connectivity, because the selection function X calculates an agent’s average dissonance given their internal representation of another agent using the same \(\beta\) function as in Eq. 1, which includes \(\gamma\). Therefore, as \(\gamma\) decreases, making agents experience stronger dissonance pressures to hold onto their beliefs, any agent u is increasingly likely to connect with agents who have been sharing messages matching u’s internal beliefs.

Importantly, we did not choose \(\zeta =0\) or \(\zeta =1\) to test in our simulations for specific reasons. When \(\zeta =0\), every agent connects to every other agent, as every evaluation of internal representations using the \(\beta\) function is greater than or equal to 0. This is both an unrealistic scenario and computationally expensive. Conversely, when \(\zeta =1\), agents form no connections, regardless of other parameterizations of \(\beta\). Because of the sigmoid \(\beta\) function we use, and how we parameterize it, no output from the function ever equals 1, even if the value is very close (e.g. 0.999). A scenario with no agent connections is also not useful. By simulating both \(\zeta\) values as 0.25, 0.5, and 0.75, we were able to span agents who are not very selective to those who are very selective. There may be transition points at \(\zeta\) values we did not test, based on the values that \(\beta\) can take with different parameterizations of \(\gamma\), but testing a finer range of values was computationally infeasible. This remains future work.

Citizen memory capacity

We also varied r, the citizen memory capacity, to model scenarios where agents may change their connections impulsively, or be more measured—testing combinations where \(r=1\), 2, or 10. At a value of 1, agents select their connections solely based on the last message they heard. As r increases, agents become more measured in their selection of connections, primarily because our selection criterion uses the mean of agent memory, and a mean over a larger set is more robust to outliers and changes slowly.

Institutional messaging tactics

Moreover, by changing \(\varphi\) (the institutional tactic), we tested if different media tactics, given varying selection criteria and dissonance levels, led to fragmentation, echo-chambers, and polarization. We used the same “broadcast” and “appeal mean” media tactics as described in the static media ecosystem model.

Initial belief distributions

Finally, initial citizen and media distributions were varied so that initial population conditions could be tested to assess their contribution to the levels of fragmentation, echo-chambers, and polarization. Both citizens’ and institutions’ initial belief in B was drawn from either a uniform, normal, or polarized distribution as was described in the static model.


Analyses & measures

To experimentally test the logic of polarization, we primarily measured audience fragmentation, echo-chambers, and polarization at each time step t. Each of these allowed us to reason about the thesis as the simulation progressed.

We measured polarization using a metric developed by Musco et al. (2018), modified for our model:

$$\begin{aligned} P(G) = \sum \limits _{u \in V}{\overline{{{\textbf {b}}}}_u^2} = \overline{{{\textbf {b}}}}^T\overline{{{\textbf {b}}}}, \end{aligned}$$

where \({{\textbf {b}}}\) is the vector of all beliefs \(b_u\) for \(u \in V\) (each entry \({{\textbf {b}}}_u\) is \(\frac{b_u}{max(B)}\), normalizing to a [0, 1] scale as in Musco et al. (2018)), and \(\overline{{{\textbf {b}}}}\) is the mean-centered vector of beliefs. This measure and the citizen belief distribution were recorded at each time step, and their means across simulation runs were calculated.
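A minimal sketch of this computation, assuming the belief scale runs from 0 to 6 so that max(B) = 6 (consistent with the belief values used elsewhere in the text):

```python
def polarization(beliefs, max_b=6):
    """P(G): sum of squared entries of the mean-centered, normalized
    belief vector (Musco et al. 2018, as adapted in the text). Each
    belief is divided by max(B), here assumed to be 6, to lie in [0, 1]."""
    normalized = [b / max_b for b in beliefs]
    mean = sum(normalized) / len(normalized)
    return sum((x - mean) ** 2 for x in normalized)
```

A uniform population (all beliefs equal) scores 0, while a population split evenly between the two extremes maximizes the measure.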

The presence of echo-chambers in the graph was measured by a calculation of the global homophily, an average of a measure of each agent’s belief distance from its neighbors:

$$\begin{aligned} H(G) = \frac{1}{N} \sum _{u \in N}{\frac{\sum _{v \in N_c(u)}{|b_u-b_v|}}{|N_c(u)|}}, \end{aligned}$$

where \(N_c(u)\) is the citizen neighborhood of agent u. This measure decreases as homophily increases. To transform this measure into one which increases as homophily increases, we simply report \(\frac{1}{1+H(G)}\). This also makes values for the measure span [0,1].

Similarly, the degree to which the audience in the graph is fragmented was measured by a similar calculation, but modifying the neighborhood to only include institutional agents—the institutions to which agent u is a subscriber:

$$\begin{aligned} F(G) = \frac{1}{N} \sum _{u \in N}{\frac{\sum _{v \in N_i(u)}{|b_u-b_v|}}{|N_i(u)|}}, \end{aligned}$$

where \(N_i(u)\) is the institutional neighborhood of u. We similarly transform the fragmentation measure to increase as fragmentation increases, and to span [0,1], by reporting \(\frac{1}{1+F(G)}\).
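Since H(G) and F(G) differ only in which neighborhood is averaged over, both can be sketched with one helper. How agents with empty neighborhoods are handled is not specified in the text; skipping them is an assumption of this sketch.

```python
def mean_neighbor_distance(beliefs, neighborhoods):
    """Average over agents of the mean belief distance |b_u - b_v| to
    their neighbors. With citizen neighborhoods N_c this is H(G); with
    institutional (subscription) neighborhoods N_i it is F(G). Agents
    with empty neighborhoods are skipped (our assumption)."""
    per_agent = [
        sum(abs(beliefs[u] - beliefs[v]) for v in nbrs) / len(nbrs)
        for u, nbrs in neighborhoods.items() if nbrs
    ]
    return sum(per_agent) / len(per_agent)

def transformed(measure):
    """Report 1 / (1 + measure), so the value increases with homophily
    (or fragmentation) and spans (0, 1]."""
    return 1.0 / (1.0 + measure)
```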

As polarization is our primary dependent variable, we use that measure to drive our main analysis of results. To measure trends in polarization over time, we fit linear regression lines to the polarization results for each simulation trial. Using the slope (\(\beta\)) and intercept (\(\alpha\)) of those lines (this \(\alpha\) and \(\beta\) are not the same as in Eq. 1, but are kept as they are common conventions in regression analysis), in conjunction with the raw data from the polarization measure (specifically, the initial polarization value \(P(G)_0\)), we partitioned polarization results into four result sets: polarized, depolarized, remained polarized, and remained nonpolarized. However, between the static and dynamic model experiments, we used different criteria for categorization.

For the static model, we began by breaking results into two categories: results that polarized (including polarized and remained polarized) and those that were nonpolarized (any that depolarized or remained nonpolarized). Results were classified as polarized if the slope of the fitted regression was positive above a threshold, or if the slope was near zero but the intercept was sufficiently high (\(\beta \ge 0.01\), or \(-0.01 \le \beta \le 0.01\) and \(\alpha \ge 8.5\)). Results were classified as nonpolarized, or as failing to polarize, if they had a sufficiently negative slope, or a near-zero slope and an intercept below the threshold (\(\beta \le -0.01\), or \(-0.01 \le \beta \le 0.01\) and \(\alpha < 8.5\)). Note that nonpolarized results also include those which depolarized the population (\(\beta \le -0.01\)). A slope threshold of 0.01 was chosen as a conservative threshold to include even results that somewhat polarized over time. An intercept threshold of 8.5 (half of the maximum polarization value observed from experiments) was used to include results with near-zero slopes but which started and remained polarized or nonpolarized.

For the dynamic model, we used the full set of four categories and slightly more involved categorization criteria. Results polarized if the regression slope was \(\beta \ge 0.01\), or if \(-0.01< \beta < 0.01\) but \(P(G)_0 < 5.5\) and \(\alpha \ge 5.5\). Results depolarized if they had a sufficiently negative slope \(\beta \le -0.01\), or if \(-0.01< \beta < 0.01\) but \(P(G)_0 \ge 5.5\) and \(\alpha < 5.5\). Results that remained polarized had a slope between \(-0.01\) and 0.01, but an intercept and initial P(G) value both above 5.5. Those that remained nonpolarized had the same slope criteria, but an intercept and \(P(G)_0\) below 5.5. A slope threshold of 0.01 was again chosen to conservatively include even results that somewhat polarized over time. The threshold of 5.5 was determined by finding the empirical average initial polarization values for 100 graphs drawn from \({\mathcal {U}}, {\mathcal {N}}\), and \({\mathcal {P}}\), respectively. The mean initial polarization for 100 graphs drawn from \({\mathcal {U}}\) was \(\mu =5.411\), \(\sigma ^2=0.725\); for \({\mathcal {N}}\), \(\mu =1.456\), \(\sigma ^2=0.078\); and for \({\mathcal {P}}\), \(\mu =5.932\), \(\sigma ^2=0.167\). To distinguish between polarized and uniform distributions of belief, we chose 5.5 as a transition threshold.
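The dynamic-model criteria can be written directly as a classifier over the fitted slope, intercept, and initial polarization of a run; a minimal sketch:

```python
def categorize_run(slope, intercept, p0, eps=0.01, thresh=5.5):
    """Classify one run by the dynamic-model criteria: `slope` and
    `intercept` come from a linear fit to P(G) over time, and
    p0 = P(G)_0 is the initial polarization value."""
    if slope >= eps or (abs(slope) < eps and p0 < thresh and intercept >= thresh):
        return "polarized"
    if slope <= -eps or (abs(slope) < eps and p0 >= thresh and intercept < thresh):
        return "depolarized"
    if intercept >= thresh and p0 >= thresh:
        return "remained polarized"
    return "remained nonpolarized"
```

Note that a near-zero slope with a low \(P(G)_0\) but a high intercept is counted as polarized: this is the rapid-transition case discussed below, which the regression intercept captures but the initial value does not.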

Our inclusion of \(P(G)_0\) as part of the categorization criteria for the dynamic model was driven by results which polarized or depolarized very quickly, within the first few time steps. In these results, polarization levels may start above or below the threshold, but then quickly transition to the other side. Linear regression intercepts turned out to not represent this shift, instead lying on the side of the threshold which the polarization quickly switched to. By comparing the initial polarization with the regression’s representation of polarization over time, we could include results that rapidly polarized and would not be captured by the regression data alone.

For the static model, when partitioning results into polarized and nonpolarized results, we fit regression lines to the mean of polarization data across 5 simulation runs, each using the exact same network and parameters. However, we only did so because the polarization classification of the mean across simulation runs was representative of the classifications for each individual run. We measured how often, out of five simulations, trials for a given parameter combination yielded the same polarization result as the mean across trials. We found that 5/5 trials matched the mean classification in 62% of our results, 4/5 in 20% of results, 3/5 in 13%, 2/5 in 4% and 1/5 in 1%. This gave us confidence that classifying the means of the trials would be representative.

For the dynamic model, the mean across simulation runs was similarly representative of the polarization classification of individual runs. Across the five simulation trials for each parameter combination and network topology, we found that 5/5 trials matched the mean classification in 65% of results, 4/5 in 17% of results, 3/5 in 10%, 2/5 in 5%, and 1/5 in 2%. However, to yield power to our analyses, we decided to conduct the analysis of the dynamic model across all simulation runs, not taking the mean. If results are similar to the static model, then it provides evidence that our results are robust to different analysis strategies.

Static media model results

First, we examine results from the static model, which does not allow citizen agents to rewire their connections based on the messages they receive. Notably, when fragmentation and the presence of echo-chambers are low, this model gives us insight into cases where citizens may be forced to receive a greater diversity of messages than in the dynamic model, since in this model we directly control those attributes of the network.

Fragmentation, echo-chambers, and exposure do not cause polarization

We observed that polarization occurred in the population regardless of \(\gamma\), \(h_G\), and \(\epsilon\)—the parameters for selective exposure, echo-chambers, and fragmentation, respectively. This runs contrary to arguments in favor of these phenomena being the cause of polarization (Karlsen et al. 2020; Arendt et al. 2019; Cardenal et al. 2019; Messing and Westwood 2014; Iyengar and Hahn 2009).

Table 4 Percentage of polarized/nonpolarized results (over the 72 experiments in each cell) broken down by selective exposure (\(\gamma\)), presence of echo-chambers (\(h_G\)), and fragmentation (\(\epsilon\))

Table 4 shows what percentage of simulations polarized, holding constant combinations of \(\epsilon\), \(\gamma\), and \(h_G\). Each cell contains the percent of simulation results that polarized, keeping those parameter values constant while letting others vary. The results show little effect from changes in \(\epsilon\) and \(h_G\), but a noticeable effect as \(\gamma\) changes: as it increases (agents are more likely to believe messages dissimilar to their prior belief), polarization decreases. Logistic regression on \(\epsilon\), \(\gamma\), and \(h_G\) to predict the polarization outcome confirmed these observations. Regression results showed only \(\gamma\) significantly contributing to polarization outcomes (one unit increase decreased odds of polarization by 69%, 95% CI [54%, 86%], \(p=0.001\)), with \(\epsilon\) and \(h_G\) failing to reach significance (\(p_\epsilon =0.644\), \(p_{h_G}=0.509\)).

Most appeals that adjust to subscriber beliefs fail to polarize

Our model found that in cases where media used tactics of appealing to subscribers, population beliefs were far less likely to polarize. By instead splitting the data by \(\varphi\) and \(\gamma\), we found that polarization results differ significantly based on \(\varphi\). The percent of results that polarized, keeping constant combinations of \(\varphi\) and \(\gamma\), are shown in Table 5.

Table 5 Percentage of polarized/nonpolarized results (over the 216 experiments in each cell) broken down by media tactic (\(\varphi\)) and selective exposure from cognitive dissonance reduction (\(\gamma\))

While \(\gamma\) has an effect on polarization, \(\varphi\) has a stronger one. The broadcast tactic accounted for almost all polarized results, while the mean and median appeal tactics almost never polarized. This further challenges the intuition that appealing to subscribers leads to polarization. Notably, these results include cases where the media producers and citizen population are both polarized, and where fragmentation, echo-chambers, and selective exposure are at their highest.

Focusing on results that did polarize, where \(\varphi =broadcast\), broken down by the initial institutional and citizen belief distributions, reveals more about their effect on polarization. This is displayed in Table 6.

Table 6 Percentage of polarized/nonpolarized results when \(\varphi =broadcast\) (over the 24 experiments in each cell) broken down by selective exposure from dissonance (\(\gamma\)), initial citizen distribution (\({\mathcal {C}})\) and initial institutional distribution (\({\mathcal {I}}\))

When the media tactic is broadcasting, the population polarizes only when \({\mathcal {I}}\) is uniform or polarized. Particularly, there is more polarization when \({\mathcal {I}}\) is more polarized than \({\mathcal {C}}\)—as polarized distributions yield a higher polarization value by Eq. 6 than uniform distributions, and uniform more than normal. Logistic regression on \(\epsilon\), \(\gamma\), \(h_G\), and \({\mathcal {C}}\)—on a subset of data where \(\varphi =broadcast\) and \({\mathcal {I}}={\mathcal {U}}\)—confirmed a significant effect of \(\gamma\) (one unit increase decreased odds of polarization by 34%, 95% CI [20%, 59%], \(p<0.001\)) and nearly significant effect of \({\mathcal {C}}={\mathcal {P}}\) (decreased odds of polarization by 39%, 95% CI [14%, 91%], \(p=0.076\)). Another regression on the same variables, but data where \(\varphi =broadcast\) and \({\mathcal {I}}={\mathcal {P}}\) showed only a nearly significant effect of \(\gamma\) (one unit increase decreased odds of polarization by 57%; 95% CI [33%, 97%], \(p=0.039\)).

Robustness to network size

We ran additional experiments to confirm that our results were not artifacts of a small population size (\(N=100\)) or a high ratio of institutional to citizen agents (\(\frac{|I|}{|V|}=0.15\)), and to check that a larger N would not lead to emergent effects absent at smaller N. To that end, we simulated the same parameter combinations, but with \(N=1000\). As this made the simulation more computationally expensive, we restricted the number of simulation runs and separate random graph trials for each parameter combination to 1 each. Results may be less accurate because of the limited number of trials per parameter combination.

From this experiment, we observe that results stay mostly consistent with those from experiments with a smaller N. There appears to be an effect of \(\gamma\): as it increases, polarization decreases, though this does not hold for all values of \(\epsilon\) as strongly as in our previous experiments. Moreover, the effect of echo-chambers through \(h_G\) seems to be minimal, if present at all (Table 7).

Table 7 Percentage of polarized/nonpolarized results (over the 18 experiments in each cell) broken down by selective exposure (\(\gamma\)), echo-chambers (\(h_G\)), and fragmentation (\(\epsilon\))

Interestingly, when broken down by tactic and initial distribution, results mostly follow the same pattern as previous experiments, but polarize more often. When \(\varphi\) appeals to the mean subscriber belief, the population does polarize in some cases, particularly more often than in previous results when \({\mathcal {I}}={\mathcal {P}}\). The broadcast tactic, however, still polarizes more reliably. Again, since this experiment had no repetitions per parameter combination, it is difficult to say with confidence how robust these results are; they nonetheless suggest that more research could confirm any effects of larger populations (Table 8).

Table 8 Percentage of polarized/nonpolarized results (over 18 experiments in each cell) broken down by media tactic (\(\varphi\)) and initial belief distributions (\({\mathcal {C}}\) and \({\mathcal {I}}\))

Dynamic media model results

Now, we turn to results for our dynamic model, where citizen agents were free to disconnect from both institutions and other citizens who send too many disconfirming messages, and free to form new connections with more sympathetic institutions and citizens. These results thus speak more directly to the dynamics of polarization as typically stated.

Extreme selectivity rarely polarizes

To first investigate the effect of the selectivity parameters on polarization results, we partitioned our results into sets based on combinations of the parameters \(\zeta _c\), \(\zeta _i\) and \(\gamma\). Each parameter combination (with the exception of combinations with \(\gamma =0\) and \(\zeta _i\) or \(\zeta _c=0.75\), explained below) spanned 1350 results. We display these results, broken down into polarizing/depolarizing/remained polarized/remained nonpolarized in Table 9. Results for \(\gamma =0\) and either \(\zeta _i\) or \(\zeta _c = 0.75\) were not included because at this threshold, no agents can make connections, as the highest \(\beta\) value for a message distance of 0 is 0.5, thus the maximum average belief value calculated by \(X(u,\phi _{u,i})\) would be 0.5. If X must be greater than or equal to either \(\zeta\), then this is impossible under these parameterizations.

Table 9 Percentage of polarized/depolarized/remained polarized / remained nonpolarized results (over the 1350 experiments in each cell) broken down by selection criteria \(\gamma\), \(\zeta _c\), and \(\zeta _i\)

Our results show that, notably, when \(\gamma =0\), there are almost no cases of polarization or depolarization as compared to results that started and remained polarized or nonpolarized. In these runs, citizen agents very quickly formed tight echo-chambers and high fragmentation, limiting the messages they receive to only those matching their beliefs, and freezing the initial belief distribution for the duration of the simulation.

However, when citizen agents have higher \(\gamma\) values of 1 or 2, opinion more frequently polarizes and depolarizes. There appears to be no effect of \(\gamma\) increasing from 1 to 2 on polarization, but a slight effect on depolarization: as \(\gamma\) increases, depolarization increases.

There does seem to also be an effect of \(\zeta _i\) on polarization, but not so much an effect of \(\zeta _c\), nor an interaction between \(\zeta _i\) and \(\zeta _c\) or either \(\zeta\) with \(\gamma\). As \(\zeta _i\) increases, it appears that polarization and depolarization are less likely, with more results maintaining initial levels of polarization.

Appeals rarely polarize while broadcasting does

The dynamic model results further match those from the static model when breaking down results by media tactic and initial distribution of citizen and institutional agents. Results across these dimensions, specifically for polarizing results from the previous analysis (\(\gamma =1\) and 2), are displayed in Table 10.

Table 10 Percentage of polarized/depolarized/remained polarized / remained nonpolarized results (over the 1650 experiments in each cell) broken down by media tactic (\(\varphi\)) and initial belief distributions (\({\mathcal {C}}\) and \({\mathcal {I}}\))

Again, appeals to the mean of subscribers overwhelmingly fail to polarize, instead either depolarizing, or maintaining initial levels of polarization. Also, similar to the results from the static model, it appears that when institutional initial distributions are more polarized than citizen initial distributions, the population is more likely to polarize. In this vein, notably, when institutional agents were initially normally distributed and broadcasting, no simulation runs polarized.

No effect of memory

We also note that, interestingly, there was no significant effect on polarization results as r, the citizen memory capacity, varied. Results are displayed in Table 11.

Table 11 Percentage of polarized/depolarized/remained polarized / remained nonpolarized results (over the 9898 experiments in each cell) broken down by citizen memory capacity (r)

Most polarized results do not correlate with fragmentation and homophily

While the above results demonstrate the relationship between selection criteria and polarization in the dynamic model, we can further investigate the relationship between the other measures—fragmentation and homophily—and polarization. For each simulation run, we took pairwise correlations of polarization and fragmentation, and polarization and homophily. Resulting values would show us if polarization tended to rise and fall in harmony with fragmentation and homophily. A plot of correlation values is shown in Fig. 2.
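These per-run values are ordinary Pearson correlations between the polarization time series and each of the fragmentation and homophily time series; a minimal sketch:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length time series, e.g.
    one run's per-step polarization and fragmentation measures."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

A value near 1 means polarization rose and fell together with the other measure over the run; a value near \(-1\) means they moved in opposite directions.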

Fig. 2

A scatter plot of pairwise correlation values, per run, between polarization data and fragmentation on the x-axis, and polarization and homophily on the y-axis. Results are shown for polarizing runs only

Surprisingly, of 1532 runs that polarized, only 262 had a positive correlation between polarization and fragmentation, and only 82 of those 262 had a correlation value greater than 0.5. Conversely, 1196 polarizing runs had a negative correlation of polarization and fragmentation, with 1052 runs having a correlation less than \(-\)0.5.

Correlation tests between polarization and homophily showed that of the 1532 polarizing runs, only 272 had a positive correlation, and only 128 had a value greater than 0.5. The 1260 remaining runs had negative correlation, with 1083 having a correlation below \(-\)0.5.

Results are similar for low availability of media

The high-choice availability thesis hinges on the proliferation of media sources that came with the internet and social media, with scholars and popular commentators often portraying the pre-internet political world as free of such rampant polarization (Sunstein 2001; Guess et al. 2018). The related hypothesis, that a low-choice media ecosystem would not yield such polarization, is testable within the dynamic media ecosystem model.

We tested this related hypothesis by running the same battery of parameter combinations as in the previous experiments, but limiting I to only 2 or 3 institutional agents, “drawn” from approximately normal and polarized distributions. We call these distributions, respectively, \({\mathcal {N}}(2)\) (two institutional agents with initial beliefs 2 and 4), \({\mathcal {N}}(3)\) (three agents with initial beliefs 2, 3, and 4), \({\mathcal {P}}(2)\) (two agents with beliefs 0 and 6), and \({\mathcal {P}}(3)\) (three agents with beliefs 0, 3, and 6). Results broken down by selection criteria parameters are displayed in Table 12, and broken down by tactic and initial distributions (specifically for cases that polarized, where \(\gamma =1\) or 2) in Table 13.
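For concreteness, the four low-choice institutional configurations described above can be written out directly. The names mirror the paper's notation; the Python representation is only an illustration.

```python
# Low-choice institutional agent configurations: approximately normal (N)
# and polarized (P) initial belief distributions on the 0-6 belief scale.
LOW_CHOICE_DISTS = {
    "N(2)": [2, 4],     # two agents near the middle of the scale
    "N(3)": [2, 3, 4],  # three agents clustered around the center
    "P(2)": [0, 6],     # two agents at the extremes
    "P(3)": [0, 3, 6],  # two extreme agents plus one centrist
}
```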

Table 12 Percentage of polarized/depolarized/remained polarized / remained nonpolarized results (over the 1800 experiments in each cell) broken down by selection criteria \(\gamma\), \(\zeta _c\), and \(\zeta _i\)

These results are very similar to those when the model has a high availability of institutional agents. When \(\gamma =0\), the model only maintains its initial level of polarization, neither polarizing nor depolarizing. Notably, though, with a low number of institutional agents there appears to be a stronger effect as \(\gamma\) increases: more simulation trials both polarize and depolarize. There also appears to be a stronger effect of \(\zeta _c\) on results: as it increases, the number of polarizations and depolarizations decreases.

Table 13 Percentage of polarized/depolarized/remained polarized / remained nonpolarized results (over the 1650 experiments in each cell), for \(\gamma =1\) and 2, broken down by media tactic (\(\varphi\)) and initial belief distributions (\({\mathcal {C}}\) and \({\mathcal {I}}\))

Moreover, the patterns of polarization surrounding institutional tactic and initial citizen and institutional distributions are similar to those of the high availability experiments. Appealing to the mean of subscribers still fails to polarize in almost all cases, yet with notably more success when \(|I|=3\). Interestingly, more trials polarize under the \({\mathcal {N}}(3)\) condition than in the high availability experiments, in which almost no trials with \({\mathcal {I}}={\mathcal {N}}\) polarized.


Discussion

Both the static and dynamic media ecosystem models yielded results that challenge the popular polarization theses, even as some results from the dynamic model confirm parts of their logic. Despite all the simplifications and assumptions the model is necessarily endowed with, our experiments provide new insights into the logic of polarization and shed light on some of its underlying assumptions.

Polarization and diversity of exposure

Both models demonstrated that polarization is much more likely to occur under conditions where agents can be exposed to messages that differ from their prior beliefs. In the dynamic model, when \(\gamma =0\), the population rarely polarized, but rather maintained its initial level of polarization, because agents quickly formed strict echo-chambers and fragmentation that essentially froze the population distribution in place. To change the belief distribution of the population, agents had to have a reasonable chance of believing messages at least one belief value away from their own. Under the static model, agents were continually exposed to more diverse messages because their connections did not change over time. Thus, even when \(\gamma =0\) and agents had very low chances of believing messages that did not match their prior beliefs, the population belief distribution changed (albeit slowly).

Another very surprising result from analysis of the dynamic model was the relationship between polarization, fragmentation, and echo-chambers (measured through homophily). Only a small number of simulation trials which polarized saw either fragmentation or homophily positively correlate with polarization. Even fewer had a relatively strong positive correlation. This contradicts the logic of the polarization thesis which argues that polarization is the result of high fragmentation and echo-chambers (Guess et al. 2018). As is the case with the relationship between selection criteria and polarization results (that being too selective through \(\gamma\) or \(\zeta _i\) led to less polarization), it seems that our measures of fragmentation and homophily (which result from the selectivity of agents) show the same pattern.

When discussing the process of opinion polarization, it is necessary to be more specific about what is meant by “selective exposure.” How selective must people be in order to contribute to opinion polarization? Our results indicate that a high level of selectivity lowers polarization. Instead, some tolerance for media messaging that does not purely agree with what someone already believes seems to aid the polarization process.

The role of polarized media

Another takeaway from this work is that there is a consistent pattern of which media conditions lead to polarization and which do not. In both the static and dynamic models, appeals to subscribers fail to polarize in almost all cases, and instead depolarize the population. In contrast, media producers that statically broadcast their take on a proposition polarize the population, but only when their belief distribution is more polarized than the population's. The final polarization levels of the population mirror those of the media producers when they are committed to broadcasting.

This idea runs contrary to argumentation that media appealing to echo-chambers has led both the media and the population to become polarized (Webster and Ksiazek 2012; Iyengar et al. 2012; Stroud 2011). This may be an artifact of how our polarized citizen distributions are generated: they may not be polarized enough to produce the phenomenon theorized in media commentary. But again, this raises the question of how polarized a population needs to be for media appeals to subscribers to polarize it further. What is the typical opinion distribution of a nonpolarized population before media polarizes opinion?

We also find from the dynamic model that low availability of media yielded very similar results to models with high availability. Again, the strongest driver of polarization in these models was statically broadcasting media that were more polarized than the citizens. The polarization arguments often contrast the polarization we see today with the historical lack of polarization when there were fewer media sources (Sunstein 2001). In our low availability model, the population remained nonpolarized when the media were not polarized or were appealing to subscribers. Our results agree with scholars who complicate the typical story, arguing that U.S. society polarized because media and elites became more politically polarized, not just because of the proliferation of news on the internet and social media (Benkler et al. 2018).

In sum, our findings hint at shifting the causal onus of polarization from individuals onto media institutions themselves. As it stands, the arguments blaming polarization on selective exposure and echo-chambers are rooted in the biased nature of the individual, who selects information from disinterested media producers. There is no consideration of what shape the high-availability media ecosystem must take to polarize opinion, only that its high availability leads “flawed” individuals to contribute to polarization of the population. Even the degree to which media producers are polarized is put on the shoulders of individuals; most arguments hold that extreme partisans shape media extremism, as if media organizations have no autonomy of their own. The outsize influence that media producer tactics and belief distributions have in our model may encourage media researchers to focus more on the role that media organizations play in the polarization process, rather than solely on cognitively biased individuals.

Refining the dominant polarization logic

In total, our results call into question the popular way that polarization is conceived. They raise critical questions about the scope and bounds of the thesis: under what conditions does it hold true, and how does that affect its causal logic? It appears that it is not enough to simply say that high media availability, combined with selective exposure pressures from individuals (confirmation bias and subsequent subscribing to confirmatory sources and unsubscribing from disconfirming sources), has caused polarization. There are caveats and enabling conditions that must be present.

On the one hand, within our model, the dominant polarization logic is confirmed in some ways. Our dynamic model did show that very strict selection criteria (\(\gamma =0\)) led to the formation of echo-chambers and fragmented audiences. It also showed that a high-choice, dynamic media ecosystem can lead to polarization of the population. But the devil is in the details. Even though very strictly selective agents formed tight echo-chambers and fragmented audiences, they overwhelmingly failed to polarize. In most cases, polarization was not simply due to high media availability or the selectivity of agents; other criteria were necessary for polarization to arise.

Specifically, our model constrains the logic of polarization in two ways: for the population to polarize, (1) agents must be exposed to, and have a reasonable chance of believing, messages encoding beliefs different from their own; and (2) the media ecosystem must expose them to a distribution of messages that is more polarized than the population's, and that some people are willing to believe. Even though our model brought about these conditions through a specific implementation, it is plausible that different mechanisms could lead to the same conditions. The generality of these conditions lets them speak to the thesis at its most general level. Unless people are capable of changing their minds, they can never go from being part of a nonpolarized population to being part of a polarized one. Without exposure to a distribution of messages more polarized than the current opinion distribution, the population will never grow more polarized, because polarizing messages do not exist. And if those polarizing messages are not believed by anyone, opinion will never change.

Limitations and future work

Perhaps the greatest limitation of this work is that our analysis could not simulate very large networks, use finer-grained values for the experimentally manipulated parameters, or run a larger number of simulation trials. These constraints were due to the high computational cost of such simulations, and they limited the scope of our analysis. In future work, we plan to overcome these limitations by improving the efficiency of our implementation and seeking more powerful computational resources. Moreover, a principal contribution of our work is the software used to run the models, which we have released as open source. With this software, others can test further variants and interesting parameter choices. We note that the dynamic media ecosystem model is highly expressive, and will allow many different types of experiments with sophisticated belief models to be explored.

Our model is also limited in that it does not capture all processes related to political opinion formation, nor all the dynamics of the media ecosystem. There are more complex dynamics that could be explored in terms of agent cognitive models of belief, connection and disconnection processes between citizen agents and between citizen and media agents, media messaging tactics, and more. Within these areas, a few notable next steps stand out as feasible for future work.

More complex agent cognitive models

One simple extension, which some other scholars have taken up in different models, is to model multiple agent beliefs that stand in relationship to each other. For example, Friedkin et al. (2016) explored opinion diffusion dynamics under logical constraints on belief. This and other similar, salient dimensions of relations between beliefs (ideological, identity characteristics, logical coherence) could be explored.

Moreover, as in Sunstein (1999)’s conception of why echo-chambers polarize, a logical next step for this work would be to explicitly model group polarization dynamics and see if aggregate results change.

Different sharing behavior

It also may not be that polarization is occurring simply because of selective exposure, but because media platforms are designed to serve content that outrages individuals, and a different model would show different conditions leading to polarization. Our model only allows citizen agents to share messages that they agree with and believe, but individuals undoubtedly share messages that they do not agree with, including those that make them upset. A model allowing citizens to share messages they do not agree with may result in different patterns of polarization.

More complex models of connection

The model could be extended to include robust notions of trust, both between citizens and between citizens and institutional agents. While our cognitive model, with its simple selection criteria based on cognitive dissonance reduction, is a step in this direction, there is an expanding literature on media trust that could motivate even more nuanced models of connection (Fawzi et al. 2021; Metzger et al. 2020; Strömbäck et al. 2020; Tsfati and Cappella 2003). Trust, which need not correlate with the cognitive dissonance from message evaluations, could be defined as a scalar \(\tau\) that is held differentially for different agents, affects the belief process \(\beta\), and is updated based on other message features such as entertainment potential, appeals to identity or psychology, and more.
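A rough sketch of this proposed trust extension follows. The function names, the belief-probability form, and the update rule are all illustrative assumptions, not part of the model as implemented.

```python
def believe_prob(belief_u, message, tau):
    """Chance that agent u believes a message with the given belief value,
    scaled by u's trust tau in the sender (beliefs on the 0-6 scale)."""
    base = 1.0 / (1.0 + abs(belief_u - message))  # nearer messages are more believable
    return max(0.0, min(1.0, tau * base))

def update_trust(tau, dissonance, appeal, lr=0.1):
    """Nudge trust down with cognitive dissonance and up with a message's
    appeal (entertainment, identity, etc.), clamped to [0, 1]."""
    return max(0.0, min(1.0, tau + lr * (appeal - dissonance)))
```

The key design point is that `tau` is per-pair rather than global, so an agent can distrust one institutional agent while fully trusting another that broadcasts the same belief.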

Another extension could make connection evaluations conditional on a notion of topics. Topics \(Q \in {\mathcal {Q}}\) could be aggregations of propositions \(B \in {\mathcal {B}}\). There is evidence in the media studies literature that trust, a notion related to criteria for subscribing to media, may be conditional on topic (e.g. one may trust the Wall Street Journal for business news but not Covid-19 news) (Fawzi et al. 2021; Metzger et al. 2020). A topic Q could then condition an evaluation such that \(X(u,\phi _{u,v}\ |\ Q)\) considers only propositions \(B \in Q\).
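A minimal sketch of such a topic-conditioned evaluation, assuming beliefs and messages are represented as mappings from proposition names to belief values; the representation and the distance measure are our assumptions, not the model's.

```python
def evaluate_conditional(u_beliefs, message, topic):
    """Evaluate a message against agent u's beliefs, restricted to the
    propositions B in topic Q. Returns the mean absolute belief distance
    over shared propositions, or None if the message is off-topic."""
    shared = topic & set(message) & set(u_beliefs)
    if not shared:
        return None
    return sum(abs(u_beliefs[b] - message[b]) for b in shared) / len(shared)

# Hypothetical agent and message, scored only on the "covid" topic
beliefs = {"covid": 5, "economy": 2}
message = {"covid": 3, "economy": 2}
dist = evaluate_conditional(beliefs, message, {"covid"})  # → 2.0
```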


Conclusion

Given the prevalence of political ideological polarization, as well as discussion of it, this work examined the dominant logic behind political polarization: that biased individuals selectively expose themselves to content that agrees with their prior beliefs, generating fragmented audiences and echo-chambers, which in turn lead to polarized populations, whether through biased deliberation or through media catering to subscribers in echo-chambers and pushing them toward more extreme views. By extending a previously developed opinion diffusion model that allows for individual agent cognitive models, we tested these dynamics theoretically in simulation. We modeled salient features of the media ecosystem (a large number of media agents spanning multiple beliefs, media messaging tactics, and citizens' abilities and tolerances for connecting and disconnecting with other citizens and media) and simulated dynamics of opinion formation under different conditions. Our results indicated that less biased individuals led to more polarization, that less fragmented audiences and fewer echo-chambers correlated with higher polarization, and that polarization depended on a polarized media ecosystem that statically broadcasts its views rather than appealing to subscribers' beliefs. These results challenge the dominant polarization logic, suggesting that its dynamics hold only under a certain set of enabling conditions: (1) the distribution of media belief must be more polarized than that of the population; (2) the population must be at least somewhat persuadable to change their beliefs according to new messages they hear; and finally, (3) the media must statically continue to broadcast more polarized messages rather than, say, adjust to appeal more to the beliefs of their current subscribers. This shifts the focus away from cognitively “flawed” individuals, and toward the polarizing behavior of media institutions.

Availability of data and materials

The datasets generated and/or analysed during the current study are available in the GitHub repository,


References

  • Arendt F, Northup T, Camaj L (2019) Selective exposure and news media brands: implicit and explicit attitudes as predictors of news choice. Media Psychol 22(3):526–543

  • Barabási A-L, Albert R (1999) Emergence of scaling in random networks. Science 286(5439):509–512

  • Benkler Y, Faris R, Roberts H (2018) Network propaganda: manipulation, disinformation, and radicalization in American politics. Oxford University Press, New York

  • Cardenal AS, Aguilar-Paredes C, Galais C, Pérez-Montoro M (2019) Digital technologies and selective exposure: how choice and filter bubbles shape news media exposure. Int J Press/Polit 24(4):465–486

  • Christakis NA, Fowler JH (2013) Social contagion theory: examining dynamic social networks and human behavior. Stat Med 32(4):556–577

  • Dandekar P, Goel A, Lee DT (2013) Biased assimilation, homophily, and the dynamics of polarization. Proc Natl Acad Sci 110(15):5791–5796

  • Del Vicario M, Bessi A, Zollo F, Petroni F, Scala A, Caldarelli G, Stanley HE, Quattrociocchi W (2016) The spreading of misinformation online. Proc Natl Acad Sci 113(3):554–559

  • DellaPosta D, Shi Y, Macy M (2015) Why do liberals drink lattes? Am J Sociol 120(5):1473–1511

  • Donsbach W, Mothes C (2013) The dissonant self: contributions from dissonance theory to a new agenda for studying political communication. Ann Int Commun Assoc 36(1):3–44

  • Erdős P, Rényi A (1960) On the evolution of random graphs. Publ Math Inst Hung Acad Sci 5(1):17–60

  • Fawzi N, Steindl N, Obermaier M, Prochazka F, Arlt D, Blöbaum B, Dohle M, Engelke KM, Hanitzsch T, Jackob N, Jakobs I, Klawier T, Post S, Reinemann C, Schweiger W, Ziegele M (2021) Concepts, causes and consequences of trust in news media: a literature review and framework. Ann Int Commun Assoc

  • Flynn DJ, Nyhan B, Reifler J (2017) The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit Psychol 38:127–150

  • Friedkin NE, Proskurnikov AV, Tempo R, Parsegov SE (2016) Network science on belief system dynamics under logic constraints. Science 354(6310):321–326

  • Goldberg A, Stein SK (2018) Beyond social contagion: associative diffusion and the emergence of cultural variation. Am Sociol Rev 83(5):897–932

  • Guess A, Nyhan B, Lyons B, Reifler J (2018) Avoiding the echo chamber about echo chambers. Knight Found 2:1–25

  • Iyengar S, Hahn KS (2009) Red media, blue media: evidence of ideological selectivity in media use. J Commun 59(1):19–39

  • Iyengar S, Sood G, Lelkes Y (2012) Affect, not ideology: a social identity perspective on polarization. Public Opin Q 76(3):405–431

  • Jost JT, Federico CM, Napier JL (2009) Political ideology: its structure, functions, and elective affinities. Annu Rev Psychol 60:307–337

  • Karlsen R, Beyer A, Steen-Johnsen K (2020) Do high-choice media environments facilitate news avoidance? A longitudinal study 1997–2016. J Broadcast Electron Media 64(5):794–814

  • Kim M, Leskovec J (2011) Modeling social networks with node attributes using the multiplicative attribute graph model. arXiv:1106.5053

  • Knobloch-Westerwick S (2014) Choice and preference in media use: advances in selective exposure theory and research. Routledge, New York

  • Leskovec J, Faloutsos C (2007) Scalable modeling of real graphs using Kronecker multiplication. In: Proceedings of the 24th international conference on machine learning, pp 497–504

  • Li K, Liang H, Kou G, Dong Y (2020) Opinion dynamics model based on the cognitive dissonance: an agent-based simulation. Inform Fusion 56:1–14

  • Marwick AE (2018) Why do people share fake news? A sociotechnical model of media effects. Georgetown Law Technol Rev 2(2):474–512

  • Messing S, Westwood SJ (2014) Selective exposure in the age of social media: endorsements trump partisan source affiliation when selecting news online. Commun Res 41(8):1042–1063

  • Metzger MJ, Hartsell EH, Flanagin AJ (2020) Cognitive dissonance or credibility? A comparison of two theoretical explanations for selective exposure to partisan news. Commun Res 47(1):3–28

  • Musco C, Musco C, Tsourakakis CE (2018) Minimizing polarization and disagreement in social networks. In: Proceedings of the 2018 World Wide Web conference, pp 369–378

  • Negroponte N (1995) Being digital. Alfred A. Knopf, New York

  • Rabb N, Cowen L, de Ruiter JP, Scheutz M (2022) Cognitive cascades: how to model (and potentially counter) the spread of fake news. PLoS ONE

  • Shehata A, Strömbäck J (2021) Learning political news from social media: network media logic and current affairs news learning in a high-choice media environment. Commun Res 48(1):125–147

  • Sikder O, Smith RE, Vivo P, Livan G (2020) A minimalistic model of bias, polarization and misinformation in social networks. Sci Rep 10(1):1–11

  • Strömbäck J, Tsfati Y, Boomgaarden H, Damstra A, Lindgren E, Vliegenthart R, Lindholm T (2020) News media trust and its impact on media use: toward a framework for future research. Ann Int Commun Assoc

  • Stroud NJ (2011) Niche news: the politics of news choice. Oxford University Press, New York

  • Sunstein CR (2001) Princeton University Press, Princeton

  • Sunstein CR (1999) The law of group polarization. John M. Olin Program in Law & Economics Working Paper No. 91

  • Tsfati Y, Cappella JN (2003) Do people watch what they do not trust? Exploring the association between news media skepticism and exposure

  • Watts DJ, Strogatz SH (1998) Collective dynamics of ‘small-world’ networks. Nature 393(6684):440–442

  • Webster JG, Ksiazek TB (2012) The dynamics of audience fragmentation: public attention in an age of digital media. J Commun 62(1):39–56

  • Wilensky U (1999) NetLogo. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston



Acknowledgements

LC and NR thank NSF 1934553 (Tufts T-Tripods institute) for support. NR also thanks the Tufts Data Intensive Studies Center (DISC) and NSF-NRT 2021874 for additional support. We also thank our anonymous referee for helpful comments that improved the paper.

Author information

Authors and Affiliations



Conceptualized the work: NR and LC; performed the simulations: NR; analyzed the data: NR, LC, and JPD. All authors helped write and review the paper.

Corresponding author

Correspondence to Nicholas Rabb.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Rabb, N., Cowen, L. & de Ruiter, J.P. Investigating the effect of selective exposure, audience fragmentation, and echo-chambers on polarization in dynamic media ecosystems. Appl Netw Sci 8, 78 (2023).
