Ryan McBeth is a YouTuber focused on national security and military topics whom I have started to follow because he seems well informed, grounded, and sensible.
Here he is speaking to the topic of misinformation in a natural disaster, and whether that kind of activity could be weaponized by a foreign power. Unfortunately, as he points out, Americans are already doing such a good job of it ourselves that we are killing our own.
Online discussions are dominated by a surprisingly small, extremely vocal, and non-representative minority. Research on social media has found that, while only 3 % of active accounts are toxic, they produce 33 % of all content [4]. Furthermore, 74 % of all online conflicts are started in just 1 % of communities [5], and 0.1 % of users shared 80 % of fake news [6,7]. Not only does this extreme minority stir discontent, spread misinformation, and spark outrage online, it also biases the meta-perceptions of most users who passively “lurk” online. This can lead to false polarization and pluralistic ignorance, which are linked to a number of problems including drug and alcohol use [8], intergroup hostility [9,10], and support for authoritarian regimes [11]. Furthermore, exposure to extreme content can normalize unhealthy and dangerous behavior. For example, teens exposed to extreme content related to alcohol consumption came to see dangerous drinking as normative.
Detecting social norms can be a challenge, as it requires one to attend to the behaviors and opinions of many group members to form a model of how to behave. Thus, rather than encoding and memorizing each individual exemplar of normative behavior and opinion, people form an average representation of a series of exemplars in a group via the process of ensemble encoding [21,22]. Ensemble coding is cognitively efficient, allowing people to encode a single representation of a set of stimuli rather than encoding and memorizing every item [21]. Socially, ensemble coding allows people to form a single estimation of group emotion or opinion rather than individually encoding each person’s reaction [23,24]. Thus, one might gather information about what opinion is normative over repeated interactions with others to form an average representation of a group’s opinion. In this way, people encode social norms from posts and comments in online forums and social media platforms.
While ensemble coding is efficient, it can become distorted online due to the structure of the normative information. False norms emerge, in part, because social media is dominated by a small number of extreme people who post only their most extreme opinions and do so at a very high volume, often posting dozens of times more than others [25,26,27], while more moderate or neutral opinions are practically invisible online. Encountering a disproportionate volume of extreme opinions can lead to false perceptions that the norms are far more extreme than they actually are.
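A toy simulation makes that distortion concrete. This is a minimal sketch, and every number in it is an illustrative assumption rather than a figure from the cited studies: a population of mostly moderate users plus a small extreme minority that posts at much higher volume, with the “perceived norm” computed the way the ensemble-coding account suggests a lurker would compute it, by averaging over posts rather than over people.

```python
import random

random.seed(42)

# Illustrative assumptions, not figures from the cited studies.
# Opinions live on a -1..+1 scale, where 0 is neutral.
N_USERS = 10_000
EXTREME_SHARE = 0.03   # small, extremely vocal minority
POSTS_MODERATE = 1     # moderates post rarely
POSTS_EXTREME = 30     # extremists post at high volume

users = []
for _ in range(N_USERS):
    if random.random() < EXTREME_SHARE:
        users.append((random.uniform(0.8, 1.0), POSTS_EXTREME))  # extreme opinion
    else:
        users.append((random.gauss(0.0, 0.2), POSTS_MODERATE))   # moderate opinion

# True norm: opinion averaged over *people*.
true_norm = sum(op for op, _ in users) / len(users)

# Perceived norm: what a lurker ensemble-averages from their feed,
# i.e. opinion averaged over *posts*, which weights loud users heavily.
posts = [op for op, n in users for _ in range(n)]
perceived_norm = sum(posts) / len(posts)

print(f"true population norm:  {true_norm:+.3f}")
print(f"perceived online norm: {perceived_norm:+.3f}")
```

With these made-up parameters, the average over people sits near zero while the average over posts lands far toward the extreme, so a lurker ensemble-averaging their feed would conclude the group is much more extreme than it actually is.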
In online political discussions, the people who post frequently on social media are often the most ideologically extreme [31,32]. Indeed, 97 % of political posts on Twitter/X come from just the 10 % most active users, meaning that about 90 % of the population’s political opinions are represented by less than 3 % of tweets [33]. This is a marked difference from offline polling data showing that most people are ideologically moderate, uninterested in politics, and avoid political discussions when they are able [34,35,36]. In discussions of the Covid-19 vaccine on Twitter, only 0.35 % of people were in true echo chambers, and yet those users dominated the overall discourse [37]. Similarly, an analysis of 448,103 social media users found that a third of low-credibility posts were shared by just 10 accounts [26] (see also [38]). This renders moderate opinions effectively invisible on social media, leaving the most extreme perspectives the most visible to users.
Juliette Kayyem in The Atlantic:
In past crises, emergency managers at all levels of government have relied on local media for factual information about events on the ground. But the erosion of the local-news industry—the number of newspaper journalists has shrunk by two-thirds since 2005, and local television stations face serious financial pressure—has reduced the supply of reliable reporting.
For a time, the social-media platform formerly known as Twitter provided countervailing benefits: Information moved instantaneously, and by issuing blue checks in advance to authenticated accounts, the platform gave users a way of separating reliable commentators from random internet rumormongers. But under its current owner, Elon Musk, the platform, renamed X, has changed its algorithms, account-verification system, and content-moderation approach in ways that make the platform less reliable in a crisis.
Helene seemed to prove the point. X was awash in claims that stricken communities would be bulldozed, that displaced people would be deprived of their homes, even that shadowy interests are controlling the weather and singling some areas out for harm. The Massachusetts Maritime Academy emergency-management professor Samantha Montano, the author of Disasterology: Dispatches From the Frontlines of the Climate Crisis, declared in a post on X that Helene was “Twitter’s last disaster.”
Disinformation—fast and unreliable—filled a vacuum exacerbated by power outages, bad cell service, and destroyed transportation routes; it then had to be swatted back by legacy media. Local print, television, and radio newsrooms have made a heroic effort in covering Helene and its aftermath. But they, too, are forced to devote some of their energies to debunking the rumors that nonlocals promote on national platforms.
Unfortunately, the unfolding information crisis is likely to get worse. As climate change produces more frequent weather-related disasters, many of them in unexpected places, cynical propagandists will have more opportunities to make mischief. Good sources of information are vulnerable to the very climate disasters they are supposed to monitor. That’s true not just of local media outlets. In an ironic turn, Helene’s path of destruction included the Asheville headquarters of the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information, which tracks climate data, including extreme weather.

I’ll pick a nit on one point there: ‘0.1 % of users shared 80 % of fake news’.
I’d believe it if the verb were ‘originated’ instead of ‘shared’. Misinformation hits a critical mass where it’s being shared by a very large number of people.
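A toy cascade sketch shows why the verb matters. Every parameter here is a made-up assumption, not a measured value: a handful of accounts originate fake items, but once an item catches on, nearly all of the shares come from ordinary users passing it along.

```python
import random

random.seed(1)

# Toy cascade model; every parameter is an illustrative assumption.
RESHARE_PROB = 0.03   # chance an exposed user reshares
AUDIENCE = 50         # users who see each share
MAX_SHARES = 5_000    # stop counting once an item has clearly gone viral

def cascade():
    """One fake item: a single original post, then waves of reshares."""
    reshares, frontier = 0, 1
    while frontier and (1 + reshares) < MAX_SHARES:
        exposed = frontier * AUDIENCE
        frontier = sum(random.random() < RESHARE_PROB for _ in range(exposed))
        reshares += frontier
    return reshares

originated = 10  # ten fake items seeded by "originator" accounts
reshared = sum(cascade() for _ in range(originated))

total = originated + reshared
print(f"shares that originated an item: {originated/total:.2%}")
print(f"shares by everyone else:        {reshared/total:.2%}")
```

With these made-up numbers, the original posts are a vanishing fraction of total shares: a tiny group can plausibly originate most fake news, but the sharing is done by everybody else.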
The study behind that quote seems to be referencing ‘Social media use and risky behaviors in adolescents: a meta-analysis’ and ‘Different digital paths to the keg? How exposure to peers’ alcohol-related social media content influences drinking among male and female first-year college students’, and the paragraph with the 0.1 % quote ends with a comment about teen alcohol usage, all of which seems to be trying to make an apple into an orange.
Here’s another study:
https://news.temple.edu/news/2021-11-09/study-shows-verified-users-are-among-biggest-culprits-when-it-comes-sharing-fake
‘Even worse, 10% of U.S. adults have knowingly shared fake news.’
Key word there: ‘knowingly’. A great many people share misinformation while believing it to be true.