Extremism is now one of the most pressing public safety threats in the United States. So far in 2020, over 90% of terrorist attacks and plots have originated under the umbrella of right-wing extremism.
This surge is due in large part to social media, which extremists rely on to circulate their narratives and radicalize vulnerable communities. Removing this content has been an uphill battle for mainstream platforms like Facebook, and more extremists are migrating to less-regulated social platforms like 4chan and Telegram to avoid censorship.
How are extremists influencing and radicalizing people online? And how can monitoring public social media activity help interrupt radicalization processes, support vulnerable individuals, and ultimately neutralize related national security threats?
How Are People Radicalized Online?
There are likely a number of factors behind a person's radicalization, including past trauma, personal crises, and even being a victim of online hate themselves. Regardless of the underlying factors, radicalization typically begins with a grievance (towards a government or racial group, for example), which escalates into ideological curiosity, then sympathy, and finally active participation.
A collection of hashtags on Gab associated with spreading COVID-19 disinformation, appealing to grievances associated with the pandemic—discovered using Echosec
Isolated young men, individuals facing unemployment, or disenfranchised military personnel are all examples of vulnerable groups who may be susceptible to ideologies built around anti-government sentiment (e.g. the Boogaloo), misogyny (e.g. incel extremism), and racism (e.g. white supremacism). Extremists understand the appeal of finding an in-group with similar grievances, and have sophisticated methods of engineering someone down that path.
On social media, this is often initiated through memes, hashtags, disinformation, or hate speech that appeals to a category of grievances. Once a vulnerable individual's attention is captured, engagement might escalate into "shitposting" (posting deliberately provocative or ironic content), consuming extremist literature, and joining fringe social networks where grievances and extremist ideas are discussed more openly.
These platforms are where curiosity often escalates into active participation: making monetary contributions to a cause, idolizing terrorist "martyrs," gamifying or fantasizing about physical violence, and posting manifestos and other forms of leakage. Terrorist acts also have a ripple effect on the spread of online radicalization and extremism, causing hate speech to increase significantly in the days and even months following an attack.
Boogaloo-related meme on 4chan appealing to anti-government and second amendment grievances—discovered using Echosec
Where Does Online Radicalization Happen?
The internet plays a key role in violent extremism, allowing movements to quickly and easily reach vulnerable audiences. And yet, as the UK's All-Party Parliamentary Group (APPG) wrote in 2019, "there has not been a quick enough realisation of the links between online attacks and 'real-world' incidents." Part of understanding these links, and predicting or de-escalating imminent public safety threats, is knowing where to look.
This is harder than it sounds. First, while radicalization happens at different levels on a variety of mainstream and less-regulated platforms, moderation efforts and network shutdowns mean that users are constantly migrating between sites.
Second, extremist movements aren't necessarily centralized around a website, social media page, leader, or defined dogma. Instead, they increasingly manifest as loosely organized online movements active across multiple platforms and audiences. They also tend to use coded language and dog-whistle communication, which can be more difficult to detect and understand.
“Please keep ignorant and make your stale memes while The Base actually get shit done.” The Base is a militant right-wing extremist group—discovered using Echosec
It makes sense that extremist movements would use the popularity of mainstream social media networks to reach vulnerable individuals susceptible to radicalization. These include sites like Facebook, Twitter, YouTube, and Reddit—and indeed, these platforms have seen usage spikes for extremist memes, hashtags, and other co-option techniques related to movements like the Boogaloo in recent months.
There’s also an abundance of radicalization on social networks with little to no content moderation. These include imageboards like 8kun, Endchan, and 4chan (the /k/ and /pol/ boards are particularly active among right-wing extremists), messaging apps like Telegram and Discord, and distributed networks like Mastodon and Gab. These sites play a key role in radicalizing vulnerable groups and providing more explicit content for the intelligence community.
How Social Data Supports Counter Radicalization
Monitoring radicalization and extremism online is not as simple as identifying "bad guys" and reprimanding individuals. Beyond identifying high-risk situations, the goal is to understand how movements communicate and operate as a whole, and to investigate trends in radicalization. This information is crucial for establishing methods of de-radicalization and support for vulnerable communities, which are required alongside other strategies like content moderation.
It's also important to note that monitoring public social data must be done ethically and in compliance with data protection regulations and providers' terms of service, which can prohibit broad monitoring by the public sector.
Monitoring social networks for radicalized content is useful for answering questions like:
- Who is vulnerable to radicalization, and how are these communities being targeted?
- Where is radicalization occurring online?
- How do radicalization trends change after a public crisis, terror attack, or other significant event?
- When does radicalized activity online become high-risk?
- What is the best way to communicate with vulnerable individuals or approach de-radicalization initiatives?
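One of these questions, how radicalized activity changes after a significant event, can be approached with simple time-series baselines on post volume. The sketch below is illustrative only; the counts, window, and threshold are hypothetical, not any vendor's actual method. It flags days where the volume of flagged posts jumps well above a trailing average:

```python
def spike_days(daily_counts, window=7, threshold=2.0):
    """Flag day indices where volume exceeds `threshold` times the
    trailing `window`-day average (a naive spike detector)."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if baseline > 0 and daily_counts[i] > threshold * baseline:
            spikes.append(i)
    return spikes

# Hypothetical daily counts of flagged posts; the jump on day 9
# might follow a terror attack or public crisis.
counts = [10, 12, 9, 11, 10, 13, 11, 12, 10, 48, 40, 15]
print(spike_days(counts))  # [9, 10]
```

In practice an analyst would tune the window and threshold, and segment counts by platform or movement rather than using a single global series.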
There are other benefits to tackling counter-radicalization initiatives from an online perspective. In an article for The Guardian, Ross Frenett, co-founder of the tech startup Moonshot, says, “our level of confidence when identifying individuals who are vulnerable to radicalisation is way higher online than it could ever be offline. And it sidesteps some of the discriminatory, stigmatising practices we’ve seen in an offline setting.”
Moonshot is one of several organizations now using technology and open-source online data to disrupt violent extremism. Public social media content is valuable for understanding how extremists target and radicalize vulnerable individuals—but this process can’t be done accurately or efficiently by manually navigating networks.
Specialized software that ethically aggregates, filters, and contextualizes public information across mainstream sources like Twitter, and especially across fringe networks like Gab, 4chan, and Telegram, is crucial for understanding online radicalization.
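As a minimal illustration of what aggregation and filtering might look like (generic placeholder terms and hypothetical posts; not Echosec's actual implementation), the sketch below normalizes posts from multiple sources into one schema, flags those matching an analyst-curated watchlist, and tallies where flagged activity clusters:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    source: str  # e.g. "twitter", "gab", "4chan"
    text: str

# Hypothetical placeholder terms; real watchlists are curated by
# analysts and updated as coded language evolves.
WATCHLIST = {"termA", "termB"}

def flag_posts(posts, watchlist):
    """Return posts whose text contains any watchlist term (case-insensitive)."""
    terms = {t.lower() for t in watchlist}
    return [p for p in posts if any(t in p.text.lower() for t in terms)]

def counts_by_source(flagged):
    """Aggregate flagged posts per source to show where activity clusters."""
    return Counter(p.source for p in flagged)

posts = [
    Post("twitter", "nothing relevant here"),
    Post("gab", "discussion mentioning termA"),
    Post("4chan", "TermB appears in this thread"),
    Post("gab", "another post with termA"),
]

print(counts_by_source(flag_posts(posts, WATCHLIST)))
# Counter({'gab': 2, '4chan': 1})
```

Real systems layer context on top of raw keyword matches (who posted, where, alongside what imagery), since coded language makes simple substring filters both noisy and easy to evade.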
Extremists are incredibly effective at reaching vulnerable audiences with powerful communication techniques. Sophisticated technology can help national security and counter-terror organizations better understand and counter these techniques, and reach suggestible individuals with truth and compassion rather than disinformation and hate.
Many data feeds crucial to monitoring extremism are unavailable through commercial solutions. Contact us to learn about a more streamlined solution.