Disrupting Deception: Misinformation, Disinformation, and the Link with Radicalisation
Building a Deterrence-Based Response Across Sectors
Deterrence Center Publication
The rise of misinformation and disinformation in the digital age has become a significant factor in radicalisation, extremism, and social polarisation. False or misleading narratives—whether shared unwittingly or deliberately—can distort perceptions, undermine trust, and expose vulnerable individuals to manipulative ideologies. Understanding these dynamics is essential for preventing harm and building long-term societal resilience.
What Are Mis- and Disinformation, and Why They Matter
Misinformation refers to inaccurate or misleading information shared without malicious intent—such as a viral rumour or misrepresented statistic. Disinformation, by contrast, is false content spread deliberately to deceive, manipulate, or serve a specific agenda (Wardle and Derakhshan, 2017). While distinct, both types of information disorder can feed radicalisation when they reinforce, inter alia, fear, alienation, or perceived injustice (Neumann, 2013).
Misinformation and disinformation both influence perception and behaviour. While disinformation is intentionally created to mislead, misinformation is often shared by individuals who believe it to be true, driven by fear, frustration, or social identity. Although not malicious in intent, misinformation can have deeply harmful effects—especially when it reinforces polarisation, legitimises falsehoods, or erodes trust in democratic institutions.
Wardle and Derakhshan (2017) classify information disorder into three categories:
· Misinformation: false or misleading information shared without intent to cause harm (e.g., a misleading health tip shared by a concerned friend).
· Disinformation: false information created or shared deliberately to deceive or manipulate (e.g., fabricated election results spread to undermine public trust).
· Malinformation: genuine information used to cause harm, often by stripping it of context or sharing it invasively (e.g., doxing, quoting someone out of context to distort their meaning or incite hostility, or leaked school footage used to incite online hate).
Studies by Pennycook and Rand (2018, 2019) suggest individuals often share misinformation due to emotional reasoning rather than critical analysis. Similarly, Van Bavel and Pereira (2018) argue that identity-driven reasoning can override facts—people are more likely to accept and spread information that aligns with their worldview, regardless of accuracy. This blurs the line between personal belief, emotional response, and manipulation. Misinformation is often the by-product of polarised environments, not always the deliberate act of a single deceiver.
Disinformation, on the other hand, is more strategic. It is frequently used by organised actors—states, state-sponsored entities, extremist groups, ideological movements—to destabilise trust, manipulate opinion, provoke emotional reactions, and make communities more conflicted, and therefore more vulnerable. As Pomerantsev (2014) and Bartlett et al. (2021) show, modern campaigns are designed to overwhelm audiences with contradictory content, eroding the very notion of objective truth. In such environments, radicalisation becomes easier to catalyse—because when nothing can be trusted, anything becomes believable.
Disinformation thrives not only on content but on context: it gains traction not just because of the message or narrative, but because of the environment into which it is released. In a society that feels fractured—where trust in government is low, economic hardship is rising, and the social contract feels brittle—manipulative narratives find fertile ground. Following the UK’s Brexit referendum and the COVID-19 pandemic, public scepticism grew—making the population more susceptible to malign narratives. In contrast, a society grounded in shared values like fairness, tolerance, democracy, and mutual respect is far harder to deceive or divide.
In practice, disinformation often becomes misinformation when shared by ordinary individuals unaware of its origin. This ‘contagion’ effect broadens reach, legitimises falsehoods, and accelerates radicalisation through everyday interactions.
How Mis- and Disinformation Fuel Radicalisation
Mis- and disinformation contribute towards radicalisation by influencing emotional, cognitive, and social vulnerabilities. They create simplified, often polarising narratives that frame complex issues in terms of "us vs. them," reinforcing fear, anger, or perceived injustice. Below are examples of how manipulated narratives can be used to advance radicalisation. Each illustrates a common theme in disinformation campaigns.
Immigration and National Identity
“They’re taking our jobs and resources.”
False or decontextualised claims about migrants are often used to stir public resentment—framing newcomers as a threat to economic security or cultural values. This simplifies complex economic issues and encourages scapegoating.
Pandemic Conspiracies
“The government is using lockdowns to control us.”
These distortions frame public health policy as authoritarianism, eroding institutional trust and fuelling radical anti-government sentiment.
Religious or Cultural Polarisation
“[X group] wants to destroy our way of life.”
Extremist rhetoric invokes existential fear to justify exclusion or aggression toward entire faith or ethnic groups.
Electoral Disinformation
“The election was stolen.”
This narrative undermines democratic legitimacy. It has also fuelled civic unrest and been used to justify extreme actions—as seen in the Capitol attack on 6 January 2021.
Climate Change Misinformation
“Climate change is a hoax to make you poorer.”
This oversimplifies global policy challenges and frames environmental justice as elite manipulation.
Social media algorithms amplify these effects by curating content that reinforces users' existing beliefs, creating echo chambers and isolating individuals from alternative viewpoints. Over time, this repeated exposure to manipulative or false information can normalise extreme views and increase the likelihood of individuals embracing radical ideologies or even resorting to violence. These algorithmically shaped environments create informational silos, where the same ideas are encountered again and again—often without users realising that critical context is missing. This is particularly dangerous when users are repeatedly exposed to fake news or disinformation that reinforces radical beliefs and allows extremist ideologies to be absorbed unchecked (Fraser, 2021).
Common mechanisms used in mis- and disinformation include:
· Emotional Conditioning: repeated exposure to emotionally charged falsehoods—especially among young people—can erode critical thinking. Over time, this may normalise extremist narratives or justify violence as a form of self-expression (Horgan, 2014).
· Confirmation Bias: leads individuals to accept content that aligns with their beliefs—even when it is false. This makes them more receptive to narratives that reinforce fear, blame, or perceived injustice (Van Bavel and Pereira, 2018).
· False Legitimacy: extremist groups often use tactics like inflated follower counts or staged testimonials (social proof) to make radical ideas appear mainstream and acceptable (Reicher et al., 2019).
The Geopolitics of Disinformation
Over the years, foreign state actors have been linked to efforts to manipulate public opinion and destabilise democratic societies. The scale of this manipulation has grown significantly, driven by the global reach of online platforms and their intimate, in-home access to users. These disinformation campaigns often aim to amplify polarising narratives around immigration, religion, and national identity (FCDO and Zainuddin, 2024; Winter, 2021; ISD, 2020). By sowing division and undermining social cohesion, these actors make societies more vulnerable to external influence and political manipulation.
By layering disinformation with conspiracy theories and tapping into cultural, political or personal grievances, foreign actors deliberately undermine trust in institutions and foster conditions that make individuals more susceptible to radicalisation, polarisation, and authoritarian influence. This is not just propaganda—it is a form of psychological operations: calculated, strategic manipulation of public perception.
As disinformation spreads, it can quickly evolve into misinformation when forwarded among family members, friends, colleagues, and communities. This avalanche of weaponised information has already threatened peace and security, and undermined democratic processes and state institutions across the globe. What matters now is not proving its existence, but preparing coordinated, cross-sector responses to mitigate its impact — and anticipating and deterring future iterations and manipulations.
Read more: Beyond Espionage: The Radicalising Reach of Foreign State Disinformation.
Why This Matters Now
Digital radicalisation does not start with lethal means or ideology—it often begins with a single manipulated message. In an age of weaponised narratives, emotional vulnerability, and algorithmic amplification, preventing violence means pre-empting harmful distortions.
This is not a call for censorship or surveillance, but for preparedness, partnership, and proportionate, rights-based responses.
Mis- and disinformation campaigns do not just succeed because they are sophisticated—they succeed because they land in fractured, distrustful, or disillusioned societies. Misinformation is not just a public nuisance. In the wrong conditions, it becomes a threat vector. If we are to deter the next wave of extremism, we must understand that the frontline is not just physical—it is emotional, cognitive, and digital.
We must disrupt the hostile manipulation of minds before it escalates to the manipulation of actions. Digital radicalisation is not hypothetical. It is happening now. And how we respond will shape not just public safety—but public trust and democratic resilience.
A Deterrence-Based Toolkit for Countering Mis- and Disinformation
Preventing the radicalising potential of mis- and disinformation requires a coordinated response—across policy, practice, education, technology, and community settings. The following framework provides practical steps for policy-makers, frontline professionals, and institutional leaders.
1. Promote Digital Literacy and Counter-Narratives
Build long-term societal resilience by equipping people with the tools to question, contextualise, and challenge harmful and manipulative content—while fostering more constructive narratives.
· Integrate digital literacy and critical thinking into school curricula, youth work, and lifelong learning programmes.
· Deliver public awareness campaigns—co-designed with trusted media outlets—to help the public recognise and report mis- and disinformation.
· Train educators, youth workers, and frontline professionals to identify and respond to mis- and disinformation effectively.
· Involve young people in creating positive counter-narratives and fund initiatives that explore belonging, truth, and identity in the digital space.
· Offer digital literacy workshops in community settings, for people of all ages, empowering families and local leaders to become active participants in building online resilience.
· Raise awareness of accessible fact-checking tools and promote their use in schools, public services, and community spaces.
2. Strengthen Community-Based Prevention
Work locally to build the awareness, knowledge, and skills needed to detect early signs of radicalisation and polarisation, and to foster environments that reduce vulnerability and isolation.
· Collaborate with community organisations, faith leaders, and local authorities to identify warning signs of online grooming, ideological influence, and radicalisation.
· Develop safeguarding protocols and community pathways that support at-risk individuals through non-criminal interventions.
· Facilitate cross-sector cooperation—bringing together educators, police, youth services, and civil society to share knowledge and coordinate responses.
3. Expand Support for Vulnerable Individuals
Intervene early to support those most exposed to manipulative content—whether due to social isolation, mental health challenges, or identity crises.
· Strengthen outreach and mentorship programmes for young people and marginalised groups at risk of online radicalisation.
· Integrate digital harms into safeguarding frameworks across schools, social care, and youth justice.
· Provide accessible emotional and psychological support, especially for individuals disengaging from harmful online networks.
4. Advance Responsible Tech Partnerships and Accountability
Digital platforms must play a more active role in protecting the information ecosystem. Collaborate with technology platforms and research bodies to develop transparent, rights-based solutions to mitigate digital threats.
· Partner with institutions like the Alan Turing Institute to build AI systems for detecting and analysing disinformation patterns and influence campaigns.
· Encourage platforms to support investigations into foreign interference and share threat data with relevant authorities.
· Establish and promote clear, consistent moderation standards that prioritise credible information—without infringing on free expression.
· Increase platform transparency on algorithmic amplification, content flagging, and takedown procedures.
· Work together to disrupt existing extremist echo chambers.
5. Use AI Thoughtfully for Fact-Checking
Support and guide the responsible use of automation in verifying information and disrupting harmful content flows.
· Invest in the development of AI-powered fact-checking tools and assess their application in UK public and professional contexts.
· Raise public and practitioner awareness of existing tools, ensuring they are accessible, trusted, and clearly explained.
· Ensure human oversight remains central to any automated system—balancing efficiency with ethical and contextual judgment.
6. Rebuild Trust in Public Institutions and Leadership
Mis- and disinformation thrive in environments of distrust, disillusionment, and social fragmentation. Restoring confidence in public life is one of the most long-term but vital forms of prevention.
· Collate and analyse locally gathered data from schools, community organisations, faith leaders, and local authorities on signs of online grooming, radicalisation, and ideological influence linked to mis- and disinformation. Mapping regional and national trends can reveal how information disorders fluctuate, which communities are being targeted, and where vulnerabilities lie—enabling more tailored and pre-emptive deterrence strategies.
· Encourage leadership that is visible, transparent, and rooted in public service values and standards of behaviour—regardless of who is in power.
· Strengthen civic education and public dialogue to reinforce democratic values like fairness, accountability, and inclusion.
· Engage the public in shaping narratives about national identity and future direction—highlighting that British values such as tolerance and the rule of law are key safeguards against manipulation.
· Help people not just to trust institutions, but to trust themselves and each other—to believe they can spot deceit, ask critical questions, and be part of something worth protecting.
· Recognise that adversaries do not need to invent grievances—they only need to amplify what already exists. A resilient society is one that sees its differences not as weaknesses, but as sources of democratic strength.
References
Bartlett, J., 2020. The Extreme Right and Radicalisation. Journal of Radical Politics, 35(2), pp.56–79.
Bartlett, J., Krasodomski-Jones, A. and Rumball, N., 2021. Conspiracy Theories, the Internet, and Threats to Public Safety. London: Institute for Strategic Dialogue.
Conway, M., 2017. Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research. Studies in Conflict & Terrorism, 40(1), pp.77–98.
FCDO and Zainuddin, H., 2024. Disinformation is being weaponised against all of us: UK Statement at the UN Fourth Committee. New York: United Nations. Available at: https://www.gov.uk/government/speeches/disinformation-is-being-weaponised-against-all-of-us-uk-statement-at-the-un-fourth-committee
Fraser, J., 2021. Echo Chambers and Polarisation: The Role of Social Media in Political Radicalisation. Journal of Digital Politics, 18(4), pp.231–245.
Horgan, J., 2014. The Psychology of Terrorism. Psychology Press.
Institute for Strategic Dialogue (ISD), 2020. The Disinformation Landscape in the COVID-19 Era. [online] Available at: https://www.isdglobal.org
Neumann, P.R., 2013. The Trouble with Radicalization. International Affairs, 89(4), pp.873–893.
Pennycook, G. and Rand, D.G., 2018. Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, pp.39–50.
Pennycook, G. and Rand, D.G., 2019. The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Stories Increases Perceived Accuracy of Stories Without Warnings. Management Science, 66(11), pp.4944–4957.
Pomerantsev, P., 2014. Nothing is True and Everything is Possible: The Surreal Heart of the New Russia. Faber & Faber.
Reicher, S.D. et al., 2019. Social Identity and Radicalisation. European Journal of Social Psychology, 48(6), pp.1349–1363.
Shpilkin, A., 2020. Russian Influence in the Brexit Debate: Evidence and Impact. British Politics Review, 23(1), pp.17–32.
Van Bavel, J.J. and Pereira, A., 2018. The Partisan Brain: An Identity-Based Model of Political Belief. Trends in Cognitive Sciences, 22(3), pp.213–224.
Wardle, C. and Derakhshan, H., 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe. Available at: https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c
Winter, C., 2021. The Capitol Riot and the Role of Disinformation. Journal of Terrorism Research, 12(1), pp.45–59.
Winter, P., 2021. Disinformation and the January 6th Capitol Attack. American Political Review, 58(2), pp.345–370.