Harmful Content and Lack of Controls
​
The evidence, as presented during court proceedings, suggested that Axel Rudakubana did not commit his attack to become a martyr, to die for a greater cause, or to bring about religious or political change. Rather, he was driven by a fascination with, and attraction to, violence itself.
​
Variables that have traditionally predicted the presence of violence in a society, such as the absence of central authority, humanitarian revolutions, or war, are not present in the UK. There is, however, a pervasive visibility of violence in the media, especially on social media platforms. Some online spaces offer increasingly graphic and unfiltered portrayals of violence, often accompanied by contextual framing or narratives that are not always accurate. This content is readily accessible to those who actively seek it out and is passively delivered to others through algorithms, echo chambers, and interconnected online communities. According to Yvonne Jewkes (2015), violence is a core component of what makes news ‘newsworthy’, with crime and conflict-related content often generating more engagement and circulation than positive or neutral topics. In digital spaces, violence is not only visible but, in some cases, normalised or even encouraged by a range of actors, whether bots, individuals, or extremist communities operating across the surface, deep, and dark web. Algorithmically curated feeds intensify this exposure by repeatedly serving similar ideas, beliefs, and connections, creating self-reinforcing digital ecosystems that may contribute to desensitisation, polarisation, or radicalisation (O’Callaghan et al., 2015; Bartlett and Miller, 2012).
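To make this feedback dynamic concrete, the toy simulation below (a minimal sketch in Python, not based on any platform’s actual recommender) shows how a feed that weights content by a user’s past engagement can quickly narrow around whatever the user first interacted with. The category labels, weights, and engagement probabilities are invented purely for illustration.

```python
# Toy sketch of an engagement-weighted recommender feedback loop.
# All categories, weights, and probabilities are hypothetical illustrations,
# not a description of any real platform's system.
import random
from collections import Counter

CATEGORIES = ["sport", "music", "news", "violent"]  # hypothetical content labels


def recommend(engagement: Counter, k: int = 5) -> list:
    """Pick k items, weighting each category by past engagement plus a small baseline."""
    weights = [1 + 5 * engagement[c] for c in CATEGORIES]
    return random.choices(CATEGORIES, weights=weights, k=k)


def simulate(initial_interest: str, rounds: int = 20) -> Counter:
    """One early interaction seeds the loop; later feeds favour what is already consumed."""
    engagement = Counter({initial_interest: 1})
    for _ in range(rounds):
        for item in recommend(engagement):
            # Assume the user mostly engages with familiar content, rarely with new content.
            p = 0.8 if engagement[item] else 0.1
            if random.random() < p:
                engagement[item] += 1
    return engagement


if __name__ == "__main__":
    random.seed(1)
    # Engagement concentrates heavily around the seeded category after a few rounds.
    print(simulate("violent"))
```

The point of the sketch is narrow: once early engagement tilts the weights, each round of recommendations makes the next round more likely to serve the same material, which is the self-reinforcing pattern described above.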
​
An individual’s propensity towards violence depends on their environment, personal predispositions, and a range of stimuli. The environment was once understood primarily as the confluence of society, communities, parents, peers, friends, and colleagues. For most young people today, this concept must be expanded to include, first and foremost, the online world: ‘friends’ and followers, group chats, online communities, TikTok, Snapchat, the pages they frequent and, more broadly, the constant stream of digital content to which they are exposed. Observation suggests that this digital environment often carries more weight for teenagers than their tangible, real-life surroundings. Unfortunately, this shift is not always recognised by police officers, teachers, social workers, and parents, and the resulting gap in understanding risks leaving young people unsupported in the very spaces where they are most influenced and most vulnerable.
​
Currently, there is very limited real-time policing of online spaces, where content—often harmful or inflammatory—can reach millions within minutes. Social media companies have shown reluctance to engage in proactive content moderation, often prioritising the development of AI technologies for commercial gain rather than public safety. At the same time, government and law enforcement agencies frequently lack the resources and technical capabilities required to monitor and intervene swiftly and effectively in digital environments. This results in harmful content remaining online long enough to be consumed by vast audiences, including vulnerable individuals and children. While violent behaviour in the physical world can lead to arrest and imprisonment, those who engage in online violence—often under anonymous or false identities—are far more difficult to trace and hold accountable.
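The arithmetic of reactive moderation helps explain why. The hypothetical figures below (a minimal sketch in Python; the viewing rate and takedown delay are assumptions, not measured values) show how much exposure a single viral post can accrue when removal only follows user reports, compared with a pre-publication screen that blocks it outright.

```python
# Illustrative arithmetic only: how exposure accumulates before a reactive takedown.
# The viewing rate and takedown delay below are assumed figures, not platform data.

def views_before_removal(views_per_minute: int, minutes_until_takedown: int) -> int:
    """Total views a post accrues before a report-driven takedown removes it."""
    return views_per_minute * minutes_until_takedown


if __name__ == "__main__":
    # Assume a viral post reaching 10,000 views per minute and a takedown
    # arriving six hours after the first user report.
    reactive = views_before_removal(views_per_minute=10_000, minutes_until_takedown=6 * 60)
    proactive = 0  # a pre-publication screen that blocks the post is never seen publicly
    print(f"Reactive takedown:   {reactive:,} views accrued")  # 3,600,000
    print(f"Proactive screening: {proactive:,} views accrued")
```

Even with these deliberately round numbers, a six-hour gap between publication and removal translates into millions of views, which is precisely the window in which vulnerable individuals and children are most likely to encounter the material.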
​
Individuals who incite, pressure, or provoke others online, whether by prompting harmful actions or by tacitly permitting such behaviour, should be held accountable for their role in this dynamic. Because people now lead a dual existence, both online and offline, incitement that begins in digital spaces can manifest as real-world violence. Before any action can be taken, however, these perpetrators must first be identified.
Platforms and Harmful Content: A Growing Audience
The widespread visibility and consumption of violent and hateful content is no longer theoretical—it is well documented across major social media platforms.
For example, TikTok has been criticised for allowing videos glorifying violence, self-harm, and extremist ideology to circulate widely. In 2023, a TikTok video linked to the Christchurch mosque shooter, which promoted violent rhetoric and hate, was viewed over 1.5 million times before it was removed (Newton, 2023). Teenagers have been found mimicking violent behaviour seen on TikTok, leading to police warnings about the platform’s role in radicalising vulnerable youth. The platform’s highly personalised ‘For You’ algorithm often recommends such content to vulnerable users, inadvertently creating echo chambers that reinforce harmful narratives.
​
YouTube hosts thousands of videos ranging from graphic footage of real-world violence to radicalisation tutorials. A 2022 study revealed that channels promoting extremist material, such as the Christchurch attack manifesto, had millions of subscribers, with some videos receiving over 100,000 likes (West, 2022). Notably, videos glorifying the 2019 El Paso shooting were connected through algorithmic recommendations, contributing to what experts describe as a “radicalisation pipeline” that draws susceptible viewers deeper into violent extremism.
​
Facebook and Instagram continue to grapple with hate speech and violent content proliferating in private groups and comment sections. Investigations in 2024 uncovered private Facebook groups where users shared bomb-making instructions and praised violent extremist attacks, with some posts receiving tens of thousands of likes and shares (Smith & Patel, 2024). In one instance, an extremist recruiter used Instagram Stories to target young followers with violent propaganda, leading to multiple arrests in the UK.
​
Beyond mainstream sites, platforms like Telegram and Discord serve as hubs for extremist communities. In 2023, law enforcement seized several Telegram channels used by far-right groups to disseminate attack plans and racist manifestos, some with over 10,000 members (Jones, 2023). Discord servers have been implicated in organising violent protests and sharing graphic content glorifying mass shooters, often with little intervention from platform moderators.
​
The scale and engagement with this content highlight a troubling reality: violent and hateful material is not hidden away but actively consumed by millions worldwide. This widespread exposure can normalise extremist views, desensitise individuals to violence, and, in some cases, inspire real-world attacks.
​
References and further reading:
​
Bartlett, J. and Miller, C. (2012) The Edge of Violence: A Radical Approach to Extremism. London: Demos.
​
Jewkes, Y. (2015) Media & Crime. 3rd edn. London: Sage Publications.
Jones, A. (2023) ‘Encrypted platforms and extremism: The growing threat of Telegram and Discord’, Journal of Digital Security, 12(3), pp. 145–159.
​
Newton, C. (2023) ‘TikTok's “For You” algorithm fuels spread of violent content, critics say’, The Verge. Available at: https://www.theverge.com/2023/05/10/tiktok-violent-content-algorithm
​
O’Callaghan, D., Greene, D., Conway, M., Carthy, J. and Cunningham, P. (2015) ‘Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems’, Social Science Computer Review, 33(4), pp. 459–478. doi:10.1177/0894439314555329
​
Smith, L. and Patel, R. (2024) ‘Extremist recruitment on Facebook and Instagram: A 2024 review’, Social Media Studies Quarterly, 9(1), pp. 34–49.
​
West, D. M. (2022) YouTube’s extremist problem: How algorithms amplify hate speech. Brookings Institution Report. Available at: https://www.brookings.edu/research/youtube-extremist-algorithms