Trends of 2020: What increased internet use has meant for terrorism in Europe

The European Union, United Kingdom and Switzerland have had an unconventional year for identifying trends in terrorist activity. The COVID-19 pandemic and subsequent lockdowns, travel restrictions, and digitization of everyday life have posed difficulties for some terrorist groups and opportunities for others.

A Europol report on terrorism in Europe stated that in 2020, six EU member states experienced a total of 57 completed, foiled, or failed terrorist attacks. Taking the UK into account, the number rises to 119. Analyzing the data, Europol found that all completed jihadist attacks were committed by individuals apparently acting alone, while three of the foiled attacks involved multiple actors or small groups. Nearly all attackers in the EU and UK were male and typically aged between 18 and 33; in only one case, in Switzerland, was the perpetrator a woman. The same report identifies right-wing extremist trends over the last three years. Its findings show similarities between Islamist and right-wing terrorists in terms of age and gender. Right-wing terror suspects are increasingly young, many of them still minors at the time of their arrest, and they appear closely connected to violent transnational organizations on the internet.

COVID-19 lockdown restrictions have vastly increased European citizens’ reliance on the internet for everyday tasks, both professional and recreational. Statista recently released data showing that 91% of EU households had internet access in 2020, an all-time high. But increased access and usage of the internet brings the risk of it being used for malicious purposes, particularly terrorist organizing. The quantity of propaganda produced by official ISIL media outlets reportedly decreased in 2020. Despite this, ISIL continues to use the internet to stay connected to potential attackers who align themselves with its ideology, and these connections have allowed the group to call on lone actors to commit terrorist attacks. The data from Europol’s 2020 report confirms that lone-actor attacks made up most of the “successful” terror attacks in 2020, while attacks planned by groups were typically prevented.

Their right-wing extremist counterparts have developed sophisticated recruitment methods in the internet age, particularly over the last year. Right-wing terror suspects have built communication strategies around gaming apps and chat servers typically used by gamers. Presumably to attract a younger demographic, right-wing extremists with links to terror suspects have diversified their internet use to include gaming platforms, messenger services, and social media. In the wake of the coronavirus pandemic and vaccination programs, the Centre for Countering Digital Hate notes that Discord has been a vital tool for spreading disinformation and conspiracy theories involving racial hatred. Here, strategies used in online games to reward progression have been repurposed to serve right-wing propaganda: points are awarded to the most active members of certain Discord servers who fabricate and promote conspiracy theories, often including antisemitic tropes involving Bill Gates. Virtual currency plays a key role in promoting this narrative of success and reward, and in capturing the interest of minors who are active in the virtual space.

Combating terrorist threats in Europe has always been a challenge on account of the sporadic and unpredictable nature of the perpetrators themselves. While the people behind the attacks vary in socio-economic upbringing, religious affiliation, and nationality, some similarities remain, and these commonalities suggest where solutions to internet-based strategies could begin. If the EU were to develop a common framework for disrupting and taking down radical groups online, it could find greater success in combating digital extremism. ISIL’s online networks on Telegram were taken down in November 2019, and the group has since struggled to rebuild networks of a similar scale.

Gender and age also give some insight into where to begin in diminishing future recruitment to ideology-based terrorism. While internet usage cannot be regulated, education can. Europe may benefit from the cooperation of educational institutions at all levels in raising awareness of the dangers of online radicalization. Workshops, information posters, and seminars introducing the intricacies of radicalization would inform vulnerable students of the potential pitfalls of internet consumption and create a clearer understanding of modern conspiracy theories, where they come from, and why they exist.

Additionally, understanding the meaning behind extremist imagery, symbols, numbers, phrases, and music (as well as how to report them on the internet) would increase awareness among otherwise distracted students consumed by online trends and activity.

Paired with this awareness commitment, the EU should set a budget that meets the needs of mental health services in schools, introducing spaces in which students may express their concerns. This in turn could reduce their vulnerability to online extremist groups looking to recruit.

Content Moderation Presents New Obstacles in the Internet Age


The first instance of a terrorist recording violent crimes and posting the footage online came when Mohammed Merah, the perpetrator of the 2012 Toulouse and Montauban attacks in France, did just that with his GoPro. Seven years later, the culprit of the Christchurch mosque shootings used a similar method. Both attacks raise the same question: how are social media platforms like Facebook, YouTube, and Twitter handling extremist content posted to their sites?

In response, tech giants began addressing the problem and are seeking to formulate specific mechanisms that target extremist content. Facebook and Google have focused significant attention on developing automated systems, or artificial intelligence (AI) software, to detect and ultimately remove content that violates their policies.
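The general shape of such a pipeline can be sketched in a few lines. The toy example below is purely illustrative: it scores text against a static keyword list, whereas the actual Facebook and Google systems rely on trained machine-learning models over text, images, and video. The names, markers, and thresholds here are invented.

```python
# Minimal, hypothetical sketch of an automated policy-violation filter.
# Real systems use trained models over text, images, and video; this toy
# version only scores text against a static blocklist to show the basic
# detect -> remove / escalate flow.

from dataclasses import dataclass

# Illustrative markers only; not drawn from any real platform's policy.
BLOCKLIST = {"terror_propaganda", "incitement_to_violence"}


@dataclass
class Decision:
    action: str   # "remove", "review", or "allow"
    score: float  # crude confidence score between 0 and 1


def score_post(text: str) -> float:
    """Return the fraction of blocklisted markers found in the post."""
    tokens = set(text.lower().split())
    return len(tokens & BLOCKLIST) / len(BLOCKLIST)


def moderate(text: str, remove_threshold: float = 0.5,
             review_threshold: float = 0.1) -> Decision:
    score = score_post(text)
    if score >= remove_threshold:
        return Decision("remove", score)   # confident violation: remove automatically
    if score >= review_threshold:
        return Decision("review", score)   # uncertain: queue for a human moderator
    return Decision("allow", score)


if __name__ == "__main__":
    print(moderate("holiday photos from the lake"))
    print(moderate("terror_propaganda incitement_to_violence"))
```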

The Global Internet Forum to Counter Terrorism (GIFCT) acts as a cooperative through which tech companies pool extremist content already in existence. A key purpose is to create unique digital fingerprints of contentious material, called “hashes.” Hashes are then shared within the GIFCT community, extending the reach of the effort and ensuring that no single network has to handle the bulk of the material on its own.
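A rough sketch of how hash sharing works is shown below. It uses an exact SHA-256 digest for brevity, whereas GIFCT members rely on perceptual hashes (PhotoDNA-style fingerprints) that tolerate small edits to an image or video; the function and database names are hypothetical.

```python
# Simplified sketch of hash sharing in the spirit of GIFCT. SHA-256 is used
# here for brevity; the consortium's database holds perceptual hashes, which
# survive small edits to images and video. All names are illustrative.

import hashlib


def fingerprint(content: bytes) -> str:
    """Return a hex digest acting as the content's digital fingerprint ('hash')."""
    return hashlib.sha256(content).hexdigest()


# Shared collection of fingerprints of known extremist material,
# as each participating platform might mirror it from the consortium.
shared_hash_db: set[str] = set()


def contribute(content: bytes) -> None:
    """One platform identifies violating material and contributes its hash."""
    shared_hash_db.add(fingerprint(content))


def is_known_extremist_content(upload: bytes) -> bool:
    """Any member platform can now block re-uploads without reviewing them again."""
    return fingerprint(upload) in shared_hash_db


if __name__ == "__main__":
    video = b"<bytes of a video already judged to violate policy>"
    contribute(video)
    print(is_known_extremist_content(video))         # True: exact copies are caught
    print(is_known_extremist_content(video + b"!"))  # False: any edit defeats an exact hash
```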

YouTube also uses techniques such as automated flagging. Membership of its Trusted Flagger Program includes individuals, non-governmental organizations (NGOs), and government agencies that are particularly effective at notifying YouTube of content that violates its Community Guidelines. As of March 2019, YouTube had removed 8.2 million videos from its platform using these techniques.

In a Wired interview, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer described AI as the “best tool” for keeping the Facebook community safe. AI is not infallible, though, as it sometimes fails to understand the nuances of online extremism and hate. This is where human moderators enter the picture.

The Verge published a detailed piece on the lives of Facebook content moderators. Once a post has been flagged, the moderator can delete it, ignore it, or send it for further review. Moderators are trained to look for signs that content could be distressing to any number of people.

It took 17 minutes for the original live stream of the Christchurch attack posted on Facebook to be removed. That was more than enough time for it to be downloaded, copied, and posted to other platforms. Facebook claims it removed 1.5 million copies of the Christchurch footage within the first 24 hours, but copies remain.

Content moderation is such a mammoth task for social media companies because of the sheer scale of their operations: millions of people are online and accessing these services at the same time, so errors are expected. The Christchurch attack exposed a glaring shortcoming in content reporting: livestreaming. Moderation mechanisms exist for standard uploaded videos, but there are not enough tools to moderate a livestream in real time.

Another issue facing social media companies is the tech-savvy nature of modern extremists. Known content can be re-uploaded with manipulated audio and video quality to bypass the filters in place. Language poses another problem, as most automatic content moderation is English-language based; nearly half of Facebook users do not speak English, so the company needs to expand its technology to incorporate other languages.
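The quality-manipulation trick works because exact fingerprints change completely when even a small part of a file changes. The toy comparison below, using an invented eight-pixel “image,” illustrates the gap: an exact SHA-256 match fails after a slight re-encoding, while a simple average-hash (a stand-in for the perceptual hashing real systems use) still matches. Everything in this sketch is illustrative.

```python
# Toy demonstration of why slight quality changes defeat exact hash matching.
# The 'image' is just eight grayscale pixel values; real perceptual hashing
# (e.g. PhotoDNA or pHash) operates on genuine images and video frames.

import hashlib


def exact_hash(pixels: list[int]) -> str:
    return hashlib.sha256(bytes(pixels)).hexdigest()


def average_hash(pixels: list[int]) -> int:
    """Perceptual-style hash: one bit per pixel, set if brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


if __name__ == "__main__":
    original = [10, 200, 30, 220, 40, 210, 20, 230]
    degraded = [p + 2 for p in original]  # every pixel nudged, as lossy re-encoding might do

    print(exact_hash(original) == exact_hash(degraded))   # False: exact filter bypassed
    print(hamming_distance(average_hash(original),
                           average_hash(degraded)))       # 0: perceptual match still holds
```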

Facebook, YouTube, Twitter, and Instagram continue to develop their AI tools and improve their human moderation strategies. Nevertheless, those taking advantage of current security loopholes are evolving as well. With some 4.3 billion internet users in the world as of March 2019, content moderation itself remains under scrutiny.