The first instance of a terrorist recording his violent crimes and posting the footage online came in 2012, when Mohammed Merah, the perpetrator of the Toulouse and Montauban attacks in France, did just that with his GoPro. Seven years later, the perpetrator of the Christchurch mosque shootings used a similar method. Both attacks raise the same question: how are social media platforms like Facebook, YouTube and Twitter handling extremist content posted to their sites?
In response, tech giants have begun addressing the problem, seeking mechanisms that specifically target extremist content. Facebook and Google are focusing significant attention on automated systems, artificial intelligence (AI) software designed to detect and ultimately remove content that violates their policies.
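Neither company discloses how its detection models work, but the basic shape of automated screening can be sketched. The toy example below is not any platform's actual system: it scores a post against a list of banned phrases and flags it for review past a threshold. The phrase list, scoring rule, and threshold are invented for illustration; production systems use trained machine-learning classifiers rather than keyword matching.

```python
# Toy sketch of automated content screening. Real platforms use
# trained machine-learning classifiers; the phrase list, scoring
# rule, and threshold below are invented for illustration only.

BLOCKLIST = {"hypothetical extremist slogan", "hypothetical banned phrase"}
FLAG_THRESHOLD = 1  # flag when at least one phrase matches (arbitrary)

def score_post(text: str) -> int:
    """Count how many blocklisted phrases appear in the post."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in BLOCKLIST)

def should_flag(text: str) -> bool:
    """Queue the post for human review when the score meets the threshold."""
    return score_post(text) >= FLAG_THRESHOLD

print(should_flag("an ordinary status update"))  # False: no matches
```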
The Global Internet Forum to Counter Terrorism (GIFCT) is a cooperative through which tech companies pool extremist content already identified on their platforms. A key purpose is to create unique digital fingerprints of that material, called “hashes,” which are then shared across the GIFCT community. Shared hashes extend every member’s reach, letting the companies tackle such material efficiently without any single network bearing the full burden of containing it.
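GIFCT members actually use perceptual hashing algorithms (such as Microsoft's PhotoDNA or Facebook's PDQ) that tolerate small edits to a file. As a simplified sketch of the sharing model only, the example below fingerprints content with an ordinary cryptographic hash (SHA-256) and checks new uploads against a pooled database; the function and variable names are illustrative, not GIFCT's.

```python
import hashlib

# Simplified sketch of GIFCT-style hash sharing. Members use
# perceptual hashes (e.g., PhotoDNA, PDQ) that survive small edits;
# SHA-256 here matches only byte-identical files.

shared_hash_db: set = set()  # stands in for the pooled GIFCT database

def fingerprint(data: bytes) -> str:
    """Return a hex digest identifying this exact sequence of bytes."""
    return hashlib.sha256(data).hexdigest()

def register_flagged_content(data: bytes) -> None:
    """One member contributes a hash; every member can now match it."""
    shared_hash_db.add(fingerprint(data))

def matches_known_content(data: bytes) -> bool:
    """Check an incoming upload against the shared fingerprints."""
    return fingerprint(data) in shared_hash_db
```

The design choice matters: because the hash, not the video itself, is shared, companies can cooperate without redistributing the offending material.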
YouTube likewise relies on automated flagging. Its Trusted Flagger Program enrolls individuals, non-governmental organizations (NGOs) and government agencies that are particularly effective at notifying YouTube of content that violates its Community Guidelines. As of March 2019, YouTube had removed 8.2 million videos from its platform using these techniques.
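YouTube has said that Trusted Flagger reports receive prioritized review, though exactly how is not public. A plausible minimal sketch, with invented field names and priority values, is a review queue ordered by reporter trust:

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical sketch of prioritized review: reports from Trusted
# Flaggers jump ahead of ordinary user reports. Priority values and
# names are invented; YouTube's real queueing is not public.

@dataclass(order=True)
class Report:
    priority: int                       # 0 = trusted flagger, 1 = regular user
    video_id: str = field(compare=False)

review_queue: list = []

def submit_report(video_id: str, trusted: bool) -> None:
    heapq.heappush(review_queue, Report(0 if trusted else 1, video_id))

def next_for_review() -> Report:
    return heapq.heappop(review_queue)
```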
In a Wired interview, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer described AI as the “best tool” for keeping the Facebook community safe. AI is not infallible, though: it sometimes fails to grasp the nuances of online extremism and hate. This is where human moderators enter the picture.
The Verge published a detailed report on the lives of Facebook content moderators. Once a post has been flagged, a moderator can delete it, ignore it, or send it for further review. Moderators are trained to watch for material that could be distressing to any number of people.
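The decision space The Verge describes is small enough to model as a three-way triage. A minimal sketch, with names that are mine rather than Facebook's internal tooling:

```python
from enum import Enum, auto
from typing import Optional

# Minimal model of the three-way decision The Verge describes.
# Names are illustrative, not Facebook's internal tooling.

class Action(Enum):
    DELETE = auto()    # clear policy violation
    IGNORE = auto()    # no violation found
    ESCALATE = auto()  # ambiguous case, sent for further review

def triage(violates_policy: Optional[bool]) -> Action:
    """None models a moderator who is unsure and escalates."""
    if violates_policy is None:
        return Action.ESCALATE
    return Action.DELETE if violates_policy else Action.IGNORE
```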
It took 17 minutes for the original livestream of the Christchurch attack to be removed from Facebook. That was more than enough time for the video to be downloaded, copied, and posted to other platforms. Facebook claims it removed 1.5 million copies of the footage within the first 24 hours, but copies remain.
Content moderation is a mammoth task for social media companies because of the sheer scale of their operations. Millions of people are online and accessing these services at any given moment, so errors are expected. The Christchurch attack exposed a glaring gap in content reporting: livestreaming. Moderation pipelines exist for standard uploaded videos, but the tools for moderating a stream as it happens are still lacking.
Another issue facing social media companies is the tech-savvy nature of modern extremists. Banned content can be re-uploaded with its audio and video quality manipulated just enough to bypass the filters in place. Language poses a further problem: most automated content moderation is built around English, yet nearly half of Facebook’s users do not speak English, so the company needs to extend its technology to other languages.
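The hash-matching model sketched earlier shows why such manipulation works: an exact fingerprint changes completely when even one byte of the file changes. The snippet below demonstrates the effect with SHA-256 (perceptual hashes are designed to resist exactly this, but sufficiently aggressive re-encoding can defeat them too):

```python
import hashlib

# Why lightly edited re-uploads evade exact-match filters: changing
# a single byte yields a completely unrelated digest.

original = b"stand-in for the bytes of a flagged video"
tweaked = original.replace(b"a", b"A", 1)  # a minimal "re-encode"

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tweaked).hexdigest())
# The digests share no structure, so the tweaked file matches no
# stored fingerprint of the original.
```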
Facebook, YouTube, Twitter and Instagram continue to develop their AI tools and refine their human moderation strategies. Nevertheless, the actors exploiting current loopholes are evolving as well. With 4.3 billion internet users in the world as of March 2019, content moderation itself is under scrutiny.