Content Moderation Presents New Obstacles in the Internet Age

Image Credit: Cogito Tech (Cogitotech)

The first instance of a terrorist recording violent crimes and posting the footage online came in 2012, when Mohammed Merah, the perpetrator of the Toulouse and Montauban attacks in France, did just that with his GoPro. Seven years later, the perpetrator of the Christchurch mosque shootings used a similar method. Both attacks raise the same question: how are social media platforms like Facebook, YouTube, and Twitter handling extremist content posted to their sites?

In response, tech giants have begun addressing the problem and are seeking to formulate specific mechanisms that target extremist content. Facebook and Google are focusing significant attention on developing automated systems, or AI (artificial intelligence) software, to detect and ultimately remove content that violates their policies.

The Global Internet Forum to Counter Terrorism (GIFCT) acts as a cooperative through which tech companies pool extremist content that already exists. A key purpose is to create unique digital fingerprints of contentious material, called “hashes.” Hashes are then shared within the GIFCT community so that such material can be tackled more efficiently across platforms and no single network bears the burden of containing it alone.
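As a rough illustration of how this hash-sharing workflow operates, here is a minimal sketch in Python. It uses a plain cryptographic hash (SHA-256) for simplicity; the real GIFCT database relies on perceptual hashes of images and video that still match when a file is slightly altered, but the pooling-and-matching logic is similar. All names and data below are hypothetical.

```python
import hashlib

# Shared pool of fingerprints contributed by member companies
# (in reality a GIFCT-managed database; here just an in-memory set).
shared_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Create a unique digital fingerprint ("hash") of a piece of media."""
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A platform that identifies extremist material adds its hash to the pool."""
    shared_hashes.add(fingerprint(content))

def matches_known_content(upload: bytes) -> bool:
    """Any member platform can check a new upload against the shared pool."""
    return fingerprint(upload) in shared_hashes

# Example: platform A flags a video; platform B can then block a re-upload of it.
flagged_video = b"...raw bytes of a flagged video..."
contribute(flagged_video)                    # platform A shares the fingerprint
print(matches_known_content(flagged_video))  # platform B's check returns True
```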

YouTube also uses techniques like automated flagging. Its Trusted Flagger Program includes individuals, non-governmental organizations (NGOs), and government agencies that are particularly effective at notifying YouTube of content that violates its Community Guidelines. As of March 2019, YouTube had removed 8.2 million videos from its platform using these techniques.

In a Wired interview, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer described AI as the “best tool” for keeping the Facebook community safe. AI is not infallible, though, as it sometimes fails to understand the nuances of online extremism and hate. This is where human moderators enter the picture.

The Verge published a detailed piece on the lives of Facebook content moderators. Once a post has been flagged, a moderator can delete it, ignore it, or send it for further review. Moderators are trained to look for signs that content could be distressing to any number of people.

It took 17 minutes for the original live stream of the Christchurch attack posted on Facebook to be removed. That was more than enough time for it to be downloaded, copied, and posted to other platforms. Facebook claims it removed 1.5 million copies of the Christchurch footage within the first 24 hours, but copies remain.

Content moderation is such a mammoth task for social media companies because of the sheer scale of their operations. Millions of people are online and accessing these services at the same time, so errors are expected. The Christchurch attack exposed a glaring shortcoming in content reporting: livestreaming. Moderation mechanisms exist for standard uploaded videos, but there are not enough tools to moderate a livestream in real time.

Another issue facing social media companies is the tech-savvy nature of modern extremists. Extremist content can be uploaded with manipulated audio and video quality to bypass the filters in place. Language poses another problem, as most automatic content moderation is English-language based. Nearly half of Facebook’s users do not speak English, so the company needs to expand its technology to incorporate other languages.

Facebook, YouTube, Twitter, and Instagram continue to develop their AI tools and improve their human moderation strategies. Nevertheless, those exploiting the current security loopholes are evolving as well. With 4.3 billion internet users in the world as of March 2019, content moderation itself remains under scrutiny.

Digital Repression Keeps the Crisis in Sudan Hidden from the World

Photo Credit: Photographer Ahmed Mustafa of Agence France-Presse

“How Come My Heartbreak Isn’t Loud Enough?” This message captures the calls of the Sudanese people who yearn for democracy. The problem is that few in the international community are aware of them, because Sudan’s authoritarian regime restricts citizens’ access to the internet to deter pro-democracy demonstrations and to hide government actions against its own people. Sudan has many challenges to overcome to secure its democratic freedom, and to do so, Khartoum must restore its digital freedom in order to share its struggle with the world.

Authoritarian governments in Gabon, Zimbabwe, Chad, and the Democratic Republic of the Congo all blocked their citizens’ internet access in the first three months of 2019. Sudan takes this repression a step further.

In April, a council of generals assumed power in the country against the wishes of demonstrators who sought civilian rule. Sudan’s government then shut down internet access across the country to stop pro-democracy movements from mobilizing. Democratic activists are reduced to using text messages and secret meetings to organize and share information, an alternative that seems primitive compared with the reach of Twitter and Facebook.

Demonstrators engaged in a sit-in protest in Khartoum. On June 3, this public activism turned violent when government forces used deadly force against protestors, and the world did not seem to notice. Reports state that 30 anti-government protesters were killed. Twitter users began sharing the tag #BlueforSudan to spread awareness of the violent repression and to support the Sudanese pro-democracy movement. Twitter users now report closer to 500 deaths and 623 injuries.

The blackout in the country appears to be working. With no video, pictures, or other media coming out of Khartoum, these atrocities can be verified only by witness accounts. Major international media outlets seem wary of picking up the story.

Greater media coverage of the situation in Sudan is needed. Reporters and journalists are barred from entering the country; however, there are other means of gathering information. Al-Jazeera and NPR have both spoken to people about events on the ground, but additional coverage is required to increase awareness globally.

The United Nations Security Council recently debated the situation in Sudan and attempted to put forward a unified statement condemning the Sudanese government’s actions. The draft was blocked by China, with the backing of Russia and Kuwait, on the grounds that it needed amendments. China calls the matter an “internal issue,” while Russia asserts that the situation needs to be handled with extreme caution. Eight European nations condemned the actions of Sudan’s security forces, but as it stands, no formal action has been taken.

China typically defends Sudan’s government and its atrocities, a stance linked to its interest in Sudanese oil. Since the discovery of oil in 1997, China has invested heavily in the northeastern African nation and has subsequently defended it at the UN, even when action is needed. A transition to a democratic framework would put Chinese oil imports in danger.

Following the coup this past April, Sudan announced a three-year transition to democracy. On June 4, Sudan’s government said it would hold a ballot-box election within nine months. The fear is that such an election would be rigged to favor the current administration.

The UN conducts election monitoring when assistance is specifically requested, and this presents an opportunity to ensure a fair election in Sudan. The mechanism is useful when citizens doubt the integrity of their national electoral process and seek outside assistance. The process can also be initiated by the state’s UN representative or through a mandate from the Security Council or General Assembly (GA). A GA mandate would be ideal, given the Security Council’s recent refusal to condemn Sudan’s actions.

International media outlets must report on Sudan’s current democratic struggle so that the country can have free and fair elections. That is only possible if the Sudanese government lifts its restrictions on civilian media, primarily internet access, so that interest in the situation can build. Media organizations must pursue additional means, such as building relationships with reliable sources, despite the information blocks. The global community would devote greater attention to the crisis in Khartoum, and create a unified front, if it knew of the state violence being conducted by the Sudanese government.

Will the United Kingdom’s Online Harms White Paper Curb Extremism but Allow Expression?

On April 8, Theresa May turned to Twitter to make a bold statement. Upon the release of the United Kingdom’s Online Harms White Paper, a tweet noted, “The era of social media companies regulating themselves is over.” The 102-page policy document urges the establishment of new regulations that would hold all social media companies liable for harmful and extremist content. Is this a sensible way to deal with digital extremism?

Social media companies and platforms have a part to play in making the internet a safer place. In order to combat harmful content, the United Kingdom seeks to hold companies such as Google, Facebook, and Twitter responsible. Authorities in the United Kingdom plan to enforce penalties for harmful content: a fine of 4% of global turnover or 20 million euros (about $23 million), whichever is greater.
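For a sense of scale, the proposed penalty is simply the larger of two numbers: 4% of a company’s global turnover or a flat 20 million euros. A small sketch with hypothetical turnover figures:

```python
def proposed_fine(global_turnover_eur: float) -> float:
    """Penalty: 4% of global turnover or EUR 20 million, whichever is greater."""
    return max(0.04 * global_turnover_eur, 20_000_000)

# A smaller platform with EUR 100 million in turnover would pay the flat minimum...
print(proposed_fine(100_000_000))     # 20,000,000
# ...while a giant with EUR 50 billion in turnover would pay 4% of it.
print(proposed_fine(50_000_000_000))  # 2,000,000,000
```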

In addition to the fines, the United Kingdom aspires to create a regulatory body and to enact bans and restrictions on user content, limiting what citizens can view. Regulations on internet freedoms and bans will undoubtedly anger citizens. Countries such as Russia and China impose similar restrictions. When liberal democratic nations adopt parallel legislation, they risk legitimizing such restrictions, which can be viewed as a victory for extremists.

Overreaction by the government of the United Kingdom could have extremely detrimental consequences. Changing online regulations and censoring citizens is a flawed legislative move. Passing this particular law would encourage extremists because it shows that their actions initiate socio-political change and prompt legislative action. Further, it would provoke pessimism in financial markets by raising the risk for tech startups.

A proactive response to digital extremism and the hope of making the internet a safer place are at the core of the United Kingdom’s argument. The May government is correct in its mission, but its execution needs more work. Fining social media companies and censoring user content seems more like a punishment than a solution.

The United Kingdom faces a few considerations should it proceed with the proposed White Paper. Public safety is of the utmost importance, as is the ability to express oneself freely. Fining companies for negligence toward extreme content is justifiable: the longer such content lingers, the further it spreads. Thus, social media platforms are directly responsible for stopping hateful and extremist messaging.

Major social media companies (Facebook, Twitter, YouTube) must update their Terms of Service and ask all users to act as moderators. If content appears to be approaching an extreme or violent conclusion, the community should report it. False reports regarding extreme content should carry penalties as well, to ensure users act responsibly. This avenue permits millions to help protect cyberspace on their own terms and would allow citizens to come together to combat online hate, which sends a powerful message against extremism.
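A minimal sketch of how such a community-reporting scheme might work, with false reports eroding a user’s credibility so that careless or malicious flagging carries less weight over time. Every name and threshold here is hypothetical and does not describe any platform’s actual system.

```python
from dataclasses import dataclass, field

REVIEW_THRESHOLD = 3.0  # hypothetical total report weight that triggers human review

@dataclass
class Reporter:
    """A user who volunteers to flag content; credibility changes with their accuracy."""
    user_id: str
    credibility: float = 1.0

@dataclass
class Post:
    post_id: str
    reports: list = field(default_factory=list)

def report(post: Post, reporter: Reporter) -> bool:
    """Record a report; return True once the post should go to human review."""
    post.reports.append(reporter)
    total_weight = sum(r.credibility for r in post.reports)
    return total_weight >= REVIEW_THRESHOLD

def resolve(post: Post, was_extremist: bool) -> None:
    """After human review, reward accurate reporters and penalize false reports."""
    for r in post.reports:
        if was_extremist:
            r.credibility = min(2.0, r.credibility + 0.1)
        else:
            r.credibility = max(0.1, r.credibility - 0.5)
```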

If the UK plans to change how its citizens use and interact in online spaces, the people should have a say. The solutions in the United Kingdom’s Online Harms White Paper need to be more focused. The 12-week consultation period has begun and will end July 1.

Currently, the proposed remedies are very broad and seem severe. The best way to ensure a safer internet is to create a unified community of users. Group accountability will help make the internet a safer place as the people of the United Kingdom define it. This could be the beginning of a safer internet and a model for other countries.

The EU Calls for Removal of all Extremist Content on Social Media

The European Union has given social media companies like Google, YouTube, Facebook, and Twitter three months to demonstrate that they are making efforts to rid their platforms of extremist content in order to fight online radicalization. This has been a significant issue in Europe, and the European Commission hopes that by removing extremist content within an hour of notification, social media companies can halt the proliferation of radicalization and extremist ideologies [1].

This could certainly help stop the lone-wolf radicalization phenomenon that’s been occurring online, but certain realities of this plan remain unclear. The proposal adds to the existing, voluntary system agreed by the EU and social media companies, under which social media platforms are not legally responsible for the content circulating on their sites [2].

It is unclear how feasible the EU proposal is, since meeting the one-hour mandate will be a struggle for companies. Google, for example, currently reviews 98% of reported videos within 24 hours [3].

The recommendations are non-binding, but could potentially be taken into account by European courts. For now, they are meant as guidelines for how companies should remove illegal content [4].

The next few months will demonstrate how the EU will proceed and whether tech companies will become more helpful in the fight against violent extremism. While it is certainly a step in the right direction with regard to decreasing online radicalization, there will be pushback from companies that find the increased effort and potential legal battles bothersome.


[1] Gibbs, S. (2018, March 1). EU gives Facebook and Google three months to tackle extremist content. The Guardian. Retrieved March 1, 2018, from http://www.theguardian.com/technology/2018/mar/01/eu-facebook-google-youtube-twitter-extremist-content

[2] Social media faces EU ‘1-hour rule’ on taking down terror content. (2018, March 1). Financial Times. Retrieved March 1, 2018, from https://www.ft.com/content/708b82c4-1d65-11e8-aaca-4574d7dabfb6

[3] Social media faces EU ‘1-hour rule’ on taking down terror content. (2018, March 1). Financial Times. Retrieved March 1, 2018, from https://www.ft.com/content/708b82c4-1d65-11e8-aaca-4574d7dabfb6

[4] Gibbs, S. (2018, March 1). EU gives Facebook and Google three months to tackle extremist content. The Guardian. Retrieved March 1, 2018, from http://www.theguardian.com/technology/2018/mar/01/eu-facebook-google-youtube-twitter-extremist-content