Content Moderation Presents New Obstacles in the Internet Age

Image Credit: Cogito Tech (Cogitotech)

The first instance of a terrorist recording violent crimes and posting the footage online came in 2012, when Mohammed Merah — the perpetrator of the Toulouse and Montauban attacks in France — did just that with his GoPro. Seven years later, the perpetrator of the Christchurch mosque shootings used a similar method. Both attacks raise the same question: how are social media platforms like Facebook, YouTube and Twitter handling extremist content posted to their sites?

In response, tech giants began addressing the problem, seeking mechanisms that specifically target extremist content. Facebook and Google have focused significant attention on developing automated systems, or artificial intelligence (AI) software, to detect and eventually remove content that violates their policies.

The Global Internet Forum to Counter Terrorism (GIFCT) acts as a cooperative through which tech companies pool known extremist content. A key purpose is to create unique digital fingerprints of contentious material, called “hashes.” Hashes are then shared within the GIFCT community, extending the reach of takedown efforts and ensuring that no single network bears the bulk of the burden.
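In outline, a hash-sharing scheme works by fingerprinting a known piece of material once and checking every new upload against the shared fingerprint set. The sketch below is a simplified illustration, not GIFCT's actual implementation: real systems use perceptual hashes that survive re-encoding and cropping, whereas a cryptographic hash is shown here for brevity, and the database contents are invented for the example.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint ("hash") of a piece of media.
    Simplified: production systems use perceptual hashing, which
    tolerates re-encoding; SHA-256 matches only exact copies."""
    return hashlib.sha256(content).hexdigest()

# A shared database of hashes of known extremist material
# (hypothetical placeholder values).
shared_hash_db = {fingerprint(b"known-extremist-clip")}

def should_block(upload: bytes) -> bool:
    """Check an incoming upload against the shared hash database."""
    return fingerprint(upload) in shared_hash_db

print(should_block(b"known-extremist-clip"))  # True: matches a shared hash
print(should_block(b"benign-cat-video"))      # False: no match
```

Because only the fingerprints are exchanged, member platforms can cooperate without redistributing the offending material itself.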

YouTube also uses techniques such as automated flagging. Its Trusted Flagger Program comprises individuals, non-governmental organizations (NGOs) and government agencies that are particularly effective at notifying YouTube of content that violates its Community Guidelines. As of March 2019, YouTube had removed 8.2 million videos from its platform using these techniques.

In a Wired interview, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer described AI as the “best tool” to keep the Facebook community safe. AI is not infallible, though, as it sometimes fails to understand the nuances of online extremism and hate. This is where human moderators enter the picture.

The Verge published a detailed piece on the lives of Facebook content moderators. Once a post has been flagged, a moderator can delete it, ignore it or send it for further review. Moderators are trained to look for signs that content could be distressing to viewers.

It took 17 minutes for the original live stream of the Christchurch attack to be removed from Facebook. That was more than enough time for it to be downloaded, copied, and posted to other platforms. Facebook claims it removed 1.5 million copies of the footage within the first 24 hours, but copies remain.

Content moderation is a mammoth task for social media companies because of the sheer scale of their operations: millions of people are online and accessing these services at the same time, so errors are expected. The Christchurch attack exposed a glaring shortcoming in content reporting: livestreaming. Mechanisms exist for moderating standard uploaded videos, but there are not enough tools to moderate a live stream.

Another issue facing social media companies is the tech-savvy nature of modern extremists. Audio and video quality can be manipulated so that uploads bypass the filters in place. Language poses another problem, as most automatic content moderation is English-language based. Nearly half of Facebook’s users do not speak English, so the company needs to expand its technology to incorporate other languages.

Facebook, YouTube, Twitter and Instagram continue to develop their AI tools and improve their human moderation strategies. Nevertheless, those exploiting current security loopholes are evolving as well. With 4.3 billion internet users in the world as of March 2019, content moderation itself is under scrutiny.

Digital Repression Keeps the Crisis in Sudan Hidden from the World

Photo Credit: Photographer Ahmed Mustafa of Agence France-Presse

“How Come My Heartbreak Isn’t Loud Enough?” This message captures the calls of the Sudanese people who yearn for democracy. The problem is that few in the international community are aware of them, as Sudan’s authoritarian regime restricts citizens’ access to the internet to deter pro-democracy demonstrations and to hide government actions against its own people. Sudan has many challenges to overcome to secure its democratic freedom, and to do so the government in Khartoum must restore the digital freedom that lets its people share their struggle with the world.

Authoritarian regimes in Gabon, Zimbabwe, Chad, and the Democratic Republic of the Congo all blocked their citizens’ internet access in the first three months of 2019. Sudan takes this repression a step further.

On April 3, a council of generals assumed power in the country against the wishes of democratic demonstrators who sought civilian rule. Sudan’s government then shut down internet access as a means to stop pro-democracy movements from mobilizing. Activists are reduced to using text messages and secret meetings to organize and share information, an alternative that seems primitive compared with the sharing power of Twitter and Facebook.

Demonstrators staged a sit-in protest in Khartoum on June 3. The world hardly seemed to notice when this public activism turned violent and government forces used deadly force against protestors. Initial reports stated that 30 anti-government protesters had been killed. Twitter users began to share the tag #BlueforSudan to spread awareness of the violent repression and support the Sudanese pro-democracy movement; they now report closer to 500 deaths and 623 injuries.

The blackout appears to be working. With no video, pictures, or other media coming out of Khartoum, these atrocities can be verified only by witness accounts, and major international media outlets seem wary of picking up the story.

Greater media coverage of the situation in Sudan is needed. Reporters and journalists are barred from entering the country; however, there are other means of gathering information. Al-Jazeera and NPR have both spoken to people about the events, but additional coverage is required to increase awareness globally.

The United Nations Security Council recently debated the situation in Sudan and attempted to put forward a unified statement condemning the Sudanese government’s actions. China, backed by Russia and Kuwait, blocked the draft, claiming it needed amendments. China calls the crisis an “internal issue”, while Russia asserts that the situation must be handled with extreme caution. Eight European nations have condemned the actions of Sudan’s security forces, but as it stands, no formal action has been taken.

China typically defends Sudan’s government and its atrocities, a stance linked to its interest in Sudanese oil. Since the discovery of oil in 1997, China has invested heavily in the northeastern African nation and has subsequently defended it at the UN, even when action is needed. A transition to a democratic framework would put Chinese oil imports in danger.

Following the coup this past April, Sudan announced a three-year transition to democracy. On June 4th, Sudan’s government said it would hold a ballot-box election in nine months. The fear is that such an election would be rigged to favor the current administration.

The UN conducts election monitoring when assistance is specifically requested, and this presents an opportunity to ensure a fair election in Sudan. The mechanism is useful when citizens doubt the integrity of their national electoral process and seek outside assistance. The process can also be initiated by a UN representative of the state in question or by a mandate from the Security Council or General Assembly (GA). A GA mandate would be ideal, given the Security Council’s recent failure to condemn Sudan’s actions.

International media outlets must report on Sudan’s democratic struggle so that the country can have free and fair elections. That is only possible if the Sudanese government lifts its restrictions on civilian media, primarily internet access, so that interest in the situation can build. Media organizations must pursue additional means, such as cultivating reliable sources, despite the information blocks. If the global community knew of the state violence conducted by the Sudanese government, it would devote greater attention to the crisis in Khartoum and could form a unified front.

The Christchurch Call and Eliminating Violent Extremism Online

On March 15th, the world witnessed an atrocity that left fifty-one people dead at a mosque in Christchurch, New Zealand. A live stream video capturing the massacre circulated online across social media platforms for two months and enraged people across the globe.

The international community responded on May 15th. New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron announced the formation of a global initiative to combat online extremism and related terrorism. “The Christchurch Call to Action” (The Call) is an agreement between countries and tech companies to unite in this difficult endeavor.

Ardern and Macron called upon countries and tech companies to voluntarily join this global initiative. An impressive list heeded this request. The purpose of The Call is to transform the internet into a safer environment through cooperation, education and research whilst protecting basic human rights and freedoms.

This global commitment stands in contrast to the United Kingdom’s Online Harms White Paper, in which London proposed watchdogs, regulations, and fines to govern its cyberspace. The Christchurch Call instead offers a voluntary global commitment to making the internet safe through collaboration between states and tech companies. Giving these entities the choice to join, rather than coercing them, matters: joining of their own accord shows that The Call is a united front against online extremism.

Amazon, Facebook, Google, Microsoft, and Twitter released a nine-point plan and a joint statement in response to The Call. This preliminary framework lays out five individual plans and four collaborative efforts, offering better security, updated terms of service, education, and shared technology development.

The United States was among the countries unwilling to join. Washington stated that while it supported the overall goal, it was not an appropriate time to sign on. Its concerns rest with freedom of expression: in the past, the Trump Administration accused social media companies of denying such rights.

The governance of cyberspace presents the main issue for American interests. Cyberspace mirrors the Wild West: it is largely self-governed, with no state able to claim authority, and the only entities that manage it are people and companies. The Call initiates the conversation over how cyberspace should be governed and whether it can be governed at all.

By signing, states volunteer not only to safeguard the internet but also to have it governed by all signatories, which is problematic if those countries do not agree with one another. Many countries use cyberspace for purposes that may conflict with The Call, and signing it may forfeit their freedom to act in cyberspace as they choose.

Another point of interest is the coexistence of the Online Harms White Paper and The Call. Both tackle the same issue, but in different ways, and these differing approaches create potential dysfunction. Already there is a conflict of interest over the appropriate methods of combating online extremism and terrorism among states that have signed The Call.

Ideas and solutions must be consistent in order to regulate cyberspace. Debate over how to achieve shared goals is expected, but one country implementing punitive regulations while another pursues a holistic approach sends a mixed message.

As it stands, the Christchurch Call to Action reads as a list of strategies that states and tech companies plan to implement, including calls for transparency, collaboration, and better security. Terrorism is a complicated social issue, but having key actors work together to counter online terror and extremism is a giant leap forward. It will be interesting to see how states work with each other and with tech companies to address the issue.

Terror’s New Form

Source: The East African (2014)

Author: Caleb Septoff

Perhaps one of the greatest scientific achievements in human history is the invention of the internet, which marked the beginning of the modern digital age. Its uses span multiple fields, and it is in large part responsible for the rapid globalization we have become accustomed to today. Although it has improved humanity in many facets, it has also increased the susceptibility of nations and individuals to cyber-attacks. The internet has evolved over the last decade with the inception of social media and cryptocurrency, but with this evolution comes a new wave of terrorism in the form of cyber-attacks, propaganda, hacking, and online recruitment. The threat has grown substantially – enough for universities, such as New York University (NYU), to offer cybersecurity majors and courses devoted to deterring these types of attacks.

Before venturing into digital terrorism, it is important to explore something less widely known to the average internet user: the deep web and the dark net. The internet is composed of two main layers: the surface web and the deep web. The surface web is the part most familiar to everyday users, consisting mainly of content reachable through search engines like Google and Bing, where information is unrestricted. The deep web differs mainly in size: it is estimated to be four to five hundred times bigger than the surface web, accounting for roughly 90% of the internet, and the wealth of information stored there is gigantic. Most of the deep web is restricted by applications that gate access to databases or password-protected sites; anything from social media accounts, such as Facebook or Instagram, to online banking is considered part of the deep web. Despite popular belief, the deep web and the dark net are not synonymous. The dark net is a hidden layer that is almost entirely unregulated and even harder to access than the deep web. To date, the dark net hosts an unknown number of websites, with content ranging from messages sent by people who wish to remain anonymous to underground drug dealing, sex trafficking, weapons dealing, and, the focus of this article, terrorist and extremist sites.

The Islamic State of Iraq and the Levant (ISIL), or Daesh, was the first terrorist organization to truly maximize its outreach using the internet. When Abu Bakr al-Baghdadi declared the caliphate, a wave of propaganda and recruitment media took social media by storm. While destructive, much of this content could be mitigated by authorities and the companies themselves, since it appeared on the more accessible surface web. The organization, however, consistently found new ways to respond to crackdowns. It began by attracting people through social media and other corners of the surface web, then slowly moved them toward better-protected places such as domains and chat rooms on the dark net. Heavily encrypted messaging applications like Telegram were also core channels of communication. These cyber tools helped attract over 20,000 foreign fighters from more than 10 different countries to Syria to fight on ISIL’s behalf, and even more followers aided the organization remotely from around the globe. In early 2018, New York Times reporter Rukmini Callimachi released a podcast called “Caliphate,” which details one Canadian man’s experience of being recruited in stages, starting on social media and eventually moving into private chat rooms. Callimachi’s reporting highlights how extensive ISIL’s reach was, not only technologically but through the simple creation of effective connections with people, especially the youth.

Thus far, terrorist groups have not managed much more than defacing webpages and executing minor hacks. For example, a series of attacks in 2015, all claiming ties to Daesh, were carried out in various countries. Most notably, a self-titled group called Cyber Caliphate managed to hack Malaysia Airlines’ main website, deface the French broadcaster TV5, and hijack the YouTube and Twitter accounts of US Central Command. Technology grows more sophisticated every year, and as greater attention turns to digital recruitment and terrorism, these “small” attacks will grow in scope and harm. The possibility of cutting electricity to hospitals or inciting mass riots through the spread of false media is real and dangerous. Finding adequate responses to the rising dangers of cyber terrorism is crucial to the future of counterterrorism. Perhaps the most pressing question is how best to be proactive in thwarting attacks rather than simply reacting to them.

The international community has a plethora of third-party watchdogs for war and terrorism, whether global entities like the United Nations (UN) or international non-governmental organizations (INGOs). In addition, a multitude of international treaties and agreements set standards for war and outline what is unacceptable. The Geneva Conventions, among the most important and widely known, comprise four treaties and three protocols establishing standards for humanitarian rights and treatment in times of war. Yet these frameworks do not adequately cover how to respond to cyber warfare and digital terrorism. One of the greatest challenges in dealing with online threats is attribution: ascribing blame to those who committed the crime and proving it. A RAND Corporation video on the subject identifies three main types of attribution: political (diplomatic knowledge and political actors’ objectives), technical (IP addresses, log file analysis, etc.), and clandestine (classified information and political insights).
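As a toy illustration of the "technical" attribution category, the sketch below counts failed login attempts per source IP in a server log. The log lines, the HTTP 401 heuristic, and the three-failure threshold are all invented for the example; real forensic log analysis is far more involved and rarely this clean.

```python
import re
from collections import Counter

# A hypothetical server log excerpt (Common Log Format style);
# real logs and formats vary widely.
log = """\
203.0.113.7 - - [12/May/2019:06:25:24] "POST /admin/login HTTP/1.1" 401
203.0.113.7 - - [12/May/2019:06:25:25] "POST /admin/login HTTP/1.1" 401
198.51.100.23 - - [12/May/2019:06:26:01] "GET /index.html HTTP/1.1" 200
203.0.113.7 - - [12/May/2019:06:25:26] "POST /admin/login HTTP/1.1" 401
"""

# Count failed login attempts (HTTP 401 responses) per source IP.
failed = Counter()
for line in log.splitlines():
    m = re.match(r"(\d+\.\d+\.\d+\.\d+).*\" 401$", line)
    if m:
        failed[m.group(1)] += 1

# Flag IPs with repeated failures as candidates for investigation.
suspects = [ip for ip, n in failed.items() if n >= 3]
print(suspects)  # ['203.0.113.7']
```

Even this trivial pass shows why attribution is hard: an IP address points to a machine, not a person, and adversaries routinely route traffic through proxies or compromised hosts.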

Categorizing attribution makes it easier to decide how to interpret a crime and, thus, how to assign punishment. However, digital crimes are not simple to prove without access to data that is, for the most part, private, anonymous and not easily tracked. Citizens’ right to privacy, and how much privacy they are entitled to, has become a topic of high contention in the debate over stronger cyber security. Although these issues are difficult, the international community needs to take action before cyber warfare reaches a level with much higher stakes. Like the UN, there needs to be a large international organization specializing in cyber security and cyber terrorism. To be effective and credible, it would have to be free of political affiliation and able to act on behalf of any country that requests its services. Perhaps most important would be its role in providing international law on cyber warfare and attacks, clearly and concisely building a framework for security agencies to work from. It would also be responsible for developing mechanisms protecting freedom of expression and privacy, although this would most likely fall to individual countries rather than the independent watchdog organization.

Social media platforms have done relatively well at combing through their users and content to locate possible terrorist activity, but this is not enough; further action is needed on regulation. Systems must be devised to adequately monitor surface web content as well as the deep and dark web, to locate, deter and respond to threats before they can harm critical infrastructure, governments, businesses, and even the psyches of viewers. Creating measures to regulate data and prevent data mining for terrorist activities is crucial to preventing future attacks. There is no easy answer to the rising threat of cyber terrorism and warfare, but it is imperative that solutions and international cooperation begin sooner rather than later.