The Christchurch Call and Eliminating Violent Extremism Online

On March 15, 2019, the world witnessed an atrocity that left fifty-one people dead at two mosques in Christchurch, New Zealand. A live-streamed video of the massacre circulated across social media platforms for two months and enraged people around the globe.

The international community responded on May 15, 2019. New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron announced the formation of a global initiative to combat online extremism and related terrorism. “The Christchurch Call to Action” (The Call) is an agreement between countries and tech companies to unite in this difficult endeavor.

Ardern and Macron called upon countries and tech companies to voluntarily join this global initiative. An impressive list heeded this request. The purpose of The Call is to transform the internet into a safer environment through cooperation, education and research whilst protecting basic human rights and freedoms.

This global commitment stands in contrast to the United Kingdom’s Online Harms White Paper, in which London proposed watchdogs, regulations, and fines to govern its cyberspace. The Christchurch Call instead offers a voluntary global commitment to making the internet safe through collaboration between states and tech companies. Giving these entities the choice to join, rather than coercing them, matters: joining of their own accord shows that The Call is a united front against online extremism.

Amazon, Facebook, Google, Microsoft, and Twitter released a nine-point plan and a joint statement in response to The Call. This preliminary framework lays out five individual plans and four collaborative efforts, covering better security, updated terms of service, education, and shared technology development.

The United States is one of the countries unwilling to join. Washington stated that while it supported the overall goal, it was not an appropriate time to sign on. The concern rests with freedom of expression: in the past, the Trump Administration accused social media companies of denying these rights.

The governance of cyberspace presents the main issue for American interests. Cyberspace mirrors the Wild West: it is largely self-governed, and no single state can claim authority over it. The only entities that manage it are people and companies. The Call initiates the conversation over how cyberspace should be governed, and whether it can be governed at all.

By signing, states volunteer not only to safeguard the internet but also to accept governance shared among all signatories. That becomes problematic if these countries do not agree with one another. Many countries use cyberspace for purposes that may conflict with The Call, and signing it may forfeit a state’s right to act freely in cyberspace.

Another point of interest is the co-existence of the Online Harms White Paper and The Call. They both tackle the same issue but in different ways, and the differences in approach create possible dysfunction. Already there is a conflict of interest among states that have signed The Call over the appropriate methods of combating online extremism and terrorism.

Ideas and solutions must be consistent in order to regulate cyberspace. Discussion over how to achieve these goals is expected, but one country implementing punitive regulations while another pursues a holistic approach sends a mixed message.

As it stands, the Christchurch Call to Action reads as a list of strategies that states and tech companies plan to implement, including calls for transparency, collaboration, and better security. Terrorism is a complicated social issue, but having key actors working together to counter online terror and extremism is a giant leap forward. It will be interesting to see how states work with each other and how they collaborate with tech companies to address the issue.

Will the United Kingdom’s Online Harms White Paper Curb Extremism but Allow Expression?

On April 8, 2019, Theresa May turned to Twitter to make a bold statement. Upon the release of the United Kingdom’s Online Harms White Paper, she tweeted, “The era of social media companies regulating themselves is over.” The 102-page policy document urges new regulations that would hold all social media companies liable for harmful and extremist content. Is this a sensible way to deal with digital extremism?

Social media companies and platforms have a part to play in making the internet a safer place. In order to combat harmful content, the United Kingdom seeks to hold companies such as Google, Facebook, and Twitter responsible. Authorities in the United Kingdom plan to enforce penalties for harmful content: a fine of 4% of global turnover or €20 million ($23 million), whichever is greater.
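The penalty structure described above can be sketched as simple arithmetic: the fine is whichever is larger, the percentage of turnover or the flat floor. A minimal illustration, assuming the fine really is just the greater of the two cited figures (the function name and currency handling here are hypothetical, not from the White Paper):

```python
def online_harms_fine(global_turnover_eur: float) -> float:
    """Illustrative only: the greater of 4% of global turnover
    or a EUR 20 million floor, per the figures cited above."""
    FLAT_FLOOR_EUR = 20_000_000
    return max(0.04 * global_turnover_eur, FLAT_FLOOR_EUR)

# A company with EUR 1 billion in turnover would pay the 4% figure:
print(online_harms_fine(1_000_000_000))  # 40000000.0
# A smaller company falls back to the EUR 20 million floor:
print(online_harms_fine(100_000_000))    # 20000000.0
```

The floor ensures that even small platforms face a meaningful penalty, while the percentage scales the fine for the largest companies.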

In addition to the fines, the United Kingdom aspires to create a regulatory body and to enact bans and restrictions on user content, limiting what citizens can view. Regulations on internet freedoms will undoubtedly anger citizens. Countries such as Russia and China take similarly restrictive approaches to online content, and liberal democratic nations adopting parallel legislation potentially legitimizes such restrictions and can be read as a victory for extremists.

Overreaction by the government of the United Kingdom would have extremely detrimental consequences. Changing online regulations and censoring citizens is a flawed legislative move: passing such a law encourages extremists by showing that their actions can trigger socio-political change and legislative action. Further, it provokes pessimism in financial markets by raising the regulatory risk facing tech startups.

Proactive responses to digital extremism and the hope of making the internet a safer place are at the core of the United Kingdom’s argument. The May government is correct in its mission, but its execution needs more work. Fining social media companies and censoring user content seems more like a punishment than a solution.

The United Kingdom faces a few considerations should it proceed with the proposed White Paper. Public safety is of the utmost importance, as is free expression. Fining companies for negligence in removing extremist content is justifiable: the longer such content lingers, the further it spreads. Social media platforms are thus directly responsible for stopping hateful and extremist messaging.

Major social media companies — Facebook, Twitter, YouTube — must update their Terms of Service and ask all users to act as moderators. Content that appears extremist or incites violence should be reported by the community. False reports should carry penalties as well, to ensure users report responsibly. This avenue permits millions to help protect cyberspace on their own terms, and it allows citizens to come together to combat online hate, which presents a powerful message against extremism.
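The community-moderation mechanism proposed above (users flag content, moderators confirm or reject, and confirmed false reports are penalized) can be sketched as a small workflow. Every name, class, and penalty value below is a hypothetical illustration, not anything specified by the White Paper or by any platform:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reporter:
    """A user who flags content; standing falls with false reports."""
    user_id: str
    standing: int = 100

@dataclass
class Report:
    """A single community flag awaiting a moderator's decision."""
    reporter: Reporter
    content_id: str
    reason: str
    confirmed: Optional[bool] = None  # set when a moderator resolves it

def resolve(report: Report, is_extremist: bool) -> None:
    """Moderator decision: confirm the flag, or penalize a false report."""
    report.confirmed = is_extremist
    if not is_extremist:
        report.reporter.standing -= 10  # discourages abuse of the system

# Example: a user files a report that a moderator rejects as false.
alice = Reporter("alice")
report = Report(alice, "post-123", "incites violence")
resolve(report, is_extremist=False)
print(alice.standing)  # 90
```

The standing penalty is the key design choice: it keeps the reporting channel open to everyone while making repeated bad-faith flags costly, which is the balance the proposal above is reaching for.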

If the UK plans to change how its citizens use and interact in online spaces, the people should have a say. The solutions in the United Kingdom’s Online Harms White Paper need to be more focused. The 12-week consultation has begun and will end July 1st.

Currently, the proposed remedies are broad and severe. The best way to ensure a safer internet is to create a unified community of users. Group accountability will help make the internet the safer place that the people of the United Kingdom themselves define. This could be the beginning of a safer internet and a model for other countries.