The Continued Impact of Technology on Terrorism

Technology has greatly evolved in the past decades to the point that it is fair to speak of a technological revolution. Mobile phones, personal computers, and the Internet are commonplace in everyday life. More specifically, the evolution of information and communication technology has radically changed not only the way people communicate, but also ways of thinking and understanding complex matters.

Notwithstanding the recognized benefits of new technologies, there are concerns regarding their dual use. Recent events demonstrate that technological developments have been misused by non-state actors, such as terrorist groups. In fact, many terrorist organizations have been quick to exploit rapid technological advances to aid in the manufacture of weapons, ammunition and explosives. The use of military technology by such groups is one of the most severe threats currently faced worldwide.

However, developments in the information and communications technology (ICT) sector are even more alarming. Indeed, the use of digital and Internet platforms and their possible misuse by terrorists requires significant attention in any discussion focused on the topic. Social media platforms, Internet forums and online messaging applications have undoubtedly become terrorist propaganda mechanisms.

The use of information and communication technologies as tools for radicalization and recruitment is now common. Many terrorist organizations have managed to build vast, sophisticated networks of supporters from all over the world. Moreover, such technologies are a major source of inspiration for lone-actor terrorists, who have attempted or carried out attacks after watching live-streamed attacks or speeches in which leading members of terrorist groups incite violence.

Planning an attack is now much easier as there are websites that provide all the necessary information about means and methods. These sites are easily accessed by the public thus permitting would-be terrorists to download instructions, such as those related to bomb-making, from the Internet.

In addition, digital technology has influenced the media. Changes in media technology have enabled terrorists to disseminate their message to wider audiences with ease. Violence may instill fear, but live images attract the attention needed to cause widespread reaction, influence public opinion and mobilize moderates around the world. For years now, terrorists have availed themselves of the ability to broadcast live on television.

Real-time TV coverage of an attack helps terrorist organizations achieve their objectives: promoting their cause to the widest possible audience, inciting fear in the intended target audience and recruiting new members. In some cases, over-coverage of such events may unwittingly exacerbate the problem instead of simply informing the public. It is therefore important that journalists reporting on terrorism avoid further inciting public fear or over-emphasizing the motives behind an attack.

In their attempt to prevent terrorists from exploiting digital platforms, leading tech companies cooperate with law enforcement for counter-terrorism purposes. Working closely with counter-terrorism officers and security experts, social media companies have improved their ‘takedown’ policies, weeding out an enormous number of accounts with the aim of reducing or even eliminating terrorists’ presence on technology platforms.

Furthermore, law enforcement authorities have intensified monitoring of content disseminated online in order to detect and remove terrorist propaganda. In fact, new technology capable of automatically detecting terrorist content on online platforms and blocking it before it is ever published has recently been developed.

To sum up, while technology continues to evolve rapidly, technology and media companies should work together with the competent authorities to combat terrorism and to prevent terrorist groups from recruiting new members. Although the public has the right to be informed on matters of public concern, media professionals should be particularly vigilant when it comes to the coverage of terrorism issues. They should aim at keeping the public informed without offering terrorists the publicity they seek.

In addition, as long as terrorists exploit new technological developments and online technologies, counter-terrorism authorities must detect and delete any online material that promotes terrorism or encourages violence. It is therefore essential that everyone collaborate in order to address this global challenge.

Drones: Weapons of Terror?

Yemen’s Houthi rebels have claimed responsibility for the drone attacks on Saudi Arabia’s state-owned oil facilities in Abqaiq and Khurais. These strikes have escalated tensions in the Middle East. Sources report that around 5 million barrels a day of crude oil production were affected, roughly half of Saudi Arabia’s output, or 5% of the world’s supply.

The Houthis claimed the attacks were retaliation for years of airstrikes on Yemeni civilians and said they would continue to expand their targets. The group said it carried out the attacks with ten drones. The Houthis’ claims have been challenged by the US, which maintains that Iran orchestrated the attacks. Iran has vehemently denied involvement and warned the United States it would retaliate “immediately” if targeted over the attacks.

This is not the first instance of extremist groups using unmanned aerial vehicle (UAV) or drone technology. ISIL has made the most of advances in the field. While organizations like Hamas, Hezbollah and Jabhat Fatah al-Sham have their own drone programs, it took these groups considerable time to apply drone technology in conflict situations. Compared to this slow adoption, the Islamic State embraced drone technology rapidly. This can be partly attributed to the development, availability and commercialization of the technology. ISIL’s approach involves modifying the design of existing commercial drones, or even building drones from scratch once the basic blueprint of a commercialized model is available.

ISIL’s first use of drones was for reconnaissance purposes. By September and October 2016, they had managed to weaponize the drones by attaching explosives and releasing them on the intended target. The first recorded incident was in October 2016 when two Kurdish Peshmerga soldiers were killed, and two French special forces soldiers were injured after a drone they were inspecting exploded.

A 2017 report provides detailed insight into the ISIL drone program, identifying separate centers for training, weaponization, modification and maintenance, as well as a center for storage and distribution. Owing to ISIL’s sophistication, each of these centers, based in Raqqa, also had its own separate command structure.

The Taliban has also used drones in recent years. Much like other groups in the region, it has employed them mostly for surveillance; there are few reports of the Taliban using weaponized drones against its opponents. In October 2016, the group released drone footage showing a suicide bomber driving a Humvee into a police base in Helmand, the largest province in Afghanistan.

According to more recent reports, Taliban insurgents have been using unmanned aerial vehicles to monitor US troops and their coalition partners in Afghanistan, Air Force Research Laboratory official Tom Lockhart revealed.

Outside the Middle East and Central Asia, drones have also been used in South America. In August 2018, Venezuelan President Nicolas Maduro said he escaped an “assassination” attempt involving an explosive-laden drone after a live broadcast showed him being escorted away by his security personnel when a blast went off during a Caracas military parade. His government said seven soldiers were wounded in the incident.

The easy access to and affordability of drones, and the modifications they can undergo, make them a tricky technology to tackle. While militarized drones grab headlines, the real value of UAVs lies in surveillance, according to Paul Scharre, a senior fellow and director of the technology and national security program at the Center for a New American Security (CNAS). Small, cheap drones can stay in the air for a considerable amount of time. Militaries use drones to get a better view of the battlefield and gain a tactical edge over opponents. The same is true for extremist groups, as the Taliban example shows.

Militarized drones, the kind probably available to groups such as the Houthis, are heavier and can carry several pounds of explosives at speeds of up to 160 km/h, with a range of 650 km. They have an immense tactical advantage, as most can fly lower than current detection technology can track, which was the case in the drone strike on the oil sites.

Countering drone attacks may lie in jamming the communication links that allow them to operate. Drones generally rely on GPS or a radio link to a human controller, which can be blocked or hijacked. This seems like a good strategy for a conflict zone, but jamming communications in a typical civilian setting, such as an airport, can have far more damaging consequences.

Whether responsibility for the attacks lies with the Houthis or Iran, the strike on Saudi oil sites has demonstrated how adaptable drone technology has become and how far defensive technology lags behind.


Content Moderation Presents New Obstacles in the Internet Age


The first instance of a terrorist recording violent crimes and posting them online occurred when Mohammed Merah, the perpetrator of the 2012 Toulouse and Montauban attacks in France, did just that with his GoPro. Seven years later, the culprit of the Christchurch mosque shootings used a similar method. Both attacks raise the same question: how are social media platforms like Facebook, YouTube and Twitter handling extremist content posted to their sites?

In response, tech giants have begun addressing the problem and are seeking to formulate specific mechanisms that target extremist content. Facebook and Google devote significant attention to developing automated systems and artificial intelligence (AI) software to detect and eventually remove content that violates their policies.

The Global Internet Forum to Counter Terrorism (GIFCT) acts as a cooperative through which tech companies pool known extremist content. A key purpose is to create unique digital fingerprints of contentious material, called “hashes.” Hashes are then shared within the GIFCT community so that such material can be tackled efficiently across platforms and no single network bears the burden of containing the bulk of it.
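A minimal sketch of that shared-fingerprint idea, in Python: one platform registers a flagged file, and any other participant can check new uploads against the pooled hashes. GIFCT members rely on purpose-built perceptual hashing rather than the plain cryptographic digest used here, and the function and variable names below are illustrative only.

```python
import hashlib
from pathlib import Path

# Illustrative stand-in for the industry-wide hash pool.
shared_hash_database: set[str] = set()

def fingerprint(path: str) -> str:
    """Compute a digest of a media file's raw bytes (SHA-256 here for simplicity)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def register_extremist_content(path: str) -> None:
    """One platform flags a file; its fingerprint joins the shared pool."""
    shared_hash_database.add(fingerprint(path))

def is_known_extremist_content(path: str) -> bool:
    """Any participating platform checks a new upload against the pool."""
    return fingerprint(path) in shared_hash_database
```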

YouTube also uses techniques such as automated flagging. Membership of its Trusted Flagger Program includes individuals, non-governmental organizations (NGOs) and government agencies that are particularly effective at notifying YouTube of content that violates its Community Guidelines. As of March 2019, YouTube had removed 8.2 million videos from its platform using these techniques.

In a Wired interview, Facebook’s Chief Technology Officer (CTO) Mike Schroepfer described AI as the “best tool” for keeping the Facebook community safe. AI is not infallible, though, as it sometimes fails to understand the nuances of online extremism and hate. This is where human moderators enter the picture.

The Verge published a detailed piece on the lives of Facebook content moderators. Once a post has been flagged, the moderator can delete it, ignore it or send it for further review. Moderators are trained to spot material that could be distressing to any number of people.
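The routing the piece describes boils down to three outcomes. The sketch below is a simplified model, assuming hypothetical `policy_match` and `severity` signals attached to a flagged post; real moderation tooling surfaces far richer context to the reviewer.

```python
from enum import Enum, auto

class Decision(Enum):
    DELETE = auto()     # clear policy violation
    IGNORE = auto()     # flag was unfounded
    ESCALATE = auto()   # send for further review

def review_flagged_post(post: dict) -> Decision:
    """Route a flagged post to one of the three outcomes described above.
    The 'policy_match' and 'severity' fields are hypothetical signals."""
    if post.get("policy_match") and post.get("severity", 0.0) >= 0.9:
        return Decision.DELETE
    if post.get("severity", 0.0) < 0.2:
        return Decision.IGNORE
    return Decision.ESCALATE

# A borderline post ends up with a human reviewer.
print(review_flagged_post({"policy_match": False, "severity": 0.5}))  # Decision.ESCALATE
```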

It took 17 minutes for the original live stream of the Christchurch attack posted on Facebook to be removed. That was more than enough time for it to be downloaded, copied, and posted to other platforms. Facebook claims it removed 1.5 million copies of the Christchurch footage within the first 24 hours, but copies remain.

Content moderation is such a mammoth task for social media companies because of the sheer scale of their operations. Millions of people are online and accessing these services at the same time, so errors are expected. The Christchurch attack exposed a glaring shortcoming in content reporting: livestreaming. Moderation mechanisms exist for standard uploaded videos, but there are not enough tools to moderate a livestream as it happens.
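The structural difficulty is easy to see in code. With a normal upload, the whole file can be fingerprinted or scanned before it is published; with a livestream, frames arrive incrementally, so any check can only fire after earlier frames have already reached viewers. The sketch below assumes a hypothetical `looks_violent` classifier stub and a frame iterator; it illustrates the latency problem, not any platform’s actual pipeline.

```python
from typing import Callable, Iterable, Optional

def moderate_live_stream(frames: Iterable[bytes],
                         looks_violent: Callable[[bytes], bool],
                         check_every: int = 30) -> Optional[int]:
    """Sample every Nth frame of an incoming stream and cut it on a positive check.

    Unlike an uploaded video, the full file never exists up front, so the
    earliest possible cut still comes after some violating frames have
    already been broadcast. Returns the frame index at which the stream is
    cut, or None if no check fires.
    """
    for i, frame in enumerate(frames):
        if i % check_every == 0 and looks_violent(frame):
            return i  # terminate the broadcast here
    return None
```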

Another issue facing social media companies is the tech-savvy nature of modern extremists. Banned content can be re-uploaded with its audio and video quality manipulated just enough to bypass the filters in place. Language poses another problem, as most automatic content moderation is English-language based; nearly half of Facebook’s users do not speak English, so the company needs to expand its technology to cover other languages.
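This is why exact fingerprints are brittle: a trivial change to a file, such as re-encoding or padding it, produces a completely different cryptographic digest even though the content is perceptually identical. The snippet below uses placeholder bytes to stand in for a video file; robust matching therefore relies on perceptual hashes or machine-learning classifiers instead.

```python
import hashlib

original = b"...original video bytes..."   # placeholder for an uploaded clip
perturbed = original + b"\x00"             # re-encoded or slightly padded copy

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(perturbed).hexdigest())
# The two digests differ entirely, so an exact-match filter misses the copy.
```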

Facebook, YouTube, Twitter and Instagram continue to develop their AI tools and improve their human moderation strategies. Nevertheless, the actors exploiting current loopholes are evolving as well. With some 4.3 billion internet users in the world as of March 2019, content moderation itself remains under scrutiny.