The Conversation
How tech companies are successfully disrupting terrorist social media activity
June 26, 2018 12.53pm SAST
Author
Stuart Macdonald
Professor of Law, Swansea University
Disclosure statement
In 2016/17 Stuart Macdonald received funding from the Fulbright Commission to research violent extremism online. Swansea University is a member of the research network funded by the Global Internet Forum to Counter Terrorism.
Partners
Swansea University
Swansea University provides funding as a member of The Conversation UK.
Platforms for radicalisation? pixabay/7stonesgfx, CC BY
In June 2017, Google, Facebook, Twitter and Microsoft announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT). The aim of this industry-led initiative is to disrupt terrorist exploitation of member companies' services. GIFCT members recently hailed the achievements of the forum's first year of operations. While this progress must be acknowledged, significant challenges remain.
Every single minute there are on average 510,000 comments and 136,000 photos shared on Facebook, 350,000 tweets posted on Twitter and 300 hours of video uploaded to YouTube.
Given this volume, the biggest companies rely extensively on artificial intelligence (AI). Facebook, for example, uses AI for image matching, which prevents users from uploading a photo or video that matches one previously identified as terrorist material. Similarly, YouTube has reported that 98% of the videos it removes for violent extremism are flagged by machine-learning algorithms.
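Facebook has not published the details of its matching technology, but the general idea can be sketched with a simple "average hash", a basic perceptual-hashing technique used here purely for illustration. In this sketch an image is represented as a tiny grid of grayscale values:

```python
# Illustrative sketch of image matching via a perceptual "average hash".
# This is NOT Facebook's actual (unpublished) algorithm, just the general idea.
# An image is represented as a grid of grayscale pixel values (0-255).

def average_hash(pixels):
    """Return a bit string: 1 where a pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests visually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

flagged   = [[200, 10], [220, 30]]   # image previously identified as terrorist
re_upload = [[198, 12], [219, 28]]   # slightly re-encoded copy of the same image
unrelated = [[10, 200], [30, 220]]   # a different image entirely

h_flagged = average_hash(flagged)
match_distance = hamming_distance(h_flagged, average_hash(re_upload))      # 0: block
unrelated_distance = hamming_distance(h_flagged, average_hash(unrelated))  # large: allow
```

Because such a hash depends on relative brightness rather than exact bytes, a slightly altered re-upload still matches the flagged original, which is what makes perceptual approaches useful for this task.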
Progress so far
One difficulty the social media companies face is that a terrorist group blocked from one platform may simply move to another. In response, GIFCT members have created a shared industry database of "hashes". A hash is a unique digital fingerprint of a piece of content, computed by running that content through a mathematical function. When one GIFCT member removes pro-terrorist content, it shares the content's hash with the other participating companies, enabling them to block the same content on their own platforms.
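The GIFCT has not published its exact hashing scheme, but the mechanics of a shared hash database can be sketched with a standard cryptographic hash such as SHA-256 (an illustrative assumption; in practice perceptual hashes are also needed to catch re-encoded copies). Note that only fingerprints are shared between companies, never the content itself:

```python
# Hypothetical sketch of a shared hash database between platforms.
# SHA-256 is chosen for illustration; the GIFCT's actual scheme is not public.
import hashlib

shared_database = set()

def fingerprint(content: bytes) -> str:
    """Compute a unique digital fingerprint (hash) of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def remove_and_share(content: bytes):
    """Platform A removes content and shares its hash, not the content itself."""
    shared_database.add(fingerprint(content))

def should_block(content: bytes) -> bool:
    """Platform B checks an upload against the shared database."""
    return fingerprint(content) in shared_database

remove_and_share(b"propaganda video bytes")
blocked = should_block(b"propaganda video bytes")   # exact copy: matched
allowed = should_block(b"holiday video bytes")      # unrelated upload: no match
```

One design consequence worth noting: a cryptographic hash only catches byte-for-byte copies, so changing a single byte of a file evades it. That limitation is why matching systems for images and video also rely on perceptual fingerprints.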
At its recent meeting, the GIFCT announced that 88,000 hashes have been added to the database so far, putting the consortium on track to meet its target of 100,000 hashes by the end of 2018. This is all the more likely now that another nine companies, including Instagram, Justpaste.it and LinkedIn, have joined the consortium.
These efforts have undoubtedly disrupted terrorists’ use of social media platforms. For example, in the 23 months since August 1, 2015, Twitter has suspended almost a million accounts for promoting terrorism. In the second half of 2017, YouTube removed 150,000 videos for violent extremism. Nearly half of these were removed within two hours of upload.
Future challenges
Yet much work remains. In response to the disruption of their Twitter activity, supporters of the so-called Islamic State (IS) have tried to circumvent content-blocking technology through what is known as outlinking: posting links that point to material hosted on other platforms. Notably, the sites most commonly outlinked to include justpaste.it, sendvid.com and archive.org. This appears to be a deliberate strategy to exploit smaller companies' lack of resources and expertise.
Telegram has been banned in Russia and Iran. EPA
IS supporters have also moved their community-building activities to other platforms, in particular Telegram. Telegram is a cloud-based instant messaging service that provides optional end-to-end encrypted messaging. This encryption stops messages being read by third parties. And it has been used extensively to share content produced by official IS channels.
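Telegram's actual protocol (MTProto) is far more complex, but the core idea of end-to-end encryption can be illustrated with a one-time pad: a server relaying the message only ever sees ciphertext, because the key exists solely at the two endpoints.

```python
# Illustrative sketch (not Telegram's MTProto protocol) of the end-to-end idea:
# the relaying server sees only ciphertext; the key never leaves the endpoints.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR; applying the same key twice recovers the input."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at dawn"
shared_key = secrets.token_bytes(len(message))  # held only by sender and recipient

ciphertext = xor_bytes(message, shared_key)     # what the server relays and stores
decrypted = xor_bytes(ciphertext, shared_key)   # recipient recovers the message
```

A third party (including the platform itself) holding only `ciphertext` cannot recover the message without the key, which is precisely what makes such services attractive for covert communication and difficult for companies to moderate.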
This forms part of a wider movement towards more covert methods. Other encrypted messaging services, including WhatsApp, have been used by jihadists for communication and attack planning. Websites have also been relocated to the Darknet, a hidden part of the internet that is anonymous in nature and can only be accessed using specialist encryption software. A 2018 report warned that Darknet platforms have the potential to function as a jihadist "virtual safe-haven".
In addition, recent research has found that supporters of jihadist groups other than IS experience significantly less disruption on Twitter. Supporters of these other groups were able to post six times as many tweets, follow four times as many accounts and gain 13 times as many followers as pro-IS accounts.
It is also important to respond to other forms of violent extremism. Extreme right-wing groups also have a significant presence on platforms such as YouTube and Facebook. While steps have been taken to disrupt their presence online, such as Facebook’s decision to ban Britain First from its platform, it appears that these groups are also beginning to migrate to the Darknet.
Overreach
Just as there are challenges in reaching terrorist content on social media, there are also challenges of potential overreach. Machine-learning algorithms cannot be expected to identify terrorist content with 100% accuracy, so some content will be wrongly identified as terrorist and blocked or removed. But the challenges here go beyond applying the threshold correctly; they also concern where the threshold should be drawn in the first place.
The difficulties in defining terrorism are well known. Summed up by the slogan “One person’s terrorist is another’s freedom fighter”, one of the most controversial definitional issues is that of just cause. Should a definition of terrorism exclude those such as pro-democracy activists in a country ruled by an oppressive and tyrannical regime? According to many countries, including the UK, the answer is no. As one Court of Appeal judge put it: “Terrorism is terrorism, whatever the motives of the perpetrators.”
If social media companies take a similar approach, this could have significant ramifications. Indeed, there are already worrying examples. In 2017, thousands of videos documenting atrocities in Syria were removed from YouTube by new technology aimed at extremist propaganda. These videos provided important evidence of human rights violations, and some existed only on YouTube, since not all Syrian activists and media outlets can afford an offline archive. Yet the alternative, seeking to distinguish between just and unjust causes, is fraught with difficulties of its own.
At a time when social media companies face increasing pressure to do more to tackle terrorist exploitation of their platforms, the progress made during the GIFCT’s first year is welcome. But it is only the first step.