According to the NGO Syrian Archive, there are more hours of video documenting the Syrian civil war than the duration of the conflict itself. Since the Arab Spring, social media have allowed opposition activists across the Middle East and North Africa (MENA) to document human rights abuses by governments and non-governmental forces. For example, 90% of Syrian Archive’s material comes from social media, and social media videos have been used as evidence in international war crimes trials. Social media have also been crucial for the organisation of protests and opposition movements, and for reaching out to international supporters. 

Yet relying on the large, U.S.-based commercial platforms which dominate the social media landscape leaves activists vulnerable. MENA activists have long raised concerns about heavy-handed or biased content moderation policies which remove swathes of content from opposition activists and critical journalists. This does not just affect grassroots activists, but also reputable news outlets, highly-regarded bloggers and prominent NGOs.

Social media companies make grand claims about ‘giving everyone a voice’ and standing up for free expression, but in practice they frequently stifle criticism of regimes, hamper political organisation, and destroy vital evidence of human rights abuses. If they want to live up to their supposed mission statements, urgent policy changes are needed. 

What is content moderation?

Content moderation refers to social media platforms’ removal of unwanted content and/or user accounts. This may happen because the content is illegal, or simply because the platforms consider it undesirable for users and advertisers, and thus commercially disadvantageous. The moderation policies of private platforms are guided primarily by their own commercial interests, but play a vital – if often invisible – role in our everyday internet use, as US academic Sarah Roberts has highlighted. Without moderation, platforms for user-generated content would rapidly devolve into a “cesspool” of graphic violence and abuse, becoming all but unusable.

Platforms set their own policies explaining what content is banned, but these are generally vague and leave plenty of room for discretion by individual moderators, or by the higher-level policy teams which intervene in the most high-profile or politically sensitive decisions. The major platforms have large teams consisting of thousands of human moderators (typically low-paid workers from outsourcing companies) manually reviewing content which is flagged by users. Platforms also increasingly use AI and other technologies to automatically flag and remove content before any user even sees it. 

Why is content moderation harming activists?

There are four interrelated reasons that the major social networks’ content moderation practices disproportionately silence and harm activists in the MENA region: mistaken takedowns, misguided policies, capitulation to authoritarian governments, and failure to prevent abuse. These problems are by no means unique to the region. Just last month, for example, Nigerians protesting police violence found their social media posts removed, apparently because their #EndSARS hashtag had been confused with misinformation about the SARS-CoV-2 pandemic. 

However, as Tunisian lawyer Wafa Ben Hassine suggests, in light of its young population, high internet penetration, and numerous authoritarian regimes and conflicts, the MENA region is perhaps where these problems are most acutely visible – and where action is most urgently needed to protect the digital tools opposition activists rely on.

Mistaken takedowns

First, content is often mistakenly removed even when it does not violate local laws or platform policies. This disproportionately impacts activists due to the nature of the content they post and/or collect: for instance, political discussions are likely to include keywords relating to topics deemed “sensitive”, while organisations documenting human rights abuses obviously deal with graphic violent content which is likely to be flagged. 

The ultimate responsibility for deciding whether content is banned rests with low-paid contract workers under extreme time pressure, often highly stressed or traumatized by continual exposure to graphic content. Unsurprisingly, wrong decisions are common. This may be compounded by language barriers, especially for minority languages and regional dialects. Moreover, moderators may not even try to apply policies impartially: for example, LGBTQ+ activists have complained that Facebook’s Arabic-language moderators are homophobic. 

In order to remove content more efficiently and proactively, platforms are also increasingly automating their moderation processes. This process is accelerating due to public pressure to deal with terrorist content in the wake of high-profile incidents like the Christchurch shooting, and a series of national laws (spearheaded by Germany’s 2017 Network Enforcement Act) setting short deadlines for removal of illegal content. More recently, the Covid-19 lockdowns prevented human moderators from working, leading platforms to rely even more heavily on automated moderation – a shift that could become permanent.

Commonly used automated tools include keyword flagging, hashing and machine learning. Keyword flagging is the most straightforward, relying on simple software to remove or flag for review any content containing blacklisted keywords. Hashing is a cryptographic tool used to identify identical or near-identical reproductions of content which has already been banned. It is used in particular for violent or terrorist material and child sexual exploitation material. Finally, machine learning algorithms can be trained using databases of content which has already been removed to identify similar content in the future. 
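
To make these mechanisms concrete, here is a deliberately simplified sketch in Python of how keyword flagging and hash matching operate in principle. The keyword list, the banned-hash database and all function names are hypothetical, invented for illustration (the keyword “martyr” echoes the 7amleh claim discussed below); real platforms work at vastly larger scale and typically use perceptual rather than purely cryptographic hashes to catch near-identical copies.

```python
# Hypothetical illustration only - not any platform's actual pipeline.
import hashlib

# Invented keyword blacklist; "martyr" mirrors the 7amleh example below.
BLACKLISTED_KEYWORDS = {"martyr"}

# Hashes of content that has already been banned (hypothetical entry).
BANNED_HASHES = {hashlib.sha256(b"previously removed video bytes").hexdigest()}


def keyword_flag(text: str) -> bool:
    """Flag any post containing a blacklisted keyword, regardless of context."""
    return any(word in BLACKLISTED_KEYWORDS for word in text.lower().split())


def hash_match(media: bytes) -> bool:
    """Flag re-uploads of already-banned content.

    A cryptographic hash only catches byte-identical copies; production systems
    use perceptual hashes to catch near-identical ones, but both are equally
    blind to the context in which the content is shared.
    """
    return hashlib.sha256(media).hexdigest() in BANNED_HASHES


def should_remove(text: str, media: bytes = b"") -> bool:
    # Neither check knows whether a post documents, quotes or condemns the
    # banned material, which is why criticism and reporting get removed too.
    return keyword_flag(text) or (bool(media) and hash_match(media))
```

On this logic, a post quoting or condemning blacklisted material is flagged just as readily as the material itself.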

However, all of these tools remain crude and unreliable: MENA activists have raised many concerns about automated removal of legitimate political speech and documentation of abuses. Hashing and keyword flagging remove any instance of a blacklisted word, image or video, ignoring that it may be used legally in certain contexts. Palestinian NGO 7amleh claims any Facebook post containing the word “martyr” is automatically removed; the same reportedly applies to Hezbollah, even when posts criticize them. Graphic content documenting violent conflicts is also particularly vulnerable to automated removal. The use of hashing systems can sometimes mean that the same pieces of content are reinstated and removed again multiple times (which may result in permanent banning of user accounts due to “multiple offences”), or that every post on the platform containing that content is removed if a single image or video is mistakenly flagged once.

With the current state of technology, AI is little better than hashing and keyword flagging at recognising context and nuanced meanings. NGOs documenting abuses in Syria noted an uptick in content removals immediately after YouTube introduced AI moderation in 2017. In general, automation makes already erratic moderation systems even more unreliable. 

Though the major platforms have appeals processes which theoretically allow users to reinstate mistakenly removed content, these are deeply inadequate. In conflict settings, account holders may be dead or inaccessible, and their content permanently lost. Even where they can appeal, it is an arduous and unreliable process, and does not guarantee content will not be removed again.

Misguided policies

It is not just misapplication of moderation policies which silences activists, but often the policies themselves. Except where national laws require the blocking of specific content, Facebook, Twitter and YouTube are all committed to having uniform global moderation policies. These global rules have been disproportionately shaped by US norms and legal traditions, and often ignore local contexts and the specific needs of activists. 

For example, Facebook requires the use of real names, which can place opposition activists in danger. In one high-profile instance, the creator of the famous “We Are All Khaled Said” group, which mobilised many protestors in the Egyptian revolution, used a pseudonym for his own safety. As a result, his account and the group were deleted a few months later. This policy does not just silence or endanger individual activists, but has an obvious broader chilling effect. Without the option to protect themselves by using pseudonyms, many will not speak out at all. 

Facebook’s ban on terrorist content has also been criticized for suppressing political speech. At one point it included an exemption for violent resistance against a foreign occupier. However, this exemption has since been removed, so the ban now covers all non-governmental, politically-motivated violence – a change made to avoid controversial decisions about what constitutes illegitimate occupation. This has led to blanket banning of political content in occupied territories including Western Sahara and in particular Palestine, where opposition activists and journalists frequently complain that Facebook works with the Israeli government to suppress resistance to the occupation.

Capitulation to governments

As this suggests, the silencing of political opposition on social media is not only the result of mistakes and misguided policies, but also of deliberate suppression driven by legal and financial pressure from regional governments. Platforms generally accommodate censorship requests made under local law in order to avoid being blocked altogether.

As an elected government and US ally, the Israeli government can work particularly closely with platforms. Its cybercrime unit coordinates with Facebook and Twitter to suppress content deemed to constitute “incitement” to violence. The term is defined extremely broadly: it has been applied to posts calling only in general terms for resistance to Israel’s government, especially if the users involved have high follower counts. Facebook’s Israeli Head of Policy is a former Netanyahu advisor, and a former civil servant who oversaw the cybercrime unit now sits on Facebook’s Oversight Board as a regional representative for MENA.

Platforms are less transparent about requests from authoritarian governments, but activists have sometimes alleged coordination based on anecdotal or circumstantial evidence. For example, in late 2019, a wave of anti-government protest on Egyptian social media was promptly followed by mass deletion of opposition figures’ Twitter accounts.

Failure to prevent abuse

As noted earlier, despite their indiscriminate and overly broad enforcement of certain policies against opposition content, platforms frequently fail to prevent obvious trolling and abuse which aligns with government interests – probably both inadvertently and, in part, in response to government pressure. Across the region, anti-LGBTQ+ hate speech is rarely removed, and in Israel, explicit incitement of violence against Arabs is generally ignored. Governments also make extensive use of fake profiles and bot accounts to harass critics and spread disinformation.

Moderation systems which still largely rely on user reporting to trigger review also give pro-regime forces the opportunity to silence activists, by coordinating to report them and trigger a ban. For example, Egyptian blogger Wael Abbas was banned from Twitter in 2017 after criticizing the Mubarak family, which independent media have alleged was due to coordinated reporting by pro-Mubarak accounts. As Abbas pointed out, this effectively destroyed “a treasure for researchers…thousands of pictures, videos and live streams from the middle of every crisis in Egypt.” None of the three major platforms explicitly forbids this kind of malicious reporting. 

What could platforms do?

Altogether, these issues significantly restrict opposition activists’ freedom of expression, impede political organization, and destroy valuable evidence of human rights abuses. Yet few of them are insurmountable. There are concrete steps platforms’ policy teams could take to make their supposed goal of “giving everyone a voice” a reality. 

Academics and activists have long called for platforms to invest more resources in moderation. This would be perfectly affordable for the enormously wealthy companies which run the largest platforms, and would be the obvious first step if they really wanted to implement their policies in a reliable and consistent way. Hiring more human moderators would both reduce reliance on inaccurate automated moderation tools, and alleviate the time pressure and emotional strain on the existing workforce, both of which would significantly improve decision quality. Platforms should in particular seek moderators with language skills and local expertise from MENA countries and other emerging markets, and invest in training them to avoid prejudiced application of policies. 

Although the quality of moderation could be improved, when millions of decisions are being made every day some error rate is inevitable. Moderation processes should be reformed to ensure that where content is mistakenly removed, it causes minimal lasting damage. Content that is removed should be archived, so it can be reinstated on appeal and/or passed to relevant authorities (for example, where it documents crimes being committed). Appeals processes must also be made more straightforward and accessible, and coordinated malicious reporting must be identified and ignored. 
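
As a minimal sketch of the archiving step suggested above, the hypothetical Python below retains removed content with basic metadata instead of deleting it, so that it can be restored after a successful appeal or handed over as potential evidence. The storage layout and function names are assumptions for illustration, not a description of any platform’s actual system.

```python
# Hypothetical "archive on removal" workflow: removed content is retained with
# basic metadata so it can be restored on appeal or preserved as evidence.
import json
import time
from pathlib import Path

ARCHIVE_DIR = Path("moderation_archive")  # hypothetical storage location


def archive_removal(post_id: str, content: bytes, reason: str) -> None:
    """Record the removed content and the reason for removal before takedown."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    metadata = {"post_id": post_id, "removed_at": time.time(), "reason": reason}
    (ARCHIVE_DIR / f"{post_id}.json").write_text(json.dumps(metadata))
    (ARCHIVE_DIR / f"{post_id}.bin").write_bytes(content)


def reinstate(post_id: str) -> bytes:
    """Retrieve the archived content after a successful appeal."""
    return (ARCHIVE_DIR / f"{post_id}.bin").read_bytes()


def export_for_investigators(post_id: str) -> dict:
    """Bundle metadata and content for lawful handover, e.g. where it documents crimes."""
    record = json.loads((ARCHIVE_DIR / f"{post_id}.json").read_text())
    record["content_hex"] = (ARCHIVE_DIR / f"{post_id}.bin").read_bytes().hex()
    return record
```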

As well as reforming their internal processes to ensure moderation policies are applied effectively and fairly, platforms should consult more with civil society and activists in MENA countries on both the content and implementation of their moderation policies. Syrian Archive notes that since some errors are inevitable, the debate about which errors are acceptable “must be held in public”. This debate cannot be restricted to a Western public. Indeed, the currently prevalent practice of applying uniform global moderation rules may be unsustainable. Listening to those most affected by policies should include sensitivity to their different needs and cultural contexts. 

Finally, platform policies should be guided by international human rights standards, with input from civil society and human rights experts. They should not cooperate with requests by authoritarian governments that stifle political opposition, violate users’ human rights, or jeopardise their safety. Pro-regime content which violates moderation rules must be removed.

Governments may threaten legal or financial consequences, but major platforms like Facebook, Twitter and YouTube have the resources and power to take a strong ethical stance. Even before the Arab Spring, internal and external criticism made it difficult for governments to completely ban them. Given their current ubiquity and economic importance, this difficulty has only increased. This means platforms have enough leverage to insist on complying with international human rights law, and live up to their promises of protecting free expression. 

Author

  • Rachel Griffin has just completed her master’s in public policy, specialising in digital policy and platform governance. Originally from the United Kingdom, she has a bachelor’s in law from the University of Oxford and worked in tech startups before starting her dual master’s at Sciences Po and the Hertie School in Berlin.