ABA Abroad

How can social media companies identify and respond to threats against human rights defenders?

Image from Shutterstock.

The promise of greater connection and community in the digital age has been embraced by human rights activists around the world. Unfortunately, social media platforms are now a primary tool for coordinated, state-aligned actors to harass, threaten and undermine advocates.

Although public shaming, death threats, defamation and disinformation are not unique to the online sphere, the nature of the internet has given them unprecedented potency. Bad actors can rapidly deploy poisoned content at vast scale. Social media companies have only just begun to recognize, let alone respond to, the problem. Meanwhile, individuals targeted in such coordinated campaigns must painstakingly flag individual pieces of content, navigate opaque corporate structures and attempt to survive the fallout. To address this crisis, companies such as Facebook, Twitter and YouTube must dramatically increase their capacity and will to engage in transparent, context-driven content moderation.

For human rights defenders, the need is urgent. The United Nations defines a human rights defender as one who, individually or with others, acts to promote or protect human rights; defenders include lawyers, journalists and activists. Globally, defenders often face retaliation or barriers to their work. Advocating for human rights, and exposing human rights violations, can bring advocates up against powerful interests. Since 2011, the ABA Center for Human Rights (CHR) has worked to ensure access to legal counsel for defenders and accountability for perpetrators of intimidation and retaliation against them. We have noted with concern how governments coordinate “traditional” judicial harassment of defenders, such as frivolous criminal charges or arbitrary detention, with online campaigns of intimidation. State-aligned online disinformation campaigns against individual defenders often precede or coincide with official investigations and criminal charges.

Under the United Nations Guiding Principles on Business and Human Rights, companies have a responsibility to prevent and address human rights abuses linked to their activities. This responsibility is particularly acute in countries where political persecution and impunity for violence undermine any credible government effort to address threats against defenders. And yet many platforms continue to regulate online attacks against defenders using varying, privately established standards that appear to be deployed somewhat arbitrarily.

While social media companies generally prohibit incitement to violence and hate speech on their platforms, CHR has had to engage in additional advocacy with social media companies to secure the removal of specific pieces of content or accounts that target defenders. This extra advocacy has been required even where the content clearly violates a company’s terms of service and despite initial flagging by a defender. The situation is even more difficult where the threatening nature of content is recognizable only with sufficient local and political context. The platforms all rely on artificial intelligence, to varying degrees, to identify speech that violates their respective community standards. Yet current iterations of artificial intelligence are often unable to adequately evaluate context and intent.

Online intimidation and smear campaigns against defenders often exploit existing societal fault lines to demean and discredit advocates. In Guatemala, CHR recently documented a coordinated social media campaign to defame, harass, intimidate and incite violence against human rights defenders. Several of the attacks were linked to so-called “net centers,” where users were reportedly paid to amplify hateful content across platforms. The campaigns often relied on “coded” language that harks back to Guatemala’s civil war and the genocide of Mayan communities, calling indigenous leaders communists, terrorists and guerrillas.

These terms appear to have largely escaped social media company scrutiny, perhaps because none is a racist slur per se. And yet, by alluding to language once used to justify violence against indigenous communities, the proliferation of these online attacks, amplified by the status of those putting out the content, is contributing to a worsening climate of violence against defenders and impunity for that violence. NPR reports that in 2018 alone, 26 indigenous defenders were murdered in Guatemala. In such a climate, the fear and intimidation felt by those targeted in these campaigns are not hyperbolic; they are grounded in an understanding of how violence can be sparked in Guatemala.

To address such attacks, social media companies must adopt policies that allow them to designate defenders as temporarily protected groups in countries characterized by state-coordinated or state-condoned persecution of activists. Doing so would be in line with international law, which prohibits states from targeting individuals for serious harm based on their political opinion. To better recognize and respond to persecution and online violence against human rights defenders, companies must continue to invest in their context-driven content moderation capacity, including complementing algorithmic monitoring with human content moderators well versed in local dialects and in historical and political context.

Context-driven content moderation should also take into account factors that increase the risk that online behavior will contribute to offline violence, so that high-risk countries can be identified. These factors include a history of intergroup conflict and a rise in intergroup violence over the past 12 months; a major national political election in the next 12 months; and significant polarization of political parties along religious, ethnic or racial lines. Countries where these and other risk factors are present call for proactive approaches to identify problematic accounts and coded threats against defenders and marginalized communities, such as those shown in Equality Labs’ “Facebook India” report.
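
As a purely illustrative sketch, the checklist above could be operationalized as a simple screening heuristic. The factor names, scoring and threshold below are hypothetical assumptions for illustration only, not a methodology used by CHR or any platform:

    from dataclasses import dataclass

    @dataclass
    class CountryRiskProfile:
        # Illustrative factors drawn from the list above; names are hypothetical.
        history_of_intergroup_conflict: bool
        violence_increased_past_12_months: bool
        national_election_within_12_months: bool
        identity_based_party_polarization: bool

    def is_high_risk(profile: CountryRiskProfile, threshold: int = 2) -> bool:
        # Count how many risk factors are present; the threshold of 2 is an
        # assumption chosen for this example, not an established cutoff.
        score = sum([
            profile.history_of_intergroup_conflict,
            profile.violence_increased_past_12_months,
            profile.national_election_within_12_months,
            profile.identity_based_party_polarization,
        ])
        return score >= threshold

    # Example: a recent rise in violence plus an upcoming election would flag
    # a country for proactive, context-driven moderation.
    print(is_high_risk(CountryRiskProfile(True, True, True, False)))  # True

In practice, any such screen would be only a starting point; the article’s argument is precisely that human judgment and local context must complete what simple signals begin.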

Companies should identify, monitor and be prepared to deplatform key accounts that consistently put out denigrating language targeting human rights defenders. This must go hand in hand with the greater efforts companies are finally beginning to make to identify coordinated, state-aligned disinformation campaigns. Focusing on the networks of users who abuse the platform, instead of looking solely at how individual pieces of abusive content affect defenders, will also enable companies to evaluate more quickly whether the status of a speaker increases the likelihood that others will take up an implicit call to violence or be unduly influenced by disinformation.

This abuser-focused approach will also reduce the burden on defenders to find and flag individual pieces of content and problematic accounts. Many of the human rights defenders with whom CHR works are giving up on flagging, a phenomenon we refer to as flagging fatigue, and many have become fatalistic about the level of online harassment they face. This is particularly alarming because advocates targeted online may develop such thick skins that they can no longer assess when their actual risk of physical violence has increased.

Finally, it is vital that social media companies pursue, and civil society demand, transparency in content moderation policy and decision-making, in line with the Santa Clara Principles. Put forward in 2018 by a group of academic experts, organizations and advocates committed to freedom of expression online, the principles are meant to guide companies engaged in content moderation and to ensure that the enforcement of their policies is “fair, unbiased, proportional and respectful of users’ rights.” In particular, the principles call on companies to report regularly on the number of posts and accounts taken down or suspended, and to provide adequate notice and meaningful appeal to affected users.

CHR routinely supports human rights defenders facing frivolous criminal charges related to their human rights advocacy online, as well as defenders whose accounts and documentation have been taken down without any clear justification. Such experiences feed a growing distrust of the companies among the human rights community, as apparently arbitrary content moderation decisions leave advocates both over- and under-protected online.

As the U.N. special rapporteur on freedom of expression explained in his 2018 report, content moderation processes must include the ability to appeal the removal, or refusal to remove, content or accounts. Lack of transparency heightens the risk that calls to address the persecution of human rights defenders online will be subverted into justifications for censorship and restrictions on speech that is protected under international human rights law.

A common response when discussing the feasibility of context-driven content moderation is to compare it to reviewing all the grains of sand on a beach. But human rights defenders are not asking for the impossible. We are merely pointing out that some of that sand is radioactive—it glows in the dark, it is lethal, and there is a moral and legal obligation upon those that profit from the beach to deal with it.

Ginna Anderson

Ginna Anderson, senior counsel, joined ABA CHR in 2012. She is responsible for supporting the center’s work to advance the rights of human rights defenders and marginalized communities, including lawyers and journalists at risk. She is an expert in health and human rights, media freedom, freedom of expression and fair trial rights. As deputy director of the Justice Defenders Program since 2013, she has managed strategic litigation, fact-finding missions and advocacy campaigns on behalf of human rights defenders facing retaliation for their work in every region of the world.


• ABA Abroad is a column highlighting the work of the ABA’s Center for Global Programs, which comprises the Rule of Law Initiative, the Center for Human Rights and the ABA’s presence at the United Nations.
