Facebook plans to create a judicial-like body to address controversial speech
For the last two years, Facebook has been a company swamped by scandal and public scrutiny.
From being a conduit for Russian meddling in the 2016 election to having the personal data of millions of its users misappropriated by consulting firm Cambridge Analytica, to its role in abetting violence against Muslims in Myanmar, the company has been hit by a stream of damning revelations.
Partly in response, Facebook issued a series of new policies and practices last year to provide more transparency and accountability in how it moderates speech on the platform.
Those steps included releasing the internal guidelines used to enforce its community standards (Facebook’s public content-moderation rules), doubling the number of content moderators and reporting on content enforcement actions.
One initiative, announced in a November blog post by Facebook CEO Mark Zuckerberg, captured the attention of legal scholars, lawyers and civil society advocates. It would create an independent body to hear appeals of content decisions by Facebook—either removing or leaving up posts—and its decisions would be binding.
“The purpose of this body would be to uphold the principle of giving people a voice while also recognizing the reality of keeping people safe,” Zuckerberg wrote. The body is set to launch by December after an initial period of consultation with experts, followed by global workshops and a solicitation of public proposals.
Months earlier, in an April interview with Vox, Zuckerberg had floated the idea of an oversight body, “almost like a Supreme Court,” to define the scope of acceptable speech on the platform. The high court analogy reinforced the notion of Facebook as an emerging, if enormous, nation-state (population 2.3 billion) creating a separate judicial arm.
Unlike an American government entity, though, Facebook as a private business isn’t legally bound to guarantee free speech rights under the First Amendment. Still, Facebook seeks to strike a balance between promoting free speech and competing concerns like user safety, the user experience and its own business interests.
How well the nation-state analogy fits Facebook is one of a host of conceptual and practical questions the project raises: Who will staff this body, and how will they be chosen? How will Facebook ensure the body’s independence? And how will this all work given the scale at which the company operates?
Upholding community standards
Facebook began providing more details in late January when it released a draft charter for what it calls its oversight board. The document, shaped by consultations over the preceding months with academics, civil society groups and other experts, outlines the board’s basic structure and function. Its primary purpose, the charter states, will be to review specific decisions made in enforcing Facebook’s community standards. Beyond those standards, the board will be guided by a set of values that includes voice, safety, equity, dignity, equality and privacy. Board members will be experts in content, privacy, free expression and human rights, among other areas, and will be supported by a full-time staff. Board decisions will be final in the specific case but could factor into future policy.
The board would not reverse Facebook decisions where doing so would violate the law, which Lawfare contributor Evelyn Douek noted is consistent with the company’s existing policy of respecting the laws of the local jurisdictions where it operates.
The draft charter suggests the body may number up to 40 global experts who serve part time for fixed three-year terms, and that initial appointments would be made by a chair or selection committee commissioned by Facebook.
To help foster independence, the board won’t include current or former Facebook employees or government officials. Compensation for board members would be standardized and fixed for their terms. Rules would require their recusal to avoid conflicts and bar them from being lobbied. And board deliberations would remain private.
Exactly how members would be paid is still unclear. One possibility is that Facebook would fund the board, but through a separate entity to preserve its independence.
In recent months, the company has hosted a series of workshops with experts and organizations in cities such as Singapore, New Delhi and Nairobi to solicit feedback on the draft charter. In April, it also began seeking public input on the oversight board through a web-based form that includes a questionnaire and essay section. The questions cover topics such as how board members should be selected, how decisions should be made, and how long members should serve. Facebook has tapped Baker McKenzie to help manage the process, which will result in a report in June summarizing the findings after the six-week public comment period.
“This is a big deal for us, so we’re reaching out to get feedback because it will have an important impact on what we’re doing,” said Peter Stern, policy manager at Facebook, at the time the draft charter was unveiled.
That feedback is coming in part from the legal scholars, lawyers and other legal experts Facebook has reached out to as part of its process. Legal specialists are also likely to play a key part in the board’s ongoing development and eventual operation.
“The board would be looking to recruit significant figures from journalism, human rights, safety and the legal profession, and could expect them to be from a wide variety of countries and cultures,” Stern says.
Those selected would be addressing what Facebook considers its most difficult content decisions. Such cases often involve determining what constitutes hate speech, because those calls typically require more cultural and linguistic context than other types of banned content, such as terrorist propaganda or nudity, according to Monika Bickert, head of global policy management at Facebook.
“There’s no universal definition of hate speech,” she noted while speaking in December about Facebook’s content-moderation system at Harvard University’s Berkman Klein Center for Internet & Society.
Conversely, the board might also have to determine when content that might otherwise be removed for violating its community standards should stay up because of its newsworthiness or public interest value.
While the oversight board is very much a work in progress, legal experts are already weighing in. Interviews with more than a dozen lawyers and academics who focus on areas such as internet law, content moderation and human rights indicate support for the project. But that support is mixed with skepticism.
That’s not only because of the practical challenges of building a global appeals system but also because of uncertainty about Facebook’s motives.
“We have a lot of questions and concerns about how to actually implement this in a way that would actually be fair and adequately protect the due process rights of [Facebook] users,” says Corynne McSherry, legal director at the Electronic Frontier Foundation. “I genuinely think the jury is out on that.”
Advocacy groups such as EFF have increasingly called on Facebook to disclose more about how it makes content-moderation decisions. They’ve also pushed for more due process—such as providing ample notice of content takedowns and the ability to appeal such actions.
That’s hardly surprising given the storm of public criticism Facebook, Twitter, YouTube and other internet platforms have faced in connection with the proliferation of fake news, fake accounts, online harassment and bullying, as well as election interference in the last few years. The so-called techlash has placed mounting pressure on the platform giants to police online speech more aggressively while also protecting free expression.
Days before Zuckerberg’s November blog post, more than 90 civil society groups published an open letter to Facebook asking that it clearly explain to users why its content had been taken down and to permit appeals, including review by people not involved in the original decision.
Earlier in the year, the U.N. special rapporteur on the promotion and protection of the right to freedom of opinion and expression released a report calling on social media companies to adopt human rights law as the global standard for ensuring free expression on their platforms.
Among the report’s specific recommendations was a suggestion on how to handle content appeals: “Among the best ideas for such programs is an ‘independent social media’ council modeled on the press councils that enable industrywide complaint mechanisms and promotion of remedies for violations.”
David Kaye, the U.N. special rapporteur, says he welcomed Facebook’s step to create an appeals body, even if it’s not the broader type of social media council he recommended. “Overall, I think this is heading in the right direction,” he says.
But he questions why the guiding values listed in the draft charter didn’t include free expression, substituting the weaker term voice. Beyond just enforcing Facebook’s community standards, “I think it’s better for the board to make a broader assessment of whether the rules and their application are consistent with human rights law,” he says.
Facebook has previously said it looks to international human rights law for guidance when it comes to setting limits on speech. It has also noted its involvement with groups such as the Global Network Initiative, which promotes free expression and privacy rights.
A Facebook ‘constitution’
Others have suggested the company should go so far as to draft a constitution of sorts to help guarantee the board’s independence. That’s what Kate Klonick, a professor at St. John’s University School of Law who focuses on internet law, and Thomas Kadri, a Ph.D. in law candidate at Yale Law School, proposed in a New York Times op-ed in November.
“Facebook should consider—especially if it continues to act as a type of governing body—adopting something like a constitution that is harder to amend than its ever-shifting content-moderation rules, which it could alter mercurially to get around decisions issued by its court that it doesn’t like,” they wrote.
A Facebook “Supreme Court,” with its own constitution, could prove more hospitable than an actual court. Suing internet platforms such as Facebook over free speech violations has been a mostly losing proposition for plaintiffs. Courts have generally found that as private companies, they’re free to exercise editorial control over content on their properties.
What’s more, Section 230 of the Communications Decency Act gives Facebook and other online intermediaries broad immunity from liability for content posted by their users.
A judge last year, for instance, ruled against right-leaning political activist Chuck Johnson in a suit he brought against Twitter for kicking him off the service after he posted a threatening tweet. The court rejected his arguments on the grounds of both the First Amendment and the decency act, finding that while Twitter invites public use, “it also limits this invitation by requiring users to agree to, and abide by, its user rules.”
Such rulings don’t mean Facebook can simply ignore calls for greater accountability. With governments around the world probing its business practices and signaling increased regulation, the company has reason to take more action on its own.
Consider that Facebook began allowing users to appeal content takedowns in April 2018. Specifically, users gained the ability to appeal the removal of posts taken down for nudity or sexual activity, hate speech or graphic violence.
Previously, users could only challenge actions such as account suspensions or entire page takedowns on Facebook. The appeals expansion was announced in tandem with the release of the internal guidelines that its 15,000 content moderators worldwide follow in enforcing its community standards.
Both moves came a couple of weeks after Zuckerberg was grilled by lawmakers in Washington, D.C., about the company’s data security and privacy lapses in relation to topics like the Cambridge Analytica scandal, Russia’s exploitation of Facebook in the 2016 election and racial targeting via its ad platform.
Even after updating its appeals policy, Facebook was slammed in a May EFF report on platform censorship for allowing only a limited scope of appeals. It earned a rating of one star out of five overall across criteria that also included transparency reporting, providing timely notice and limiting the geographic scope of takedowns when possible.
Monitoring billions of posts
But if Facebook’s ambitious plans for the oversight board are borne out, Eric Goldman, a professor at Santa Clara University School of Law and co-director of its High Tech Law Institute, suggests the company could potentially leapfrog the competition.
“With respect to the appeals process, if Facebook raises the bar, it makes other companies wonder if they have to keep pace with Facebook or look like they’re falling behind,” he says.
Alex Feerst, head of legal at publishing platform Medium and a fellow at Stanford’s Center for Internet and Society, for one, is watching closely. “I have an appreciation for how large and complex this is,” he says of Facebook’s creation of an outside appeals body. “People will be watching to see how they go about solving problems and try to learn from it.”
Indeed, the sheer scale of the project is daunting. Billions of pieces of content are posted to Facebook each day in more than 100 languages, and more than a million content-related complaints are reported daily.
Facebook has developed automated systems that can identify much of the content it bans in certain categories such as nudity and terrorism. In his November blog post, for instance, Zuckerberg noted that the company’s AI systems now flag 99 percent of terrorist content before anyone even reports it. In March, Facebook announced similar efforts against content supporting white nationalism and white separatism.
But automated tools aren’t as effective yet in categories that involve more linguistic and cultural nuance such as hate speech, bullying and harassment. That’s where human reviewers come in. A New York Times investigation published in December underscored just how difficult the task is, and how flawed the process.
Facebook last year, for instance, acknowledged it was too slow to combat hate speech in Myanmar used to incite violence against minority Rohingya Muslims. About 700,000 Rohingya fled the country in 2017 in what the U.S. condemned as ethnic cleansing.
To better inform its decisions, the oversight board would be able to call on outside cultural, linguistic or sociopolitical experts when necessary. That would be in addition to any arguments or material submitted by Facebook users.
The draft charter doesn’t indicate what type of caseload the board might carry. Facebook’s Stern says it’s contingent on other factors, like the size of the board and how cases are selected. But the numbers associated with its existing content-moderation efforts are eye-popping.
A report Facebook released in November showed that in just the third quarter, it took some type of action on about 63 million pieces of content, excluding fake accounts and spam. That total included 30.8 million related to nudity/sexual activity, 15.4 million concerning graphic violence and 2.9 million deemed as hate speech.
Facebook hasn’t yet disclosed how many of these matters have been appealed since April 2018, but even a small fraction would still represent thousands of cases, some of which could wind up before the oversight board.
On the last point, the charter suggested cases could be referred to the board by Facebook users challenging a decision as well as by Facebook itself—highlighting especially difficult issues or ones that have sparked significant public debate.
In that vein, think of the public outcry in 2016 over Facebook’s censorship of the Pulitzer Prize-winning 1972 photo “The Terror of War,” showing terrified children fleeing a napalm attack in Vietnam. The platform later reinstated the photo because of its historical importance.
Once referred, cases might then be heard by panels drawn from a rotating, odd-numbered set of board members. At the end of a session, each panel could choose cases from an eligible slate for subsequent panels to decide.
Stern stressed, though, that such draft proposals are still under discussion.
For reference, 7,000 to 8,000 cases are filed with the U.S. Supreme Court each term, and it ends up hearing oral arguments in about 80 of those cases, or about 1 percent of the total filed. Similarly, Facebook’s oversight board will hear only what it deems the toughest or most important cases.
Courting legitimacy
Stern and other Facebook officials have downplayed Zuckerberg’s Supreme Court analogy, but the oversight board isn’t without judicial aspirations. The charter envisions independent panels that act like judges, interpret the equivalent of Facebook’s laws (its community standards) and issue binding decisions that serve as a form of precedent.
The quasi-judicial structure itself bestows a sense of fairness and legitimacy. That’s what Facebook badly needs to make the oversight board a success. “What Facebook is trying to do is maintain enough legitimacy with users that people will still use the platform,” says Rebecca MacKinnon, director of the Ranking Digital Rights project at New America.
Facebook last year ranked fourth out of 12 internet and mobile companies in the Ranking Digital Rights Index, which rates major tech companies based on their policies relating to governance, freedom of expression and privacy.
But Facebook’s self-interest and the public interest aren’t necessarily at odds in the oversight board, according to Klonick, who is among the legal scholars the company has reached out to for input on its development.
“It’s very much in the best interests of users,” she says. “Having a diverse body of individuals look at this, having something that’s independent and not tied to Facebook are all really great steps and long overdue, in my opinion.”
In a 2017 law review article she wrote on the “New Governors” of online speech, Klonick argued that platforms’ content-moderation efforts are shaped by three factors: American free speech norms, corporate responsibility and the economic necessity of fulfilling users’ expectations.
For Facebook, there’s the added dimension of the public scrutiny it has come under. The oversight board is emerging amid an ongoing sense of crisis surrounding the company, and every week seems to bring a new issue. Facebook’s plan to merge its WhatsApp, Instagram and Facebook Messenger apps on the back end, for example, has already raised regulatory questions in Europe. So the board will likely continue to be eyed with skepticism until it proves it’s more than a rubber stamp or a way for Facebook to outsource tough decisions.
The yearlong public process Facebook has undertaken to launch the oversight board, soliciting advice from various groups worldwide, seems to reflect an awareness it has to gain trust from users, civil society groups and others.
“We’re trying to add transparency and accountability and independent judgment, and we think the board is a good way to do that,” Stern says.
Mark F. Walsh is a New York City-based freelance writer. He is a former reporter for American Lawyer Media publications. This article was published in the June 2019 ABA Journal magazine with the title "Facecourt."