Why A Facebook Watchdog Group Is Cheering A Law That Could Hurt Journalists

The Real Facebook Oversight Board wants content moderation, and it wants it now. What happens when journalists are targeted?

Last updated on September 10, 2021, at 8:29 p.m. ET

Posted on September 10, 2021, at 3:42 p.m. ET



In the extended universe of the techlash, the Real Facebook Oversight Board presents itself as the Avengers.

The members of the group, described on its website as a “‘Brains Trust’ to respond to the critical threats posed by Facebook’s unchecked power,” were summoned from the four corners of the internet by Carole Cadwalladr, the activist British journalist who broke the Cambridge Analytica scandal.

(The group is not affiliated with Facebook and was started last year in confusingly named opposition to Facebook’s creation of its official Oversight Board, or, colloquially, “Facebook Supreme Court.”)

They include some of the biggest names and loudest voices in the movement to hold tech platforms accountable for their influence: people like Shoshana Zuboff, who coined the term “surveillance capitalism”; Roger McNamee, the early Facebook investor who has been publicly critical of the company; Yaël Eisenstat, the ex-CIA officer and former head of election integrity operations for political ads at Facebook; and Timothy Snyder, the Yale historian of fascism.

So it was strange to see this superteam on Wednesday cheerleading a decision from the Australian High Court (the country’s version of the Supreme Court) that does nothing directly to check Facebook’s power while harming the interests of the press:

BOOM💣💣💣
Media companies in Australia can be held responsible for defamatory comments left on their social media pages by members of the public, the country’s High Court has ruled
https://t.co/5y7UHEfTNv via @Verge

11:36 AM – 08 Sep 2021


Twitter: @FBoversight

The 5–2 decision, which came down earlier this week, lays the foundation for defamation suits against Facebook users for comments left on their pages. That means Australian news organizations — and potentially all Australians on social media, though it’s unclear for now — could be responsible for defamatory comments left under their posts on the platform, even if they aren’t aware the content exists.

To avoid lawsuits, these newsrooms may have to shut down comments on their Facebook pages or shift resources from newsgathering to fund content moderation on a massive scale. That’s about as far from the United States’ permissive legal regime for internet content — the one many critics of social media’s influence loathe — as it gets. This is, as Mike Masnick wrote for Techdirt, “the anti-230,” Section 230 being the controversial section of the Communications Decency Act that, with a few exceptions, protects websites in the United States from being sued over content created by their users. Of the Australian ruling, Masnick wrote: “It says if you have any role in publishing defamatory information, you can be treated as the publisher.”

The ruling, meanwhile, says nothing about Facebook’s liability for hosting defamatory content.

The Real Facebook Oversight Board wrote only one word in response to the news, “BOOM,” followed by three bomb emojis. But that one word is revealing — not just of a mindset among some tech critics that removing unwanted content is an inherent good, but of the reality that the interests of journalists are not always aligned, as has largely been assumed, with those of the platforms’ most prominent critics.

In a statement, a spokesman for the Real Facebook Oversight Board disputed BuzzFeed News’ characterization of the “BOOM” tweet, writing, “We made no comment on the law, and have not taken a position on it. The position attributed to us in this column is simply false.”


“Every major internet company now has a group of haters who will never be satisfied,” said Eric Goldman, who codirects the High Tech Law Institute at the Santa Clara University School of Law. “They are opposed to anything that would benefit their target. It leads to wacky situations.”

One such wacky situation: Fox News and the Wall Street Journal have spent years attacking Section 230 for protecting the platforms they allege are prejudiced against conservatives. Now their owner, Rupert Murdoch, potentially faces a new universe of defamation claims in the country of his birth, where he still owns a media empire.

Another: A tech watchdog group that includes Laurence Tribe, the constitutional law scholar, and Maria Ressa, the Filipina journalist who has been hounded by the Duterte regime through the country’s libel laws, has released a favorable public statement about the expansion of defamation liability — an expansion that, as Joshua Benton suggested at Nieman Lab, presents a tempting model for authoritarians around the world.

Started in September 2020, the Real Facebook Oversight Board promised to provide a counterweight to the actual Oversight Board. Itself a global superteam of law professors, technologists, and journalists, the official board is where Facebook now sends thorny public moderation decisions. Its most important decision so far, to temporarily uphold Facebook’s ban of former president Trump while asking the company to reassess the move, was seen paradoxically as both a sign of its independence and a confirmation of its function as a pressure relief valve for criticism of the company.

On its website and elsewhere, the Real Facebook Oversight Board criticizes the original board for its “limited powers to rule on whether content that was taken down should go back up” and its timetable for reaching decisions: “Once a case has been referred to it, this self-styled ‘Supreme Court’ can take up to 90 days to reach a verdict. This doesn’t even begin to scratch the surface of the many urgent risks the platform poses.” In other words: We want stronger content moderation, and we want it faster.

Given the role many allege Facebook has played around the world in undermining elections, spreading propaganda, fostering extremism, and eroding privacy, this might seem like a no-brainer. But there’s a growing acknowledgment that moderation is a problem without a one-size-fits-all solution, and that sweeping moderation comes with its own set of heavy costs.

In a June column for Wired, the Harvard Law lecturer evelyn douek wrote that “content moderation is now snowballing, and the collateral damage in its path is too often ignored.” Definitions of bad content are political and inconsistent. Content moderation at an enormous scale has the potential to undermine the privacy many tech critics want to protect — particularly the privacy of racial and religious minorities. And perhaps most importantly, it’s hard to prove that content moderation decisions do anything more than remove preexisting problems from the public eye.

Journalists around the world have condemned the Australian court’s decision, itself a function of that country’s famously plaintiff-friendly defamation laws. But the Real Facebook Oversight Board’s statement is a reminder that the impulses of the most prominent tech watchdog groups can be at odds with a profession that depends on free expression to thrive. Once you get past extremely obvious cases for moderation — images of child sexual abuse, incitements to violence — the suppression of bad content inevitably involves political judgments about what, exactly, is bad. Around the world, those judgments don’t always, or even usually, benefit journalists.

“Anyone who is taking that liability paradigm seriously isn’t connecting the dots,” Goldman said.

UPDATE

Sep. 11, 2021, at 12:29 a.m.

This post has been updated with a comment from the Real Facebook Oversight Board and to reflect that Lincoln Project co-founder Reed Galen is no longer a member of the group.