The Most Important Part Of The Facebook / Oversight Board Interaction Happened Last Week And Almost No One Cared

from the pay-attention dept

The whole dynamic between Facebook and the Oversight Board has received lots of attention — with many people insisting that the Board’s lack of official power makes it effectively useless. To recap the specifics for those of you not deep in the weeds on this: Facebook has only agreed to be bound by the Oversight Board’s decisions on a very narrow set of issues, namely whether a specific piece of content that was taken down should have been left up. Beyond that, the Oversight Board can make recommendations on policy issues, but the company doesn’t need to follow them. I think this is a legitimate criticism and concern, but it’s also a case where if Facebook itself actually does follow through on the policy recommendations, and everybody involved acts as if the Board has real power… then the norms around it might mean that it does have that power (at least until there’s a conflict, and you end up in the equivalent of a Constitutional crisis).

And while there’s been a tremendous amount of attention paid to the Oversight Board’s first set of rulings, and to the fact that Facebook asked it to review the Trump suspension, last week something potentially much more important and interesting happened. With those initial rulings on the “up/down” question, the Oversight Board also suggested a pretty long list of policy recommendations for Facebook. Again, under the setup of the Board, Facebook only needed to consider these, but was not bound to enact them.

Last week Facebook officially responded to those recommendations, saying it agreed to take action on 11 of the 17 recommendations, is assessing the feasibility of another five, and is taking no action on just one. The company summarized those decisions in that link above, and put out a much more detailed PDF exploring the recommendations and Facebook’s response. It’s actually interesting reading (at least for someone like me who likes to dig deep into the nuances of content moderation).

Since I’m sure it’s most people’s first question: the one “no further action” was in response to a policy recommendation regarding COVID-19 misinformation. The Board had recommended that when a user posts information that disagrees with advice from health authorities, but the “potential for physical harm is identified but is not imminent,” then “Facebook should adopt a range of less intrusive measures.” Basically, removing such information may not always make sense, especially if it’s not clear that the information disagreeing with health authorities is actively harmful. As per usual, there’s a lot of nuance here. As we discussed, early in the pandemic, the suggestions from “health authorities” later turned out to be inaccurate (like the WHO and CDC telling people not to wear masks in many cases). That makes relying on those health authorities as the be-all, end-all for moderating disinformation inherently difficult.

The Oversight Board’s recommendation on this issue more or less tried to walk that line, recognizing that health authorities’ advice may adapt over time as more information becomes clear, and that automatically silencing those who push back on the official suggestions from health officials may lead to over-blocking. But, obviously, this is a hellishly nuanced and complex topic. Part of the issue is that — especially in a rapidly changing situation, where our knowledge base starts out with little information and is constantly correcting — it’s difficult to tell whether someone is pushing back against official advice for good reasons or for conspiracy-theory nonsense (and there’s a very wide spectrum between those two things). That creates (yet again) an impossible situation. The Oversight Board was suggesting that Facebook should be at least somewhat more forgiving in such situations, as long as it doesn’t see any “imminent” harm from those disagreeing with health officials.

Facebook’s response doesn’t so much push back against the Board’s recommendation as argue that the company already takes a “less intrusive” approach. It also argues that Facebook and the Oversight Board basically disagree on the definition of “imminent danger” from bad medical advice (the specific issue came up in the context of someone in France recommending hydroxychloroquine as a treatment for COVID). Facebook said that, contrary to the Board’s finding, it did think this represented imminent danger:

Our global expert stakeholder consultations have made it clear that, in the context of a health emergency, the harm from certain types of health misinformation does lead to imminent physical harm. That is why we remove this content from the platform. We use a wide variety of proportionate measures to support the distribution of authoritative health information. We also partner with independent third-party fact-checkers and label other kinds of health misinformation.

We know from our work with the World Health Organization (WHO) and other public health authorities that if people think there is a cure for COVID-19 they are less likely to follow safe health practices, like social distancing or mask-wearing. Exponential viral replication rates mean one person’s behavior can transmit the virus to thousands of others within a few days.

We also note that one reason the board decided to allow this content was that the person who posted the content was based in France, and in France, it is not possible to obtain hydroxychloroquine without a prescription. However, readers of French content may be anywhere in the world, and cross-border flows for medication are well established. The fact that a particular pharmaceutical item is only available via prescription in France should not be a determinative element in decision-making.

As a bit of a tangent, I’ll just note the interesting dynamic here: despite “the narrative,” which claims that Facebook has no incentive to moderate content due to things like Section 230, here the company is arguing for the ability to be more heavy-handed in its moderation to protect the public from danger, and against the Oversight Board, which is asking the company to be more permissive.

As for the items that Facebook “took action” on, a lot of them are sort of bland commitments to do “something” rather than concrete changes. For example, at the top of the list were recommendations around the confusion between the Instagram community guidelines and the Facebook community guidelines, and calls to be more transparent about how those are enforced. Facebook says it’s “committed to action” on this, but I’m not sure I can tell you what actions it has actually taken.

We’ll continue to explore how best to provide transparency to people about enforcement actions, within the limits of what is technologically feasible. We’ll start with ensuring consistent communication across Facebook and Instagram to build on our commitment above to clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.

Um… great? But what does that actually mean? I have no idea.

Evelyn Douek, who studies this issue basically more than anyone else, notes that many of these commitments from Facebook are kind of weak:

Some of the “commitments” are likely things that Facebook had in train already; others are broad and vague. And while the dialogue between the FOB and Facebook has shed some light on previously opaque parts of Facebook’s content moderation processes, Facebook can do much better.

As Douek notes, some of the answers do reveal some pretty interesting things that weren’t publicly known before — such as how its AI deals with nudity, and how it tries to distinguish the nudity it doesn’t want from things like nudity around breast cancer awareness:

Facebook explained the error choice calculation it has to make when using automated tools to detect adult nudity while trying to avoid taking down images raising awareness about breast cancer (something at issue in one of the initial FOB cases). Facebook detailed that its tools can recognize the words “breast cancer,” but users have used these words to evade nudity detection systems, so Facebook can’t just rely on leaving up every post that says “breast cancer.” Facebook has committed to providing its models with more negative samples to decrease error rates.
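To make that error trade-off a bit more concrete, here is a minimal, purely hypothetical sketch of the general pattern Douek describes (written in Python with scikit-learn, which is my choice for illustration, not anything Facebook has disclosed using): if a model learns to treat the phrase “breast cancer” as a signal that content is benign, feeding it labeled counter-examples that use the phrase in violating posts gives it a way to stop leaning on that shortcut. Every post and label below is invented.

```python
# Hypothetical illustration only -- invented data, not Facebook's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Label 1 = violates the nudity policy, 0 = allowed (e.g., awareness content).
base_texts = [
    "breast cancer awareness month: schedule a screening today",
    "self-exam guide shared for breast cancer awareness",
    "explicit adult photos for sale, click the link",
    "nude pics in comments, link in bio",
]
base_labels = [0, 0, 1, 1]

# Extra labeled counter-examples: violating posts that deliberately include
# the phrase "breast cancer" to slip past keyword-based exceptions.
counter_texts = [
    "breast cancer -- nude pics in comments, link in bio",
    "explicit adult photos for sale, breast cancer, click the link",
]
counter_labels = [1, 1]

def train(texts, labels):
    """Fit a simple bag-of-words classifier; return it with its vectorizer."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    features = vectorizer.fit_transform(texts)
    model = LogisticRegression().fit(features, labels)
    return vectorizer, model

evasive_post = "breast cancer awareness! nude pics for sale, link in bio"

# Train once on the base data, then again with the counter-examples added,
# and compare how each model scores the same evasive post.
for name, texts, labels in [
    ("base model", base_texts, base_labels),
    ("with counter-examples", base_texts + counter_texts,
     base_labels + counter_labels),
]:
    vectorizer, model = train(texts, labels)
    p_violating = model.predict_proba(vectorizer.transform([evasive_post]))[0][1]
    print(f"{name}: P(violating) = {p_violating:.2f}")
```

Facebook’s real systems are obviously far more complex (and operate on images, not just text), but the basic point holds: error rates in either direction are shaped by which examples the models are shown, which is why “more negative samples” is a meaningful, if modest, commitment.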

Douek also notes that some of Facebook’s claims to be implementing the Board’s recommendations are… misleading. In some cases the company is actually rejecting the Board’s full recommendation:

In response to the FOB’s request for a specific transparency report about Community Standard enforcement during the COVID-19 pandemic, Facebook said it was “committed to action.” Great! What “action,” you might ask? It says that it had already been sharing metrics throughout the pandemic and would continue to do so. Oh. This is actually a rejection of the FOB’s recommendation. The FOB knows about Facebook’s ongoing reporting and found it inadequate. It recommended a specific report, with a range of details, about how the pandemic had affected Facebook’s content moderation. The pandemic provided a natural experiment and a learning opportunity: Because of remote work restrictions, Facebook had to rely on automated moderation more than normal. The FOB was not the first to note that Facebook’s current transparency reporting is not sufficient to meaningfully assess the results of this experiment.

Still, what’s amazing to me is that these issues, which might actually change key aspects of Facebook’s moderation, got next to zero public attention last week compared to the simple decisions on specific takedowns (and the massive flood of attention the Trump account suspension decision will inevitably get).


Filed Under: content moderation, policies, recommendations
Companies: facebook, oversight board

