Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress. This is not true.

We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.

Focusing just on content removals is the wrong way to look at how we fight hate speech. That’s because using technology to remove hate speech is only one way we counter it.

We need to be confident that something is hate speech before we remove it. If something might be hate speech but we’re not confident enough that it meets the bar for removal, our technology may reduce the content’s distribution, or may stop recommending Groups, Pages, or people that regularly post content likely to violate our policies. We also use technology to flag content for additional human review.

We have a high threshold for automatically removing content. If we didn’t, we’d risk making more mistakes on content that looks like hate speech but isn’t, harming the very people we’re trying to protect, such as those describing experiences with hate speech or condemning it. 

Another metric being misconstrued is our proactive detection rate, which tells us how good our technology is at finding content before people report it to us. It tells us, of the content we remove, how much we found ourselves. In 2016, the vast majority of our content removals were based on what users reported to us. We knew we needed to do better and so we began building technology to identify potentially violating content without anyone flagging it to us. 

When we began reporting our metrics on hate speech, only 23.6% of the content we removed was detected proactively by our systems; the majority of what we removed was found by people. Now, that number is over 97%. But our proactive rate doesn’t tell us what we are missing and doesn’t account for the sum of our efforts, including what we do to reduce the distribution of problematic content.
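As a rough illustration of how the metric works (the numbers and function below are invented for this sketch, not taken from our reporting), the proactive rate is the share of removed content that our systems found before anyone reported it, which is also why it says nothing about content that was never found at all:

```python
# Illustrative sketch with made-up numbers: how a proactive detection
# rate is computed, and why it only describes the removed portion.

def proactive_rate(proactive_removals: int, user_reported_removals: int) -> float:
    """Share of removed content found by automated systems before a user report."""
    total_removed = proactive_removals + user_reported_removals
    return proactive_removals / total_removed

# 97 pieces found by systems, 3 reported by users -> a 97% proactive rate.
rate = proactive_rate(97, 3)
print(f"proactive rate: {rate:.1%}")

# Note: the rate is unchanged no matter how much violating content was
# never detected and never removed -- it cannot measure what is missed.
```

The same computation applies at any scale; the key point is that the denominator is content removed, not content that exists.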

That’s why we focus on prevalence and consistently describe it as the most important metric. Prevalence tells us what violating content people see because we missed it. It’s how we most objectively evaluate our progress, as it provides the most complete picture. We talk about prevalence in our Community Standards Enforcement Report every quarter and describe it in our Transparency Center.
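To make the contrast concrete (again with invented figures, as a sketch rather than our actual methodology), prevalence is typically expressed as the share of content views that were views of violating content, so it captures exactly what removal counts and proactive rates leave out:

```python
# Illustrative sketch with invented numbers: prevalence measures what
# viewers actually saw, independent of how much content was removed.

def prevalence(violating_views: int, total_views: int) -> float:
    """Estimated share of all content views that were of violating content."""
    return violating_views / total_views

# e.g. 5 views of violating content out of 10,000 total views.
p = prevalence(5, 10_000)
print(f"prevalence: {p:.2%}")
```

Because the denominator is total views rather than removals, prevalence falls only when people actually see less violating content.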

Prevalence is how we measure our work internally, and that’s why we share the same metric externally. While we know our work will never be done in this space, the fact that prevalence has been reduced by almost 50% in the last three quarters shows that taken together, our efforts are having an impact. As reported in our Community Standards Enforcement Report, we can attribute a significant portion of the drop to our improved and expanded AI systems.

We include many metrics in our quarterly reports, which are the most comprehensive of their kind, to give people a more complete picture. We have worked with international experts in measurement, statistics, and other areas to provide an independent, public assessment to make sure we are measuring the right things. While they broadly agreed with our approach, they also provided recommendations for how we can improve. You can read their full report here. We have also committed to undergoing an independent audit, with global auditing firm EY, to make sure we are measuring and reporting our metrics accurately.
